Recently I decided I wanted to start hosting a blog. If you’re reading this post, I was successful in that endeavor. While working out the details, I decided for sure that I wanted it to be a self-hosted operation. I quickly fell down the rabbit hole of researching various blogging platforms and static site generators.
TL;DR This is essentially what I want:
- walnuthomelab.com -> server A (LAN)
- service.walnuthomelab.com -> server A (LAN)
- blog.walnuthomelab.com -> server B (VPS)
- oelsherif.com -> server B (VPS)
Choices (Yup)
Dedicated blogging platforms:
- WordPress - A very capable platform, but much more than what I needed, and I didn’t want to deal with keeping it secure and up to date
- Ghost - A less bloated WordPress alternative, but still felt like too much, and again I didn’t want to worry so much about security or updates
- SSG - Static Site Generators build a full static HTML website from your raw content and templates. There is no database to interact with, and since the output is just static files there's an added security benefit, as there isn't a plethora of plugins to install and maintain. There are a multitude of options available: Hugo, Gatsby, Jekyll, Next.js, etc. However, Hugo stood out to me the most.
The Chosen One
So now I had my blogging platform chosen. With Hugo, you scaffold a project directory using its CLI, then write your posts/content as Markdown files. Hugo converts that project into an HTML website (based on a theme) and can serve it using a built-in live-reloading web server. This is awesome, as it means I don't have to spin up a separate Nginx container just to serve the generated site; I can simply point my reverse proxy at it and it'll be available at my chosen subdomain, blog.walnuthomelab.com.
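For reference, the basic workflow is just a handful of commands, something like the sketch below (the site name, theme placeholder, and server flags are illustrative, not a prescription):

# scaffold a new Hugo project and add a post
hugo new site myblog
cd myblog
# drop a theme into themes/ and set theme = "<theme-name>" in the site config
hugo new posts/my-first-post.md
# build and serve with the built-in live-reloading server
hugo server --bind 0.0.0.0 --baseURL https://blog.walnuthomelab.com --appendPort=false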
After learning how to operate Hugo from a container and pointing it at my reverse proxy, I decided I wanted to migrate the hosting infrastructure to the cloud; it works 100% fine locally, but I’m making an effort to start moving public facing things outside my LAN when practical/possible. To do that, I spun up a cloud instance on Linode.
While doing this, I realized I should probably have a professional-ish landing page for myself; it can house my resume and experience, an 'About Me' page, and links to my blog/GitHub/LinkedIn, etc. I decided to go with first initial + last name, so oelsherif.com.
Choices Pt. 2
Now I'm at a bit of a dilemma. I want to continue using walnuthomelab.com at home (Server A), because it's for my homelab services. But I still want the blog to live at blog.walnuthomelab.com, because it's about my homelab, while hosting it on the Linode instance (Server B), since it's public facing. I also want oelsherif.com on the Linode instance, because it is also public facing.
For example, I host Radarr at home. I could just access it from the IP address and port, but that gets kind of annoying after a while. I could just create a DNS record for it so I can use a domain name, but then I have to see that horrid “Not secure” message at the top of my browser. I’d like to have that nice SSL “locked” icon and a valid domain to go to. The solution to this is a reverse proxy.
I set everything up using SWAG Reverse Proxy with wildcard certs and DNS validation. At its core, SWAG is simply an Nginx reverse proxy paired with LetsEncrypt and Fail2Ban, packaged into a neat little Docker container. This way, there's no need to create individual CNAME records at my DNS provider, and my services will be available at service.walnuthomelab.com.
So once again, this is what we’re trying to accomplish:
- walnuthomelab.com -> server A (LAN)
- service.walnuthomelab.com -> server A (LAN)
- blog.walnuthomelab.com -> server B (VPS)
- oelsherif.com -> server B (VPS)
Step 1: Swag (Server A)
Docker compose file:
version: "3.8"
networks:
  walnut01:
    driver: bridge
services:
  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    networks:
      - walnut01
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=$TZ
      - URL=firstdomain.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
      # - STAGING=true
    volumes:
      - ${DOCKERCONFDIR}/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
Make the appropriate changes in the respective .ini file under /name-of-swag-volume/dns-conf/, which in my case is cloudflare.ini. You'll need to input your Cloudflare email and Global API key. You can make use of the STAGING environment variable to verify your configuration is correct without counting against LetsEncrypt's rate limiting policies.
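For reference, once uncommented the relevant lines in cloudflare.ini end up looking something like this (the values are obviously placeholders):

# /config/dns-conf/cloudflare.ini
dns_cloudflare_email = you@example.com
dns_cloudflare_api_key = 0123456789abcdef0123456789abcdef01234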
Run docker-compose up -d (the -d flag means detached, so it won't show all the additional output you may not want to see) and verify that Swag starts successfully and pulls your certs. Then check firstdomain.com and see if it puts you at the Swag default landing page. You can check the logs by running docker logs swag.
Step 2: Cloudflare (DNS)
In Cloudflare DNS:
- Create an A record pointing ‘firstdomain.com’ to the public IP of Server A
- Create a CNAME record for 'www' pointing to 'firstdomain.com'
- Create a CNAME record for '*' pointing to 'firstdomain.com'
- Create an A record pointing ‘subdomain.firstdomain.com’ to the public IP of Server B
This way, any subdomain you don't have a specific record for will be caught by the wildcard CNAME and resolve to your domain at Server A. You then create that last additional A record for any subdomains you specifically want pointed at Server B, as the more specific records take precedence.
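Put together, the relevant records end up looking roughly like this (the IPs are placeholders):

firstdomain.com              A      203.0.113.10     ; Server A (home)
www.firstdomain.com          CNAME  firstdomain.com
*.firstdomain.com            CNAME  firstdomain.com
subdomain.firstdomain.com    A      198.51.100.20    ; Server B (VPS)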
Step 3: Swag (Server B)
There are two ways we can proceed from here. Method 1 is to utilize the EXTRA_DOMAINS environment variable for Swag. This will allow Certbot to generate your certificate for multiple domains, all of which are packaged into that one certificate. Method 2 is to have a separate directory of certificates that you point Swag to. This will allow you slightly greater control over the individual certificates and how they’re generated, at the cost of Swag’s auto-renewal features.
> Method 1
As stated above, you’ll want to add the EXTRA_DOMAINS environment variable to your Swag config. Repeat the same steps used in Step 1 to create your Docker Compose file, only this time you’ll be adding the EXTRA_DOMAINS environment variable at the bottom of the Swag section. For this to work using Cloudflare, you’ll need to ensure that your domains are registered under the same Cloudflare account specified in your cloudflare.ini file.
version: "3.8"
networks:
  walnut01:
    driver: bridge
services:
  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    networks:
      - walnut01
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=$TZ
      - URL=seconddomain.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
      # - STAGING=true
      - EXTRA_DOMAINS=firstdomain.com, *.firstdomain.com
    volumes:
      - ${DOCKERCONFDIR}/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
You can add one or more extra domains, delimited by commas. Essentially this alters the Certbot command that is run when the container is started. For more information on this particular subject, check here.
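Under the hood, the request Swag hands to Certbot ends up roughly along these lines. This is purely illustrative, since Swag assembles the real command (and its exact flags) for you:

certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /config/dns-conf/cloudflare.ini \
  -d "seconddomain.com" -d "*.seconddomain.com" \
  -d "firstdomain.com" -d "*.firstdomain.com"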
After you've verified that your Compose file looks correct, bring up your containers by running docker-compose up -d and verify that Swag starts successfully. Use docker logs swag to observe the process. You should see a line in the logs that looks something like this if it processed your additional domains correctly:
Requesting a certificate for *.seconddomain.com and 3 more domains
Once again, check your domain (seconddomain.com this time) and see if it puts you at the Swag default landing page. You can now set up your services as normal, and verify the extra domain is working by pointing a service at it manually in its subdomain.conf file.
Normally, they look like this:
server_name service.*;
Instead, change it to this:
server_name service.firstdomain.com;
If you did everything correctly, your service should now be reachable at the specified subdomain, and have a valid SSL certificate. If you check the certificate information in the browser, it should show *.firstdomain.com.
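As a concrete example, a trimmed-down radarr.subdomain.conf pointed at the extra domain might look roughly like this. It's based on the stock LinuxServer template (upstream name 'radarr', port 7878), so treat it as a sketch and keep whatever else your version of the template includes:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    # full domain instead of the usual radarr.*
    server_name radarr.firstdomain.com;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app radarr;
        set $upstream_port 7878;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}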
> Method 2
Before I figured out how to get Swag to automatically pull certs for additional domains, this is the method I used. I don't think there's anything wrong with this method; it just requires a different level of maintenance. If I were using more than 2-3 domains, I might still go with this method just to have better control over where and how the files are being used.
This method involves pointing Swag to another set of certificates to use. As a proof of concept to make sure it would actually work, I simply downloaded the working certificate files from Swag's LetsEncrypt folder on Server A and uploaded them to a folder on Server B. You can find the certificate files generated by Swag (or rather Certbot) under /path/to/swag-config/keys/letsencrypt.
For example:
swag
|-- keys
|   |-- letsencrypt
|       |-- fullchain.pem
|       |-- privkey.pem
In theory, you should only need fullchain.pem and privkey.pem, but you can copy the whole folder if you wish. Once you have acquired these, you can upload them to Server B. I placed mine in a folder directly under the Swag config root folder, so on the same level as where the 'keys' folder is in the diagram above.
Server B example:
swag
|-- keys
|   |-- letsencrypt
|       |-- fullchain.pem
|       |-- privkey.pem
|-- letsencrypt_whl
    |-- fullchain.pem
    |-- privkey.pem
I did it this way because I wanted to make sure the cert files stayed with the rest of the Swag instance, but placed the folder at the root level because I didn't want Swag to overwrite those files as part of its automated processes.
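The one-off copy itself is nothing fancy; run from Server A it's something like the following (paths, user, and hostname are placeholders):

# copy the two cert files from Server A into the new folder on Server B
scp /path/to/swag-config/keys/letsencrypt/fullchain.pem \
    /path/to/swag-config/keys/letsencrypt/privkey.pem \
    user@server-b:/path/to/swag-config/letsencrypt_whl/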
NOTE: There are multiple ways of creating the certificate files you want to use. You can roll a separate Certbot instance and have it generate as many certificates as you need and configure auto-renewal, or you can automate an upload/sync process from another remote server (for example, running an Ansible script on a cron job to rsync your certificates from one server to another). In the future I'll either do this, or figure out a way to just have Ansible rsync the certs over on a weekly cron job.
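If I go the plain cron route, the entry on Server A would look something like this (schedule, paths, user, and hostname are all placeholders):

# crontab on Server A: push the current certs to Server B every Monday at 03:00
# -L dereferences symlinks so the actual cert contents get copied
0 3 * * 1 rsync -aL /path/to/swag-config/keys/letsencrypt/ user@server-b:/path/to/swag-config/letsencrypt_whl/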
Nginx Config
Next, we need to ensure Swag has a way to reference the second set of certificates. If you look under /swag/nginx/, you'll see a bunch of '.conf' files. The Nginx portion of Swag loads and uses these to make things "work". We'll need to create a copy of the file named ssl.conf. I named mine ssl_whl.conf.
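From the host, that's just a file copy inside the Swag config volume (the host path is a placeholder):

cp /path/to/swag-config/nginx/ssl.conf /path/to/swag-config/nginx/ssl_whl.conf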
Once copied, open the new file, and you’ll see a section labeled #Certificates, like so:
# Certificates
ssl_certificate /config/keys/letsencrypt/fullchain.pem;
ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
# verify chain of trust of OCSP response using Root CA and Intermediate certs
ssl_trusted_certificate /config/keys/letsencrypt/fullchain.pem;
We simply need to change the file paths to match our newly uploaded certificates. In my case, the #Certificates section of my ssl_whl.conf file looks like this:
# Certificates
ssl_certificate /config/letsencrypt_whl/fullchain.pem;
ssl_certificate_key /config/letsencrypt_whl/privkey.pem;
# verify chain of trust of OCSP response using Root CA and Intermediate certs
ssl_trusted_certificate /config/letsencrypt_whl/fullchain.pem;
Next, we'll need to make sure the ssl_whl.conf file is loaded for the services we want to use that domain for.
If you look under /config/nginx/proxy-confs/, you'll see a bunch of files that have 'subdomain.conf.sample' at the end. These are all preconfigured files for commonly used self-hosted services, like Radarr, Sonarr, and Sabnzbd. Whenever you want Swag to point at one of your services, you just remove the '.sample' part and you're off to the races. If we take a look at any of these, near the top-middle section of the config, we'll see these lines:
server_name service.*;
include /config/nginx/ssl.conf;
If you want a particular service to be pointed at your second domain and use your second set of certificates, change these lines as needed. You'll want the server_name line to be more specific; Swag uses the wildcard with some fancy magic to automatically match the main domain you first set things up with, so I just type the full intended domain name out. For the next line, simply change ssl.conf to the name of your new ssl file. For hosting my blog at blog.walnuthomelab.com, my subdomain.conf file looks like this:
server_name blog.walnuthomelab.com;
include /config/nginx/ssl_whl.conf;
NOTE: If you want more detailed instructions specifically on setting up Hugo with Swag, follow the guide I linked earlier; it's really good.
For my landing page, I just used the base domain; the config looks exactly the same as the one above, but with server_name oelsherif.com. Technically you could also tell Hugo to generate the HTML files in Swag's /www folder, but this felt like a cleaner way of doing it and it works perfectly fine.
Wrap Up
Assuming you followed all the steps properly, and assuming I didn’t forget to mention anything (admittedly I wrote a lot of this post about a month after implementing these changes), you should now be able to use Swag to connect your services to multiple domains on the same server.
The key points:
Method 1
- Make the correct DNS changes (A record pointing desired subdomain to IP of Server B)
- Utilize the EXTRA_DOMAINS environment variable in your Swag config
- Create/adjust your service.subdomain.conf files to use the correct server_name parameters.
######## ~~~~~ ########
Method 2
- Make the correct DNS changes (A record pointing desired subdomain to IP of Server B)
- Make the valid certificate files for your second domain available to Swag on Server B
- Create an additional ssl.conf file to point at those certificate files
- Create/adjust your service.subdomain.conf files to use the correct ssl.conf and server_name parameters.
If any information in this post ends up being wrong or misleading, don’t hesitate to reach out to me to fix it.