Building a secure mail server

Naturally, owning a domain means I can set up e-mail. I could use any number of free services for this, but I'm choosing to host my own solution. In keeping with the theme of containerization, I looked for a good Docker-based option, and that's how I found Mailu.io. This project is a secure-by-design set of Docker images that makes setting up a secure e-mail server surprisingly easy.

This is a fully fleshed-out e-mail solution. Beyond providing the usual POP, IMAP, and SMTP services, Mailu.io provides web access to a mail client out of the box. Security can be enabled simply with Let's Encrypt (bonus!). It has anti-spam and anti-virus modules. The project even has a setup utility that builds you a docker-compose.yml.

Before I could even set this up, though, I had to consider where to run the server. For a mail server, 100% uptime is desirable if at all possible, which is why I decided to run it in the Digital Ocean cloud. If I can minimize the memory footprint, I may be able to use this server for multiple purposes. I picked the smallest size available: 1 GB of RAM and 25 GB of SSD, more than enough for now. Digital Ocean automatically set up my SSH keys and installed a copy of CentOS 7. I then ran the same few commands from the WordPress Docker host setup.

# curl -fsSL https://get.docker.com/ | sh
# systemctl enable docker
# systemctl start docker
# curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# chmod 755 /usr/local/bin/docker-compose
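
Incidentally, the droplet itself can also be created from the command line with Digital Ocean's doctl tool. I created mine through the web UI, so this is only a sketch; the droplet name, region, and SSH key ID below are placeholders:

doctl compute droplet create mail1 \
    --region nyc1 \
    --size s-1vcpu-1gb \
    --image centos-7-x64 \
    --ssh-keys <your-key-id>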

I also need to add a few records to DNS so that other mail servers can find mine.
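
The original screenshot of the records didn't survive, but in zone-file notation they look roughly like this (the IP is a documentation placeholder):

ericpark.dev.        IN  TXT  "v=spf1 mx ~all"
mail.ericpark.dev.   IN  A    203.0.113.10
ericpark.dev.        IN  MX   10 mail.ericpark.dev.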

The top record is an SPF record, which helps reduce e-mail forgery by declaring which hosts are allowed to send mail for the domain. The A record simply reports the IP of mail.ericpark.dev, and the MX record says who manages mail for this domain.

After running the setup utility from Mailu.io, I copied the resulting docker-compose.yml, inspected it, and started it on my new cloud host. Once it was up I was able to access the admin console, and away I went. It was much less painful than my previous experience. Admittedly, neither implementation has ever seen any real load; they are more like proofs of concept. However, it does enable me to receive e-mails at eric@ericpark.dev!

The server has a rather mild footprint, so I'll be able to run other services on this host as well!

Under 500 MB. Still have half the memory left on this host for other services.

Upon further inspection of the docker-compose.yml, it was interesting to see that Mailu is implemented at its core with Dovecot and Postfix, which, fittingly, are exactly what I set up manually the last time I built a mail server.

After the server was up and running I was able to send and receive a few e-mails from my Gmail account. I went on to test the server for compliance using the following tool: MXToolbox.

Looks good!

We like to see a good report! Particularly important is the Open Relay check: if it fails, your server could be used as a spam relay, and no one wants more spam.
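
Beyond MXToolbox, a couple of quick sanity checks can be run from any machine with dig and openssl; mail.ericpark.dev here is assumed from the DNS records above:

dig +short MX ericpark.dev
openssl s_client -connect mail.ericpark.dev:25 -starttls smtp </dev/null

The first confirms the MX record resolves; the second confirms the server offers STARTTLS on port 25.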

Expanding Eric Park dot dev

Ok, so I have a domain and a blog. Now what? I poked around the Internet and came up with a few ideas:

  • Set up a mail server. Every domain needs mail, right?
  • Set up Google Analytics for web traffic analysis.
  • Set up OpenVPN. Whenever I'm remote, I could connect to it to encrypt my connection and access services on my home network without the need for SSH tunnels.
  • Build custom services, primarily with myself in mind, but I’d like the challenge of building something that could be used by others as well. Possibly some sort of open source project. This involves some of my other plans to build up a CI/CD environment to build, test, and deploy web applications. With Kubernetes!
  • I've seen suggestions of creating a personal landing page, which could be an organized list of common bookmarks in a pleasing format. This sounds interesting, and it's something I might implement to standardize my experience across platforms and browsers. Of course, it should be protected with authentication. It could even be built and managed by the CI/CD solution mentioned above, giving me experience with some new technologies I've been wanting to try out.
  • Build a home page. Right now ericpark.dev redirects to WordPress. Maybe something else will live here eventually.
  • Run my own Password Manager service. Think LastPass, but on my own hardware.
  • Run my own URL shortening service. An interesting idea.
  • Build my own private cloud, something analogous to Dropbox but on my own hardware.
  • Build a contact server. Instead of keeping all of my contacts in Google I could move them to my own LDAP solution. I’d have to see how well I could integrate this into my phone if I really wanted to replace Google. I could also use this for authentication to my own apps.
  • Set up URLs that redirect to social media accounts. This is mostly a branding thing; not all that useful for a personal domain, but easy enough to configure.
  • Set up a web proxy. I could see doing this for fun, but it's pretty far down the list.

Some of these ideas are certainly more tin-foil hat than others, but in this day and age, when everyone is trying to profit off our data, the more I can move onto my own services the better. For the most part, I see this as an opportunity to implement solutions I wouldn't have otherwise, essentially replacing many of the cloud services we take for granted every day. It certainly gives me something to do!

Docker-izing WordPress

In this post we'll be setting up Docker and deploying WordPress on my Docker host. This involves setting up an Nginx reverse proxy with a valid TLS certificate, which will map HTTP requests for ericpark.dev to their respective services, and deploying a working WordPress installation consisting of a MySQL database and the Apache-powered WordPress container.

Before I even get started on the Docker host, I first configure my firewall to allow access on ports 80 and 443 to docker1. This is a simple port-forwarding rule.

Packets from the Internet now hit the docker1 host on 80 and 443

In addition, if I want to reach the site from inside my own network, I need to tell my own DNS server to direct me to the internal IP of the Nginx server.

Adding a host override in my local DNS

I also have to configure the VM itself, via firewalld, to allow packets on ports 80 and 443 into the server.

firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload
firewall-cmd --list-ports

With this completed, packets from the Internet now reach services on my Docker host listening on 80 or 443. Now on to the main event.

After the OS is installed on the VM, it is a simple matter to install the Docker service.

# curl -fsSL https://get.docker.com/ | sh
# systemctl enable docker
# systemctl start docker

The script at https://get.docker.com/ automatically configures the yum repository and installs the Docker service. I will also be using docker-compose to manage the multiple services I'll be running, so let's install that:

# curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# chmod 755 /usr/local/bin/docker-compose

Now we have the tools to put together our WordPress site. docker-compose is used to group containers that make up a complete service. While I could deploy the Nginx container alongside the WordPress and MySQL containers, I am keeping them separate since Nginx will serve other services beyond WordPress; it will be the reverse proxy for all of ericpark.dev.

Let’s look at the WordPress docker-compose.yml:

version: '3'
services:
  mysql:
    image: mysql:8.0.16
    container_name: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: secret-pw
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    volumes:
      - /ds1/wordpress_mysql:/var/lib/mysql
      - ./my.cnf:/etc/mysql/my.cnf
    networks:
      - wordpress-net
  wordpress:
    image: wordpress:5.2.1-apache
    container_name: wordpress
    restart: always
    depends_on:
      - mysql
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_CONFIG_EXTRA: "define('WP_HOME','https://ericpark.dev/wordpress'); define('WP_SITEURL','https://ericpark.dev/wordpress');"
    volumes:
      - /ds1/wordpress_wp-content:/var/www/html/wp-content
    networks:
      - wordpress-net
networks:
  wordpress-net:

As I've been alluding to, mysql and wordpress are their own services, each running in an independent container. Let me break it down a bit:

MySQL:

  • I pin a specific version number for the MySQL image, preventing an accidental upgrade just because the latest tag was moved.
  • I name the container mysql for convenience; this considerably shortens docker commands against the container. If you are running multiple instances, you probably want to leave this off.
  • Since I'm using MySQL 8, I add --default-authentication-plugin=mysql_native_password to the startup command.
  • I set the container to restart if it exits. This also restarts the container on reboot.
  • I'm passing some environment variables into MySQL which create the desired database. These are defined in the container's docs. Don't worry, hackers: that's not my actual password.
  • For volumes, I map the data directory onto the host file system, which makes data persist between deployments. Very desirable! I'm also passing in a configuration file that tweaks the database to use less memory. Note: this is likely not a production-friendly change, but I set performance_schema = 0; I haven't really analyzed what it does other than reduce memory usage in my limited environment. This config file lives in the same directory as the docker-compose.yml (see the sketch after this list).
  • Finally, the container is added to its own network for communication with WordPress; the app-to-database traffic never leaves Docker.
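
For reference, here is a minimal sketch of that my.cnf. The post only calls out one setting, so take anything beyond performance_schema as assumption:

# my.cnf - memory-saving tweak mounted into the MySQL container
[mysqld]
performance_schema = 0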

WordPress:

  • Again, I choose a specific version, which happens to be the latest.
  • Named for convenience.
  • depends_on: mysql starts the mysql container first. Note that this only orders startup; WordPress will still throw errors while waiting for the database to come up, so this could be improved with a wrapper script.
  • More environment variables. More docs!
  • Note: the DB_HOST configuration uses 'mysql' as the hostname. This works because the service is named mysql at the top of the file and both containers share a Docker network, which provides DNS resolution for service names.
  • WORDPRESS_CONFIG_EXTRA takes care of the URL rewrite I am doing. Since Nginx will be used for more than just WordPress, I will map this service to ericpark.dev/wordpress/.
  • A volume persists the uploaded content.
  • The container connects to the same network as the MySQL service.

After this I execute docker-compose up -d to start the app. Since I have not exposed any ports on the host, the app is actually inaccessible for now; I need to deploy the Nginx reverse proxy to accept requests and pass them to WordPress.
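
For reference, bringing the stack up and keeping an eye on it looks like this, run from the directory holding the docker-compose.yml:

docker-compose up -d
docker-compose ps                  # both containers should report "Up"
docker-compose logs -f wordpress   # watch the DB-connection retries settle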

Here’s the Nginx docker-compose.yml:

version: '3'
services:
  nginx:
    image: nginx:1.17.0
    container_name: nginx
# Leave this commented out during initial cert generation; uncomment afterwards to auto-reload nginx
#    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - 80:80
      - 443:443
  certbot:
    image: certbot/certbot
# Leave this commented out during initial cert generation; uncomment afterwards to auto-renew certs
#    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
networks:
  default:
    external:
      name: wordpress_wordpress-net

Things get a little interesting here with the TLS certificate generation:

  • The command and entrypoint additions set the containers to renew the TLS cert when it's time. These should be commented out during the initial certificate creation.
  • nginx.conf, the configuration file for Nginx, is loaded from the same directory as the docker-compose.yml.
  • A couple of data directories are created to share the certs between the certbot and Nginx containers.
  • I map the docker host's HTTP (80) and HTTPS (443) ports to the Nginx container. It will handle all requests for the ericpark.dev domain.
  • We also attach this container to the WordPress network to allow our connections.

Here’s the nginx.conf:

events {}

http {
  server {
    listen 80;
    server_name ericpark.dev;

    location / {
      return 301 https://$host$request_uri;
    }

    location /.well-known/acme-challenge/ {
      root /var/www/certbot;
    }
  }

  server {
    listen                    443 ssl;
    server_name               ericpark.dev;
    ssl_certificate           /etc/letsencrypt/live/ericpark.dev/fullchain.pem;
    ssl_certificate_key       /etc/letsencrypt/live/ericpark.dev/privkey.pem;
    include                   /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam               /etc/letsencrypt/ssl-dhparams.pem;
    add_header                Strict-Transport-Security "max-age=604800";

    location / {
      return 301 https://$host/wordpress$request_uri;
    }

    location /wordpress/ {
      proxy_pass http://wordpress:80/;

      proxy_set_header X-Forwarded-Host $host;
      proxy_set_header X-Forwarded-Proto https;
    }
  }
}

  • The server listens on both 80 and 443.
  • Port 80 simply redirects all requests to HTTPS, except for the Let's Encrypt challenge URL.
  • Port 443 is configured to pick up the TLS certs from certbot.
  • Requests to the root of ericpark.dev are redirected to /wordpress/.
  • The /wordpress/ location terminates TLS and passes the plain-HTTP connection to the WordPress container.

Now we have a bit of a catch-22 here: Nginx won't start because there are no certs in the specified locations, and certbot can't generate the certificates because it needs Nginx to proxy it the challenge request.

A couple of scripts come into play here:

#!/bin/bash

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

domains=(ericpark.dev)
rsa_key_size=4096
data_path="./data/certbot"
email="eric@ericpark.dev"
staging=0 # Set to 1 if you're testing your setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi


if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
  echo
fi

echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
docker-compose run --rm --entrypoint "\
  openssl req -x509 -nodes -newkey rsa:1024 -days 1\
    -keyout '$path/privkey.pem' \
    -out '$path/fullchain.pem' \
    -subj '/CN=localhost'" certbot
echo


echo "### Starting nginx ..."
docker-compose up --force-recreate -d nginx
echo

And here is the second script:

#!/bin/bash

domains=(ericpark.dev)
rsa_key_size=4096
data_path="./data/certbot"
email="eric@ericpark.dev"
staging=0 # Set to 1 if you're testing your setup to avoid hitting request limits

echo "### Deleting dummy certificate for $domains ..."
docker-compose run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo


echo "### Requesting Let's Encrypt certificate for $domains ..."
#Join $domains to -d args
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Select appropriate email arg
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi

docker-compose run --rm --entrypoint "\
  certbot certonly --webroot -w /var/www/certbot \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot
echo

echo "### Reloading nginx ..."
echo "Do this: docker-compose exec nginx nginx -s reload"

I execute these in two parts because it seemed to work better that way. The first script downloads two configuration files referenced by nginx.conf and generates dummy certs so that Nginx can start; it then starts the app. The second script triggers certbot to generate the real TLS certs, making use of the challenge URL set up in Nginx. After this I need to manually reload Nginx for the certs to take effect, using the following command: docker-compose exec nginx nginx -s reload

To enable auto-renewal of the TLS cert, uncomment the lines in docker-compose.yml and restart with: docker-compose up -d

At this point I can finally access the WordPress page, and my TLS cert is marked as valid in Chrome.
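
A quick way to verify the redirect chain from the command line (output abbreviated; exact headers will vary):

curl -sI http://ericpark.dev/ | head -n 1    # expect a 301 redirect to HTTPS
curl -sI https://ericpark.dev/ | head -n 1   # expect a 301 redirect to /wordpress/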

Valid at last.

Victory!

Finally, let's check how secure our TLS setup is. There are various tools out there, but I chose the Qualys SSL Server Test. There is lots of information to be gleaned here if you want to harden your setup beyond what Let's Encrypt provides by default. Suggestions from these tests would be implemented in the nginx.conf of the reverse proxy. Keep in mind that tightening some of these settings can limit which clients can connect to your web apps; for instance, disabling TLSv1.0 and TLSv1.1 will prevent older browsers without TLSv1.2 support from accessing your site.
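
As an illustration only (my config does not do this): restricting protocols means changing the ssl_protocols directive. Since that directive ships in the downloaded options-ssl-nginx.conf, the edit belongs there rather than as a duplicate directive in the server block:

# in options-ssl-nginx.conf - hypothetical hardening; older clients lose access
ssl_protocols TLSv1.2;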

Not too shabby.

Let’s recap by tracing how your HTTP request got to this page today.

Your browser asks for the DNS record associated with ericpark.dev. This record ultimately points to the IP address of my ISP's modem. The traffic is passed to my router, which is configured to relay everything on 80 or 443 to this VM. firewalld has been configured to allow these connections, and the traffic reaches the Nginx container, since it is mapped to the host's 80 and 443 ports. Nginx then routes /wordpress/ requests to the WordPress container, which queries the MySQL database for the data needed to build the page that is returned to you. The whole exchange is encrypted with the TLS certificate over HTTPS.

Setting up the DNS configuration

When you buy a domain from a registrar, you are essentially buying the rights to the domain name and the ability to control the DNS records associated with it. Since I had dealt with GoDaddy in the past, that's the registrar I used when I purchased ericpark.dev. While I'm sure I could figure out GoDaddy's Domain API, I already have working scripts for Digital Ocean's DNS API, and since I still intend to host services in their cloud, I decided to use them for my DNS needs too.

This is relatively easy to configure; I am essentially telling GoDaddy's DNS to hand over responsibility to Digital Ocean. I went into the DNS management page in GoDaddy and set the name servers to Digital Ocean's.
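
The screenshot of that change is gone, but for reference, Digital Ocean's name servers are:

ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com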

With this complete, I head over to DO to set up my initial DNS records.
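
In zone-file notation, the important records look roughly like this (the IP is a placeholder):

ericpark.dev.   IN  A    203.0.113.20
ericpark.dev.   IN  CAA  0 issue "letsencrypt.org"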

I've also already begun expanding my domain with a mail server in the cloud. More on that later.

The records of importance here are:

  • The A record for ericpark.dev. This simply points to the IP where my domain can be found.
  • The CAA record, which states that letsencrypt.org is my CA.

To be fair, all we need to get started is the A record, but as I explored TLS best practices I created the CAA record as well. It essentially tells everyone that Let's Encrypt is the only CA that should issue certificates for my domain.

This is a good moment to talk about Dynamic DNS. Setting these DNS records up manually is great and gets the job done, but what happens if my IP address changes? For some ISPs this rarely happens, but it's good to be prepared. Previously, when I used a statically configured sub-domain name, a change in my IP meant I lost the ability to connect to my machines; DNS would no longer report the correct IP. Enter Dynamic DNS. The idea is to update the DNS servers with the latest IP periodically, preferably whenever a change is detected. It's on my to-do list to make this more reliable, but for now I have a script that runs at 4 AM each day and updates the Digital Ocean DNS servers with my latest IP.
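
The cron entry itself is a single line (the script path here is hypothetical). And below it, the script it runs:

0 4 * * * /root/bin/update-dns.sh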

#!/bin/sh

# your domain name
domain="ericpark.dev"

# record ID to update
record="xxx"

# API key
api_key="xxx"

# look up the current public IP
ip="$(curl -s http://ipecho.net/plain)"

# push the new IP to the Digital Ocean DNS API
curl -s \
     -H "Authorization: Bearer $api_key" \
     -H "Content-Type: application/json" \
     -d '{"data": "'"$ip"'"}' \
     -X PUT "https://api.digitalocean.com/v2/domains/$domain/records/$record"

It boils down to this: get your API key from DO, find the ID of the record you want to keep updated via the /domains/$domain/records API call, and populate the script.
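
For example, finding the record ID looks something like this, where $api_key is your DO token; the ID of each record is in the JSON response:

curl -s -H "Authorization: Bearer $api_key" \
    "https://api.digitalocean.com/v2/domains/ericpark.dev/records" | python -m json.tool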

With this configured, when you enter ericpark.dev into your browser, it is correctly told to find my server at my current IP. In the next post we finally set up Docker and deploy WordPress.

The Infrastructure

In this post I'm going to focus on the basic infrastructure running the network and this blog. I imagine I'll have another post laying out my entire network once I find time to diagram it all.

This blog runs on a ZOTAC ZBOX-CI327NANO, a pre-built bare-bones nano PC with a quad-core Intel® Celeron® N3450 processor. I've equipped it with 8 GB of RAM and a 128 GB SSD I had lying around from a previous desktop upgrade. Minus the drive, this ran me about $200. That was a great buy considering a UniFi Security Gateway, a popular homelabber choice, costs about $130, and this hardware was chosen to accomplish the same functionality and more, plus the experience of homebrewing it along the way.

This PC has two Gigabit Ethernet ports, which were essential for my configuration. One is connected directly to my ISP's gateway; the other is connected to the rest of the home network. With Xfinity I was able to put the ISP's router into bridged mode so it acted only as a modem, and I actually saw a bandwidth increase as a result. With Frontier I have only been able to place my router in the DMZ, which, while effective, is not completely ideal. Frontier's cable package uses Ethernet for transport, and from my admittedly minimal research it appears I might break it trying to achieve the same configuration. I may revisit this in the future.

In an effort to run my environment more like a data center, I opted for VMware ESXi, the installation of which could be a whole other post. It involved building my own ESXi 6.0 ISO with PowerShell utilities to re-add drivers VMware had removed, in order to support my officially unsupported hardware; this means I can install ESXi on almost any host I choose. ESXi lets me run multiple virtual machines on one box, separating concerns and simplifying any one server's duties. Even on modest hardware, it runs a few services for me with good performance.

The primary virtual machine on this host is OPNsense, my edge-router appliance. It is configured so that it is the only VM that communicates directly with the Internet. It runs services like DHCP, DNS, UPnP, and Dynamic DNS, acts as my firewall, and provides statistics on my connection.

This ESXi host also runs a minimally provisioned CentOS VM that I use as a server to log into when I'm remote. It's an SSH server with the utilities and tools I need to troubleshoot and manage my network. While I may look into OpenVPN someday, SSH-proxied ports have so far accomplished everything I've needed. This VM also lets me deny all external access to the OPNsense OS as an added layer of security; I only access the router's management interface from the internal network.
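
As an example of those SSH-proxied ports, forwarding a local port to the OPNsense web UI through the jump VM looks like this; the host name and internal IP are placeholders:

ssh -L 8443:192.168.1.1:443 eric@jump.ericpark.dev
# then browse to https://localhost:8443 locally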

Finally, that brings us to the CentOS VM running Docker that houses this blog. I have provisioned this machine, dubbed docker1, with 2 GB of RAM, roughly the size of a $10/mo droplet at Digital Ocean. In the future we'll get into hosting services in the cloud, so this made for a comparable setup. I still have limited room to grow it if needed.

If you've been keeping track, I use open source software where possible, unless a project involves free or affordable software I'd like to learn, such as the conscious decision to use VMware ESXi instead of KVM. A nice side effect is that this keeps the cost of my home lab down.

In my next post we'll take a look at configuring my domain's DNS entries, so that requests from your web browser find the correct server to serve this page.

The .dev TLD

Welcome to ericpark.dev, a new domain bearing my name. When Google announced they would open up the new .dev TLD, it sparked the idea of creating a domain of my very own. I've used free sub-domains from afraid.org in the past, simply for convenient access to my home computers, but I decided I wanted to carve out my own piece of the web.

Google enforces the use of HTTPS on .dev sites by means of HSTS. I have generated self-signed certs before, but I had no experience getting one from a CA, which I would certainly need. That's where Let's Encrypt comes in: a free CA was great news for my new little hobby project.

Please bear with me with respect to the WordPress theme or lack thereof. I’ll hopefully be giving this blog a face-lift in the near future.

In the following posts I will be building the WordPress server that this very page is served from, running on my own personal hardware and using virtualization and containerization to power the application. I'll also be adding Let's Encrypt containers to generate valid TLS certificates.