Run a Postfix mailserver with TLS and SPF in Kubernetes

Run a Postfix mailserver in Kubernetes - a first try with much room for improvement ;-)

May 17, 2018

CHANGELOG

2018-06-14

  • updated etcdctl parameter to use a secure connection

While moving some of my old stuff (an Apache Roller blog with Apache webserver, Tomcat, PostgreSQL and the like) to my little Kubernetes cluster I also wanted to move my mailserver from my old host to Kubernetes (or K8s for short). Postfix (or mailservers in general, I guess) and Kubernetes aren’t really a match made in heaven ;-) But since I wanted to manage as much as possible via Kubernetes I gave it a try.

I’m still not finished and quite a few things remain to be automated, so let’s call this a first alpha version with much room for improvement. More on potential improvements throughout the text and at the end. So if you want to do the same, consider this text a first step or starting point.

Also note that this is only about Postfix and not about Postfix plus whatever IMAP server. Personally I use Archiveopteryx (an advanced PostgreSQL-based IMAP/POP server) as the final delivery destination to store my mails. Postfix delivers the mails to Archiveopteryx via the LMTP protocol. But it’s up to you whether you want to use Dovecot, Cyrus IMAP or something else. However, we’ll cover the basic settings for where Postfix should deliver mail for the virtual users (because you don’t want to store mails in a pod’s filesystem, right? ;-) ).

The first thing I want to mention (it’s a little off topic) is a tool called kubectx. Since I already have quite a few namespaces it’s a little bit annoying to provide -n whatever_namespace to the kubectl command all the time. kubectx ships with a very handy tool called kubens which switches to whatever namespace you like. So as long as you work in the same namespace you no longer need -n for kubectl. That said, let’s start:

I decided to put everything mailserver related into its own K8s namespace as it has no dependencies on services running in other namespaces:

kubectl create namespace mailserver

If you use the kubens tool mentioned above you just need to enter

kubens mailserver

now and there is no need to further specify -n mailserver or --namespace=mailserver to kubectl. If you don’t have kubens installed, remember to specify the namespace to kubectl every time (I won’t do it in the following text).

As we run K8s we need a Postfix container image. I used Jessie Frazelle’s Postfix Dockerfile as a template and modified it to my needs. You can clone my Github repo kubernetes-postfix which already includes quite a few resources to set up Postfix in Kubernetes:

git clone https://github.com/githubixx/kubernetes-postfix

In the repository you’ll find the Dockerfile for the Postfix container, which looks like this:

FROM alpine:3.7

RUN apk add --no-cache \
        bash \
        ca-certificates \
        libsasl \
        mailx \
        postfix \
        rsyslog \
        runit \
        postfix-policyd-spf-perl

COPY service /etc/service
COPY runit_bootstrap /usr/sbin/runit_bootstrap
COPY rsyslog.conf /etc/rsyslog.conf

STOPSIGNAL SIGKILL

ENTRYPOINT ["/usr/sbin/runit_bootstrap"]

To keep the container image small we use Alpine Linux as a base and install a few packages which are needed to operate Postfix. The first two COPY lines copy the files needed for runit. runit is an init scheme for Unix-like operating systems that initializes, supervises, and ends processes throughout the operating system. runit is an init daemon, so it is the direct or indirect ancestor of all other processes. It is the first process started during booting, and continues running until the system is shut down. runit basically starts and supervises Postfix itself (as you can see in service/postfix/run) and rsyslogd (service/rsyslog/run). Have a look at the files in the Github repo if you need more information. The runit_bootstrap script is our entrypoint for the container.
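
Just to illustrate the idea (this is a sketch only, the actual service/postfix/run in the repo may look different): such a run script typically builds the lookup tables first and then execs Postfix in the foreground so runit can supervise it:

#!/bin/sh
# Build the lookup databases Postfix needs before it starts
newaliases
postmap /etc/postfix/vmailbox
# Start Postfix in the foreground so runit can supervise it
exec /usr/sbin/postfix start-fg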

Before we can build the container image we need a bunch of additional files.

The first ones are for Forward Secrecy. From the Postfix docs: The term "Forward Secrecy" (or sometimes "Perfect Forward Secrecy") is used to describe security protocols in which the confidentiality of past traffic is not compromised when long-term keys used by either or both sides are later disclosed. For better forward secrecy settings we create two DH parameter files which we later use in Postfix’s main.cf. With prime-field EDH, OpenSSL wants the server to provide two explicitly-selected (prime, generator) combinations. One for the now long-obsolete “export” cipher suites, and another for non-export cipher suites. Postfix has two such default combinations compiled in, but also supports explicitly-configured overrides:

mkdir dhparam
openssl dhparam -out dhparam/dh512.pem 512
openssl dhparam -out dhparam/dh2048.pem 2048
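
In main.cf these files are then referenced roughly like this (the parameter names are the standard Postfix ones; check the main.cf in the repo for the exact settings):

...
# DH parameters for the long-obsolete "export" ciphers
smtpd_tls_dh512_param_file  = /etc/postfix/dh512.pem
# DH parameters for non-export ciphers (a 2048 bit file is fine here)
smtpd_tls_dh1024_param_file = /etc/postfix/dh2048.pem
...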

Next we need a few configuration files for Postfix. I prepared a few files which are included in the repo you checked out above. Adjust the files to your needs. I put quite a lot of comments into the files so I won’t repeat everything here. Just to get a quick overview:

etc/mail/aliases

The aliases table provides a system-wide mechanism to redirect mail for local recipients. The redirections are processed by the Postfix local delivery agent. Basically this file isn’t that interesting as we are mainly interested in receiving mails from other mail servers and delivering the mails for our “virtual” users to a final destination like a Dovecot IMAP server or Archiveopteryx via the LMTP protocol, as mentioned above. Nevertheless it’s a good habit to create aliases for some common users and send those mails to an administrator mail account.
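
A minimal aliases file could look like this sketch (the destination address is just a placeholder):

# Redirect mail for some common local accounts to an administrator
postmaster: admin@domain.tld
abuse:      admin@domain.tld
root:       admin@domain.tld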

etc/postfix/headercheck
etc/postfix/bodycheck

Sometimes it is very useful to filter mails directly where they’re received, which is Postfix in our case. You can use regular expression rules in these files to reject mails that contain certain keywords or even a specific character set. That’s quite handy if a spam wave pops up which isn’t recognized by your spam filter.
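
To give you an idea, the rules use Postfix’s regexp/pcre table format; the patterns below are purely illustrative:

# headercheck: reject mails with a known spam subject
/^Subject:.*cheap pills/          REJECT No thanks
# bodycheck: reject mails containing a blacklisted URL
/https?:\/\/spam\.example\.com/   REJECT Blacklisted URL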

etc/postfix/main.cf

This is the most important one of course: the main Postfix configuration file. Please read the comments and adjust it to your needs (and add additional parameters if needed of course ;-) )! We’ll revisit some parameters in just a second.
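
Just as a teaser, the part that makes Postfix accept mail for our virtual users and hand it over via LMTP looks roughly like this (the LMTP host and port are placeholders for your IMAP server; see the comments in the repo’s main.cf for details):

...
# Domains we are the final destination for
virtual_mailbox_domains = domain.tld
# Accepted mail addresses (see etc/postfix/vmailbox)
virtual_mailbox_maps    = hash:/etc/postfix/vmailbox
# Hand accepted mails over to the IMAP server via LMTP (placeholder host/port)
virtual_transport       = lmtp:inet:imap.domain.tld:24
...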

etc/postfix/vmailbox

Here you finally specify the mail addresses you accept and whose messages you want delivered to their final destination. The script service/postfix/run contains a line which converts the file into a database that Postfix can use for fast lookups (postmap /etc/postfix/vmailbox) before Postfix gets started.
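
The entries in vmailbox are simple key/value pairs, the key being the accepted mail address. With delivery via LMTP the right-hand side mostly just needs to be non-empty, e.g.:

# Mail addresses we accept; the right-hand side just has to be non-empty
info@domain.tld       domain.tld/info/
john.doe@domain.tld   domain.tld/john.doe/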

Now we can build the Postfix image. As the K8s cluster needs to be able to pull the image later when we deploy the Postfix DaemonSet you should include your Docker registry in the image tag name. E.g. if your Docker registry host is registry.domain.tld and listens on port 5000 you just run (don’t forget the . at the end ;-) ):

docker build -t registry.domain.tld:5000/postfix:0.1 .

Push that image to your Docker registry now:

docker push registry.domain.tld:5000/postfix:0.1

Because we want to be able to change the Postfix configuration quickly without building a new image we store the configuration files in a K8s ConfigMap. So if you’re still in my kubernetes-postfix repo execute the following commands:

kubectl create configmap postfix --from-file=etc/postfix/
kubectl create configmap mail --from-file=etc/mail/

This creates a ConfigMap for the Postfix configuration files and one for the aliases file. We’ll mount the files via the subPath option into the Postfix container so that Postfix can read them. If you want to change any of the files later, just adjust the file accordingly and run

kubectl create configmap postfix --dry-run -o yaml --from-file=etc/postfix/ | kubectl replace -f -

This replaces the files. Afterwards you can just kill the Postfix container and Postfix will restart in a new container with the new files (but of course that’s maybe the worst “reload configuration strategy” ;-) ).
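
Killing the pod can be done with a label selector since the DaemonSet below labels the pod with app=postfix:

kubectl delete pod -l app=postfix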

Next we create two secrets. The first one includes our DH parameters that we created above:

kubectl create secret generic dhparam --from-file=dhparam/

The other secret contains the TLS certificate files for your mailserver. This is a little bit tricky and I definitely need to automate it, but for now here is the “do it manually” version. In my K8s tutorial (Kubernetes the Not So Hard Way With Ansible (at Scaleway) - Part 8 - Ingress with Traefik) I used the Traefik proxy to automatically obtain TLS certificates for my web site from Let’s Encrypt. These certificates are valid for three months and need to be renewed before that period expires. Traefik stores the certificates in etcd (which we already use for our K8s cluster), from where we can fetch them and use them for the Postfix TLS configuration. As you may have already seen in the main.cf configuration file we need two files:

...
# Let's Encrypt certificate file
smtpd_tls_cert_file = /etc/postfix/certs/fullchain.pem
# Let's Encrypt key file
smtpd_tls_key_file  = /etc/postfix/certs/privkey.pem
...

I just want to mention cert-manager here as another possible solution besides Traefik proxy. cert-manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources. It will ensure certificates are valid and up to date periodically, and attempt to renew certificates at an appropriate time before expiry.

In the setup I describe here it is important that Traefik and Postfix both run on at least one common worker node and that the DNS entry for your mailserver (e.g. mail.domain.tld) points to this node. The HTTP(S) service for mail.domain.tld needs to answer on ports 80/443 (in this case the Traefik proxy) so that Let’s Encrypt is able to verify the domain, and the SMTP service (in this case Postfix) needs to answer on port 25. So the DNS entry mail.domain.tld points to the same IP but is used for different services on different ports. You can’t specify different IPs for HTTP(S) and SMTP.

Since Traefik also renews certificates before they expire and since I already have Traefik running, it was an obvious choice for me. Basically Traefik can only request certificates for an HTTPS Ingress. So we need to cheat a little bit ;-) Other mailservers will connect to our mailserver by looking up the MX DNS record. The MX record for our domain might look like this:

dig domain.tld MX

domain.tld.           60      IN      MX      10 mail.domain.tld.

So in this case we need a TLS certificate for mail.domain.tld from Let’s Encrypt. All we need to get this is a K8s Ingress for mail.domain.tld which Traefik manages. Traefik will intercept and answer the challenge request from Let’s Encrypt and finally store the certificate files in etcd as already stated above. If you only deploy the Ingress resource with no backend then of course anyone requesting mail.domain.tld in a browser will get an error message. To avoid this we can add a simple nginx backend which just redirects all incoming requests wherever you like or delivers a static page.

So just create a simple nginx configuration (adjust to your needs of course and you can find the files in my kubernetes-postfix repo in the directory nginx too):

events {
  worker_connections  1024;
}

http {
  include  /etc/nginx/mime.types;

  server {
    listen       80;
    server_name  mail.domain.tld;
    root         /opt/httpd/docs/mail.domain.tld;
    
    location / {
      return 301 https://www.domain.tld;
    }
  }
}

This nginx configuration handles all requests to mail.domain.tld and redirects them to www.domain.tld. We only listen on port 80 because HTTPS is terminated by the Traefik proxy, which then forwards the requests to this pod.

Next we need a simple Dockerfile to create a nginx container image:

FROM nginx:1.13-alpine

RUN install -d -o root -g root -m 0755 /etc/nginx && \
    install -d -o nobody -g nobody -m 0755 /opt/httpd/docs/mail.domain.tld

COPY nginx.conf /etc/nginx/nginx.conf

Finally we build the nginx image and push it to our private Docker registry:

docker build -t registry.domain.tld:5000/mail-domain-tld:0.1 .
docker push registry.domain.tld:5000/mail-domain-tld:0.1

Again you should include your Docker registry in the image tag name. E.g. if your Docker registry host is registry.domain.tld and listens on port 5000 then the example above would be fine. I name the nginx image after the mail domain we want to serve, with the dots replaced by dashes.

Now we have our “dummy” webserver ready so we can roll it out. First we need a Kubernetes Deployment to bring the nginx webserver online. The Deployment could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-mail-domain-tld
  namespace: mailserver
  labels:
    app: nginx-mail-domain-tld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-mail-domain-tld
  template:
    metadata:
      labels:
        app: nginx-mail-domain-tld
    spec:
      imagePullSecrets:
      - name: registry-domain-tld
      restartPolicy: Always
      containers:
      - name: nginx-mail-domain-tld
        image: registry.domain.tld:5000/mail-domain-tld:0.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 2
          initialDelaySeconds: 10
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: 80
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 2
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: 80
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 10m
            memory: 10Mi
          requests:
            cpu: 10m
            memory: 10Mi
        ports:
        - containerPort: 80

Save the file as deployment.yml. What we define here is a Deployment called nginx-mail-domain-tld in the mailserver namespace and put a label on it. We run only one replica of the pod. If you have a private Docker registry then Kubernetes needs to authenticate against it. That’s what imagePullSecrets is for. For more information about this topic and how to create an imagePullSecret have a look here: kubernetes.io - Pull an Image from a Private Registry.

We also define the image we want to deploy via image: registry.domain.tld:5000/mail-domain-tld:0.1 and further define a few probes and limit resource usage for the pod. Finally we define a containerPort: 80 on which the nginx webserver is listening.

Roll out the deployment via

kubectl apply -f deployment.yml

Now we need a Kubernetes Service:

kind: Service
apiVersion: v1
metadata:
  name: nginx-mail-domain-tld
  namespace: mailserver
  labels:
    app: nginx-mail-domain-tld
spec:
  selector:
    app: nginx-mail-domain-tld
  ports:
  - name: http
    port: 80

I guess it’s pretty obvious what happens here. Save the content in a file called service.yaml and roll it out:

kubectl apply -f service.yaml

The service is needed for Traefik and the Ingress could look like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mail-domain-tld
  namespace: mailserver
  annotations:
    kubernetes.io/ingress.class: "traefik"
  labels:
    app: nginx-mail-domain-tld
spec:
  rules:
  - host: mail.domain.tld
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-mail-domain-tld
          servicePort: 80

Save the content in a file called ingress.yaml. BEFORE you create the Ingress resource make sure that mail.domain.tld (replace with your mail domain name of course) points to the correct IP of the node Traefik runs on. That’s important as otherwise Let’s Encrypt won’t be able to reach your domain and Traefik won’t be able to request the certificate.
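
A quick sanity check could look like this (the returned IP should be the one of the node Traefik runs on):

dig +short mail.domain.tld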

Now roll out the Ingress resource:

kubectl apply -f ingress.yaml

When Traefik picks up the Ingress resource it also tries to get the certificate for mail.domain.tld from Let’s Encrypt. It could take a few minutes until Traefik stores the certificate data in etcd. To verify that you have a valid Let’s Encrypt certificate you can use curl, e.g.:

curl -v https://mail.domain.tld

In the output you should see something like this:

...
Server certificate:
  ...
  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
  ...

Now we can grab the certificate data from etcd and use it for Postfix. Basically you can view the value of the key /traefik/acme/account/object like this (replace controller1 with the node name of the first etcd node):

ssh controller1 '\
 K8S_CERT_DIR=/var/lib/kubernetes
 ETCDCTL_API=3 \
 sudo -E etcdctl get /traefik/acme/account/object \
   --print-value-only \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=${K8S_CERT_DIR}/ca-etcd.pem \
   --cert=${K8S_CERT_DIR}/cert-etcd.pem \
   --key=${K8S_CERT_DIR}/cert-etcd-key.pem \
 | gunzip' | jq

From all that output what we need are the values of the keys PrivateKey and Certificate for the domain mail.domain.tld. We can extract them with the filters the jq utility offers (of course again replace mail.domain.tld with your mail domain). Save the certificate files in a secure place, e.g. in a directory called certs_mail-domain-tld:

ssh controller1 '\
 K8S_CERT_DIR=/var/lib/kubernetes
 ETCDCTL_API=3 \
 sudo -E etcdctl get /traefik/acme/account/object \
   --print-value-only \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=${K8S_CERT_DIR}/ca-etcd.pem \
   --cert=${K8S_CERT_DIR}/cert-etcd.pem \
   --key=${K8S_CERT_DIR}/cert-etcd-key.pem \
 | gunzip' \
 | jq '.DomainsCertificate.Certs[].Certificate | select(.Domain == "mail.domain.tld") | .PrivateKey' \
 | sed 's/"//g' \
 | base64 -d > certs_mail-domain-tld/privkey.pem

and

ssh controller1 '\
 K8S_CERT_DIR=/var/lib/kubernetes
 ETCDCTL_API=3 \
 sudo -E etcdctl get /traefik/acme/account/object \
   --print-value-only \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=${K8S_CERT_DIR}/ca-etcd.pem \
   --cert=${K8S_CERT_DIR}/cert-etcd.pem \
   --key=${K8S_CERT_DIR}/cert-etcd-key.pem \
 | gunzip' \
 | jq '.DomainsCertificate.Certs[].Certificate | select(.Domain == "mail.domain.tld") | .Certificate' \
 | sed 's/"//g' \
 | base64 -d > certs_mail-domain-tld/fullchain.pem

Now we can store the certificate files as Kubernetes secrets:

kubectl create secret generic mail-domain-tld --from-file=certs_mail-domain-tld/

If you want to replace the certificates later (they’re only valid for 3 months) you can use this command:

kubectl create secret generic mail-domain-tld --dry-run -o yaml --from-file=certs_mail-domain-tld/ | kubectl replace -f -

We’ll mount the secrets later into the container’s filesystem so that Postfix is able to read the files. As I don’t have persistent storage yet (only the storage the K8s nodes provide locally) I need to make sure that my Postfix pod basically stays on one node (it’s pinned to that node, so to say). This is, well, not so nice, but for me it’s the only option ATM. If you have persistent storage for K8s like https://rook.io/ or https://www.gluster.org/ or whatever you can basically skip the next step. In that case you can create a persistent volume that moves with your pod if the pod gets scheduled on another node, e.g. because the node the Postfix pod runs on crashes. The persistent storage is needed for the mail queue. But there is one problem: a mailserver should have a PTR DNS record for reverse lookups (we’ll talk about this further down). So if the Postfix pod moves to another node the PTR record needs to be changed too. So in case your Postfix container “moves” between nodes you may need an init container (or some other process) that changes the PTR record dynamically via some API, if your provider supports it. For Scaleway you could write a script that talks directly to the API (see https://github.com/scaleway/scaleway-cli/blob/master/pkg/api/api.go#L474 or https://github.com/scaleway/api.scaleway.com/blob/master/contents/ip.md) or use Terraform (see https://github.com/terraform-providers/terraform-provider-scaleway/issues/34), or maybe the Scaleway CLI scw supports it in the meantime…

So to be able to “pin” a pod to a worker node we give the node a label (you can use any node you want of course but I’ll stay with the first one called k8s-worker1):

kubectl label nodes k8s-worker1 mailserver=true

In this case the label is a simple key/value: mailserver=true. You can see what labels a node has via

kubectl get nodes -o wide --show-labels

As mentioned above I don’t have persistent storage ATM so I need to store the Postfix spool directories on the worker node’s local filesystem. If you have read my other blog posts on how to set up K8s with Ansible and followed them, you can create the needed directories on the hosts with these commands:

ansible k8s_worker -m file -a 'path=/opt/postfix state=directory'
ansible k8s_worker -m file -a 'path=/opt/postfix/var/spool/postfix state=directory'
ansible k8s_worker -m file -a 'path=/opt/postfix/var/mail state=directory'
ansible k8s_worker -m file -a 'path=/opt/postfix/var/mail-state state=directory'

Otherwise just use ssh and mkdir -p ... to create the directories. Of course you can create the directories wherever you want; you just need to adjust the volume definitions in the K8s manifests later accordingly.
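
Without Ansible something like this should do the job (assuming you have sudo rights on the worker node):

ssh k8s-worker1 'sudo mkdir -p /opt/postfix/var/spool/postfix /opt/postfix/var/mail /opt/postfix/var/mail-state'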

Now we can finally create the important K8s resources to run Postfix. Let’s start with a Role:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: mailserver
  name: postfix
rules:
- apiGroups: 
    - ""
  resources: 
    - configmaps
    - secrets
  resourceNames:
    - postfix
  verbs:
    - get

Adjust it to your needs and save this as role.yaml. As we created ConfigMaps and Secrets we of course need permissions to at least fetch them. That’s what we see in resources. In verbs we define that we just want to get the resources (i.e. no write permissions needed). The "" in apiGroups indicates the core API group.

Next we need a RoleBinding:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: postfix
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: postfix
subjects:
- kind: ServiceAccount
  name: postfix
  namespace: mailserver

Save this as rolebinding.yaml. This is basically the “glue” between the Role and the ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: postfix
  namespace: mailserver

Save this to serviceaccount.yaml. We’ll use this ServiceAccount in our DaemonSet configuration to give the Postfix pod the permissions it needs to operate, which we defined in the Role above.
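
Once Role, RoleBinding and ServiceAccount are created (see below) you can verify the permissions with kubectl auth can-i, e.g. (this requires that your own user is allowed to impersonate service accounts):

kubectl auth can-i get configmaps/postfix \
  --as=system:serviceaccount:mailserver:postfix -n mailserver

This should answer yes once everything is applied.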

The final K8s resource we define is a DaemonSet. This WOULD start the Postfix pod on all K8s worker nodes, but as the configuration contains a nodeSelector it will only start on the node that carries the label we assigned above. So here it is:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: postfix
  namespace: mailserver
  labels:
    app: postfix
spec:
  selector:
    matchLabels:
      app: postfix
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: postfix
    spec:
      nodeSelector:
        mailserver: 'true'
      serviceAccountName: postfix
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      imagePullSecrets:
      - name: registry-domain-tld
      containers:
      - image: registry.domain.tld:5000/postfix:0.1
        name: postfix
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 2
          tcpSocket:
            port: 25
          initialDelaySeconds: 10
          periodSeconds: 60
        readinessProbe:
          failureThreshold: 2
          tcpSocket:
            port: 25
          periodSeconds: 5
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "64Mi"
            cpu: "50m"
        ports:
        - name: smtp
          containerPort: 25
          hostPort: 25
        - name: smtp-auth
          containerPort: 587
          hostPort: 587
        securityContext:
          privileged: true
        volumeMounts:
        - name: config
          subPath: bodycheck
          mountPath: /etc/postfix/bodycheck
          readOnly: true
        - name: config
          subPath: headercheck
          mountPath: /etc/postfix/headercheck
          readOnly: true
        - name: config
          subPath: main.cf
          mountPath: /etc/postfix/main.cf
          readOnly: true
        - name: config
          subPath: vmailbox
          mountPath: /etc/postfix/vmailbox
          readOnly: true
        - name: aliases
          subPath: aliases
          mountPath: /etc/mail/aliases
          readOnly: true
        - name: var-mail
          mountPath: /var/mail
        - name: var-mail-state
          mountPath: /var/mail-state
        - name: var-spool-postfix
          mountPath: /var/spool/postfix
        - name: certs
          subPath: fullchain.pem
          mountPath: /etc/postfix/certs/fullchain.pem
          readOnly: true
        - name: certs
          subPath: privkey.pem
          mountPath: /etc/postfix/certs/privkey.pem
          readOnly: true
        - name: dhparam
          subPath: dh512.pem
          mountPath: /etc/postfix/dh512.pem
          readOnly: true
        - name: dhparam
          subPath: dh2048.pem
          mountPath: /etc/postfix/dh2048.pem
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: postfix
      - name: aliases
        configMap:
          name: mail
      - name: var-mail
        hostPath:
          path: /var/mail
      - name: var-mail-state
        hostPath:
          path: /var/mail-state
      - name: var-spool-postfix
        hostPath:
          path: /var/spool/postfix
      - name: certs
        secret:
          secretName: mail-domain-tld
      - name: dhparam
        secret:
          secretName: dhparam

Again adjust to your needs and save as daemonset.yaml. Let’s have a quick look at the important parts of the DaemonSet definition:

      nodeSelector:
        mailserver: 'true'

This will cause the Postfix pod to be scheduled only on the node (potentially more nodes of course) which we labeled with mailserver=true.

      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet

This will cause Postfix to listen on the host’s network and NOT on an internal pod IP. This makes it possible to reach the Postfix service from the internet (if the firewall allows it ;-) ). For pods running with hostNetwork, you should explicitly set the DNS policy to ClusterFirstWithHostNet.
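
Once the pod is running you can verify on the worker node that Postfix really listens on the host’s interfaces, for example:

ssh k8s-worker1 'sudo ss -tlnp | grep ":25 "'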

      imagePullSecrets:
      - name: registry-domain-tld

I need this setting to pull container images from my private Docker registry. Have a look at Pull an Image from a Private Registry for more information. This secret basically contains my Docker credentials which are needed to log in to the private registry.
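
If you haven’t created such a secret yet, it can be created like this (all values are placeholders of course):

kubectl create secret docker-registry registry-domain-tld \
  --docker-server=registry.domain.tld:5000 \
  --docker-username=myuser \
  --docker-password=mysecretpassword \
  --docker-email=admin@domain.tld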

        livenessProbe:
          failureThreshold: 2
          tcpSocket:
            port: 25
          initialDelaySeconds: 10
          periodSeconds: 60
        readinessProbe:
          failureThreshold: 2
          tcpSocket:
            port: 25
          periodSeconds: 5

The kubelet uses liveness probes to know when to restart a Container and uses readiness probes to know when a Container is ready to start accepting traffic (see Configure Liveness and Readiness Probes).

        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "64Mi"
            cpu: "50m"

Here we specify resource requests and limits. With requests we ask that the pod gets at least this amount of CPU and memory; this is guaranteed. I used the same settings for limits. If I set the limits higher than the requests then the pod may get more resources if it needs them and if they are available on that node. If you don’t specify limits then the pod may consume as many resources as the node provides. But setting at least a memory limit makes sense: if a process in a pod runs wild and consumes lots of memory, the OOM killer terminates it before it can put other pods on that node under memory pressure (see also: Resource Quality of Service in Kubernetes).

        ports:
        - name: smtp
          containerPort: 25
          hostPort: 25

This maps hostPort 25 to containerPort 25. The Postfix process running in the container could basically use a different port than 25; for connectivity from the outside the hostPort is the important one, as it has to be reachable from the internet.

        securityContext:
          privileged: true

As we need to bind Postfix to a port < 1024 we need this setting.

Finally, let’s have a look at some parts of the volumeMounts and volumes. In volumes we have

      - name: config
        configMap:
          name: postfix

Remember that we created a ConfigMap called postfix from all the Postfix configuration files above? We basically “mount” this ConfigMap in volumes and name this “volume” config. As you may recall this ConfigMap contains several “files”. You can see them with this command:

kubectl get configmap postfix -o yaml
apiVersion: v1
data:
  bodycheck: |
    ...
  headercheck: |
    ...
  main.cf: |
    ...
  vmailbox: |
    ...
kind: ConfigMap
metadata:
  name: postfix
  namespace: mailserver
  ...

In volumeMounts we now tell K8s to use subPath: bodycheck (which contains the content of the bodycheck file) and mount it into the container at mountPath: /etc/postfix/bodycheck. Additionally it should be readOnly:

        - name: config
          subPath: bodycheck
          mountPath: /etc/postfix/bodycheck
          readOnly: true

But we also have other kinds of volumes, like this one:

      - name: var-spool-postfix
        hostPath:
          path: /var/spool/postfix

This volume defines a hostPath, which is a directory on the worker node the pod runs on. As we need to persist the Postfix spool directory somewhere, a directory on the host is a valid option. As already explained a persistent volume would be more suitable but I don’t have one ATM ;-) To mount this host directory into the container we define this volumeMount:

        - name: var-spool-postfix
          mountPath: /var/spool/postfix

And finally we also have volumes that contain our secrets, like this one:

      - name: certs
        secret:
          secretName: mail-domain-tld

We created this secret above and in this case it contains our two TLS certificate files. Again we use the subPath key to specify which file in the secret we want to mount and where to mount it inside the container, so that Postfix is able to load the files specified in its config file:

        - name: certs
          subPath: fullchain.pem
          mountPath: /etc/postfix/certs/fullchain.pem
          readOnly: true
        - name: certs
          subPath: privkey.pem
          mountPath: /etc/postfix/certs/privkey.pem
          readOnly: true

Now you can finally create the K8s resources:

kubectl create -f role.yaml
kubectl create -f rolebinding.yaml
kubectl create -f serviceaccount.yaml
kubectl create -f daemonset.yaml

Next make sure that you opened port 25 on the firewall ;-) If you used my ansible-role-harden-linux you can simply adjust the harden_linux_ufw_rules variable by adding these lines

harden_linux_ufw_rules:
  - rule: "allow"
    to_port: "25"
    protocol: "tcp"

and roll out the changes via Ansible. Since we (hopefully) already created the DNS entry for mail.domain.tld and the MX record for domain.tld we should now be able to verify if TLS connectivity works as expected (of course again replace mail.domain.tld with your mail domain):

echo QUIT | openssl s_client -starttls smtp -crlf -connect mail.domain.tld:25 | openssl x509 -noout -dates

depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = mail.domain.tld
verify return:1
250 DSN
DONE
notBefore=May  4 21:10:28 2018 GMT
notAfter=Aug  2 21:10:28 2018 GMT

That’s looking good :-) You can also just run echo QUIT | openssl s_client -starttls smtp -crlf -connect mail.domain.tld:25 which gives you more information.

There is one more thing that you should do now: create a PTR record for your mailserver’s IP address. The PTR record enables a reverse lookup which maps the IP address back to the name. You probably already created an A record for your mail domain, e.g.:

dig mail.domain.tld

...
;; ANSWER SECTION:
mail.domain.tld.      60      IN      A       123.234.123.234
...

But the opposite should also work e.g.:

dig -x 123.234.123.234

...
;; ANSWER SECTION:
234.123.234.123.in-addr.arpa. 60   IN      PTR     mail.domain.tld.
...

For Scaleway it’s pretty easy: just log in to the Scaleway UI, select Network in the left menu, choose the IP of your mailserver, click on it and then you can edit the REVERSE value. If we take the example above the value would be mail.domain.tld. Strictly speaking a PTR record isn’t a requirement, but there are still lots of installations out there that do a reverse lookup of your mailserver’s IP and refuse your mail if no PTR record exists. Strict mailservers do a forward lookup on the name your mailserver introduces itself with (such as mail.domain.tld), verify that it matches the IP address seen on the connection, and do a PTR lookup on that IP address to check that it resolves to the same name.

Now we’re basically set up and you should be able to send and receive mails. If you have a Gmail account you can check whether the mail was encrypted while it traveled from your mailserver to the Gmail mailserver: open a mail in Gmail that you sent through your mailserver and click on See security details.
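
Since the image contains mailx you can also send a test mail straight from the Postfix pod, for example (the pod selection and the recipient address are just an illustration):

# Get the name of the Postfix pod and send a test mail from inside it
POD=$(kubectl get pod -l app=postfix -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD -- sh -c 'echo "Test from Kubernetes" | mailx -s "TLS test" you@gmail.com'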

What we can implement next is SPF (at least an SPF DNS record, without the SPF policy agent configured within Postfix yet). SPF (Sender Policy Framework) is a system that identifies to mail servers which hosts are allowed to send email for a given domain. Setting up SPF helps prevent your email from being classified as spam. To make it possible for other mailservers to check your SPF DNS record we need to generate a TXT record. The MxToolBox SPF Record Generator helps us create one. Just insert your domain name (that’s only your domain + TLD, e.g. domain.tld and not mail.domain.tld) and click on Check SPF Record. Now you see the SPF Wizard at the end of the page. Answer the questions and/or fill out the input fields and generate the record. Then add this TXT DNS record to your DNS configuration. I suggest using a small time to live (TTL), e.g. between 60 and 300 seconds, in the beginning so that you can change it quickly if needed. To verify that the record is set use dig, e.g.:

dig domain.tld TXT

domain.tld. 60 IN TXT "v=spf1 ip4:123.123.234.234 ip4:234.234.123.123 ~all"

In my setup I have added the static IPs of all worker nodes. The Postfix DaemonSet runs only on one worker node because we specified that the pod should only be scheduled on nodes with the label mailserver=true. As this is only the case for one worker node Postfix runs only on that node. But if I decide to move the DaemonSet to another node then I don’t need to change the SPF record as it already contains the IPs of all my worker nodes.

That’s it for now. There are a few other things I want to implement when I have time:

  • Add the SPF policy agent to Postfix
  • Implement DKIM (DomainKeys Identified Mail)
  • Implement DMARC (Domain Message Authentication, Reporting & Conformance)
  • Install rook.io + Ceph to have persistent storage for my K8s cluster and move the Postfix volumes there
  • Migrate my mail domain to a provider that supports DNS changes via API so that I can change the mail domain programmatically. This would make it possible to run Postfix on any worker node in case one node fails. A sidecar container could check if the pod moved to another node and adjust the DNS A record accordingly. The same would need to happen for the PTR DNS record (reverse DNS)

Lots to do ;-) I’ll update the blog post accordingly.