The final step in migrating ‘everything-email’ from my old server to a kubernetes setup is the migration of mailing lists based on mailman. In previous posts, I migrated the core mailserver and webmail to kubernetes. It is recommended to read the first post first, because that one explains the full mail system architecture and its dependencies. In this post, we will focus on the last part: migrating mailman.
Architecture
Before starting migration, it is important to get an overview of what components are involved and how they interact. This provides understanding that will make it easier to configure and troubleshoot.
The architecture of a mailman setup is as follows.
In the above picture, mailman-core and mailman-web are docker containers provided by the docker mailman project. Compared to mailman 2.1, which I am currently using, the mailman 3 project has split the architecture into a core part and a web part. The core part provides the mailing list functionality, and the web part provides management functionality using postorius and mail archiving functionality based on hyperkitty. Postorius and hyperkitty are both python applications on top of django and, for convenience, are hosted together in a single container. Postorius integrates hyperkitty into its user interface, so for the end user it appears as if postorius is providing the archiving functionality. The mailman docker project also provides a postorius-only container without archiving functionality. You can also read about the mailman 3 architecture here.
Mailman-core and the MTA (postfix) communicate with each other. When a mail intended for a mailing list arrives, postfix receives this mail using SMTP and based on configuration delivers it to mailman using LMTP. Mailman then processes the message and sends individual messages out through postfix using the SMTP protocol.
Mailman-core archives mails through hyperkitty, so it must use the API exposed by hyperkitty, which runs in the mailman-web container. Conversely, postorius, running in the mailman-web container, uses an API provided by mailman-core to manage the mailing lists running in mailman-core. Mailman-web also provides a web interface through either HTTP or WSGI; the latter is the python-specific way to expose web applications. Technically, the archiving interface used by mailman uses the same HTTP port, but conceptually these are different interfaces.
Additionally, mailman-core also generates configuration for the postfix mailserver to configure it to send mailing list traffic to mailman-core. It is clear from all this that the relations between these components are quite complex. There is a bi-directional relation between the mailserver and mailman-core and also a bidirectional relation between mailman-core and mailman-web. For each of these relations, configuration is required.
Persistence
To start off a migration to kubernetes, the docker compose files are always a good place to look. However, in the examples I found, mailman-core and mailman-web both used the same database instance. This setup works but is odd, since mailman-core and mailman-web are different components and so should be able to use different databases. Digging around in the documentation showed that this is indeed the case. The picture below shows the required persistence for the setup.
Both mailman-core and mailman-web require a persistent volume for storing files and a database instance. In addition, mailman-core writes out mailing list configuration that is read by postfix and allows postfix to determine what mail to deliver to mailman-core.
In what follows, it is assumed that the reader knows how to set up PersistentVolumes and how to create databases and users on a MySQL instance. The discussion focuses on how to configure mailman to interact with its surroundings.
Deployment
In principle, the three containers involved, docker-mailserver, mailman-core, and mailman-web, can all be accessed fully over the network, so colocation in a pod is not required. However, because of the close relation between mailman-core and mailman-web (the same container versions must be used), I have decided to deploy these two containers in one pod. The mailserver will still run in its own pod, which facilitates development and will also simplify later independent upgrades of mailman or the mailserver.
The only tricky part, which is not really as it should be in a kubernetes deployment, is that the mailserver and mailman-core share a persistent volume that stores configuration data. It would be better to have a more service-based solution, where mailman would notify the mail container of updated configuration, or where the mailserver could use a service on the mailman pod to retrieve configuration. With a shared volume, there is no guarantee that the mailserver will use the updated mailman configuration, or how long that will take. However, since updates to the mailing lists will be rare in my setup, there is no real case for automating this aspect. Automation could be done by writing a simple webservice in, for instance, python flask.
Technically, this approach with a shared volume can work on kubernetes in the following ways:
- use a ReadWriteOnce volume and deploy mailman and the mailserver on the same kubernetes node. Note that people often assume that ReadWriteOnce means that different pods cannot access the volume. However, ReadWriteOnce refers to the node and not the pod. For limiting concurrent access by different pods the newly added ReadWriteOncePod access mode can be used.
- configure different PersistentVolumes on kubernetes that happen to use the same storage underneath (e.g. hostpath or NFS). This works because kubernetes is not smart enough to see that the different PVs are actually the same.
- Use a ReadWriteMany NFS volume
In my setup, I am using option 1 because I am deploying most of what I have on the same node anyway. My setup is not about high availability; it is about modernizing the deployment, reducing resource usage, and learning in the process.
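Option 1 amounts to pinning both pods to the same node. A minimal sketch of how this could look, assuming a hypothetical node labeled `kubernetes.io/hostname: node1` (the label value must match an actual node in your cluster):

```yaml
# Added to the pod template spec of both the mailman StatefulSet and the
# mailserver deployment, so that both pods land on the same node and can
# share a ReadWriteOnce volume. "node1" is a placeholder node name.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1
```

A nodeAffinity rule would work just as well; nodeSelector is simply the shortest way to express it.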
Mailman configuration
The following secrets and configmaps are used:
- mailman-passwords: A secret that defines the passwords to interact between mailman-core and mailman-web
- mailman-extra: Additional configuration required for mailman
- mailmancore: database access configuration for mailman-core
- mailmanweb: database access configuration for mailman-web
The secret mailman-passwords contains the following password-like entries:
- apikey: Hyperkitty API key
- restpassword: REST password for mailman-core
- websecretkey: Key used by Django for signing cookies
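The actual values of these entries can be anything random. A quick sketch for generating them (the 32/64-character lengths are arbitrary choices of mine, not mailman requirements):

```shell
# Generate random values for the three password-like entries of the
# mailman-passwords secret from /dev/urandom.
gen() { tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c "$1"; }

apikey="$(gen 32)"
restpassword="$(gen 32)"
websecretkey="$(gen 64)"

# Show the generated lengths (the values themselves go into the secret).
echo "${#apikey} ${#restpassword} ${#websecretkey}"
```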
The secret mailman-extra defines additional configuration for mailman. It contains the following entries:
- mailman-extra.cfg: Extra mailman configuration to be mounted into the mailman-core container. Currently this sets only the site owner e-mail address
```
[mailman]
site_owner=listowner@example.com
```
- settings_local.py: Extra settings for mailman web to be mounted into the mailman-web container.
```python
DEBUG = False

# hosts under which the mailman web interface will be accessed
ALLOWED_HOSTS = ["localhost", "webmail.example.com"]

# recommended setting for the indexing
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'xapian_backend.XapianEngine',
        'PATH': "/opt/mailman-web-data/fulltext_index",
    }
}

# default time zone of users; does not appear to work, appears to be a known issue
TIME_ZONE = "Europe/Amsterdam"
USE_TZ = True
```
- chown: When mounting one of the above files into the core and web containers respectively, startup will fail, because the startup script tries to change the owner of the mounted files, which is not allowed. This chown entry is a script that overrides chown to always return exit status 0, which works around the issue.
```bash
#!/bin/bash
/bin/chown "$@"
exit 0
```
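To see why this wrapper helps, the sketch below (a local simulation, not run inside the container) shows that the wrapper exits 0 even when the underlying /bin/chown fails:

```shell
# Simulate the chown wrapper: the real /bin/chown fails here (the target
# file does not exist), but the wrapper swallows the failure and exits 0,
# so a container entrypoint running "chown ..." will not abort.
wrapper="$(mktemp)"
cat > "$wrapper" <<'EOF'
#!/bin/bash
/bin/chown "$@"
exit 0
EOF
chmod +x "$wrapper"

"$wrapper" root /no/such/file 2>/dev/null
echo "exit status: $?"
```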
Both mailmancore and mailmanweb define a database URL to connect to. However, mailman-core and mailman-web use different database drivers. The format of the url parameter in each secret is as follows:
```
# mailmancore
mysql+pymysql://USERNAME:PASSWORD@HOST/DB?charset=utf8mb4&use_unicode=1

# mailmanweb
mysql://USERNAME:PASSWORD@HOST/DB?charset=utf8mb4
```
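Since both containers talk to the same kind of database, only the scheme prefix differs. A sketch that derives both URLs from one set of placeholder credentials:

```shell
# Build both database URLs from the same (placeholder) credentials; only
# the scheme differs: the SQLAlchemy pymysql driver for mailman-core, and
# the plain mysql scheme for mailman-web (django). All values are dummies.
USERNAME="mailman" PASSWORD="secret" HOST="mysql.example" DB="mailman"

core_url="mysql+pymysql://${USERNAME}:${PASSWORD}@${HOST}/${DB}?charset=utf8mb4&use_unicode=1"
web_url="mysql://${USERNAME}:${PASSWORD}@${HOST}/${DB}?charset=utf8mb4"

echo "$core_url"
echo "$web_url"
```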
Postfix configuration
The postfix configuration must be adapted to add the local network to the trusted hosts for opendkim signing to work. This is done by adding the following to the user-patches.sh script:
```bash
echo "10.0.0.0/8 172.16.0.0/12 192.168.0.0/16" >> /etc/opendkim/TrustedHosts
```
With this setting, opendkim will sign any outgoing mail that has a From header for one of the domains hosted on the mailserver. This has to be used in combination with the DMARC mitigation on mailman to ‘replace From with list address’. Without this, the original From header will be used, but since mailman has modified the mail, DKIM signatures will fail. I am also using the ‘Alter messages’ setting ‘reply goes to list’. This setting puts the original From header in the CC and adds the list address to the Reply-To header.
Additionally, settings must be added to postfix-main.cf to forward mail intended for a mailing list to mailman.
```
owner_request_special = no
transport_maps = hash:/etc/postfix/transport_maps regexp:/etc/postfix/mailman/postfix_lmtp
local_recipient_maps = proxy:unix:passwd.byname $alias_maps regexp:/etc/postfix/mailman/postfix_lmtp
virtual_mailbox_maps = texthash:/etc/postfix/vmailbox regexp:/etc/postfix/mailman/postfix_lmtp

# postfix warning: 'do not list the domain example.com in BOTH virtual_mailbox_domains and relay_domains'
#relay_domains = ${{$compatibility_level} < {2} ? {$mydestination} : {}} regexp:/etc/postfix/mailman/postfix_domains
```
Above, the local_recipient_maps setting appears not to be used. I had to use virtual_mailbox_maps, because otherwise mail sent from external mail servers to the mailing list would not be delivered. Also, I am not setting relay_domains as recommended, since this leads to postfix warnings. Instead, for every domain used by mailman, there must be at least one account on dovecot to make sure that all domains are recognized.
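To make the transport_maps entry concrete: the postfix_lmtp file generated by mailman-core is a regexp map from list addresses to the LMTP transport. The sketch below uses a hypothetical two-line map (the exact contents depend on your lists, and the host/port are assumptions based on the MM_HOSTNAME setting and mailman's LMTP port 8024) and shows how a lookup resolves:

```shell
# Hypothetical example of the regexp map that mailman-core writes to
# /etc/postfix/mailman/postfix_lmtp. Both the list address and the
# bounce address are routed to mailman's LMTP listener.
map="$(mktemp)"
cat > "$map" <<'EOF'
/^mylist@example\.com$/ lmtp:[mailman.exposure.svc.cluster.local]:8024
/^mylist-bounces(\+.*)?@example\.com$/ lmtp:[mailman.exposure.svc.cluster.local]:8024
EOF

# Which transport does the first pattern map mylist@example.com to?
transport="$(grep -F 'mylist@example' "$map" | head -1 | awk '{print $2}')"
echo "$transport"
```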
The original settings of the mailserver are preserved. Before adding the settings for the mailman/postfix_lmtp config, use postconf inside the mail container to get the current values, and then add the postfix_lmtp entry to them. The postfix_lmtp file is mounted into the mail container as follows:
```yaml
spec:
  template:
    spec:
      containers:
        - name: mailserver
          volumeMounts:
            - name: mailman-opt
              mountPath: /etc/postfix/mailman
              subPath: var/data
              readOnly: true
      volumes:
        - name: mailman-opt
          persistentVolumeClaim:
            claimName: mailman-opt-mailman-0
```
Using a subpath and a read-only mount, the postfix configuration generated by mailman-core can be used, exposing only the required files on the volume.
Note that when a new mailing list is added, it is a good idea to do a postfix reload in the mail container so that the changes are used immediately.
Detailed setup
After the basic architecture explanation, we can dive into the nitty-gritty details. This means it is yaml time! The deployment consists of a StatefulSet together with a ClusterIP service. The mailman web interface will not be accessed directly from the internet, so a ClusterIP service is sufficient.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mailman
  namespace: exposure
spec:
  serviceName: mailman
  replicas: 1
  selector:
    matchLabels:
      app: mailman
  template:
    metadata:
      labels:
        app: mailman
    spec:
      hostAliases:
        - ip: 0.0.0.0   # A
          hostnames:
            - mailman.exposure.svc.cluster.local
        - ip: 127.0.0.1 # B
          hostnames:
            - mailman-web
      containers:
        - name: core
          image: maxking/mailman-core:0.4
          ports:
            - name: api  # C
              containerPort: 8001
            - name: lmtp # C
              containerPort: 8024
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: mailmancore
                  key: url
            - name: DATABASE_TYPE
              value: mysql
            - name: DATABASE_CLASS
              value: mailman.database.mysql.MySQLDatabase
            - name: HYPERKITTY_API_KEY
              valueFrom:
                secretKeyRef:
                  name: mailman-passwords
                  key: apikey
            - name: MAILMAN_REST_USER
              value: restadm
            - name: MAILMAN_REST_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mailman-passwords
                  key: restpassword
            - name: SMTP_HOST
              value: mail.exposure
            - name: MTA
              value: postfix
            - name: MM_HOSTNAME # A
              value: mailman.exposure.svc.cluster.local
            - name: HYPERKITTY_URL
              value: http://localhost:8000/hyperkitty
          volumeMounts:
            - name: mailman-opt
              mountPath: /opt/mailman
            - name: mailman-extra
              mountPath: /opt/mailman/mailman-extra.cfg
              subPath: mailman-extra.cfg
            - name: mailman-extra
              mountPath: /usr/bin/chown
              subPath: chown
        - name: web
          image: maxking/mailman-web:0.4
          ports:
            - name: http  # C
              containerPort: 8000
            - name: uwsgi # C
              containerPort: 8080
          #command:
          #  - tail
          #args:
          #  - -f
          #  - /dev/null
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: mailmanweb
                  key: url
            - name: DATABASE_TYPE
              value: mysql
            - name: DATABASE_CLASS
              value: mailman.database.mysql.MySQLDatabase
            - name: HYPERKITTY_API_KEY
              valueFrom:
                secretKeyRef:
                  name: mailman-passwords
                  key: apikey
            - name: MAILMAN_REST_USER
              value: restadm
            - name: MAILMAN_REST_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mailman-passwords
                  key: restpassword
            - name: SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: mailman-passwords
                  key: websecretkey
            - name: POSTORIUS_TEMPLATE_BASE_URL
              value: http://localhost:8000/
            # serving static files by uwsgi
            - name: UWSGI_STATIC_MAP # D
              value: /static=/opt/mailman-web-data/static
            - name: MAILMAN_ADMIN_USER
              value: admin
            - name: MAILMAN_ADMIN_EMAIL
              value: erik@example.com
            - name: MAILMAN_HOST_IP
              value: 127.0.0.1
            - name: MAILMAN_HOSTNAME
              value: localhost
            - name: SERVE_FROM_DOMAIN
              value: example.com
            # MAILMAN_REST_API_URL is set from this variable in settings.
            - name: MAILMAN_REST_URL
              value: http://127.0.0.1:8001
            - name: MAILMAN_REST_API_USER
              value: restadm
            - name: MAILMAN_REST_API_PASS
              valueFrom:
                secretKeyRef:
                  name: mailman-passwords
                  key: restpassword
            - name: SMTP_HOST
              value: mail.exposure
            # otherwise django cannot find the mysql driver.
            #- name: DYLD_LIBRARY_PATH
            #  value: /usr/local/mysql/lib/
          volumeMounts:
            - name: mailman-web
              mountPath: /opt/mailman-web-data
            - name: mailman-extra
              mountPath: /opt/mailman-web-data/settings_local.py
              subPath: settings_local.py
            - name: mailman-extra
              mountPath: /usr/bin/chown
              subPath: chown
      volumes:
        - name: mailman-extra
          configMap:
            name: mailman-extra
            defaultMode: 0555 # E
  volumeClaimTemplates:
    - metadata:
        name: mailman-opt
      spec:
        volumeName: mailman-opt # F
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
    - metadata:
        name: mailman-web
      spec:
        volumeName: mailman-web # F
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```
- # A: In a typical mailman setup where mailman is hosted on a VM, the IP address of the VM is the address that postfix must connect to, and this is also the address that mailman listens on. The environment variable MM_HOSTNAME configures all of these to be identical. However, in the kubernetes deployment, things are different. Here, mailman should listen on the cluster IP of its pod, while postfix should connect to the mailman service (mailman.exposure.svc.cluster.local). To work around this, there is a custom hostAlias that maps the service name mailman.exposure.svc.cluster.local to 0.0.0.0, causing mailman to listen on all available interfaces in the pod. In the configuration file that mailman writes for postfix, the service hostname will be used, and postfix uses that to connect to the mailman service. Note that the full service DNS name is used here instead of just mailman.exposure. The shorter name will not work, because postfix runs in a chroot jail and therefore the standard /etc/resolv.conf of the mail pod will not be used. When configuring this in another kubernetes environment (especially a cloud environment), make sure you find out what the correct suffix is. A simple way to find out is to look at the search entry inside /etc/resolv.conf.
- # B: There is an issue in the container causing a name lookup of mailman-web even if ALLOWED_HOSTS is overridden in settings_local.py. This simply adds an entry for mailman-web in /etc/hosts so that this lookup does not fail immediately.
- # C: Ports for the mailman containers
- # D: Serve static files through mailman-web as well. The default is a 20th-century approach that complicates the setup by serving static files through a separate regular web server.
- # E: Mode 0555 to make chown executable. This also makes the other two config files executable. If this is a problem, the chown wrapper command can be moved to its own ConfigMap but I don’t think it would add much.
- # F: The usual tying of the persistent volume claims to a persistent volume.
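Note A mentions looking at the search entry in /etc/resolv.conf to find the correct DNS suffix. A small sketch of that lookup, run here against a sample file (inside a pod you would read the pod's real /etc/resolv.conf):

```shell
# Extract the first DNS search domain, which is the suffix that short
# service names such as "mailman.exposure" rely on. The sample content
# mimics a typical kubernetes pod resolv.conf.
resolv="$(mktemp)"
cat > "$resolv" <<'EOF'
search exposure.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
EOF

suffix="$(awk '/^search/ { print $2 }' "$resolv")"
echo "$suffix"
```

Remember that postfix in its chroot jail does not get this search list applied, which is exactly why the full name is needed in the generated configuration.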
The service for mailman finally is as simple as can be:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mailman
  namespace: exposure
spec:
  type: ClusterIP
  selector:
    app: mailman
  ports:
    - name: api
      port: 8001
    - name: lmtp
      port: 8024
    - name: http
      port: 8000
    - name: uwsgi
      port: 8080
```
Exposure of the mailman web interface is finally done using the WSGI protocol in apache. The virtual host looks like this:
```apache
<VirtualHost *:80>
  ServerName webmail.example.com
  ProxyPreserveHost on
  RequestHeader set "X-Forwarded-Proto" https
  RequestHeader set "X-Forwarded-Port" 443

  # With mod_proxy_http, Apache somehow sends the host header twice, which django refuses:
  # ERROR 2022-09-10 18:29:51,795 417 django.security.DisallowedHost Invalid HTTP_HOST header: 'webmail.example.com, webmail.example.com'. The domain name provided is not valid according to RFC 1034/1035.
  # uwsgi access solves the issue with the duplicate headers.
  ProxyPass / uwsgi://mailman.exposure:8080/ disablereuse=On
  ProxyPassReverse / http://mailman.exposure:8080/
</VirtualHost>
```
Network policies
Network policies are defined in a similar way as was done before for webmail and other components. Since this is quite tedious and basically the same as before, it is not further explained.
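For completeness, here is a minimal sketch of what such a policy could look like for the mailman pod. The namespace, labels, and ports are taken from the deployment above, but the exact policy depends on your CNI and on which other components need access:

```yaml
# Illustrative only: allow other pods in the exposure namespace
# (mailserver, apache) to reach the four mailman ports.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mailman
  namespace: exposure
spec:
  podSelector:
    matchLabels:
      app: mailman
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # any pod in the same namespace
      ports:
        - port: 8001  # mailman-core REST API
        - port: 8024  # LMTP
        - port: 8000  # mailman-web HTTP
        - port: 8080  # uwsgi
```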
Migration from mailman 2.1 to 3
Migration from mailman 2.1 to 3 works exactly as described in the documentation and, in my case, worked the first time without any problems. To migrate, I collected the config.pck files describing the mailing list configuration (including all members) and the mbox files containing all collected mails. I copied the pck files to the mailman-core persistent volume and the mbox files to the mailman-web persistent volume. The first step is to import the mailing list definitions. For this, perform the following steps:
```bash
kubectl exec -it mailman-0 -- bash
su -s /bin/bash mailman
mailman import21 mylist@example.com /path/to/config.pck
```
Next, import the mailing list archive into mailman and update the search index:
```bash
kubectl exec -it mailman-0 -c web -- bash
su -s /bin/bash mailman
python3 manage.py hyperkitty_import -l mylist@example.com /path/to/mylist.mbox
python3 manage.py update_index_one_list mylist@example.com
```
Troubleshooting
To troubleshoot, I used several techniques:
- logging into the containers for mail, mailman-0:core, and mailman-0:web and looking directly inside the configuration files. In particular, the settings.py file provided a lot of detailed information.
- replacing the startup command by a simple tail -f /dev/null. This gives you the chance to try out stuff inside the containers if the startup script is failing.
- installing additional packages. You can use ‘apt-get update’ followed by ‘apt-get install PACKAGE’ to install packages required for troubleshooting. I used this frequently to install vim for editing files. Editing files on a running container is the quickest way to experiment with new configuration settings.
- network troubleshooting: the tools nc and curl appear to be installed on the mail container and mailman containers.
- use postconf to examine existing postfix configuration
- use postfix reload to reload postfix
- find opendkim processes and kill them with -9 after modifying the opendkim configuration. This restarts opendkim without having to restart the pod.
- a lot of testing with sending of e-mail and checking mail headers for DKIM
- being very patient, evening after evening after ….
Final thoughts
This was the hardest part of the mail migration. It started with figuring out the architecture of mailman, which is explained in not quite enough detail in the available documentation. This is an essential step in understanding what you are configuring and why. The docker compose setup was a bit hacky, with mailman-web and mailman-core using the same database instance; a much cleaner way is to use separate database instances. In my case, I am using a custom resource that I developed myself to manage database instances and users, so the effort to go for the cleaner setup was really low. It also works, though, if you use a single database for mailman-core and mailman-web.
In addition, there were loads of issues at the detail level. It appears that the containers were developed specifically for docker compose, and some configuration flexibility is missing that needed to be worked around. Also, the mailman documentation did not always appear to be accurate. Finally, I spent several evenings trying to get DKIM signing to work for mails sent by mailman. It would certainly have helped if someone had written instructions specifically on this; instead, there are loads of contributions on the internet from people describing the problem, but not many describing the solution. Also, some obscure issues, such as duplicate headers when using mod_proxy in apache, were solved by using uwsgi instead of http proxying.
Mailing list migration worked the first time without any issues.
This completes the full migration of my whole mail setup from my old 2010 opensuse 11 virtual machine to kubernetes.
Thanks for sharing, it has great details.
Hello Erik,
thanks again for the details. I ran into the following problem when I tried to deploy mailman into Openshift. I know your example is Kubernetes; however, Openshift is built on the base of Kubernetes.
The error message is:
```
Defaulted container "core" out of: core, web
/usr/local/bin/docker-entrypoint.sh: line 27: /etc/mailman.cfg: Permission denied
```
I use the same images from docker hub as you, however I got the above error.
As I understand it, openshift has more security by default, so I guess it is at least running as non-root, and the user it is running as cannot create files in /etc. You can add a custom startup command such as
```yaml
command:
  - sh
  - -c
  - |
    whoami
    id
    ls -ld /etc
    touch /etc/myfile
    sleep 1000000
```
then you can see what user it is running as and what the permissions of /etc are.
When you get the pod yaml using kubectl get pod -o yaml, what do you see? Specifically, the securityContext is interesting.
This problem could be caused by the Openshift security policy.
Thanks for taking time to reply.