Maintaining a staging environment with nightly Btrfs snapshots
A staging environment is a near-production environment where updates and changes are tested before roll-out in production. All my servers run on Btrfs, and Btrfs makes managing snapshots easy. Together with systemd services and timers, I run staging instances of my docker-based services.
The base is a systemd service and timer that create a snapshot of my Btrfs storage volume in /<path>/<to>/<volume>/.nightly every night. Each docker service has a staging configuration that points to this snapshot instead of the production volume, together with systemd units that are triggered by the base unit to restart the staging instance on the new snapshot.
Although I’m doing this with container-based data, this will also work with non-containerized setups. The neat separation of code and data, however, might not always be a given in non-containerized setups.
Nightly snapshots via systemd service and timer
On Fedora Linux container data is usually stored in /var/srv/containers. This path has a SELinux type of container_file_t. With the following command I create a nightly snapshot in /var/srv/containers/.nightly:

```
$ btrfs subvolume snapshot /var/srv/containers /var/srv/containers/.nightly
```
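To double-check the result, the new subvolume and its SELinux label can be inspected; these are plain btrfs-progs and coreutils commands, nothing specific to this setup:

```
# list the subvolumes below the storage volume (the snapshot shows up here)
$ btrfs subvolume list /var/srv/containers
# show the SELinux context of the snapshot (it inherits container_file_t)
$ ls -dZ /var/srv/containers/.nightly
```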
I use this command in a systemd service (/etc/systemd/system/containers-snapshot.service):
```
[Unit]
Description=Containers nightly snapshot
StartLimitIntervalSec=0

[Service]
Type=simple
TimeoutStopSec=100
TimeoutStartSec=5m
RestartSec=1
ExecStartPre=-/usr/sbin/btrfs subvolume delete /var/srv/containers/.nightly
ExecStartPre=-/usr/bin/rm -r /var/srv/containers/.nightly
ExecStart=/usr/sbin/btrfs subvolume snapshot /var/srv/containers /var/srv/containers/.nightly
ExecStop=/usr/sbin/btrfs subvolume delete /var/srv/containers/.nightly
Restart=always
WorkingDirectory=/var/srv/containers
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```
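Once the unit file is in place, it can be activated with the usual systemctl steps (standard commands, not specific to this setup):

```
# pick up the new unit file
$ systemctl daemon-reload
# enable at boot and start immediately
$ systemctl enable --now containers-snapshot.service
```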
Another service unit (/etc/systemd/system/containers-snapshot-nightly.service) is used to restart the snapshot service to rebuild the nightly snapshot.
```
[Unit]
Description=Containers nightly rebuild

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl restart containers-snapshot.service

[Install]
WantedBy=multi-user.target
```
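Before wiring up the timer, the rebuild can be triggered by hand; these are ordinary systemctl calls, shown here only for illustration:

```
# trigger a rebuild manually...
$ systemctl start containers-snapshot-nightly.service
# ...and check that the snapshot service was restarted
$ systemctl status containers-snapshot.service
```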
Finally, a systemd timer (/etc/systemd/system/containers-snapshot-nightly.timer) triggers the rebuild service, and thus the restart of the nightly snapshot service, every night.
```
[Unit]
Description=Containers nightly rebuild timer

[Timer]
Unit=containers-snapshot-nightly.service
OnCalendar=Mon..Sun 00:30

[Install]
WantedBy=timers.target
```
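The timer is enabled the same way; listing it shows the next scheduled run (again just standard systemctl usage):

```
$ systemctl enable --now containers-snapshot-nightly.timer
$ systemctl list-timers containers-snapshot-nightly.timer
```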
Attach services to the snapshot rebuild
To avoid inconsistencies, the services should restart after the snapshot is rebuilt. The staging instances can be attached to the snapshot unit through systemd dependencies. Just create a systemd service file for your staging instance with the appropriate dependency settings.
```
[Unit]
Description=Keycloak staging container
After=network.target docker.service containers-snapshot.service
Wants=network.target
Requires=docker.service
BindsTo=containers-snapshot.service
StartLimitIntervalSec=0

[Service]
Type=simple
TimeoutStopSec=100
TimeoutStartSec=5m
RestartSec=1
WatchdogSec=20
ExecStartPre=-/usr/bin/docker-compose stop staging
ExecStart=/usr/bin/docker-compose up -d staging
ExecStop=/usr/bin/docker-compose stop staging
Restart=always
WorkingDirectory=/var/srv/keycloak
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```
This service file uses Requires and BindsTo: Requires=docker.service ensures that docker.service is started if this unit is started, and BindsTo=containers-snapshot.service ties the unit to the snapshot service, so it is restarted whenever containers-snapshot.service is restarted. The WorkingDirectory points to the docker-compose project directory where the docker-compose.yml file is located. In this compose file I defined a staging service which is restarted.
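For illustration, here is a minimal sketch of what such a compose file might look like. Only the staging service name and the .nightly snapshot path come from this setup; the image tag, the keycloak data subdirectory, the mount point, and the ports are assumptions:

```
# /var/srv/keycloak/docker-compose.yml - sketch, not the original file
services:
  prod:
    image: quay.io/keycloak/keycloak:latest   # hypothetical image tag
    volumes:
      # production data lives on the real volume (assumed subdirectory)
      - /var/srv/containers/keycloak:/opt/keycloak/data:Z
    ports:
      - "8080:8080"
  staging:
    image: quay.io/keycloak/keycloak:latest   # hypothetical image tag
    volumes:
      # staging points at the nightly snapshot instead of the production volume
      - /var/srv/containers/.nightly/keycloak:/opt/keycloak/data:Z
    ports:
      - "8081:8080"                           # assumed alternate port
```

Since the Btrfs snapshot is created writable, the staging container can modify its copy of the data without affecting production.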
This is a fairly simple example with a single staging container in a compose file. More complex setups can use differing env_files and compose file variables to define the differences in config and paths between prod and staging instances. In that case the staging deployment would be its own compose project.
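As a sketch of such a standalone staging project: compose interpolates variables like ${DATA_ROOT} from the project's .env file (or the shell environment), while env_file passes settings into the container itself. All names and paths here are hypothetical:

```
# staging/docker-compose.yml - sketch of a standalone staging project
# .env next to this file could contain: DATA_ROOT=/var/srv/containers/.nightly
# (the prod project would set DATA_ROOT=/var/srv/containers instead)
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest   # hypothetical image tag
    env_file: staging.env                     # container config that differs from prod
    volumes:
      - ${DATA_ROOT}/keycloak:/opt/keycloak/data:Z
```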