This is a big release for us, as it launches on apollo’s first birthday and contains many features that improve the stability of operations and the DevOps-aligned deployment process.
You can start with apollo v2.0.0 right here.
Notable changes from 1.x.x
The apollo CLI has been re-implemented in Python (with the help of Typer) and entirely replaces the previous built-in Bash toolchain.
```
$ apollo version
2.0.0
```
The CLI will be developed alongside apollo’s core and is part of the Docker image. That also means that core and CLI follow the same release cycle.
```
Usage: apollo [OPTIONS] COMMAND [ARGS]...

  apollo CLI

Options:
  --verbosity INTEGER   [default: 0]
  --space-dir TEXT      [default: /cargo]
  --debug INTEGER       [default: 0]
  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or
                        customize the installation.
  --help                Show this message and exit.

Commands:
  build     Build apollo infrastructure
  commit    Commit configuration changes to the space (requires git)
  create    Create a space from command line
  deploy    Deploy apollo
  destroy   Destroy apollo
  enter     Enter cluster node
  exec      Exec command on cluster
  init      Initialize configuration
  push      Push configuration changes to the space repository (requires...
  show      Show apollo config
  validate  Validate apollo config
  version   Show apollo's version
```
To be fair, not all commands are currently fully implemented (commit, push and validate still need a little love).
Starting with 2.0.0, we will publish new releases of apollo on the last Tuesday of every month.
New configuration format
We switched from environment variables to YAML configuration with this release. YAML gives us far more flexibility to handle the complex setups apollo is meant to be part of, and it also maps better to the configuration of apollo's providers, which is YAML most of the time.
For this purpose, we introduced a Spacefile and a Nodesfile living inside an apollo space repository. The Spacefile configures the platform part of apollo, while the Nodesfile configures the infrastructure part. The Nodesfile can be auto-generated by Terraform (or any other IaC toolchain) or crafted manually.
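To make the split concrete, a space repository could contain files along these lines. This is only a sketch: apart from the file names, every key shown here is an assumption, not apollo's actual schema.

```yaml
# Spacefile -- platform configuration (illustrative keys only)
space:
  name: example-space
metrics:
  enabled: true
backups:
  enabled: false
```

```yaml
# Nodesfile -- infrastructure configuration (illustrative keys only)
nodes:
  manager:
    - name: manager-0
      ipv4: 203.0.113.10
  worker:
    - name: worker-0
      ipv4: 203.0.113.11
```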
The Spacefile can be generated by invoking `apollo init`, or derived manually from its defaults file.
The new configuration system is possible thanks to a Python module called anyconfig, which also allows easy validation of configuration with JSON Schema. We plan to support Jinja2-templated config in a future release.
apollo running config
Ansible no longer controls the running configuration. Instead, the apollo running config (arc) is compiled by the CLI as a combination of Spacefile and Nodesfile and their respective default files, and is then provided as extra_vars to ansible-playbook. Ansible group_vars now only holds a small amount of configuration; the relevant parts are set by the CLI.
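The compilation step boils down to a layered merge where user-supplied values win over defaults. As a minimal sketch (the function name and example keys are assumptions, not apollo's actual code):

```python
# Sketch: compile an "apollo running config" (arc) by deep-merging
# defaults with user-supplied Spacefile values. User values win.
# All names and keys here are illustrative, not apollo's real schema.

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical defaults file and user Spacefile, already parsed from YAML
spacefile_defaults = {"metrics": {"enabled": True, "retention": "14d"}}
spacefile = {"metrics": {"retention": "30d"}}

arc = deep_merge(spacefile_defaults, spacefile)
print(arc)  # {'metrics': {'enabled': True, 'retention': '30d'}}
```

The merged result is what would then be handed to ansible-playbook as extra_vars, which take precedence over group_vars in Ansible's variable hierarchy.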
apollo init creates a new space configuration from the defaults, asking you to fill in a few blanks. For the available default configuration, see the Docs.
Separation of concerns
apollo's core functionality now includes management, metrics, logs, analytics, data, alerts and backups. We believe these services supply the MED (minimum effective dose) of platform that you need to run your cloud-native services with confidence.
What apollo offers is best-practice access to container platforms like Docker Swarm or Kubernetes. That's why we decoupled the services we need to run and maintain this container platform from the platform itself (i.e. "don't monitor your cluster from inside your cluster").
Then there are addons to apollo, like Portainer, an ingress proxy, or GitLab runners, that improve its core functionality or provide additional services or features to the container platform.
Pretty much every part of apollo is now implemented as a provider that can functionally be replaced or overridden by user config in the future.
The new configuration format enables easier injection of custom configuration and the addon system brings custom addons to the platform through space repositories.
Before 2.0.0, apollo had a very tight integration with Terraform, as there was no flexibility regarding the actual infrastructure resources and setup that would be created.
With 2.0.0, that changed. With a special output file, your custom Terraform code can generate a Nodesfile.yml that apollo consumes. This decoupling enables you to build infrastructure of any complexity at any provider, as long as you respect apollo's minimum requirements (two groups, manager and worker, plus a defined ingress_ip and management_ip).
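A generated Nodesfile.yml could look roughly like this. Only the manager/worker groups and the two IP keys are documirmed requirements from above; the surrounding layout and values are assumptions:

```yaml
# Illustrative Nodesfile.yml -- only the manager/worker groups and the
# ingress_ip/management_ip keys are stated requirements; the rest is a sketch
ingress_ip: 203.0.113.10
management_ip: 203.0.113.10
manager:
  - name: manager-0
    ipv4: 203.0.113.10
worker:
  - name: worker-0
    ipv4: 203.0.113.11
  - name: worker-1
    ipv4: 203.0.113.12
```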
Traditional apps are still possible the same way as before. We want apps to be something the user runs with the help of the platform while addons come from the core development team and enrich the platform itself.
The best-practice way to bring apps to apollo is to use shipmate. Unfortunately, we're pretty behind on the documentation of that part of apollo, but we're eager to push this for the next release. If you want to learn more, it's best to visit us in Slack and just ask.
We also introduced the apollo system user with the goal of making apollo a first-class remote development environment. For this, we gradually need to move away from using root, as developers can integrate apollo with VSCode and get full access to the console.
The new container dashboard shows the running stacks, services and containers and their resource consumption.
The new nodes dashboard offers a global overview of cluster resources.
We added additional utility- and operations-related dashboards for better insights into the cluster:
- gitlab-runner dashboard
- minio dashboard (if enabled)
- victoria metrics dashboard
- traefik dashboard
- proxy dashboard (if enabled)
- storidge dashboard (if enabled)
- and more …
CI workflow reaches beta status
apollo was initially designed to be deployed headless and with confidence. We came one step closer to that goal. With the help of our new CLI it’s now possible to treat apollo spaces like any other of your software components and have them versioned, built, tested and deployed automatically.
We currently only support GitLab CI.
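A pipeline for a space repository could look roughly like this. The image reference and stage layout are assumptions; only the CLI commands come from the help output above:

```yaml
# Illustrative .gitlab-ci.yml for a space repository -- image name and
# stages are hypothetical; apollo commands are from the CLI help above
stages:
  - validate
  - deploy

validate:
  stage: validate
  image: example/apollo:2.0.0   # hypothetical image reference
  script:
    - apollo show

deploy:
  stage: deploy
  image: example/apollo:2.0.0
  script:
    - apollo deploy
  only:
    - master
```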
On the security side, this release adds:

- support for dev-sec.io security hardening
- increased SSL ingress security
- a WireGuard-based inter-node cluster network
One more thing
We also implemented NFS and experimented with Storidge on Hetzner to optimize the cost efficiency of a cluster for certain workloads. We can't promise production readiness for these as of now, but we see this coming in 2.1.0.
NFS can be enabled by setting `nfs`; apollo then exports `/srv/.apollo/volumes` from manager-0 to the other cluster nodes. Please note that this is a clear SPOF (single point of failure), and we strongly advise against using it in production.
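On manager-0, such an export would correspond to an /etc/exports entry along these lines; the client network range and mount options shown are assumptions:

```
# Illustrative /etc/exports entry on manager-0 -- client range and
# options are assumptions, not apollo's actual configuration
/srv/.apollo/volumes  10.0.0.0/24(rw,sync,no_subtree_check)
```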
Generally, the changes up to this point were exhausting but necessary for the bigger picture. Switching to a structured config format, and YAML especially, makes integrations and templating a breeze and opened the door for the way configuration now moves from the user to the core. Separating apollo's core services from the engine/orchestrator puts us in a good position to monitor and manage operations and stability. Before, much of apollo's control plane depended on the engine and orchestrator, which in today's architecture are completely reserved for the user.
Other notable changes
- bumped gitlab-runner to 13.4.0
- bumped Docker to 19.03.12
- bumped Storidge to 3336
- bumped grafana to 7.2.0
- bumped victoria-metrics to 1.40.1
- bumped vmagent to 1.40.1
- bumped alertmanager to 0.21.0
- bumped karma to 0.70
- bumped vmalert to 1.40.1
- downgraded cadvisor to 0.32.0 (swarm)
- bumped k3s to 1.19.2+k3s1
- bumped loki to 1.6.1
- bumped node-exporter to 1.0.1
- bumped process-exporter to 0.7.2
- bumped promtail to 1.6.1
- bumped traefik to 2.2.11
- bumped proxy to 1.7.26
- bumped portainer to 1.24.1
- bumped portainer-agent to 1.6.0
See the full CHANGELOG.
Starting with v2, we're releasing our Managed apollo offering into beta and accepting a few additional participants into the beta program. If you're interested, reach out on Slack or book a meeting with me.
Learn more about Managed apollo.