Doing GitOps without Kubernetes

GitOps is a hot topic with the “cloud native” crowd, but is it still relevant if you aren’t using k8s? In a word: yes.

While most examples involve Kubernetes YAML (or Helm charts), there’s nothing stopping you from applying the same principles to a homegrown CD pipeline, and reaping the same benefits.

We are in the process of moving from systemd services to Docker, running on our own hosts (EC2 instances). The easy thing to do would be to just pull latest every time, but then you have no control over which version of the software is installed where.

Instead, we have a git repo containing a YAML fragment for each service:

foo_service:
  image: >-
    1234.ecr.amazonaws.com/foo-service@sha256:somesha
  gitCommit: somesha
  buildNumber: '123'
  env_vars:
    BAR: baz

The staging branch is updated by each service’s CI build, triggered by a push to main, once the new image has been pushed to ECR.
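A minimal sketch of what that CI step might look like. The original doesn’t show the tooling, so the use of GitHub Actions syntax, yq, and the file layout are all assumptions:

```yaml
# Hypothetical CI step that updates the service's fragment on the staging
# branch (repo layout, file name, and yq usage are assumptions).
- name: Update staging deployment manifest
  run: |
    git clone --branch staging git@github.com:someorg/docker-services.git
    cd docker-services
    yq -i '
      .foo_service.image = strenv(IMAGE) |
      .foo_service.gitCommit = strenv(GITHUB_SHA) |
      .foo_service.buildNumber = strenv(BUILD_NUMBER)
    ' foo_service.yml
    git commit -am "Deploy foo_service build ${BUILD_NUMBER}"
    git push origin staging
  env:
    IMAGE: 1234.ecr.amazonaws.com/foo-service@sha256:somesha
    BUILD_NUMBER: ${{ github.run_number }}
```

The important property is that the only way an image reference changes is via a commit to this repo, which is what gives you the audit trail later.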

To deploy, we run an ansible playbook on each docker host. First, we load up all the yaml:

- name: Get service list
  git:
    repo: git@github.com:someorg/docker-services.git
    dest: "{{ docker_services_folder }}"
    version: "{{ docker_services_env | default('staging') }}"
  delegate_to: localhost
  run_once: yes
  register: docker_services_repo
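The step that turns those per-service fragments into the docker_services variable used below isn’t shown; one way to do it (the glob pattern and folder variable are assumptions) would be:

```yaml
# Hypothetical task: merge every per-service YAML fragment into a single
# docker_services dict (file layout is an assumption).
- name: Combine service fragments
  set_fact:
    docker_services: "{{ docker_services | default({})
                         | combine(lookup('file', item) | from_yaml) }}"
  loop: "{{ lookup('fileglob', docker_services_folder ~ '/*.yml', wantlist=True) }}"
  delegate_to: localhost
  run_once: yes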

We then generate an env file for each service:

- name: "Create env files"
  template:
    src: env_file
    dest: "/etc/someorg/{{ docker_services[item].name }}"
  become: yes
  vars:
    app_name: "{{ docker_services[item].name }}"
    ...
    env_vars: "{{ docker_services[item].env_vars | default({}) }}"
  loop: "{{ docker_services.keys() | list }}"
  register: env_files
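The env_file template itself isn’t shown in the original; assuming env_vars is a mapping like the one in the YAML fragment above, a minimal version might be:

```jinja
{# Hypothetical templates/env_file: one KEY=value line per entry. #}
{% for key, value in env_vars.items() %}
{{ key }}={{ value }}
{% endfor %}
```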

And finally run each container:

- name: "Pull & Start Services"
  docker_container:
    name: "{{ docker_services[item].name }}"
    image: "{{ docker_services[item].image }}"
    state: "started"
    restart_policy: "always"
    recreate: "{{ env_files.results | selectattr('item', 'equalto', item) | map(attribute='changed') | first }}"
    pull: true
    init: true
    output_logs: true
    log_driver: "syslog"
    log_options:
      tag: someorg
      syslog-facility: local0
    env_file: "/etc/someorg/{{ docker_services[item].name }}"
    network_mode: "host"
  loop: "{{ docker_services.keys() | list }}"
  become: yes

If the env vars have changed, the container is recreated. Otherwise, only containers whose image has changed are restarted (we still remove the node from the LB first).

This gives us an audit trail of which image has been deployed, and makes rollbacks easy (revert the commit).

If the staging deploy is successful (after some smoke tests have run), another job creates a PR to merge the changes onto the production branch. When that PR is merged (after any necessary inspection), the same process repeats on the prod hosts.
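That promotion job could be quite small. As a sketch only (the original doesn’t specify the tooling; the GitHub CLI and branch name “production” are assumptions, though staging matches the article):

```yaml
# Hypothetical promotion step: open a PR from staging to production
# once the smoke tests pass (gh CLI usage is an assumption).
- name: Open production promotion PR
  run: |
    gh pr create \
      --repo someorg/docker-services \
      --base production \
      --head staging \
      --title "Promote staging to production" \
      --body "Automated promotion after successful staging smoke tests."
```

Because production deploys are driven by the merge commit, the PR itself becomes the review gate mentioned above.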
