Using SSM with Ansible

In the before times, to access EC2 instances on a private subnet you needed SSH running on a bastion/jump host in a public subnet to tunnel through.

A more modern alternative is to install the SSM Agent (or use an OS image that ships with it already set up). You can then use the AWS console, or CLI, to create a session – tunnelling through AWS infrastructure, and using AWS's own authentication. This lets you use short-lived credentials, and provides an audit trail – without needing to ship SSH logs somewhere secure.

$ AWS_PROFILE=... aws ssm start-session --target 'i-123...'
$

For Ansible to achieve the same trick, you need to update your inventory:

plugin: aws_ec2
regions: ...
...
compose:
  ansible_host: instance_id
  ansible_connection: '"amazon.aws.aws_ssm"'
  ansible_aws_ssm_bucket_name: '"..."'

You still use the same aws_ec2 plugin, but need to add a few more details. The ansible_host needs to be the EC2 instance id (rather than an IP address), and you obviously no longer need the ssh ProxyCommand. The connection type is the SSM connection plugin (the nested quotes are needed, so the value is treated as a literal string rather than a variable name). Finally, you need an S3 bucket – Ansible uses it to transfer the Python scripts it runs on the target node, which would previously have gone over SCP/SFTP, I think.
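The connection plugin also has some local prerequisites. A sketch of the setup, assuming a typical control node (package names may vary by platform – the Homebrew cask shown here is one option):

    # Collection providing the aws_ssm connection plugin
    ansible-galaxy collection install amazon.aws

    # boto3 is needed by both the inventory and connection plugins
    pip install boto3

    # The AWS Session Manager plugin must also be installed on the
    # control node (this is the macOS/Homebrew route; Linux packages
    # are available from AWS directly)
    brew install --cask session-manager-plugin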

You should now be able to list the available inventory, as before:

AWS_REGION=... AWS_PROFILE=... ansible-inventory -i inventory.aws_ec2.yml --graph
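Running a playbook then looks the same as before, just pointed at this inventory (the playbook name here is illustrative):

    # site.yml is a placeholder for whatever playbook you normally run
    AWS_REGION=... AWS_PROFILE=... ansible-playbook -i inventory.aws_ec2.yml site.yml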

Once this is working, the transition is pretty seamless. The only real downside is that running a playbook is noticeably slower (roughly 2x) 🐌
