I defined my inventory first:
```yaml
plugin: aws_ec2
regions:
  - eu-west-2
keyed_groups:
  - key: tags.Type
    separator: ''
compose:
  ansible_host: private_ip_address
```
(that last bit matters: without the `compose` entry, `ansible_host` won't be set to the private IP, and the ssh config below won't match). At this point you should be able to list (or graph) the instances you want to connect to:
$ ansible-inventory -i inventories/eu-west-2.aws_ec2.yml --list
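The `--graph` form gives a quicker view of the groups the `keyed_groups` rule creates. With instances tagged `Type: AppServer` (the group targeted below), the output looks roughly like this — the instance names here are illustrative, and will vary with your VPC:

```
$ ansible-inventory -i inventories/eu-west-2.aws_ec2.yml --graph
@all:
  |--@AppServer:
  |  |--ip-10-0-1-23.eu-west-2.compute.internal
  |--@aws_ec2:
  |  |--ip-10-0-1-23.eu-west-2.compute.internal
  |--@ungrouped:
```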
Next you need some ssh config:
```
Host 10.0.*.*
  ProxyCommand ssh -W %h:%p firstname.lastname@example.org
```
I kept it pretty minimal. The IP pattern needs to match whatever range you used for the subnet(s) the instances are attached to (obvs). And the login will vary depending on the image you used, if you are sticking with its default user.
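If your client is OpenSSH 7.3 or newer, `ProxyJump` is a slightly tidier way to express the same `-W`-style forwarding — a sketch, reusing the same placeholder login and jump host:

```
Host 10.0.*.*
  ProxyJump firstname.lastname@example.org
```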
You can then use this config when running your playbook:
$ ANSIBLE_SSH_ARGS="-F lon_ssh_config" ansible AppServer -i inventories/eu-west-2.aws_ec2.yml -u admin -m ping
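Rather than exporting `ANSIBLE_SSH_ARGS` on every run, the same flag can live in an `ansible.cfg` next to the playbook. One caveat: `ssh_args` in the `[ssh_connection]` section replaces Ansible's default SSH options (control persistence and so on), so add back any you rely on:

```ini
[ssh_connection]
ssh_args = -F lon_ssh_config
```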
The IP address of the jump host is hard-coded in the ssh config, which isn't ideal. We could point a DNS record at it and update that instead, if it changes; but there doesn't seem to be any easy way either to get the jump host from the inventory, or to update the CNAME automatically.
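If the DNS route is worth pursuing, Route 53 does at least make the update scriptable. A hedged sketch, assuming a hypothetical hosted zone ID (`Z0EXAMPLE`) and record name (`bastion.example.org`), with the new address in `$JUMP_IP` — note it would be an A record rather than a CNAME, since the target is a bare IP:

```shell
# New jump host address (example value; normally looked up, not hard-coded).
JUMP_IP=10.0.0.5

# UPSERT creates the record if missing, or overwrites it if present.
CHANGE_BATCH=$(cat <<EOF
{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"bastion.example.org","Type":"A","TTL":60,"ResourceRecords":[{"Value":"${JUMP_IP}"}]}}]}
EOF
)

# Submit the change (commented out here; needs real credentials and zone ID):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id Z0EXAMPLE --change-batch "$CHANGE_BATCH"
```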