Terminating TLS at an ALB

While there are definite benefits to having a zero-trust network, it’s also convenient to outsource all the certificate management to AWS.

First you need to create a cert with ACM, either by importing one, or letting ACM issue it (managed renewal ftw!):

  Certificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Ref 'DomainName'
      ValidationMethod: DNS
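
With DNS validation, the stack will sit in progress until the validation CNAME exists. If the zone lives in Route 53, you can have ACM create that record itself; a sketch, assuming a HostedZoneId parameter that isn’t in the template above:

  Certificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Ref 'DomainName'
      ValidationMethod: DNS
      # lets ACM write the validation CNAME into the zone automatically
      DomainValidationOptions:
        - DomainName: !Ref 'DomainName'
          HostedZoneId: !Ref 'HostedZoneId'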

That in hand, you can create the LB, Listener & Target Group:

  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: foo
      Subnets:
        - !Ref PublicSubnet1
        - ...
      SecurityGroups:
        - !Ref SecurityGroup
  LoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 443
      Protocol: HTTPS
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref DefaultTargetGroup
      SslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01
      Certificates:
        - CertificateArn: !Ref Certificate
  DefaultTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: foo
      VpcId: !Ref Vpc
      Port: 80
      Protocol: HTTP
  SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref Vpc
      GroupDescription: Enable HTTPS access for LB
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: '0.0.0.0/0'

Make sure you use a public subnet, or you won’t be able to reach the LB!
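
To give the LB a friendly name, point DNS at it. A sketch using a Route 53 alias record (the HostedZoneName parameter is an assumption, not part of the stack above):

  DnsRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: !Ref 'HostedZoneName'
      Name: !Ref 'DomainName'
      Type: A
      AliasTarget:
        # both attributes come straight off the load balancer resource
        DNSName: !GetAtt LoadBalancer.DNSName
        HostedZoneId: !GetAtt LoadBalancer.CanonicalHostedZoneID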

Ansible, AWS, and bastion hosts, oh my!

There’s some useful info available about using a jump host with Ansible and AWS dynamic inventory; but either the world has changed since those articles were written, or my scenario is slightly different.

I defined my inventory first:

plugin: aws_ec2
regions:
  - eu-west-2
keyed_groups:
  - key: tags.Type
    separator: ''
compose:
  ansible_host: private_ip_address

(that last bit is important: without mapping ansible_host to the private IP, the ssh config below won’t match). At this point you should be able to list (or graph) the instances you want to connect to:

$ ansible-inventory -i inventories/eu-west-2.aws_ec2.yml --list

Next you need some ssh config:

Host 10.0.*.*
    ProxyCommand ssh -W %h:%p admin@52.56.111.199

I kept it pretty minimal. The IP mask needs to match whatever you used for the subnet(s) the instances are attached to (obvs). And the login varies with the image: admin here, but it would be e.g. ec2-user or ubuntu on other AMIs, if you are using the defaults.
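
(If your OpenSSH is reasonably recent, 7.3 or later, ProxyJump does the same job with less typing:)

Host 10.0.*.*
    ProxyJump admin@52.56.111.199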

You can then use this config when running Ansible; here, an ad-hoc ping of the AppServer group (which the keyed_groups entry above generates from the Type tag):

ANSIBLE_SSH_ARGS="-F lon_ssh_config" ansible AppServer -i inventories/eu-west-2.aws_ec2.yml -u admin -m ping
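
The same arguments work for an actual playbook run (site.yml is a stand-in for yours):

ANSIBLE_SSH_ARGS="-F lon_ssh_config" ansible-playbook -i inventories/eu-west-2.aws_ec2.yml -u admin site.yml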

The IP address for the jump host is hard-coded in the ssh config, which isn’t ideal. We could use a DNS record instead, and update that when the IP changes; but there doesn’t seem to be an easy way either to get the address from the inventory, or to update the CNAME automatically.
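
One possible workaround: look the bastion up by tag with the AWS CLI before each run, and regenerate the ssh config from the result (this sketch assumes a Type=Bastion tag, which is my convention rather than anything shown above):

# prints the public IP of the running bastion
aws ec2 describe-instances \
  --region eu-west-2 \
  --filters "Name=tag:Type,Values=Bastion" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PublicIpAddress' \
  --output text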

AutoScalingGroup with a LaunchTemplate

There are plenty of examples for creating an ASG using a CloudFormation template, but those I found all used a “launch configuration”.

According to the docs, using a launch template is the new hotness, so I foolishly assumed it would be simple to adapt one to the other.

Some time later, I had a working example:

  AutoScalingGroup:
    Type: 'AWS::AutoScaling::AutoScalingGroup'
    Properties:
      LaunchTemplate:
        LaunchTemplateId: !Ref LaunchTemplate
        Version: !GetAtt LaunchTemplate.LatestVersionNumber
      VPCZoneIdentifier:
        - Fn::ImportValue:
            Fn::Sub: '${Network1StackName}-PublicSubnetId'
        - Fn::ImportValue:
            Fn::Sub: '${Network2StackName}-PublicSubnetId'
      MinSize: 1
      MaxSize: 1
  LaunchTemplate:
    Type: 'AWS::EC2::LaunchTemplate'
    Properties:
      LaunchTemplateData:
        ImageId: "..."
        InstanceType: "..."
        SecurityGroupIds:
          - Fn::ImportValue:
              Fn::Sub: '${SecurityGroupsStackName}-SshIngressSecurityGroupId'
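
A couple of notes: Version: !GetAtt LaunchTemplate.LatestVersionNumber means the ASG uses the newest template version after a stack update (existing instances aren’t replaced unless you also add an UpdatePolicy, or trigger an instance refresh). And if these instances are meant to sit behind the ALB from the first section, the ASG can register them with the target group itself; a sketch, assuming DefaultTargetGroup is defined in the same template:

  AutoScalingGroup:
    Type: 'AWS::AutoScaling::AutoScalingGroup'
    Properties:
      # ...as above, plus:
      TargetGroupARNs:
        - !Ref DefaultTargetGroup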