Disk by id

We’ve been using an (OpenStack-based) cloud provider that can’t guarantee a stable device name for an attached volume.

This was causing problems when the name was used in /etc/fstab: on reboot, if the device name had changed, the instance would hang.

It’s pretty straightforward to use the UUID instead, with Ansible:

- name: Mount vol
  become: yes
  mount:
    path: "{{ mount_point }}"
    src: "UUID={{ ansible_devices[device_name].partitions[device_name + '1'].uuid }}"
    fstype: ext4
    state: mounted

but we still needed the device_name in group vars. Our provider explained that a stable id was provided, in /dev/disk/by-id, which could be used directly for most tasks:

- name: Create a new primary partition
  parted:
    device: "/dev/disk/by-id/{{ device_id }}"
    number: 1
    state: present
  become: yes

- name: Create ext4 filesystem on vol
  become: yes
  filesystem:
    fstype: ext4
    dev: "/dev/disk/by-id/{{ device_id }}-part1"

But how do you get from the id to the device name?

$ ls /dev/disk/by-id/
virtio-c11c38e5-7021-48d2-a  virtio-c11c38e5-7021-48d2-a-part1
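
The by-id entries are just symlinks, so outside Ansible you can resolve one directly:

$ readlink -f /dev/disk/by-id/virtio-c11c38e5-7021-48d2-a
/dev/vdc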
"ansible_devices": {
        "vda": {
            ...
        }, 
        "vdb": {
            ...
        }, 
        "vdc": {
            ...
            "links": {
                "ids": [
                    "virtio-c11c38e5-7021-48d2-a"
                ], 
                ...
            }, 
            ...
        }
    }

This seemed like a job for json_query, but after a fruitless hour or two I gave up and used this (slightly hacky) solution suggested on Stack Overflow:

- name: Get device name
  set_fact:
    device_name: "{{ item.key }}"
  with_dict: "{{ ansible_devices }}"
  when: "(item.value.links.ids[0] | default()) == device_id"
  no_log: yes

Resetting all sequences in postgresql

It’s pretty simple to update the next value generated by a sequence, but what if you want to update them for every table?

In our case, we had been using DMS (AWS Database Migration Service) to import data, but none of the sequences were updated afterwards. So any attempt to insert a new row was doomed to failure.

To update one sequence you can call:

SELECT setval('foo.bar_id_seq', (select max(id) from foo.bar), true);

and you can get a list of tables pretty easily:

\dt *.*

but how do you put them together? My first attempt was using some vim fu (qq@q), until I realised I’d need to use a regex to capture the table name. And then I found some sequences that weren’t using the same name as the table anyway (consistency uber alles).

It’s also easy to get a list of sequences:

SELECT * FROM information_schema.sequences;

but how can you link them back to the table?

The solution is a function called pg_get_serial_sequence:

select t.schemaname, t.tablename,
       pg_get_serial_sequence(t.schemaname || '.' || t.tablename, c.column_name)
from pg_tables t
join information_schema.columns c
  on c.table_schema = t.schemaname and c.table_name = t.tablename
where t.schemaname not in ('pg_catalog', 'information_schema')
  and pg_get_serial_sequence(t.schemaname || '.' || t.tablename, c.column_name) is not null;

This returns the schema, table name, and sequence name for every (non-system) table, which should be “trivial” to convert into a script that updates the sequences (I considered doing it in pure SQL, but dynamic table names aren’t easy there).
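
One way to put it together without writing a separate script is psql’s \gexec (9.6+): generate the setval calls as a result set, and let psql execute each row. A sketch, assuming the id columns are integers:

select format(
    'select setval(%L, (select coalesce(max(%I), 1) from %I.%I), true)',
    pg_get_serial_sequence(t.schemaname || '.' || t.tablename, c.column_name),
    c.column_name, t.schemaname, t.tablename)
from pg_tables t
join information_schema.columns c
  on c.table_schema = t.schemaname and c.table_name = t.tablename
where t.schemaname not in ('pg_catalog', 'information_schema')
  and pg_get_serial_sequence(t.schemaname || '.' || t.tablename, c.column_name) is not null
\gexec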

Streaming a csv from postgresql

If you want to build an endpoint to download a CSV that could contain a large number of rows, you want to use streams, so you don’t have to hold all the data in memory before writing it.

If you are already using the pg client, it has a nifty add-on (pg-query-stream) for just this purpose:

const { Client } = require('pg');
const QueryStream = require('pg-query-stream');
const csvWriter = require("csv-write-stream");

module.exports = function(connectionString) {
    this.handle = function(req, res) {
        var sql = "SELECT...";
        var args = [...];

        const client = new Client({connectionString});
        client.connect().then(() => {
            // a QueryStream fetches rows in batches, rather than buffering them all
            var stream = new QueryStream(sql, args);
            stream.on('end', () => {
                // close the connection once the last row has been read
                client.end();
            });
            var query = client.query(stream);

            var writer = csvWriter();
            res.contentType("text/csv");
            writer.pipe(res);

            // rows flow from the db, through the csv writer, to the response
            query.pipe(writer);
        });
    };
};

If you need to transform the data, you can add another step:

...

const transform = require('stream-transform');

            ...

            var query = client.query(stream);

            var transformer = transform(r => ({
                "User ID": r.user_id,
                "Created": r.created.toISOString(),
                ...
            }));

            ...

            query.pipe(transformer).pipe(writer);
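
For context, this is roughly how the module might be wired into an app (the ./csv-handler path and the route are assumptions):

const express = require('express');
const CsvHandler = require('./csv-handler'); // the module above

const app = express();
const handler = new CsvHandler(process.env.DATABASE_URL);
app.get('/report.csv', (req, res) => handler.handle(req, res));
app.listen(3000);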

Running locust on Fargate

Locust is a programmer-friendly load testing tool (certainly compared with jmeter!). Traditionally, once you needed to generate more load than a single host could easily support, you would set up a swarm. However, if you’re willing to live without the web UI, there is another option.
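
Containerising the scripts is the easy part; here’s a minimal sketch of an image (the locustfile.py name is an assumption, and the TARGET_URL / LOCUST_OPTS env vars match the run-task overrides further down):

FROM python:3.7
RUN pip install locustio
COPY locustfile.py /locustfile.py
# shell form, so the env vars passed in as overrides are expanded
CMD locust -f /locustfile.py --host=$TARGET_URL $LOCUST_OPTS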

Once you have a containerised version of your scripts, you can go “serverless”, and run them as a task on AWS Fargate. You can use the wizard to set up a cluster &c, or define them with CloudFormation:

AWSTemplateFormatVersion: 2010-09-09

Resources:

  Cluster:
    Type: 'AWS::ECS::Cluster'
    Properties:
      ClusterName: ${cluster_name}

  TaskExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: logs
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:CreateLogGroup'
                  - 'logs:CreateLogStream'
                  - 'logs:PutLogEvents'
                Resource: '*'
        - PolicyName: ecr
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - 'ecr:BatchCheckLayerAvailability'
                  - 'ecr:GetDownloadUrlForLayer'
                  - 'ecr:BatchGetImage'
                Resource: !Join
                  - ''
                  - - 'arn:aws:ecr:'
                    - !Ref 'AWS::Region'
                    - ':'
                    - !Ref 'AWS::AccountId'
                    - ':repository/your-repo'
              - Effect: Allow
                Action:
                  - 'ecr:GetAuthorizationToken'
                Resource: '*'

  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub /fargate/${AWS::StackName}
      RetentionInDays: 7

  TaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      Family: ${name}
      Cpu: 256
      Memory: 512
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !Ref TaskExecutionRole
      ContainerDefinitions:
        - Name: ${name}
          Cpu: 256
          Memory: 512
          Image: ${account_id}.dkr.ecr.${region}.amazonaws.com/${image_name}:latest
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-region: ${region}
              awslogs-group: !Ref LogGroup
              awslogs-stream-prefix: !Ref AWS::StackName

Once the stack is created, you can run a task on the cluster:

aws ecs run-task \
  --launch-type FARGATE \
  --cluster ${cluster-name} \
  --task-definition ${task-name}:${latest-revision} \
  --network-configuration "awsvpcConfiguration={subnets=[${public-subnet-id}],securityGroups=[${security-group}],assignPublicIp='ENABLED'}" \
  --count 1 \
  --overrides '{"containerOverrides":[{"name":"${name}","environment":[{"name":"TARGET_URL","value":"${target-url}"},{"name":"LOCUST_OPTS","value":"--clients=100 --no-web --only-summary --run-time=1h"}]}]}'

(You can use a public subnet and security group from the default VPC.) That will spawn 100 VUs, for an hour, against your chosen target. And you can just keep adding more tasks. Any logs from locust will be available in the AWS console.
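
If you prefer the terminal, the (v2) AWS CLI can tail the log group defined in the template:

aws logs tail /fargate/${stack-name} --follow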

Terminating TLS at an ALB

While there are definite benefits to having a zero trust network, it’s also convenient to outsource all the certificate management.

First you need to create a cert with ACM, either by importing one, or letting ACM issue it (managed renewal ftw!):

  Certificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Ref 'DomainName'
      ValidationMethod: DNS

That in hand, you can create the LB, Listener & Target Group:

  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: foo
      Subnets:
        - !Ref PublicSubnet1
        - ...
      SecurityGroups:
        - !Ref SecurityGroup
  LoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 443
      Protocol: HTTPS
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref DefaultTargetGroup
      SslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01
      Certificates:
        - CertificateArn: !Ref Certificate
  DefaultTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: foo
      VpcId: !Ref Vpc
      Port: 80
      Protocol: HTTP
  SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref Vpc
      GroupDescription: Enable HTTPS access for LB
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '443'
          ToPort: '443'
          CidrIp: '0.0.0.0/0'

Make sure you use a public subnet, or you won’t be able to reach the LB!
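
If you also want to catch plain HTTP traffic, an extra listener can redirect it to HTTPS (a sketch; you’d need to open port 80 in the security group too):

  HttpListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: redirect
          RedirectConfig:
            Protocol: HTTPS
            Port: '443'
            StatusCode: HTTP_301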

Ansible, AWS, and bastion hosts, oh my!

There’s some useful info available about using a jump host with Ansible, and AWS dynamic inventory; but either the world has changed since those were written, or my scenario is slightly different.

I defined my inventory first:

plugin: aws_ec2
regions:
  - eu-west-2
keyed_groups:
  - key: tags.Type
    separator: ''
compose:
  ansible_host: private_ip_address

(that last bit is important; otherwise the ssh config won’t work). At this point you should be able to list (or graph) the instances you want to connect to:

$ ansible-inventory -i inventories/eu-west-2.aws_ec2.yml --list

Next you need some ssh config:

Host 10.0.*.*
    ProxyCommand ssh -W %h:%p admin@52.56.111.199

I kept it pretty minimal. The IP mask needs to match whatever you used for the subnet(s) the instances are attached to (obvs). And the login (admin here) may vary, depending on the image you used.
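
On newer versions of OpenSSH (7.3+), the equivalent ProxyJump directive is a little tidier:

Host 10.0.*.*
    ProxyJump admin@52.56.111.199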

You can then use this config when running a playbook (or, as here, an ad-hoc command):

ANSIBLE_SSH_ARGS="-F lon_ssh_config" ansible AppServer -i inventories/eu-west-2.aws_ec2.yml -u admin -m ping

The IP address for the jump host is hard-coded in the ssh config, which isn’t ideal. We may use a DNS record instead, and update that if it changes; but there doesn’t seem to be any easy way to either get the address from the inventory, or update the CNAME automatically.

AutoScalingGroup with a LaunchTemplate

There are plenty of examples for creating an ASG using a CloudFormation template, but those I found all used a “launch configuration”.

According to the docs, using a launch template is the new hotness, so I foolishly assumed it would be simple to adapt one to the other.

Some time later, I had a working example:

  AutoScalingGroup:
    Type: 'AWS::AutoScaling::AutoScalingGroup'
    Properties:
      LaunchTemplate:
        LaunchTemplateId: !Ref LaunchTemplate
        Version: !GetAtt LaunchTemplate.LatestVersionNumber
      VPCZoneIdentifier:
        - Fn::ImportValue:
            Fn::Sub: '${Network1StackName}-PublicSubnetId'
        - Fn::ImportValue:
            Fn::Sub: '${Network2StackName}-PublicSubnetId'
      MinSize: 1
      MaxSize: 1
  LaunchTemplate:
    Type: 'AWS::EC2::LaunchTemplate'
    Properties:
      LaunchTemplateData:
        ImageId: "..."
        InstanceType: "..."
        SecurityGroupIds:
          - Fn::ImportValue:
              Fn::Sub: '${SecurityGroupsStackName}-SshIngressSecurityGroupId'

Word chains (Part 3)

Last time, we found the possible next words. Now we want to build on that, and use that function to build a chain from the first word to the goal word. Sounds like a job for recursion (divide and conquer)!

This time, we’ll check that the chains generated all end in the expected word:

prop_all_chains_should_include_last_word() ->
    ?FORALL({FirstWord, LastWord}, valid_words(),
        begin
            Words = word_chains:word_list(length(FirstWord)),
            Chains = word_chains:all_chains(FirstWord, LastWord, Words, length(FirstWord)),
            InvalidChains = lists:filter(fun([W|_]) -> W =/= LastWord end, Chains),
            length(InvalidChains) =:= 0
        end).

We pass in the word list (all words of the chosen length), to avoid reading the file multiple times.

all_chains(FirstWord, LastWord, Words, MaxLength) ->
    lists:sort(fun(A, B) -> length(A) =< length(B) end, all_chains(FirstWord, LastWord, Words, MaxLength, [[FirstWord]])).

all_chains(FirstWord, LastWord, Words, MaxLength, Chains) ->
    lists:append(lists:map(fun(Chain) ->
        [CurrentWord | _Rest] = Chain,
        case CurrentWord =:= LastWord of
            true -> [Chain];
            false ->
                NextWords = next_words(CurrentWord, Words),
                NewChains = compact(lists:map(fun(NewWord) ->
                    case lists:member(NewWord, Chain) of
                        false ->
                            NewChain = [NewWord | Chain],
                            case length(NewChain) > MaxLength of
                                true -> [];
                                false -> NewChain
                            end;
                        true -> []
                    end
                end, NextWords)),
                all_chains(FirstWord, LastWord, Words, MaxLength, NewChains)
        end
    end, Chains)).

Our first chain is simply the first word, e.g. ["cat"]. We then iterate over the list, find all the possible next words, and create the possible chains using those words: [["bat", "cat"], ["cab", "cat"], &c]. (Note that each chain is built in reverse, with the newest word at the head.)

If any chain ends in the target word, no more work is required. Otherwise we continue to extend, and branch, the chains. If the proposed next word already exists in the current chain, then that branch is dead (to avoid looping forever).
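
The compact helper isn’t shown above; all it needs to do is drop the empty lists marking dead branches. Something like:

compact(Lists) ->
    lists:filter(fun(L) -> L =/= [] end, Lists).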

Once all branches have been exhausted, we return the list of valid chains, sorted by length (shortest first).

Unfortunately, while it seemed like a good idea to generate all possible chains, it turns out that some of them can be very long. So I added a max length param, to cut short further exploration.

Even with that, execution can be pretty slow; so next time we’ll do some profiling, and see if caching the possible next words will help.

Word chains (Part 2)

Previously, we laid some groundwork for generating word chains. Rather than arbitrarily returning one word, we might as well get all the words that are one letter different from the first word:

prop_next_words_should_be_near() ->
    ?FORALL({FirstWord, LastWord}, valid_words(),
        begin
            NextWords = word_chains:next_words(FirstWord),
            InvalidWords = lists:filter(fun(W) -> word_chains:get_word_distance(W, FirstWord) =/= 1 end, NextWords),
            length(InvalidWords) =:= 0
        end).

We can calculate the “word distance” using map/reduce:

get_word_distance(Word1, Word2) ->
    Differences = lists:zipwith(fun(X, Y) -> case X =:= Y of true -> 0; false -> 1 end end, Word1, Word2),
    lists:foldl(fun(D, Acc) -> Acc + D end, 0, Differences).

For each letter in Word1, we compare it with the letter at the same position in Word2, and assign 0 if they match and 1 if they differ. The sum of these values is the distance between the two words.

2> word_chains:get_word_distance("cat", "cat").
0
3> word_chains:get_word_distance("cat", "cot").
1
4> word_chains:get_word_distance("cat", "cog").
2

Using this helper function, we can easily find all the possible next words:

next_words(FirstWord) ->
    WordList = word_list(),
    SameLengthWords = lists:filter(fun(W) -> length(W) =:= length(FirstWord) end, WordList),
    WordDistances = lists:map(fun(W) -> {W, get_word_distance(W, FirstWord)} end, SameLengthWords),
    lists:map(fun({Word, _}) -> Word end, lists:filter(fun({_, Distance}) -> Distance =:= 1 end, WordDistances)).
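
We’ve been leaning on word_list throughout; for reference, a minimal sketch of what it needs to do (the dictionary path is an assumption):

word_list() ->
    {ok, Binary} = file:read_file("/usr/share/dict/words"),
    [binary_to_list(W) || W <- binary:split(Binary, <<"\n">>, [global]), W =/= <<>>].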

Almost there! Next time, we will actually start generating some word chains.