Game of Life in Elm (Pt 2)

In Part 1 we rendered an empty grid, but for a more interesting game we need to initialise the board with a random selection of dead & alive cells (known as the seed).

The Random package works a bit differently in Elm: while it is possible to generate values one at a time (threading a seed through yourself), it’s more common to describe a Generator and have the runtime run it via a command:

import Random

init : (Model, Cmd Msg)
init =
    ([], Random.generate NewBoard seedBoard)

seedBoard : Random.Generator Board
seedBoard =
    Random.list 5 seedRow

seedRow : Random.Generator Row
seedRow =
    Random.list 5 seedCell

seedCell : Random.Generator Cell
seedCell =
    Random.map (\b -> if b then Dead else Alive) Random.bool

We now initialise our grid as an empty list, and await the result of a command generating 5 rows of 5 cells, randomly Dead or Alive (coin flip).

Our update function needs to handle the message:

type Msg
    = NewBoard Board
    | Tick

update : Msg -> Model -> (Model, Cmd Msg)
update msg model =
    case msg of
        Tick ->
            (model, Cmd.none)
        NewBoard board ->
            (board, Cmd.none)
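
One gotcha: the renderCell from Part 1 draws every cell the same, so without a change the seed will be invisible. A minimal version that branches on the cell state (the colours are my choice) looks something like this:

renderCell : Cell -> Html Msg
renderCell cell =
    let
        fill =
            case cell of
                Alive ->
                    "black"

                Dead ->
                    "white"
    in
        td [ style [ ("border", "1px solid black"), ("height", "50px"), ("width", "50px"), ("background-color", fill) ] ] []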

If you run the code, you should see a 5×5 grid with approximately half the squares alive (filled in black) and half dead (filled in white).

In the next part we will actually begin to play the game.

Game of Life in Elm (Pt 1)

I stay as far away as possible from the front end, and have no interest in React or Angular; but Elm has piqued my interest, as a new functional language “compiled” to JavaScript. I thought it might be interesting to try to implement Conway’s Game of Life.

In this first part, I will simply render the grid. You can follow along with the repo here, and see the working final version here. I’m using Elm 0.18, the latest version at the time of writing.

We start with the standard program for the Elm architecture:

import Html exposing (Html, div, h1, table, tbody, td, text, tr)
import Html.Attributes exposing (class, style)

main : Program Never Model Msg
main =
    Html.program {
        init = init,
        view = view,
        update = update,
        subscriptions = \_ -> Sub.none
    }

The simplest way to represent the grid is as a list of lists:

type Cell = Dead | Alive
type alias Row = List Cell
type alias Board = List Row
type alias Model = Board

init : (Model, Cmd Msg)
init =
    (emptyBoard, Cmd.none)

emptyBoard : Board
emptyBoard = [
        [ Dead, Dead, Dead ],
        [ Dead, Dead, Dead ],
        [ Dead, Dead, Dead ]
    ]

We need an update function:

type Msg
    = Tick

update : Msg -> Model -> (Model, Cmd Msg)
update msg model =
    case msg of
        Tick ->
            (model, Cmd.none)

though right now this message will never be received. Finally, we render the model as a table:

view : Model -> Html Msg
view model =
    div [] [
        h1 [] [ text "Game Of Life" ],
        div [ class "board" ] [
            table [ style [ ("border-collapse", "collapse"), ("border", "1px solid black") ] ] [
                tbody [] (List.map renderRow model)
            ]
        ]
    ]

renderRow : Row -> Html Msg
renderRow row =
    tr [] (List.map renderCell row)

renderCell : Cell -> Html Msg
renderCell _ =
    -- every cell looks the same for now; the seed in Part 2 makes this interesting
    td [ style [ ("border", "1px solid black"), ("height", "50px"), ("width", "50px") ] ] []

You can compile the main file using elm-make, or run elm-reactor, and you should see a 3×3 grid rendered.
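
For reference, assuming the file is called Main.elm:

elm-make Main.elm --output=index.html

or run elm-reactor and browse to http://localhost:8000 for a quicker feedback loop.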

In Part 2 we’ll look at how to make the “seed” more interesting.

Using multiple indices with ELK

We’re running a relatively old version of ELK (1.x), which is working fine; but I don’t want to slip too far behind (the beta for 5.0 is out, although that version number is a jump from 2.x).

The last time I tried to upgrade, I ran into problems with “mapping conflicts”, which newer versions no longer accept. I tried to fix the individual conflicts, yet somehow ended up with more conflicts than I had before.

This time, I decided to nuke it from orbit, and use separate indices for different log types:

output {
    elasticsearch {
        host => "localhost"
        index => "%{type}-%{+YYYY.MM.dd}"
    }
}

This means, for example, that my nginx and postgresql logs are in separate daily indices (nginx-2016.08.01, postgresql-2016.08.01, and so on); and therefore, similarly named fields no longer conflict.

The main benefit and downside are intertwined: it makes querying simpler as you don’t have to include the type, but you can’t query across log types (this hasn’t been a problem, so far). It also makes it far simpler to check which logs are taking up all the space on disk, and probably in memory.
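
On that last point, the _cat API gives a quick per-index breakdown, including store.size (the host is illustrative):

curl 'localhost:9200/_cat/indices?v'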

Uploading to Strava from Arch Linux

I’ve had an MBP for work for the last year or so, and had side-stepped the misery of getting GPS data from my Forerunner 305 onto Strava. Now I’m back on Arch full time, and the plugin I was using no longer works; so it was time to find a new approach.

The first thing you will need is a copy of garmintools, which no longer seems to be available in the AUR. If you can get that built and installed, then I refer you to my previous article to get it working.

At that point, you should be able to run “garmin_save_runs”, and all the data on your device will be exported (in the current working directory!). The data is organised by date, so it should be pretty easy to find the track you want.

Unfortunately, the data is exported as a “.gmn” file, which isn’t supported by Strava; so we need to convert it. Next stop is garmin-dev. You can either clone the repo (if you know what git is), or just download the zip. It’s then as simple as pointing the tool at your file, and saving the output:

./gmn2tcx 20160801T151814.gmn > 20160801T151814.tcx
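
Or, to convert a whole month in one go (assuming the date-based directory layout the export produced for me):

for f in 2016/08/*.gmn; do ./gmn2tcx "$f" > "${f%.gmn}.tcx"; done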

Then upload the resulting .tcx file(s) to Strava. Happy trails!

JSON error page for Nginx

We use the ngx_http_auth_request_module to authenticate requests by calling one of our services. Unfortunately, if the request is rejected, it returns the default 401 error page:

<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.9.10</center>
</body>
</html>

That looks a bit out of place when the rest of our responses are JSON. A simple fix is to override it:

server {
    ...
    error_page 401 @401_json;

    location @401_json {
        default_type application/json;
        return 401 '{"error":{"message":"Unauthorised"}}';
    }
}

But remember this will be used for all 401s, not just those from the auth module.
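
If that matters, error_page can be scoped to just the protected locations instead of the whole server (the paths here are illustrative):

server {
    ...
    location /api/ {
        auth_request /auth;
        error_page 401 @401_json;
    }

    location @401_json {
        default_type application/json;
        return 401 '{"error":{"message":"Unauthorised"}}';
    }
}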

Mapping conflicts with ELK

I recently started upgrading to a newer version of ES (2.x), and found that it refused to start:

IllegalStateException: unable to upgrade the mappings for the index

In fact, this mapping conflict was one of the things I was hoping the upgrade would solve. After a bit of reading it became clear that I would have to make some changes.

The mapping in question was a field from the logs called “level”. In the postgres logs it was a string (e.g. “INFO”), and in our application logs (using bunyan) it was an integer (40 => “WARN”).

To allow me to search using a range (e.g. level:[40 TO 60]), I was using a mutate filter to convert the string “40” to an integer, and this was the cause of the conflict.

My first thought was to copy the field before converting:

mutate {
    add_field => { "level_int" => "%{level}" }
    convert => { "level_int" => "integer" }
}
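
One gotcha here: add_field is one of Logstash’s “common options”, applied after the filter’s own mutations; so within a single block, the convert runs before level_int even exists. Splitting the operations into two mutate blocks forces the order:

mutate {
    add_field => { "level_int" => "%{level}" }
}
mutate {
    convert => { "level_int" => "integer" }
}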

But it turns out that even the copy wasn’t enough to avoid a conflict (possibly because ES guesses the type, and saw an int first?). So I went with the nuclear option, and renamed the field:

mutate {
    # rename runs before convert in mutate's fixed processing order, so one block works here
    rename => { "level" => "level_int" }
    convert => { "level_int" => "integer" }
}

Now my new documents were conflict-free. Unfortunately, the only solution provided for existing data is to export and re-import it, which I wasn’t really in the mood for.

Luckily, I’m not in any rush to upgrade, and we close indices after 30 days. So I plan to wait for a month, and hope my data is clean by then!

“Trouble parsing json”

We use Bunyan in our node apps, for “structured logging”. The output JSON string is passed to syslog by systemd, and then fed into ELK. A typical entry looks something like this:

{
    "name":"foo-service",
    "hostname":"app-01",
    "pid":30988,
    "ip":"1.19.24.8",
    "requestId":"1c11f448-73f2-4efa-bc63-3de787618d49",
    "level":50,
    "err": {
        "message":"oh noes!"
    }
}

Unfortunately, if that string is longer than 2048 chars (usually a stacktrace, or html returned from a web service instead of json), then the json blob ends up split over 2 lines in syslog.

This causes ELK to barf when attempting to parse the broken lines (assuming you are parsing as json), and means you won’t see those errors in Kibana.

It is possible to detect the error parsing the error by searching for the string “Trouble parsing json”, but that’s not really a solution.

I would rather see a truncated error than the current situation, but that means either wrapping or patching Bunyan itself.
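
A sketch of the wrapping option, using a raw Bunyan stream (the per-field limit, the marker string, and the recursive walk are my own arbitrary choices):

import * as bunyan from "bunyan";

const MAX_FIELD = 1024; // leave headroom under the 2048-char syslog limit

// Recursively trim oversized string fields, so the serialised record
// stays valid json and (hopefully) fits on a single syslog line.
function truncateStrings(obj: any): void {
    for (const key of Object.keys(obj)) {
        const val = obj[key];
        if (typeof val === "string" && val.length > MAX_FIELD) {
            obj[key] = val.slice(0, MAX_FIELD) + "...[truncated]";
        } else if (val !== null && typeof val === "object") {
            truncateStrings(val); // reaches nested fields like err.stack
        }
    }
}

// A "raw" stream is handed the record object rather than a pre-serialised
// string, which is what lets us rewrite it before output.
const truncatingStream = {
    write(rec: any): void {
        truncateStrings(rec);
        process.stdout.write(JSON.stringify(rec) + "\n");
    },
};

const log = bunyan.createLogger({
    name: "foo-service",
    streams: [{ type: "raw", stream: truncatingStream, level: "info" }],
});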