A generic JSON handler in Go

One of the things I really like about Go is the way it encourages you to build up behaviour using composition. For example, I was writing handlers that return a response object as JSON; it’s very simple to extract the duplicated code into a generic handler:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// ResponseGenerator produces the value to be serialised as the JSON
// response body.
type ResponseGenerator interface {
	GetResponse(r *http.Request) (interface{}, error)
}

// JsonHandler wraps a ResponseGenerator, taking care of serialisation and
// error handling.
type JsonHandler struct {
	log *log.Logger
	rg  ResponseGenerator
}

func (h *JsonHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	response, err := h.rg.GetResponse(r)
	if err != nil {
		h.log.Printf("ERROR: %v\n", err)
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	b, err := json.Marshal(response)
	if err != nil {
		h.log.Printf("ERROR: %v\n", err)
		http.Error(w, "Unexpected error", http.StatusInternalServerError)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	w.Write(b)
}
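
To give a feel for how this composes, here’s a minimal sketch of a concrete ResponseGenerator and the wiring for it (the helloGenerator type and the /hello route are made up for illustration, and "os" would need adding to the imports above):

// helloGenerator is a hypothetical ResponseGenerator returning a static payload.
type helloGenerator struct{}

func (helloGenerator) GetResponse(r *http.Request) (interface{}, error) {
	return map[string]string{"message": "hello"}, nil
}

func main() {
	h := &JsonHandler{
		log: log.New(os.Stderr, "", log.LstdFlags),
		rg:  helloGenerator{},
	}
	http.Handle("/hello", h)
	log.Fatal(http.ListenAndServe(":8080", nil))
}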

Running the dieharder test suite against a Fortuna implementation

I wanted to run the dieharder suite of tests against this implementation of the Fortuna PRNG algorithm. Once I’d worked out how to dump binary output to stdout, it was pretty easy to generate an infinite stream of (hopefully) random data that could be piped into dieharder:

package main

import (
	"crypto/aes"
	"encoding/binary"
	"os"

	"github.com/seehuhn/fortuna"
)

func main() {
	rng, err := fortuna.NewRNG("")
	if err != nil {
		panic("cannot initialise the RNG: " + err.Error())
	}
	defer rng.Close()
	gen := fortuna.NewGenerator(aes.NewCipher)
	// Stream 16-byte blocks of pseudo-random data to stdout indefinitely.
	for {
		err = binary.Write(os.Stdout, binary.LittleEndian, gen.PseudoRandomData(16))
		if err != nil {
			panic("binary.Write failed: " + err.Error())
		}
	}
}

(The source can be found here.) Below is an example of the output:

$ go run src/fortuna-dieharder.go | dieharder -a -g 200 -k 2 -Y 1
#=============================================================================#
#            dieharder version 3.31.1 Copyright 2003 Robert G. Brown          #
#=============================================================================#
   rng_name    |rands/second|   Seed   |
stdin_input_raw|  3.79e+05  |4078583786|
#=============================================================================#
        test_name   |ntup| tsamples |psamples|  p-value |Assessment
#=============================================================================#
   diehard_birthdays|   0|       100|     100|0.79585928|  PASSED  
      diehard_operm5|   0|   1000000|     100|0.78748529|  PASSED  
  diehard_rank_32x32|   0|     40000|     100|0.65244868|  PASSED  
    diehard_rank_6x8|   0|    100000|     100|0.15429465|  PASSED  
   diehard_bitstream|   0|   2097152|     100|0.48566800|  PASSED  
        diehard_opso|   0|   2097152|     100|0.35772252|  PASSED  
        diehard_oqso|   0|   2097152|     100|0.47349980|  PASSED  
         diehard_dna|   0|   2097152|     100|0.48349937|  PASSED  
diehard_count_1s_str|   0|    256000|     100|0.95257057|  PASSED  
diehard_count_1s_byt|   0|    256000|     100|0.84437819|  PASSED  
 diehard_parking_lot|   0|     12000|     100|0.31899023|  PASSED  
    diehard_2dsphere|   2|      8000|     100|0.24483205|  PASSED  
    diehard_3dsphere|   3|      4000|     100|0.04900588|  PASSED  
     diehard_squeeze|   0|    100000|     100|0.96659492|  PASSED  
        diehard_sums|   0|       100|     100|0.92162735|  PASSED  
        diehard_runs|   0|    100000|     100|0.29733372|  PASSED  
        diehard_runs|   0|    100000|     100|0.21733297|  PASSED  
       diehard_craps|   0|    200000|     100|0.00620028|  PASSED  
       diehard_craps|   0|    200000|     100|0.77686961|  PASSED  
 marsaglia_tsang_gcd|   0|  10000000|     100|0.41392103|  PASSED  
 marsaglia_tsang_gcd|   0|  10000000|     100|0.94153849|  PASSED  
         sts_monobit|   1|    100000|     100|0.97969393|  PASSED  
            sts_runs|   2|    100000|     100|0.58348717|  PASSED  
          sts_serial|   1|    100000|     100|0.43332780|  PASSED  
          sts_serial|   2|    100000|     100|0.45215572|  PASSED  
          sts_serial|   3|    100000|     100|0.63888152|  PASSED  
          sts_serial|   3|    100000|     100|0.24532215|  PASSED  
          sts_serial|   4|    100000|     100|0.82498873|  PASSED  
          sts_serial|   4|    100000|     100|0.70631562|  PASSED  
          sts_serial|   5|    100000|     100|0.14098307|  PASSED  
          sts_serial|   5|    100000|     100|0.05971648|  PASSED  
          sts_serial|   6|    100000|     100|0.44033615|  PASSED  
          sts_serial|   6|    100000|     100|0.84254288|  PASSED  
          sts_serial|   7|    100000|     100|0.94534057|  PASSED  
          sts_serial|   7|    100000|     100|0.56895078|  PASSED  
          sts_serial|   8|    100000|     100|0.17537892|  PASSED  
          sts_serial|   8|    100000|     100|0.41596536|  PASSED  
          sts_serial|   9|    100000|     100|0.93984064|  PASSED  
          sts_serial|   9|    100000|     100|0.77199673|  PASSED  
          sts_serial|  10|    100000|     100|0.49608967|  PASSED  
          sts_serial|  10|    100000|     100|0.19713502|  PASSED  
          sts_serial|  11|    100000|     100|0.32361987|  PASSED  
          sts_serial|  11|    100000|     100|0.28265068|  PASSED  
          sts_serial|  12|    100000|     100|0.32824861|  PASSED  
          sts_serial|  12|    100000|     100|0.43554391|  PASSED  
          sts_serial|  13|    100000|     100|0.01463804|  PASSED  
          sts_serial|  13|    100000|     100|0.24260554|  PASSED  
          sts_serial|  14|    100000|     100|0.95754708|  PASSED  
          sts_serial|  14|    100000|     100|0.11090928|  PASSED  
          sts_serial|  15|    100000|     100|0.62735234|  PASSED  
          sts_serial|  15|    100000|     100|0.97145571|  PASSED  
          sts_serial|  16|    100000|     100|0.45211316|  PASSED  
          sts_serial|  16|    100000|     100|0.60723389|  PASSED  
         rgb_bitdist|   1|    100000|     100|0.36658874|  PASSED  
         rgb_bitdist|   2|    100000|     100|0.99411631|  PASSED  
         rgb_bitdist|   3|    100000|     100|0.06557851|  PASSED  
         rgb_bitdist|   4|    100000|     100|0.09920095|  PASSED  
         rgb_bitdist|   5|    100000|     100|0.69241623|  PASSED  
         rgb_bitdist|   6|    100000|     100|0.34436231|  PASSED  
         rgb_bitdist|   7|    100000|     100|0.79110397|  PASSED  
         rgb_bitdist|   8|    100000|     100|0.15949927|  PASSED  
         rgb_bitdist|   9|    100000|     100|0.82775041|  PASSED  
         rgb_bitdist|  10|    100000|     100|0.61202477|  PASSED  
         rgb_bitdist|  11|    100000|     100|0.88799469|  PASSED  
         rgb_bitdist|  12|    100000|     100|0.96488193|  PASSED  
rgb_minimum_distance|   2|     10000|    1000|0.35186805|  PASSED  
rgb_minimum_distance|   3|     10000|    1000|0.21343396|  PASSED  
rgb_minimum_distance|   4|     10000|    1000|0.18466470|  PASSED  
rgb_minimum_distance|   5|     10000|    1000|0.49211455|  PASSED  
    rgb_permutations|   2|    100000|     100|0.93013365|  PASSED  
    rgb_permutations|   3|    100000|     100|0.24389986|  PASSED  
    rgb_permutations|   4|    100000|     100|0.10744314|  PASSED  
    rgb_permutations|   5|    100000|     100|0.86806297|  PASSED  
      rgb_lagged_sum|   0|   1000000|     100|0.57089479|  PASSED  
      rgb_lagged_sum|   1|   1000000|     100|0.63991216|  PASSED  
      rgb_lagged_sum|   2|   1000000|     100|0.89554236|  PASSED  
      rgb_lagged_sum|   3|   1000000|     100|0.94219831|  PASSED  
      rgb_lagged_sum|   4|   1000000|     100|0.76276793|  PASSED  
      rgb_lagged_sum|   5|   1000000|     100|0.92606410|  PASSED  
      rgb_lagged_sum|   6|   1000000|     100|0.93189843|  PASSED  
      rgb_lagged_sum|   7|   1000000|     100|0.98320319|  PASSED  
      rgb_lagged_sum|   8|   1000000|     100|0.92024295|  PASSED  
      rgb_lagged_sum|   9|   1000000|     100|0.13888672|  PASSED  
      rgb_lagged_sum|  10|   1000000|     100|0.76576110|  PASSED  
      rgb_lagged_sum|  11|   1000000|     100|0.82383503|  PASSED  
      rgb_lagged_sum|  12|   1000000|     100|0.98684700|  PASSED  
      rgb_lagged_sum|  13|   1000000|     100|0.26130464|  PASSED  
      rgb_lagged_sum|  14|   1000000|     100|0.26742272|  PASSED  
      rgb_lagged_sum|  15|   1000000|     100|0.76937245|  PASSED  
      rgb_lagged_sum|  16|   1000000|     100|0.05400234|  PASSED  
      rgb_lagged_sum|  17|   1000000|     100|0.11081369|  PASSED  
      rgb_lagged_sum|  18|   1000000|     100|0.89407701|  PASSED  
      rgb_lagged_sum|  19|   1000000|     100|0.83792478|  PASSED  
      rgb_lagged_sum|  20|   1000000|     100|0.61427436|  PASSED  
      rgb_lagged_sum|  21|   1000000|     100|0.66549271|  PASSED  
      rgb_lagged_sum|  22|   1000000|     100|0.18433339|  PASSED  
      rgb_lagged_sum|  23|   1000000|     100|0.63460123|  PASSED  
      rgb_lagged_sum|  24|   1000000|     100|0.99377481|  PASSED  
      rgb_lagged_sum|  25|   1000000|     100|0.48356785|  PASSED  
      rgb_lagged_sum|  26|   1000000|     100|0.49883205|  PASSED  
      rgb_lagged_sum|  27|   1000000|     100|0.04619725|  PASSED  
      rgb_lagged_sum|  28|   1000000|     100|0.64940352|  PASSED  
      rgb_lagged_sum|  29|   1000000|     100|0.96496460|  PASSED  
      rgb_lagged_sum|  30|   1000000|     100|0.70586167|  PASSED  
      rgb_lagged_sum|  31|   1000000|     100|0.58423701|  PASSED  
      rgb_lagged_sum|  32|   1000000|     100|0.46024310|  PASSED  
     rgb_kstest_test|   0|     10000|    1000|0.09525747|  PASSED  
     dab_bytedistrib|   0|  51200000|       1|0.72957242|  PASSED  
             dab_dct| 256|     50000|       1|0.67061200|  PASSED  
Preparing to run test 207.  ntuple = 0
        dab_filltree|  32|  15000000|       1|0.44986941|  PASSED  
        dab_filltree|  32|  15000000|       1|0.55626134|  PASSED  
Preparing to run test 208.  ntuple = 0
       dab_filltree2|   0|   5000000|       1|0.06208234|  PASSED  
       dab_filltree2|   1|   5000000|       1|0.65494424|  PASSED  
Preparing to run test 209.  ntuple = 0
        dab_monobit2|  12|  65000000|       1|0.68823690|  PASSED  

However, bear in mind that:

Most of the tests in Diehard return a p-value, which should be uniform on [0,1) if the input file contains truly independent random bits. Those p-values are obtained by p=F(X), where F is the assumed distribution of the sample random variable X—often normal. But that assumed F is just an asymptotic approximation, for which the fit will be worst in the tails. Thus you should not be surprised with occasional p-values near 0 or 1, such as .0012 or .9983.

When a bit stream really FAILS BIG, you will get p’s of 0 or 1 to six or more places. By all means, do not, as a Statistician might, think that a p < .025 or p > .975 means that the RNG has “failed the test at the .05 level”. Such p’s happen among the hundreds that Diehard produces, even with good RNG’s. So keep in mind that “p happens”.

Printing raw binary data to stdout with Go

This is probably just me, but I found it surprisingly hard to output some raw binary data to stdout using Go (obviously it’s easy when tha knows how!). I [insert search engine here]ed it pretty hard, but couldn’t find any examples.

Using any of the methods from the fmt package resulted in a representation being printed, rather than the raw binary:

fmt.Println(gen.PseudoRandomData(16))

$ go run src/print-binary1.go
[19 234 189 238 126 50 65 77 46 130 105 26 50 12 217 195]

Then I looked at the binary package, but the only example was writing to a bytes.Buffer. Eventually, I found a breadcrumb in one of the mailing lists that led me to the solution:

binary.Write(os.Stdout, binary.LittleEndian, gen.PseudoRandomData(16))
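
In hindsight, since PseudoRandomData already returns a []byte, the same bytes can be written straight to os.Stdout without going through encoding/binary; a drop-in alternative for the line above (same assumptions about gen and the imports):

// os.Stdout is an io.Writer, so the byte slice can be written directly.
if _, err := os.Stdout.Write(gen.PseudoRandomData(16)); err != nil {
	panic("write failed: " + err.Error())
}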

Multiple configuration files for LogStash

If you have multiple applications running on a box with the logstash agent, you probably want to split the config file up a bit. According to this thread, you can pass a directory as the value of the --config argument:

/opt/logstash/bin/logstash agent --config /etc/logstash.d --log /var/log/logstash/logstash.log

and the files in that directory will be concatenated in lexicographical order.

This means you need some sort of scheme to keep your inputs before your outputs, etc. I’m not too bothered about the order of the individual inputs, so I prefixed each category with a different number:

$ ls -la /etc/logstash.d
total 32
drwxr-xr-x  2 logstash logstash 4096 Apr 14 13:42 .
drwxr-xr-x 98 root     root     4096 Apr 16 01:28 ..
-rwxr-xr-x  1 logstash logstash  140 Jan  8 13:34 1_foo_inputs.conf
-rwxr-xr-x  1 logstash logstash  122 Jan  8 12:55 1_bar_inputs.conf
-rwxr-xr-x  1 logstash logstash   99 Jan  8 12:06 1_syslog_inputs.conf
-rwxr-xr-x  1 logstash logstash  237 Jan  8 13:34 2_foo_filters.conf
-rwxr-xr-x  1 logstash logstash  229 Jan  8 12:55 2_bar_filters.conf
-rwxr-xr-x  1 logstash logstash  170 Apr 14 13:42 3_output.conf

ensuring that inputs will be before filters, and both will be before outputs.

This means that the logstash config for each app can be deployed with it, without needing to amend a shared file.
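
As an illustration (the contents below are entirely hypothetical), 1_foo_inputs.conf and 3_output.conf might contain something like:

# 1_foo_inputs.conf
input {
    file {
        type => "foo-app"
        path => [ "/var/log/foo-app/*.log" ]
    }
}

# 3_output.conf
output {
    elasticsearch {
        host => "localhost"
    }
}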

LogStash & Bunyan, a match made in heaven

Every platform has a plethora of logging libraries, each with their own good and bad points. The first node.js library we tried was winston. It had all the usual features: multiple logging levels, different outputs (we actually just use stdout, and the daemon pipes it to a logfile). But in production we wanted to use logstash to aggregate and parse the logs.

Unfortunately winston suffers from a common problem: exception stack traces are split across multiple lines. It should be possible to use the multiline codec to merge the lines, but around this time I ran across bunyan.

The manifesto from Joyent explains the merits of logging in JSON (or some other machine-parseable format), and it’s very easy to configure a logstash file input for a JSON file just by setting the codec:

input {
    file {
        type => "foo-app"
        path => [ "/var/log/foo-app/*.log" ]
        codec => "json"
    }   
}

Now, when your logs are shipped to the logstash server, each field will be indexed individually without needing to grok any crazy patterns.

It’s also very simple to add fields to the logger at runtime:

var logger = bunyan.createLogger({
    name: name,
    stream: process.stdout,
    level: 'info',
    anExtraField: 'foo'
});

...

logger.fields.anotherExtraField = 'bar';

which will also be searchable in logstash, making it easy to trace a request across multiple services, for example.

UPDATED: to add some sample output

{"name":"foo","hostname":"pikachu","pid":7645,"level":30,"msg":"Listening on port 1234","time":"2014-08-12T16:14:19.667Z","v":0}
{"name":"foo","hostname":"pikachu","pid":7645,"level":30,"msg":"Service started","time":"2014-08-12T16:14:19.673Z","v":0}
{"name":"foo","hostname":"pikachu","pid":7645,"level":30,"msg":"Received request for abc","time":"2014-08-12T16:32:28.600Z","v":0}
{"name":"foo","hostname":"pikachu","pid":7645,"level":50,"msg":"ERROR: bad request","time":"2014-08-12T16:32:28.600Z","v":0}

As the levels are integers (e.g. 30 => INFO, 50 => ERROR), you can ask logstash for all entries where level >= ? (e.g. >= 50 to get all ERRORs & FATALs).
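
As one way of using the numeric level at ingest time, a logstash filter conditional could tag anything at error level or above; this is just a sketch, and the field and tag names are illustrative:

filter {
    if [level] >= 50 {
        mutate {
            add_tag => [ "error" ]
        }
    }
}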

Error building node module in a Vagrant box with NFS

I was trying, and failing, to install node-heapdump:

vagrant@debian7:/vagrant$ npm install heapdump
npm WARN package.json execSync@1.0.1-pre No repository field.
npm http GET https://registry.npmjs.org/heapdump
npm http 304 https://registry.npmjs.org/heapdump

> heapdump@0.2.7 install /vagrant/node_modules/heapdump
> node-gyp rebuild

make: Entering directory `/vagrant/node_modules/heapdump/build'
  CXX(target) Release/obj.target/heapdump/src/heapdump.o
  CXX(target) Release/obj.target/heapdump/src/platform-posix.o
  SOLINK_MODULE(target) Release/obj.target/heapdump.node
flock: ./Release/linker.lock: No locks available
make: *** [Release/obj.target/heapdump.node] Error 71
make: Leaving directory `/vagrant/node_modules/heapdump/build'
gyp ERR! build error 
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/home/vagrant/.nvm/v0.10.26/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:267:23)
gyp ERR! stack     at ChildProcess.EventEmitter.emit (events.js:98:17)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (child_process.js:797:12)
gyp ERR! System Linux 3.2.0-4-686-pae
gyp ERR! command "node" "/home/vagrant/.nvm/v0.10.26/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /vagrant/node_modules/heapdump
gyp ERR! node -v v0.10.26
gyp ERR! node-gyp -v v0.12.2
gyp ERR! not ok 
npm ERR! heapdump@0.2.7 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the heapdump@0.2.7 install script.
npm ERR! This is most likely a problem with the heapdump package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node-gyp rebuild
npm ERR! You can get their info via:
npm ERR!     npm owner ls heapdump
npm ERR! There is likely additional logging output above.

npm ERR! System Linux 3.2.0-4-686-pae
npm ERR! command "/home/vagrant/.nvm/v0.10.26/bin/node" "/home/vagrant/.nvm/v0.10.26/bin/npm" "install" "heapdump"
npm ERR! cwd /vagrant
npm ERR! node -v v0.10.26
npm ERR! npm -v 1.4.3
npm ERR! code ELIFECYCLE
npm ERR! 
npm ERR! Additional logging details can be found in:
npm ERR!     /vagrant/npm-debug.log
npm ERR! not ok code 0

A GitHub issue suggested that cloning the repo and running node-gyp rebuild might give a better error, but that succeeded. Searching for the error message “flock: ./Release/linker.lock: No locks available” pointed the finger at the share being mounted over NFS. Unfortunately, after much research and fiddling, the best solution I could come up with was to temporarily switch the vagrant share back to a VirtualBox one.
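
In a Vagrantfile that enables NFS for the synced folder, that means temporarily dropping the NFS option, roughly like this (the exact options will depend on your setup):

# Comment out the NFS option to fall back to the default VirtualBox
# shared folder while building native modules.
# config.vm.synced_folder ".", "/vagrant", type: "nfs"
config.vm.synced_folder ".", "/vagrant"

After a vagrant reload, the install went through: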

vagrant@debian7:/vagrant$ npm install heapdump
npm WARN package.json execSync@1.0.1-pre No repository field.
npm http GET https://registry.npmjs.org/heapdump
npm http 304 https://registry.npmjs.org/heapdump

> heapdump@0.2.7 install /vagrant/node_modules/heapdump
> node-gyp rebuild

make: Entering directory `/vagrant/node_modules/heapdump/build'
  CXX(target) Release/obj.target/heapdump/src/heapdump.o
  CXX(target) Release/obj.target/heapdump/src/platform-posix.o
  SOLINK_MODULE(target) Release/obj.target/heapdump.node
  SOLINK_MODULE(target) Release/obj.target/heapdump.node: Finished
  COPY Release/heapdump.node
make: Leaving directory `/vagrant/node_modules/heapdump/build'
heapdump@0.2.7 node_modules/heapdump

Success!

FATAL ERROR: JS Allocation failed - process out of memory

We recently started seeing a spate of node processes dying with the unhelpful error:

FATAL ERROR: JS Allocation failed - process out of memory

Unfortunately, there’s no call stack provided. A StackOverflow answer suggested installing the OFE package, which got us a “heapdump”, but it still wasn’t obvious what the problem was.

Eventually we spotted a pattern in the logs, and adding more logging led to a reproducible test case. It turned out that the bug was caused by using an array instead of an object (“hashmap”):

var foo = [];
foo[1000000001] = {};
console.log('got here');
console.log(foo);
console.log('boom!');

Assigning to the array at such a high index caused a huge, sparse array to be created, and trying to print that array caused the process to consume all the available memory.

UPDATE: we’re not the only ones to run into this issue.

Uploading data from a Garmin Forerunner 305 on Arch Linux

I have a Garmin Forerunner 305 GPS watch that I use to record running data, which I upload to Strava. I recently had to get this working with Arch, and I wanted to record the steps required, both for posterity and in case I need to do it again.

To get it working, you need to install garminplugin from the AUR. It has a dependency on garmintools, so you need to install that manually as well, as makepkg won’t resolve AUR dependencies for you. You then need to rmmod the garmin_gps kernel module, and blacklist it so it isn’t loaded again.
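
Something along these lines should do it (the modprobe.d file name is arbitrary):

sudo rmmod garmin_gps
echo "blacklist garmin_gps" | sudo tee /etc/modprobe.d/garmin_gps.conf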

At this point, you can run:

garmin_get_info -v

If you get back a gobbet of XML, then all is well, and you should be able to upload data to Strava or Garmin Connect. If you get a permission error, then you need to add a udev rule. Create a file named /etc/udev/rules.d/51-garmin.rules containing:

ATTRS{idVendor}=="091e", ATTRS{idProduct}=="0003", MODE="666"

(If you are using a different device, you’ll need the appropriate vendor and product IDs.) After reloading the udev config (e.g. with udevadm control --reload-rules) and replugging the watch, you should be able to access your device.

UPDATE: this has stopped working for me; I can still get the xml off the watch but the browser plugin doesn’t work (on Strava anyway). I’ll update this if I ever get it working again.

Faking a response to a gen_server call

I’ve been trying to write a unit test for an FSM that makes a call to a gen_server:

Result = gen_server:call(Pid, foo).

I wanted to fake the response to the call, without needing to spin up a gen_server instance. My first attempt didn’t really get me anywhere:

Result = ok,
Pid = spawn(fun() ->
    receive
        _ -> Result
    end
end).

But some digging through the OTP source revealed the solution:

Result = ok,
Pid = spawn(fun() ->
    receive
        %% gen_server:call sends {'$gen_call', {Caller, Ref}, Request}
        %% and then waits for a {Ref, Reply} message in return.
        {_, {Caller, Ref}, _} -> Caller ! {Ref, Result}
    end
end).