Stopping a locust

Locust is a load testing tool that allows you to script your actions in Python (much more pleasant than fiddling with JMeter).

I was investigating an issue that only happened under load (it turned out to be starvation of the db connection pool). The trouble was, once it had happened, the logs filled up with errors that weren't useful information. I wanted the load test to stop as soon as the first error occurred.

I couldn’t find anything in the documentation, but a quick nose in the source revealed the existence of a StopLocust exception:

import json

from locust.exception import StopLocust


response = self.client.post(url, data=json.dumps(data), headers=headers)
if response.status_code != 200:
    raise StopLocust()

You can raise it at any point, and that locust will stop. If you want to stop them all, you can set a shared flag and check it in your other tasks (not ideal, but the best I can offer).
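A minimal sketch of that flag pattern, using a plain threading.Event in place of the real Locust plumbing (`stop_all` and `guarded_task` are illustrative names, not part of the Locust API; in a real locustfile you would raise StopLocust where this raises RuntimeError):

```python
import threading

# Shared flag, visible to every locust in this process (illustrative name).
stop_all = threading.Event()


def guarded_task(do_request):
    """Wrap a task body: bail out if any locust has already failed,
    and set the shared flag on the first failure we see ourselves.

    In a locustfile, raise StopLocust instead of RuntimeError.
    """
    if stop_all.is_set():
        raise RuntimeError("another locust hit an error, stopping")
    if not do_request():
        stop_all.set()
        raise RuntimeError("first error seen, stopping")
```

Since Locust v0.7 runs its simulated users as gevent greenlets inside one process, a module-level flag like this is visible to all of them (in distributed mode you'd need something shared across slaves).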

Remember this is undocumented behaviour, and could change at any time, but it works in v0.7.2.

Load testing TIBCO EMS with JMeter

The JMeter documentation does an excellent job of describing how to set up a test plan for a JMS provider. However, there are still a few provider-specific details you need to know:

Make sure you’ve copied all the jars from TIBCO_HOME\ems\6.0\lib to jmeter\lib!

The InitialContextFactory for TIBCO EMS 6.0 is:

  com.tibco.tibjms.naming.TibjmsInitialContextFactory

A fault tolerant Provider URL looks like:

  tibjmsnaming://machine1:7222,tibjmsnaming://machine2:7222

It’s also worth setting the reconnect_attempt_delay (or count) reasonably high in your connection factory, as it can take a while for failover to succeed.

  type                     = topic
  url                      = tcp://machine1:7222,tcp://machine2:7222
  reconnect_attempt_delay  = 10000

If you’re using durable subscribers, you can only have one thread (user) per subscriber, as the client ID must be unique. The permissions required are also different (durable & use_durable).
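For reference, an acl.conf sketch granting those permissions (the topic and user names are made up; check the EMS documentation for your version):

```
  TOPIC=load.test.topic  USER=jmeter  PERM=subscribe,durable,use_durable
```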

You can’t use MapMessage; stick with TextMessage.

There’s a bug in JMeter 2.5.1 (fixed in the trunk) that may cause you to see NPEs when getting the connection (InitialContextFactory.lookupContext doesn’t use the map/cache properly).