Tracking network latency with SmokePing

I recently moved house, and since then my internets have been… flaky, at best. Ringing their customer service (yes, Virgin Media, you) is masochism at its purest, and I was repeatedly told that it was fine, no problems at all. (You do get to choose the hold music, though!)

I decided to get some stats, and see how bad it really was. A quick bit of duckduckgo-ing revealed a few options: PingPlotter & MRTG, for example. But I went with SmokePing in the end.

Installation is as easy as:

sudo apt-get install smokeping

This installs both the daemon that gathers the stats and a website for looking at pretty graphs. I found this handy dandy article about configuration, although a few things have changed in the latest version.

The config files can be found in /etc/smokeping/config.d. I updated the Targets file, and added a few hosts:

menu = Top
title = Network Latency Grapher
remark = Welcome to the SmokePing website of Graham's laptop. F*ck Virgin Media.

+ Local

menu = Local
title = Local Network

++ LocalMachine

menu = Local Machine
title = This host
host = localhost

+ UK

menu = UK
title = UK

++ BBC

menu = BBC
title = BBC
host = www.bbc.co.uk

++ Google

menu = Google
title = Google
host = www.google.co.uk

+ DotCom

menu = DotCom
title = DotCom

++ Google

menu = Google
title = Google
host = www.google.com

++ Virgin

menu = Virgin
title = Virgin
host = www.virginmedia.com

And bounced the daemon. Then it's just a matter of going to http://localhost/cgi-bin/smokeping.cgi and waiting for the graphs to fill out a bit.
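On a Debian/Ubuntu install, bouncing the daemon is just (service name assumed from the package):

```shell
# Restart smokeping so it picks up the edited Targets file
sudo service smokeping restart
```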

Graph of network latency

The bad news? All those spikes. Apparently it’s a “high utilisation” problem, and won’t be fixed for a while :(

Using mage.exe with .net 4 assemblies

If you’re using mage.exe directly to create an application manifest, and you’re seeing errors like this:

Warning MSB3178: Assembly <assembly> is incorrectly specified as a file.

and the assemblies flagged up are targeting .net 4, then you’re probably using the wrong (old) version. Try the one in C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools.
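For reference, invoking the right binary explicitly looks something like this (the output manifest name and bin directory are illustrative, not from the original error):

```shell
"C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools\mage.exe" -New Application -FromDirectory .\bin\Release -ToFile MyApp.exe.manifest
```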

Unable to generate a temporary class (result=1)

A recently deployed web service (IIS on Server 2008) was producing a YSOD with the error:

Unable to generate a temporary class (result=1)

Some cursory duckduckgo-ing suggested that the problem was a lack of permissions on the C:\Windows\Temp folder for the user that IIS was running as.

I compared the rights for that folder on the broken server with a working one, and the (local) IIS_IUSRS group had “special permissions”. Specifically, the right to “List folder / read data”.

Once that was set up, IIS was back in business. I presume this is something normally set up during IIS install, which either went wrong or was later corrupted.
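If you need to grant that right by hand, something along these lines should do it (icacls ACE syntax from memory, so double-check the flags; OI/CI make the grant inherit to files and subfolders, RD is "read data / list folder"):

```shell
icacls C:\Windows\Temp /grant "BUILTIN\IIS_IUSRS:(OI)(CI)(RD)"
```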

Activation woes with Windows 7 & VirtualBox

I run Windows 7 in VirtualBox (on top of Linux Mint), the idea being that when you’ve broken one VM, you can trash it and move on to a new one :)

To that end, I keep a clean baseline image with only Windows installed. When the time has come to pave my dev environment, I clone this VM and start re-installing (thanks Chocolatey!).

However, rather inconveniently, I found that even though the baseline copy of Windows had been activated, the cloned VMs somehow knew that something had changed and needed re-activation.

Normally the trigger for this is a change of MAC address, but VBox allows you to keep that the same (the VMs are never running at the same time, so having the same MAC address is not a problem).

Eventually, this got annoying enough that I did what I should have done in the first place: took it to the internet!

Apparently, the secret is to set an extra property called the hardwareuuid:

VBoxManage modifyvm <vmname> --hardwareuuid <uuid>

Once this is set, any cloned VMs will inherit it. No more activation!
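The full workflow then looks something like this (VM names are illustrative; `--register` just makes the clone show up in the VirtualBox GUI):

```shell
# Pin the baseline's hardware UUID so clones present identical "hardware" to Windows
VBoxManage modifyvm "Win7-Baseline" --hardwareuuid <uuid-of-baseline>

# Clones made from the baseline now inherit that same hardware UUID
VBoxManage clonevm "Win7-Baseline" --name "Win7-Dev" --register
```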

(Piracy is bad, m’kay. But I paid for Windows, and I want to use it as I wish).

Using conventions with the MEF

By default, Caliburn Micro uses the MEF. And, by default, the MEF uses attributes to identify components.

I’m pretty lazy and, given the choice, I’d rather use conventions and auto-registration. Luckily someone else was way ahead of me! And the MEFContrib project contains all the building blocks you need to create your own conventions.

I was originally planning on implementing a convention to register the first declared interface but, due to the vagaries of reflection, I took the easy way out and registered all interfaces from the same assembly (did I mention I’m lazy?).

    public class AppPartRegistry : PartRegistry
    {
        public AppPartRegistry()
        {
            this.Part().RegisterAllInterfaces();

            this.Scan(c => c.Assembly(typeof(AppPartRegistry).Assembly));
        }
    }

    public static class PartConventions
    {
        public static void RegisterAllInterfaces(this PartConventionBuilder<PartConvention> builder)
        {
            builder
                .ForTypesMatching(t => true)
                .ImportConstructor()
                .MakeNonShared()
                .Exports(e =>
                {
                    e.Export().Members(t => t.GetInterfaces().Where(i => i.Assembly == t.Assembly).ToArray());
                });
        }
    }

Update: Import (greediest) ctor, and make registrations transient.

Testing a Caliburn Micro bootstrapper

I like to have some tests that ensure I can get everything out of the container that I expect to need, before I fire up the application. (Although others would disagree). I mainly have tests for the composition root(s), i.e. anything I am going to try and resolve at runtime.

When developing a WPF app with Caliburn Micro, the container is owned by a Bootstrapper. Unfortunately, it’s not very testable, as most of the interesting methods are protected (but virtual).

The easiest thing to do is create a Test Specific Subclass, and expose the required methods. And then you will find that the base class attempts to initialize the app in the ctor (unless you can find some way to set a flag to false). Thankfully, that method is virtual too, so you can override it and just keep the bits you need:

    public class AppBootstrapperTests
    {
        private TestBootstrapper bootstrapper;

        [TestFixtureSetUp]
        public void FixtureSetUp()
        {
            this.bootstrapper = new TestBootstrapper();
        }

        [Test]
        public void Shell()
        {
            bootstrapper.GetInstance<IShell>();
        }

        private class TestBootstrapper : AppBootstrapper
        {
            public T GetInstance<T>()
            {
                try
                {
                    return (T)base.GetInstance(typeof(T), string.Empty);
                }
                catch (Exception)
                {
                    this.WhatDoIHave();
                    throw;
                }
            }

            protected override void StartRuntime()
            {
                this.Configure();
            }

            private void WhatDoIHave()
            {
                foreach (var definition in Container.Catalog.Parts)
                {
                    foreach (var exportDefinition in definition.ExportDefinitions)
                    {
                        Console.WriteLine(exportDefinition);
                    }
                }
            }
        }
    }

UPDATE: Added WhatDoIHave to TestBootstrapper.

Working with binary dependencies

We have a reasonably complex build pipeline, using TeamCity & NuGet. This is generally a Good Thing, but there are occasions when it becomes tempting to go back to having one big solution.

The main problem is the length of the feedback loop: you check some code in, wait for a build, and some tests, and some more tests. Then it triggers another build, and some tests, and some more tests.

And eventually the change arrives at the place you need it. Assuming you didn’t make any dumb mistakes, there are no network issues, etc. etc.

This can sap productivity, especially once you start perusing the internets :)

The alternative is to copy the dlls from one source tree, to another. An arduous process, and easy to get wrong. So script it:

function ripple([string] $project, [string] $source, [string] $target) {
  $targetNugget = gci "$target\packages" -r -i "$project.*" | Where {$_.psIsContainer -eq $true} | Sort-Object -Descending | Select-Object -First 1
  gci "$source\$project\bin\*" -r -i "$project.*" | foreach { cp -v $_ "$targetNugget\lib\net40" }
}

Usage:

$packages = "Project1", "Project2"
foreach ($p in $packages) { ripple $p "C:\code\Solution1\src" "C:\code\Solution2\src" }

This will copy the build artifacts for Project1 (i.e. bin\*\Project1.*) in Solution1 to the highest Project1 NuGet package folder in Solution2 (e.g. packages\Project1.3.1.0.456).

(In case it’s not obvious, the name is an homage to the tool being developed for the same purpose by the FubuMVC team)

Tibco EMS instance dies with Hibernate exception

If your Tibco EMS instance dies, and the last thing in the log looks like this:

ERROR: Failed to refresh lock for store '$sys.meta', [ERRSTR = org.hibernate.exception.JDBCConnectionException: 
could not load an entity: [com.tibco.tibems.tibemsd.internal.db.HBLock#1] ]

it’s probably due to connectivity problems with the DB. As this thread states: if the connection is lost, the service shuts down without trying to reconnect. Sadly it doesn’t bother to log the fact that it’s shutting down.

There’s also a DBSTORE logging option that may provide some further detail if enabled (we’ll see!).
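I haven't confirmed the exact syntax, but enabling it should just be a matter of adding the trace token to tibemsd.conf, something like:

```
# tibemsd.conf -- add DBSTORE to the log file trace options (syntax unverified)
log_trace = DEFAULT,+DBSTORE
```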

Updating a NuGet package fails saying it’s not installed

If you try to update a NuGet package, and get an error saying:

Update-Package : 'package' was not installed in any project. Update failed.

(but it’s definitely already in use), it may be because of this.

In my case, I’d got into the situation by previously updating the package, and then reverting the changes. Because we’re using the package restore functionality, this meant the package repo contained a newer version of the package than the packages.config referenced. Which didn’t go down well!

The solution is to delete the offending newer package (or just nuke the whole contents of the packages folder (except the repositories.config!), and re-build). You’ll need to close VS first, as it locks the files.
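On Windows you'd do the deletion in Explorer or PowerShell, but the cleanup can be sketched as a portable shell one-liner (the package folder name here is illustrative):

```shell
# Simulate a packages folder containing an offending newer package
mkdir -p packages/Project1.3.2.0.100
touch packages/repositories.config

# Nuke everything under packages except repositories.config
find packages -mindepth 1 -maxdepth 1 ! -name repositories.config -exec rm -rf {} +

ls packages   # → repositories.config
```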

Hopefully a future version of NuGet will provide a more informative message :)