Adding a domain user to a local group using PowerShell

Following on from the fun of giving write permissions on a folder to a user, today’s installment covers adding a domain user to a local group.

Specifically, the group “Performance Log Users”, which allows a process to use (rather than create) perf counters.

function Add-UserToPerformanceLogUsersGroup($user, $session) {
  Invoke-Command -Args $user -Session $session -ErrorAction Stop -ScriptBlock {
    param($user)

    $groupName = "Performance Log Users"
    $group = [ADSI]("WinNT://$env:COMPUTERNAME/$groupName,group")

    # Check if the user is already a member (using $existing rather than
    # $matches, which is an automatic variable in PowerShell)
    $members = @($group.psbase.Invoke("Members"))
    $existing = $members | where { $_.GetType().InvokeMember("Name", 'GetProperty', $null, $_, $null) -eq $user.split("\")[1] }

    if ($null -eq $existing) {
      Write-Host "Adding $user to $groupName group"
      $user = $user.replace("\", "/")
      $group.add("WinNT://$user,user")
    }
  }
}

Caveat: the user specified is assumed to be a fully qualified DOMAIN\User, hence the unpleasant string manipulation.
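For completeness, calling it looks something like this (the host name, credentials and user below are placeholders, not our actual setup):

```powershell
# Open a remote session to the target machine (hypothetical host name)
$session = New-PSSession -ComputerName "web01" -Credential (Get-Credential)

# The user must be fully qualified, as per the caveat above
Add-UserToPerformanceLogUsersGroup "MYDOMAIN\svc.myapp" $session

Remove-PSSession $session
```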

TeamCity build steps are not the way forward

I have a lot of love for TeamCity, and one of the many things that makes it easy to get a build configuration up and running is its concept of “build steps”.

Recently, however, I’ve started to worry about our use (or abuse) of them. My main concern is the fact that it can make it very hard to reproduce a failing build (works on my machine!).

There are always going to be some differences between the build running on the CI server and my local build (hardware, installed software, timing, etc), but I’d like to reduce them to the bare minimum. It’s bad enough that I’m running a sh*tty laptop with XP, while the build agent is on 64 bit Server 2008, without having a completely different build process.

I was never a fan of MSBuild, but at least having a build script in with the source meant that you could run a local build before checking in.

Some of our build configs now have up to 8 build steps (StyleCop, NuGet install & update, build, test, source index, NuGet pack & publish, etc), any of which could cause a build to fail without being easily reproduced locally. Which encourages people to just check in, and hope for the best. And the slow feedback cycle when trying to fix an issue is a real gumption trap.

So, what’s the alternative? I couldn’t face going back to an XML-based build tool (e.g. NAnt or MSBuild), but a scripting language like Ruby (Rake) or PowerShell is ideal. In our case, I’d like to replace the steps with a build.ps1 (and some shared modules for common tasks).
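As a sketch of the direction (the task functions and module path here are hypothetical, not our actual build):

```powershell
# build.ps1 - one entry point, runnable locally and on the build agent
param([string[]] $tasks = @("Clean", "Build", "Test"))

# Shared helpers for common tasks (hypothetical module)
Import-Module .\build\Tasks.psm1

foreach ($task in $tasks) {
  Write-Host "Running task: $task"
  & "Invoke-$task"      # e.g. Invoke-Clean, Invoke-Build, Invoke-Test
  if (-not $?) { exit 1 }
}
```

Each TeamCity build config then shrinks to a single step that calls build.ps1, and the exact same command runs on a developer machine before check-in.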

Adding write permissions to a folder using PowerShell

As part of our deployment process, we need to give an IIS app pool identity write permissions on a log folder.

There are a few articles describing how to set permissions using powershell, but getting the incantation exactly right was a bit tricky.

So, for future reference, here it is:

function Set-RightsForAppPoolOnLogFolder($appPoolName, $session) {
  Write-Host "Setting app pool identity write rights on log folder"
  
  Invoke-Command -Args $appPoolName -Session $session -ErrorAction Stop -ScriptBlock {
    param($appPoolName)
    
    $logFolder = "D:\Logs"
    $acl = Get-Acl $logFolder
    $identity = "IIS AppPool\$appPoolName"
    $fileSystemRights = "Write"
    # Apply the rule to child folders and files, not just the folder itself
    $inheritanceFlags = "ContainerInherit, ObjectInherit"
    $propagationFlags = "None"
    $accessControlType = "Allow"
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($identity, $fileSystemRights, $inheritanceFlags, $propagationFlags, $accessControlType)
    $acl.SetAccessRule($rule)
    Set-Acl $logFolder $acl
  }
}
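If you want to double-check that the rule landed, Get-Acl can show you (using the same log folder path as in the script above):

```powershell
# List the access rules granted to app pool identities on the log folder
(Get-Acl "D:\Logs").Access | Where-Object { $_.IdentityReference -like "IIS AppPool\*" }
```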

Working with binary dependencies

We have a reasonably complex build pipeline, using TeamCity & NuGet. This is generally a Good Thing, but there are occasions when it becomes tempting to go back to having one big solution.

The main problem is the length of the feedback loop: you check some code in, wait for a build, and some tests, and some more tests. Then it triggers another build, and some tests, and some more tests.

And eventually the change arrives at the place you need it. Assuming you didn’t make any dumb mistakes, there’s no network issues, etc etc.

This can sap productivity, especially once you start perusing the internets :)

The alternative is to copy the dlls from one source tree to another: an arduous process, and easy to get wrong. So script it:

function ripple([string] $project, [string] $source, [string] $target) {
  # Find the latest NuGet package folder for the project in the target tree
  # (note: the sort is lexicographic, so e.g. 1.10.0 can sort before 1.9.0)
  $targetNugget = gci "$target\packages" -r -i "$project.*" | Where {$_.psIsContainer -eq $true} | Sort-Object -Descending | Select-Object -First 1
  # Overwrite the package's lib folder with the project's build output
  gci "$source\$project\bin\*" -r -i "$project.*" | foreach { cp -v $_ "$targetNugget\lib\net40" }
}

Usage:

$packages = "Project1", "Project2"
foreach ($p in $packages) { ripple $p "C:\code\Solution1\src" "C:\code\Solution2\src" }

This will copy the build artifacts for Project1 (i.e. bin\*\Project1.*) in Solution1 to the highest Project1 NuGet package in Solution2 (e.g. packages\Project1.3.1.0.456).

(In case it’s not obvious, the name is an homage to the tool being developed for the same purpose by the FubuMVC team)

PowerShell equivalent of Capistrano’s “run”

Automating deployments using PowerShell & WinRM is, by turns, both awesome and deeply frustrating.

One of the main stumbling blocks, for me anyway, is which side of the machine boundary the code should be evaluated on.

When using Invoke-Command with a script block, any variables inside the block are evaluated remotely (assuming an existing remote session in $session):

$foo = "foo"
Invoke-Command -Session $session -ScriptBlock {
    Write-Host "Foo: $foo"   # prints "Foo: " - $foo is not defined on the remote machine
}

If you want to use a local variable on the remote machine, you need to force the evaluation earlier. I ended up with something inspired by Capistrano’s run action.

function run([string] $command, $session)
{
  $runBlock = [ScriptBlock]::Create($command)
  try
  {
    Invoke-Command -Session $session -ScriptBlock $runBlock -ErrorAction Stop
  }
  catch
  {
    $_.Exception
    exit 1
  }
}

$foo = "foo"
run "write-host `"Foo: $foo`"" $session

Alternatively, you can pass the args to Invoke-Command explicitly:

Invoke-Command -Args $foo, $bar -Session $session -ErrorAction Stop -ScriptBlock {
  param($foo, $bar)
  
  Write-Host "Foo: $foo"
  Write-Host "Bar: $bar"
}
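(If you’re on PowerShell 3.0 or later, there’s a third option: the $using: scope modifier, which captures the value of a local variable inside the remote script block.)

```powershell
$foo = "foo"
Invoke-Command -Session $session -ErrorAction Stop -ScriptBlock {
  # $using:foo is replaced with the local value of $foo (PowerShell 3.0+)
  Write-Host "Foo: $using:foo"
}
```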

Source indexing with PowerShell (and TeamCity)

Source indexing is definitely a “best practice”, when developing libraries that will be referenced as binaries.

Getting it working as part of a CI build can be a bit fiddly though.

There are a few pre-requisites for the build agent:

  1. Perl (>= 5.6): Strawberry Perl portable, for example
  2. SrcSrv: Part of the Debugging Tools for Windows
  3. SVN: command line e.g. Win32SVN (zip install)

All the above tools can be xcopy installed, but feel free to use an msi.

To index your PDBs, you need to run svnindex. Using TeamCity, you can add a PowerShell build step (in our case, after build & test, and before NuGet packaging):

function srcIndex([string] $project)
{
  # svnindex.cmd ships with SrcSrv; it embeds SVN paths & revisions in the PDBs
  & svnindex.cmd /debug /source=$project /symbols="$project\bin"
}

write-host "Updating path"
$env:path = "$env:path;D:\perl\perl\site\bin;D:\perl\perl\bin;D:\perl\c\bin;D:\srcsrv;D:\svn\bin"
write-host "Path: $env:path"
$env:term = "dumb" #strawberry perl portable specific

srcIndex "%system.teamcity.build.checkoutDir%\src\MyProject"

We start by updating the path, to include the location of the necessary exes (you can skip this if you used an installer for the pre-reqs). We then point svnindex at the source, and symbols.

If you want to index all your projects, you can loop over them:

gci "%system.teamcity.build.checkoutDir%\src" -r -i *.csproj | foreach { srcIndex $_.fullname }

The working dir for the build step needs to be the drive root e.g. D:\, as the indexing scripts don’t like relative paths (and you’ll see the dreaded “… zero source files found …”).

Then you just need to enable Source Server support in Visual Studio (uncheck “Just my code”), and luxuriate in full source debugging!

EDIT: Make sure the VCS root is set to checkout on the agent, not the server, as the information required is in the SVN repo.
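To sanity-check the result, srctool (also part of SrcSrv) can list the files indexed into a PDB (the path here is illustrative):

```powershell
# Dumps the source-indexed files embedded in the PDB; no output means nothing was indexed
& srctool -r "D:\build\src\MyProject\bin\MyProject.pdb"
```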

Encrypting external config sections (using PowerShell)

The .NET Framework allows you to encrypt sections of your configuration files, e.g. connection strings. If they live in the web.config, it’s very simple:

aspnet_regiis -pe "connectionStrings"

Unfortunately, for those of us who like to keep our connection strings in an external config section, it can be a little more convoluted.

A bit of Googling turned up a couple of blog posts & Stack Overflow answers pointing in the right direction, and after a few hiccups (encrypting the machine.config by accident!) here’s a script that does the job:

param(
  [String] $configFilePath = $(throw "Config file path is mandatory"),
  [String] $sectionName = "connectionStrings",
  [String] $dataProtectionProvider = "DataProtectionConfigurationProvider"
)
 
#The System.Configuration assembly must be loaded
$configurationAssembly = "System.Configuration, Version=2.0.0.0, Culture=Neutral, PublicKeyToken=b03f5f7f11d50a3a"
[void] [Reflection.Assembly]::Load($configurationAssembly)
 
$configurationFileMap = New-Object -TypeName System.Configuration.ExeConfigurationFileMap
$configurationFileMap.ExeConfigFilename = $configFilePath
$configuration = [System.Configuration.ConfigurationManager]::OpenMappedExeConfiguration($configurationFileMap, [System.Configuration.ConfigurationUserLevel]"None")
$section = $configuration.GetSection($sectionName)
 
if (-not $section.SectionInformation.IsProtected)
{
  Write-Host "Encrypting configuration section..."
  $section.SectionInformation.ProtectSection($dataProtectionProvider)
  $section.SectionInformation.ForceSave = $true
  $configuration.Save([System.Configuration.ConfigurationSaveMode]::Modified)
  Write-Host "Succeeded!"
}
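Usage is something like this (the script name and config path are hypothetical):

```powershell
.\Encrypt-ConfigSection.ps1 -configFilePath "D:\MyApp\connectionStrings.config"
```

Note that you should run it on the machine that will read the config: DataProtectionConfigurationProvider uses DPAPI, so by default the section can only be decrypted on the box where it was encrypted.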