So in basically every job I have had in the last twenty years, I have had to write some code to enable Windows privileges, such as the backup or restore privileges, which are required to import or export registry keys. It’s a pretty common operation and there is a convenient example on MSDN that shows how to do it. I believe that every instance of writing this code I have seen followed the same pattern: Get a thread token, or a process token if there is no thread token, then adjust the privilege on that token. Then undo the whole thing when you no longer need the privilege.

But recently I encountered a situation where there is large-scale threading going on, and this technique really backfires. The threads quickly start to disable privileges that another thread needs, etc. I started to look into a better solution, and realized that I needed a thread token for each thread so that the privileges could be adjusted on that specific thread only. I knew that the way to get a thread token was to impersonate a user, but I didn’t need to be impersonating another user in this case.

Enter the ImpersonateSelf Windows API, which is designed for exactly this situation. It creates a thread token for the current thread, which means the enable-privilege code can then be safely called against the thread token (NOT the process token). This is a pretty straightforward process, but based on my experience I don't think it is commonly done correctly. The code I have seen everywhere has definitely not been thread-safe.
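As a sketch of what that looks like in C# (the P/Invoke signatures are the standard advapi32/kernel32 calls; error handling is abbreviated, and the `ThreadPrivilege` class name and `Enable`/`Disable` helpers are my own invention for illustration):

```csharp
using System;
using System.Runtime.InteropServices;

static class ThreadPrivilege
{
    const int SecurityImpersonation = 2;
    const uint TOKEN_ADJUST_PRIVILEGES = 0x0020;
    const uint TOKEN_QUERY = 0x0008;
    const uint SE_PRIVILEGE_ENABLED = 0x0002;

    [StructLayout(LayoutKind.Sequential)]
    struct LUID { public uint LowPart; public int HighPart; }

    [StructLayout(LayoutKind.Sequential)]
    struct TOKEN_PRIVILEGES
    {
        public uint PrivilegeCount;
        public LUID Luid;
        public uint Attributes;
    }

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool ImpersonateSelf(int impersonationLevel);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool OpenThreadToken(IntPtr thread, uint desiredAccess,
        bool openAsSelf, out IntPtr token);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LookupPrivilegeValue(string system, string name, out LUID luid);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool AdjustTokenPrivileges(IntPtr token, bool disableAll,
        ref TOKEN_PRIVILEGES newState, uint bufferLength, IntPtr previousState,
        IntPtr returnLength);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool RevertToSelf();

    // Enable a privilege (e.g. "SeBackupPrivilege") on THIS thread only.
    public static void Enable(string privilege)
    {
        // Create an impersonation token for the current thread;
        // other threads in the process are unaffected.
        if (!ImpersonateSelf(SecurityImpersonation))
            throw new InvalidOperationException("ImpersonateSelf failed");

        IntPtr token;
        if (!OpenThreadToken(GetCurrentThread(),
                TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, true, out token))
            throw new InvalidOperationException("OpenThreadToken failed");

        LUID luid;
        LookupPrivilegeValue(null, privilege, out luid);
        var tp = new TOKEN_PRIVILEGES
        {
            PrivilegeCount = 1,
            Luid = luid,
            Attributes = SE_PRIVILEGE_ENABLED
        };
        AdjustTokenPrivileges(token, false, ref tp, 0, IntPtr.Zero, IntPtr.Zero);
        CloseHandle(token);
    }

    // When done, drop the thread token entirely rather than
    // disabling the privilege on a shared token.
    public static void Disable() { RevertToSelf(); }
}
```

Because each thread impersonates itself and adjusts only its own token, no thread can stomp on the privileges another thread is relying on.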

I learned a new favorite kernel debugger trick tonight. I regularly have a kernel debugger attached while working on my driver, but tonight I experienced a crash in my user mode service. Not wanting to set up a new debugger inside the VM, I googled around and came up with the following:

!gflag +soe

This windbg command sets the stop-on-exception (soe) global flag, which makes all exceptions break into the kernel mode debugger first. Voila!

I ran across a problem this week where I needed to get the filename where an RSA encryption key was stored. These files are stored (for machine-scope keys) in C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys, and have a filename that looks like a hash value followed by a SID. This is easy to find if you have access to the key:

var csp = new CspParameters
{
    Flags = CspProviderFlags.NoPrompt |
            CspProviderFlags.UseMachineKeyStore |
            CspProviderFlags.UseExistingKey,
    KeyContainerName = ""
};

var crypto = new RSACryptoServiceProvider(csp);

// The filename is then available as:
//   crypto.CspKeyContainerInfo.UniqueKeyContainerName


But in my case I didn’t have access to the keyfile, as it had been created by another user and ACLed. The algorithm for deriving these filenames is not too difficult… It turns out you can take the container name, convert it to lowercase, add an extra null byte, compute the MD5 hash, and then convert the MD5 hash to a string in DWORD-sized chunks. Then you append the machine guid, which can be found in the registry.

public static class RsaCryptoServiceProviderExtensions
{
    public static string GetUniqueKeyContainerName(string containerName)
    {
        using (var rk = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\Cryptography"))
        {
            if (rk == null)
                throw new Exception("Unable to open registry key");

            var machineGuid = (string)rk.GetValue("MachineGuid");

            using (var md5 = MD5.Create())
            {
                // Lowercase the container name and append a single null byte
                var containerNameArray = Encoding.ASCII.GetBytes(containerName.ToLower());
                var originalLength = containerNameArray.Length;
                Array.Resize(ref containerNameArray, originalLength + 1);

                var hash = md5.ComputeHash(containerNameArray);
                var stringBuilder = new StringBuilder(32);
                using (var binaryReader = new BinaryReader(new MemoryStream(hash)))
                {
                    // Emit the 16-byte hash as four little-endian DWORDs in hex
                    for (var i = 1; i <= 4; i++)
                        stringBuilder.Append(binaryReader.ReadInt32().ToString("x8"));
                }

                stringBuilder.Append("_" + machineGuid);

                return stringBuilder.ToString();
            }
        }
    }
}
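Putting it together, the full path is just the derived name appended to the MachineKeys folder from above (a sketch; the container name here is made up, and this requires read access to the `MachineGuid` registry value):

```csharp
var name = RsaCryptoServiceProviderExtensions.GetUniqueKeyContainerName("MyContainer");
var path = Path.Combine(@"C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys", name);
Console.WriteLine(path);
```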

I recently had a co-worker who needed to instantiate and use a class from an assembly loaded at runtime. The code couldn’t reference the assembly directly for various reasons. This was accomplished relatively easily until he needed to assign the value of an enumerated type. So take the following class definition.

namespace DynamicAssembly
{
    public class MyClass
    {
        public enum MyEnum { ValueA, ValueB, ValueC }

        public MyEnum TheEnumValue { get; set; }
    }
}

From a project, the goal was to load the above assembly dynamically, instantiate a MyClass variable and then set TheEnumValue = MyEnum.ValueB. Really simple in normal code… a little more convoluted in dynamic runtime code. The solution I came up with is the following:

static void Main(string[] args)
{
    var p = Path.GetFullPath(@"..\..\..\DynamicAssembly\bin\Debug\DynamicAssembly.dll");
    var a = Assembly.LoadFile(p);

    var classType = a.GetType("DynamicAssembly.MyClass");
    dynamic classInstance = Activator.CreateInstance(classType);

    var enumType = a.GetType("DynamicAssembly.MyClass+MyEnum");
    var enumValues = enumType.GetEnumNames();
    var enumIndex = Array.IndexOf(enumValues, "ValueB");
    var enumValue = enumType.GetEnumValues().GetValue(enumIndex);

    classInstance.TheEnumValue = (dynamic)enumValue;
}

I would love to hear from you if you know of a better way to accomplish this.
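One possible simplification: `Enum.Parse` works on any reflected enum `Type`, so the `GetEnumNames`/`IndexOf`/`GetEnumValues` dance can collapse to a single call. A self-contained sketch (using a local enum as a stand-in for the dynamically loaded `MyClass+MyEnum`):

```csharp
using System;

class Demo
{
    public enum MyEnum { ValueA, ValueB, ValueC }

    static void Main()
    {
        // Stand-in for a.GetType("DynamicAssembly.MyClass+MyEnum")
        Type enumType = typeof(MyEnum);

        // One call replaces the name-index-value round trip
        object enumValue = Enum.Parse(enumType, "ValueB");

        Console.WriteLine(enumValue);  // prints "ValueB"
    }
}
```

The result is still an `object` of the reflected enum type, so the final `(dynamic)` cast assignment works the same way.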

Yesterday at Domo’s Domopalooza conference I had the opportunity to listen to Billy Beane, the GM of the Oakland Athletics MLB organization, and the subject of the 2003 book and 2011 film Moneyball. Beane spoke about how their organization used hard data to produce winning baseball teams with a drastically lower budget than some of the other winning MLB teams. I found the session fascinating and it got me thinking about the applications in hiring great software developers.

I’ve often thought that the way we hire software developers is not very effective. We try to gauge a developer’s talent by giving them little problems to solve: sometimes these are puzzles and brain-teasers, though these seem to be growing less common, and other times they are little programming problems to be solved on a whiteboard. While these can be fun, and even instructive, I don’t think they result in a good hiring decision.

I am intrigued by the idea that instead of little exercises (similar to a scout watching a baseball player in high school, or a tryout), we need to have some hard data to base our decisions on. The big difference between professional sports and software development, though, is that athletes have their performance constantly measured and recorded. MLB is full of stats: wins, losses, hits, on-base percentage, home runs, stolen bases, etc. With that data you can find out interesting things about the game, such as the fact that stolen bases contribute very little statistically to whether a game is won or lost.

While software developers don’t have performance measurements that are public, there are ways of measuring that performance: bugs written, lines of code produced, bugs resolved, etc. If we had statistics like these that were public, then maybe we would have a better way of finding and selecting the right developers for our projects. But within our own organizations we could keep track of these statistics and use them to manage our employees after they were hired.

But many software developers are now starting to become involved in social programming. Many of us are participating in code retreat days, programmer meetup groups, hack nights, open source projects, etc. Quite a few of us are even putting our side projects out in the open on sites like github or codeplex. What if we could process all that data and get statistics about what good programmers look like, and find ways to measure a programmer’s talent in a real way?

Of course, there are a lot of intangibles that still probably need to be interviewed for. It’s not worth hiring somebody that no one gets along with just because they have some skills. But if you could weed out the people who don’t have what you’re looking for, wouldn’t you be miles ahead in the interview process?

Ok, maybe the title came across a bit too strong. I actually really like the idea of executable packages being signed so I know where/who they came from. And for device drivers I can see why they have effectively made it mandatory.

But this last week I ran into a major road block with the Windows 8 SmartScreen filtering. Supposedly this is to keep me safe. I can even buy that requiring an installer to be signed, so you know where it comes from, implies a greater degree of reliability.

I have a software package that has been shipping for years and has always been signed, and now our digital certificate has expired and been renewed. For some reason, Microsoft has decided that this must mean that our software is untrustworthy. They have conveniently provided us with the opportunity to purchase a more expensive certificate for signing (EV code signing) that will make us immediately trustworthy.

But when we tried to go down that road, we ran into all kinds of road blocks. The EV certificate has to be on a hardware token, cannot be used on an Amazon EC2 instance (or any other cloud-based machine), and it also cannot be used on a VM of any kind (they informed me that this was a “security feature”). So my only option is to purchase dedicated hardware for the relatively rare situation where I need to perform a publicly released build.

It feels like so-called security companies don’t have a clue about usability. The down side of this is that the more they make security unusable, the less it will be used. There is a huge human factor to security that they just don’t want to admit exists.

It also feels like Microsoft is just trying to help generate revenue for signing certificate providers. If we have proven our identity, and created a reputation for our existing certificate, then the fact that we have to renew our certificate shouldn’t be a cause for lowering our reputation. Rather, Microsoft needs to provide a way for our reputation to migrate to the renewed certificate.

I ran across a bug in a project just the other day that I thought others could find interesting. In this project, I had a main thread that was listening for connections and then serving back some data. I also had a timer that would periodically trigger an update of the data that was being served. This was using a System.Threading.Timer and therefore was running on a secondary thread from the thread pool.

The problem was that the timer would run two or three times (at fifteen-minute intervals) and then just magically stop. I initially suspected locking issues between the threads, so I went through and locked everything that was shared, all to no avail.

And to make the problem even more frustrating, I couldn’t reproduce it in a debugger. I initially thought that this was perhaps because I was not patient enough to wait 45 minutes for it to happen. But it turned out to be a release vs. debug kind of problem: the release build had the problem, while the debug build didn’t seem to.

For research purposes, take the following little program as an example. It has a main thread that just sleeps the day away, and a timer that prints out a debug message every 5 seconds. If I run the debug build of this, it works great, but running the release build on my machine, the timer thread didn’t even run a single time! Waahh?!?!

class Program
{
  static long i = 0;

  static void TimerCallback(object state)
  {
    Debug.WriteLine("{0:D5}: TimerCallback", i++);
  }

  static void Main(string[] args)
  {
    // Trigger the callback every 5 seconds
    System.Threading.Timer t = new System.Threading.Timer(TimerCallback, null, 0, 5000);

    while (true)
    {
      Thread.Sleep(1000);
    }
  }
}

It turns out what is going on here is that the system is happily garbage collecting my Timer object. According to the system, that t variable never gets used after it is initialized, so it’s safe to just throw it away. If you look at the MSIL using the ILDASM tool, you see the following for the release build. Notice that it does a newobj to create the Timer object, and then rather than storing it in a local with something like stloc.0, it just pops it off the stack and doesn’t keep any reference to it.

IL_0013:  newobj     instance void [mscorlib]System.Threading.Timer::.ctor(class [mscorlib]System.Threading.TimerCallback,
                                                                           object,
                                                                           int32,
                                                                           int32)
IL_0018:  pop

The debug version of the same code looks like the following; note that it declares a local, and then stores the reference to the Timer object in that local.

.locals init ([0] class [mscorlib]System.Threading.Timer t,
           [1] bool CS$4$0000)
IL_0014:  newobj     instance void [mscorlib]System.Threading.Timer::.ctor(class [mscorlib]System.Threading.TimerCallback,
                                                                           object,
                                                                           int32,
                                                                           int32)
IL_0019:  stloc.0

Now once I figured out what was going on, fixing it was trivial. A using statement around the disposable Timer object keeps it in scope, and deterministically cleans it up when appropriate. (Of course, this is how the code should have been written in the first place, but look at the cool problem I got to figure out as a result of my lazy coding.)
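The fix, as a runnable sketch: the `using` keeps `t` reachable for the whole block, so the GC can't collect the live timer, and disposal is deterministic at the end. (I've shortened the intervals from the post's 5000 ms, and count callbacks instead of writing debug output, purely so the sketch finishes quickly.)

```csharp
using System;
using System.Threading;

class Program
{
    static int calls = 0;

    static void TimerCallback(object state)
    {
        Interlocked.Increment(ref calls);
    }

    static void Main()
    {
        // `t` is referenced by the using block for its full lifetime,
        // so it is never eligible for collection while ticking.
        using (var t = new Timer(TimerCallback, null, 0, 100))
        {
            Thread.Sleep(1000);
        }

        // Expect roughly 10 callbacks; with the collected-timer bug the
        // ticks would simply stop partway through.
        Console.WriteLine("callbacks: " + calls);
    }
}
```

An alternative, if you don't want the `using` block, is a `GC.KeepAlive(t)` call after the loop, which also convinces the JIT that the reference is still live.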

I have noticed an increasing trend recently of people who aren’t professional programmers wanting to learn coding, and a bunch of businesses and web sites have sprung up around this. I think that this is a great trend, but I also think it deserves a measured approach.

Every worker can likely improve their productivity by learning some basic coding skills. Whether it’s automating some data entry, or being able to maintain a simple website, we all have tasks that we do with computers that could probably be made more efficient with a little coding know-how. For example, I know someone who had to do data entry on a web form, transcribing it from an Excel spreadsheet, and they automated the process with a simple Excel macro. I have also had cases where I needed to rename a few hundred files with certain conventions, and shell scripts (batch files) fit the bill perfectly.
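For the file-renaming case, even a few lines of code pay for themselves. A sketch in C# (the folder and file names here are made up for the demo; it runs against a throwaway temp folder so it is safe to try, and the convention applied is simply "lowercase every filename"):

```csharp
using System;
using System.IO;

class RenameFiles
{
    static void Main()
    {
        // Set up a disposable demo folder with a couple of sample files;
        // point `dir` at a real folder to use this for real.
        var dir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(dir);
        File.WriteAllText(Path.Combine(dir, "Report-OLD.TXT"), "");
        File.WriteAllText(Path.Combine(dir, "Notes.TXT"), "");

        // The rename-by-convention loop: lowercase every filename.
        foreach (var file in Directory.GetFiles(dir))
        {
            var newName = Path.Combine(dir, Path.GetFileName(file).ToLowerInvariant());
            if (newName != file)
                File.Move(file, newName);
        }

        var renamed = Directory.GetFiles(dir);
        Array.Sort(renamed);
        foreach (var file in renamed)
            Console.WriteLine(Path.GetFileName(file));
    }
}
```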

But not everyone can or should try to be a professional programmer on the side. Being a professional anything takes time and dedication. While you may learn how to fix a leak under the kitchen sink or even replace a garbage disposal, it probably doesn’t make much sense for you to learn how to be a professional plumber while maintaining your day job. The same applies to auto mechanics skills, or electrician skills, or to legal skills and business management. Everyone can improve their life by acquiring some basic skills in all of these areas, but as soon as you start putting in too much effort, you will shortchange your primary pursuits and end up being a master of nothing.

My advice is to absolutely spend some time learning to code. Find the ways that you can invest smaller amounts of time to get the largest benefit. Don’t try to be a professional programmer… unless, of course, you want to give up being a professional whatever-you-are-now. In that case, by all means, dive right in and make a career switch! Programming is awesome!