Debug Xen Hosted Windows Kernel Over Network

Read the original at my company’s blog.

Blue screens are not a rarity when working with virtualization. Most of the time, full crash dumps do the trick, but sometimes live kernel debugging is required. Hard disk related crashes that prevent memory dumping are a good example of where it is required, but there are times when it’s just easier to follow the entire crash flow instead of just witnessing the final state.

Type 2 (hosted) virtualization usually comes with an easy solution. But type 1 (bare metal) virtualization, like Xen, complicates matters. Debugging must be offloaded to a remote Windows machine. The common solution, it seems, is to tunnel the hosted machine’s serial connection over TCP to another Windows machine where WinDBG is running, waiting anxiously for a bug check. There are many websites describing this setup in various component combinations. I have gathered here all the tricks I could find plus some more of my own to streamline the process and get rid of commercial software.

Let’s dive into the nitty-gritty little details, shall we?

Hosted Windows

Kernel debugging requires some boot parameters. Windows XP includes a utility called bootcfg.exe that makes this easy.

bootcfg /copy /id 1 /d "kernel debug"
bootcfg /raw "/DEBUG /DEBUGPORT=COM1" /id 2 /a
bootcfg /raw "/BAUDRATE=115200" /id 2 /a
bootcfg /copy /id 2 /d "kernel debug w/ break"
bootcfg /raw "/BREAK" /id 3 /a

This assumes you have only one operating system configured in the Windows boot loader. If the boot loader menu shows up when Windows boots, you might need to add the flags on your own to C:\boot.ini.

Xen Host

Windows will now try to access the serial port in search of a debugger. Xen’s domain configuration file can be used to forward the serial port over TCP. Locate your domain configuration file, usually under /etc/xen, and add the following line.
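The exact line depends on your setup; this is a sketch, assuming an HVM domain and the legacy xm configuration format, where serial redirection borrows QEMU’s syntax (port 4444 is an arbitrary choice):

```
serial='tcp::4444,server,nowait'
```

With this in place, Xen listens on TCP port 4444 and forwards everything to the guest’s COM1.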


Debugger Machine

The server side is set and it’s time to move on to the client. As previously mentioned, WinDBG doesn’t care for TCP. Instead of the usual TCP to RS-232 solution, named pipes are used here. I wrote a little application called tcp2pipe (download available at the bottom) which simply pumps data between a TCP socket and a named pipe. It takes three parameters – IP, port and named pipe path. The IP address is the address of the Xen host and the port is 4444. For the named pipe path, use \\.\pipe\XYZ, where XYZ can be anything.

tcp2pipe.exe <xen-ip> 4444 \\.\pipe\XYZ
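For the curious, the core of tcp2pipe is nothing more than a byte pump between two streams. A rough Python sketch of the idea, using generic file-like objects (the real tool opens a TCP socket and a Win32 named pipe instead):

```python
import threading

def pump(src, dst, bufsize=4096):
    """Copy bytes from src to dst until src reports end-of-stream."""
    while True:
        data = src.read(bufsize)
        if not data:
            break
        dst.write(data)

def pump_both_ways(a, b):
    """One pump per direction, so WinDBG and the debuggee can both talk."""
    threads = [
        threading.Thread(target=pump, args=(a, b), daemon=True),
        threading.Thread(target=pump, args=(b, a), daemon=True),
    ]
    for t in threads:
        t.start()
    return threads
```

The actual application adds error handling and the Win32 calls for creating and connecting the named pipe, but the data flow is essentially this.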

All that is left now is to fire up WinDBG and connect it to \\.\pipe\XYZ. This can be done from the menus, or from command line.

windbg -k com:pipe,port=\\.\pipe\XYZ

To make this even simpler, you can use kdbg.bat and pass it just the IP. It assumes WinDBG.exe is installed in c:\progra~1\debugg~1. If that’s not the case, you’ll have to modify it and point it to the right path.


Source code is included in the zip file and is in the public domain.

Download (mirror).

Happy debugging!

SourceForge tracker and SVN integration

TortoiseSVN and some other Subversion clients have a cool feature for integration with issue trackers. By setting a few properties for a project, you can have automatically generated links to tracked issues directly from SVN commit messages. TortoiseSVN also supports a special input method for issue numbers in commit boxes for easier integration.

The basic property that must be set is bugtraq:url, which lets SVN know how to form a URL from an issue number. This is a bit difficult with SourceForge, where there are in fact at least three trackers, each with its own URL scheme that includes the tracker identifier in the URL. Toying with the URL and removing both the group and tracker identifiers results in an error. After playing around some more, I found the following URL that works like a charm for every group and any tracker, so there isn’t even a need to figure out the group identifier.
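The property can be set along these lines (assuming SourceForge’s generic support/tracker.php redirector, with %BUGID% being TortoiseSVN’s placeholder for the issue number; verify the URL against your own project before relying on it):

```
bugtraq:url = http://sourceforge.net/support/tracker.php?aid=%BUGID%
```

The redirector resolves the group and tracker on its own, which would explain why no identifiers are needed.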

It took me a while to figure out and I couldn’t find any mention of it online, so I thought I’d post it here to help others.

For more information on all of the related properties see TortoiseSVN’s page.

Lorena Kichik

I don’t know why or how, but Google seems to think my little blog is worth something. Terms like “quake flag”, “triple double u”, “the green wire” , “atomic codes” and even “mr. angry pants” return some of my posts in the top 10 results and sometimes even as the first result. NSIS seems to share the same glory with terms like “java runtime download”, “strstr”, “exch”, “createmutex”, “iis version”, “messagebox”, “steam account” and even “windows critical update”. This Google love is very nice and helpful for NSIS but it doesn’t get too much of my attention span when it comes to this blog. People searching for my name or nickname find it as the first result and that’s good enough for me.

However, out of pure curiosity, I’ve decided to perform a little Google experiment. Searching for “kichik” results in my blog and some pages concerning a beautiful actress named Lorena Kichik. Her profile states she’s Latin or Middle Eastern, so I’m not so sure how she ended up with a Hungarian or Pakistani last name, but that’s another story. I’m not exactly sure what I’m going to figure out here, but I wanted to know what happens if I directly mention her in my blog. Will this post show up when searching for her name? Will it show when searching for mine? Which would come first – my post or her Facebook or IMDB profile?

Let the games begin!

Pragmatic variant

As mentioned in my previous post, I have been working on incorporating some more features into WinVer.nsh. Every little change in this header file requires testing on all possible versions and configurations of Windows. Being the Poor Open Source Developer™ that I am, I do not have sufficient resources to assemble a full-blown testing farm with every possible version of Windows on every possible hardware configuration. Instead, I have to settle for a bunch of virtual machines I have collected over the years. It is pretty decent, but has no standards and doesn’t cover every possible version. Still, it does its job well and has proven itself very effective.

Obviously, be it a farm or a mere collection of virtual machines, testing on so many different configurations carries with it a hefty fine. Testing a single line change could waste almost an hour. Imagine the time it would take to test, fix, retest, fix and retest again a complete rewrite of WinVer.nsh. As fascinating as that empirical scientific experiment would have been, I was reluctant to find out. Laziness, in this case, proved to be a very practical solution.

WinVer.nsh tests do not really need the entire operating system and its behavior, as they rely on nothing but 4 parameters. All they require are the return values of GetVersionEx for OSVERSIONINFO and OSVERSIONINFOEX. For nothing more than 312 bytes, I have to wait until Windows Vista decides it wants to execute my test, Windows NT4 gracefully connects to my network, Windows ME wakes up on the right side of the bed and doesn’t crash, Windows Server 2008 installs again after its license has expired and Windows 95… Actually, that one works pretty well. So why wait?
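To give an idea of what those bytes look like, here is a sketch of unpacking the OSVERSIONINFOEX half of such a dump with Python’s struct module. It assumes the ANSI layout (five DWORDs, a fixed 128-byte szCSDVersion buffer, then the service pack words); the harvester’s actual output format may differ:

```python
import struct

def parse_osversioninfoexa(blob):
    """Unpack an ANSI OSVERSIONINFOEX structure captured from GetVersionEx."""
    # Five leading DWORDs: size, major, minor, build, platform id
    size, major, minor, build, platform_id = struct.unpack_from("<5L", blob, 0)
    # szCSDVersion: a fixed 128-byte, NUL-terminated string at offset 20
    csd = blob[20:148].split(b"\0", 1)[0].decode("ascii", "replace")
    # Service pack words, suite mask and product type at offset 148
    sp_major, sp_minor, suite, product, _ = struct.unpack_from("<3HBB", blob, 148)
    return {
        "major": major, "minor": minor, "build": build,
        "platform": platform_id, "csd": csd,
        "sp": (sp_major, sp_minor),
        "suite_mask": suite, "product_type": product,
    }
```

Feeding it a dump from, say, XP SP3 should yield major 5, minor 1, build 2600 and a (3, 0) service pack pair.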

Instead, I’ve created a little harvester that collects those 312 bytes, ran it on all of my machines and mustered the results into one huge script that tests every aspect of WinVer.nsh using every possible configuration of Windows in a few seconds. It required adding a hooking option to WinVer.nsh, but with the new !ifmacrondef, that was easy enough.

Currently, the script tests:

  • Windows 95 OSR B
  • Windows 98
  • Windows ME
  • Windows NT4 (SP1, SP6)
  • Windows 2000 (SP0, SP4)
  • Windows XP (SP2, SP3)
  • Windows XP x64 (SP1)
  • Windows Vista (SP0)
  • Windows Server 2008 (SP1)

If you have access to a configuration not listed here, please run the harvester and send me the results. More specifically, I could really use Windows 2003 and Windows Vista SP1. My Windows Vista installation simply refuses the upgrade to SP1. Again.

The test script also includes a hexdump of those 312 bytes for every configuration so anyone performing similar tests for another reason doesn’t have to parse the NSIS syntax. Feel free to use it for your testing.

Voodoo fabrication

Last week I decided it’s time to apply a few long overdue patches some people have submitted. The main issue with patches is that the patch submitter and the patch applier are never the same person, unless you’re on lithium, in which case the code is bound to be intriguing any way you spin it. But the lack of clear coding guidelines or of a proper code review process is a whole other topic.

One of the patches was Anders’ WinVer.nsh patch for Windows Server 2008 support along with some other nifty little features. You would think this would be a pretty simple patch, but Microsoft had a surprise for us in this case. I admire Microsoft for their dedication to backward compatibility and basic API coherence, but in the case of version detection, they got it a bit mixed up. There are three API functions to get the version; two of them work on all versions of Windows and one of them has two operation modes. To get special features information, there’s another, completely unrelated, inconspicuous function. The format of the returned data depends on the version and on the operation mode used. In short, it’s a bag full of fun and games and there’s never a dull moment testing every little change on every possible configuration of Windows.

For the original version of WinVer.nsh, I used the simplistic GetVersion API, which requires about 3 lines of code. Later on, a patch was submitted to support verification of service pack numbers, which required the usage of GetVersionEx’s two modes of operation. This required quite a bit more code, but that code was only used when service packs were specifically checked. With the latest patch for Windows Server 2008 support, the simplistic API was no longer enough and a full-blown function using every possible API and doing a lot of math and bit shuffling was required. And therein lies the catch.

As we have yet to develop real code optimization mechanisms, code duplication makes the installer bigger, and bigger is not better in this case. The code could go into a function that would be called from every usage of WinVer.nsh, but that would mean a warning would be generated in case the function is never called, because it can’t be optimized out. A requirement to declare the usage of WinVer.nsh could be added, but that would break the number one rule I’ve learned from Microsoft – backward compatibility. All three issues are on the top 10 frequently asked questions list and giving my customers a reason to ask them even frequently-er is not on my wish list.

As the code size of WinVer.nsh grew bigger, I started pondering a way to solve this. The obvious solution would be adding code optimization, and that’s already functioning neatly in my beloved nobjs branch that’s sadly not yet ready for prime time. So I had to think of another idea that could work with the current branch, and thus Artificial Functions were conceived. Instead of letting the compiler create the function, I’ve used some of the lesser known features to create them on my own. A combination of run-time and compile-time black magic using both old and new features allowed me to get rid of the code duplication.

To make sure the code of the function isn’t inserted more than once, the good old !ifndef-!define-!endif combo is used. But the function can be called from more than one scope and so it must be globally locatable. Exactly for this purpose, global labels were added over six years ago. However, that’s not all, as the function must somehow return control to the original code that called it. To do this, Return is used at the end of the function’s code and Call is used to treat the global label as a function and build a stack frame for it. Last but not least, we have to deal with uninstaller functions that can’t jump to code in the installer, as they don’t share the same code. The new __UNINSTALL__ definition saves the day and helps differentiate installer and uninstaller code.

!macro CallArtificialFunction NAME
  !ifndef __UNINSTALL__
    !define CallArtificialFunction_TYPE inst
  !else
    !define CallArtificialFunction_TYPE uninst
  !endif
  Call :.${NAME}${CallArtificialFunction_TYPE}
  !ifndef ${NAME}${CallArtificialFunction_TYPE}_DEFINED
    Goto ${NAME}${CallArtificialFunction_TYPE}_DONE
    !define ${NAME}${CallArtificialFunction_TYPE}_DEFINED
    .${NAME}${CallArtificialFunction_TYPE}:
      !insertmacro ${NAME}
    Return
    ${NAME}${CallArtificialFunction_TYPE}_DONE:
  !endif
  !undef CallArtificialFunction_TYPE
!macroend

When combined, it not only solves the code size issue for WinVer.nsh, but also rids the world of two very frequently asked questions about our standard library. It took a few good hours, but after converting FileFunc.nsh, TextFunc.nsh and WordFunc.nsh to use the new Artificial Functions, there’s no longer a need to use forward declarations for those commonly used functions and calling them in uninstaller code is no different than calling them in the installer.

!include "FileFunc.nsh"
!insertmacro GetFileExt
!insertmacro un.GetParent

Section Install
     ${GetFileExt} "C:\My Downloads\Index.html" $R0
SectionEnd

Section un.Install
     ${un.GetParent} "C:\My Downloads\Index.html" $R0
SectionEnd

Goes on a diet and elegantly transforms into:

!include "FileFunc.nsh"

Section Install
     ${GetFileExt} "C:\My Downloads\Index.html" $R0
SectionEnd

Section un.Install
     ${GetParent} "C:\My Downloads\Index.html" $R0
SectionEnd

I love it. This trick is so sinister. It reminds me of the days Don Selkirk and Dave Laundon worked on LogicLib. Coming to an installer near you this Christmas.


Memory is a fascinating mechanism. Electric currents running through biological matter draw input from sensory organs and store anything from scents and pictures all the way to logical conclusions. It’s the simplest time-machine implementation and it’s embedded in our brains, allowing us to travel back into our history.

Commonly, it’s divided into short-term memory and long-term memory. Short-term memory is the working memory. It holds temporary but currently-crucial information retrieved from the senses, from long-term memory or from the result of a mental process. It is estimated that short-term memory is capable of holding up to seven items at any given time. As with many other areas in life, that number is magical and quite stubborn at keeping its status quo. To that end, common memory improvement techniques focus on methods of getting around that number instead of increasing it. A method I affectionately call “divide and conquer” suggests grouping items and memorizing the groups instead of the items themselves, so that more items can be memorized. It’s actually an expansion of the very basic naming method. Complicated items can be easily memorized when named — the very basis of all languages, allowing the communication of complicated matters with simple words.

One of the most obvious implementations of these methods is known simply as the “list”. By numbering and even naming large chunks of data, information can be efficiently conveyed and referenced. Under this principle books are divided into chapters, rules are presented as lists of dos and don’ts, everything is divided into three magical items, and complicated ideas are abstracted and listed so that they may serve as a fertile ground for even greater ideas.

The problem with popular and useful methods is their wide abuse. Lately, I’ve witnessed an abundance of articles containing nothing but a list with sparse content and a very thin thread holding the bullets together. To help combat this epidemic, I hereby propose my list of list-don’ts.

  1. While a very good memory technique, a list without any content is worthless. Forging pointless data into a list will not breathe life into it. At the very best, it will help the pointless data be pointlessly forgotten.
  2. Lack of good transition between paragraphs calls for more content or some transitional devices, not a list.
  3. Bullets before the punch line won’t necessarily make it funnier. It will, however, make the journey leading to the punch line a boring one.
  4. There are rare cases where a list genuinely qualifies. There is no need to overstate this by noting it in the title.
  5. When already writing a list, keep it short and to the point. Three, seven, ten and thirteen are nice magical numbers. That’s not a good enough reason to pad lists.

Pixel shifting

Every picture is worth a thousand words, they say. In total, I’ve written the equivalent of 13 pictures in this blog. I thought I’d have at least 50, but apparently I don’t write that much. For the fourteen-thousandth word, I’ve decided to create an actual picture. Sadish’s MistyLook has served me well for a long time, but I felt it was time for a change and so this new theme was born. It’s not perfect yet and I’ll probably keep working on it, but I really like it and it fits me well.

It was quite fun to create this new theme. It allowed me to resurrect deeply buried skills I thought I’d never touch again. I’ve also learned that I need a Wacom tablet, that Firebug is priceless, that CSS is even more powerful than I remembered and that Kim’s Lakers pen is a life saver.

Intelligence quotient

Humanity is doomed. We are just too brilliant to keep on living. Everyone can feel it, but like the sheep we are, we fail to notice the looming cliff edge, slowly pacing towards our inescapable demise. We have outgrown our intellectual capacity. Any bit of information added since 2781 BC brings destiny a step closer. We are facing imminent extinction at the hands of our own wisdom.

Being the average sheep herd we are, we have our share of black sheep. Some of them have taken it upon themselves to enlighten the herd and warn us of the danger looming ahead. News networks all over the globe are alerting the homo sapiens species of the grave dangers unfolding in front of their unsuspecting herd. Every self-respecting website publishes at least one article spelling out the well known fact that technology is extremely dangerous. Not even one newspaper failed to bring forth today’s hot headline – “Modern day technology is the bane of our existence”. Radio broadcasts elaborate – “It thins out the herd”. Ewes, rams and lambs alike all know by now that using technology limits the herd’s collective intellect, slowly turning it to a crowd of brainless zombies, unable to care for themselves.

Black sheep have successfully taught us to hinder inventions such as GPS, the Internet and computer games. Sadly, they were too late to do the same for the thesaurus, books, pen and paper, the wheel and fire. Those unholy inventions and discoveries have already taken their toll on the herd. Young lambs no longer look for words in the dictionary, but find them in two keyboard strokes; ewes no longer tell stories around the fireplace, but write them in books available for all; rams no longer draw on cavern walls with charcoal, but paint with too much detail and too many colors on cloth; herds no longer break their legs and perish on their way to neighboring herds, but drive in air-conditioned cars with leather seats; sheep no longer get ill from uncooked meat, but devour delicious seasoned steaks. The herd has agonized for thousands of years without even realizing it.

Clearly, scientific inventions and discoveries that ease every day lives are the devil’s brainchild. Those who know they know nothing and keep on trying to disclose as many of the meadow’s great secrets as they can are nothing but mere devil worshipers. Sheep that fear not looking beyond the grass that lies before them do nothing but harm. Foul creatures that dare share their fruit of labor so that the entire herd may advance and excel are inconsiderate, egocentric and self-serving sinners. Those who define the very meaning of being stupid by negation are the horsemen of the apocalypse.

I call to you today my fellow sheep — let us put an end to this morbid state of affairs. Let us break this vicious circle of knowledge passing, stop this vile orgy of technology and return to our lonely roots. Let us burn Google at the stake, melt our GPS-capable iPhones, demolish our libraries, drown every type of vehicle, incinerate all the books, halt all scientific progress and go look for red round small things in the big place with the green and brown big stuff where the other lamb just goed.

Dominical update

9 out of 10 open-source experts advocate frequent releases. We, the simple people, don’t know better and should listen to the experts. Sadly, we simpletons still don’t know how to read and so the fine print eludes us. While we all may be good and obedient developers, the users don’t care for our frequent releases squashing our colossal bugs and featuring our shiny new toys. The more frequent our releases, the more frequent the reports of bugs fixed long ago and of features that shined and sparkled in ancient times but are now filled with rust.

Ghost versions of the past haunt us daily while users refuse to upgrade. Our innovative forefathers, suffering immensely from this plague, uncovered the great potential of automatic updates. No longer is the user able to flee his ordained destiny. Fate shall pop up and fulfill itself even in the absence of user interaction.

But even this sparsely applied method carries its own set of fine print. A boilerplate implementation includes a web server containing the latest version number, or even a server-side script that ever so nicely checks for the user whether his version is expectedly old. As with everything else, here too success brings failure. As faithful users gather their masses around our monthly-polished releases, the web server begins to break down. Most web servers, especially those that poor open-source developers can afford, do not offer load balancing and will easily succumb to the sheer amount of bandwidth generated by thousands of users performing even the simplest of GET requests.

Enter DNS. The Domain Name System is a distributed and globally cached system that basically maps domain names into numeric IP addresses. And it gets even better — foreign sources report there are free dynamic DNS services out there, waiting to be used. They offer a simple HTTP-based API that sets a new IP for a free domain name. Creating a new version notification service is as simple as creating a new free domain, updating it every time a new version is released, calling inet_addr when the client side loads and comparing the result to the current version.

This free and simple solution provides many advantages over a conventional HTTP-based version check.

  • Automatic load balancing with servers all over the world.
  • Simple code with no need for complex HTTP libraries.
  • No need for relatively heavy HTTP operations for both client and server.
  • HTTP proxies do not get in the way.
  • Firewalls and the entire security fiasco usually overlook DNS.

And as always, there are disadvantages.

  • Updates take time to propagate.
  • Only 3 bytes of information.

Make sure you set the first byte to 127 so the IP associated with your update domain is never a routable address. This way, whoever might otherwise sit at that address won’t get any unwelcome traffic.
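A client-side check along these lines can be sketched in a few lines of Python; the domain name and version numbers are made up for illustration:

```python
import socket

UPDATE_DOMAIN = "latest-version.example.com"  # hypothetical update domain
CURRENT_VERSION = (2, 0, 46)                  # hypothetical running version

def decode_version(ip):
    """Extract the version packed into the last three octets of the IP."""
    octets = [int(o) for o in ip.split(".")]
    if octets[0] != 127:
        raise ValueError("update domain should point into 127.x.x.x")
    return tuple(octets[1:])

def update_available():
    """Resolve the update domain and compare it to the running version."""
    try:
        ip = socket.gethostbyname(UPDATE_DOMAIN)
    except socket.gaierror:
        return False  # resolution failed; stay quiet rather than nag
    return decode_version(ip) > CURRENT_VERSION
```

A native client would resolve the domain with gethostbyname and unpack the returned address the same way.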

I am probably not the first to think of this, but it is a cool idea nonetheless. I’m so going to implement this for the next version of NSIS! 🙂


Over a year has passed since the NSIS Media menace. Mostly good things have happened since. I figured this could be a good time to recap and summarize.

  • The download sites in question no longer contain NSIS Media infected downloads. I’ve received no response to my queries, so I assume I had nothing to do with it.
  • NSIS Media malware update servers are no longer operational.
  • I have received only one e-mail complaining about NSIS Media over the last year, compared to the dozens before I released the remover.
  • My remover was downloaded approximately 10,000 times from my website and probably a bit more from other websites as well.
  • My lawsuit has failed miserably. I was trying to get back at Opensoft/Openwares and all of their Vanuatu-based friends with the help of the Software Freedom Law Center. We tried to track down someone we could sue, but failed. After a few unanswered queries and answers pointing in multiple directions from various related companies, the search was sadly brought to a halt.
  • I was contacted by F-Secure for details of NSIS Media. I seem to recall there were more companies that asked for my help, but I can’t find the e-mails proving it.
  • Most anti-virus or malware removal applications I’ve tested find only the most common infections of NSIS Media and skip the rarer DLL files.
  • Opensoft is still up to no good.
  • Openwares is still alive and kicking, spreading malware and using NSIS, but no sane user will surf to that website.
  • I have received no donations for my research or for creating the remover.
  • I still don’t make $1000 a day 😦

So there you have it — the story of a deceased malware. I’d like to think I played at least a small part in its demise.