False Positive Watch

While debugging any issue that arises on Windows, my go-to trick is blaming the anti-virus or firewall. It almost always works. As important as these security solutions are, they can be so disruptive at times. For developers this usually comes in the form of a false positive. One day, out of the blue, a user emails you and blames you for trying to infect their computer with Virus.Generic.Not.Really.LOL.Sue.Me.1234775. This happened so many times with NSIS that someone created a false positive list on our wiki.

There are a lot of reasons why this happens and a lot of ways to lower the chances of it happening, but at the end of the day, chances are it’s going to happen. It even happened to Chrome and Windows itself.

So I created False Positive Watch. It’s a simple free service that periodically scans your files using VirusTotal and sends you an email if any of your files are erroneously detected as malware. You can then notify the anti-virus vendor so they can fix the false positive before it affects too many of your customers.

I use it to get notifications about NSIS and other projects, but you can use it for your own projects too, free of charge. All you need to do is supply your email address (for notifications) and upload the file (I delete it from my server after sending it to VirusTotal). In the future I’m going to add an option to supply just a hash instead of the entire file, so you can use it with big files or avoid uploading files that are too private.
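
The service does the periodic checking for you, but if you just want to check a single hash yourself, the lookup boils down to one call against the VirusTotal API. Here is a minimal sketch, assuming the v3 REST API and a personal API key (double-check the endpoint and field names against the VirusTotal documentation):

import requests

VT_API_KEY = 'your-api-key'  # assumption: personal VirusTotal v3 API key
# placeholder hash (SHA-256 of an empty file); use your installer's hash instead
SHA256 = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'

# Look up the existing report by hash instead of uploading the file itself.
resp = requests.get(
    'https://www.virustotal.com/api/v3/files/' + SHA256,
    headers={'x-apikey': VT_API_KEY},
)
resp.raise_for_status()

stats = resp.json()['data']['attributes']['last_analysis_stats']
if stats.get('malicious', 0):
    print('possible false positive: flagged by %d engines' % stats['malicious'])
else:
    print('all clear')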

Docker Combo Images


I’ve been working with Docker a lot for the past year and it’s pretty great. It especially shines when combined with Kubernetes. As my projects grew more complex, a common issue I kept encountering was running both Python and JavaScript code in the same container. Certain Django plugins require Node to run, Serverless requires both Python and Node, and sometimes you just need some Python tools on top of Node to build.

I usually ended up creating my own image containing both Python and Node with:

FROM python:3

# the python:3 image runs as root and has no sudo, so run the setup script directly
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs

# ... rest of my stuff

There are two problems with this approach.

  1. It’s slow. Installing Node takes a while and doing it for every non-cached build is time-consuming.
  2. You lose the Docker way of just pulling a nicely prepared image. If Node changes its deployment method, the Dockerfile has to be updated. It’s much simpler to just docker pull node:8.

The obvious solution is going to Docker Hub and looking for an image that already contains both. There are a bunch of those but they all look sketchy and very old. I don’t feel like I can trust them to have the latest security updates, or any updates at all. When a new version of Python comes out, I can’t trust those images to get new tags with the new version which means I’d have to go looking for a new image.

So I did what any sensible person would do. I created my own (obligatory link to XKCD #927 here). But instead of creating and pushing a one-off image, I used Travis CI to update the images daily (update 2022: GitHub Actions). This was actually a pretty fun exercise that allowed me to learn more about the Docker Python API, Docker Hub and Travis CI. I tried to make it as easily extensible as possible so anyone can submit a PR for a new combo like Node and Ruby, Python and Ruby, or Python and Java, etc.
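
Under the hood there is not much magic. The real build script lives in the repository linked below, but the gist of it with the Docker Python SDK looks roughly like this (a sketch only; the path, repository name and tag are illustrative):

import docker

client = docker.from_env()

# Build the combined image from a generated Dockerfile and push it to Docker Hub.
image, build_log = client.images.build(path='.', tag='combos/python_node:3_6')
for line in client.images.push('combos/python_node', tag='3_6',
                               stream=True, decode=True):
    print(line)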

The end result allows you to use:

docker run --rm combos/python_node:3_6 python3 -c "print('hello world')"
docker run --rm combos/python_node:3_6 node -e "console.log('hello world')"

You can rest assured you will always get the latest version of Python 3 and the latest version of Node 6. The image is updated daily. And since the build process runs completely in the open (originally on Travis CI, now on GitHub Actions), you should be able to trust that there is no funny business in the image.

Images: https://hub.docker.com/r/combos/
Source code: https://github.com/kichik/docker-combo
Build server: https://github.com/kichik/docker-combo/actions

Compatible Django Middleware

Django 1.10 added a new style of middleware with a different interface and a new setting called MIDDLEWARE instead of MIDDLEWARE_CLASSES. Creating a class that supports both is easy enough with MiddlewareMixin, but that only works with Django 1.10 and above. What if you want to create middleware that works with all versions of Django so it can be easily shared?

Writing compatible middleware is not too hard. The trick is having a fallback for when the MiddlewareMixin import fails on earlier versions of Django. I couldn’t find a full example anywhere and it took me a few attempts to get it just right, so I thought I’d share my results to save you some time.

import os

from django.core.exceptions import MiddlewareNotUsed
from django.shortcuts import redirect

try:
    from django.utils.deprecation import MiddlewareMixin
except ImportError:
    # Django < 1.10 has no MiddlewareMixin; fall back to a plain object base
    MiddlewareMixin = object

class CompatibleMiddleware(MiddlewareMixin):
    def __init__(self, *args, **kwargs):
        if os.getenv('DISABLE_MIDDLEWARE'):
            raise MiddlewareNotUsed('DISABLE_MIDDLEWARE is set')

        super(CompatibleMiddleware, self).__init__(*args, **kwargs)

    def process_request(self, request):
        if request.path == '/':
            return redirect('/hello')

    def process_response(self, request, response):
        return response

CompatibleMiddleware can now be used in both MIDDLEWARE and MIDDLEWARE_CLASSES. It should also work with any version of Django, which makes it easier to share.
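
For completeness, this is how it would be wired up on either side of the 1.10 divide (myapp.middleware is a hypothetical module path; use wherever the class actually lives):

# settings.py on Django 1.10 and above
MIDDLEWARE = [
    'django.middleware.common.CommonMiddleware',
    'myapp.middleware.CompatibleMiddleware',
]

# settings.py on older versions of Django
MIDDLEWARE_CLASSES = [
    'django.middleware.common.CommonMiddleware',
    'myapp.middleware.CompatibleMiddleware',
]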

Stale MapReduce Staging Directories

I had a problem where HDFS would fill up really fast on my small test cluster. Using hdfs dfs -du I was able to track it down to the MapReduce staging directory under /user/root/.staging. For some reason, MapReduce wasn’t always deleting old job directories. I wasn’t sure why this kept happening on multiple clusters, but I had to come up with a quick workaround. I created a small Python script that lists all staging directories and removes any of them not belonging to a currently running job. The script runs from cron and I can now use my cluster without worrying it’s going to run out of space.

This script is pretty slow and it’s probably possible to make it way faster with Snakebite or even some Java code. That being said, for daily or even hourly clean-up, this script is good enough.

import os
import re
import subprocess

# List all jobs and keep only the IDs of jobs in running (1) or prep (4) state.
all_jobs_raw = subprocess.check_output(
  'mapred job -list all'.split())
running_jobs = re.findall(
  r'^(job_\S+)\s+(?:1|4)\s+\d+\s+\w+.*$',
  all_jobs_raw, re.M)

# List every job directory under the MapReduce staging directory.
staging_raw = subprocess.check_output(
  'hdfs dfs -ls /user/root/.staging'.split())
staging_dirs = re.findall(
  r'^.*/user/root/\.staging/(\w+)\s*$',
  staging_raw, re.M)

# Anything in staging that does not belong to a live job is stale.
stale_staging_dirs = set(staging_dirs) - set(running_jobs)

for stale_dir in stale_staging_dirs:
  os.system(
    'hdfs dfs -rm -r -f -skipTrash ' +
    '/user/root/.staging/%s' % stale_dir)

The script requires at least Python 2.7 and was tested with Hadoop 2.0.0-cdh4.5.0.
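
If shelling out to hdfs ever becomes the bottleneck, the Snakebite route mentioned above would look roughly like this. It’s only a sketch, assuming a NameNode reachable at namenode:8020 and the same staging layout:

import re
import subprocess

from snakebite.client import Client

# Same job listing as before; only the HDFS side changes.
all_jobs_raw = subprocess.check_output('mapred job -list all'.split())
running_jobs = set(re.findall(r'^(job_\S+)\s+(?:1|4)\s+', all_jobs_raw, re.M))

# Talk to the NameNode directly instead of spawning hdfs CLI processes.
# 'namenode' and 8020 are placeholders for your cluster's NameNode address.
client = Client('namenode', 8020)

staging_dirs = [
  entry['path'].rsplit('/', 1)[-1]
  for entry in client.ls(['/user/root/.staging'])
]

stale = ['/user/root/.staging/%s' % d
         for d in staging_dirs if d not in running_jobs]

if stale:
  for result in client.delete(stale, recurse=True):
    print result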

Download PDB by GUID

Sometimes you get stuck with a broken dump or no dump at all. You know what you’re looking for, but WinDbg just keeps refusing to load symbols as you beg for mercy from the all-knowing deities of Debugging Tools for Windows. You know exactly which PDB you need, but it just won’t load. The only thing you do know is that you don’t want to go digging for that specific version of your product in the bug report and build a whole setup for it just so you can get the PDB. For those special times, some WinDbg coercion goes a long way.

To download the PDB, create a comma-separated manifest file with three columns in each row: the requested PDB name, its GUID plus age for a total of 33 characters, and the number 1. Then call symchk and pass the path to the manifest file with the /im command line switch. Add the /v switch to see the download path of the PDB.

To demonstrate I’ll use everyone’s favorite debugging sample process.

C:\>echo calc.pdb,E95BB5E08CE640A09C3DBF3DFA3ABCB42,1 > manifest

C:\>symchk /v /im manifest
[...]
SYMSRV: Get File Path: /download/symbols/calc.pdb/E95BB5E08CE640A09C3DBF3DFA3ABCB42/calc.pdb
[...]
DBGHELP: C:\ProgramData\dbg\sym\calc.pdb\E95BB5E08CE640A09C3DBF3DFA3ABCB42\calc.pdb - OK

SYMCHK: FAILED files = 0
SYMCHK: PASSED + IGNORED files = 1
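
If you find yourself doing this for more than one PDB, the manifest generation and the symchk call are easy to script. A small sketch, assuming symchk.exe is on the PATH and you already know the name and GUID+age pairs:

import subprocess

# (name, GUID + age) pairs, normally taken from the debugger's symbol errors;
# the calc.pdb entry below is just the example from above.
pdbs = [
    ('calc.pdb', 'E95BB5E08CE640A09C3DBF3DFA3ABCB42'),
]

with open('manifest', 'w') as manifest:
    for name, guid_age in pdbs:
        manifest.write('%s,%s,1\n' % (name, guid_age))

# /im points symchk at the manifest, /v prints where each PDB was downloaded to.
subprocess.check_call(['symchk', '/v', '/im', 'manifest'])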

To force load the PDB you need to update the symbol path, turn SYMOPT_LOAD_ANYTHING on, and use the .reload command with /f to force and /i to ignore any so-called mismatches.

kd> .sympath C:\ProgramData\dbg\sym\calc.pdb\E95BB5E08CE640A09C3DBF3DFA3ABCB42
kd> .symopt+0x40
kd> .reload /f /i calc.exe=0x00400000

You should now have access to all the data in the PDB file and stack traces should start making sense.

Android LXR

An open source OS makes debugging applications so much easier. Instead of firing up IDA and going through opcodes, you can simply read the code and sometimes even find comments. However, searching through millions of lines of code can be a daunting task. Operating systems usually have a huge codebase and even the simple task of looking for one function can take a good few minutes. After reading that function, you usually want to search for the functions it calls or the functions that call it to better understand the flow. Those extra searches take time too. A good IDE would solve this issue, but it requires downloading and indexing the massive source tree first.

LXR was created for this exact reason. It allows hosting a fully indexed copy of the source code. It even makes it easy to publish an index of multiple versions of the source code. Want to compare a certain function between two versions of the Linux kernel? No problem. Want to know which functions use a certain function? Easy. LXR is awesome and fast.

Setting up LXR on your own, however, does take some time and effort. That is why I was happy to find AndroidXref.com while trying to hunt down a bug in one of my Android applications. It indexes both Android and patched Linux kernel sources for all major versions of Android. It is an invaluable resource every Android developer should know.

I originally had a question about this topic open on StackOverflow with AndroidXref as the accepted answer. It was recently deleted, probably because it didn’t have anything to do with C operator precedence. This is my AndroidXref.SEO++.

Old GDB find

Newer versions of GDB come with the nifty find command. The old version of GDB I have to use does not. It is also incapable of generating a proper stack trace for the platform it supposedly serves. But that’s a whole other story…

Anyway, I found a piece of code that almost does the same. I tweaked it a bit, fixed a stray bug ($x -> %p) and would like to never do it again. So here it is for my future reference and your indulgence.

# usage: find <start address> <length in uint64 words> <pattern>
define find
  set $start = (uint64 *) $arg0
  set $end = $start + $arg1
  set $pattern = (uint64) $arg2
  set $p = $start
  while $p < $end
    if (*(uint64 *) $p) == $pattern
      printf "pattern %p found at %p\n", $pattern, $p
    end
    set $p++
  end
end

Hello Android

A humanoid, a search engine and one of the most addictive FPS games ever created walk into a bar. A few refreshing cups of coffee later, a joke is born and its name is MW2 Guide.

I’ve created a pretty simple application for Android that helps Call of Duty: Modern Warfare 2 addicts, such as myself, make some sense of the bombardment of dialog boxes popping up after a match. It’s basically a list of all available callsign titles and their descriptions. What sets it apart from a few dozen similar apps is the quick search and auto-completion voodoo, in accordance with Android’s search-centric vision.

Search for MW2 Guide on the market, or use the QR code below.

MW2 Guide QR code

SCSIPORT debugging

Microsoft provides useful extensions for debugging SCSIPORT drivers in WinDbg. But with some versions of scsiport.sys, the symbol files don’t contain type information. This produces fun errors like the following.

kd> !scsikd.scsiext 8a392a38
*************************************************************************
***                                                                   ***
***                                                                   ***
***    Your debugger is not using the correct symbols                 ***
***                                                                   ***
***    In order for this command to work properly, your symbol path   ***
***    must point to .pdb files that have full type information.      ***
***                                                                   ***
***    Certain .pdb files (such as the public OS symbols) do not      ***
***    contain the required information.  Contact the group that      ***
***    provided you with these symbols if you need this command to    ***
***    work.                                                          ***
***                                                                   ***
***    Type referenced: scsiport!_DEVICE_OBJECT                       ***
***                                                                   ***
*************************************************************************
scsikd error (3): ...\storage\kdext\scsikd\scsikd.c @ line 188

This makes the common task of getting to your device extension very daunting. After some digging, I came up with these commands to at least extract my device extension from SCSIPORT’s device extension.

!drvobj mydriver
* find the relevant DevObj in the output
!devobj <devobj>
* note the DevExt pointer in the output
dt mydriver!MY_DEVICE_EXTENSION poi(<DevExt> + b4)

I’ve only tried it on Windows XP SP3. The offset may be different with other configurations. Does anyone know a better way around this? The preferable method would naturally be making scsikd work.