Lovage

I have been playing with serverless solutions lately. It started with a Django project that was dealing with customer AWS credentials in both background and foreground tasks. I wanted to keep those tasks compartmentalized for security, and I wanted them to scale easily. Celery is the common solution for this, but setting it up in my environment was not straightforward. This was as good an excuse as any to try AWS Lambda. I gave Serverless Framework a try because it was the most versatile framework I could find with proper Python support.

It worked well for a long time, but over time I noticed the following recurring issues.

  1. It requires Node.js, which complicated my development and CI environments. This is the reason I originally created Docker combo images of Python and Node.js.
  2. Packaging Python dependencies is slow and error-prone. Every deployment operation downloaded all the dependencies again, compressed them again, and uploaded them again. On Windows, Mac, and some Linux variants (if you have binary dependencies) it requires Docker, and even after multiple PRs it was still slow and randomly broke every few releases.
  3. There was no easy way to directly call Lambda functions after they were deployed. I had to deal with the AWS API, naming, argument marshaling, and exception handling myself, as the sketch below shows.
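
To give a sense of that boilerplate, here is a rough sketch of invoking a deployed function directly with boto3. The function name and payload are made up for illustration; this is what you end up writing yourself without a wrapper.

import json

import boto3

# Hypothetical function name and payload, just to show the plumbing involved.
client = boto3.client("lambda")
response = client.invoke(
    FunctionName="my-project-dev-hello",
    InvocationType="RequestResponse",
    Payload=json.dumps({"x": "world"}),
)
result = json.load(response["Payload"])
if "FunctionError" in response:
    # errors come back as a JSON document, not a Python exception,
    # so translating them back into exceptions is up to you
    raise RuntimeError(result.get("errorMessage", "unknown error"))
print(result)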

To solve these issues, I created Lovage. The pandemic gave me the time I needed to refine and release it.

No Node.js

Lovage is a stand-alone Python library. It has no external dependencies, which should make it easy to use anywhere Python 3 can be used. It also does away with the Node.js choice of keeping intermediate files in the source folder. No huge node_modules folders, no code zip files in .serverless, and no dependency caches.

Lambda Layers

Instead of uploading all of the project’s dependencies every time as part of the source code zip, Lovage uploads them just once as a separate zip file and creates a Lambda Layer from it. Layers can be attached to any Lambda function and are meant to make it easy to share code or data between functions.

Since dependencies change much less frequently than the source code itself, Lovage uploads the dependencies much less frequently and thus saves compression and upload time. Dependencies are usually bigger than the source code so this makes a significant difference in deployment time.

But why stop there? Lovage gets rid of the need for Docker too. Docker is used to get an environment close enough to the execution environment of Lambda so that pip downloads the right dependencies, especially when binaries are involved. Why emulate when we can use the real thing?

Lovage creates a special Lambda function that uses pip to download your project’s dependencies, packages them up, and uploads them to S3 where they can be used as a layer. That function is then used as a custom resource in CloudFormation to automatically create the dependencies zip file and create a layer from it. Nothing happens locally and the upload is as fast as possible given that it stays within a single AWS region.

Here is a stripped-down CloudFormation template showing this method (full function code):

Resources:
  RequirementsLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      Content:
        S3Bucket:
          Fn::Sub: ${RequirementsPackage.Bucket}
        S3Key:
          Fn::Sub: ${RequirementsPackage.Key}
  RequirementsPackage:
    Type: Custom::RequirementsLayerPackage
    Properties:
      Requirements:
        - requests
        - pytest
      ServiceToken: !Sub ${RequirementsPackager.Arn}
  RequirementsPackager:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.7
      Handler: index.handler
      Code:
        ZipFile: |
          import os
          import zipfile

          import boto3
          import cfnresponse

          def handler(event, context):
            if event["RequestType"] in ["Create", "Update"]:
              requirements = event["ResourceProperties"]["Requirements"]
              os.system("pip install -t /tmp/python --progress-bar off " + " ".join(requirements))
              with zipfile.ZipFile("/tmp/python.zip", "w") as z:
                for root, folders, files in os.walk("/tmp/python"):
                  for f in files:
                    local_path = os.path.join(root, f)
                    zip_path = os.path.relpath(local_path, "/tmp")
                    z.write(local_path, zip_path, zipfile.ZIP_DEFLATED)
              boto3.client("s3").upload_file("/tmp/python.zip", "lovage-bucket", "reqs.zip")
              cfnresponse.send(event, context, cfnresponse.SUCCESS, {"Bucket": "lovage-bucket", "Key": "reqs.zip"}, "reqs")
            else:
              # always respond, even on Delete, or CloudFormation will wait until it times out
              cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, event["PhysicalResourceId"])

This is by far my favorite part of Lovage and why I really wanted to create this library in the first place. I think it’s much cleaner and faster than the current solutions. This is especially true considering that almost every project I have uses boto3, which alone is around 45MB uncompressed and 6MB compressed. Compressing and uploading it every single time makes fast iteration harder.

“RPC”

Most serverless solutions I’ve seen focus on HTTP APIs. Serverless Framework does have support for scheduling and events, but there is still no easy way to call a function yourself with some parameters. Lovage functions are defined in your code with a special decorator, just like Celery tasks. You can then invoke them with any parameters and Lovage will take care of everything, including passing back any exceptions.

import lovage

app = lovage.Lovage()

@app.task
def hello(x):
  return f"hello {x} world!"

if __name__ == "__main__":
  print(hello.invoke("lovage"))
  hello.invoke_async("async")

The implementation is all very standard. Arguments are marshaled with pickle, encoded as base85, and stuffed in JSON. Same goes for return values and exceptions.
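
This is not Lovage’s actual code, and the JSON field name is made up, but a minimal sketch of the same idea looks roughly like this:

import base64
import json
import pickle

# Pickle the arguments, wrap the bytes in base85 so they survive JSON,
# and build a JSON event that can be sent to Lambda.
def pack(args, kwargs):
    blob = base64.b85encode(pickle.dumps((args, kwargs))).decode("ascii")
    return json.dumps({"args": blob})

def unpack(event):
    return pickle.loads(base64.b85decode(json.loads(event)["args"]))

print(unpack(pack(("lovage",), {})))  # (('lovage',), {})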

Summary

Lovage deploys Python functions to AWS Lambda that can be easily invoked just like any other function. It does away with Docker and Node.js. It saves you development time by offloading dependency installation to Lambda and stores dependencies in Lambda layers to reduce repetition.

I hope you find this library useful! If you want more details on the layer and custom resource so you can implement them in other frameworks, let me know.

CloudWatch2S3

AWS CloudWatch Logs is a very useful service, but it does have its limitations. Retaining logs for a long period of time can get expensive. Logs cannot be easily searched across multiple streams. Logs are hard to export and integration requires AWS-specific code. Sometimes it makes more sense to store logs as text files in S3, but that’s not always an option with AWS services like Lambda that write logs directly to CloudWatch Logs.

One option to get around CloudWatch Logs limitations is exporting logs to S3 where data can be stored and processed longer-term for a cheaper price. Logs can be exported once or automatically as they come in. Setting up an automatic pipeline to export the logs is not a one-click process, but luckily Amazon detailed all the steps in a recent blog post titled Stream Amazon CloudWatch Logs to a Centralized Account for Audit and Analysis.

Amazon’s blog post has a lot of great information about the topic and the solution. In short, they create a Kinesis Stream writing to S3. CloudWatch Logs subscriptions to export logs to the new stream are created either manually with a script or in response to CloudTrail events about new log streams. This architecture is stable and scalable, but the implementation has a few drawbacks:

  • Writes compressed CloudWatch JSON files to S3 instead of raw log files.
  • Setup is still a little manual, requiring you to create a bucket, edit permissions, modify and upload source code, and run a script to initialize.
  • Requires CloudTrail.
  • Configuration requires editing source code.
  • Has a minor bug limiting initial subscription to 50 log streams.

That is why I created CloudWatch2S3 – a single CloudFormation template that sets everything up in one go while still leaving room for tweaking with parameters.

The architecture is mostly the same as Amazon’s but adds a subscription timer to remove the hard requirement on CloudTrail, and post-processing to optionally write raw log files to S3 instead of compressed CloudWatch JSON files.

[Architecture diagram]

Setup is simple. There is just one CloudFormation template and the default parameters should be good for most.

  1. Download the CloudFormation template
  2. Open AWS Console
  3. Go to CloudFormation page
  4. Click “Create stack”
  5. Under “Specify template” choose “Upload a template file”, choose the file downloaded in step 1, and click “Next”
  6. Under “Stack name” choose a name like “CloudWatch2S3”
  7. If you have a high volume of logs, consider increasing Kinesis Shard Count
  8. Review other parameters and click “Next”
  9. Add tags if needed and click “Next”
  10. Check “I acknowledge that AWS CloudFormation might create IAM resources” and click “Create stack”
  11. Wait for the stack to finish
  12. Go to “Outputs” tab and note the bucket where logs will be written
  13. That’s it!

Another feature is the ability to export logs from multiple accounts to the same bucket. To set this up you need to set the AllowedAccounts parameter to a comma-separated list of AWS account identifiers allowed to export logs. Once the stack is created, go to the “Outputs” tab and copy the value of LogDestination. Then simply deploy the CloudWatch2S3-additional-account.template to the other accounts while setting LogDestination to the value previously copied.
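
If you prefer code over console clicking, a boto3 sketch along these lines should do the same for the other account. The stack name is arbitrary and the parameter value is whatever you copied from the LogDestination output.

import boto3

# Run this with credentials for the *other* account.
# Add Capabilities=["CAPABILITY_IAM"] if CloudFormation complains about IAM resources.
cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="CloudWatch2S3-additional-account",
    TemplateBody=open("CloudWatch2S3-additional-account.template").read(),
    Parameters=[
        {"ParameterKey": "LogDestination", "ParameterValue": "<value copied from main stack outputs>"},
    ],
)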

For troubleshooting and more technical details, see https://github.com/CloudSnorkel/CloudWatch2S3/blob/master/README.md.

If you are exporting logs to S3 to save money, don’t forget to also change the retention settings in CloudWatch so old logs are automatically purged and your bill actually goes down.
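
For example, a small boto3 script along these lines could cap retention for all Lambda log groups. The prefix and the 30-day period are just placeholders to adjust for your setup.

import boto3

logs = boto3.client("logs")
# Walk all log groups and cap retention for the ones that are archived to S3 anyway.
for page in logs.get_paginator("describe_log_groups").paginate():
    for group in page["logGroups"]:
        if group["logGroupName"].startswith("/aws/lambda/"):
            logs.put_retention_policy(
                logGroupName=group["logGroupName"],
                retentionInDays=30,
            )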

Why Kubernetes?

Kubernetes (or k8s) allows you to effortlessly deploy containers to a group of servers, automatically choosing servers with sufficient computing resources and connecting required network and storage resources. It lets developers run containers and allocate resources without too much hassle. It’s an open-source version of Borg, which was considered one of Google’s competitive advantages.

There are many features and reasons to use Kubernetes, but for me the following stood out and made me a happy user.

  1. It works on-premises and on most clouds like AWS, Google Cloud, and Azure.
  2. Automatic image garbage collection makes sure you never run out of disk space on your cluster nodes, even when you push hundreds of versions a day.
  3. Scaling your container horizontally is as easy as:
    kubectl scale deployment nginx-deployment --replicas=10
  4. Rolling over your container to a new version one by one is as easy as:
    kubectl set image deployment/nginx-deployment nginx=nginx:1.91
  5. Easy-to-define health checks: liveness probes automatically restart your container, and readiness probes delay a rolling update if the new version doesn’t work.
  6. Automatic service discovery using the built-in DNS server. Just define a service and reference it by name from any container in your cluster. You can even define a service as a load balancer, which will automatically create an ELB on AWS, a load balancer on GCE, etc.
  7. Labels on everything help organize resources. Services use label selectors to choose which containers get traffic directed at them. This makes it easy, for example, to create blue/green deployments.
  8. Secrets make it easier to handle keys and configuration separately from code and containers.
  9. Resource monitoring is available out of the box and can be used for auto-scaling.
  10. Horizontal auto scaling can be based on resource monitoring or custom metrics.
  11. You can always use the API to create custom behaviors. For example, you can automatically create Route53 record sets based on a service label.

Is there anything I missed? What’s your favorite Kubernetes feature?

Python 3 is Awesome!

Today I will tell you about the massive success that is whypy3.com. With hundreds of users a day (on the best day, when it reached page two of Hacker News, and with hundreds actually being 103), it has been a tremendous success in the lucrative Python code snippet market. By presenting small snippets of code displaying cool features of Python 3, I was able to single-handedly convert millions (1e-6 millions to be exact) of Python 2 users to true Python 3 believers.

It all started when I saw a tweet about a cool Python 3 feature I hadn’t seen before. This amazing feature automatically resolves any exception in your code by suppressing it. Who needs pesky exceptions anyway? Alternatively, you can use it to cleanly ignore expected exceptions instead of the usual except: pass.

from contextlib import suppress

with suppress(MyExc):
    code

# replaces

try:
    code
except MyExc:
    pass

There are obviously way better and bigger reasons to finally make that move to Python 3. But what if you could be lured in by some cool cheap tricks? And that’s exactly why I created whypy3.com. It’s a tool that we Python 3 lovers can use to slowly wear down an insistent boss or colleague. It’s also a fun way for me to share all my favorite Python 3 features so I don’t forget them.

I was initially going to do the usual static S3 website with CloudFront/CloudFlare. But I also wanted it to be easy for other people to contribute snippets. The obvious choice was GitHub, and since I’m already using GitHub, why not give GitHub Pages a try? Getting it up and running was a breeze. To make it easier to contribute without editing HTML, I decided to use the full-blown Jekyll setup. I had to fight a little bit with Jekyll to get syntax highlighting working, but overall it took no time to get a solid-looking site up and running.

After posting to Hacker News, I even got a few pull requests for more snippets. To this day, I still get some Twitter interactions here and there. I don’t expect this to become a huge project with actual millions of users, but at the end of the day this was pretty fun, I learned some new technologies, and I probably convinced someone to at least start thinking about moving to Python 3.

Do you use Python 3? Please share your favorite feature!

Hypervisor Hunt

After getting burnt by Hyper-V, I decided to go for the tried and true and installed VMware Player on Windows 10. I had to install Ubuntu again on the new virtual machine, but it was a breeze thanks to VMware’s automated installation process. Everything that was missing in Hyper-V was there. I was able to use my laptop’s real resolution, networking over Wi-Fi was done automatically, audio magically started working, and even performance was noticeably better.

After a few weeks of heavy usage, I started noticing some problems with VMware. Every once in a while the guest OS would freeze for about a second. I didn’t pay too much attention to it at first, but it slowly started to wear on me. I eventually realized it always happened when I used tab completion in the shell, and that the real cause was playing sounds. It’s still progress over Hyper-V’s inability to play any audio, but it was not exactly a pleasant experience.

The other, far more severe, issue was a general lack of performance. It just didn’t feel like I was running Ubuntu on hardware, or even close to it. I experienced constant lag while typing, alt+tab took about a second to show up, compiling code was weirdly slow, video playback was unusable, and everything was just generally sluggish and unresponsive. Overall it was usable, but far from ideal.

Today I finally broke down and decided to give yet another hypervisor a shot. Next up came VirtualBox. I didn’t have high expectations, but VMware was starting to slow me down so I had to try something. Installation was even easier since VirtualBox can just use VMware images. Then came the pleasant surprise. Straight out of the box performance was noticeably better. Windows moved without lagging, alt+tab reaction was instantaneous, and sound playback just worked. Once I installed the guest additions and enabled video acceleration, video playback started functioning too. I still can’t play 4K videos, but at least my laptop doesn’t crawl to a halt on every video ad.

As a cherry on top, VirtualBox was also able to properly set the resolution on the guest OS at boot time. In VMware, I had to leave and re-enter full screen once after login for the real resolution to stick. Switching inputs between guest and host in VirtualBox is also easier. It requires just one key (right ctrl) as opposed to two with VMware (left ctrl+alt).

I realize these results depend on many things like hardware, drivers, host/guest versions, etc. I bet I could also solve some of these issues if I put some research into it. But for running Ubuntu 17.04 desktop on my Windows 10 Dell XPS 13 with the least hassle, VirtualBox is the clear winner. Let me know if you had a different experience or know how to make it run even smoother.

Things They Don’t Tell You About Hyper-V

I really wanted to like Hyper-V. It’s fully integrated into Windows and runs bare metal, so I was expecting stellar performance and a smooth experience. I was going to run a Linux box for some projects, get to work with Docker for Windows, and do it all with good power management, smooth transitions and without sacrificing performance.

And then reality hit.

  1. Hyper-V doesn’t support resolutions higher than 1920×1080 for Linux guests. And even that is only adjustable by editing the GRUB configuration, which requires a reboot. The viewer allows zooming, but not in full screen mode. With a laptop resolution of 3200×1800, that leaves me with a half-empty screen or a small window on the desktop.
  2. Networking support is mostly manual, especially when Wi-Fi is involved. You have to drop into PowerShell to manually configure a vSwitch with NAT. Need DHCP? Nope, can’t have it. Go install a third-party application.
  3. Audio is not supported for Linux guests. Just like with the resolution issue, you’re forced to use a remote X server or xrdp. Both are a pain to set up and didn’t provide acceptable performance for me.
  4. To top it all off, you can’t use any other virtualization solution when Hyper-V is enabled. Do you want both Docker for Windows and a normal Linux desktop VM experience? Too bad… VMware allows you to virtualize VT-x/EPT so you can run a hypervisor inside your guest. Hyper-V doesn’t.

It seems like Hyper-V is just not there yet. It might work well for Windows guests or Linux server guests, but for a Linux desktop guest it’s just not enough.

False Positive Watch

While debugging any issue that arises on Windows, my go-to trick is blaming the anti-virus or firewall. It almost always works. As important as these security solutions are, they can be so disruptive at times. For developers this usually comes in the form of a false positive. One day, out of the blue, a user emails you and blames you for trying to infect their computer with Virus.Generic.Not.Really.LOL.Sue.Me.1234775. This happened so many times with NSIS that someone created a false positive list on our wiki.

There are a lot of reasons why this happens and a lot of ways to lower the chances of it happening, but at the end of the day, chances are it’s going to happen. It even happened to Chrome and Windows itself.

So I created False Positive Watch. It’s a simple free service that periodically scans your files using VirusTotal and sends you an email if any of your files are erroneously detected as malware. You can then notify the anti-virus vendor so they can fix the false positive before it affects too many of your customers.

I use it to get notifications about NSIS and other projects, but you can use it for your projects too for free. All you need is to supply your email address (for notifications) and upload the file (I delete it from my server after sending it to VirusTotal). In the future I’m going to add an option to just supply the hash instead of the entire file so you can use it with big files or avoid uploading the file if it’s too private.
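
The hash-based check would look something like this sketch against the current VirusTotal v3 API (the service itself may use a different endpoint, and the API key is obviously a placeholder):

import hashlib
import sys

import requests

API_KEY = "your-virustotal-api-key"

def check(path):
    # look up the report by SHA-256 so the file itself never leaves your machine
    sha256 = hashlib.sha256(open(path, "rb").read()).hexdigest()
    report = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": API_KEY},
    )
    report.raise_for_status()
    stats = report.json()["data"]["attributes"]["last_analysis_stats"]
    if stats["malicious"] or stats["suspicious"]:
        print(f"{path}: flagged by {stats['malicious']} engines -- possible false positive")

check(sys.argv[1])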

Docker Combo Images


I’ve been working with Docker a lot for the past year and it’s pretty great. It especially shines when combined with Kubernetes. As the projects grew more and more complex, a common issue I kept encountering was running both Python and JavaScript code in the same container. Certain Django plugins require Node to run, Serverless requires both Python and Node, and sometimes you just need some Python tools on top of Node to build.

I usually ended up creating my own image containing both Python and Node with:

FROM python:3

# the python image already runs as root, so no sudo is needed
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs

# ... rest of my stuff

There are two problems with this approach.

  1. It’s slow. Installing Node takes a while and doing it for every non-cached build is time-consuming.
  2. You lose the Docker way of just pulling a nice prepared image. If Node changes their deployment method, the Dockerfile has to be updated. It’s much simpler to just docker pull node:8.

The obvious solution is going to Docker Hub and looking for an image that already contains both. There are a bunch of those, but they all look sketchy and very old. I don’t feel like I can trust them to have the latest security updates, or any updates at all. When a new version of Python comes out, I can’t trust those images to get new tags with the new version, which means I’d have to go looking for a new image.

So I did what any sensible person would do. I created my own (obligatory link to XKCD #927 here). But instead of creating and pushing a one-off image, I used Travis.ci to update the images daily (update 2022: GitHub Actions). This was actually a pretty fun exercise that allowed me to learn more about the Docker Python API, Docker Hub, and Travis.ci. I tried to make it as easily extensible as possible so anyone can submit a PR for a new combo like Node and Ruby, Python and Ruby, or Python and Java, etc.
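
The daily job comes down to a handful of Docker SDK calls, roughly like this sketch (not the actual build script; the path is an example and it assumes you are already logged in to Docker Hub):

import docker

client = docker.from_env()
# build from a generated Dockerfile, always pulling the latest base images
image, build_logs = client.images.build(
    path=".",
    tag="combos/python_node:3_6",
    pull=True,
)
# push the combo tag to Docker Hub
for line in client.images.push("combos/python_node", tag="3_6", stream=True, decode=True):
    print(line)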

The end result allows you to use:

docker run --rm combos/python_node:3_6 python3 -c "print('hello world')"
docker run --rm combos/python_node:3_6 node -e "console.log('hello world')"

You can rest assured you will always get the latest version of Python 3 and the latest version of Node 6. The image is updated daily. And since the build process is completely transparent on Travis.ci you should be able to trust that there is no funny business in the image.

Images: https://hub.docker.com/r/combos/
Source code: https://github.com/kichik/docker-combo
Build server: https://github.com/kichik/docker-combo/actions

Compatible Django Middleware

Django 1.10 added a new style of middleware with a different interface and a new setting called MIDDLEWARE instead of MIDDLEWARE_CLASSES. Creating a class that supports both is easy enough with MiddlewareMixin, but that only works with Django 1.10 and above. What if you want to create middleware that works with all versions of Django so it can be easily shared?

Writing a compatible middleware is not too hard. The trick is having a fallback for when the import fails on any earlier versions of Django. I couldn’t find a full example anywhere and it took me a few attempts to get it just right, so I thought I’d share my results to save you some time.

import os

from django.core.exceptions import MiddlewareNotUsed
from django.shortcuts import redirect

# MiddlewareMixin only exists in Django 1.10+; older versions just need a plain class
try:
    from django.utils.deprecation import MiddlewareMixin
except ImportError:
    MiddlewareMixin = object

class CompatibleMiddleware(MiddlewareMixin):
    def __init__(self, *args, **kwargs):
        if os.getenv('DISABLE_MIDDLEWARE'):
            raise MiddlewareNotUsed('DISABLE_MIDDLEWARE is set')

        super(CompatibleMiddleware, self).__init__(*args, **kwargs)

    def process_request(self, request):
        if request.path == '/':
            return redirect('/hello')

    def process_response(self, request, response):
        return response

CompatibleMiddleware can now be used in both MIDDLEWARE and MIDDLEWARE_CLASSES. It should also work with any version of Django, so it’s easier to share.
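
Enabling it looks the same on both sides of the fence; only the setting name changes (the module path here is hypothetical):

# settings.py
import django

if django.VERSION >= (1, 10):
    MIDDLEWARE = [
        # ...
        'myapp.middleware.CompatibleMiddleware',
    ]
else:
    MIDDLEWARE_CLASSES = [
        # ...
        'myapp.middleware.CompatibleMiddleware',
    ]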

Dell XPS 13 9350 WiFi

Dysfunctional Broadcom WiFi card

I have had my Dell XPS 13 for almost half a year now. It’s the 2015 model, numbered 9350. Ever since I got it, the WiFi has had issues. It drops packets left and right, especially when the laptop is placed on an uneven surface like my lap or a carpet. I think it also has something to do with heat because it seems to take a while to show up after the computer is initially powered on. The issue worsened with time to the point where I simply can’t use it on my lap anymore.

I am far from alone with this issue. A simple Google search shows many people complaining about some variation of the same problem. Luckily, some of them actually figured out the culprit: the Broadcom WiFi card. Apparently some models come with an Intel WiFi card that works perfectly fine. In the 2016 model, Dell went ahead and completely replaced the faulty Broadcom card with a Killer card across all models.

For whatever stupid reason, I decided to call Dell Support before just replacing the card myself. They were predictably useless and wasted two and a half hours of my time. Tier 1 took two hours of blindly following steps from a piece of paper. Tier 3 actually tried some advanced WiFi settings I had never heard of, but that failed too. And then they saw my VirtualBox network adapters. They immediately jumped to the conclusion that I was sharing my connection with other computers (huh?) and that I needed to disable them. So we disabled them, and then, instead of running an actual test like moving the laptop to an uneven surface, they ran pingtest.net once and decided that one passing test meant they had fixed the issue. I hung up.

More angry at myself for wasting my own time than at Dell Support for not doing their job, I went ahead and purchased:

  1. Intel WiFi card 7265NGW
  2. Screwdriver set with Torx T5 because why make things easy to repair?
  3. Plastic pry tools because Torx is not hard enough

Once everything arrived, I followed the service manual and installed the new card in less than 10 minutes. With the right tools in hand, it was really simple and only cost $40. No need to wait 10-14 days for Dell to repair my laptop, and no need to waste my time convincing technical support that it was a hardware issue and that another Broadcom card wouldn’t help.

Needless to say, the WiFi finally works perfectly on any surface.