Migrating Showmax Catalogue API from Django to Falcon - Part I

One of the projects the CMS team is responsible for is the Showmax Catalogue API, which provides content metadata to end-user devices. In this series of blog posts, Vojtěch Štefka and I will talk about why and how we migrated this API from the Django framework to Falcon.

The pain of the Django monolith

Currently the Catalogue API is part of a monolithic Django project that also contains the BUI for managing metadata, the API for serving images, and also controls the encoding process on the Showmax platform.

We have three main reasons for wanting to move the Catalogue API out of the Django application. First, we found ourselves in the classic “Monolith First” situation. We started feeling blocked by not being able to deploy changes to the API separately from the BUI, or to scale the two independently. So we decided to start peeling off some functionality and moving it into separate microservices. Most of the Showmax APIs are already handled by microservices, so the approach also fits nicely into the ecosystem.

Second, Django is not the fastest framework out there for REST APIs. In the beginning it was extremely convenient to have the BUI and the APIs under one roof so they could share data models, but since we started using ElasticSearch as the data store for the APIs, this is no longer required. So we were free to select another framework without having to worry too much about the legacy stuff.

And this brings us to the third reason. At the time we started using ElasticSearch, the only reasonable Python library available was pyes. Unfortunately, it's rather slow, does not support ElasticSearch 2.0, and is no longer actively maintained by its author. So we also wanted to change the ElasticSearch library.

To Go or not to Go

When we were deciding which language to choose for the new Catalogue API microservice, we mainly focused on Python and Go. First, all the team members are already familiar with both of these languages. And second, there are excellent ElasticSearch libraries available for both of them - ElasticSearch DSL for Python and elastic for Go.

To figure out how much performance gain we could expect we created a proof-of-concept application in both Python and Go. This application provided one of the API endpoints that we intended to migrate.

Go performed roughly 20 times better than the existing Django application. We mostly used the standard library, with mux for routing being the only exception.

The same endpoint written in Python using the Falcon framework was nowhere near as fast as Go, but still 9 times faster than Django. We used uWSGI with gevent to run the application. Test results with ApacheBench on an Intel Xeon E3-1246 with 32 GB of RAM (100 requests in parallel):

                 Requests per second   Average time per request (ms)   Performance improvement
Django + uWSGI   1716.90               58.244                          baseline
Go               35244.40              2.837                           2053%
Falcon + uWSGI   16050.77              6.230                           935%

When our Ops team noticed us running these experiments, they were eager to join in and prove that they could do even better. Merlin Gaillard successfully improved the Go results using fasthttp. Arne Rusek experimented with PyPy, but surprisingly it was slower than the standard CPython interpreter. In the future we would like to revisit this experiment and do some profiling to figure out exactly why PyPy was so lethargic.

If we only cared about API performance, Go would be the clear winner here. But the truth is that we are a small team, and while we rewrite the APIs we also need to keep working on day-to-day stuff and introducing new features as per product requirements. And here's the catch. Writing the PoC application in Python felt like driving a Ferrari - we were there in the blink of an eye. With Go it was more like driving a harvester - we eventually got to the point where the two applications produced the same results, but getting there felt really sluggish.

We’ve decided to sacrifice some of the performance gain and stick with Python. This will also allow us to directly reuse some of the existing code.

Test flying Falcon

We started by rewriting API endpoints with few external dependencies and little business logic. Our first effort was the sections API, which returns the list of content sections available to the customer on their particular device and in their country. An example response:

{
    "count": 5,
    "items": [
        {
            "name": "Hollywood",
            "slug": "hollywood"
        },
        {
            "name": "Best of British",
            "slug": "best_of_british"
        },
        {
            "name": "kykNET",
            "slug": "kyknet"
        },
        {
            "name": "Mzansi",
            "slug": "mzansi"
        },
        {
            "name": "Kids",
            "slug": "kids"
        }
    ],
    "remaining": 0
}

We were positively surprised by the quality of Falcon's documentation. It's quite comprehensive, and we rarely had to hunt through the source code. We also appreciate that the project is actively maintained: we skimmed through the reported issues and feature requests on GitHub, and Kurt, the author, is very responsive and helpful. Also, writing unit tests for the APIs is so easy that you will no longer have to listen to your colleagues' lame excuses for not writing them.

So far the team is very happy with Falcon. We can now deploy the changes in the migrated API endpoints independently from the Django monolith. We’ve seen roughly a tenfold increase in performance compared to Django. So far rewriting the APIs felt very smooth and we’ve not hit any major snags.

Rewriting more complex API endpoints

As of this writing, we've migrated 6 API endpoints to Falcon. They all fall into the “simple & no dependencies” category. This week we started rewriting the first complex endpoint, the one responsible for returning actual content metadata. Here things start to get a bit more difficult. The data returned varies by the customer's country, the device they are using, and the version of the application installed. The queries to ElasticSearch are also much more complex than those of the endpoints migrated so far. In the next part of this series we will let you know how it goes (assuming we avoid doing something stupid).

Please check the original version of this article at