From Django to Serverless

A modernization story, and what we learned

Nicolò Gasparini
6 min read · Feb 24, 2020

This is the story of how an application developed the “old-fashioned” way was ported to a completely serverless app, as it arguably should have been from the beginning.

Before you start reading, I want to be clear that not every Django application can be shifted to serverless: it takes a specific use case, or you can port just a part of it if you think it’s worth the effort. Our original Django application was particularly suitable because it had no front-end, no user login, and only some (ten or so) APIs.

The original architecture

Like many software companies out there, we have tools, languages, and frameworks we are used to, and we over-use them for everything.
That was Django for us: it is an incredible framework for building web applications and it has many qualities, its ORM (Object Relational Mapper) to cite one.

But you might realize, especially as you learn about new technologies, that the one project where you used Django didn’t actually need any of its features.

Our application was very simple: a sort of middleware that collected tasks to do from many different clients. It would then connect to an external service that sent back the results.
The steps it followed are summed up in the picture below:

The older, Django-centric architecture
  1. Receiving the clients’ tasks via API calls
  2. Storing them in a SQL database, to keep track of their progress
  3. Asynchronously calling the external service that would provide the results
  4. Storing the results in a NoSQL database
  5. Answering the clients’ calls, returning the results

This worked fine for a couple of years, until the client requests became too many. Our Django app became the bottleneck; it wasn’t capable of scaling up.

Approaching the revolution

By this point it was clear that Django probably wasn’t the best choice. We wanted to try out the serverless paradigm, and this was the perfect opportunity!

If you need scalability, a managed, preferably serverless, approach is the way to go.

The serverless framework

We started out by searching for a framework that would help us approach the job in an organized and structured way. We chose Serverless.com because of its popularity and ease of use: even though we encountered some bugs and problems, in 2019 it was already a stable and complete enough framework, and it is updated constantly.

Core functionalities

The main job of the application was to read the tasks saved in the SQL database, interact with the external service, and store the results in a NoSQL database.

First of all, we moved the task storage from SQL to AWS DynamoDB, with every row corresponding to a single task. This created a very simple finite state machine, with a simple STATUS field representing the state of the task, from RECEIVED, to IN_PROGRESS, to DONE.
Every task also had a Time to Live attribute, set once the task was done, so that we kept the result for a while without worrying about deleting it when it was no longer needed.
To work with DynamoDB from Python, we used the PynamoDB library; it is no substitute for Django’s ORM, because of the different logic and syntax, but it helped us interact with the database.
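
To illustrate, here is a minimal sketch of such a task model with PynamoDB; the table name, region, and attribute names are illustrative assumptions, not our actual code:

```python
from datetime import datetime, timedelta, timezone

from pynamodb.attributes import TTLAttribute, UnicodeAttribute, UTCDateTimeAttribute
from pynamodb.models import Model


class Task(Model):
    """One DynamoDB row per task, acting as a tiny finite state machine."""

    class Meta:
        table_name = "tasks"   # hypothetical table name
        region = "eu-west-1"   # hypothetical region

    task_id = UnicodeAttribute(hash_key=True)
    # The state machine: RECEIVED -> IN_PROGRESS -> DONE
    status = UnicodeAttribute(default="RECEIVED")
    created_at = UTCDateTimeAttribute(default=lambda: datetime.now(timezone.utc))
    expires_at = TTLAttribute(null=True)  # set once the task is DONE


def mark_done(task: Task) -> None:
    """Flip the state and let DynamoDB expire the row a week later."""
    task.status = "DONE"
    task.expires_at = datetime.now(timezone.utc) + timedelta(days=7)
    task.save()
```

(Remember that the TTL feature must also be enabled on the DynamoDB table itself for the expiry to take effect.)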

Next, we moved every Django command, function, and piece of core functionality into a separate Lambda. The most important thing was to apply the KISS principle (Keep It Simple, Stupid): every Lambda performs one action only, be it reading from the database, receiving a request from the clients, and so on. Everything must be separated, and every Lambda can call other Lambdas when it needs to perform multiple operations. Also, whenever possible, call Lambdas asynchronously, so that you don’t need to wait for the response.
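
For example, a sketch of how one Lambda can hand work to another asynchronously with boto3; the function name and payload shape are assumptions:

```python
import json

import boto3

lambda_client = boto3.client("lambda")


def enqueue_processing(task_id: str) -> None:
    # InvocationType="Event" queues the invocation and returns immediately,
    # so the caller never waits for the downstream Lambda to finish.
    lambda_client.invoke(
        FunctionName="process-task",   # hypothetical downstream Lambda
        InvocationType="Event",
        Payload=json.dumps({"task_id": task_id}),
    )
```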

This also gave us an opportunity to do some refactoring, improving the existing code and, most importantly, removing every heavy, overkill external library that had been used and abused, such as Pandas. We opted for a cleaner, basic approach, using the Python standard library whenever possible.

REST APIs

After the core functionality was moved, only the APIs that the clients and the external service used to call our application remained to be transferred.
For this task, the older application used Django REST Framework, with every URL corresponding to a Django view. We simply needed to shift every view into a separate Lambda, and change every bit of code that used old methods, such as database reads, into a call to the Lambda dedicated to that operation, as sketched below.
To avoid code duplication, we created a shared module for the code that every Lambda function needs access to, which can easily be uploaded as a common Lambda layer. I recommend you take a look at the packaging documentation from Serverless.
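
As a sketch, one of these handlers behind API Gateway (lambda-proxy integration) could look like the following; the route, payload shape, and persistence call are assumptions:

```python
import json
import uuid


def create_task(event, context):
    """Handle POST /tasks: acknowledge a new task."""
    # With the lambda-proxy integration, API Gateway delivers the
    # request body as a JSON string inside the event.
    body = json.loads(event.get("body") or "{}")

    task_id = str(uuid.uuid4())
    # Persistence would go through the PynamoDB model sketched earlier, e.g.:
    # Task(task_id=task_id).save()

    return {
        "statusCode": 202,
        "body": json.dumps({"task_id": task_id, "status": "RECEIVED"}),
    }
```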

Monitoring

Now that every bit of code had been shifted online, only accessory functions remained.
For monitoring purposes, we had some cron commands running on the older server that would tell us how many tasks were running, which state they were in, and more; we moved every one of these metrics to AWS CloudWatch. Many metrics were actually available by default, such as the number of tasks stored in DynamoDB, the number of client invocations (derived from the number of times a Lambda was called), or even the number of errors that occurred.
For metrics that weren’t already obtainable from the AWS services, we set up another Lambda, scheduled to run periodically, that collects these metrics and pushes them to CloudWatch.
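
A sketch of what such a scheduled metrics Lambda could look like; the namespace and metric name are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")


def push_task_metrics(event, context):
    """Scheduled Lambda: count tasks by state and publish a custom metric."""
    # With the PynamoDB model sketched earlier, this could be something like:
    # in_progress = sum(1 for _ in Task.scan(Task.status == "IN_PROGRESS"))
    in_progress = 0  # placeholder for the real count

    cloudwatch.put_metric_data(
        Namespace="TaskMiddleware",   # hypothetical namespace
        MetricData=[{
            "MetricName": "TasksInProgress",
            "Value": in_progress,
            "Unit": "Count",
        }],
    )
```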

Part of the CloudWatch dashboard used to keep track of the system

The new architecture

A newer, serverless architecture

After everything had been moved and deployed to production, we had a situation similar to the one in the picture.

The architecture remained similar, but every task performed by Django commands and APIs had been divided into multiple Lambdas that call each other: directly, through DynamoDB streams, or by passing through AWS SNS (Simple Notification Service).
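
For instance, a Lambda triggered by the DynamoDB stream on the task table might look like this sketch (it assumes the stream is configured to emit new images, and the reaction to a DONE transition is illustrative):

```python
def on_task_change(event, context):
    """Triggered by the DynamoDB stream on the task table."""
    for record in event["Records"]:
        if record["eventName"] != "MODIFY":
            continue
        # Stream images use DynamoDB's attribute-value format, e.g. {"S": "DONE"}.
        new_image = record["dynamodb"].get("NewImage", {})
        if new_image.get("status", {}).get("S") == "DONE":
            print("task", new_image["task_id"]["S"], "is complete")
```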

SNS was used as a publisher/subscriber queue, where every action to perform was enqueued in a topic and consumed by the appropriate Lambda.
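
A sketch of both sides of that hop, with an assumed topic ARN and message shape:

```python
import json

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:task-actions"  # hypothetical ARN


def publish_action(task_id: str, action: str) -> None:
    """Producer side: enqueue an action on the topic."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"task_id": task_id, "action": action}),
    )


def consume_action(event, context):
    """Consumer side: a Lambda subscribed to the topic."""
    # SNS wraps each original message in an event envelope.
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        print("handling", message["action"], "for task", message["task_id"])
```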

What we learned

After the whole project was rolled out in production, we had some minor bugs to fix, but the application went live pretty seamlessly. The cost was just a bit less than what we had been spending on servers, and it could finally scale as much as needed.

Here is a recap of the lessons we learned along the way, which I hope can help others approaching the serverless world for the first time:

  • Be careful of threshold limits: there is a limit on the number of concurrent Lambda invocations you can have. By default it is 1,000, but you can open a ticket with AWS support to have it increased, by at least a couple of thousand
  • If you use AWS API Gateway, read the docs about parameter passing, as it works differently from direct Lambda invocation. Also, be careful with API Gateway’s pricing if many of your calls come from outside the application, as costs may spike
  • Try to have no more than 8 to 10 Lambdas in a Serverless service; beyond that, upload and deploy times suffer seriously. Serverless lets you deploy a single function at a time, but every new function requires deploying the full package. If your project requires more Lambdas, try to split the application into different services (for example, if you use two or more Lambdas to interact with MongoDB, they could form a separate service)
  • Try to tag everything, especially if you have multiple projects under your AWS account; this will help you understand your costs better and easily watch out for any inconsistencies


Nicolò Gasparini

Software Engineer and full stack developer 💻 based in Italy — /in/nicologasparini/