From Django to Serverless

A modernization story, and what we learned

The original architecture

Like many software companies out there, we had tools, languages and frameworks we were used to, and we over-used them for everything.
For us that was Django: an incredible framework for building web applications, with many qualities, its ORM (Object Relational Mapper) to cite one.

The older, Django-centric, architecture
  1. Storing incoming jobs in a SQL database, to keep track of their progress
  2. Calling the external service that would provide the results, asynchronously
  3. Storing the results in a NoSQL database
  4. Answering the clients' calls, returning the results
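The four steps above can be sketched in plain Python. This is only an illustration of the flow, not the original code: the function names and the in-memory dictionaries standing in for the SQL and NoSQL databases are all hypothetical.

```python
# Hypothetical sketch of the old job flow; names and the in-memory
# stores standing in for the SQL and NoSQL databases are illustrative.
import uuid

SQL_JOBS = {}       # stands in for the SQL jobs table
NOSQL_RESULTS = {}  # stands in for the NoSQL results store

def submit_job(payload):
    """Step 1: store the job in SQL to keep track of its progress."""
    job_id = str(uuid.uuid4())
    SQL_JOBS[job_id] = {"payload": payload, "state": "PENDING"}
    return job_id

def process_job(job_id, call_external_service):
    """Steps 2-3: call the external service, then store its results."""
    SQL_JOBS[job_id]["state"] = "RUNNING"
    result = call_external_service(SQL_JOBS[job_id]["payload"])
    NOSQL_RESULTS[job_id] = result
    SQL_JOBS[job_id]["state"] = "DONE"

def get_results(job_id):
    """Step 4: answer the client's call, returning the results."""
    state = SQL_JOBS.get(job_id, {}).get("state", "UNKNOWN")
    if state != "DONE":
        return {"state": state}
    return {"state": "DONE", "results": NOSQL_RESULTS[job_id]}
```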

Approaching the revolution

The fact that Django probably wasn’t the best choice was clear by now. We wanted to try out the serverless paradigm and this was the perfect opportunity!

The serverless framework

Core functionalities

The main job of the application was to read the jobs saved in the SQL database, interact with the external service and store the results in a NoSQL database.
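That core loop can be expressed as a small worker function. This is a minimal sketch under assumptions, not the production code: the three callables are injected so the same logic can run against the SQL database, the external API and DynamoDB in production, or against stubs locally.

```python
# Minimal sketch of the core worker Lambda's logic. The injected
# callables (fetch_jobs, call_service, store_result) are hypothetical
# seams for the SQL database, the external service and the NoSQL store.
import json

def process_pending_jobs(fetch_jobs, call_service, store_result):
    """Read pending jobs, call the external service, persist results."""
    processed = []
    for job in fetch_jobs():
        result = call_service(job["payload"])
        store_result(job["id"], json.dumps(result))
        processed.append(job["id"])
    return processed
```

Injecting the dependencies keeps the handler trivially testable without an AWS account, which matters once the code lives in a Lambda rather than a long-running Django process.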

REST APIs

After the core functionality was moved, only the APIs that the clients and the external service used to call our application remained to transfer.
For this task, the older application used Django REST Framework, with every URL corresponding to a Django view. We simply needed to shift every view into a separate Lambda, and change every bit of code that relied on old methods, such as database reads, into a call to another Lambda dedicated to that.
To avoid code duplication, we created a shared module containing the code that every Lambda function needs access to, and uploaded it as a common Lambda layer. I recommend you take a look at the packaging documentation from Serverless.
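A migrated view ends up looking roughly like this. The handler below is a hedged sketch, not the original code: the route, the `shared` layer module mentioned in the comment, and the stubbed task lookup are all illustrative.

```python
# Hypothetical sketch of one Django view migrated to a Lambda handler
# behind API Gateway (proxy integration). The "shared" module named in
# the comment is the common Lambda layer; names are illustrative.
import json

def get_task_status(event, context=None):
    """Return the state of a task, as the old Django view did."""
    task_id = (event.get("pathParameters") or {}).get("task_id")
    if not task_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing task_id"})}
    # In production this would go through the shared layer, e.g.
    #   from shared.db import get_task
    task = {"id": task_id, "state": "RUNNING"}  # stubbed for illustration
    return {"statusCode": 200, "body": json.dumps(task)}
```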


Now that every bit of code had been moved, only accessory functions remained.
For monitoring purposes, we had some cron commands running on the old server that told us how many tasks were running, what state they were in, and more; we moved each of these metrics to AWS CloudWatch. Many metrics were available by default, such as the number of tasks stored in DynamoDB, the number of client invocations (derived from the number of times a Lambda was called), or even the number of errors that occurred.
For metrics that weren't already provided by the AWS services, we set up another Lambda, scheduled to run periodically, that collects these metrics and pushes them to CloudWatch.
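The scheduled metrics Lambda can be sketched as follows. The namespace, metric name and stubbed task counts are assumptions for illustration; the `put_metric_data` call is the standard boto3 CloudWatch API, with the client injected so the shaping logic runs without AWS.

```python
# Sketch of the scheduled metrics Lambda. Namespace, metric name and
# the stubbed counts are illustrative; the client is injected so the
# payload-building logic can run (and be tested) without AWS access.
def build_metric_data(task_counts):
    """Turn {"RUNNING": 3, ...} into CloudWatch MetricData entries."""
    return [
        {
            "MetricName": "TasksByState",
            "Dimensions": [{"Name": "State", "Value": state}],
            "Value": count,
            "Unit": "Count",
        }
        for state, count in sorted(task_counts.items())
    ]

def handler(event, context=None, cloudwatch=None):
    """Scheduled entry point: collect counts, push them to CloudWatch."""
    counts = {"PENDING": 2, "RUNNING": 3}  # stubbed; would query the DB
    data = build_metric_data(counts)
    if cloudwatch is not None:  # e.g. boto3.client("cloudwatch")
        cloudwatch.put_metric_data(Namespace="App/Tasks", MetricData=data)
    return data
```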

A part of the CloudWatch dashboard to keep track of the system

The new architecture

A newer, serverless architecture

What we learned

After the whole project rolled out to production we had some minor bugs to fix, but the application went live pretty seamlessly. The cost was just a bit less than what we were spending on servers, and it could finally scale as much as needed.

  • If you use AWS API Gateway, read the docs about parameter passing, as it differs from a direct Lambda invocation. Also, be careful with API Gateway's pricing if many of your calls come from outside the application, as costs may spike
  • Try to have no more than 8–10 Lambdas in a Serverless service: beyond that, upload and deploy times suffer. Serverless lets you deploy a single function at a time, but every new function requires deploying the full package. If your project needs more Lambdas, split the application into separate services (for example, if you use two or more Lambdas to interact with MongoDB, they could form a separate service)
  • Try to tag everything, especially if you have multiple projects under your AWS account; this will help you better understand your costs and easily spot any inconsistencies
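On the first point, the difference is easy to show: a direct invocation hands your payload to the handler as the event itself, while API Gateway's proxy integration wraps it in a JSON string under `body`. A small normalizer, sketched below with illustrative field values, smooths this over:

```python
# Illustrates the parameter-passing difference noted above: a direct
# Lambda invocation receives the payload as the event itself, while
# an API Gateway proxy event wraps it as a JSON string under "body".
import json

def extract_payload(event):
    """Return the caller's payload regardless of invocation path."""
    if isinstance(event.get("body"), str):  # API Gateway proxy event
        return json.loads(event["body"])
    return event  # direct invocation: the payload is the event
```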

Software Engineer and full stack developer 💻 based in Italy — /in/nicologasparini/
