Modifying files stored outside of PostgreSQL while maintaining full transaction safety and stateless design

What do you do when data gets too big to store inline in the database, but you still want all that juicy transaction safety and read-write locking without building your own gatekeeping solution?

This is part of a series of articles showing how we’ve built the Paw Cloud syncing backend in order to provide a reliable and safe document sharing service for teams. Previously, I wrote about: AWS as a stateless platform for cloud syncing services.

Our main goal with this system was to ensure completely stateless operation of all our servers; this included being able to disconnect any instance at any time without causing any data corruption or loss. Since all our APIs are wrapped in transactions, I figured we should do the same for our [cloud syncing](https://paw.cloud/teams/) APIs.

As we chose to use Git’s underlying C library (libgit2) as our server-side backend for syncing cloud projects, it wouldn’t have been easy to store data in our PostgreSQL database: firstly, it would have been very costly, and secondly, it would have required many custom and rather complex adaptations to libgit2. So for both cost and convenience it made sense to store projects on Amazon S3 instead.

Loading a project from S3 on each interaction (even within the same AWS region) is slow, so we wanted the concept of a project being currently HOT when loaded onto a background server. Since we run more than one Django instance on our web-facing servers (these sit behind a load balancer), we needed a common way to ensure that no read or write happens concurrently with a write operation on the same document, even when the requests come through different client-facing API servers.

PostgreSQL Row Locks to the rescue!

Postgres lets us explicitly acquire a row lock. This lets us create a useful abstract class for the Django models we might need to lock.

PostgreSQL offers two kinds of row locks: FOR SHARE, a shared lock which we use for read operations, and FOR UPDATE, an exclusive lock which we use for write operations. Multiple shared locks can be held at the same time, while an exclusive lock requires that no other lock be held simultaneously.
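The compatibility rules can be summarized in a few lines of Python (a minimal sketch; the function is ours, not a PostgreSQL API):

```python
# Row-lock compatibility: any number of FOR SHARE locks can coexist,
# but a FOR UPDATE lock conflicts with every other row lock on the row.
SHARED = "FOR SHARE"      # read operations
EXCLUSIVE = "FOR UPDATE"  # write operations

def can_grant(held_locks, requested):
    """True if `requested` can be granted while `held_locks` are held."""
    if requested == EXCLUSIVE:
        return not held_locks            # needs the row to be lock-free
    return EXCLUSIVE not in held_locks   # shared coexists with shared
```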

Implementation of an exclusive lock in a Django model. Just add another method to get the shared lock by replacing FOR UPDATE with FOR SHARE (the full abstract model is here).
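A hedged sketch of such an abstract lockable model follows. The Django plumbing is reduced to a bare DB-API cursor so the locking SQL stays in focus; the class and method names are our own guesses, not Paw’s actual code:

```python
def row_lock_sql(table, exclusive):
    """SELECT that locks one row until the enclosing transaction ends."""
    mode = "FOR UPDATE" if exclusive else "FOR SHARE"
    return f'SELECT 1 FROM "{table}" WHERE id = %s {mode}'

class LockableModel:
    """In Django this would subclass models.Model with Meta.abstract = True."""
    db_table = None  # concrete models set their table name

    def __init__(self, pk, cursor):
        self.pk = pk
        self.cursor = cursor  # must belong to an open transaction

    def get_exclusive_lock(self):
        self.cursor.execute(row_lock_sql(self.db_table, True), [self.pk])

    def get_shared_lock(self):
        self.cursor.execute(row_lock_sql(self.db_table, False), [self.pk])
```

In real Django code the cursor would come from `django.db.connection`, inside `transaction.atomic()`.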

Postgres row locks are held until the current transaction ends; unlike advisory locks, they cannot be explicitly released without committing or rolling back the transaction.

These locks also block standard updates to the locked row while another transaction holds them, so we do not lock the document row directly but rather a document_state entity that points to the document. This allows normal database activity around the document (e.g. renames, permission changes) to proceed without being blocked by the row locks. On the document_state entity we only keep data relevant to the syncing operations, such as a version hash and the name of the persisted file on S3.
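A sketch of what the document_state entity might carry (the field names are assumptions based on the description above; in Django it would be a model with a foreign key back to the document row):

```python
from dataclasses import dataclass

@dataclass
class DocumentState:
    document_id: int   # pointer to the document row, which stays unlocked
    version_hash: str  # source-of-truth version, compared on every API call
    s3_filename: str   # name of the persisted archive on S3
```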

With row locks in place, based on the type of the user’s interaction we can acquire either a shared or an exclusive lock before the action is sent down to the worker server instance where the document is HOT and ready to be worked on.

To ensure we are always working on the correct version of the user’s document, each API call to the worker micro-service contains a version hash read from the database (which is considered to be the source of truth). If the file version cached on the worker does not match with the one from the database, we need to reload the document’s archive from S3.
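The worker-side staleness check could look like this (a sketch with assumed names): the hash from the database is authoritative, so a missing or mismatched cached hash forces a reload from S3.

```python
class WorkerCache:
    def __init__(self, load_from_s3):
        self.load_from_s3 = load_from_s3
        self.cache = {}  # document_id -> (version_hash, archive)

    def get_archive(self, document_id, db_version_hash):
        entry = self.cache.get(document_id)
        if entry is None or entry[0] != db_version_hash:
            # Cold or stale: pull the archive from S3 and re-cache it.
            archive = self.load_from_s3(document_id)
            self.cache[document_id] = (db_version_hash, archive)
        return self.cache[document_id][1]
```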

When a user performs a write on the project, the worker server generates a new UUID to use as a filename, uploads the updated archive to S3, and returns this filename and document hash back up to the Django instances. The updated file info is saved on the project’s document_state row (the row on which we acquired an exclusive lock).
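A sketch of that worker-side write (function and parameter names are assumptions, and SHA-256 is just one plausible choice of hash): because every write lands under a fresh UUID filename, a failed transaction simply leaves the previous file as the current version.

```python
import hashlib
import uuid

def write_project(archive_bytes, upload_to_s3):
    filename = f"{uuid.uuid4()}.zip"
    version_hash = hashlib.sha256(archive_bytes).hexdigest()
    upload_to_s3(filename, archive_bytes)
    # The caller stores these on document_state inside the locked transaction.
    return filename, version_hash
```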

This ensures that if at any point a write API call fails (for example the worker server is shut down in the middle of a write), the corresponding transaction will fail so the next set of API calls will use the older version since the database won’t have updated the version hash.

Exclusive lock decorator for our API view (Django REST framework). Our shared lock decorator is very much the same, just swapping out the get_exclusive_lock context call, plus some differences in user permissions and logging.
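A hedged sketch of such a decorator: `atomic` here is a stand-in for `django.db.transaction.atomic`, and `get_state` is assumed to fetch the document_state row for the request; all names are ours.

```python
from contextlib import contextmanager
from functools import wraps

@contextmanager
def atomic():
    # Placeholder transaction; the real one commits or rolls back,
    # releasing the row lock either way.
    yield

def exclusive_lock(get_state):
    def decorator(view):
        @wraps(view)
        def wrapped(request, *args, **kwargs):
            with atomic():
                state = get_state(request)
                state.get_exclusive_lock()  # blocks rival readers/writers
                return view(request, *args, **kwargs)
        return wrapped
    return decorator
```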

The project reloading process is the final area I want to touch on today. Since we block a Python thread while waiting for a lock, we need to do this reload on a separate thread so that the retry described above (line 25) is first in the queue after the reload. However, we don’t want to release our current exclusive lock (on line 21) before the reloader has requested one of its own: we know the project needs to be reloaded, so any other request that came in would also fail. Therefore, we need two separate threads: one that reloads the project, and another that watches all pending locks on projects and sends a message back once the reloader has a pending lock in the queue, so that the main thread can release its lock. Here, on line 23, the main thread waits for this message.
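The hand-off can be sketched with stdlib threading primitives (a deliberate simplification: the watcher thread is folded into a `threading.Event`, and a `threading.Lock` stands in for the Postgres row lock; in production the queueing happens inside Postgres and a watcher polls the pending locks):

```python
import threading

events = []
row_lock = threading.Lock()           # stand-in for the Postgres row lock
reloader_queued = threading.Event()   # "reloader has a pending lock"

def reload_project():
    reloader_queued.set()    # signal just before queueing for the lock
    with row_lock:           # blocks until the main thread releases
        events.append("reload from S3")

def failed_write():
    row_lock.acquire()       # the exclusive lock of the failed write
    t = threading.Thread(target=reload_project, daemon=True)
    t.start()
    reloader_queued.wait()   # wait until the reloader is in the queue
    events.append("release lock")
    row_lock.release()       # now the reload goes first in the queue
    t.join()
```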


Flow diagram of the threads and locks used during an API action. (Some additional PostgreSQL advisory locks, via pg_advisory_lock, are used on the reload thread to ensure only one reload is pending per document.)

To note here: since we need CPython daemon threads, we realized that we couldn’t use greenlet with our Django instances. greenlet is a Python library that provides lightweight coroutines by switching between pseudo-threads in user space, which does not mix well with real daemon threads. Before we implemented this feature we were running the Python server Gunicorn with greenlet-based workers; we needed to switch back to a standard uWSGI configuration.

An extra bonus of having lockable models in Django is that our other management cronjobs (billing, cleanup, emails) can run over a stateless distributed cluster of Django instances, while ensuring that a single database item is never processed by multiple jobs at once. For example, when running our billing cronjob we take an exclusive lock on the team database entity. This both protects us against duplicate credit card charges and ensures the team’s data does not change during billing, making the code much simpler as we do not need to check for that edge case. Once an exclusive lock is acquired, no other transaction can commit an update to that row, so we massively reduce the chances that our update in the billing cronjob is rejected by PostgreSQL due to a conflict.
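The cronjob pattern boils down to a few lines (a sketch with assumed names: `atomic` and `lock_team_exclusively` stand in for `django.db.transaction.atomic` and a SELECT ... FOR UPDATE on the team row):

```python
from contextlib import contextmanager

sql_log = []

@contextmanager
def atomic():
    sql_log.append("BEGIN")
    yield
    sql_log.append("COMMIT")

def lock_team_exclusively(team_id):
    # A rival cron instance running the same statement would block here
    # until our transaction commits, so the team is billed exactly once.
    sql_log.append(f'SELECT 1 FROM "teams" WHERE id = {team_id} FOR UPDATE')

def bill_team(team_id, charge):
    with atomic():
        lock_team_exclusively(team_id)
        charge(team_id)  # team data cannot change underneath us
```

With the real Django ORM the lock would typically be taken with `Team.objects.select_for_update().get(pk=team_id)` inside `transaction.atomic()`.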

We have noticed one clear issue with our system: if the tasks on our micro-service nodes suddenly start to take a long time and we don’t time out fast enough, we can end up with a very large number of concurrent database connections. As API calls start to fail more often, reload processes are started, producing a runaway effect. To protect against this, it is important to set strict timeouts when calling your micro-service backend.

Originally posted on Medium / Hackernoon | All code snippets are released under the MIT license