I spent a long time trying to figure out how to simulate a broken database connection in Django. The problem is that you want not only raw cursors to time out, but also all models accessing the database.

This means that wherever we call Model.objects.filter(), Model.objects.all(), or connection.cursor(), the operation should fail.

Google and ChatGPT both produced very disappointing results, which forced me to dive into Django’s source code and think of creative solutions to the problem.

As a result, I think people will find this useful, future-me will be able to google the right answers, and a future ChatGPT will be able to give the correct answer. Let me know if you find another way!

Option 1: Patch raw cursors

If you’re directly using a raw cursor, you can do the following:

from unittest.mock import patch
from django.db.utils import OperationalError

with patch('django.db.connection') as mock_connection:
    mock_connection.cursor.side_effect = OperationalError('Connection timed out')

    # write test that requires a database failure

The problem with this option is that it doesn’t work everywhere. It’s also not feasible to patch every place your models touch the database, so this approach can be brittle.

Option 2: Patch Django query internals

An approach that works for all models is to patch the internals:

from unittest.mock import patch
from django.db.utils import OperationalError

with patch('django.db.models.sql.compiler.SQLCompiler') as mock_compiler:
    mock_compiler.side_effect = OperationalError('Connection timed out')

    # write test that requires a database failure

The SQLCompiler is instantiated whenever a model tries to access the database, so whenever you call something like Model.objects.all(), the side effect is triggered.
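To see why patching the class is enough, here’s a minimal, Django-free sketch. FakeCompiler and TimedOut are hypothetical stand-ins for SQLCompiler and OperationalError: patch() replaces the class with a Mock, and the side_effect fires the moment anything tries to instantiate it, which is what Django does internally when a queryset executes.

```python
from unittest.mock import patch


class FakeCompiler:  # stand-in for django.db.models.sql.compiler.SQLCompiler
    pass


class TimedOut(Exception):  # stand-in for OperationalError
    pass


query_failed = False
with patch(f"{__name__}.FakeCompiler") as mock_compiler:
    mock_compiler.side_effect = TimedOut("Connection timed out")
    try:
        # Analogous to Django instantiating SQLCompiler inside Model.objects.all()
        FakeCompiler()
    except TimedOut:
        query_failed = True
```

Because the patch is applied at the class level, every instantiation fails, no matter which model triggered it.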

The disadvantage of this method is that it relies on Django internals, which can change over time. So, on a routine upgrade, you might find that your tests now fail.

Greatest Option 3: Leverage database wrappers

The best way I’ve found to simulate a broken database connection is to leverage database wrappers. This is at a higher level of abstraction than the internals, and thus a lot less brittle.

Django provides a context manager, connection.execute_wrapper(), that wraps every query execution on that connection. You can do anything inside the wrapper: the docs mainly recommend it for instrumentation, but anything goes, and it’s ideal for testing.

from django.db import connection
from django.db.utils import OperationalError

class QueryTimeoutWrapper:
    def __call__(self, execute, *args, **kwargs):
        raise OperationalError("Connection timed out")
        # To let the query run instead, you would use:
        # return execute(*args, **kwargs)

with connection.execute_wrapper(QueryTimeoutWrapper()):
    # write test that requires a database failure
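If your test also needs the database to “recover” partway through, the wrapper can carry a flag. This is a sketch, not PostHog’s actual code: the `(execute, sql, params, many, context)` signature is the one execute_wrapper calls with, and the OperationalError stub is only there so the snippet runs standalone without Django installed.

```python
try:
    from django.db.utils import OperationalError
except ImportError:
    # Stub so this sketch runs without Django installed
    class OperationalError(Exception):
        pass


class ToggleableTimeoutWrapper:
    """Compatible with connection.execute_wrapper(); flip `broken` to restore the DB."""

    def __init__(self):
        self.broken = True

    def __call__(self, execute, sql, params, many, context):
        if self.broken:
            raise OperationalError("Connection timed out")
        return execute(sql, params, many, context)
```

Inside `with connection.execute_wrapper(wrapper):`, set `wrapper.broken = False` at the point in the test where the database should come back.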

Read the docs for more information.

For a real world example, check out this PR in PostHog, where I leveraged this technique to write tests.

Why is this useful?

When a problem takes me longer than an hour to solve, I usually try to think of different approaches that side-step the issue. In this case, I couldn’t do without simulating a broken database.

I was trying to write defensive code, using exception handling and caching to ensure we don’t return 500 errors even when the database is down.

In this case, there are several database models that might be called, and I can’t keep track of them all. If I miss even one, things will blow up, so it’s important to test that my changes work in every case. It’s also a good test of whether caching works as intended: thanks to the patching above, any time a function hits the database instead of the cache, it raises the OperationalError.

Thanks to Karl for pointing me to Option 3.
