Top 7 Strategies to Kill Django ORM N+1 Queries in Production

Introduction: Why Django ORM N+1 Queries Wreck Performance

When I review slow Django apps in production, Django ORM N+1 queries are by far the most common (and most invisible) performance killer I run into. The app looks fine in development, passes code review, and even survives staging loads. Then real traffic hits, data grows, and suddenly simple pages start taking seconds instead of milliseconds.

The N+1 problem happens when your code executes one query to fetch a list of objects, and then runs an additional query for each of those objects to fetch related data. So instead of a single efficient join-like fetch, your view quietly triggers dozens or hundreds of small queries. On a local SQLite or tiny Postgres database, this barely registers; in production, against large tables and a busy connection pool, it can choke your API or web views.

What I’ve learned the hard way is that Django’s ORM makes it easy to write code that looks clean but generates terrible query patterns: template loops that access related fields, serializers that walk relationships, or business logic that lazily touches foreign keys inside Python loops. Without profiling tools or query logging, these Django ORM N+1 queries are almost impossible to spot by eye.

In this article, I’ll walk through seven practical strategies I use to find and eliminate N+1s in real Django projects—covering how to detect them, how to fix them with tools like select_related and prefetch_related, and how to prevent them from creeping back in. By the end, you should be able to ship Django code that scales gracefully as your data and traffic grow, instead of slowing to a crawl under production load.

1. Make Django ORM N+1 Queries Visible in Development

Every nasty production incident I’ve debugged caused by Django ORM N+1 queries had the same root issue: nobody saw the problem in development. The queries were there all along, but they were hidden behind fast laptops and tiny datasets. The first step to killing N+1s is to make them impossible to ignore while you still have cheap options to fix them.

Use Django Debug Toolbar to Inspect Per-View SQL

On most teams I’ve worked with, the Django Debug Toolbar is the quickest win for exposing N+1 queries in day-to-day development. Once it’s installed for your local settings, every page you load shows the total number of SQL queries and a detailed list with timings and stack traces. When I see a simple list view doing 150 queries, I know immediately that some relationship access is exploding into an N+1 pattern.

In development settings, you can wire it up like this:

# settings/dev.py
INSTALLED_APPS += [
    "debug_toolbar",
]

MIDDLEWARE = [
    "debug_toolbar.middleware.DebugToolbarMiddleware",
    *MIDDLEWARE,
]

INTERNAL_IPS = ["127.0.0.1"]

Then include the URLs:

# urls.py
from django.conf import settings
from django.urls import include, path

urlpatterns = [
    # ... your URLs
]

if settings.DEBUG:
    import debug_toolbar
    urlpatterns += [
        path("__debug__/", include(debug_toolbar.urls)),
    ]

After that, I keep an eye on the query count panel as I build views. Any list page crossing, say, 20–30 queries is a red flag that I’m likely dealing with Django ORM N+1 queries or missing prefetching.

Enable Query Logging and Slow Query Thresholds

For APIs—and for flows that aren’t easy to hit via browser—I rely heavily on query logging. In my experience, having SQL logs in the terminal while running the dev server makes N+1 issues painfully obvious as soon as they appear.

One simple way is to configure Django’s logger to print database queries in development:

# settings/dev.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
        },
    },
    "loggers": {
        "django.db.backends": {
            "handlers": ["console"],
            "level": "DEBUG",  # or INFO to reduce noise
        },
    },
}

For production-like debugging or staging, I prefer setting a higher level and using database-native slow query logging instead, so the logs don’t get flooded. But in local development, watching a stream of nearly identical SELECTs scroll past when I hit a single endpoint is how I often catch N+1s before they ever reach staging. You can even adopt a soft team rule like: if a single request generates more than a screenful of queries, stop and investigate.
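If you want an actual threshold rather than a firehose, Django attaches a duration attribute (in seconds) to every django.db.backends log record, so a small logging filter can surface only the slow queries. Here's a sketch; the 100 ms cutoff is an arbitrary starting point you should tune:

```python
# settings/dev.py -- sketch: only log queries slower than 100 ms, using the
# `duration` attribute Django sets on django.db.backends log records
import logging


class SlowQueryFilter(logging.Filter):
    def filter(self, record):
        # Django passes duration (seconds) via `extra` when logging each query
        return getattr(record, "duration", 0) > 0.1


LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "filters": {"slow_queries": {"()": SlowQueryFilter}},
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "filters": ["slow_queries"],
        },
    },
    "loggers": {
        "django.db.backends": {
            "handlers": ["console"],
            "level": "DEBUG",
        },
    },
}
```

The same filter idea carries over to staging settings, where you'd typically raise the cutoff so only genuinely slow statements reach the logs.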

Print SQL in Tests to Catch Regressions Early

One thing I learned the hard way was that it’s not enough to fix an N+1 once—you also need to prevent it from quietly coming back when someone changes a serializer or template. I like to add targeted tests that assert query counts for critical views or query functions, and print SQL when those tests fail. This turns Django ORM N+1 queries into a test-visible regression instead of a production surprise.

Here’s a simple example using Django’s CaptureQueriesContext in a test case:

# tests/test_queries.py
from django.db import connection
from django.test import TestCase
from django.test.utils import CaptureQueriesContext
from django.urls import reverse

class UserListQueryTests(TestCase):
    fixtures = ["users_with_profiles.json"]

    def test_user_list_uses_bounded_queries(self):
        url = reverse("user-list")

        with CaptureQueriesContext(connection) as ctx:
            response = self.client.get(url)

        self.assertEqual(response.status_code, 200)

        # Print the captured SQL before asserting, so a failing run shows
        # exactly which queries were executed
        for q in ctx.captured_queries:
            print(q["sql"], q["time"])

        self.assertEqual(len(ctx.captured_queries), 3)  # tweak as appropriate

This pattern has saved me more than once: a teammate adds an extra related field in a serializer, the query count jumps from 3 to 53, and the test immediately flags it. Inside those failures, I manually inspect the printed SQL to track down which access pattern reintroduced the N+1. For teams that want stricter guarantees, you can even add a custom test mixin or pytest plugin that fails when certain views exceed an agreed query budget.

2. Master select_related to Fix N+1 on ForeignKey and OneToOne

When I’m hunting Django ORM N+1 queries, select_related is usually my first and most effective tool. It tells Django to follow single-valued relationships (ForeignKey and OneToOne) in a single SQL query using joins, instead of lazily loading each related object later. Used correctly, it can turn dozens of follow-up queries into one well-structured SELECT.

How select_related Works Under the Hood

At a high level, select_related instructs the ORM to JOIN related tables and hydrate related objects from that joined row, instead of fetching them on demand. The resulting queryset still looks and behaves like a normal queryset in Python, but the related objects are already in memory when you access them.

For example, imagine the classic Book and Author models:

# models.py
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=255)

class Book(models.Model):
    title = models.CharField(max_length=255)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)

A naïve view might look like this:

# views.py (naïve)
from django.shortcuts import render
from .models import Book

def book_list(request):
    books = Book.objects.all()  # 1 query
    return render(request, "book_list.html", {"books": books})

Then in the template:

{% for book in books %}
  {{ book.title }} - {{ book.author.name }}
{% endfor %}

This looks harmless but triggers an N+1 pattern: Django runs 1 query for all books, then 1 extra query per book when accessing book.author. With select_related:

# views.py (optimized)

def book_list(request):
    books = Book.objects.select_related("author")  # still 1 query, with a JOIN
    return render(request, "book_list.html", {"books": books})

Now the same template loop generates a single SQL query with a JOIN to author, and all author data is available without extra trips to the database.

When select_related Fixes N+1 Cleanly

In my experience, select_related shines in three cases:

  • ForeignKey on your main queryset model (e.g., Book -> Author)
  • OneToOne relationships (e.g., User -> Profile)
  • Nested single-valued chains (e.g., Order -> Customer -> AccountManager)

To preload multiple related fields, you can pass them as arguments:

orders = (
    Order.objects
    .select_related("customer", "customer__account_manager")
)

That turns code like this:

{% for order in orders %}
  {{ order.id }} - {{ order.customer.name }} - {{ order.customer.account_manager.full_name }}
{% endfor %}

into a single efficient query instead of dozens. I’ve seen this alone shave hundreds of milliseconds off list endpoints once data volume grows.

Another subtle but powerful pattern is using select_related in DRF viewsets or class-based views that build querysets in get_queryset(), so every consumer (templates, serializers, APIs) benefits automatically:

from rest_framework.viewsets import ReadOnlyModelViewSet
from .models import Book
from .serializers import BookSerializer

class BookViewSet(ReadOnlyModelViewSet):
    serializer_class = BookSerializer

    def get_queryset(self):
        return Book.objects.select_related("author")

When I wire it up this way, I don’t have to remember to optimize each new endpoint that reuses the same queryset.

Common select_related Pitfalls That Keep N+1 Alive

Despite how powerful it is, I’ve seen plenty of codebases where select_related is present but N+1 problems still persist. The usual culprits are subtle, but once you know them, they’re easy to spot.

  • Using select_related on reverse or multi-valued relationships

    select_related only works on single-valued relations (ForeignKey, OneToOne) from the model you’re querying. Trying Book.objects.select_related("authors") when authors is a ManyToManyField or reverse relation simply won’t preload anything. For those, you need prefetch_related instead.

  • Forgetting nested relationships

    I’ve made this mistake more than once: I add select_related("customer") to fix one N+1, but serializers or templates also access order.customer.team. The customer is preloaded, but team still triggers N+1 queries. In those cases, the chain needs to be explicit, e.g. select_related("customer", "customer__team").

  • Applying select_related too late

    If you evaluate the queryset before calling select_related, you no longer have a queryset to optimize. For example:

    orders = list(Order.objects.all())  # queryset evaluated here
    orders = orders.select_related("customer")  # AttributeError: plain lists have no select_related
    

    The optimization must be chained at the queryset level before evaluation:

    orders = (
        Order.objects
        .select_related("customer")
        .all()
    )
    orders = list(orders)  # safe to evaluate now
        
  • Over-joining huge tables unnecessarily

    Once or twice, I got overexcited and slapped select_related() on everything. On very large tables with wide columns, this can backfire by creating massive joined rows. The fix is to be intentional: only follow the relationships you actually read in that view or serializer.

Whenever I suspect an N+1 on a ForeignKey or OneToOne relation, my default approach is to inspect the SQL (via debug toolbar or logging), add targeted select_related calls on exactly the relationships being accessed, and re-check the query count. Done carefully, that pattern consistently kills a whole class of Django ORM N+1 queries with minimal code changes.

3. Use prefetch_related for Many-to-Many and Reverse Relations

Once I’ve cleaned up ForeignKey N+1s with select_related, the remaining performance problems almost always hide in many-to-many and reverse relations. That’s where prefetch_related shines. Instead of joining everything into a giant row, it executes a small, bounded set of extra queries and stitches results together in Python, eliminating Django ORM N+1 queries on multi-valued relationships.

Fixing Classic Many-to-Many N+1 Patterns

Consider a simple blog setup where posts have many tags. I’ve seen this exact pattern cause dozens of unnecessary queries on a list page:

# models.py
from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=50)

class Post(models.Model):
    title = models.CharField(max_length=255)
    tags = models.ManyToManyField(Tag, related_name="posts")

Naïve view + template:

# views.py (naïve)
from django.shortcuts import render
from .models import Post

def post_list(request):
    posts = Post.objects.all()  # 1 query
    return render(request, "post_list.html", {"posts": posts})

{# post_list.html #}
{% for post in posts %}
  <h2>{{ post.title }}</h2>
  {% for tag in post.tags.all %}
    {{ tag.name }}
  {% endfor %}
{% endfor %}

This looks totally fine in code review, but it triggers an N+1: one query for posts, then one extra query per post for its tags. On a page with 50 posts, that’s 51 queries. The fix with prefetch_related is straightforward:

# views.py (optimized)

def post_list(request):
    posts = Post.objects.prefetch_related("tags")  # 1 + 1 queries total
    return render(request, "post_list.html", {"posts": posts})

Now Django runs:

  • 1 query to get all posts
  • 1 query to get all tag relations for those posts (through the M2M table + tags table)

and then attaches the tag lists to each post in memory. The template stays exactly the same: no coupling between your HTML and the optimization.

Handling Reverse ForeignKey Relations and Nested Prefetches

The other place I regularly see Django ORM N+1 queries is on reverse relationships, like a user with many orders. Because there’s no direct forward field on the main model, people often forget that accessing user.orders.all in a loop is just as expensive as hitting a many-to-many.

# models.py
class User(models.Model):
    email = models.EmailField(unique=True)

class Order(models.Model):
    user = models.ForeignKey(User, related_name="orders", on_delete=models.CASCADE)
    total = models.DecimalField(max_digits=10, decimal_places=2)

Naïve access:

{% for user in users %}
  {{ user.email }}
  {% for order in user.orders.all %}
    {{ order.total }}
  {% endfor %}
{% endfor %}

This will trigger 1 query for users, then 1 extra query per user for their orders. To fix it, I prefetch the reverse FK:

users = User.objects.prefetch_related("orders")

If I also need to touch deeper relationships on Order (for example, order.items or order.product.category), I’ve had good results using nested prefetches and Prefetch objects for more control:

from django.db.models import Prefetch

users = (
    User.objects
    .prefetch_related(
        Prefetch(
            "orders",
            queryset=(
                Order.objects
                .select_related("payment_method")  # FK on Order
                .prefetch_related("items")         # M2M or reverse on Order
            ),
        )
    )
)

In my own projects, this pattern has been key for complex dashboards: I prefetch a tree of related objects in a controlled number of queries, and then safely iterate through everything in templates or serializers without fear of surprise queries.

Using Prefetch Objects for Filtered and Renamed Prefetches

The most powerful (and underused) feature, in my experience, is the Prefetch object itself. It lets you apply filters, change ordering, or even rename the attribute where the prefetched results are stored. This is how I keep query counts low even when business rules get complicated.

Say you want users with only their recent orders attached:

from django.utils import timezone
from django.db.models import Prefetch

thirty_days_ago = timezone.now() - timezone.timedelta(days=30)

recent_orders_qs = (
    Order.objects
    .filter(created_at__gte=thirty_days_ago)
    .order_by("-created_at")
)

users = User.objects.prefetch_related(
    Prefetch("orders", queryset=recent_orders_qs, to_attr="recent_orders"),
)

for user in users:
    # uses the prefetched list; no extra queries
    for order in user.recent_orders:
        print(user.email, order.total)

Notice how to_attr lets us keep orders as the default manager while exposing a separate, filtered list recent_orders that’s fully prefetched. When I’m working with serializers, I often lean on this pattern to avoid weird filter logic in templates or views.

The main pitfalls I watch for with prefetch_related are:

  • Forgetting to prefetch the reverse or M2M side at all, assuming select_related will handle it.
  • Over-prefetching massive collections (e.g., thousands of related rows per parent) when I only need a subset—this is where Prefetch with filters really pays off.
  • Applying prefetch_related too late: once the queryset has been evaluated, the objects you already hold get no prefetched data, and calling it on a plain list() result raises AttributeError.

When I see a view looping over parents and touching any multi-valued attribute inside that loop, my default instinct is to reach for prefetch_related or a tuned Prefetch object. That habit alone has eliminated some of the worst Django ORM N+1 queries I’ve seen in production APIs.

4. Reshape Views, Templates, and Serializers to Avoid Hidden N+1

Once I started profiling real projects, I realized many Django ORM N+1 queries weren’t just about missing select_related or prefetch_related—they were the result of how views, templates, and serializers were structured. The data access pattern baked into your presentation layer often decides whether you make one query or hundreds.

Simplify Loops and Move Relationship Access into the View

In my experience, the worst N+1 offenders hide in nested template loops or Python loops that reach through multiple relationships. A common pattern looks like this:

{# user_list.html #}
{% for user in users %}
  {{ user.email }}
  {% for order in user.orders.all %}
    {{ order.total }} - {{ order.product.category.name }}
  {% endfor %}
{% endfor %}

Even if you prefetch orders, that extra hop to product.category can still introduce fresh Django ORM N+1 queries. What’s worked well for me is:

  • Flattening data needs: decide exactly what fields the template needs.
  • Preparing those relationships explicitly in the view.
  • Passing a simple structure to the template, so it doesn’t walk deep relationships on its own.

Reshaped view:

# views.py
from django.db.models import Prefetch
from .models import User, Order

def user_list(request):
    orders_qs = Order.objects.select_related("product__category")

    users = (
        User.objects
        .prefetch_related(Prefetch("orders", queryset=orders_qs, to_attr="prefetched_orders"))
    )

    context_users = []
    for user in users:
        context_users.append(
            {
                "email": user.email,
                "orders": [
                    {
                        "total": order.total,
                        "product_name": order.product.name,
                        "category_name": order.product.category.name,
                    }
                    for order in user.prefetched_orders
                ],
            }
        )

    return render(request, "user_list.html", {"users": context_users})

Template becomes a simple renderer with no ORM access at all:

{% for user in users %}
  {{ user.email }}
  {% for order in user.orders %}
    {{ order.total }} - {{ order.product_name }} ({{ order.category_name }})
  {% endfor %}
{% endfor %}

After getting burned by template-driven N+1 a few times, I now try to keep templates “dumb”: they render data, they don’t traverse relationships.

Rework Serializers and DRF Viewsets to Respect Query Budgets

APIs using Django REST Framework are another place I routinely find hidden N+1 issues. You can have a perfectly optimized get_queryset(), but a nested serializer that lazily accesses related fields inside a loop will quietly undo your work.

A typical problematic pattern:

# serializers.py (naïve)
from rest_framework import serializers
from .models import Order

class OrderSerializer(serializers.ModelSerializer):
    product_name = serializers.CharField(source="product.name")
    category_name = serializers.CharField(source="product.category.name")

    class Meta:
        model = Order
        fields = ["id", "total", "product_name", "category_name"]

If your viewset doesn’t select and prefetch correctly, every serialized order can trigger extra queries. I now treat serializers and viewsets as a pair: the viewset is responsible for loading everything the serializer will touch.

# views.py
from rest_framework.viewsets import ReadOnlyModelViewSet
from .models import Order
from .serializers import OrderSerializer

class OrderViewSet(ReadOnlyModelViewSet):
    serializer_class = OrderSerializer

    def get_queryset(self):
        # Explicitly load what the serializer needs
        return (
            Order.objects
            .select_related("product__category")
        )

In a few performance-critical APIs, I’ve gone one step further and introduced lightweight “projection” serializers that work off pre-shaped dictionaries instead of ORM instances—very similar to how I reshaped context for templates. That removed ORM access from the serialization layer entirely and made query counts much more predictable.
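As a rough illustration of that projection idea, here's a minimal, framework-free sketch (names and row shapes are illustrative, not from a real project): the view runs a single .values() query up front, and the "serializer" is just a function over plain dicts:

```python
# A minimal "projection" sketch: the view (or a service function) shapes
# plain dicts up front, so serialization never touches the ORM.

def project_orders(order_values):
    """order_values: rows from Order.objects.values(...) -- already plain dicts."""
    return [
        {
            "id": row["id"],
            "total": str(row["total"]),
            "product_name": row["product__name"],
            "category_name": row["product__category__name"],
        }
        for row in order_values
    ]


# Stand-in data shaped like a .values("id", "total", "product__name",
# "product__category__name") queryset:
rows = [
    {"id": 1, "total": "19.99", "product__name": "Widget", "product__category__name": "Tools"},
]
print(project_orders(rows))
```

Because the projection only reads keys that the single .values() query already loaded, there is no code path left that can lazily trigger another query during serialization.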

My rule of thumb now: whenever I see nested serializers or templates walking across more than one relationship hop, I pause and ask, “Can I reshape this in the view and guarantee all the data is loaded up front?” Answering that question early has saved me from some nasty Django ORM N+1 queries in production.

5. Add Query Hints: only(), defer(), and values() for Lean Results

After I’ve fixed the obvious Django ORM N+1 queries with select_related and prefetch_related, the next big win is usually trimming what I fetch. There’s no point joining and prefetching efficiently if each row still hauls around huge blobs of data I never use. That’s where only(), defer(), and values() come in: they keep result sets lean and memory-friendly.

Optimizing Models with only() and defer()

In my experience, only() and defer() are most useful on “fat” models (lots of fields, big text or JSON columns) that you still want as full model instances. only() whitelists fields to load immediately; everything else is deferred until accessed, while defer() does the opposite and blacklists specific fields.

Imagine a Product model with a huge description and a JSON metadata field that most list views don’t need:

# models.py
class Product(models.Model):
    name = models.CharField(max_length=255)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    description = models.TextField()
    metadata = models.JSONField(default=dict)

For a list endpoint that just needs name and price, I’ll often do:

# views.py
from .models import Product

def product_list(request):
    products = (
        Product.objects
        .only("id", "name", "price")  # defer heavy fields by default
    )
    # combine with select_related/prefetch_related as needed
    return render(request, "product_list.html", {"products": products})

The ORM still returns Product instances, but big fields are left out of the initial SELECT. As long as the template doesn’t access description or metadata, there are no extra queries and far less data moves over the wire. If someone later adds {{ product.description }} to the template, Django will transparently fire a per-row query to retrieve it—so I only use only()/defer() when I’m confident about how instances are used.

When I want to keep all fields except one or two heavy ones, I flip it around:

products = Product.objects.defer("description", "metadata")

This has been especially helpful in admin-like pages and background tasks where tens of thousands of rows are processed, and memory pressure really matters.

Using values() and values_list() for Pure Data Pipelines

If I don’t actually need model behavior at all—just raw data—I reach for values() or values_list(). These return plain dictionaries or tuples instead of model instances, which means less work for the ORM and less memory used for large batches. I’ve found this particularly effective in reporting endpoints and export jobs.

Example: exporting order summaries to CSV without caring about model methods:

# reports.py
from .models import Order

def get_order_summaries():
    qs = Order.objects.values("id", "total", "created_at", "user__email")

    # qs yields dicts like {"id": 1, "total": ..., "created_at": ..., "user__email": ...}
    for row in qs:
        yield row

Here values() is quietly doing three jobs at once:

  • it follows the user foreign key inside the same SELECT, so there’s no N+1 on user and no separate select_related call is needed
  • it keeps the SELECT narrow to exactly the listed columns, much like only() does for model instances
  • it returns plain dicts, so Django never instantiates full Order objects at all

One subtle lesson I’ve learned: values() short-circuits a lot of ORM magic, which is great for performance but means you should keep that data path very deliberate—pass it to a CSV writer, JSON response, or reporting layer, not back into code that expects full models.

While these query hints don’t directly fix Django ORM N+1 queries, they stack beautifully on top of your existing select_related/prefetch_related work. The result is fewer queries and less data per query, which is exactly what you want in high-traffic production environments.

6. Enforce N+1 Detection in CI with Tests and Linters

Every time I’ve fixed nasty Django ORM N+1 queries without adding guardrails, they eventually came back. Someone tweaks a serializer, adds a new template field, and suddenly the query count doubles again. The only reliable way I’ve found to keep performance stable is to codify expectations in tests and, where possible, static checks that run in CI.

Add Query Count Assertions to Critical Tests

My first line of defense is always query-count tests for critical views and endpoints. Django gives us assertNumQueries, which I wrap in helper functions so the intent is clear and easy to reuse. Whenever I ship a performance-sensitive view, I add a test that fails loudly if someone reintroduces a Django ORM N+1 query later.

Here’s a simple pattern I’ve used on several teams:

# tests/test_query_budgets.py
from django.test import TestCase
from django.urls import reverse

class QueryBudgetMixin:
    def assertQueryBudget(self, num, func, *args, **kwargs):
        """Assert a hard query budget for a block of code."""
        with self.assertNumQueries(num):
            return func(*args, **kwargs)

class UserListQueryTests(QueryBudgetMixin, TestCase):
    fixtures = ["users_with_orders.json"]

    def test_user_list_keeps_query_count_bounded(self):
        url = reverse("user-list")

        def request():
            response = self.client.get(url)
            self.assertEqual(response.status_code, 200)

        # If someone reintroduces an N+1, this number will jump and the test fails
        self.assertQueryBudget(4, request)

In my experience, the exact number (here 4) is less important than having a finite and intentional budget. I also like to comment the assumptions (“1 for users, 1 for orders, 1 for products, 1 for categories”) so future maintainers understand why bumping the budget might hide a regression.

The same idea works well for Django REST Framework: exercise the API with the test client, assert on response status and payload shape, then enforce a query budget for that endpoint. Over time, you end up with a small suite of performance tests that protect your hottest paths.

Introduce Linters and CI Rules Around ORM Usage

While runtime tests catch regressions at execution time, I’ve found static checks and linters helpful for flagging risky patterns before they run. They’re not perfect, but they nudge the team toward safer defaults and make Django ORM N+1 queries less likely in new code.

Some approaches that have worked for me:

  • Custom flake8 or pylint rules that warn when certain modules (like views.py or DRF viewsets) access multi-valued relations in loops without passing through a central, optimized get_queryset().
  • CI scripts that run a small subset of performance tests with DEBUG=True and query logging enabled, then fail if any request exceeds a configured query threshold.
  • Team policies enforced via code review bots (e.g., GitHub Actions comments) that remind authors to add assertNumQueries when they touch critical views or serializers.

Here’s a simple example of a test you could wire into CI as a smoke check for the most important endpoints:

# tests/test_smoke_query_budgets.py
from django.test import TestCase
from django.urls import reverse

CRITICAL_ENDPOINTS = [
    ("user-list", 5),
    ("order-list", 8),
]

class SmokeQueryBudgetTests(TestCase):
    fixtures = ["users_with_orders.json"]

    def test_critical_endpoints_respect_query_budgets(self):
        for name, max_queries in CRITICAL_ENDPOINTS:
            url = reverse(name)
            with self.subTest(endpoint=name):
                with self.assertNumQueries(max_queries):
                    response = self.client.get(url)
                    self.assertEqual(response.status_code, 200)

Is this a perfect guard against every regression? No. But in my experience, even a handful of these tests, combined with review discipline and static checks, dramatically reduce the chance that a stray serializer change will smuggle Django ORM N+1 queries back into production.

7. Monitor Django ORM N+1 Queries in Production

Even with solid tests and linters, some Django ORM N+1 queries only reveal themselves under real user traffic and real data. In my own projects, the biggest surprises have come from edge cases: pagination at high page numbers, unusual filters, or rare user roles. That’s why I treat runtime monitoring as the final safety net—tracking query patterns and alerting when something smells like N+1 in production.

Use APM, Sampling, and Custom Logging to Spot Query Spikes

Application Performance Monitoring (APM) tools have saved me hours of guesswork. Most modern APMs (Datadog, New Relic, Sentry Performance, etc.) can show you per-request SQL counts, slow queries, and even traces of which view or serializer triggered them. When I wire this up, I always start by defining a few key signals:

  • Requests with unusually high SQL counts (e.g., > 50 queries per request).
  • Endpoints whose query counts scale with list size (classic Django ORM N+1 symptom).
  • Sudden regressions where an endpoint’s median query count jumps after a deploy.

If you don’t have APM in place yet, Django’s own query logging can go a long way. In one project, I added a simple middleware that logs SQL counts and the slowest queries for a small sample of requests in production. That was enough to catch a few nasty N+1 patterns that only appeared for certain filters.

# middleware.py
import logging
import random

from django.conf import settings
from django.db import connection

logger = logging.getLogger(__name__)

class QueryStatsMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Sample a small fraction of requests to keep overhead low
        sample_rate = getattr(settings, "QUERY_STATS_SAMPLE_RATE", 0.01)
        if random.random() >= sample_rate:
            return self.get_response(request)

        # connection.queries is only recorded when DEBUG=True, so force
        # query logging on for this sampled request
        connection.force_debug_cursor = True
        connection.queries_log.clear()
        try:
            response = self.get_response(request)
        finally:
            connection.force_debug_cursor = False

        num_queries = len(connection.queries)
        if num_queries > 50:
            logger.warning(
                "High query count",
                extra={"path": request.path, "num_queries": num_queries},
            )

        return response

This middleware is intentionally simple, but the idea is powerful: sample real traffic, log paths with suspiciously high query counts, and investigate those first. In my experience, the endpoints that regularly log 100+ queries per request almost always hide Django ORM N+1 queries.
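If you'd rather not write middleware at all, Django can also dump raw SQL straight to your logs through its built-in django.db.backends logger. Be aware that this logger only emits when DEBUG is True (or query recording is otherwise forced), so treat it as a development aid rather than a production monitor. A minimal settings sketch:

```python
# settings.py -- route Django's SQL logger to the console.
# Only emits when DEBUG = True, so this is a development aid.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        "django.db.backends": {
            "handlers": ["console"],
            "level": "DEBUG",  # each executed query is logged at DEBUG level
        },
    },
}
```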

Add Tracing and Per-Endpoint Budgets in Production Dashboards

Once I know which endpoints matter most (usually from business metrics and APM data), I set query budgets per endpoint and track them in dashboards. This doesn’t have to be fancy: even a simple Prometheus metric or periodic log-based report can highlight when an endpoint suddenly exceeds its historical query profile.

For example, you might expose a simple metric counter in middleware:

# pseudo-metrics example
from django.db import connection
from my_metrics import record_gauge  # imaginary metrics helper

class QueryMetricsMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # Note: len(connection.queries) is only meaningful when query
        # recording is enabled (DEBUG=True or force_debug_cursor set)
        num_queries = len(connection.queries)
        # resolver_match can be None (e.g., 404s), so fall back to the path
        match = getattr(request, "resolver_match", None)
        endpoint = match.view_name if match else request.path
        record_gauge("http_endpoint_sql_queries", num_queries, tags={"endpoint": endpoint})
        return response

With that in place, I’ve been able to create simple charts like “SQL queries per request for /api/orders/” over time. When a new deploy quietly adds a Django ORM N+1 pattern, the chart spikes and I know exactly where to look.
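The budget idea itself is tiny and doesn't need any framework. Here's a framework-free sketch — the endpoint names and numbers are illustrative, not from any real project:

```python
# Hypothetical per-endpoint query budgets, tuned from historical APM data.
BUDGETS = {
    "/api/orders/": 10,
    "/api/users/": 5,
}
DEFAULT_BUDGET = 25  # fallback for endpoints without an explicit budget

def over_budget(endpoint, num_queries):
    """Return True when a request used more SQL queries than its budget allows."""
    return num_queries > BUDGETS.get(endpoint, DEFAULT_BUDGET)
```

Wired into the metrics middleware above, a function like this lets you emit a single boolean "budget exceeded" event per request, which is much easier to alert on than raw query counts.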

My rule of thumb: eliminate Django ORM N+1 queries in development, prove it with tests and budgets in CI, then watch for regressions under real load with APM, logging, and metrics. Combining all three layers has been the most reliable way, in my experience, to keep performance from drifting as the codebase evolves (see also "Performance and optimization" in the Django documentation).

Conclusion: A Playbook for Eliminating Django ORM N+1 Queries

When I look back at the worst performance bugs I’ve debugged, Django ORM N+1 queries were almost always at the center of them. The good news is that once you have a playbook, they become predictable and manageable instead of mysterious.

The core workflow I rely on now looks like this:

  • Detect locally: enable query logging and django-debug-toolbar, then hunt for endpoints whose queries scale with list size.
  • Fix at the queryset level: apply select_related for ForeignKey/OneToOne, prefetch_related (and Prefetch) for many-to-many and reverse relations.
  • Reshape the presentation layer: simplify views, templates, and serializers so they render pre-fetched data instead of traversing relationships ad hoc.
  • Trim the payload: use only(), defer(), and values() to keep each query lean once N+1 issues are fixed.
  • Lock it in with tests and CI: enforce query budgets using assertNumQueries and lightweight smoke tests on critical endpoints.
  • Watch production: use APM, sampling, logs, and metrics to catch regressions and edge cases under real workloads.
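If you want to see the arithmetic behind those first two bullets without touching a real database, here's a framework-free simulation (all names are invented for illustration): the naive loop issues 1 + N queries, while a batched "prefetch" always issues exactly 2, no matter how many rows you render.

```python
class FakeDB:
    """Counts 'queries' so we can compare access patterns."""
    def __init__(self):
        self.query_count = 0

    def fetch_authors(self):
        self.query_count += 1
        return [1, 2, 3]  # author ids

    def fetch_books_for(self, author_id):
        self.query_count += 1  # one query per author: the N+1 trap
        return [f"book-{author_id}"]

    def fetch_books_for_many(self, author_ids):
        self.query_count += 1  # single batched query, like prefetch_related
        return {a: [f"book-{a}"] for a in author_ids}

def naive(db):
    """1 query for authors + 1 per author = 1 + N queries."""
    authors = db.fetch_authors()
    return {a: db.fetch_books_for(a) for a in authors}

def prefetched(db):
    """Always 2 queries, regardless of how many authors there are."""
    authors = db.fetch_authors()
    return db.fetch_books_for_many(authors)
```

With 3 authors the naive version performs 4 queries and the prefetched version 2; with 300 authors those numbers become 301 and 2, which is exactly why query counts that scale with list size are the telltale N+1 symptom.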

In my experience, teams that adopt this cycle—measure → optimize → codify → monitor—see query counts stabilize and performance incidents drop sharply. Once you’ve tamed Django ORM N+1 queries, the next natural steps are tackling slow individual queries (indexes, query plans) and higher-level caching (per-object, per-view, or HTTP-layer caching). Taken together, these practices turn the ORM from a liability into a dependable foundation for high-traffic Django apps.
