Introduction: Why You Need Better PostgreSQL Extension Development Tools
When I first started writing PostgreSQL extensions in C, I treated them like small shared libraries with a SQL wrapper. That mindset worked for simple functions, but it completely fell apart once I moved into serious work with SPI, background workers, and hooks. Debugging crashes inside the PostgreSQL backend, tracking memory contexts, and keeping ABI compatibility across versions quickly convinced me that generic C tooling just wasn’t enough.
By 2025, PostgreSQL has become a critical infrastructure layer for many organizations, and the expectations for extensions have risen with it. Modern extensions aren’t just a few CREATE FUNCTION wrappers; they embed complex business logic, custom indexing strategies, async workers, and hooks into planner, executor, and logging pipelines. Without the right PostgreSQL extension development tools, it’s easy to introduce subtle race conditions, memory leaks, or performance regressions that only show up under production load.
Specialized tooling gives C developers three big advantages:
- Deep visibility into the backend process – stepping through extension code safely, inspecting PGPROC state, and understanding how your hooks interact with core subsystems.
- Faster feedback while iterating – automating build, install, regression tests, and server restarts so you can tweak a background worker or SPI call and see results immediately.
- Safer upgrades and portability – catching API and catalog changes early when moving from, say, PostgreSQL 14 to 16+, instead of finding out after a failed deployment.
In my experience, the teams that invest in a focused toolchain ship more reliable extensions and spend far less time chasing backend crashes at 2 a.m. The rest of this article looks at the PostgreSQL extension development tools that make that possible for C developers today.
1. PostgreSQL Source Tree & contrib as Your Baseline Extension Toolkit
The most underrated of all PostgreSQL extension development tools is the PostgreSQL source tree itself. When I started writing C extensions, I kept bouncing between outdated blog posts and half-complete examples. The moment I began treating the core source and contrib directory as my primary reference, my productivity and code quality both jumped.
Inside the PostgreSQL tree you get real, production-grade examples of how to interact with SPI, memory contexts, background workers, and hooks. You also get the regression tests that core developers rely on every day. In my experience, copying patterns from battle-tested contrib modules is far safer than inventing new patterns from scratch.
Using contrib extensions as realistic templates
The contrib directory is effectively a library of example extensions, covering functions, types, indexes, background workers, and more. Whenever I’m designing something new, I start by finding the closest contrib module and reading it front to back.
- Custom types & operators – extensions like cube or hstore show how to define types, I/O functions, and operators properly.
- Index access methods – modules like btree_gin and pg_trgm demonstrate how to plug into indexing infrastructure.
- Internal APIs – contrib code shows safe patterns for using SPI_* APIs, memory contexts, error handling macros, and more (a minimal sketch follows this list).
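For instance, here is a minimal sketch of the SPI pattern you'll see over and over in contrib: connect, execute, check return codes, read the result, finish. The function name my_row_count and the query are placeholders of mine, not from any particular module:

#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_row_count);

/* Count rows via SPI, following the connect/execute/finish
 * pattern used throughout contrib. */
Datum
my_row_count(PG_FUNCTION_ARGS)
{
    int64   result = 0;

    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");

    if (SPI_execute("SELECT count(*) FROM pg_class", true, 0) != SPI_OK_SELECT)
        elog(ERROR, "SPI_execute failed");

    if (SPI_processed > 0)
    {
        bool    isnull;
        Datum   d = SPI_getbinval(SPI_tuptable->vals[0],
                                  SPI_tuptable->tupdesc, 1, &isnull);

        if (!isnull)
            result = DatumGetInt64(d);
    }

    SPI_finish();

    PG_RETURN_INT64(result);
}

Everything allocated between SPI_connect() and SPI_finish() lives in SPI-managed memory, which is exactly the kind of convention you pick up from reading contrib instead of guessing.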
On my own projects, I often start by cloning a contrib extension into a new directory and then trimming it down. That way, the extension skeleton already follows PostgreSQL coding conventions, Makefile patterns, and control file structure.
Leaning on the core Makefiles and regression test framework
The core build system and regression framework are also part of this baseline toolkit. They teach you how PostgreSQL expects extensions to be compiled, installed, and tested. Instead of crafting my own ad‑hoc scripts, I reuse the patterns already maintained by the community.
A minimal extension directory typically looks like this:
my_extension/
  my_extension.c
  my_extension.control
  Makefile
  sql/
    my_extension.sql
  expected/
    my_extension.out
And the Makefile usually follows the same style as contrib modules:
MODULES = my_extension
EXTENSION = my_extension
DATA = my_extension--1.0.sql
REGRESS = my_extension
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
This small amount of boilerplate buys you a lot: version-aware builds via pg_config, portable installation paths, and automatic integration with make installcheck for regression tests. When I first adopted this pattern, my CI pipelines for extensions went from fragile scripts to near-copy-paste from contrib.
If you want a deeper walk-through of using contrib modules as templates and extending the regression framework, check out 36.10. C-Language Functions in the PostgreSQL documentation.
2. PGXS and Modern Build Tooling for PostgreSQL Extension Development
Out of all the PostgreSQL extension development tools I rely on, PGXS is the one that quietly does the most heavy lifting. Instead of hand-crafting compiler flags, include paths, and linker options for every environment, I let PGXS pull everything from the target PostgreSQL installation. Once I embraced PGXS, my extensions stopped breaking every time a client upgraded their server or moved from Linux to macOS.
PGXS is a makefile infrastructure shipped with PostgreSQL that abstracts the details of building against a particular server version. It reads configuration via pg_config and wires up the right CFLAGS, LDFLAGS, include directories, and extension install paths. In 2025, I treat PGXS as the baseline and then layer modern tooling—like Ninja, CMake, and CI pipelines—on top of it.
Structuring a portable PGXS-based build
A portable build starts with a clean, PGXS-aware Makefile. I usually follow the same pattern as contrib modules so that anyone familiar with PostgreSQL can understand the layout instantly.
Here’s a minimal but realistic example for a C-based extension:
# Makefile for my_extension
MODULE_big = my_extension
OBJS = my_extension.o worker.o spi_utils.o
EXTENSION = my_extension
DATA = my_extension--1.0.sql
REGRESS = my_extension
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
In my experience, those last two lines are where the magic happens. PGXS injects the correct compiler options for the target PostgreSQL, so you can:
- Build the extension against different server versions without changing the Makefile.
- Compile cleanly on multiple platforms and distributions.
- Hook into make install and make installcheck in the same way core and contrib modules do.
When I need conditional compilation for different PostgreSQL versions—especially around SPI, background worker APIs, or new hook signatures—I combine PGXS with standard C feature checks:
#include "postgres.h"
#include "utils/guc.h"
void _PG_init(void)
{
#if PG_VERSION_NUM >= 150000
DefineCustomStringVariable("my_extension.mode",
"Extension mode for PG 15+.",
NULL,
&my_mode_guc,
"default",
PGC_POSTMASTER,
0,
NULL,
NULL,
NULL);
#else
/* Fallback for older PostgreSQL versions */
DefineCustomStringVariable("my_extension.mode",
"Extension mode for older versions.",
NULL,
&my_mode_guc,
"legacy",
PGC_POSTMASTER,
0,
NULL,
NULL,
NULL);
#endif
}
Because PGXS gives me the correct PG_VERSION_NUM and headers for the build target, I can keep one codebase that compiles cleanly from, say, PostgreSQL 12 through 16+ without forking the repository.
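As a concrete illustration, here's the kind of small compatibility shim I keep in a private header. The heap_open/table_open rename really did land in PostgreSQL 12; the my_table_* macro names are just my own convention:

#include "postgres.h"

#if PG_VERSION_NUM >= 120000
#include "access/table.h"
#define my_table_open(rel, lockmode)  table_open(rel, lockmode)
#define my_table_close(rel, lockmode) table_close(rel, lockmode)
#else
#include "access/heapam.h"
#define my_table_open(rel, lockmode)  heap_open(rel, lockmode)
#define my_table_close(rel, lockmode) heap_close(rel, lockmode)
#endif

The extension code calls my_table_open() everywhere, and the preprocessor picks the right core function for whichever PostgreSQL the build is targeting.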
Combining PGXS with modern build chains and CI
Where PGXS really shines for me is in CI and multi-environment setups. Instead of reinventing a build system, I let PGXS handle the PostgreSQL-specific bits and use modern tooling to orchestrate everything around it.
- Ninja or parallel make – I run make -j"$(nproc)" or use Ninja to speed up builds in larger extensions with many C files.
- CMake or Meson as wrappers – for some teams, a higher-level build description is useful; you can still shell out to PGXS-aware Makefiles for the actual compilation step.
- Docker and CI pipelines – I use Docker images with different PostgreSQL versions, call pg_config inside each container, and run make USE_PGXS=1 plus make installcheck to validate compatibility.
A simple CI script to build and test against the current server might look like this:
#!/usr/bin/env bash
set -euo pipefail
PG_CONFIG=${PG_CONFIG:-pg_config}
export PG_CONFIG
make clean
make USE_PGXS=1 -j"$(nproc)"
make USE_PGXS=1 install
make USE_PGXS=1 installcheck
Once I set this up for a project, I rarely touch the build configuration again; new PostgreSQL releases are usually just a matter of updating the Docker image or test matrix. If you want to go deeper into integrating PGXS-based builds with multi-version CI pipelines, see pgxn/pgxn-tools, a Docker image with a GitHub workflow for multi-version PGXS testing.
3. Debugging PostgreSQL Extensions in C with gdb and lldb
The moment I started treating gdb and lldb as first-class PostgreSQL extension development tools, my life got much easier. Logs and elog() calls are fine for quick checks, but when a background worker deadlocks or a hook corrupts memory, I need to see exactly what the backend is doing. Attaching a debugger directly to the Postgres server process is the most reliable way I’ve found to track down subtle SPI bugs, bad memory context usage, and incorrect hook logic.
The basic workflow is always the same: compile PostgreSQL and your extension with debug symbols, start the server in a predictable way, attach gdb or lldb to the relevant backend or background worker PID, and then set breakpoints in your C code.
Attaching to a backend or background worker
For interactive debugging, I usually run PostgreSQL under my own user account and keep the server logs visible. When I trigger a query or action that hits my extension, I look up the process ID (PID) and attach the debugger.
On Linux with gdb, the flow looks like this:
# 1. Build PostgreSQL and your extension with debug symbols
#    (e.g., configure with --enable-debug and no optimization or -O0)

# 2. Start PostgreSQL normally
pg_ctl -D /path/to/data start

# 3. Find the PID of the backend handling your session
ps aux | grep postgres

# 4. Attach gdb to that PID
sudo gdb -p <backend_pid>
Once attached, I set breakpoints in my extension’s functions. For example, if I want to debug a function that issues SPI queries:
(gdb) break my_spi_function
(gdb) continue
Then I run the corresponding SQL in psql, wait for the breakpoint to hit, and step through SPI calls, inspecting parameters and SPI_tuptable contents. In my experience, this is the fastest way to confirm that I’m managing transactions and error handling correctly.
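When I know I'll be staring at SPI results for a while, I sometimes compile in a small debug helper like this sketch; dump_spi_tuptable is a hypothetical name of mine, and it doubles as a convenient breakpoint target right after SPI_execute():

#include "postgres.h"
#include "executor/spi.h"

/* Debug helper: log every row of the current SPI_tuptable.
 * Break on this function right after SPI_execute() to inspect results. */
static void
dump_spi_tuptable(void)
{
    uint64  i;

    if (SPI_tuptable == NULL)
        return;

    for (i = 0; i < SPI_processed; i++)
    {
        /* First column as text; SPI_getvalue() palloc's the string */
        char *val = SPI_getvalue(SPI_tuptable->vals[i],
                                 SPI_tuptable->tupdesc, 1);

        elog(LOG, "row %lu: %s",
             (unsigned long) i, val ? val : "(null)");
    }
}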
On macOS with lldb, the commands are similar:
$ lldb -p <backend_pid>
(lldb) breakpoint set --name my_spi_function
(lldb) continue
For background workers, I usually start the worker with bgw_start_time = BgWorkerStart_RecoveryFinished or similar, watch logs for the worker’s PID, and attach the debugger before the worker reaches the problematic code path.
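To make that attach step easier, I like the worker to announce its own PID as the very first thing it does. A minimal registration sketch, assuming a library called my_extension, might look like this:

#include "postgres.h"
#include "fmgr.h"
#include "miscadmin.h"            /* MyProcPid */
#include "postmaster/bgworker.h"

PG_MODULE_MAGIC;

PGDLLEXPORT void my_worker_main(Datum main_arg);

void
my_worker_main(Datum main_arg)
{
    /* Log the PID first so a debugger can attach before the real work */
    elog(LOG, "my_extension worker started, pid %d", MyProcPid);

    /* ... set up signal handlers, call BackgroundWorkerUnblockSignals(),
     * then enter the main loop ... */
}

void
_PG_init(void)
{
    BackgroundWorker worker;

    memset(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    worker.bgw_restart_time = BGW_NEVER_RESTART;
    snprintf(worker.bgw_name, BGW_MAXLEN, "my_extension worker");
    snprintf(worker.bgw_type, BGW_MAXLEN, "my_extension");
    snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
    snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
    RegisterBackgroundWorker(&worker);
}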
Practical tips for debugging hooks, SPI, and crashes
The hardest bugs I’ve dealt with usually involve hooks and memory misuse. Over time, I’ve built a few habits that save hours when using gdb or lldb with PostgreSQL:
- Use debug builds – compile both PostgreSQL and your extension with -g and minimal optimization (e.g., -O0), otherwise stepping through code is painful.
- Break on elog(ERROR) or Assert – if I suspect an internal error, I set a breakpoint on errstart or look for assertion functions to catch the failure right before it unwinds.
- Inspect key Postgres structs – for hooks like planner or executor hooks, I print relevant fields from PlannerInfo, QueryDesc, or EState directly in the debugger to ensure I'm not misinterpreting the API.
For example, debugging a simple executor hook in C might look like this:
#include "postgres.h"
#include "executor/executor.h"
static ExecutorRun_hook_type prev_ExecutorRun = NULL;
static void
my_ExecutorRun(QueryDesc *queryDesc,
ScanDirection direction,
uint64 count,
bool execute_once)
{
/* Place a breakpoint here with gdb or lldb */
if (prev_ExecutorRun)
prev_ExecutorRun(queryDesc, direction, count, execute_once);
else
standard_ExecutorRun(queryDesc, direction, count, execute_once);
}
void
_PG_init(void)
{
prev_ExecutorRun = ExecutorRun_hook;
ExecutorRun_hook = my_ExecutorRun;
}
And then in gdb:
(gdb) break my_ExecutorRun
(gdb) continue
# Execute a query in psql
(gdb) print queryDesc->operation
(gdb) bt   # show backtrace to see how we got here
For crashes and memory issues, getting a core dump is invaluable. I usually configure the OS to enable core dumps, then let the backend crash, and inspect the core with gdb:
# Enable core dumps (Linux example)
ulimit -c unlimited

# After a crash, load the core
gdb `which postgres` core
(gdb) bt full
This approach has helped me track down everything from double frees in custom memory contexts to misuse of SPI connections inside background workers. Once you’re comfortable attaching gdb or lldb to live backends, debugging complex PostgreSQL extensions in C becomes a lot less mysterious.
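The memory-context bugs in particular tend to follow one pattern: using SPI results after SPI_finish() has destroyed them. The defensive idiom I rely on looks like this sketch (fetch_one_value_safely is a hypothetical helper of mine):

#include "postgres.h"
#include "executor/spi.h"

/* Copy a value out of SPI before SPI_finish() destroys it.
 * Everything allocated between SPI_connect() and SPI_finish()
 * lives in SPI memory and must not be used afterwards. */
static char *
fetch_one_value_safely(const char *sql)
{
    MemoryContext oldcxt = CurrentMemoryContext;
    char         *result = NULL;

    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");

    if (SPI_execute(sql, true, 1) == SPI_OK_SELECT && SPI_processed > 0)
    {
        char *val = SPI_getvalue(SPI_tuptable->vals[0],
                                 SPI_tuptable->tupdesc, 1);

        if (val != NULL)
        {
            /* Copy into the caller's context so it survives SPI_finish() */
            MemoryContext spicxt = MemoryContextSwitchTo(oldcxt);

            result = pstrdup(val);
            MemoryContextSwitchTo(spicxt);
        }
    }

    SPI_finish();
    return result;
}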
4. Harnessing pgTAP and Regression Tests for Extension Quality
The PostgreSQL extension development tools that have saved me from the most production bugs are, without question, pgTAP and the built-in regression test harness. Once my extensions started using SPI heavily, adding hooks, and spinning up background workers, ad‑hoc manual testing stopped being viable. I needed repeatable, automated checks that could run on every commit and across multiple PostgreSQL versions.
Combining pgTAP with the core regression framework
pgTAP gives me expressive, SQL-native assertions, while the core regression harness—make installcheck driven by PGXS—provides a stable way to run those tests in CI. I usually keep two layers of tests:
- Behavioral tests in pgTAP – verify that SPI queries return the right results, hooks fire as expected, and background workers update state correctly.
- Baseline regression tests – simple SQL/expected pairs to ensure the extension loads, basic functions work, and no unexpected output changes slip through.
For example, a pgTAP test file might live under sql/ in the extension source:
-- sql/my_extension_pgtap.sql
CREATE EXTENSION IF NOT EXISTS pgtap;
CREATE EXTENSION IF NOT EXISTS my_extension;
SELECT plan(2);
SELECT has_function('public', 'my_spi_function', ARRAY['integer'], 'SPI function exists');
SELECT results_eq(
$$ SELECT my_spi_function(42) $$,
$$ VALUES (42) $$,
'SPI-based function returns the expected row'
);
SELECT finish();
Then, the REGRESS variable in the extension Makefile lists the test scripts:
REGRESS = my_extension my_extension_pgtap
With that in place, make installcheck runs both the standard regression tests and the pgTAP suite, giving me a single command that validates the whole extension.
Testing hooks and background workers reliably
Hooks and background workers are trickier to test, but I’ve had good results by exposing small, testable pieces of state via SQL. For example, a background worker that processes a queue table can update a stats table whose contents I assert in pgTAP. For planner or executor hooks, I often add a lightweight debug GUC or function that reports which code path was taken, then assert on its output.
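As a sketch of that idea, the hook can bump a counter that a trivial SQL-callable function exposes. The names here are hypothetical, and the hook code that does the incrementing is omitted:

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/* Bumped by the executor hook each time it fires (hook code omitted) */
static int64 my_hook_fire_count = 0;

PG_FUNCTION_INFO_V1(my_extension_hook_fires);

/* SQL-visible probe so regression tests can assert on hook behavior */
Datum
my_extension_hook_fires(PG_FUNCTION_ARGS)
{
    PG_RETURN_INT64(my_hook_fire_count);
}

On the pgTAP side, a test can then run a query and assert something like SELECT is(my_extension_hook_fires(), 1::bigint, 'hook fired exactly once');.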
When I wired pgTAP and the regression harness into my CI pipelines, subtle regressions—like a planner hook misbehaving on PostgreSQL 16, or a background worker not restarting correctly—started showing up in minutes instead of weeks. If you want a structured example of integrating pgTAP with PGXS-based regression tests, see the pgTAP documentation.
5. Using Docker and Containers as a Fast PostgreSQL Extension Lab
Among all the PostgreSQL extension development tools I use day to day, Docker is the one that lets me experiment fearlessly. Instead of polluting my local system with multiple PostgreSQL versions and odd configurations, I spin up disposable containers where I can try risky SPI calls, crash background workers, or misconfigure hooks without worrying about collateral damage.
In practice, I treat containers as a “PostgreSQL lab”: each container runs a specific Postgres version with my extension mounted in, compiled with PGXS, and wired into a simple test database. When something goes wrong—like a segfault from a bad pointer or an infinite loop in a background worker—I just stop the container, fix the code, and start fresh.
Spinning up a disposable Postgres + extension dev container
My usual pattern is to mount the extension source into the container and build it inside, so the environment matches the target PostgreSQL exactly. A minimal workflow looks like this:
# From your extension source directory
# 1. Start a PostgreSQL container with the source mounted
docker run --rm -it \
  -e POSTGRES_PASSWORD=postgres \
  -v "$PWD":/ext \
  -p 5432:5432 \
  postgres:16
In another terminal, I exec into the container and build the extension using PGXS:
# 2. Exec into the running container
docker exec -it <container_id> bash

# 3. Install build tools, then build and install the extension
apt-get update && apt-get install -y build-essential postgresql-server-dev-16
cd /ext
make USE_PGXS=1
make USE_PGXS=1 install
Once installed, I connect with psql (from host or inside the container), CREATE EXTENSION, and start hammering my SPI functions, hooks, or background workers. When I was first learning background worker APIs, this setup let me iterate dozens of times a day without ever touching my main Postgres installation.
Multi-version testing and scripted labs
Containers shine even more when I need to support multiple PostgreSQL versions. I keep a simple Bash script that loops over image tags (e.g., postgres:13, postgres:14, postgres:16), builds the extension in each container, and runs make installcheck. That way I catch API or behavior differences early.
#!/usr/bin/env bash
set -euo pipefail
for v in 13 14 16; do
  # installcheck needs a running server: build, install, start the
  # image entrypoint in the background, then run the tests
  docker run --rm -t \
    -e POSTGRES_PASSWORD=postgres \
    -v "$PWD":/ext \
    postgres:$v \
    bash -lc "apt-get update && \
      apt-get install -y build-essential postgresql-server-dev-$v && \
      cd /ext && make USE_PGXS=1 && make USE_PGXS=1 install && \
      (docker-entrypoint.sh postgres &) && sleep 10 && \
      PGUSER=postgres make USE_PGXS=1 installcheck"
done
In my experience, once this kind of container-based lab is in place, experimenting with new SPI patterns, hook points, or background worker behaviors becomes fast and low risk. I can crash things, tweak settings, or switch Postgres versions in minutes, all without leaving my development machine in a mess.
6. Exploring pgrx, pgzx and Other Emerging Extension Toolchains
Even though this article focuses on C-focused PostgreSQL extension development tools, I’ve found that keeping an eye on newer ecosystems like pgrx (Rust) and pgzx (Zig) has changed how I design C extensions as well. These toolchains sit on top of the same C APIs you and I use—SPI, hooks, background workers—but they add stronger type systems, safer memory handling, and modern build workflows. In my own projects, I still ship C where it makes sense, but I increasingly prototype complex logic in Rust or Zig and then integrate it with existing C-based extensions.
pgrx: Rust-powered extensions with safer SPI, hooks, and background tasks
pgrx (formerly pgx) is a Rust framework for building PostgreSQL extensions without hand-writing C glue. Under the hood, pgrx talks to the C API, but the developer-facing code feels like idiomatic Rust. When I’m experimenting with new background workers or complex SPI-heavy logic, pgrx often lets me move faster because I’m not constantly fighting segfaults or manual memory management.
A simple function written with pgrx might look like this:
use pgrx::prelude::*;
pgrx::pg_module_magic!();
#[pg_extern]
fn my_safe_sum(a: i32, b: i32) -> i32 {
    a + b
}

#[pg_extern]
fn my_spi_count() -> i64 {
    // Spi::get_one returns Result<Option<T>, spi::Error> in recent pgrx,
    // so unwrap both layers (fine for a sketch, not for production code)
    Spi::get_one::<i64>("SELECT count(*) FROM pg_class")
        .expect("SPI failed")
        .expect("count(*) returned no row")
}
pgrx also has facilities for background workers and hooks, but it wraps them in Rust types and lifetimes. I’ve used this to prototype planner or executor hooks that would be risky in raw C; once the idea proves out, I can either keep it in Rust or port the core logic back to C with more confidence. Importantly, pgrx-generated shared libraries can happily coexist with traditional C extensions in the same database.
The build tooling is also much smoother than a hand-rolled C setup: you use cargo pgrx init, cargo pgrx run, and cargo pgrx test to build, install, and regression-test your extension across multiple PostgreSQL versions. For me, that’s been a useful reference when improving my own PGXS-based C workflows.
pgzx and Zig-based approaches as experimental companions to C
pgzx is a newer toolchain that brings Zig into the PostgreSQL extension story. Zig compiles down to C-like binaries and interoperates cleanly with C, but adds its own take on safety and explicitness. I see it as a kind of experimental companion to my C work: I can explore new patterns for background workers or hook-based instrumentation in Zig, while still linking against the same C headers and APIs.
A minimal Zig function exported for PostgreSQL might look something like this:
// Pseudo-Zig sketch – actual pgzx APIs may differ
export fn my_zig_add(a: c_int, b: c_int) c_int {
return a + b;
}
From a practical standpoint, these emerging ecosystems influence how I structure C extensions even when I don’t ship Rust or Zig in production:
- Cleaner separation of concerns – I keep a thin C layer close to PostgreSQL internals (hooks, shared memory, GUCs) and factor complex business logic into a form that could be ported to Rust or Zig later.
- Better testing habits – pgrx’s integrated test tooling pushed me to tighten up my PGXS + pgTAP story for C projects.
- Interoperability mindset – I design C APIs that are FFI-friendly so I can call into Rust/Zig components or have them call back into C as the project evolves (see the sketch below).
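Concretely, "FFI-friendly" for me means the core entry points use only plain C types, with all Datum and text handling kept in a thin PostgreSQL-facing wrapper. A hypothetical sketch (my_core_process and my_process are made-up names):

/* core.h – plain C types only, no PostgreSQL headers, so Rust or
 * Zig code can bind to this directly (hypothetical API) */
#include <stddef.h>
#include <stdint.h>

extern int64_t my_core_process(const char *input, size_t len);

/* wrapper.c – thin PostgreSQL-facing layer */
#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"       /* text_to_cstring() */

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_process);

Datum
my_process(PG_FUNCTION_ARGS)
{
    char *str = text_to_cstring(PG_GETARG_TEXT_PP(0));

    /* All Datum/text handling stays on this side of the boundary */
    PG_RETURN_INT64(my_core_process(str, strlen(str)));
}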
If you’re deep in C today, you don’t need to abandon it to benefit from these toolchains. In my experience, treating pgrx and pgzx as adjacent PostgreSQL extension development tools—great for rapid prototyping, safer experimentation with hooks and background workers, and inspiration for better designs—pays off even if the final shipping artifact is still a classic C-based extension.
7. Workflow Utilities: pgxn, Extension Catalogs, and Template Repos
The PostgreSQL extension development tools that save me the most time now are not compilers or debuggers, but ecosystem utilities: PGXN, curated extension catalogs, and battle-tested template repositories. Once I started leaning on these, I stopped reinventing project scaffolds and could focus on the hard parts—SPI logic, hooks, and background workers.
PGXN and curated catalogs for discovery and inspiration
PGXN (the PostgreSQL Extension Network) is effectively a package index for extensions. I regularly browse it when I’m starting a new C-based project, not to copy code, but to see how other authors structure their Makefile, handle PGXS, and document hooks or background workers. In my experience, reading a couple of mature PGXN-hosted extensions is worth hours of trial and error.
Alongside PGXN, community-maintained extension catalogs and GitHub topic pages give me a quick sense of:
- Common layout patterns for C-based extensions (where to put sql/, expected/, src/).
- How other developers expose configuration via GUCs, especially for workers and hooks.
- Real-world examples of SPI-heavy code that survives production traffic.
When I’m unsure about how to structure a new background worker or executor hook, I’ll often search catalogs for similar extensions and treat them as living documentation. For a broader overview of where to find and evaluate existing PostgreSQL extensions, see the PostgreSQL Extension Network (PGXN).
Template repositories and generators for fast scaffolding
The other big accelerator in my workflow is using template repos and project generators instead of starting from an empty directory. A good C extension template usually gives me:
- A PGXS-aware Makefile wired for make install and make installcheck.
- Skeleton C files with _PG_init, GUC examples, and hooks already registered.
- Optional Docker or CI configs that build against multiple PostgreSQL versions.
On GitHub, I keep a personal fork of a lean C extension template that I clone whenever I start a new idea. In my experience, having that starting point means it takes minutes, not days, to go from concept to a loadable extension with working tests. From there, I can plug in the more advanced PostgreSQL extension development tools—debuggers, Docker labs, pgTAP, and even pgrx or Zig experiments—without touching the project skeleton.
Conclusion: Building a Modern PostgreSQL Extension Development Stack
When I look at my own workflow today, the most productive PostgreSQL extension development tools form a layered stack rather than a random toolbox. I prototype and iterate in containers, debug nasty SPI and hook issues with gdb or lldb, and lock in correctness with pgTAP and regression tests. Around that, I use PGXN, catalogs, and template repos to avoid greenfield boilerplate—and keep an eye on pgrx and pgzx when I want safer or more experimental patterns for background workers and hooks.
If you’re refining your own setup, I’d start by standardizing three things: a PGXS-based template repo, a Docker-based lab that can spin up multiple PostgreSQL versions quickly, and an automated test harness (pgTAP + installcheck). Once those are in place, adding debuggers, modern language toolchains, and ecosystem discovery tools becomes a natural evolution, not a one-off effort.

Hi, I’m Cary Huang — a tech enthusiast based in Canada. I’ve spent years working with complex production systems and open-source software. Through TechBuddies.io, my team and I share practical engineering insights, curate relevant tech news, and recommend useful tools and products to help developers learn and work more effectively.