Introduction: Why Structured Logging Matters for Rust Axum APIs
When I started building production Rust HTTP APIs with Axum, I quickly realized that basic println!-style logs weren’t enough. Once traffic, background jobs, and external integrations entered the picture, I needed a way to reliably answer questions like: “What happened for this specific request?” and “Why is this endpoint suddenly slow in production?”
This is where structured logging with Axum middleware becomes critical. Instead of writing unstructured text lines, structured logs emit well-defined key–value pairs: request IDs, HTTP methods, paths, status codes, latencies, and user or tenant identifiers. Log aggregators and observability tools can then filter, group, and correlate these fields, making real troubleshooting dramatically faster.
In my experience, the biggest win of structured logging is consistency. Every incoming request passes through the same Axum middleware, which automatically attaches context and timing information. That means I don’t have to remember to log every detail myself in each handler; the middleware ensures a baseline level of observability that I can rely on across the entire API surface.
Prerequisites and Project Setup for Axum Middleware Logging
Before wiring in Rust axum middleware logging, I like to make sure the basics are rock solid: a recent Rust toolchain, a minimal Axum app, and the tracing ecosystem ready to go. That way, when I add middleware later, I’m just plugging into an already working foundation, not debugging setup issues and logging logic at the same time.
Toolchain and crate prerequisites
You’ll want the following as a baseline:
- Rust 1.70+ (installed via rustup)
- Cargo for building and running the project
- Familiarity with async Rust and tokio
Here’s a minimal Cargo.toml setup I typically start with for an Axum API using structured logging:
[package]
name = "axum-logging-demo"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
tower = { version = "0.4", features = ["util"] }
tower-http = { version = "0.5", features = ["trace"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "fmt", "json"] }
tracing-log = "0.2"
uuid = { version = "1", features = ["v4"] }
serde = { version = "1", features = ["derive"] }
In my experience, adding env-filter early makes it much easier to dial log verbosity up or down per module without redeploying.
Scaffolding a minimal Axum HTTP API with tracing
Next, I scaffold a tiny Axum server and wire in tracing as the global subscriber. This gives us structured logs from day one, even before we add custom middleware.
use axum::{routing::get, Router};
use std::net::SocketAddr;
use tokio::net::TcpListener;
use tracing::info;
use tracing_subscriber::EnvFilter;

#[tokio::main]
async fn main() {
    // Set up a global subscriber with env-based filtering
    tracing_subscriber::fmt()
        .with_env_filter(
            EnvFilter::from_default_env()
                .add_directive("axum_logging_demo=info".parse().unwrap())
                .add_directive("tower_http=info".parse().unwrap()),
        )
        .json() // emit structured JSON logs (needs the "json" feature)
        .init();

    let app = Router::new().route("/health", get(health));

    let addr: SocketAddr = "0.0.0.0:3000".parse().unwrap();
    info!(%addr, "starting HTTP server");

    // axum 0.7 style: bind a TcpListener and hand it to axum::serve
    let listener = TcpListener::bind(addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn health() -> &'static str {
    "OK"
}
At this stage, running cargo run gives me a working HTTP API that already emits JSON-structured logs. When I add Axum middleware for per-request logging in the next steps, it simply enriches this existing tracing pipeline instead of replacing it.
For a deeper dive into the tracing ecosystem and subscriber configuration, you can check Tracing – Tokio – An asynchronous Rust runtime.
Bootstrapping Tracing: Setting Up a Global Subscriber
Before any Rust axum middleware logging can work reliably, the tracing ecosystem needs a global subscriber. In my experience, getting this part right up front saves a lot of confusion later, because every span and event in your Axum app will flow through this single configuration point.
Configuring a JSON-based tracing subscriber
For production, I prefer structured JSON logs so log aggregators can parse fields like request_id or latency_ms without regex hacks. Here’s a focused example of initializing a tracing-subscriber in main:
use tracing_subscriber::EnvFilter;

fn init_tracing() {
    // Use RUST_LOG for dynamic filtering, with sensible defaults
    let env_filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info,axum_logging_demo=debug"));

    tracing_subscriber::fmt()
        .with_env_filter(env_filter)
        .with_target(true)       // include module path
        .with_thread_ids(false)  // often unnecessary for HTTP APIs
        .with_thread_names(false)
        .json()                  // key-value structured logs
        .flatten_event(true)     // nicer top-level JSON fields
        .init();
}
One thing I learned the hard way was to always keep filtering flexible. Using EnvFilter plus RUST_LOG lets me turn up verbosity in a single microservice during an incident, without changing code or redeploying.
Capturing spans across async boundaries
Axum and Tokio are both async, so you want spans to propagate across await points and background tasks. The good news is that tracing integrates cleanly with async Rust if you build your app with spans from the start:
use axum::{routing::get, Router};
use std::future::IntoFuture;
use tokio::net::TcpListener;
use tracing::{info_span, Instrument};

#[tokio::main]
async fn main() {
    init_tracing();

    let app = Router::new().route("/work", get(handler));

    // wrap the whole server future in a span
    let server_span = info_span!("http_server", service = "axum_logging_demo");

    let listener = TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app)
        .into_future()
        .instrument(server_span)
        .await
        .unwrap();
}

async fn handler() -> &'static str {
    // child span per handler invocation
    let span = info_span!("work_handler", operation = "demo");
    async move {
        // business logic goes here
        "OK"
    }
    .instrument(span)
    .await
}
With this in place, every event inside a handler is tied to a span tree the subscriber understands. When I later plug in request-level Axum middleware, those spans become even richer, with HTTP metadata automatically attached on every request.
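One gap in the snippet above is background work: a future handed to tokio::spawn is polled outside whatever span was current when you spawned it, so its events end up detached unless you instrument it. Here is a minimal sketch of the pattern I use; the cleanup_job name is purely illustrative:

use tracing::Instrument;

// Minimal sketch: keep a spawned background task attached to the span tree.
// A span created here inherits the current span (e.g. the request span, when
// called from a handler) as its parent, and `.instrument(...)` carries it
// into the spawned future.
fn spawn_cleanup_job() {
    let span = tracing::info_span!("cleanup_job");
    tokio::spawn(
        async move {
            tracing::info!("cleanup started");
            // ... long-running work ...
        }
        .instrument(span),
    );
}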
If you want to go deeper into advanced subscriber setups and layering (for example, combining JSON logs with OpenTelemetry), it’s worth exploring How to Use tracing-subscriber with OpenTelemetry Layer in Rust.
Using tower-http TraceLayer for Request and Response Logging
Once I have a global tracing subscriber in place, the next step I always take is adding tower_http::trace::TraceLayer. This is where Rust axum middleware logging really starts paying off: every request automatically gets structured logs for method, path, status, and latency without me sprinkling tracing::info! everywhere.
Attaching TraceLayer to the Axum router
The TraceLayer plugs into Axum using the standard Layer interface. At minimum, I attach it at the top level so all routes inherit the same behavior:
use axum::{http, routing::get, Router};
use std::time::Duration;
use tower_http::trace::TraceLayer;

fn app() -> Router {
    Router::new()
        .route("/health", get(health))
        .route("/items", get(list_items))
        .layer(
            TraceLayer::new_for_http()
                .make_span_with(|request: &http::Request<_>| {
                    // create a span for each request
                    let method = request.method();
                    let uri = request.uri().path();
                    tracing::info_span!(
                        "http_request",
                        %method,
                        %uri,
                        user_agent = request
                            .headers()
                            .get(http::header::USER_AGENT)
                            .and_then(|h| h.to_str().ok())
                            .unwrap_or("unknown"),
                    )
                })
                .on_response(|response: &http::Response<_>, latency: Duration, span: &tracing::Span| {
                    let status = response.status().as_u16();
                    tracing::info!(
                        parent: span,
                        %status,
                        latency_ms = latency.as_millis() as u64,
                        "request completed"
                    );
                }),
        )
}

async fn health() -> &'static str { "OK" }
async fn list_items() -> &'static str { "[]" }
In practice, I like this pattern because it keeps the logging behavior close to the router setup, and I can see at a glance that all routes share the same structured logging policy.
Customizing spans and log fields for better observability
Out of the box, TraceLayer::new_for_http() gives decent logs, but in production I almost always customize it to capture richer context: request IDs, user identifiers, and timing information. That extra structure is what makes dashboards and incident debugging much easier.
Here’s a more customized example I’ve used in real services:
use axum::{http, Router};
use tower_http::trace::{DefaultOnFailure, DefaultOnRequest, DefaultOnResponse, TraceLayer};
use tracing::Level;

// The concrete TraceLayer type can't be named once closures are involved,
// so apply it to the router here instead of returning the layer itself.
fn with_tracing(router: Router) -> Router {
    router.layer(
        TraceLayer::new_for_http()
            // log a short line on request start
            .on_request(DefaultOnRequest::new().level(Level::INFO))
            // log completion with latency and status
            .on_response(DefaultOnResponse::new().level(Level::INFO))
            // log errors at WARN and include error display
            .on_failure(DefaultOnFailure::new().level(Level::WARN))
            // customize the span itself
            .make_span_with(|request: &http::Request<_>| {
                let method = request.method();
                let uri = request.uri().path();
                // pick up a request id header if present (e.g. from a load balancer)
                let request_id = request
                    .headers()
                    .get("x-request-id")
                    .and_then(|h| h.to_str().ok())
                    .unwrap_or("generated-locally");
                tracing::info_span!(
                    "http_request",
                    %method,
                    %uri,
                    %request_id,
                    status = tracing::field::Empty,     // filled by a custom on_response
                    latency_ms = tracing::field::Empty,
                )
            }),
    )
}
The pattern I’ve settled on is: the middleware span holds the stable identifiers (method, path, request_id), and the completion log event fills in dynamic information (status, latency, error). This keeps each log line compact but still correlatable in a log viewer.
When I first wired this up, I underestimated how often I’d rely on request_id and latency_ms. Now, I treat them as non-negotiable fields in every HTTP service; they’ve saved me more than once when tracking down slow or failing endpoints in production.
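To actually fill those Empty placeholders, the DefaultOnResponse can be swapped for a small closure that records them on the span before emitting the completion event. Here is a minimal sketch under the same field names as above (status, latency_ms); adapt it to whatever fields your spans declare:

use axum::{http, Router};
use std::time::Duration;
use tower_http::trace::TraceLayer;

// Sketch: record the dynamic fields into the span declared by make_span_with,
// then emit a single completion event tied to that span.
fn with_response_recording(router: Router) -> Router {
    router.layer(
        TraceLayer::new_for_http()
            .make_span_with(|request: &http::Request<_>| {
                tracing::info_span!(
                    "http_request",
                    method = %request.method(),
                    uri = %request.uri().path(),
                    status = tracing::field::Empty,
                    latency_ms = tracing::field::Empty,
                )
            })
            .on_response(|response: &http::Response<_>, latency: Duration, span: &tracing::Span| {
                // fill in the placeholders declared above
                span.record("status", response.status().as_u16() as u64);
                span.record("latency_ms", latency.as_millis() as u64);
                tracing::info!(parent: span, "request completed");
            }),
    )
}

Because the values land on the span itself, every event emitted inside that request (including handler logs) carries them, not just the completion line.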
Creating Custom Axum Middleware for Correlation IDs
After I have basic Rust axum middleware logging in place with TraceLayer, the next capability I always add is a correlation ID. Being able to say “show me everything related to req-123” across HTTP logs, background tasks, and even other services is a huge boost for debugging. The nice thing with Axum is that I can implement this as a small, focused middleware.
Design: generate or propagate an X-Request-Id
The pattern I’ve had good experiences with is simple:
- If the incoming request already has an X-Request-Id header, trust and propagate it.
- Otherwise, generate a new correlation ID (for example with uuid or a custom scheme).
- Attach that ID to the request extensions so handlers can access it.
- Re-expose it on the response header so downstream callers can also log or propagate it.
- Record it in the tracing span so it shows up in every structured log line.
Here’s a minimal implementation as a tower Layer + Service. This approach plays nicely with Axum’s middleware stack and keeps the correlation logic reusable across services.
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
};

use axum::body::Body;
use axum::http::{HeaderValue, Request, Response};
use tower::{Layer, Service};
use tracing::Span;
use uuid::Uuid;

pub const CORRELATION_ID_HEADER: &str = "x-request-id";

#[derive(Clone)]
pub struct CorrelationIdLayer;

impl<S> Layer<S> for CorrelationIdLayer {
    type Service = CorrelationIdMiddleware<S>;

    fn layer(&self, inner: S) -> Self::Service {
        CorrelationIdMiddleware { inner }
    }
}

#[derive(Clone)]
pub struct CorrelationIdMiddleware<S> {
    inner: S,
}

impl<S> Service<Request<Body>> for CorrelationIdMiddleware<S>
where
    S: Service<Request<Body>, Response = Response<Body>> + Clone + Send + 'static,
    S::Future: Send + 'static,
{
    type Response = S::Response;
    type Error = S::Error;
    type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.inner.poll_ready(cx)
    }

    fn call(&mut self, mut req: Request<Body>) -> Self::Future {
        // 1. read existing or generate new correlation id
        let corr_id = req
            .headers()
            .get(CORRELATION_ID_HEADER)
            .and_then(|h| h.to_str().ok())
            .map(|s| s.to_string())
            .unwrap_or_else(|| Uuid::new_v4().to_string());

        // 2. store in request extensions for handlers
        req.extensions_mut().insert(corr_id.clone());

        // 3. record on the current tracing span; this only takes effect if a request
        //    span (e.g. from TraceLayer) is active and declares a `request_id` field
        Span::current().record("request_id", &tracing::field::display(&corr_id));

        // take the service that was driven to readiness and leave a clone in its
        // place (the usual tower pattern, so we never call a possibly un-ready clone)
        let clone = self.inner.clone();
        let mut inner = std::mem::replace(&mut self.inner, clone);

        Box::pin(async move {
            let mut response = inner.call(req).await?;

            // 4. write the correlation id back onto the response
            response.headers_mut().insert(
                CORRELATION_ID_HEADER,
                HeaderValue::from_str(&corr_id)
                    .unwrap_or(HeaderValue::from_static("invalid")),
            );

            Ok(response)
        })
    }
}
One thing I learned early on was to keep the correlation ID format simple (like a UUID). That way I don’t have to worry about encoding/decoding rules or conflicting with external systems that might also generate IDs.
Wiring the middleware into Axum and using the ID in handlers
With the middleware defined, I can plug it into my Axum app just like any other layer. From there, any handler can access the correlation ID from request extensions and include it in logs, metrics, or outbound calls to other services.
use axum::{
    body::Body,
    extract::State,
    http::Request,
    response::IntoResponse,
    routing::get,
    Router,
};
use tokio::net::TcpListener;
use tower::ServiceBuilder;
use tracing::info;

#[derive(Clone)]
struct AppState;

#[tokio::main]
async fn main() {
    init_tracing(); // the global subscriber from earlier

    let app_state = AppState;

    let app = Router::new()
        .route("/demo", get(demo))
        .with_state(app_state)
        .layer(
            // our custom middleware; in a real service a TraceLayer that declares a
            // `request_id` span field would wrap this, so the recorded value has a span to land in
            ServiceBuilder::new().layer(CorrelationIdLayer),
        );

    let listener = TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

// `Request` consumes the body, so it must be the last extractor
async fn demo(State(_state): State<AppState>, req: Request<Body>) -> impl IntoResponse {
    // pull the correlation id from extensions
    let corr_id = req
        .extensions()
        .get::<String>()
        .cloned()
        .unwrap_or_else(|| "missing".to_string());

    info!(%corr_id, "handling /demo request");

    format!("hello, your correlation id is {corr_id}")
}
In my own services, this pattern became the backbone of observability: the same x-request-id flows from the edge proxy, through Axum, into internal RPC calls, and even into background jobs spawned from request handlers. Combined with the structured fields from TraceLayer, it gives me a coherent story for each user interaction, even when things go wrong in subtle ways.
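To make the background-job part of that story concrete, here is a small hypothetical sketch of how I hand the correlation ID to work spawned from a handler, so the job's logs stay searchable under the same request_id; the job name and fields are illustrative, not part of the service above:

use tracing::{info, info_span, Instrument};

// Hypothetical example: a handler passes its correlation id to a background job
// so the job's structured logs carry the same request_id field.
fn spawn_email_job(corr_id: String) {
    let span = info_span!("background_job", job = "send_welcome_email", request_id = %corr_id);
    tokio::spawn(
        async move {
            info!("job started");
            // ... talk to the email provider, retry, etc. ...
            info!("job finished");
        }
        .instrument(span),
    );
}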
Logging in Handlers with Axum Extractors and tracing
Once middleware is taking care of HTTP-level details, I focus on handler-level logs. This is where Rust axum middleware logging meets real business context: user IDs, domain objects, and decisions. The goal is to emit structured events that automatically attach to the same request span created by your TraceLayer or correlation-ID middleware.
Using extractors to pull context into handler logs
In my experience, keeping handler signatures explicit with Axum extractors makes it much easier to log meaningful context. Instead of passing everything through globals, I extract what I need and log it via tracing macros:
use axum::{
    extract::{Path, State},
    Json,
};
use serde::Deserialize;
use tracing::{error, info, instrument, warn};

#[derive(Clone)]
struct AppState;

#[derive(Deserialize)]
struct UpdateItem {
    name: String,
    quantity: u32,
}

// `instrument` creates a span (a child of the current request span) per call;
// skip_all avoids recording whole extractors, and we add just the identifiers we need
#[instrument(skip_all, fields(user_id = %user_id, item_id = %item_id))]
pub async fn update_item(
    State(_state): State<AppState>,
    Path((user_id, item_id)): Path<(String, String)>,
    Json(payload): Json<UpdateItem>,
) -> Json<&'static str> {
    info!(
        %item_id,
        name = %payload.name,
        qty = payload.quantity,
        "received update_item request"
    );

    if payload.quantity == 0 {
        warn!(%item_id, "quantity set to zero; marking item as inactive");
    }

    // simulate an error branch
    if payload.name.is_empty() {
        error!(%item_id, "invalid item name");
        return Json("invalid name");
    }

    Json("ok")
}
One thing I’ve found helpful is to log only a few stable fields (like user_id, item_id, and key input parameters) and avoid dumping entire payloads unless I’m debugging a specific issue. That keeps logs small and safer from a privacy perspective.
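When I do need some visibility into a payload, I log a derived summary rather than the raw body. A quick illustrative sketch (the CreateOrder type is hypothetical, not part of the service above):

use serde::Deserialize;
use tracing::info;

// Illustrative type, only here to show the logging pattern
#[derive(Deserialize)]
struct CreateOrder {
    customer_email: String,
    items: Vec<String>,
}

// Log low-risk, derived fields instead of the raw payload (which may contain PII)
fn log_order_summary(order: &CreateOrder) {
    info!(
        item_count = order.items.len(),
        email_domain = order
            .customer_email
            .split('@')
            .nth(1)
            .unwrap_or("unknown"),
        "received create_order payload"
    );
}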
Accessing correlation IDs and enriching logs
If you followed the correlation-ID middleware pattern, you can enrich handler logs with the same ID. Even though the span already has it, I often record it explicitly on important events so they’re easy to filter in log tools.
use axum::{body::Body, http::Request};
use tracing::info;

pub async fn checkout_handler(req: Request<Body>) -> &'static str {
    let corr_id = req
        .extensions()
        .get::<String>()
        .cloned()
        .unwrap_or_else(|| "missing".to_string());

    info!(%corr_id, event = "checkout_started", "starting checkout flow");

    // ... business logic, DB calls, etc.

    info!(%corr_id, event = "checkout_completed", "checkout finished successfully");

    "OK"
}
For me, this is the sweet spot: middleware guarantees a rich baseline of HTTP telemetry, and handlers add domain-specific events that ride on the same spans and correlation IDs. When I’m debugging a production issue, I can follow a single request from the edge, through middleware, into business logic, and back out again—all in one filtered view.
Verifying Logs and Integrating with External Observability Tools
With Rust axum middleware logging wired up, I always take a moment to verify log quality locally before sending anything to a shared observability stack. Catching field-name mistakes or missing correlation IDs early saves a lot of frustration once multiple teams depend on the data.
Inspecting structured logs locally
Assuming you used a JSON tracing-subscriber setup, you can quickly sanity-check your logs from the terminal. I usually run a small script that sends a few requests and then pipe the output through jq so I can confirm the fields I care about:
# run the server with verbose logging for your crate
RUST_LOG="info,axum_logging_demo=debug" cargo run | jq .
Then, in another terminal:
# hit a few endpoints
curl -i http://localhost:3000/health
curl -i -H "x-request-id: demo-123" http://localhost:3000/demo
What I look for here is pretty simple: each request should generate a coherent set of events that all share the same request_id (or x-request-id header), plus consistent fields like method, uri, status, and latency_ms. If anything is missing or misnamed, I fix it before moving on.
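Beyond eyeballing the output, I sometimes add a quick integration-style test that exercises the middleware directly with tower's oneshot. This is a minimal sketch that assumes the CorrelationIdLayer from earlier is in scope; it only asserts that the x-request-id header is echoed back:

#[cfg(test)]
mod tests {
    use axum::{
        body::Body,
        http::{Request, StatusCode},
        routing::get,
        Router,
    };
    use tower::ServiceExt; // for `oneshot` (tower's "util" feature)

    // assumes the CorrelationIdLayer from the middleware section is available
    fn test_app() -> Router {
        Router::new()
            .route("/health", get(|| async { "OK" }))
            .layer(super::CorrelationIdLayer)
    }

    #[tokio::test]
    async fn echoes_the_request_id_header() {
        let response = test_app()
            .oneshot(
                Request::builder()
                    .uri("/health")
                    .header("x-request-id", "demo-123")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::OK);
        assert_eq!(response.headers()["x-request-id"], "demo-123");
    }
}

It is no substitute for reading the actual log output, but it cheaply catches regressions in the header handling.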
Forwarding structured logs to hosted or OpenTelemetry platforms
Once the local story is solid, it’s straightforward to forward logs to a central observability tool. In my own deployments, I’ve used two main patterns:
- Sidecar or agent-based shipping (Fluent Bit, Vector, Filebeat): the Rust service writes JSON logs to stdout or a file, and an agent forwards them to Elasticsearch, Loki, or a SaaS platform.
- Direct OpenTelemetry export: use tracing-opentelemetry and opentelemetry-otlp to send spans and events straight to an OTLP endpoint like Tempo, Honeycomb, or SigNoz.
Here’s a simplified example showing how you might wire a tracing layer into an OpenTelemetry pipeline using OTLP (details will depend on your exporter and backend):
// Note: these paths match older opentelemetry / opentelemetry-otlp releases
// (before the SDK moved into the separate `opentelemetry_sdk` crate); check
// your versions' docs for the exact module layout.
use opentelemetry::sdk::trace as sdktrace;
use tracing_opentelemetry::OpenTelemetryLayer;
use tracing_subscriber::{layer::SubscriberExt, Registry};

fn init_tracing_with_otel() {
    // build an OTLP tracer (the tonic exporter defaults to http://localhost:4317)
    let tracer = opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(opentelemetry_otlp::new_exporter().tonic())
        .with_trace_config(sdktrace::config().with_resource(
            opentelemetry::sdk::Resource::new(vec![opentelemetry::KeyValue::new(
                "service.name",
                "axum-logging-demo",
            )]),
        ))
        .install_batch(opentelemetry::runtime::Tokio)
        .expect("failed to install OTLP pipeline");

    let otel_layer = OpenTelemetryLayer::new(tracer);

    let env_filter = tracing_subscriber::EnvFilter::from_default_env();

    let subscriber = Registry::default()
        .with(env_filter)
        .with(tracing_subscriber::fmt::layer().json())
        .with(otel_layer);

    tracing::subscriber::set_global_default(subscriber)
        .expect("setting default subscriber failed");
}
After wiring something like this up, I send a few test requests and confirm in the external UI that spans are grouped by request_id, endpoints are labeled correctly, and latency metrics look reasonable. When that checks out, I’m confident the service is ready for real traffic.
For a more complete walkthrough of wiring tracing into OpenTelemetry and exporter backends, it’s worth consulting How to Set Up OpenTelemetry Tracing in Rust – OneUptime.
Conclusion and Next Steps for Rust Axum Middleware Logging
By this point, we’ve wired up a full Rust axum middleware logging pipeline: a global tracing-subscriber emitting structured JSON, TraceLayer capturing HTTP-level details, a custom correlation ID middleware, and handler-level logs enriched with business context and Axum extractors. In my experience, that combination alone is enough to turn opaque APIs into systems I can actually reason about during incidents.
From here, the natural next steps are to layer on metrics and deeper tracing. You can expose Prometheus metrics alongside logs (for example, per-endpoint histograms), add richer spans around database calls or outbound HTTP requests, and push everything into an OpenTelemetry backend. If you work in mixed environments, the same patterns carry over to other frameworks like actix-web: reuse the tracing setup, correlation ID strategy, and log field conventions so your services share a common observability vocabulary across the board.

Hi, I’m Cary Huang — a tech enthusiast based in Canada. I’ve spent years working with complex production systems and open-source software. Through TechBuddies.io, my team and I share practical engineering insights, curate relevant tech news, and recommend useful tools and products to help developers learn and work more effectively.