Analysis

AWS Rust Lambda middleware gains attention as serverless support matures

A DynamoDB-backed rate limiter showed how Tower can turn Rust Lambda into a reusable middleware stack, just as AWS moved Rust support to GA.

Sam Ortega · 2 min read

Source: rust-lambda.com

Luciano Mammino’s latest Rust Lambda piece landed because it treated middleware as production plumbing, not a classroom exercise. The example centered on a DynamoDB-backed per-IP rate limiter, which is exactly the kind of cross-cutting logic that tends to sprawl when teams bolt it on by hand. In Rust on AWS Lambda, Mammino argued, Tower already gives developers a clean way to package that logic once and reuse it across handlers.

That matters because AWS has been tightening the Rust story around a real stack, not just a language checkbox. Its Lambda documentation says Rust functions run on an OS-only runtime and points developers to the Rust runtime client, Cargo Lambda, Lambda HTTP, Lambda Extension, and AWS Lambda Events as the main tooling. AWS also documents sample Rust Lambda applications that use Tower to inject CORS headers, and its examples include a dedicated http-tower-trace case. The message is hard to miss: middleware is no longer a side quest in Rust serverless, it is part of the official shape of the platform.


The timing helped the article resonate. In November 2025, AWS moved Lambda support for Rust from Experimental to Generally Available, backed by AWS Support and the Lambda availability SLA. That promotion turned Rust from an interesting option into a serious choice for business-critical serverless workloads. Mammino’s example, hosted on GitHub as a minimal AWS Lambda in Rust, fit that shift neatly by showing reusable middleware built with Tower around a real rate-limiting problem instead of a toy hello-world handler.

The mechanics are where the post got useful. Middleware composition order matters, and the stack can be thought of as an onion around the handler, with outer layers running first. Mammino also dug into the parts that usually trip people up, including why Service::call cannot simply be an async fn and how Box::pin(async move) fits into the pattern. That is the kind of detail that helps Rust developers stop treating Lambda as bespoke glue and start treating it like any other composable service stack.

Cargo Lambda reinforces that direction. Its workflow covers local emulation with cargo lambda watch, typed local invocation with cargo lambda invoke, and deployment with cargo lambda deploy. Paired with the aws/aws-lambda-rust-runtime workspace, which includes lambda-runtime, lambda-http, lambda-extension, and lambda-events, the ecosystem is starting to look less like a collection of one-off tricks and more like a standard way to ship serverless Rust. For teams already living on Lambda, that is the real story: not just that Rust works, but that the middleware layer is finally maturing into something reusable, testable, and familiar.
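In practice, the workflow the article leans on fits in a handful of commands. These are Cargo Lambda's documented subcommands; the project name and payload here are placeholders.

```shell
cargo lambda new my-rate-limited-fn    # scaffold a new Rust Lambda project
cargo lambda watch                     # emulate Lambda locally with hot reload
cargo lambda invoke --data-ascii '{"command": "ping"}'  # send a local test event
cargo lambda build --release           # cross-compile for the Lambda runtime
cargo lambda deploy                    # ship the function to AWS
```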
