Encore Built a Massive Rust Runtime to Power Its TypeScript Cloud Framework
Encore spent two years writing 67,000 lines of Rust to power Encore.ts, yielding a 9x performance gain over Express — here's what the FFI boundary, Tokio's async model, and Node.js co-residency actually cost them.

67,000 Lines and Two Years: The Scope of the Bet
When the Encore team decided to build a Rust runtime beneath their cloud-native TypeScript framework, they weren't reaching for a quick performance win. They were committing to a full-scale infrastructure project. Two years and 67,000 lines of Rust later, Encore.ts stands as one of the most concrete examples in the modern backend ecosystem of what happens when you embed a Rust core inside a higher-level language framework — and what it costs you to get there.
The headline result is hard to ignore: Encore.ts benchmarks at 9x the throughput of Express.js, 3x that of both ElysiaJS and Hono, and 3x that of Bun paired with Zod. Those aren't marginal wins on synthetic micro-benchmarks. They represent a rethinking of where work actually happens inside a Node.js process.
The Core Architecture: Two Event Loops, One Process
The foundational insight driving Encore's design is that Node.js's single-threaded event loop is exactly the wrong place to run I/O-heavy infrastructure work. Every database query, every incoming HTTP request, every Pub/Sub publish that runs through the Node.js event loop is time the event loop can't spend executing your application's business logic.
Encore's answer is a complete split of responsibilities. The Rust runtime, built on Tokio as its async executor and Hyper as its HTTP layer, runs its own independent, multi-threaded event loop inside the same process as Node. That Rust loop owns all I/O: accepting and processing incoming HTTP requests, running database queries against PostgreSQL, handling Pub/Sub message delivery and retry logic, managing Secrets, and firing Cron Jobs. Node's event loop, freed from that work, focuses exclusively on executing TypeScript business logic. The result, as Encore puts it, is that "virtually all non-business-logic is off-loaded from the JS event loop."
The practical consequence for developers is subtle but significant. When application code calls `.publish()` to emit a Pub/Sub message, execution doesn't stay in JavaScript. The payload crosses the process boundary into Rust, which handles delivery, retry backoff, and confirmation — none of which blocks the Node.js thread. The same handoff happens for every database query. From the TypeScript side, it reads like ordinary async code; under the hood, Tokio is scheduling the actual work.
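From the TypeScript side, that handoff is invisible. A minimal sketch of what the call site looks like, with the Rust binding stubbed out — `nativePublish`, the topic name, and the event shape are all illustrative, not Encore's actual API:

```typescript
// Stand-in for the N-API binding. In the real runtime, this call hands the
// payload to the Rust event loop, which owns delivery, retry backoff, and
// confirmation; setImmediate here just models "resolve later, off this tick".
function nativePublish(topic: string, payload: string): Promise<string> {
  return new Promise((resolve) => {
    setImmediate(() => resolve(`msg-${Date.now()}`));
  });
}

interface SignupEvent {
  userId: string;
}

// From application code, publishing reads like ordinary async TypeScript.
async function publishSignup(event: SignupEvent): Promise<string> {
  const messageId = await nativePublish("signups", JSON.stringify(event));
  return messageId; // the Node thread never executed delivery or retry logic
}
```

The only thing JavaScript does here is serialize the payload and await a promise; everything between those two points happens on the Rust side.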
Where Rust Paid Off: The FFI Boundary
The performance story is compelling, but the more interesting engineering story is what it took to wire these two runtimes together. Making Node.js and Rust coexist in the same OS process is not a documented happy path. The Encore team's own framing of the project describes it as solving "the non-obvious problems of making Node.js and Rust work together in the same process" — and the word "non-obvious" is doing a lot of work there.
The bridge between the two worlds runs through N-API, Node's stable C interface for native addons. N-API lets Rust code interact with the V8 heap and the Node.js runtime in a way that survives Node version upgrades, which matters for a framework that can't ask users to pin to a specific engine build. But N-API is fundamentally a synchronous, C-style interface, and threading it through Tokio's async model introduced friction at every seam. Passing data across the FFI boundary without unnecessary copying, managing lifetimes when Rust's ownership model and V8's garbage collector both want to claim the same objects, and ensuring that panics in Rust code don't silently corrupt the Node.js process — these are the categories of problems that don't show up in language benchmarks but define how long a project like this actually takes.
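The general shape of that bridge, as seen from JavaScript, is the callback-to-promise marshaling pattern most N-API addons expose. This sketch stubs the native side — the `native.query` signature is illustrative, not Encore's binding:

```typescript
// Stub standing in for a native N-API export. A real binding would run the
// query on the Rust/Tokio side and invoke the callback via a threadsafe
// function once the result crosses back over the boundary.
const native = {
  query(sql: string, cb: (err: Error | null, rows: unknown[]) => void): void {
    setImmediate(() => cb(null, [{ ok: true, sql }]));
  },
};

// Promise wrapper over the C-style callback interface. Ideally, a failure on
// the Rust side (including a caught panic) surfaces here as a rejected
// promise rather than corrupting the Node.js process.
function query(sql: string): Promise<unknown[]> {
  return new Promise((resolve, reject) => {
    native.query(sql, (err, rows) => (err ? reject(err) : resolve(rows)));
  });
}
```

The hard part isn't this wrapper; it's everything the stub hides — copying (or avoiding copies of) the arguments at the boundary, and deciding which thread is allowed to touch the V8 heap when the result comes back.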

Static Analysis as a Runtime Safety Layer
One of Encore's less-discussed engineering choices directly complements the Rust runtime: the framework uses static code analysis at build time to extract TypeScript type definitions and compile them into a schema the Rust layer can use for request validation. Normally, TypeScript's type system evaporates at runtime. Encore's compiler analysis preserves it, embedding the API schema into the Rust runtime so that incoming HTTP requests are fully validated before they ever reach the JavaScript layer.
This matters because it shifts a class of validation errors from application code into the infrastructure layer. The Rust runtime can reject malformed requests with zero JavaScript execution, which means the Node.js event loop only receives work it can actually complete. For developers, it also eliminates a category of defensive boilerplate: you define the type once in TypeScript, and the runtime enforces it everywhere.
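The idea can be sketched in a few lines. Suppose the compiler extracted a schema from an interface like `interface CreateUser { name: string; age: number }`; the schema format and validator below are illustrative, not Encore's actual representation (and in Encore.ts the check runs in Rust, not TypeScript):

```typescript
// Schema a build-time analysis pass might emit for CreateUser.
const createUserSchema = { name: "string", age: "number" } as const;

type Schema = Record<string, "string" | "number">;

// Validate a request body against the schema before any handler runs.
// A non-empty return value means the request is rejected at the edge.
function validate(schema: Schema, body: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [field, expected] of Object.entries(schema)) {
    if (typeof body[field] !== expected) {
      errors.push(`${field}: expected ${expected}`);
    }
  }
  return errors;
}
```

The type is written once, in TypeScript; the enforcement happens before the JavaScript layer ever sees the request.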
The Infrastructure Surface: What Rust Actually Owns
The Rust runtime's responsibilities extend well beyond HTTP. All infrastructure integrations in Encore.ts — PostgreSQL connections, Pub/Sub producers and consumers, Secrets retrieval, and Cron Job scheduling — live inside the Rust layer. This is a deliberate architectural boundary: infrastructure code runs in Rust, application code runs in Node.js, and the FFI layer is the only interface between them.
The advantage of this partition is operational consistency. Infrastructure behavior like connection pooling, retry logic, and timeout handling is implemented once in Rust and shared across every application built on the framework. A developer adding a Pub/Sub subscriber in TypeScript isn't writing retry logic; they're calling into a battle-tested Rust implementation that handles failure modes they might not have considered. The framework's performance characteristics also remain predictable regardless of how much infrastructure a given application uses, because that work is always off the critical path of the Node.js thread.
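The shape of that shared logic is familiar; what's unusual is where it lives. A sketch of retry with exponential backoff — implemented once in Encore's Rust layer for every infrastructure call, shown here in TypeScript with illustrative attempt counts and delays:

```typescript
// Generic retry with exponential backoff. The policy is written once and
// reused by every infrastructure integration, instead of being re-implemented
// in each application.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastErr = err;
      // exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

Because this runs on the Rust event loop rather than in application code, a subscriber that fails and retries three times costs the Node.js thread nothing.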
What This Framework Means for the "Rust Core" Pattern
Encore.ts is a clear data point for a pattern that's becoming more common across the tooling ecosystem: a thin, ergonomic TypeScript or Python interface backed by a Rust engine that handles the performance-critical path. The tradeoffs are consistent across cases. You gain multi-threaded I/O throughput, deterministic memory behavior, and the ability to enforce safety guarantees at the language boundary. You pay in build complexity, FFI friction, and a debugging experience that can feel fragmented when a bug lives at the seam between two runtimes.
For teams evaluating whether to introduce a Rust core beneath their own higher-level framework, Encore's two-year timeline and 67,000-line codebase are the most honest project scope estimates available. The 9x performance ceiling is real, but it doesn't come from dropping Tokio into a project in an afternoon. It comes from solving the N-API boundary, the async ownership puzzles, and the type-system bridging that the benchmark posts don't measure. The question isn't whether Rust can accelerate a TypeScript framework. Encore already answered that. The question is whether your team has the infrastructure appetite to build the bridge.

