As Envoy gains widespread adoption in the cloud-native networking space, more developers are exploring how to extend its capabilities. Envoy supports multiple extension mechanisms, each with trade-offs in performance, security, development complexity, and compatibility.
This article summarizes my research into these mechanisms and aims to help you better understand and choose the right extension strategy.
When discussing “extension mechanisms” in Envoy, it’s important to distinguish between native in-process extensions and external integrations. Architecturally, these fall into two broad categories:

- **In-process extensions**: C++ filters, Lua scripts, Wasm modules, and dynamic modules all run inside the Envoy process as part of its filter chain.
- **External integrations**: `ext_proc` and `ext_authz` rely on gRPC/HTTP APIs to call external services for request-handling logic. They run outside the Envoy process and are not part of its filter chain, so strictly speaking they are integration mechanisms, not extensions.

However, since `ext_proc` and `ext_authz` are widely used in real-world scenarios for HTTP request/response handling, I’ve included them here for comparison.
Each mechanism’s implementation cost, performance, and applicable scenarios differ. Here’s a high-level comparison:
This is the most powerful and lowest-level method—embedding custom logic directly into Envoy’s source and compiling it. It delivers the best performance (zero-copy, low latency) and suits performance-critical paths. But the downsides include maintaining a custom build pipeline, distributing your own Envoy binaries, and high upgrade costs.
Lua is a mature extension option. It runs coroutine-based scripts in the same process as Envoy. It’s easy to use, requires no recompilation, and can be inline in the config. However, there’s no isolation—crashes can affect Envoy itself—so it’s best used in trusted environments.
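To make this concrete, here is a minimal sketch of an inline Lua filter that adds a request header. The header name `x-lua-processed` is just an illustration; note also that newer Envoy versions prefer `default_source_code` over the older `inline_code` field, so check the docs for your version:

```yaml
http_filters:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      -- Runs for every request on the decode path
      function envoy_on_request(request_handle)
        request_handle:headers():add("x-lua-processed", "true")
      end
```

Because the script lives directly in the listener config, you can iterate on it with a plain config reload, with no build step at all.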
Proxy-Wasm allows writing filters in Rust, Go, etc., compiled to WebAssembly modules dynamically loaded into Envoy. Wasm runs in a sandboxed VM with decent isolation. However, the ecosystem is still evolving, debugging is difficult, and performance is lower than C++ or dynamic modules.
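Loading a compiled module is done through the Wasm HTTP filter. The sketch below assumes a module already compiled to `/etc/envoy/filter.wasm`; the `name` and `root_id` values are illustrative placeholders:

```yaml
http_filters:
- name: envoy.filters.http.wasm
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
    config:
      name: my_wasm_filter     # arbitrary identifier for this filter instance
      root_id: my_root_id      # must match the root context ID in the module
      vm_config:
        runtime: envoy.wasm.runtime.v8   # the V8-based Wasm runtime
        code:
          local:
            filename: /etc/envoy/filter.wasm
```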
Dynamic Modules, introduced in Envoy v1.34, let you write Rust-based extensions (compiled with a C ABI) and load them as `.so` shared libraries at runtime. Compared to built-in C++, dynamic modules offer similar performance without needing to rebuild Envoy, making them ideal for teams that demand performance but want to avoid forking Envoy.
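Since the feature is new and still experimental, treat the following configuration sketch as an approximation of the v1.34 API rather than a reference; field names may shift between releases, and the module name here (`my_module`) is a placeholder resolved against the directory Envoy is told to search for shared libraries:

```yaml
http_filters:
- name: envoy.filters.http.dynamic_modules
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.dynamic_modules.v3.DynamicModuleFilter
    dynamic_module_config:
      name: my_module          # resolved to a libmy_module.so on the module search path
    filter_name: my_http_filter  # which filter inside the module to instantiate
```

The module itself is a Rust crate built as a `cdylib`, so deployment is just shipping a `.so` alongside a stock Envoy binary.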
`ext_proc` enables complete customization of request/response logic in an external service, including reading and modifying the body. It’s useful for deep content inspection (DLP, antivirus, etc.), but being out-of-process, it introduces extra latency.
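A typical `ext_proc` setup points the filter at a gRPC cluster running your processor and selects which parts of the exchange get sent over. The cluster name `ext_proc_service` below is an assumption, and the processing mode shown (buffered request bodies, headers only otherwise) is just one plausible choice for content inspection:

```yaml
http_filters:
- name: envoy.filters.http.ext_proc
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_proc.v3.ExternalProcessor
    grpc_service:
      envoy_grpc:
        cluster_name: ext_proc_service   # cluster pointing at your external processor
    processing_mode:
      request_header_mode: SEND          # forward request headers to the processor
      response_header_mode: SEND
      request_body_mode: BUFFERED        # buffer and send the request body for inspection
      response_body_mode: NONE           # skip response bodies to save a round trip
```

Tuning the processing mode matters: every element you send is an extra gRPC round trip, so only opt into the body when the use case (e.g. DLP scanning) actually requires it.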
`ext_authz` is similar but only evaluates the request path and cannot modify responses. It’s ideal for OAuth2, JWT, or header-based access control, and is commonly deployed remotely and non-intrusively.
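A minimal `ext_authz` configuration looks like the sketch below; the cluster name `authz_service` is a placeholder for wherever your authorization server runs:

```yaml
http_filters:
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    transport_api_version: V3
    failure_mode_allow: false   # fail closed: deny requests if the authz service is down
    grpc_service:
      envoy_grpc:
        cluster_name: authz_service   # cluster pointing at your authorization server
```

Because only request metadata crosses the process boundary, the per-request cost is far lower than `ext_proc`, which is why this pattern is so common for auth.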
Here’s a detailed table comparing all six mechanisms across execution model, performance, language support, security, compatibility, and use cases:
| Aspect | C++ Filter | Lua Script | Wasm | Dynamic Module | ext_proc | ext_authz |
|---|---|---|---|---|---|---|
| Execution Model | Native C++ in Envoy | LuaJIT coroutine (in-process) | VM-based (e.g. V8) | Shared object, in-process execution | gRPC/REST, external service | gRPC/REST, external service |
| Performance | Best (zero-copy) | Moderate (better than Wasm) | Moderate (cross-VM serialization) | High, near-native | Lower, cross-process cost | Efficient for metadata-only |
| Language Support | C++ | Lua (stream API) | Best in Rust, supports Go/C++ | Rust official, Go possible | Any gRPC/REST language | Any gRPC/REST language |
| Deployment | Statically compiled | Inline or script reference | Dynamically loaded `.wasm` | Dynamically loaded `.so` | Remote or sidecar service | Remote service |
| Security/Isolation | Fully trusted | No isolation, full trust | Sandboxed, isolated | Shared memory, full trust required | Process isolation, secure | Process isolation, secure |
| Compatibility | Strongly coupled to Envoy | Depends on Lua API stability | Relatively stable ABI | ABI-sensitive, version locked | Stable API, version-tolerant | Stable API, version-tolerant |
| Use Cases | Core traffic path | Quick header edits, logic tweaks | Safe, cross-language, rapid prototyping | High-perf HTTP extensions, no rebuild | DLP, security scans | Auth, access control |
Based on my research and practical experience, dynamic modules have become my top recommendation for extending Envoy.
They provide near-C++ performance without the pain of rebuilding Envoy. This makes them ideal for teams that need high performance but want to avoid the complexity of managing a custom Envoy fork.
Compared to Wasm, dynamic modules run directly in-process—no serialization, no VM overhead, no memory sandbox—which gives them a natural advantage for header and body manipulation.
While dynamic modules are still experimental and lack ABI compatibility across Envoy versions, this is manageable for environments with fixed or controlled release cycles.
In my view, dynamic modules are poised to replace many Wasm use cases, especially in enterprise environments where performance and debuggability matter.
I’ll share a complete tutorial on building a dynamic module in Rust soon—stay tuned!
Here’s my bottom line: no single mechanism is a silver bullet. The key is understanding their design trade-offs and selecting based on your operational needs.
If you’re evaluating long-term extension strategies for Envoy, I strongly encourage you to keep an eye on dynamic modules and prepare accordingly.
If you’ve used any of these mechanisms in real-world scenarios, I’d love to hear from you. You can find more of my work on Envoy, Istio, and service meshes at jimmysong.io.