How Edera Provides VM-Level Isolation for Containers

Edera Protect introduces a runtime model that combines the boundary guarantees of VMs with the ergonomics of containers. The design centers on a Rust-based micro-hypervisor that evolved from the Xen model. It boots a separate kernel for each container using PVH (paravirtualized hardware) mode, which avoids device emulation entirely. There is no userspace syscall translation, no trap-heavy interposition, and no reliance on seccomp filtering. Each workload executes in its own kernel context, with no cross-container syscall surface.
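To make the shape of that model concrete, here is a deliberately simplified Rust sketch. The types and names are hypothetical illustrations, not Edera's actual API; the point is the structure: one guest kernel per workload, booted directly in PVH mode.

```rust
// Illustrative sketch only: these types are hypothetical, not Edera's API.
struct GuestKernel {
    image: &'static str, // kernel image booted for this zone
}

struct Zone {
    id: u32,
    kernel: GuestKernel, // private kernel: no cross-zone syscall surface
    memory_mb: u64,      // memory owned exclusively by this zone
}

fn boot_zone(id: u32, memory_mb: u64) -> Zone {
    // PVH boot: the guest kernel starts directly at its entry point;
    // there is no emulated BIOS, chipset, or device model to attack.
    Zone { id, kernel: GuestKernel { image: "vmlinuz-zone" }, memory_mb }
}

fn main() {
    // Two containers, two kernels: a kernel exploit in zone 1 never
    // executes in the kernel context that zone 2's workload trusts.
    let web = boot_zone(1, 512);
    let db = boot_zone(2, 1024);
    println!("zone {} -> {} ({} MiB)", web.id, web.kernel.image, web.memory_mb);
    println!("zone {} -> {} ({} MiB)", db.id, db.kernel.image, db.memory_mb);
}
```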

This model enables precise kernel-level isolation with a negligible performance penalty when VT-x or AMD-V hardware support is available; in the software-only fallback mode, overhead increases but mostly stays below 15%. The hypervisor is fully container-aware: it does not treat containers as VMs; rather, it treats VMs as container boundaries. That inversion of priority matters.
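The hardware check itself is standard: on x86_64, the VT-x and AMD-V feature bits are exposed through CPUID. A minimal sketch of how a runtime could probe for them, using the documented bits rather than anything specific to Edera's implementation:

```rust
// Sketch: probing for hardware virtualization before choosing between
// hardware-assisted mode and the slower software-only fallback.
#[cfg(target_arch = "x86_64")]
fn hw_virt_available() -> bool {
    use core::arch::x86_64::__cpuid;
    // CPUID leaf 1, ECX bit 5: Intel VT-x (VMX).
    let vmx = unsafe { __cpuid(1) }.ecx & (1 << 5) != 0;
    // CPUID leaf 0x8000_0001, ECX bit 2: AMD-V (SVM).
    let svm = unsafe { __cpuid(0x8000_0001) }.ecx & (1 << 2) != 0;
    vmx || svm
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if hw_virt_available() {
        println!("VT-x/AMD-V present: hardware-enforced boundary");
    } else {
        println!("no virtualization extensions: software fallback");
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```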

Each container gets its own memory space, kernel scheduler, and optionally even its own drivers. Edera can isolate networking, GPU, and storage drivers into separate zones. These driver zones provide true hardware-layer compartmentalization, preventing a compromised driver from leaking into unrelated containers. This level of boundary enforcement is not possible with cgroups or seccomp, and is not implemented in Kata or gVisor.
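A hypothetical sketch of the driver-zone idea, with invented types rather than Edera's real configuration surface:

```rust
// Hypothetical sketch (not Edera's actual configuration surface):
// drivers are assigned to dedicated zones rather than the host kernel,
// so a compromised driver is contained like any other workload.
#[derive(Debug)]
enum DriverClass {
    Network,
    Gpu,
    Storage,
}

#[derive(Debug)]
struct DriverZone {
    class: DriverClass,
    zone_id: u32, // the isolated kernel context hosting this driver
}

fn main() {
    // Each driver class runs behind its own hardware-enforced boundary.
    let plan = vec![
        DriverZone { class: DriverClass::Network, zone_id: 10 },
        DriverZone { class: DriverClass::Gpu, zone_id: 11 },
        DriverZone { class: DriverClass::Storage, zone_id: 12 },
    ];
    for dz in &plan {
        println!("{:?} driver isolated in zone {}", dz.class, dz.zone_id);
    }
}
```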

Container Security Benchmarks: Edera vs. Docker and Kata

In their published benchmarks, Edera’s runtime exhibits syscall and memory performance comparable to Docker. This includes operations like read, write, memcpy, and memset, which are often bottlenecks in I/O-bound and compute-heavy applications. In PVH mode, syscall performance even exceeds Docker on some paths due to optimized guest-kernel interactions.
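Comparisons like these rest on micro-benchmarks of exactly this kind. A minimal sketch one could use to reproduce the memory-path numbers: build once, run the identical binary under each runtime, and compare wall-clock times (buffer size and iteration count here are arbitrary choices for illustration):

```rust
use std::time::Instant;

fn main() {
    const SIZE: usize = 64 * 1024 * 1024; // 64 MiB working set
    let src = vec![0xA5u8; SIZE];
    let mut dst = vec![0u8; SIZE];

    let start = Instant::now();
    for _ in 0..20 {
        dst.copy_from_slice(&src); // memcpy-equivalent
    }
    println!("memcpy 20x64MiB: {:?}", start.elapsed());

    let start = Instant::now();
    for _ in 0..20 {
        dst.fill(0); // memset-equivalent
    }
    println!("memset 20x64MiB: {:?}", start.elapsed());
}
```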

The kernel build benchmark is a useful real-world signal. Building a Linux kernel is a proxy for sustained I/O, CPU scheduling, page faults, and cache behavior. Edera completed the task with less than 5 percent overhead relative to Docker. This matters because kernel builds are common in CI workloads, and a 5 percent performance loss is acceptable when it comes with guaranteed isolation.

Startup latency is slightly higher than runc but significantly lower than Kata Containers and Firecracker. This makes Edera viable for latency-sensitive microservices and interactive workloads. It may still require tuning for highly ephemeral serverless environments where cold start time is measured in milliseconds.
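Cold-start numbers are straightforward to check yourself. A rough sketch that times a no-op container via the Docker CLI's --runtime flag; swap in the handler name for the runtime under test (the handler name for Edera depends on your installation and is not shown here as a confirmed value), repeat many times, and discard warm-up runs:

```rust
use std::process::Command;
use std::time::Instant;

// Sketch: measuring cold-start latency by timing a no-op container.
fn main() {
    let start = Instant::now();
    let status = Command::new("docker")
        .args(["run", "--rm", "--runtime", "runc", "alpine", "true"])
        .status()
        .expect("failed to invoke docker");
    println!("exit: {status}, cold start: {:?}", start.elapsed());
}
```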

Observability in Isolated Container Runtimes with eBPF

One of the weakest points of hypervisor-based isolation models has historically been observability. Tools like strace, bpftrace, and perf either fail or give incomplete information inside nested kernel contexts. Edera addresses this by implementing eBPF telemetry per zone. Each container’s kernel runs its own eBPF instrumentation, which feeds into the host through scoped control channels. Security teams can monitor syscall behavior, memory usage, and I/O patterns without needing access to the host kernel.
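The plumbing details are Edera's, but the flow can be modeled simply: per-zone producers emit zone-tagged events to a host-side collector over scoped channels. A hypothetical Rust model, with std::sync::mpsc standing in for the real control channels:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical model of the per-zone telemetry flow (not Edera's real
// wire format): each zone's eBPF programs emit events tagged with their
// origin zone, so the host collector never sees unattributed data.
#[derive(Debug)]
struct TelemetryEvent {
    zone_id: u32,
    kind: &'static str, // e.g. "syscall", "memory", "io"
    detail: String,
}

fn main() {
    let (tx, rx) = mpsc::channel::<TelemetryEvent>();

    for zone_id in 1..=2 {
        let tx = tx.clone();
        thread::spawn(move || {
            // Stand-in for events surfaced by in-zone eBPF probes.
            tx.send(TelemetryEvent {
                zone_id,
                kind: "syscall",
                detail: format!("openat burst in zone {zone_id}"),
            })
            .ok();
        });
    }
    drop(tx); // close the channel once all zone senders are gone

    for event in rx {
        // Events arrive already attributed to a single zone, so one
        // noisy zone cannot drown out or masquerade as another.
        println!("[zone {}] {}: {}", event.zone_id, event.kind, event.detail);
    }
}
```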

This model also improves security signal clarity. Events in one zone are not conflated with those in another. No single container can generate enough noise to blind the telemetry layer. This lowers false positives and avoids the alert fatigue common in monolithic observability stacks on shared-kernel systems.

Run Secure Containers Without Rebuilding Images

Unlike gVisor, Edera does not require containers to be rewritten or repackaged; it supports standard OCI images, Dockerfiles, and container runtimes. The runtime swap is invisible to the developer workflow, and CI/CD pipelines require no special handling. Orchestration tools like Kubernetes and Nomad are supported through shim integration. There are no syscall emulators or translation layers, and most workloads run as-is, including compiled binaries, language runtimes, and third-party dependencies.
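In Kubernetes, this kind of drop-in runtime swap is conventionally wired up through a RuntimeClass. A sketch of what that looks like; the handler name edera below is an assumed placeholder, not a confirmed value, and the real handler is whatever is configured in your containerd or CRI-O setup:

```yaml
# Standard Kubernetes RuntimeClass wiring for an alternate runtime.
# "edera" is a placeholder assumption; consult the runtime's docs
# for the actual handler name.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: edera
handler: edera
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload
spec:
  runtimeClassName: edera   # unmodified OCI image, different boundary
  containers:
    - name: app
      image: nginx:1.27     # ordinary image, no rebuild required
```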

This is a critical point for compliance-driven organizations. Many workloads come from vendor-provided containers or third-party partners. Rebuilding or modifying these artifacts to run in a specialized runtime breaks supply chain integrity and creates long-term friction. Edera avoids that entirely.

A New Trust Model for Secure Container Infrastructure

With Edera, the trust model shifts. You no longer trust the host kernel to remain unexploited. You trust the hypervisor boundary between containers. That boundary is hardware-enforced when virtualization extensions are present. It is memory-segregated, scheduler-isolated, and syscall-contained.

For regulated industries, this means you can build infrastructure where no two tenants ever share a kernel. You can attach policy to kernel instances rather than namespaces. You can monitor each container’s kernel independently. And when something fails, the blast radius is constrained to a single isolated zone.

A New Baseline for High-Assurance Containers

Edera is not just another secure container runtime; it is a full-system rethink that starts with security and designs for performance, not the other way around. It avoids emulation, syscall translation, and changes to developer workflows. It introduces real isolation without rewriting the container ecosystem.

This is the first runtime I have seen that I would deploy in a regulated environment without hesitation. The isolation guarantees are measurable and the performance trade-offs are acceptable. The tooling integrations are realistic, and the telemetry model is secure.

This is what we should have had from the beginning: containers with actual containment.

Finally!

If you're managing containers in regulated environments and tired of tradeoffs, it's time to rethink your runtime. Let’s chat about Edera Protect and how it delivers real isolation—without compromise.