Modern Programming III: Unsafe Is the New C - How Escape Hatches Concentrate Risk
The rise of memory-safe systems languages has fundamentally changed how low-level software is written, but it has not eliminated the need to escape those guarantees. Both Rust and Go provide explicit mechanisms to bypass safety in the name of performance, interoperability, and control. These escape hatches are intentional, necessary, and carefully designed. Yet over time, they have begun to play a familiar role. In modern systems, unsafe code is increasingly where the old problems live.
This is not an indictment of unsafe constructs themselves. It is an observation about how complex systems evolve. In C and C++, all code lived in a permanently unsafe environment. In Rust and Go, most code does not. As a result, risk does not disappear; it concentrates. The same categories of bugs that once permeated entire codebases now cluster around the boundaries where safety is explicitly relaxed. These boundaries become the new fault lines.
Rust’s unsafe keyword is often misunderstood as a flaw in the language rather than a reflection of reality. Certain operations simply cannot be expressed within a fully safe abstraction, particularly when interacting with hardware, kernel interfaces, or foreign code. Unsafe blocks allow developers to assert additional invariants that the compiler cannot verify. When those invariants are correct and stable, unsafe code can be robust and efficient. When they are wrong, incomplete, or invalidated by future changes, the compiler has no recourse. The responsibility shifts entirely to human reasoning, documentation, and institutional memory.
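To make that contract concrete, here is a minimal sketch (the function and its invariant are invented for illustration) of how an unsafe block turns a property the compiler cannot check into a comment that humans must keep true:

```rust
/// Read the first and last bytes of a slice without bounds checks.
///
/// # Safety
/// `buf` must be non-empty. The compiler cannot verify this; the
/// caller asserts it, and every future maintainer must preserve it.
unsafe fn first_and_last(buf: &[u8]) -> (u8, u8) {
    // SAFETY: the caller guarantees `buf` is non-empty, so index 0 and
    // `len() - 1` are both in bounds. Violating that contract is
    // undefined behavior, not a panic.
    unsafe { (*buf.get_unchecked(0), *buf.get_unchecked(buf.len() - 1)) }
}

fn main() {
    let packet = [0x45u8, 0x00, 0x3c];
    // SAFETY: `packet` is a non-empty array, so the contract holds.
    let (first, last) = unsafe { first_and_last(&packet) };
    println!("first = {first:#x}, last = {last:#x}");
}
```

The `# Safety` section is the only place the invariant lives. If a future call site stops upholding it, nothing fails at compile time; the program is simply undefined.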
Go’s unsafe package plays a similar role, albeit in a different style. By allowing arbitrary pointer conversion and memory reinterpretation, it enables optimizations and integrations that would otherwise be impossible. In practice, it is often used to bypass the type system, reduce allocations, or interface with low-level APIs. Like Rust’s unsafe blocks, these uses tend to appear in the same high-risk domains: serialization, cryptography, networking, and performance-critical infrastructure. These are precisely the areas where correctness matters most and mistakes are most costly.
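As a minimal sketch of the allocation-avoidance idiom, the zero-copy conversion below uses the `unsafe.StringData` and `unsafe.Slice` functions added in Go 1.20 (the wrapper name is illustrative):

```go
package main

import (
	"fmt"
	"unsafe"
)

// bytesOf reinterprets a string's backing memory as a byte slice
// without copying. This is the allocation-avoidance idiom the unsafe
// package enables, and it quietly suspends the language's guarantee
// that strings are immutable: writing through the returned slice is
// undefined behavior.
func bytesOf(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	payload := "GET / HTTP/1.1"
	b := bytesOf(payload) // zero-copy view; must be treated as read-only
	fmt.Println(len(b), b[0] == 'G')
}
```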
What makes unsafe code particularly dangerous in modern systems is not just the presence of memory risk, but its interaction with safe surroundings. Unsafe code often relies on assumptions about how safe code will behave. Those assumptions may hold initially, but they are rarely enforced. Over time, refactors, dependency upgrades, and performance tuning subtly alter the environment in which unsafe code operates. The compiler continues to enforce safety elsewhere, giving a false sense of global correctness, while the unsafe core quietly accumulates technical and security debt.
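A contrived but representative Rust sketch of this drift: the invariant below is recorded only in a comment, and a perfectly legal safe-code refactor is all it takes to invalidate it:

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::with_capacity(4);
    buf.extend_from_slice(&[0xde, 0xad, 0xbe, 0xef]);

    // Raw pointers carry no lifetime, so the compiler cannot see the
    // invariant this code depends on: `buf` must not be reallocated
    // between taking `p` and reading through it.
    let p: *const u8 = buf.as_ptr();

    // SAFETY: `buf` has not been mutated since `as_ptr`, so `p` is
    // valid and points at the first element.
    let first = unsafe { *p };
    assert_eq!(first, 0xde);

    // A later, safe-looking refactor that moved this push above the
    // read would reallocate the buffer (capacity is exactly 4) and
    // turn the read into a use-after-free, with no compiler
    // diagnostic: the dereference already sits inside `unsafe`.
    buf.push(0x00);
}
```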
This pattern is especially visible in kernel-adjacent and systems-level code. Drivers, protocol stacks, and cryptographic libraries frequently sit at the boundary between safe and unsafe worlds. In these contexts, unsafe code often encodes invariants about memory layout, alignment, lifetime, and concurrency that are not locally verifiable. A change in one subsystem can invalidate assumptions in another, reintroducing use-after-free conditions, aliasing violations, or race-prone behavior. The language has changed, but the failure mode is familiar.
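A hypothetical packet-header parse shows the shape of the problem: one unsafe line whose soundness rests entirely on layout details that nothing local enforces:

```rust
use std::mem::size_of;
use std::ptr;

// The unsafe read below is sound only because of this exact layout:
// `#[repr(C)]`, this field order, and these field sizes. Change any
// of them and the parse is silently wrong; nothing local fails to
// compile. (Byte-order conversion is omitted for brevity.)
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct UdpHeader {
    src_port: u16,
    dst_port: u16,
    length: u16,
    checksum: u16,
}

fn parse_udp(bytes: &[u8]) -> Option<UdpHeader> {
    if bytes.len() < size_of::<UdpHeader>() {
        return None;
    }
    // SAFETY: the length was checked above, and `read_unaligned`
    // assumes nothing about the alignment of `bytes`. The remaining
    // invariant, that the struct layout matches the wire format,
    // lives only in the annotation and the comment above.
    Some(unsafe { ptr::read_unaligned(bytes.as_ptr() as *const UdpHeader) })
}

fn main() {
    let wire = [0x00u8, 0x35, 0x04, 0xd2, 0x00, 0x08, 0x1a, 0x2b];
    println!("{:?}", parse_udp(&wire));
}
```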
From a security perspective, unsafe code also becomes a high-value target because of its durability. Unlike application logic, which changes frequently, unsafe code is often written once and rarely revisited. It tends to be authored by a small number of specialists and then trusted implicitly. This mirrors the historical fate of hand-written C libraries that survived for decades with minimal scrutiny. When vulnerabilities emerge in such code, they are often systemic, difficult to patch, and widely deployed.
The problem is compounded by foreign function interfaces. Both Rust and Go rely heavily on FFI to integrate with existing C libraries, operating system APIs, and hardware drivers. FFI boundaries are, by definition, unsafe. They create zones where the guarantees of one language end and another begin. In these zones, memory ownership, lifetime, and error handling conventions must be translated manually. Any mismatch becomes a potential vulnerability. As systems increasingly stitch together components written in multiple languages, these translation layers grow in number and complexity.
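A small illustration of that manual translation, calling the C runtime's `strlen` from Rust (the wrapper function is invented for this sketch):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// The C runtime's strlen: across this boundary, Rust's guarantees
// end. The declaration itself is a promise about a foreign ABI that
// the compiler takes on faith.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn c_string_length(s: &str) -> usize {
    // Ownership translation, done by hand: CString allocates a
    // NUL-terminated copy and keeps it alive for the duration of the
    // call. Passing a pointer into a Rust `&str` instead would break
    // strlen's implicit contract (NUL termination) and read out of
    // bounds.
    let c = CString::new(s).expect("interior NUL byte");
    // SAFETY: `c` is a valid, NUL-terminated C string that outlives
    // the call.
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    assert_eq!(c_string_length("unsafe boundary"), 15);
    println!("ok");
}
```

Every convention crossing the boundary, NUL termination, ownership of the copy, the meaning of the return value, is translated by hand; neither compiler checks any of it.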
In networking and control-plane software, unsafe code frequently underpins parsing, packet handling, and cryptographic primitives. These components process untrusted input at high rates and often operate under strict performance constraints. The pressure to optimize can lead to shortcuts that erode safety guarantees. While a memory-safe language may prevent these shortcuts from spreading, it does not prevent them from existing where they are most tempting. The result is a system that appears safe at the surface but depends critically on a small, fragile core.
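The shortcut pattern is easy to sketch (the parser below is invented): a hot-path routine that trades a local validation step for a global, unenforced assumption about its callers:

```rust
/// Extract the method token from an HTTP request line, skipping UTF-8
/// re-validation in the hot path.
fn method_token(request: &[u8]) -> Option<&str> {
    let end = request.iter().position(|&b| b == b' ')?;
    // The tempting shortcut: "methods are always ASCII, so validation
    // is wasted work." The real invariant is that every caller has
    // already filtered the input; if any path forgets, this returns an
    // invalid `&str` and downstream code inherits undefined behavior.
    // SAFETY: holds only if `request` was pre-validated as ASCII.
    Some(unsafe { std::str::from_utf8_unchecked(&request[..end]) })
}

fn main() {
    let req = b"GET /index.html HTTP/1.1";
    println!("{:?}", method_token(req));
}
```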
The key insight is that unsafe code is not merely a technical construct; it is an organizational and lifecycle problem. The security of unsafe regions depends on long-lived assumptions remaining valid across years of change. This requires discipline that is difficult to sustain at scale. Without explicit governance, documentation, and review processes, unsafe code gradually becomes indistinguishable from legacy C. It is still compiled by a modern toolchain, but it carries the same risks as its predecessors.
Recognizing this does not diminish the value of memory-safe languages. On the contrary, it clarifies their role. Go and Rust dramatically reduce the surface area in which traditional memory vulnerabilities can occur. What remains is smaller, more visible, and more amenable to focused scrutiny. The challenge is to treat unsafe code as a first-class security boundary, not an implementation detail.
In a post-memory-safety world, secure systems programming requires more than choosing the right language. It requires understanding where safety ends, why it ends there, and how those boundaries are maintained over time. Unsafe is not a failure of modern languages. It is the place where modern systems must still confront the hard problems that C and C++ exposed everywhere.
This article is part of a series on modern compiled programming languages, specifically some musings on Go and Rust. The series can be followed here:
- Modern Programming I: Memory Safe Does Not Mean Exploit Free
- Modern Programming II: Concurrency Is the New Exploit Primitive
- Modern Programming III: Unsafe Is the New C - How Escape Hatches Concentrate Risk
- Modern Programming IV: Control-Plane Security in a Memory-Safe World
- Modern Programming V: Parsing, Protocols, and Safe Failure That Still Breaks Systems
- Modern Programming VI: Redefining Secure Systems Programming