Modern Programming I: Memory Safe Does Not Mean Exploit Free
Over the last decade, systems programming has undergone a quiet but profound shift. Languages such as Go and Rust have moved from niche tools into the mainstream of infrastructure software, cloud platforms, and security-sensitive services. Their rise has been driven largely by one claim: that memory safety eliminates an entire class of vulnerabilities that plagued C and C++ for decades. This claim is largely correct, and it represents a genuine advance in the reliability and security of modern systems. Yet the growing body of production incidents, kernel debates, and distributed system failures makes one thing increasingly clear. Memory safety has not eliminated exploitation. It has transformed it.
The transition from C and C++ to Go and Rust should not be understood as a clean break from historical vulnerability classes, but as a generational evolution of them. Many of the failure modes that once manifested as buffer overflows and use-after-free conditions now reappear as logic violations, concurrency hazards, protocol misuse, and control-plane abuse. The underlying lesson is not that modern languages have failed, but that secure systems programming must now be defined more broadly than memory correctness alone.
Memory safety deserves its success. C and C++ made raw pointer arithmetic, manual allocation, and unchecked memory access the default mode of programming. Unsurprisingly, this produced decades of vulnerabilities rooted in memory corruption. Rust’s ownership model and borrow checker, along with Go’s managed memory and bounds-checked slices, have dramatically reduced the likelihood of these errors in well-written code. Empirical evidence from large organizations suggests that a substantial percentage of historical high-severity bugs simply do not occur in memory-safe languages. This is not incremental progress; it is a structural improvement.
However, the absence of memory corruption does not imply the absence of exploitable behavior. Instead, the center of gravity has shifted upward. Modern compiled languages increasingly fail not at the byte level, but at the semantic level. Systems now break because they do the wrong thing safely.
One of the clearest examples of this persistence is the continued reliance on unsafe escape hatches. Both Go and Rust explicitly provide mechanisms to bypass safety guarantees. Rust’s unsafe blocks and Go’s unsafe package exist to enable interoperability, performance, and low-level access to hardware and kernel interfaces. In isolation, these features are carefully designed. In practice, they often concentrate risk in precisely the areas where systems are most sensitive: cryptography, networking stacks, serialization frameworks, and kernel-adjacent code. The result is a familiar pattern from the C++ era, where a small fraction of the codebase silently reintroduces the very classes of bugs the language was meant to eliminate. Memory safety becomes conditional rather than absolute, and over time, these conditions are forgotten or misunderstood.
Concurrency represents another major continuity across language generations. While Rust’s type system eliminates entire classes of data races at compile time, it does not eliminate deadlocks, livelocks, starvation, or subtle ordering bugs. Go’s concurrency primitives simplify parallelism but do not inherently prevent race conditions or state inconsistencies. In both ecosystems, the complexity of modern concurrent systems often leads to failures that are difficult to reproduce, diagnose, and secure. From a security perspective, these are not merely reliability concerns: they are exploitable implementation failures. Control-plane logic that executes out of order, applies updates inconsistently, or deadlocks under load can be exploited for denial of service, state desynchronization, or policy bypass. The exploit primitive has changed, but the attacker’s leverage has not.
Parsing and serialization remain a particularly instructive example of vulnerability persistence. Historically, parsers written in C were notorious for buffer overflows and memory corruption. In Go and Rust, those same parsers may no longer overwrite memory, but they can still panic, exhaust resources, or accept malformed input that corrupts higher-level state. In networking and kernel-adjacent components, where protocols such as DNS, BGP, NDP, and emerging transports like QUIC operate continuously on untrusted input, these failures are especially consequential. A panic in a routing daemon or a resource exhaustion condition in a control-plane service can have impact comparable to a traditional exploit, even if the failure mode is “safe” by language definition.
As systems become more distributed and protocol-driven, logic vulnerabilities increasingly dominate real-world attacks. Memory-safe languages do not prevent authentication confusion, authorization gaps, state machine violations, replay acceptance, or trust boundary collapse. In fact, the increased confidence afforded by memory safety can sometimes obscure these risks. Engineers may correctly assume that a crash will not become code execution, but underestimate how a protocol edge case or control-plane inconsistency can be weaponized. In large enterprise and carrier-grade environments, control-plane abuse often represents a more realistic threat than low-level exploitation, particularly when availability and integrity are the primary security objectives.
The Linux kernel provides a concrete illustration of how these ideas intersect. Rust’s gradual introduction into the kernel has been cautious and deliberate, not because of ideological resistance, but because kernel security is fundamentally about systemic correctness. Rust is not replacing C wholesale, nor is it intended to. Instead, it is being used to build safety envelopes around historically fragile subsystems such as drivers and filesystems. This reflects an important recognition: language choice is a mitigation, not a panacea. Kernel security depends just as much on correct synchronization, protocol adherence, and privilege separation as it does on memory safety.
What emerges from this landscape is a need to reframe what secure systems programming actually means. In the C and C++ era, security engineering focused heavily on preventing memory corruption because it was the dominant exploit vector. In the Go and Rust era, the emphasis must now shift toward semantic integrity. Secure systems programming now requires rigorous attention to concurrency models, protocol correctness, control-plane isolation, and failure behavior under adversarial conditions. It demands an understanding of how systems behave over time, under load, and in the presence of malformed or malicious inputs, not just how they behave in the happy path.
Seen through this lens, the persistence of vulnerability classes across language generations is not a failure of modern languages, but a reflection of the inherent complexity of systems. Go and Rust have successfully removed one layer of fragility. What remains are deeper, more structural risks that cannot be solved by type systems alone. Addressing them requires a synthesis of language design, systems architecture, and security engineering, particularly in kernel, networking, and control-plane domains where correctness and trust are inseparable.
Memory safety is a milestone, not an endpoint. The future of secure systems will be defined not by the absence of crashes, but by the resilience of systems against misuse and misinterpretation. In that future, exploitability will be measured less by whether memory can be corrupted and more by whether systems can be coerced into violating their own assumptions safely.
This article is part of a series on modern compiled programming languages, specifically some musings on Go and Rust. The series can be followed here:
- Modern Programming I: Memory Safe Does Not Mean Exploit Free
- Modern Programming II: Concurrency Is the New Exploit Primitive
- Modern Programming III: Unsafe Is the New C - How Escape Hatches Concentrate Risk
- Modern Programming IV: Control-Plane Security in a Memory-Safe World
- Modern Programming V: Parsing, Protocols, and Safe Failure That Still Breaks Systems
- Modern Programming VI: Redefining Secure Systems Programming