Posts

Onboarding Into a Living Codebase

The first week is usually quiet. A new engineer joins the team, their laptop freshly provisioned, their calendar suspiciously empty. They clone the repositories, follow the setup guide, and eventually reach out with a familiar message: “I think I’m missing something. The service won’t start.” Someone responds with a workaround. Another suggests a different environment variable. A third offers to hop on a call and “just get them unblocked.” By the end of the week the engineer is running the system, but nothing about that success is repeatable, documented, or intentional. The team moves on, unaware they have just taught their newest member that survival depends on tribal knowledge. Meanwhile, the wider team loses cadence and opportunity during the distraction of ad hoc onboarding. That moment is where onboarding either becomes an investment or a slow, compounding failure. Onboarding a new software engineer into a mature team is not a clerical exercise. It is ...

This is Cybermancy

Preface This document is a manifesto, not a tutorial. It is written for practitioners who live inside systems rather than around them. It is for those who build, test, trace, break, and repair complex machinery at scale. It is not concerned with tools, techniques, or exploits, but with the way of seeing that precedes all of them. The worldview here is neither criminal nor romantic, though a bit poetic and whimsical in my own way. It is operational. It emerges naturally in those who understand that modern systems fail not because they are attacked, but because they are misunderstood. What follows is a statement of posture, responsibility, and discipline for those who recognize that sight itself is the rarest capability on the Grid. Cybermancers are not defined by exploits, hardware, or street legend. Those tangentials are the residue, not the essence. A cybermancer is defined by how they see the Grid: not as fortresses, maps, gates, and walls, but as a breathing lattice of...

Generative AI Is Expensive, and Often Solving the Wrong Layer of the Problem

Generative AI has rapidly become the default interface for asking questions of data. Its fluency and flexibility make it attractive, particularly in environments where questions are ill-formed or evolving. However, beneath this convenience lies a material cost: generative models are computationally intensive, energy-hungry, and operationally expensive. At scale, they represent a fundamentally inefficient way to answer many of the questions organizations are actually asking. In a significant number of practical engineering and analytics scenarios, the desired outcome of a generative interaction is not novel reasoning or creative synthesis. It is a concrete artifact: a SQL query, a filter, a classification, a correlation, a threshold, or a decision rule applied to data the system already possesses. The generative model often acts as little more than an intermediary, translating human language into a deterministic operation that could have been executed directly, repeatedly, and at negligible m...
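A minimal sketch of the point above: a question asked repeatedly of telemetry data does not need a generative model on every invocation. Translate it once into a deterministic rule and apply that rule directly. The function name, data shape, and threshold here are all hypothetical illustrations, not anything from the post itself.

```python
def hosts_over_threshold(samples, threshold):
    """Return the hosts whose CPU reading exceeds the threshold.

    `samples` is a hypothetical list of (host, cpu_percent) tuples. This
    function is the concrete artifact a generative interaction would
    otherwise produce: deterministic, repeatable, and nearly free to run.
    """
    return sorted({host for host, cpu in samples if cpu > threshold})

samples = [("web-1", 91.5), ("web-2", 40.2), ("db-1", 97.0), ("web-1", 30.0)]
print(hosts_over_threshold(samples, 90.0))  # → ['db-1', 'web-1']
```

Once the rule exists, re-answering the question costs one pass over the data rather than one model inference per ask.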

The Architecture of Modern Command-and-Control Networks

Modern command-and-control (C2) networks are best understood not as isolated malicious servers, but as resilient, distributed systems designed to operate in hostile environments. Unlike early botnets that relied on a single centralized server, contemporary C2 architectures assume persistent disruption: nodes will be taken down, domains will be seized, traffic will be filtered, and endpoints will be cleaned. As a result, today’s malware ecosystems increasingly resemble fault-tolerant service meshes (and they're really impressive), borrowing concepts from distributed computing, content delivery networks, and peer-to-peer systems. The defining characteristic of modern C2 is not stealth alone, but survivability under continuous attrition. At the most basic level, C2 networks exist to solve three problems: command dissemination, telemetry collection, and lifecycle management of infected hosts. These functions must be achieved while minimizing detectability and maximizing availability. ...

On Debugging as a Discipline: From Guesswork to Targeted Investigation and Raising Skillsets

For many new software engineers, debugging is perceived as a rite of passage measured by tool progression: first print statements, then logging frameworks, and eventually a full-featured debugger. This framing is misleading. Debugging does not begin with tools at all. It begins with the ability to reason about a system, what it is supposed to do, how it is structured, where invariants should hold, and which assumptions are most likely to be wrong. Tools merely amplify that reasoning. Without a targeted methodology, even the most advanced debugger becomes a slow and unfocused microscope pointed at the wrong place. Effective debugging is fundamentally about knowing where to look and why. That knowledge comes from understanding program structure, control flow, data lifecycles, and the systems the program runs on. A null pointer dereference in isolation is a symptom, not a cause. The cause may be a violated precondition several layers earlier, an unexpected concurrency interaction, a m...
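One way to picture targeted investigation, as a hypothetical sketch: instead of stepping blindly through the whole program, assert the invariants you expect to hold at each stage boundary. The first assertion that fails localizes the defect to the preceding stage, which is exactly the "where to look and why" the excerpt describes. The stage names and data here are invented for illustration.

```python
def normalize(records):
    # Stage 1: lowercase all keys.
    # Postcondition (invariant): every key in the output is lowercase.
    out = [{k.lower(): v for k, v in r.items()} for r in records]
    assert all(k == k.lower() for r in out for k in r), "normalize broke key case"
    return out

def index_by_id(records):
    # Precondition (invariant inherited from upstream): every record has an
    # 'id' key. If this fires, the bug lives in an earlier stage, not here.
    assert all("id" in r for r in records), "precondition violated upstream"
    return {r["id"]: r for r in records}

records = [{"ID": 1, "Name": "a"}, {"Id": 2, "Name": "b"}]
index = index_by_id(normalize(records))
print(sorted(index))  # → [1, 2]
```

The assertions are cheap hypotheses about where invariants should hold; a failing one converts a vague symptom into a bounded search space.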

Modern Programming V: Parsing, Protocols, and Safe Failure That Still Breaks Systems

Parsing has always been one of the most dangerous activities a system can perform. It sits at the boundary between trusted logic and untrusted input, translating raw data into structured meaning. In the C and C++ era, this boundary was infamous for buffer overflows, memory corruption, and remote code execution. Memory-safe languages have dramatically reduced these outcomes, but they have not made parsing safe in a broader sense. They have simply changed how parsing fails, and in modern systems, safe failure can still break everything. Go and Rust make it difficult to write parsers that corrupt memory, but they do not prevent parsers from panicking, allocating unbounded resources, or accepting malformed input that poisons higher-level state. In many infrastructure systems, particularly those that operate continuously on network input, these failure modes are just as damaging as classic exploits. A crash in a control-plane daemon, a stalled parser waiting on input that never resolves, o...
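The failure mode the excerpt names, a parser that cannot corrupt memory but can still be coerced into unbounded allocation, can be sketched in a few lines. This is a hypothetical length-prefixed frame parser, not code from the post: memory safety alone would let a single malformed length field demand an arbitrarily large buffer, so the parser must enforce its own explicit bound and fail with an error rather than a crash.

```python
import struct

MAX_FRAME = 64 * 1024  # explicit resource bound: reject frames over 64 KiB

def read_frame(buf):
    """Parse one 4-byte big-endian length-prefixed frame from `buf`.

    Returns (payload, rest) on success and raises ValueError on malformed
    input; it never allocates beyond MAX_FRAME and never crashes the caller.
    """
    if len(buf) < 4:
        raise ValueError("truncated length prefix")
    (length,) = struct.unpack(">I", buf[:4])
    if length > MAX_FRAME:
        raise ValueError(f"frame length {length} exceeds bound {MAX_FRAME}")
    if len(buf) < 4 + length:
        raise ValueError("truncated payload")
    return buf[4:4 + length], buf[4 + length:]

payload, rest = read_frame(b"\x00\x00\x00\x05hello world")
print(payload)  # → b'hello'
```

Without the `MAX_FRAME` check, a frame header of `\xff\xff\xff\xff` would be a perfectly memory-safe request for a 4 GiB allocation, which is the "safe failure that still breaks systems" the post's title points at.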

Modern Programming VI: Redefining Secure Systems Programming

The gradual disappearance of memory corruption as the dominant failure mode in systems software has created an uncomfortable ambiguity. Many systems today are demonstrably safer at the machine level, yet they remain fragile at the system level. They do not crash as easily, but they can still be coerced into incorrect, unstable, or adversary-favorable behavior. This tension reveals a deeper truth: secure systems programming can no longer be defined primarily by the absence of memory errors. It must be redefined around systemic integrity. For decades, security engineering in low-level software was shaped by the realities of C and C++. The central question was whether untrusted input could overwrite memory or redirect control flow. Defensive techniques, tools, and mental models evolved accordingly. Memory-safe languages have broken this pattern, but they have also rendered many of those mental models incomplete. When systems no longer fail loudly, security failures become quieter, more p...