Vintage Detection: Applying RADAR Research from 1953 to Detect Modern Cyber Threats

I recently spent time with a mentor and friend, Brandon, the founder of Panoptcy Cybersecurity, discussing various detection engineering topics, such as the positivity and intent of a detector. During this conversation, he mentioned how valuable it is to explore how different domains approach similar problems—a philosophy I strongly share and have benefited from in the past.

Brandon introduced me to Signal Detection Theory, a framework developed from military RADAR research focused on distinguishing real signals from background noise and optimizing the balance between false alarms and missed detections. At first, when he said "signal detection theory," I figured it was some recent, more academic version of something like David Bianco's Pyramid of Pain that I hadn't come across yet. However, it turned out to be a mathematical theory developed in the early 1950s for RADAR applications.

The parallels to modern threat detection were immediately apparent—both domains wrestle with separating meaningful signals (attacks/threats) from noise (benign activity) while managing the inevitable trade-offs between false positives and false negatives. Intrigued, I looked into the original research on the topic and came across a foundational 1953 paper. The connections between this decades-old work and current detection engineering practices were remarkable.

In this post, I'll highlight key insights from this groundbreaking paper on signal detectability and demonstrate how this 70-year-old framework provides a surprisingly robust foundation for modern security operations.


Reference: Peterson, W. W., & Birdsall, T. G. (1953). "The Theory of Signal Detectability." Technical Report No. 13, Electronic Defense Group, University of Michigan. Available at: https://deepblue.lib.umich.edu/handle/2027.42/7068


Signal Detection Theory: Making Sense of Noise

Peterson and Birdsall's 1953 paper "The Theory of Signal Detectability" is far more than a simple detection method—it's a mathematical framework that helps frame how we think about signal detection across multiple domains. The paper provides a statistical approach to a fundamental challenge: how can we reliably distinguish meaningful signals from background noise?

The authors prove that optimal signal detection is fundamentally a statistical problem, best solved through a concept called the likelihood ratio. This isn't just a simple confidence metric, but a mathematical tool that compares the probability of an event occurring under different scenarios—specifically, how likely an observation is to happen during a genuine threat versus normal activity.

The paper's significance lies in its comprehensive approach. Peterson and Birdsall don't just propose a theory; they mathematically demonstrate how different detection strategies are fundamentally equivalent. They develop a framework that allows for setting detection thresholds in two ways: by restricting false alarm probability or by mathematically optimizing the costs of different types of errors. This means organizations can choose a detection strategy that best fits their specific risk tolerance and operational constraints.
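To make the two strategies concrete, here is a minimal Python sketch of both threshold-setting approaches. The prior, costs, and sample values are all hypothetical; the Neyman-Pearson-style version simply takes an empirical quantile of likelihood ratios observed on noise-only (benign) data, and the Bayes version assumes correct decisions cost nothing.

```python
def bayes_threshold(p_signal: float, cost_false_alarm: float, cost_miss: float) -> float:
    """Cost-optimal rule of thumb: alert when the likelihood ratio exceeds this
    value (assumes correct decisions carry zero cost)."""
    p_noise = 1.0 - p_signal
    return (p_noise * cost_false_alarm) / (p_signal * cost_miss)

def neyman_pearson_threshold(noise_lr_samples: list, max_false_alarm_rate: float) -> float:
    """Empirical false-alarm-constrained threshold: the likelihood-ratio value
    that at most max_false_alarm_rate of noise-only observations exceed."""
    ordered = sorted(noise_lr_samples)
    cutoff_index = int((1.0 - max_false_alarm_rate) * len(ordered))
    return ordered[min(cutoff_index, len(ordered) - 1)]

# Rare threats (0.1% prior) where a miss is 100x as costly as a false alarm:
print(bayes_threshold(p_signal=0.001, cost_false_alarm=1.0, cost_miss=100.0))
```

Either function yields a single number to compare each event's likelihood ratio against, which is exactly the kind of operating-point choice the paper shows to be equivalent across strategies.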

The research includes case studies exploring signal detection under various conditions, such as signals with known characteristics, signals with unknown phases, and different types of noise—with a particular focus on Gaussian noise. By providing mathematical proofs and practical applications, the paper transforms signal detection from an art into a scientific discipline.

Think of the likelihood ratio as a sophisticated decision-making tool. For every alert or log entry, it systematically asks: "How much more likely is this pattern if an attack is occurring versus if it's just normal activity?" It's like having an analyst who can mathematically quantify the suspiciousness of an event, moving beyond gut feelings to data-driven insights.

In practical terms, imagine a security team reviewing a series of failed login attempts. A low likelihood ratio might suggest these are simply users struggling with passwords. A high likelihood ratio, however, would indicate these failures look more like a coordinated attack pattern, such as password spraying, where an attacker systematically tries passwords across multiple accounts.
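A toy model makes this concrete. Assuming, purely for illustration, that hourly failed-login counts follow a Poisson distribution, with a benign rate of roughly 2 failures per hour and a password-spraying rate of roughly 20, the likelihood ratio for an observed count falls out directly:

```python
import math

def poisson_pmf(k: int, rate: float) -> float:
    """Probability of observing k events under a Poisson model with the given rate."""
    return math.exp(-rate) * rate ** k / math.factorial(k)

def likelihood_ratio(failed_logins: int, benign_rate: float, attack_rate: float) -> float:
    """How much more probable is this count under the attack model than the benign one?"""
    return (poisson_pmf(failed_logins, attack_rate)
            / poisson_pmf(failed_logins, benign_rate))

# A handful of failures looks like typo-prone users; dozens look like spraying.
print(likelihood_ratio(3, benign_rate=2.0, attack_rate=20.0))   # well below 1
print(likelihood_ratio(25, benign_rate=2.0, attack_rate=20.0))  # far above 1
```

The specific rates are invented, but the shape of the calculation is the point: the same observation is scored under both hypotheses, and only the ratio matters.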

The likelihood ratio offers a powerful approach to reducing both unnecessary alarm and dangerous complacency, making it as relevant to modern cybersecurity as it was to military radar detection in the 1950s.

Potential Ways to Operationalize this Research for Modern SecOps

It's one thing to talk about how this paper could be applied to cybersecurity through vague anecdotes; it's another to actually take the paper and operationalize it. While I haven't yet applied this methodology in a production environment, a few concrete applications occurred to me as I read through the paper.

Crafting a Probabilistic Risk-Based Alerting Framework

The likelihood ratio provides a mathematical foundation for building a risk-based alerting algorithm. By quantifying the probability of an event being a genuine threat versus routine activity, security teams can dynamically adjust detection sensitivity based on contextual risk.

The core of this approach involves calculating the likelihood ratio for each potential security event, then establishing adaptive thresholds that balance false positive and false negative rates. This allows for a more nuanced detection strategy that can evolve with changing organizational risk profiles.

Practical implementation would involve creating a scoring mechanism that translates the likelihood ratio into actionable alert priorities, potentially incorporating additional contextual metadata to refine the detection probability.
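One way such a scoring mechanism could look, as a rough sketch: work in log-likelihood-ratio space and treat contextual factors as multiplicative boosts. The context factors, their weights, and the tier thresholds below are all illustrative, not from the paper.

```python
import math

# Hypothetical context multipliers; illustrative values only.
CONTEXT_WEIGHTS = {"privileged_account": 4.0, "external_source": 2.0, "off_hours": 1.5}

def risk_score(likelihood_ratio: float, context: set) -> float:
    """Log-likelihood ratio plus the log of each applicable context multiplier."""
    score = math.log(likelihood_ratio)
    for factor in context:
        score += math.log(CONTEXT_WEIGHTS.get(factor, 1.0))
    return score

def priority(score: float) -> str:
    """Map a risk score onto alert tiers (thresholds are illustrative)."""
    if score >= 5.0:
        return "critical"
    if score >= 2.0:
        return "high"
    if score >= 0.0:
        return "medium"
    return "low"
```

Working in log space keeps context adjustments additive, so the same event can be re-scored cheaply as metadata arrives.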

Optimizing Detection Strategies and Managing False Positive Rates

Peterson and Birdsall's research provides a framework for quantifying detection error costs. By assigning numerical values to correct detections, missed signals, false alarms, and correct rejections, organizations can mathematically optimize their detection strategy's expected value.

The key is using the likelihood ratio to systematically calculate the probability of different error types, then adjusting detection thresholds to minimize overall risk. This approach shifts detection from a subjective process to a data-driven optimization problem.

Organizations can use this methodology to explicitly calculate their risk tolerance, creating detection strategies that are precisely tuned to their specific operational context and risk appetite.
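Here is a small sketch of that expected-value calculation. The operating points and payoffs are hypothetical numbers chosen for illustration; in practice they would come from measured detector performance and an organization's own cost estimates.

```python
def expected_value(p_detect: float, p_false_alarm: float, p_signal: float, payoffs: dict) -> float:
    """Expected value of a detector operating point, weighting each of the four
    outcomes (hit, miss, false alarm, correct rejection) by probability and payoff."""
    p_noise = 1.0 - p_signal
    return (p_signal * (p_detect * payoffs["hit"] + (1.0 - p_detect) * payoffs["miss"])
            + p_noise * (p_false_alarm * payoffs["false_alarm"]
                         + (1.0 - p_false_alarm) * payoffs["correct_rejection"]))

# Hypothetical operating points (detection prob, false alarm prob) along an ROC curve,
# with illustrative payoffs where a missed intrusion is far costlier than an alert.
operating_points = [(0.99, 0.30), (0.90, 0.10), (0.70, 0.01)]
payoffs = {"hit": 10.0, "miss": -100.0, "false_alarm": -1.0, "correct_rejection": 0.0}
best = max(operating_points, key=lambda pt: expected_value(pt[0], pt[1], 0.01, payoffs))
```

Changing the payoffs shifts which operating point wins, which is exactly how risk tolerance becomes an explicit input rather than an unstated assumption.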

Incident Response & SOC Queue Prioritization

By calculating the probability of an event being a genuine threat, security teams can create a probabilistic ranking system for incident investigation.

Each potential security event receives a quantitative "suspiciousness" score based on its likelihood ratio, allowing for systematic, data-driven triage. This approach helps teams focus limited investigative resources on the most probable threats.

The method is particularly powerful because it can adapt to changing threats and organizational contexts, providing a flexible and statistical approach to incident prioritization.
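As a minimal sketch of what likelihood-ratio-driven triage could look like, the following uses a priority queue keyed on the ratio; the alert names and values are invented for illustration.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Alert:
    neg_lr: float                 # negated so the min-heap pops the highest ratio first
    name: str = field(compare=False)

class TriageQueue:
    """Priority queue that surfaces the most suspicious events first."""
    def __init__(self):
        self._heap = []

    def push(self, name: str, likelihood_ratio: float) -> None:
        heapq.heappush(self._heap, Alert(-likelihood_ratio, name))

    def pop(self) -> str:
        return heapq.heappop(self._heap).name

queue = TriageQueue()
queue.push("failed-login spike", 120.0)
queue.push("unusual DNS query", 3.5)
queue.push("service account from new ASN", 45.0)
print(queue.pop())  # the failed-login spike comes out first
```

Because the ordering depends only on the score, recomputing ratios as threat intelligence changes reorders the queue without any change to the triage logic itself.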

Strategic Security Metrics

Signal detection theory offers an approach to developing meaningful security metrics. Instead of relying on simplistic counts, organizations can develop metrics that capture the nuanced probabilistic nature of security events.

By tracking metrics like false alarm rates, detection probabilities, and likelihood ratio distributions, security leaders can gain insight into their detection capabilities. These metrics provide a mathematical language for discussing security effectiveness.

The framework allows for creating comparable, quantitative measures of detection performance across different systems, technologies, and threat scenarios, enabling more informed strategic decision-making.
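The basic metrics fall out of a simple tally over labeled outcomes. A sketch, with a made-up week of triage results:

```python
def detection_metrics(outcomes):
    """Summarize detector performance from (alerted, was_threat) outcome pairs."""
    hits = sum(1 for alerted, threat in outcomes if alerted and threat)
    misses = sum(1 for alerted, threat in outcomes if not alerted and threat)
    false_alarms = sum(1 for alerted, threat in outcomes if alerted and not threat)
    rejections = sum(1 for alerted, threat in outcomes if not alerted and not threat)
    return {
        "detection_probability": hits / (hits + misses) if hits + misses else 0.0,
        "false_alarm_rate": false_alarms / (false_alarms + rejections) if false_alarms + rejections else 0.0,
    }

# Illustrative outcomes: (did we alert?, was it actually a threat?)
week = [(True, True), (True, True), (False, True), (True, False), (False, False)]
print(detection_metrics(week))
```

These two rates are the coordinates of a point on an ROC curve, so the same tally directly supports comparing detectors across systems and threat scenarios.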

A 70-Year-Old Blueprint for Modern Detection Engineering

What makes Peterson and Birdsall's 1953 paper so remarkable is its timeless insight into the fundamental challenge of distinguishing signal from noise. By providing a rigorous statistical framework, they transformed signal detection from an "art" into a scientific discipline.

The likelihood ratio becomes more than just a mathematical concept—it's a lens through which we can understand the probabilistic nature of threat detection. It reminds us that security is not about absolute certainty, but about intelligent risk management.

As cybersecurity continues to evolve, the core principles outlined in this decades-old research remain surprisingly relevant. By embracing mathematical thinking and probabilistic modeling, we can build detection systems that are not just reactive, but intelligently adaptive.