Capabilities
I build and refine capabilities that help organizations evaluate whether their cyber defenses can withstand realistic adversary behavior.
The emphasis is not on theory alone but on repeatable, measurable ways to test defensive effectiveness.
Adversary emulation
I translate real-world threat behavior into scenarios that mirror actual adversary tactics, techniques, and procedures (TTPs).
This supports:
- threat-informed testing
- mission-relevant exercises
- realistic validation scenarios
- repeatable adversary tradecraft replication
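For illustration, a repeatable emulation scenario can be encoded as structured data that maps each adversary action to a MITRE ATT&CK technique and the telemetry it should produce. This is a minimal sketch, not a description of any specific tooling; the scenario contents are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class EmulationStep:
    """One adversary action mapped to a MITRE ATT&CK technique."""
    technique_id: str  # e.g. "T1059.001" (PowerShell)
    description: str
    expected_telemetry: list = field(default_factory=list)

@dataclass
class Scenario:
    """A repeatable, threat-informed emulation scenario."""
    name: str
    steps: list

    def technique_coverage(self):
        """Return the set of ATT&CK techniques this scenario exercises."""
        return {step.technique_id for step in self.steps}

# Hypothetical scenario: initial access followed by discovery
scenario = Scenario(
    name="credential-phish-to-discovery",
    steps=[
        EmulationStep("T1566.001", "Spearphishing attachment",
                      expected_telemetry=["email gateway log", "process creation"]),
        EmulationStep("T1059.001", "PowerShell execution",
                      expected_telemetry=["script block logging"]),
        EmulationStep("T1087", "Account discovery",
                      expected_telemetry=["process creation", "command line"]),
    ],
)
print(sorted(scenario.technique_coverage()))
```

Encoding scenarios this way is what makes replication repeatable: the same steps, expected telemetry, and technique mapping can be replayed and compared across runs.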
Detection validation
I design validation approaches that measure whether detections, workflows, and defensive controls perform as expected against realistic attack behavior.
This includes:
- coverage assessment
- gap identification
- telemetry correlation
- analytic effectiveness measurement
- threat-informed defensive tuning
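Coverage assessment and gap identification reduce to a simple comparison: which exercised techniques produced at least one detection, and which did not. The sketch below assumes hypothetical technique IDs and run data; it is an illustration of the measurement idea, not a real validation harness.

```python
def coverage_report(exercised, detected):
    """Compare techniques exercised in a test run against techniques
    that produced at least one detection, and report the gaps."""
    exercised, detected = set(exercised), set(detected)
    covered = exercised & detected
    gaps = exercised - detected
    coverage = len(covered) / len(exercised) if exercised else 0.0
    return {"coverage": coverage, "covered": sorted(covered), "gaps": sorted(gaps)}

# Hypothetical run: four techniques exercised, three produced detections
report = coverage_report(
    exercised=["T1059.001", "T1087", "T1003.001", "T1566.001"],
    detected=["T1059.001", "T1087", "T1566.001"],
)
print(report["coverage"])  # 0.75
print(report["gaps"])      # ['T1003.001']
```

The gap list, not the coverage percentage, is usually the actionable output: each uncovered technique is a candidate for threat-informed defensive tuning.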
Cyber experimentation
I build environments that support controlled, repeatable experimentation across infrastructure, telemetry, and adversary activity.
This enables teams to:
- test assumptions
- compare defensive approaches
- measure outcomes
- improve readiness through evidence
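Comparing defensive approaches by evidence can be as simple as replaying the same scenario under each configuration and ranking configurations by measured detection rate. The configurations and outcomes below are invented for illustration.

```python
def compare_configs(results):
    """Rank defensive configurations by mean detection rate
    across repeated runs of the same scenario."""
    rates = {
        config: sum(runs) / len(runs)
        for config, runs in results.items()
    }
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical outcomes: 1 = scenario detected, 0 = missed,
# five repeated runs per configuration
results = {
    "baseline-edr":    [1, 0, 1, 1, 0],
    "tuned-analytics": [1, 1, 1, 0, 1],
}
for config, rate in compare_configs(results):
    print(f"{config}: {rate:.0%}")
# tuned-analytics: 80%
# baseline-edr: 60%
```

Repeated runs matter here: a single run cannot distinguish a genuinely better configuration from a lucky one, which is why the environment must make replays cheap and controlled.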
Defensive architecture assessment
I evaluate how defensive architecture performs under realistic pressure from adversary tradecraft rather than relying only on policy, configuration, or deployment status.
The goal is to determine:
- what is visible
- what is detectable
- what is actionable
- what fails under realistic conditions
Red-blue integration
I operate at the seam between offensive realism and defensive utility.
That means designing engagements where adversary activity serves measurable learning and defensive improvement, not spectacle.
AI-ready analytic workflows
I am particularly interested in environments and data flows that support explainable, evidence-based use of machine learning in cyber defense.
That includes work related to:
- telemetry structuring
- detection evaluation
- analytic prioritization
- measurable decision support
OPFORGE
OPFORGE is my flagship platform for adversary emulation, detection validation, and cyber experimentation.
It is designed to support realistic testing, technical documentation, and a more rigorous understanding of defensive effectiveness.
If your mission requires moving from security assumptions to measurable defensive validation, this is the kind of work I focus on.
Working together
Organizations interested in adversary-informed defensive validation, cyber experimentation environments, or detection evaluation can reach out through the contact page.