Security
- When AI Stops Being a Tool and Becomes an Attack Surface
AI systems are starting to behave less like passive tools and more like autonomous attack surfaces. A technical look at prompt injection, a concrete end-to-end attack chain, an assessment of which architectures are actually at risk, and practical defensive actions for engineering teams.
- Fackel: An Autonomous Pentest Framework Powered by ReAct Agents
Fackel is a multi-agent pentest framework in which LLMs decide strategy rather than following hardcoded pipelines. A walkthrough of the architecture, the design decisions, and the lessons learned.
- Device Code Phishing + Vishing: How Attackers Compromise Microsoft Entra Accounts Using Legit Login Pages
A practical deep dive into device code phishing combined with vishing against Microsoft Entra accounts: how the OAuth device code flow gets abused, what to monitor for, and how to mitigate.
- The State of the Art in AI Agents (2026): What "Modern" Actually Means
A practical overview of modern AI agent systems: tool use, retrieval, memory, verification, multi-agent patterns, evaluation, and security.
- Security Implications of Probabilistic Reasoning in Generative AI
A rigorous analysis of how probabilistic reasoning in generative models shapes security risk, failure modes, and robustness.
- The Cost of Abstraction: When Layers Hide Security and Reliability Risks
Argues that abstraction layers can obscure failure modes, shift risk across system boundaries, and weaken assurance unless their underlying assumptions are made explicit.
- Why Traditional Threat Modeling Breaks Down in Generative AI Systems
Argues that probabilistic behavior, distributional risk, and system composability undermine core assumptions of classical threat modeling when applied to generative AI.