
Building Agentic RE: Automating Reverse Engineering & Vulnerability Research with AI // John McIntosh
Virtual | March 27-31 | 32 Hours
ABSTRACT
Reverse engineering is evolving beyond static tools and manual workflows. This 32-hour hands‑on course introduces a new paradigm: Agentic Workflows. By combining cutting‑edge large language models (LLMs), the Model Context Protocol (MCP), and reverse engineering tools like Ghidra, you’ll learn how to design, train, and orchestrate AI‑powered systems that automate and accelerate complex RE and VR tasks.
The course blends foundational concepts with current practices in AI server configuration, programming, and workflow design, plus custom MCP development and advanced orchestration, culminating in LLM‑powered agents that act as autonomous collaborators in reverse engineering and vulnerability research.
Through a systematic progression, you’ll move from fundamentals to advanced orchestration:
- Building local AI stacks that ensure privacy, reproducibility, and control.
- Leveraging LLMs to explain, annotate, and reason about binaries.
- Developing custom MCP servers to expose reverse engineering and vulnerability research tools.
- Integrating static and dynamic analysis pipelines with AI‑driven insights.
- Validating findings through automated cross‑checks and building reliable, trustworthy workflows.
- Delivering an integrated agentic workflow that assists in both reverse engineering and vulnerability research.
By the end of the course, you will have built an integrated agentic AI workflow that assists in your reverse engineering and vulnerability research tasks—capable of analyzing binaries, surfacing potential vulnerabilities, validating, and triaging results.
Why This Matters
Modern binaries increasingly resist traditional tooling. By combining human expertise with agentic AI, you can:
- Shorten analysis cycles.
- Surface subtle behavioral patterns.
- Scale research without sacrificing depth or accuracy.
- Automate repetitive triage while keeping humans in the loop.
This course equips you to move beyond brittle prompts into orchestration, where AI becomes a programmable, composable part of your workflow.
INTENDED AUDIENCE
- Reverse Engineers who want to augment their workflows with AI‑driven automation.
- Vulnerability Researchers looking to accelerate bug discovery and triage with agentic frameworks.
- Security Professionals interested in building private, reproducible AI stacks for sensitive analysis.
- Developers & Tool Builders exploring how to extend MCP servers and integrate AI into RE pipelines.
- Applied AI Practitioners who want to move beyond prompt‑hacking into orchestration, reproducibility, and workflow design.
If you’ve ever wished your RE tools could act as autonomous collaborators—not just assistants—this course is for you.
KEY LEARNING OBJECTIVES
Foundations of Agentic RE: Understand the intersection of generative AI, MCP, and reverse engineering.
Private Local LLM Stack: Build and configure your own stack with GhidraMCP, Ollama, and OpenWebUI, with a focus on hardware and performance trade-offs.
Custom MCP Development: Extend MCP servers to expose binary metadata, integrate semantic search, and connect with RE tools.
LLM Training for RE: Create datasets, fine‑tune models with QLoRA, and train models to detect vulnerabilities or identify key functions.
Agentic Workflow Design: Learn DSPy and LangGraph orchestration patterns to build resilient, compositional workflows.
Reliable AI & Validation: Implement automated cross-checks and guardrails to reduce hallucinations and validate AI-generated findings.
Custom RE HUDs: Build interactive dashboards with Chainlit/Streamlit to guide multi‑platform RE analysis.
Capstone Project: Deliver a single HUD with two workflow paths, one for RE and one for VR (the latter covering discovery, triage, and validation steps).
COURSE OUTLINE
Part 1 – Foundations of Agentic RE
AI here is a computational and systems layer.
You’ll learn the fundamentals of how LLMs operate — tokenization, embeddings, quantization — and what those mean for reverse engineering tasks. We’ll cover considerations for system design, how to enhance LLMs with MCP‑exposed tools, and the client–server architecture that makes interfacing with models possible.
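To make the first of those concrete, here is a tiny, illustrative example of how a decompiled-code fragment breaks into tokens, which is what drives context-window budgeting when you feed whole functions to a model. The GPT-2 tokenizer is used purely for demonstration, not as a course requirement.

```python
# Illustrative only: count how many tokens a small decompiled-code fragment
# consumes. The GPT-2 tokenizer is an arbitrary stand-in for whatever model
# you end up running locally.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
snippet = "if (strcpy(local_buf, param_1) != 0) { FUN_00401560(local_buf); }"
tokens = tok.tokenize(snippet)
print(len(tokens), tokens[:10])
```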
- The Agentic Era: how LLMs transform reverse engineering and vulnerability research.
- LLM basics: tokens, embeddings, quantization (overview level).
- Model Selection & Hardware: Trade-offs between model size (7B, 13B, 70B), performance, quantization (QLoRA), and realistic hardware requirements (VRAM).
- Model Context Protocol (MCP): exposing RE tools to LLMs.
- Why local LLMs matter: privacy, reproducibility, and control.
- Local LLM Stack Setup — Install Ollama, OpenWebUI, LM‑Studio, and connect to GhidraMCP.
- AI-Assisted Reverse Engineering — Use a local LLM to explain code, identify constants, or annotate functions, and dive deep into binary analysis.
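As a taste of that lab, the sketch below asks a local Ollama model to explain a decompiled function over Ollama's HTTP API. The model name and decompiled snippet are placeholders; in class you'll drive this through GhidraMCP and OpenWebUI rather than raw HTTP calls.

```python
# Illustrative sketch only: ask a local Ollama model to explain a decompiled
# function via Ollama's HTTP API. Model name and snippet are placeholders.
import requests

DECOMPILED = """
undefined4 FUN_00401560(char *param_1) {
    char local_buf[32];
    strcpy(local_buf, param_1);
    return 0;
}
"""

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",  # assumed model; use whatever you have pulled locally
        "prompt": "Explain what this decompiled function does and flag any obvious "
                  "weaknesses:\n" + DECOMPILED,
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```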
Part 2 – Extending the Stack: Custom MCP Servers
AI here is an environment you control.
With the foundational stack running, you'll move from being a user to a builder. This section focuses on extending your private AI ecosystem by creating custom Model Context Protocol (MCP) servers that expose powerful static analysis and reverse engineering tools to your LLMs.
- MCP server basics (Python + FastAPI).
- Designing tool-specific MCPs for structured input and output.
- Static Analysis MCPs — Expose Semgrep (pattern‑based) and CodeQL (query‑driven) through MCP, and compare their outputs on a sample codebase (a minimal Semgrep MCP sketch follows this list).
- Custom Ghidra MCP — Use headless scripting to analyze binaries and expose key information like function listings and cross-references.
- Multi‑Binary CLI Analysis — Use pyghidra‑mcp to detect reused code, suspicious patterns, and API call flows across multiple binaries. Extend the tool to accomplish custom RE tasks.
- Building advanced custom MCP servers.
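To preview the pattern, here is a minimal sketch of a custom MCP server that shells out to Semgrep, written against the official MCP Python SDK's FastMCP helper rather than raw FastAPI. The tool name and the `--config auto` choice are illustrative; the servers you build in class expose richer, structured results.

```python
# A minimal sketch, assuming the official MCP Python SDK ("mcp" package) and a
# local semgrep install; illustrative only.
import json
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("semgrep-mcp-demo")

@mcp.tool()
def semgrep_scan(path: str, config: str = "auto") -> str:
    """Run semgrep over `path` and return the findings as JSON text."""
    result = subprocess.run(
        ["semgrep", "--config", config, "--json", path],
        capture_output=True, text=True, check=False,
    )
    findings = json.loads(result.stdout or "{}").get("results", [])
    return json.dumps(findings, indent=2)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; point your MCP client at this script
```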
Part 3 – Training and Adapting LLMs for RE/VR Tasks
AI here is a programmable collaborator.
LLMs alone can’t introspect binaries the way RE demands, but MCP lets you expose structured tools and data. You’ll learn to build advanced MCP servers and then train your models to better understand the RE/VR domain.
- Programming with LLMs: context engineering, handling non‑determinism, and designing well‑defined tools.
- Securing Agentic Workflows: Introduction to prompt injection, data sanitization, and securing MCP API endpoints.
- Prompt optimization with MIPROv2 and GEPA: improve prompts for smaller 8B models by building evaluation test sets that automatically discover the optimal prompt.
- Training LLMs for RE tasks:
- Data Sourcing & Curation: Strategies for creating, sourcing, and labeling high-quality datasets from open-source code, CVE reports, and internal projects.
- Techniques like QLoRA for efficient fine‑tuning.
- Training models to improve detection of vulnerability classes.
- Fine‑Tuning Your First Model — Train a small model to detect a specific vulnerability class (e.g., UAF or overflow).
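The sketch below shows the QLoRA setup step you'll build on: a 4-bit quantized base model with small trainable LoRA adapters. The base model ID and target modules are assumptions; swap in your own base model and labeled RE/VR dataset.

```python
# A sketch of the QLoRA setup (4-bit base model + LoRA adapters). Model name and
# target modules are assumptions. Requires transformers, peft, and bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_ID = "meta-llama/Llama-3.1-8B"   # assumed base model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                 # quantize the frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small LoRA adapters are trained

# From here, fine-tune with a standard Trainer/SFTTrainer on your labeled
# vulnerability dataset (e.g., decompiled functions tagged UAF / overflow / benign).
```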
Part 4 – Advanced Workflows and Orchestration
AI here is a workflow partner.
Beyond prompts, you’ll explore how to build advanced workflows that combine traditional RE tools with MCP‑exposed services or direct execution. You’ll design RE HUDs that visualize and coordinate these workflows, inserting AI agents where they add value and integrating validation steps to improve reliability. Ultimately, you’ll learn to build a hybrid architecture that combines deterministic RE tools with reasoning agents, grounding generative AI in the outputs of real analysis tools to prevent hallucinations and ensure trustworthy results.
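Here is a minimal sketch of that hybrid pattern, expressed with LangGraph (part of the technology stack below): deterministic tool output flows into a reasoning step and then a validation gate. The state fields and node bodies are placeholders for the MCP-backed tools you'll wire in during class.

```python
# Minimal sketch of a tools -> reason -> validate pipeline in LangGraph.
# State fields and node logic are illustrative placeholders only.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class REState(TypedDict):
    binary_path: str
    tool_output: str      # grounded facts from Ghidra / Semgrep / CodeQL
    llm_findings: str     # agent reasoning over those facts
    validated: bool

def run_static_tools(state: REState) -> REState:
    # placeholder: call your Ghidra / Semgrep MCP servers here
    return {**state, "tool_output": f"functions + findings for {state['binary_path']}"}

def reason_over_output(state: REState) -> REState:
    # placeholder: prompt a local LLM, grounded in tool_output only
    return {**state, "llm_findings": "candidate issues ranked for triage"}

def validate(state: REState) -> REState:
    # placeholder: cross-check findings against a second tool or rule set
    return {**state, "validated": True}

graph = StateGraph(REState)
graph.add_node("tools", run_static_tools)
graph.add_node("reason", reason_over_output)
graph.add_node("validate", validate)
graph.set_entry_point("tools")
graph.add_edge("tools", "reason")
graph.add_edge("reason", "validate")
graph.add_edge("validate", END)

app = graph.compile()
print(app.invoke({"binary_path": "target.bin", "tool_output": "",
                  "llm_findings": "", "validated": False}))
```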
- Programmatic workflows that integrate RE tools (via MCP or direct execution) with AI agents where useful.
- RE HUD Prototype — Create an interactive dashboard with Streamlit/Chainlit to visualize and guide workflows.
- Multi‑Platform Workflow — Implement logic for Windows, Android, and iOS, combining RE tools with agentic feedback.
- Static Analysis Integration — Incorporate Semgrep and CodeQL MCPs into a single workflow to compare results and support triage.
- Capstone Project: Integrated RE + VR Workflow
- Develop a workflow that analyzes binaries, surfaces vulnerabilities, and provides triage with contextual explanations.
- Integrate Ghidra MCP, Semantic Search MCP, and static analysis tools into one HUD.
- Apply agentic reasoning to prioritize findings, annotate binaries, and improve clarity.
- Deliverable: A Chainlit‑based HUD that unifies reverse engineering and vulnerability research into a single integrated workflow.
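To show the shape of that deliverable, here is a minimal Chainlit sketch that forwards analyst messages to a local Ollama model. The model name and endpoint are assumptions; the capstone version routes each message through the RE or VR workflow path and its MCP-backed tools before the LLM responds.

```python
# app.py -- minimal sketch of the Chainlit HUD pattern, assuming a local Ollama
# backend. Illustrative only; the capstone HUD adds MCP tools and validation.
import chainlit as cl
import httpx

OLLAMA = "http://localhost:11434/api/generate"
MODEL = "llama3.1:8b"  # assumed local model

@cl.on_message
async def main(message: cl.Message):
    # In the capstone, this is where the RE vs. VR workflow path is selected
    # and MCP-backed tools (Ghidra, Semgrep, CodeQL) run before the LLM call.
    async with httpx.AsyncClient(timeout=300) as client:
        r = await client.post(OLLAMA, json={
            "model": MODEL, "prompt": message.content, "stream": False,
        })
    await cl.Message(content=r.json().get("response", "(no response)")).send()

# Launch with:  chainlit run app.py
```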
Technology Stack
- AI: Local / Frontier LLMs, Ollama, OpenWebUI, LM‑Studio
- RE/VR: Ghidra, Semgrep, CodeQL, Tree‑sitter
- Development: Python (primary), MCP SDKs (TypeScript, Go, Rust, etc.)
- Workflow Orchestration: DSPy, LangGraph
- UI/Integration: Chainlit, Streamlit
Student Requirements
To get the most out of this training, participants should have:
- Intermediate reverse engineering experience (familiarity with Ghidra, IDA, or similar tools).
- Basic vulnerability research knowledge (understanding of common bug classes and analysis workflows).
- Comfort with scripting in Python (used for MCP servers, orchestration, and workflow glue).
- Familiarity with Linux or macOS command‑line environments for stack setup and automation.
No prior experience with LLMs or AI frameworks is required—fundamentals will be covered before diving into advanced workflows.
System AI Requirements and Alternatives
- A machine capable of running at least an 8B model (e.g., Qwen3 or Llama).
- Recommended for local LLMs: a modern GPU (RTX 3060 or Apple M‑series) and 16GB+ RAM for smooth performance. Ideally, the machine can run gpt-oss-20b at 20+ tokens/second with ~30K context.
- If hardware doesn’t meet these specs, students can still participate using free tiers of frontier models; setup instructions for both local and remote‑friendly options will be provided.
Software Requirements
- Python 3.11+
- Docker to run OpenWebUI and Ollama
- git and a Linux‑style command line environment with administrator privileges
- Free Google account for Google Colab exercises
Practical Takeaways
By the end of this course, participants will walk away with:
- A fully configured local RE+LLM stack (Ollama, OpenWebUI, LM‑Studio, GhidraMCP).
- An understanding of hardware trade-offs for running local LLMs effectively.
- Custom MCP servers for binary metadata, semantic search, and static analysis (Semgrep + CodeQL).
- Hands‑on experience fine‑tuning models for RE‑specific tasks.
- Reusable workflow templates for binary analysis, vulnerability discovery, and results validation.
- A Chainlit‑based RE HUD that integrates multiple MCPs and provides an interactive interface for analysis.
- An integrated capstone project:
- RE Path: A workflow that analyzes and explains binaries, leveraging Ghidra MCP and semantic search.
- VR Path: A workflow that discovers, triages, and validates potential vulnerabilities using Semgrep, CodeQL, and LLM-driven cross-checks.
- A showcase‑ready system that demonstrates how agentic AI can partner with humans in both reverse engineering and vulnerability research.
YOUR INSTRUCTOR: John McIntosh
John McIntosh (@clearbluejar) is a security researcher at Clearseclabs. His area of expertise lies within reverse engineering and offensive security, where he demonstrates proficiency in binary analysis, patch diffing, and vulnerability discovery. Notably, John has developed multiple open-source security tools for vulnerability research, all of which are accessible on his GitHub page. Additionally, his website, https://clearbluejar.github.io/, features detailed write-ups on reversing recent CVEs and building RE tooling with Ghidra. With over a decade of experience in offensive security, John is a distinguished presenter and educator at prominent security conferences internationally. He maintains a fervent commitment to sharing his latest research, acquiring fresh perspectives on binary analysis, and engaging in collaborative efforts with fellow security enthusiasts.
Cancellations are not permitted but attendee changes can be accommodated anytime prior to the start of the course.
Note: In the event of a class cancellation, Ringzer0 will endeavor to offer transfer to another training at no additional charge.