WORKSHOP: Supercharging Ghidra: Build Your Own Private Local LLM RE Stack with GhidraMCP, Ollama, and OpenWebUI

John McIntosh

ABSTRACT

Reverse engineering workflows are evolving and local LLMs are reshaping how we analyze binaries, automate tooling, and preserve privacy. In this hands-on workshop, participants will learn how to build a modular, private RE stack using GhidraMCP, pyghidra-mcp, Ollama, and OpenWebUI. We’ll walk through setting up local LLMs, integrating them with Ghidra, and customizing workflows to suit your threat model and tooling preferences. Whether you're reverse engineering malware, firmware, or proprietary binaries, this session will equip you with a reproducible, offline-first workflow that enhances analysis while keeping sensitive data local. Attendees will leave with a working local LLM RE stack and the confidence to extend their setup with custom prompts and models.

This 90-minute workshop is designed for practitioners who want to modernize their reverse engineering workflows using local LLMs. We’ll cover:

Part 1: Foundations

  • Why local LLMs matter for RE: privacy, reproducibility, and control
  • Overview of GhidraMCP, Ollama, and OpenWebUI
  • Hardware and OS considerations

Part 2: Stack Setup

  • Installing Ollama and running models locally
  • Configuring OpenWebUI for prompt management
  • Integrating GhidraMCP with Ghidra and local LLMs
  • Testing your MCP server with MCPO
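The setup steps above boil down to a handful of commands. A minimal sketch (the model tag and bridge-script path are illustrative; use the model your hardware supports and the bridge script shipped with your GhidraMCP release):

```shell
# Install Ollama (Linux/macOS convenience script from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull an 8B model suitable for mid-tier hardware (tag is an example)
ollama pull qwen3:8b

# Sanity-check that the model responds locally
ollama run qwen3:8b "Summarize what a NOP sled is in one sentence."

# Expose an MCP server to OpenWebUI as an OpenAPI endpoint via mcpo.
# The bridge script name below is an assumption -- use the one from
# your GhidraMCP installation.
uvx mcpo --port 8080 -- python bridge_mcp_ghidra.py
```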

Part 3: Workflow Deep Dive

  • Real-world RE tasks enhanced by LLMs (decompilation, annotation, automation)
  • Prompt engineering for binary analysis
  • Extending the stack with custom models and plugins
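To give a flavor of the prompt engineering covered in this part, here is a minimal sketch of assembling an annotation prompt for decompiled output (the decompiled snippet and wording are placeholders for illustration, not workshop materials):

```shell
# Hypothetical Ghidra decompiler output (placeholder for illustration)
DECOMP='undefined4 FUN_00401000(char *param_1)
{
  int iVar1;
  iVar1 = strcmp(param_1,"admin");
  return iVar1 == 0;
}'

# Constrain the model to terse, RE-specific output
SYSTEM='You are a reverse engineering assistant. Given decompiled C from
Ghidra, propose a descriptive function name and a one-line behavior summary.'

# Assemble the prompt; with Ollama running you could feed it in with
#   ollama run qwen3:8b "$(cat prompt.txt)"
printf '%s\n\nDecompiled function:\n%s\n' "$SYSTEM" "$DECOMP" > prompt.txt
cat prompt.txt
```

Keeping the system instruction separate from the decompiled payload makes it easy to swap either half when iterating on prompts.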

Exercise 1: GhidraMCP GUI – Interactive Binary Analysis

Use GhidraMCP with a local LLM to rename functions, summarize behavior, and answer questions directly in the Ghidra GUI.

Exercise 2: pyghidra-mcp CLI – Multi-Binary Project Analysis

Analyze an entire Ghidra project via command line using pyghidra-mcp. Run cross-binary queries to identify reused code, suspicious patterns, or shared functionality.
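An invocation for this exercise might look like the following sketch (the glob and the mcpo bridging are assumptions; consult the pyghidra-mcp README for the exact interface):

```shell
# Start pyghidra-mcp over a set of binaries in a project directory
# (arguments are a sketch -- check the pyghidra-mcp README)
pyghidra-mcp ./project/bin/*

# Or bridge it into OpenWebUI through mcpo as an OpenAPI tool server
uvx mcpo --port 8001 -- pyghidra-mcp ./project/bin/*
```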

Part 4: Wrap-Up & Q&A

  • Troubleshooting tips
  • Sharing modular configs and prompt libraries
  • Open discussion on future directions

System Requirements & Alternatives

To run the local LLM stack effectively, you’ll need a machine capable of running at least an 8B model (such as Qwen3 or Llama). This typically means a system with a modern GPU and 16GB+ RAM for smooth performance. Quantized models are ideal for laptops or mid-tier GPUs (e.g., RTX 3060 or Apple M-series).

If your hardware doesn’t meet these specs, you can still follow along using the free tiers of most frontier models. The workshop will provide setup instructions for both local and remote options, ensuring everyone can participate fully regardless of hardware.

We will preferably use Docker to run OpenWebUI and Ollama, but both can be installed without Docker if preferred.
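For the Docker route, the commonly documented one-liners look like this (container names, volumes, and ports can be adjusted to taste):

```shell
# Run Ollama in a container, persisting pulled models in a named volume
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Run OpenWebUI and point it at the Ollama container on the host
docker run -d --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --add-host=host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:main
```

OpenWebUI is then reachable at http://localhost:3000 and Ollama's API at http://localhost:11434.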

John McIntosh, @clearbluejar, is a security researcher at Clearseclabs. His area of expertise lies within reverse engineering and offensive security, where he demonstrates proficiency in binary analysis, patch diffing, and vulnerability discovery. Notably, John has developed multiple open-source security tools for vulnerability research, all of which are accessible on his GitHub page. Additionally, his website, https://clearbluejar.github.io/, features detailed write-ups on reversing recent CVEs and building RE tooling with Ghidra. With over a decade of experience in offensive security, John is a distinguished presenter and educator at prominent security conferences internationally. He maintains a fervent commitment to sharing his latest research, acquiring fresh perspectives on binary analysis, and engaging in collaborative efforts with fellow security enthusiasts.

MORE FROM RINGZER0 COUNTERMEASURE25
