
Applied AI/LLM for Android APK Reversing and Analysis // Guerric Eloi, Nabih Benazzouz
Virtual | Oct 26-Nov 1 | 16 Hours
ABSTRACT
This hands-on training explores how AI and Large Language Models (LLMs) can augment the reverse engineering and security analysis of Android applications. Participants will learn to leverage both local and online LLMs to assist with decompiling, annotating, and analyzing APKs and native libraries. The training introduces Model Context Protocol (MCP) integrations to enhance static analysis tools like Jadx and Ghidra, and walks through real-world usage of LLMs for code refactoring, Frida script generation, fuzzing harness creation, and automated reporting. The course blends static and dynamic analysis with AI-assisted tooling, culminating in a hands-on project where participants reverse a complex APK using AI throughout the process.
KEY LEARNING OBJECTIVES
- Automate and accelerate Android reverse engineering and analysis using AI/LLMs.
- Integrate AI into tools like Jadx and Ghidra.
- Use AI to generate Frida scripts, fuzzing harnesses, and annotated decompiled code.
- Identify vulnerabilities and analyze APKs more efficiently.

WHO SHOULD ATTEND
- Android Reverse Engineers
- Android Vulnerability Researchers
- Pentesters focused on Android apps
COURSE DETAILS
Part 1 – AI-Augmented Static Analysis
Module 1 | Introduction & Setup
- Training overview
- Installing required tools: Jadx, Ghidra, Frida, Python LLM clients
- Introduction to the Model Context Protocol (MCP)
Module 2 | AI Overview for Reverse Engineers
- What LLMs can do for reverse engineering
- Prompt engineering basics
- Online vs local LLM models
- Tools: GPT-4, Claude, DeepSeek, StarCoder, Llama3
Module 3 | Jadx MCP + AI-enhanced Analysis
- Reverse engineering APKs with Jadx
- Using AI to annotate decompiled code (see the sketch below)
- Recovering class names, enums, constants
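To make this concrete, here is a minimal sketch of the kind of helper this module builds toward: a Python script that sends one Jadx-decompiled class to an LLM and asks for annotations and rename suggestions. The endpoint, model name, and file path are placeholders; any OpenAI-compatible API (an online service or a local server) driven through the openai Python package follows the same pattern.

# annotate_jadx.py - sketch: ask an LLM to annotate one Jadx-decompiled class.
# Endpoint, model name, and input path are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1",   # placeholder: local OpenAI-compatible server or online API
                api_key="not-needed-locally")

decompiled = open("out_jadx/sources/com/example/app/a.java").read()   # hypothetical Jadx output

prompt = (
    "You are reverse engineering an Android app. For this decompiled class, "
    "suggest meaningful names for the class, fields, and methods, summarize what it does, "
    "and list any hardcoded constants or endpoints:\n\n" + decompiled
)

resp = client.chat.completions.create(
    model="llama3",   # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)

Run over every class Jadx emits, this loop is the starting point for bulk annotation; the Jadx MCP integration covered in this module aims to remove the copy-paste step by letting the model query the decompiler directly.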
Module 4 | Ghidra MCP + AI Integration
- Reverse engineering native .so files
- Using AI inside Ghidra to annotate disassembly
- Generating and executing Ghidra scripts with LLM support
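As an illustration of the last bullet, here is a minimal sketch of a Ghidra script (Jython, run from the Script Manager or the headless analyzer) that applies LLM-suggested function names. The suggestions dictionary is a hypothetical stand-in for output produced by a model from the decompiled listing.

# rename_from_llm.py - sketch: apply LLM-suggested names inside Ghidra (Jython).
# suggested_names is a hypothetical mapping produced offline by a model.
from ghidra.program.model.symbol import SourceType

suggested_names = {"FUN_00101234": "decrypt_config"}   # placeholder LLM output

fm = currentProgram.getFunctionManager()
for func in fm.getFunctions(True):                      # iterate all functions, forward order
    new_name = suggested_names.get(func.getName())
    if new_name:
        func.setName(new_name, SourceType.USER_DEFINED)
        print("Renamed %s -> %s" % (func.getEntryPoint(), new_name))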
Module 5 | AI-Powered Decompilation Refactoring
- Refactor and clean decompiled Java/smali
- Use AI for control flow simplification, better naming
Module 6 | Practice Challenge
- Reverse a full APK using Jadx + AI
- Summarize functionality, permissions, and logic
- Deliver a short AI-assisted report
Part 2 – Dynamic Analysis + AI-Powered Automation
Module 7 | Frida Script Generation with AI
- Basics of Frida (Java + JNI hooking)
- Prompting LLMs to generate Frida scripts (see the sketch below)
- Auto-generate Java method hooks, native interceptors
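Here is a minimal sketch of the kind of output this module prompts for: a Python host script using the frida bindings that injects a Java method hook. The package, class, and method names are hypothetical; frida-server is assumed to be running on the device.

# hook_example.py - sketch of an LLM-generated Frida hook, driven from Python.
# Package/class/method names are hypothetical; requires the frida Python bindings
# and frida-server running on the target device.
import sys
import frida

JS = """
Java.perform(function () {
    // Hypothetical target: hook a crypto helper and log its input
    var Crypto = Java.use("com.example.app.CryptoHelper");
    Crypto.encrypt.overload('java.lang.String').implementation = function (plaintext) {
        send("encrypt() called with: " + plaintext);
        return this.encrypt(plaintext);
    };
});
"""

def on_message(message, data):
    print(message)

device = frida.get_usb_device()
session = device.attach("com.example.app")   # attach to the running app (hypothetical package)
script = session.create_script(JS)
script.on("message", on_message)
script.load()
sys.stdin.read()   # keep the hook alive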
Module 8 | AI-Driven Fuzzing Harness Generation
- Explain a target function to LLM
- Generate fuzz harnesses for Java/native (see the sketch below)
- Integrate with AFL++, libFuzzer
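To show the harness shape an LLM is asked to produce, here is a minimal Python sketch using Atheris, kept in Python for consistency with the other examples; the course targets AFL++ and libFuzzer harnesses for native code, which follow the same single entry-point pattern. target_module.parse is a hypothetical function under test.

# fuzz_parser.py - sketch of the harness structure an LLM is prompted to produce.
# target_module.parse is a hypothetical function under test.
import sys
import atheris

with atheris.instrument_imports():
    import target_module   # hypothetical module extracted/reimplemented from the APK

def TestOneInput(data):
    fdp = atheris.FuzzedDataProvider(data)
    try:
        target_module.parse(fdp.ConsumeUnicodeNoSurrogates(1024))
    except ValueError:
        pass   # expected parse errors are not crashes

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()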
Module 9 | Using Nuclei AI for Mobile Security
- Create AI-generated Nuclei templates
- Connect APK analysis to backend scanning (see the sketch below)
- Combine static APK info with Nuclei tests
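A minimal sketch of the static-to-backend glue described above: extract http(s) endpoints from Jadx-decompiled sources and hand them to the nuclei CLI. The source directory and template path are placeholders; the nuclei binary is assumed to be on PATH.

# apk_to_nuclei.py - sketch: pull http(s) endpoints out of Jadx-decompiled sources
# and feed them to nuclei for backend scanning. Paths are placeholders.
import re
import pathlib
import subprocess

SRC = pathlib.Path("out_jadx/sources")               # hypothetical Jadx output directory
url_re = re.compile(r"https?://[\w\.\-/:%\?=&]+")

urls = set()
for java_file in SRC.rglob("*.java"):
    urls.update(url_re.findall(java_file.read_text(errors="ignore")))

pathlib.Path("targets.txt").write_text("\n".join(sorted(urls)))

# -l: file with one target per line, -t: template directory (placeholder path)
subprocess.run(["nuclei", "-l", "targets.txt", "-t", "custom-templates/"], check=False)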
Module 10 | Automated Reporting with LLMs
- Auto-generate vulnerability reports (see the sketch below)
- Summarize technical data for exec/management
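A minimal sketch of the reporting step, reusing the same OpenAI-compatible client pattern as the earlier annotation example: structured findings (hypothetical here) are serialized and the model is asked for a Markdown draft with an executive summary.

# report_draft.py - sketch: turn structured findings into a report draft with an LLM.
# Endpoint, model name, and the findings themselves are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

findings = [   # hypothetical output of the static/dynamic analysis phases
    {"title": "Hardcoded API key", "severity": "High", "location": "com.example.app.Config"},
    {"title": "Exported activity without permission", "severity": "Medium", "location": "AndroidManifest.xml"},
]

prompt = (
    "Write a concise vulnerability report in Markdown with an executive summary, "
    "then one section per finding with impact and remediation:\n\n"
    + json.dumps(findings, indent=2)
)

resp = client.chat.completions.create(
    model="llama3",   # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)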
Module 11 | Final Hands-on Project
- Reverse a more complex APK (obfuscation + native)
- Use AI for annotation, Frida script, fuzzing, and report
- Deliver final findings
- Q&A, feedback, and resources
Knowledge Prerequisites
- Basic understanding of Android application structure and components (APK, manifest, smali/Java)
- Basic knowledge of reverse engineering concepts (experience with Jadx, Ghidra, or Frida is a plus)
- Familiarity with Python scripting
- Some experience with using LLMs (e.g., ChatGPT, Claude) is helpful but not required
- Optional: previous experience with fuzzing or mobile vulnerability analysis
System Requirements
- Operating System: Linux or macOS
- Internet Access: Required for using online LLMs
- Local AI: a local LLM client or interface
Students will be provided with detailed setup instructions well in advance of the class.
YOUR INSTRUCTORS: Guerric Eloi and Nabih Benazzouz
Guerric Eloi is a cybersecurity researcher at FuzzingLabs focused on Android and iOS security. He identifies high-impact vulnerabilities through penetration testing, reverse engineering, and bug bounty programs, working with vendors to prevent major threats. He also delivers practical training on mobile security and builds custom tools to automate vulnerability discovery and strengthen system defenses.
Nabih is the COO of FuzzingLabs. Over the last 3.5 years he has moved from intern to security engineer, team lead, and now operations lead. His work focuses on fuzzing and vulnerability research, writing and maintaining tools in C, Python, Rust, and Go. He earned his cybersecurity engineering degree from EPITA.
About FuzzingLabs
Founded in 2021 and headquartered in Paris, FuzzingLabs is a cybersecurity startup specializing in vulnerability research, fuzzing, and blockchain security. We combine cutting-edge research with hands-on expertise to secure some of the most critical components in the blockchain ecosystem.
COUNTERMEASURE25: 60+ days before the event, 75% of fees refunded; 45-60 days before the event, 50% refunded; less than 45 days, 0% refunded. Course changes are allowed up to 14 days before the event start (some restrictions apply). Attendee changes can be accommodated up to 14 days prior to the event.
Note: In the event of a class cancellation, Ringzer0 will endeavor to offer transfer to another training at no additional charge.