A Guide to KAPE: Streamlining Windows Forensics
- CyberQuest

"In digital forensics, time is the currency" and If you ever worked a forensic case on Windows, you know the struggle- event logs, registry hives, prefetch files, shadow copies...they all pile up into a mountain of artifacts.
Each artifact might hold a clue. But digging manually? That feels endless. Imaging a whole disk is thorough, sure, but who has hours when the pressure is on?
That's exactly the gap KAPE (Kroll Artifact Parser and Extractor) was built to fill. Instead of spending hours grabbing artifacts one by one, KAPE helps you triage and pull out what matters in minutes.
Launched in 2019 by Eric Zimmerman, a name most of us in the forensic world recognize, KAPE has grown into a go-to tool for incident response, investigations, and even enterprise security audits. It doesn't replace deep-dive analysis, but it gives you the speed to actually keep up with today's cases.
From Imaging to Triage: Why KAPE Speeds Up Forensics
Before KAPE, analysts often relied on:
Full disk imaging → thorough but painfully slow.
Manual artifact extraction → precise but time-consuming.
KAPE flips this with a triage-first model:
Collect the most relevant artifacts first.
Parse them instantly with proven tools.
Get structured outputs (CSV, JSON, TXT) that are ready to dive into.
It doesn't replace deep dives, but it gives analysts something just as valuable: speed, clarity, and early leads when time matters most.
To Download KAPE: https://www.kroll.com/en/services/cyber/incident-response-recovery/kroll-artifact-parser-and-extractor-kape
How KAPE Works
One of the best things about KAPE is that it doesn't need heavy setup. There are no big installs; you can run it straight from a USB drive, a network share, or even a folder on your machine.
At its core, KAPE works in two simple phases:
Collection (Targets) → Think of this as deciding what evidence you actually care about.
KAPE goes after the artifacts you select.
Files are queued in two passes: unlocked files are copied directly, while locked files are captured using raw disk reads.
All metadata and timestamps are preserved.
Processing (Modules) → This defines how to parse the collected files.
Once the files are collected, modules (often powered by Eric Zimmerman's tool suite) run automatically to parse them.
Output is saved in investigator-friendly formats like CSV or JSON.
Targets = what you grab.
Modules = how you read it.
Together they form a fast triage pipeline: collect first, translate immediately, act faster.
Targets: Narrowing the Hunt

In practice, investigators don't need everything from the disk; what matters is knowing where to look first. That's where Targets come in. Instead of imaging terabytes of data, .tkape files let us define exactly what KAPE should extract.
Some of the usual suspects:
Prefetch (*.pf files) → Quick evidence of which programs ran.
Registry hives → Rich content on both user and system activity.
Event logs → The go-to for logons, process creation, and persistence.
When we need speed, we don't cherry-pick. Compound Targets like KapeTriage bundle together the most common artifacts (AmCache, SysCache, Registry hives, event logs), so we can run one command and start making sense of the activity trail faster.
Targets aren't static either. They are organized into folders:
!Disabled: Targets you don't need right now.
!Local: Custom, environment-specific Targets that remain untouched during updates.
Targets define scope. They’re our evidence checklist.
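As a concrete illustration, a minimal custom Target might look like the sketch below. The field names follow the published .tkape layout, but the values here are illustrative; a real file also carries a unique Id GUID and is usually authored against KAPE's template.

```yaml
Description: Prefetch files (illustrative example)
Author: Your Name
Version: 1.0
RecreateDirectories: true
Targets:
    -
        Name: Prefetch
        Category: Prefetch
        Path: C:\Windows\prefetch\
        FileMask: '*.pf'
```

Dropping a file like this into the Targets folder (or !Local, to survive updates) makes it selectable like any built-in Target.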
Modules: The Evidence Translators

Raw artifacts by themselves don't tell us much; they are just noise until we give them structure. That's where Modules step in.
Think of them as translators: each Module tells KAPE exactly what to run, how to process it, and where the results should go.
A Module (.mkape) basically answers four questions:
What tool should I use? (e.g., PECmd for Prefetch, EvtxECmd for Event Logs)
What arguments should I pass? (commonly an export to CSV or JSON)
Where should the output go? (the output destination)
In what format? (CSV for spreadsheets, JSON for automation, TXT for logs)
Here's a quick example: the Prefetch Module runs PECmd.exe over .pf files and instantly outputs a CSV timeline of executed applications.
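A stripped-down version of such a Module could look like the sketch below. The field names follow the .mkape layout and the %sourceDirectory%/%destinationDirectory% placeholders are KAPE's built-in variables, but the values are illustrative; a real file also includes a unique Id GUID.

```yaml
Description: Run PECmd against collected Prefetch files (illustrative example)
Category: Prefetch
Author: Your Name
Version: 1.0
Processors:
    -
        Executable: PECmd.exe
        CommandLine: -d %sourceDirectory% --csv %destinationDirectory%
        ExportFormat: csv
```

KAPE substitutes the placeholders at run time, so the same Module works no matter where the artifacts were collected to.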
When scale matters, we don't run tools one by one. Compound Modules let KAPE chain multiple parsers in sequence.
For example:
!EZParser → processes Prefetch, Registry, AmCache, Event Logs, and more.
This reduces repetitive execution and ensures consistency across cases.
The Bin Directory: Behind the scenes, Modules depend on third-party executables (most commonly Eric Zimmerman's suite). To keep KAPE portable, these tools live in a bin directory that ships alongside the Modules.
No matter where KAPE runs — USB, network share, or local — the tools are always available.
Modules don't just parse data; they translate noise into narrative.
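For orientation, a portable KAPE folder typically looks roughly like this (exact contents vary by version, so treat the sketch as illustrative):

```
KAPE\
  kape.exe
  gkape.exe
  Targets\
    !Disabled\
    !Local\
    ...
  Modules\
    bin\          (third-party tools, e.g. PECmd.exe, EvtxECmd.exe)
    !Disabled\
    !Local\
    ...
```

Because everything sits in one tree, copying that folder to a USB stick or a network share is all the "deployment" KAPE needs.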
Operating KAPE
KAPE can be operated in two ways:
Graphical interface (gkape.exe) → clear and visual, useful for transparency and training.
Command line (kape.exe) → faster, scriptable, and the method of choice for live incidents.

Operating with the GUI (gkape.exe)
If you like things visual, KAPE's GUI (gkape.exe) makes life much easier. Think of it as a dashboard where you decide what to collect and how to process it. When you launch the graphical interface, KAPE presents a two-panel view:
Left panel → Target configuration: Here you tell KAPE what evidence you want to grab, where to grab it from (disk, folder, or mounted image), and where to save it.
Right panel → Module configuration: This is where you decide what to do with the evidence once it's collected: which parsers to run, what output format to use, and where results should be saved.
This design reflects KAPE’s two-phase model: collection first, processing second.
Step 1: Target Configuration
Source (Target Source / --tsource)
The root drive or directory to collect from.
Can also point to mounted images or alternate directories.
Destination (Target Destination / --tdest)
Path where collected artifacts are stored.
KAPE creates the directory if it does not exist.
Target Selection (--target)
Choose individual Targets (e.g., Prefetch) or Compound Targets (e.g., KapeTriage).
Flush Option (--tflush)
Determines whether KAPE clears the destination folder before each run.
Useful to prevent mixing old and new evidence, but risky if not intentional.
Additional Options
Deduplication (--tdd) → prevents duplicate copies, especially useful when collecting from Volume Shadow Copies.
"The GUI view: configure, select, and execute with clicks"
Step 2: Module Configuration
Module Source (--msource)
Defines the folder KAPE should process.
By default, if left empty, this points to the same folder used for Target Destination (--tdest).
Module Destination (--mdest)
Directory where parsed results will be stored.
Keeping this separate from --tdest ensures the raw artifacts stay untouched while the processed output remains clean and organized.
Module Selection (--module)
Choose one or multiple modules, or compound sets like !EZParser.
!EZParser is frequently used in the field because it automatically runs parsers against Prefetch, Registry, AmCache, and Event Logs.
Module Flush (--mflush)
Same as --tflush but applies to the module output directory.

Step 3: Additional Options
KAPE’s GUI exposes advanced features:
Packaging
--zip, --vhd, --vhdx: Package all results into a container for easy transfer.
--zpw: Apply a password to ZIP files.
Volume Shadow Copies (--vss)
Instructs KAPE to collect from shadow copies.
Cloud/Remote Transfer
Upload processed output to S3 or SFTP directly from KAPE.
Often used in enterprise incident response, where evidence must be centralized securely.
Step 4: Execution
Clicking Execute builds the equivalent command-line instruction and runs it. The console window shows live progress logs — every artifact collected, every parser run, every error encountered.
The result: a structured output directory containing both raw artifacts (from Targets) and parsed intelligence (from Modules).
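Putting the options from the steps above together, the command the GUI builds might look something like this (paths and the archive base name are illustrative):

```shell
kape.exe --tsource C: --tdest D:\triage --target KapeTriage --mdest E:\KAPE-OUTPUT --module !EZParser --vss --zip triage
```

Seeing the generated command in the GUI is also a convenient way to learn the CLI switches before moving to kape.exe directly.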
Operating with the CLI (kape.exe)
While KAPE provides a GUI, its true power lies in the command line (kape.exe). In live investigations, the CLI is faster, more flexible, and far easier to automate across multiple systems.
Basic Collection
The simplest way to run KAPE is to collect artifacts from a source drive into a destination folder:
kape.exe --tsource C: --target KapeTriage --tdest D:\test
--tsource C: → Defines the source (usually the system drive).
--target KapeTriage → Runs the compound target KapeTriage, which includes a broad set of artifacts such as Prefetch, AmCache, Registry, and event logs.
--tdest D:\test → Saves all collected files into the specified folder.
This command is ideal for fast triage collection during incident response.
Collection and Processing Together
KAPE becomes more powerful when Modules are added. Instead of just copying files, it can parse them immediately using forensic tools.
kape.exe --tsource C: --target KapeTriage --tdest D:\test --mdest E:\KAPE-OUTPUT --module !EZParser
--mdest → Location where parsed results are stored.
--module !EZParser → Runs the compound module !EZParser, which automatically processes key artifacts with tools like PECmd, EvtxECmd, and AmCacheParser.
In practice, this command yields both raw artifacts and structured CSV reports ready for review.
Advanced CLI Options
When operating in the field, we often combine core commands with advanced flags for precision and efficiency:
Volume Shadow Copies (--vss):
Collects historical and deleted artifacts. Useful in cases where attackers attempted to wipe traces.
kape.exe --tsource C: --target KapeTriage --tdest D:\test --vss
Packaging Results (--zip / --vhdx):
Automatically compresses output into a single archive for transfer.
kape.exe --tsource C: --target KapeTriage --tdest D:\test --zip
Batch Automation
For repeatable investigations, KAPE supports batch execution with _kape.cli files. Create a text file in the same directory as kape.exe and define your preferred workflow:
--tsource C: --target KapeTriage --tdest D:\test --mdest E:\KAPE-OUTPUT --module !EZParser
Then run:
kape.exe
KAPE automatically reads _kape.cli and executes the sequence.
This is especially useful in enterprise response, where the same triage process must be repeated across multiple endpoints.
KAPE in the Investigation Workflow
KAPE is not a standalone tool — it is part of a larger forensic workflow. The data it collects and parses is often just the beginning. Structured outputs like CSV and JSON can be imported into:
Timeline Explorer for timeline reconstruction.
Excel or Power BI for pivoting and visualization.
SIEM platforms (Splunk, ELK) for cross-system correlation.
Other forensic tools for deep dives into specific artifacts.
This integration makes KAPE an accelerator. It gives investigators early leads while traditional full imaging and deep analysis continue in parallel.
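Before handing the output to heavier tooling, a quick sanity check of a KAPE-produced CSV can be done with standard shell utilities. The file name and columns below are hypothetical stand-ins for a real parser's output:

```shell
# Hypothetical stand-in for a parsed Prefetch CSV (columns simplified)
printf 'ExecutableName,RunCount,LastRun\n' > prefetch_timeline.csv
printf 'CMD.EXE,5,2024-01-02 10:00:00\n' >> prefetch_timeline.csv
printf 'POWERSHELL.EXE,2,2024-01-03 11:30:00\n' >> prefetch_timeline.csv

# Show the header, then the data rows sorted by run count (highest first)
head -n 1 prefetch_timeline.csv
tail -n +2 prefetch_timeline.csv | sort -t, -k2,2nr
```

A minute of this kind of triage on the triage output often tells you which artifact deserves the first deep dive.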
Final Notes
KAPE doesn’t replace full forensic analysis, but it gives investigators what they need most: speed and clarity. It bridges the gap between evidence collection and actionable leads, making it a go-to tool for DFIR.
In my next post, I’ll share how KAPE fits into wider incident response workflows and works alongside other forensic utilities.