Architecture

eCapture is built on a modular, three-layer architecture designed to bridge high-performance eBPF kernel instrumentation with a flexible userspace processing pipeline. This design allows developers to extend the tool with new probes while maintaining a consistent data path for event processing and output.

System Overview

High-Level Component Interaction

At a high level, eBPF programs attached in the kernel capture plaintext and stream events through BPF maps (perf or ring buffers) to the userspace probe layer, which feeds the event-processing pipeline and its configured output writers.

Sources: main.go:9-11, kern/tc.h:58-78, README.md:94-103


Architectural Layers

The system is divided into three primary functional layers:

1. eBPF Kernel Layer

This layer consists of C programs compiled into eBPF bytecode. It performs the actual interception of data at the kernel level using uprobes (for user-space libraries like OpenSSL), kprobes, or TC (Traffic Control) classifiers.

  • Bytecode Management: eBPF assets are stored in ebpfassets/ and can be loaded in CO-RE (Compile Once – Run Everywhere) mode using BTF, or in non-CO-RE mode for older kernels.
  • Data Capture: Programs use helpers such as bpf_probe_read (kern/bpf/bpf_helper_defs.h:110) to extract plaintext from memory before encryption or after decryption.

For details, see Three-layer Architecture.

2. Userspace Probe Layer

Located in internal/probe/, this layer manages the lifecycle of eBPF programs. It handles:

  • Discovery: Finding the target shared libraries (e.g., libssl.so) on the host system (README.md:108-112).
  • Loading: Using the BaseProbe template to load bytecode and attach hooks to specific function symbols (e.g., SSL_write).
  • Configuration: Validating CLI arguments via the BaseConfig structure.

For details, see Probe Framework and Extension Mechanism.

3. Event Processing & Output Layer

Once data leaves the kernel via Perf or Ring buffers, it enters the pkg/event_processor pipeline.

  • Ordering: The eventWorker ensures that packets belonging to the same connection (identified by a UUID) are processed in the correct sequence (CHANGELOG.md:5).
  • Parsing: Protocol-specific parsers (HTTP/1.1, HTTP/2, MySQL) reconstruct high-level application data.
  • Delivery: Final data is encoded (JSON/Text/Protobuf) and sent to configured writers like Stdout, PcapWriter, or the eCaptureQ WebSocket server.

For details, see Event Processing Pipeline.


Core Data Structures

To understand the data flow, developers should be familiar with the following entities defined in the kernel headers and userspace packages:

| Entity | Location | Description |
|---|---|---|
| `skb_data_event_t` | kern/tc.h:30-37 | Metadata for network packets captured via TC hooks. |
| `net_id_t` | kern/tc.h:39-47 | Connection tuple (IP/Port/Protocol) used for session tracking. |
| `skb_events` | kern/tc.h:58-63 | The BPF Perf Event Array map used to stream data to userspace. |
| `LogEntry` | pkg/ecaptureq/ | The standard Protobuf message format for remote streaming. |

Event Flow: Kernel to CLI

Captured data travels from the kernel hooks (TC classifiers, uprobes) through the BPF perf event map into the userspace probe reader, then through the event-processing pipeline to the CLI's output writers.

Sources: kern/tc.h:136-150, CHANGELOG.md:103-111, README.md:114-119

