Remote Event Streaming (eCaptureQ WebSocket)
Relevant source files
The following files were used as context for generating this wiki page:
- cli/cmd/ecaptureq.go
- examples/ecaptureq_client/README.md
- examples/ecaptureq_client/TESTING.md
- examples/ecaptureq_client/go.mod
- examples/ecaptureq_client/go.sum
- examples/ecaptureq_client/main.go
- internal/output/writers/iowriter_adapter.go
- pkg/ecaptureq/README.md
- pkg/ecaptureq/client.go
- pkg/ecaptureq/hub.go
- pkg/ecaptureq/server.go
- pkg/util/ws/server.go
- pkg/util/ws/server_test.go
- protobuf/PROTOCOLS.md
- protobuf/README.md
- protobuf/gen/v1/ecaptureq.pb.go
- protobuf/proto/v1/ecaptureq.proto
- utils/protobuf_visualizer/README.md
- utils/protobuf_visualizer/README_CN.md
The eCaptureQ live event push mechanism provides a real-time, structured data stream from eCapture to external consumers. By utilizing WebSockets and Protocol Buffers (Protobuf), it enables low-latency delivery of captured plaintext events and system logs to remote analysis platforms, security dashboards, or custom collectors.
Architecture Overview
The eCaptureQ system follows a hub-and-spoke model where the eCapture process acts as a WebSocket server. It manages multiple client connections and broadcasts events as they are processed by the userspace pipeline.
System Components and Data Flow
The following diagram illustrates the relationship between the internal eCapture components and the eCaptureQ streaming service.
Diagram: eCaptureQ Streaming Architecture
Sources: pkg/ecaptureq/server.go:31-57, pkg/ecaptureq/client.go:34-48, cli/cmd/ecaptureq.go:21-36
Starting the Server
The WebSocket server is enabled via the --ecaptureq flag followed by a WebSocket URL.
- Command Example: `sudo ecapture tls --ecaptureq ws://127.0.0.1:28257/`
- Requirement: The URL must include a trailing slash pkg/ecaptureq/README.md:12-12.
- Default Port: While the user can specify any port, `28257` is the convention used in the examples examples/ecaptureq_client/main.go:40-40.
Message Format (Protobuf)
eCaptureQ uses Protocol Buffers for efficient serialization. The top-level message is LogEntry, which uses a oneof field to encapsulate different data types.
LogEntry Structure
The LogEntry structure identifies the message type using the LogType enum protobuf/PROTOCOLS.md:53-62.
| Field | Type | Description |
|---|---|---|
| log_type | LogType | 0: Heartbeat, 1: Process Log, 2: Event pkg/ecaptureq/README.md:34-39 |
| payload | oneof | Contains event_payload, heartbeat_payload, or run_log protobuf/PROTOCOLS.md:53-62 |
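The schema is defined authoritatively in protobuf/proto/v1/ecaptureq.proto; the sketch below is only an illustration of the shape described above. Field numbers and message names other than `log_type`, `event_payload`, `heartbeat_payload`, and `run_log` are assumptions, not copied from the source.

```protobuf
syntax = "proto3";

// Illustrative sketch only -- see protobuf/proto/v1/ecaptureq.proto
// for the real definition. Field numbers here are assumptions.
enum LogType {
  LOG_TYPE_HEARTBEAT = 0;
  LOG_TYPE_PROCESS_LOG = 1;
  LOG_TYPE_EVENT = 2;
}

message LogEntry {
  LogType log_type = 1;
  oneof payload {
    Event event_payload = 2;
    Heartbeat heartbeat_payload = 3;
    string run_log = 4;
  }
}
```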
Data Types
1. Event (LOG_TYPE_EVENT)
Represents a captured traffic fragment (e.g., a decrypted TLS packet).
- Key Fields: `timestamp`, `uuid`, `src_ip`, `src_port`, `dst_ip`, `dst_port`, `pid`, `pname`, `type` (Protocol ID), and the raw `payload` bytes protobuf/PROTOCOLS.md:24-41.
2. Process Log (LOG_TYPE_PROCESS_LOG)
Contains standard eCapture execution logs (e.g., version info, probe attachment status).
- Format: Plain string delivered via the `run_log` field pkg/ecaptureq/server.go:94-94.
3. Heartbeat (LOG_TYPE_HEARTBEAT)
Used for connection keep-alive and health monitoring.
- Interval: The server sends a heartbeat every 15 seconds pkg/ecaptureq/client.go:83-83.
- Fields: `timestamp`, `count` (sequence number), and a `status` message protobuf/PROTOCOLS.md:45-51.
Implementation Details
Startup Buffer (History Replay)
To ensure clients don't miss critical initialization logs (like version info or probe status) that occur before the client connects, the Server maintains a logbuff.
- Capacity: Fixed at 128 messages pkg/ecaptureq/server.go:29-29.
- Behavior: When a new client connects, the server immediately replays all messages in `logbuff` to that client before streaming live events pkg/ecaptureq/server.go:75-75.
Connection Management
- Hub: The `Hub` manages the set of active clients and handles the broadcasting of messages to ensure thread safety pkg/ecaptureq/server.go:47-47.
- Pump Pattern: Each client connection spawns two goroutines:
  - `writePump()`: Consumes messages from the `send` channel and pushes them to the WebSocket pkg/ecaptureq/client.go:82-117.
  - `readPump()`: Monitors the connection for closure or incoming control messages pkg/ecaptureq/client.go:55-75.
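The write-pump half of this pattern can be sketched with plain channels, leaving the WebSocket write as a stubbed callback. This is a simplified illustration of the pattern in client.go, not its actual code; the channel buffer size and the `done` signal are assumptions.

```go
package main

import "fmt"

// client models per-connection state: a buffered send channel the hub
// writes into and writePump drains. Buffer size is illustrative.
type client struct {
	send chan []byte
	done chan struct{}
}

// writePump drains the send channel and forwards each message to the
// connection (stubbed here as a callback). It exits when the channel
// is closed or a write fails, i.e. when the connection goes away.
func (c *client) writePump(write func([]byte) error) {
	defer close(c.done)
	for msg := range c.send {
		if err := write(msg); err != nil {
			return // connection broken: stop pumping
		}
	}
}

func main() {
	c := &client{send: make(chan []byte, 8), done: make(chan struct{})}

	var delivered []string
	go c.writePump(func(msg []byte) error {
		delivered = append(delivered, string(msg))
		return nil
	})

	// The hub broadcasts by writing into each client's send channel.
	c.send <- []byte("event 1")
	c.send <- []byte("event 2")
	close(c.send)
	<-c.done

	fmt.Println(delivered) // [event 1 event 2]
}
```

Keeping all WebSocket writes inside a single goroutine per connection is the usual way to satisfy the one-concurrent-writer rule of WebSocket libraries while letting the hub broadcast without blocking.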
Client Integration
Reference Go Client
A reference implementation is provided in examples/ecaptureq_client/main.go. It demonstrates:
- Connecting to the server using `websocket.Dial` examples/ecaptureq_client/main.go:50-50.
- Receiving raw bytes and unmarshaling them into `pb.LogEntry` examples/ecaptureq_client/main.go:87-88.
- Switching logic based on `LogType` to process events or heartbeats examples/ecaptureq_client/main.go:100-110.
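The dispatch step can be sketched as a switch on the log type. The real client switches on the generated `pb.LogEntry`; the local `LogType` and `LogEntry` types below are stand-ins so the sketch is self-contained, and the handler bodies are illustrative.

```go
package main

import "fmt"

// Stand-in types: the reference client uses the generated pb.LogEntry
// and its LogType enum; these exist only to keep the sketch
// self-contained. Enum values match the table above (0/1/2).
type LogType int32

const (
	LogTypeHeartbeat  LogType = 0
	LogTypeProcessLog LogType = 1
	LogTypeEvent      LogType = 2
)

type LogEntry struct {
	LogType LogType
	RunLog  string
	Payload []byte
}

// dispatch routes each decoded entry by its LogType, mirroring the
// switch in the reference client.
func dispatch(entry LogEntry) string {
	switch entry.LogType {
	case LogTypeEvent:
		return fmt.Sprintf("event: %d payload bytes", len(entry.Payload))
	case LogTypeProcessLog:
		return "log: " + entry.RunLog
	case LogTypeHeartbeat:
		return "heartbeat"
	default:
		return "unknown log type"
	}
}

func main() {
	fmt.Println(dispatch(LogEntry{LogType: LogTypeEvent, Payload: []byte("GET / HTTP/1.1")}))
	fmt.Println(dispatch(LogEntry{LogType: LogTypeProcessLog, RunLog: "probe attached"}))
	fmt.Println(dispatch(LogEntry{LogType: LogTypeHeartbeat}))
}
```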
Multi-language Integration Pattern
For non-Go languages (Python, Rust, etc.), the integration follows this pattern:
Diagram: Integration Logic
Sources: pkg/ecaptureq/README.md:166-169, examples/ecaptureq_client/main.go:99-110
Debugging Tools
The project includes a protobuf_visualizer utility (located in utils/protobuf_visualizer/) that can connect to an eCaptureQ stream and display messages in compact or hex formats for debugging utils/protobuf_visualizer/README.md:1-31.