The Five-Step Process in Detail
Step 1 — Camera Deployment
EchoDepth Events works with standard commercial cameras — no specialist hardware required. Cameras are placed at the zones you want to measure: stand entrances, product demonstration areas, speaker stages, interactive displays, and decision points.
Our deployment team advises on optimal camera positioning for each zone type. A medium-sized stand (6×9m) typically uses 4–6 camera positions; a full exhibition floor may use 20–40 positions.
The platform integrates with standard IP cameras, event-hire systems, and existing venue CCTV infrastructure (where permitted); no dedicated hardware purchase is required.
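The sizing guidance above can be captured as a simple deployment plan. This is an illustrative sketch only: the zone names, camera counts, and dictionary structure are assumptions, not EchoDepth's actual configuration format.

```python
# Hypothetical deployment plan for a medium-sized (6x9 m) stand.
# Zone names, camera counts, and the structure itself are illustrative.
DEPLOYMENT_PLAN = {
    "stand_entrance":      {"cameras": 1, "source": "ip_camera"},
    "demo_area":           {"cameras": 2, "source": "ip_camera"},
    "speaker_stage":       {"cameras": 1, "source": "venue_cctv"},
    "interactive_display": {"cameras": 1, "source": "event_hire"},
}

def total_cameras(plan):
    """Sum camera positions across all zones in the plan."""
    return sum(zone["cameras"] for zone in plan.values())
```

For a medium-sized stand, the total should land in the 4–6 position range quoted above.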
Step 2 — FACS Action Unit Analysis
Each camera feed is processed by EchoDepth's computer vision engine, which identifies facial regions and analyses 44 FACS-compliant Action Units — the discrete facial muscle movements that form the basis of human facial expression.
The Facial Action Coding System (FACS), developed by psychologists Paul Ekman and Wallace Friesen, is the gold standard for systematic facial expression analysis. EchoDepth maps these 44 AUs to discrete emotional states:
- Engagement — eyebrow raise (AU1/2), wide eyes, forward lean signals
- Confusion — brow furrow (AU4), asymmetric lip press, head tilt
- Delight — Duchenne smile (AU6+AU12), periorbital crinkling
- Scepticism — unilateral dimpler (AU14), neutral brow with reduced engagement
- Disengagement — reduced AU activity, gaze aversion signals
- Surprise — brow raise (AU1+AU2), jaw drop (AU26)
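The AU-to-emotion mapping above can be sketched in a few lines. The AU groupings follow the list above, but the scoring rule (mean activation of the constituent AUs) and all names are assumptions for illustration, not EchoDepth's actual model.

```python
# Illustrative mapping of FACS Action Unit activations to emotion scores.
# AU groupings follow the list above; the averaging rule is an assumption.
EMOTION_AUS = {
    "delight":    ("AU6", "AU12"),          # Duchenne smile
    "surprise":   ("AU1", "AU2", "AU26"),   # brow raise + jaw drop
    "confusion":  ("AU4",),                 # brow furrow
    "engagement": ("AU1", "AU2"),           # eyebrow raise
}

def emotion_scores(au_activations):
    """Score each state as the mean activation of its constituent AUs.

    au_activations: dict mapping AU name -> activation in [0, 1].
    AUs absent from the dict count as 0.
    """
    return {
        state: sum(au_activations.get(au, 0.0) for au in aus) / len(aus)
        for state, aus in EMOTION_AUS.items()
    }
```

For example, a frame with strong AU6 and AU12 activations would score high on "delight" and near zero on "confusion".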
Step 3 — Signal Extraction and Anonymisation
This is where EchoDepth Events fundamentally differs from surveillance systems. Raw video frames are processed at the edge and immediately discarded. The frame itself is never stored, transmitted, or retained in any form.
What is extracted is a set of numerical signals: AU activation values and derived emotional state scores. These signals contain no facial geometry, no biometric identifiers, and no information that could identify an individual.
Three primary output signals are computed per observation window:
- Confidence Score — measures positive emotional engagement and intent signals
- Instability Score — measures ambivalence, confusion, and conflicting signals
- Net Confidence — the composite signal indicating overall emotional disposition
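The three output signals can be sketched as a small aggregation over per-window emotion scores. The specific formulas here (averaging the positive and negative states, and net confidence as their difference) are assumptions for illustration; the document does not specify how EchoDepth computes them.

```python
# Minimal sketch of the three per-window output signals.
# The weighting and the net = confidence - instability rule are assumptions.
def window_signals(state_scores):
    """Compute Confidence, Instability, and Net Confidence for one window.

    state_scores: dict with keys such as "engagement", "delight",
    "confusion", "scepticism", each a score in [0, 1].
    """
    confidence = (state_scores.get("engagement", 0.0)
                  + state_scores.get("delight", 0.0)) / 2
    instability = (state_scores.get("confusion", 0.0)
                   + state_scores.get("scepticism", 0.0)) / 2
    return {
        "confidence": confidence,
        "instability": instability,
        "net_confidence": confidence - instability,
    }
```

Only these numeric signals are persisted; the underlying frames are already gone by this point.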
Step 4 — Real-Time Dashboard
Aggregated emotional signals are delivered to the EchoDepth Events dashboard in near real time (typically <3 seconds latency). Event managers can monitor:
- Live engagement levels by zone
- Confusion alerts — flagged when confusion signals exceed threshold
- Peak engagement windows throughout the day
- Comparative zone performance
- Staff interaction effectiveness indicators
- Visitor flow correlation with emotional state
Configurable alerts can notify stand managers instantly when confusion spikes at any zone — enabling real-time intervention, messaging adjustments, or staff redeployment.
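The confusion alert described above amounts to a threshold check over per-zone signals. This is a hedged sketch: the default threshold, zone names, and function shape are illustrative assumptions, not EchoDepth's alerting API.

```python
# Sketch of the configurable confusion alert. The threshold value and
# data shapes are illustrative assumptions.
CONFUSION_THRESHOLD = 0.6  # hypothetical default; configurable per deployment

def check_confusion_alerts(zone_signals, threshold=CONFUSION_THRESHOLD):
    """Return the zones whose confusion signal exceeds the threshold.

    zone_signals: dict mapping zone name -> dict of signal name -> value.
    """
    return [zone for zone, signals in zone_signals.items()
            if signals.get("confusion", 0.0) > threshold]
```

A dashboard loop would call this each aggregation window and notify stand managers for any zones returned.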
Step 5 — Post-Event Analytics Report
Every deployment includes a comprehensive post-event report covering:
- Zone-by-zone engagement performance with timeline charts
- Peak and trough analysis with contributing factors
- Confusion hotspot mapping with messaging recommendations
- Staff interaction effectiveness scores
- Visitor emotional journey arc through the stand
- Benchmarked recommendations for future events
- Stakeholder-ready executive summary for budget justification
Privacy Architecture
EchoDepth Events was architected around GDPR Article 25 (data protection by design and by default). Privacy is a structural property of the system, not a compliance checkbox.
- Edge processing: Video analysis occurs on-device. Frames are never transmitted over a network.
- Ephemeral frames: Each frame is processed and deleted within milliseconds — no buffer exists.
- Signal-only persistence: Only numerical AU values and derived scores are transmitted and stored.
- No unique identifiers: The system does not assign IDs to individuals or track individuals over time.
- Aggregate-only outputs: No individual-level data is ever presented in any output.
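The edge-processing and ephemeral-frame guarantees above can be sketched as a function whose only output is numeric signals: the raw frame never leaves its scope. The vision model is stubbed out as a caller-supplied function (`extract_au_values` is a placeholder name, not a real EchoDepth API).

```python
# Sketch of signal-only persistence: the frame is consumed at the edge
# and only anonymous numeric AU values are returned for transmission.
def process_frame(frame, extract_au_values):
    """Process one frame on-device and return only numeric signals.

    `frame` is raw pixel data; `extract_au_values` stands in for the
    on-device vision model (an assumption for this sketch) and returns
    a dict of AU name -> activation.
    """
    au_values = extract_au_values(frame)  # e.g. {"AU6": 0.7, "AU12": 0.8}
    del frame                             # drop the local reference to the pixels
    # Only plain floats leave this function; no imagery, no identifiers.
    return {au: float(value) for au, value in au_values.items()}
```

Everything downstream (aggregation, dashboards, reports) sees only these numbers, which is what makes the "no biometric storage" claims structural rather than procedural.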
What EchoDepth Events Does Not Do
- ❌ Store video footage or facial images
- ❌ Perform facial recognition or identification
- ❌ Track individuals across time or across zones
- ❌ Store biometric data of any kind
- ❌ Create any linkable individual record