Replay

Replay is a project for rerunning real-world security incidents inside the SIEM.

Rethinking How We Learn, Test, and Trust Cybersecurity

In cybersecurity, detection and response don’t just depend on technology — they depend on experience. But experience is often gained the hard way: through trial, error, and actual incidents. Replay changes that.

Replay is Fluency’s immersive environment for reliving real security events, training analysts, validating AI, and testing response workflows — all from sanitized telemetry and curated scenarios. It’s built for those who know that good security isn’t just about alerts — it’s about what you do next.

Why Replay Exists

Most SOC training is passive. Playbooks are theoretical. SIEM rules are untested at scale. AI systems are trained on static datasets or assumptions that don’t reflect the messy reality of enterprise networks. This creates a fundamental problem:

You can’t defend what you haven’t practiced.
You can’t trust what you haven’t tested.

Replay solves this by letting you walk through real-world events — again and again — but this time with insight, tools, and structure.

Documentation

Instructions for using Replay can be found here.

The codebase is being prepared for open-source release. To get access to Replay, contact support@fluencysecurity.com.

What Replay Enables

🎓 Train Analysts with Real Data

Replay introduces structured workbooks that guide students and analysts through four stages of investigation:

  1. Validation – Is this alert actionable?
  2. Scoping – What is affected and how deep does it go?
  3. Response – What actions are necessary right now?
  4. Hot Wash – What do we change for next time?

Each workbook includes logs, detection context, AI analysis results, and space to write your findings. It’s like a flight simulator for SOC analysts — but built on real logs from real systems.
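
As a rough illustration of how such a workbook might be organized, here is a minimal Python sketch. The dataclass fields and stage names are assumptions drawn from the four stages above, not Fluency's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of a Replay workbook; the real schema may differ.
# Each stage mirrors one step of the four-stage investigation flow.

@dataclass
class StageNotes:
    """Analyst findings for one investigation stage."""
    stage: str                 # "validation", "scoping", "response", or "hot_wash"
    questions: List[str]       # prompts that guide the analyst
    findings: str = ""         # free-form space to write conclusions

@dataclass
class Workbook:
    """One curated scenario: sanitized logs plus guided analysis."""
    scenario_id: str
    log_refs: List[str]                # pointers to sanitized telemetry
    detection_context: str             # why the alert fired
    ai_analysis: Optional[str] = None  # model output, if a run exists
    stages: List[StageNotes] = field(default_factory=list)

def new_workbook(scenario_id: str, log_refs: List[str], context: str) -> Workbook:
    """Seed a workbook with the four standard stages."""
    stages = [
        StageNotes("validation", ["Is this alert actionable?"]),
        StageNotes("scoping", ["What is affected, and how deep does it go?"]),
        StageNotes("response", ["What actions are necessary right now?"]),
        StageNotes("hot_wash", ["What do we change for next time?"]),
    ]
    return Workbook(scenario_id, log_refs, context, stages=stages)
```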

🧠 Evaluate AI with Ground Truth

AI models that analyze alerts often lack context, feedback, or iteration. Replay fixes that by offering a closed-loop evaluation environment, where each scenario:

  • Contains structured prompts and AI responses
  • Has a known “truth” for comparison
  • Can be re-run with different models or updated tools

This is how we improve trust in AI. Not by claiming perfection, but by proving performance — and learning from errors.
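
To make the loop concrete, here is a minimal Python sketch of scoring model verdicts against known truth. `Scenario`, `run_model`, and the verdict strings are hypothetical placeholders, not Fluency's API.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative only: run_model stands in for whatever model or tool is
# under evaluation; field names here are assumptions.

@dataclass
class Scenario:
    prompt: str          # structured prompt fed to the model
    ground_truth: str    # the known-correct verdict, e.g. "true_positive"

def evaluate(scenarios: List[Scenario],
             run_model: Callable[[str], str]) -> float:
    """Re-run every scenario and score the model against known truth."""
    correct = 0
    for s in scenarios:
        verdict = run_model(s.prompt)
        if verdict == s.ground_truth:
            correct += 1
        else:
            # Misses are the interesting part: log them for iteration.
            print(f"MISS: expected {s.ground_truth!r}, got {verdict!r}")
    return correct / len(scenarios)

# Swap in a different model or an updated toolchain and re-run:
# accuracy = evaluate(scenarios, run_model=my_new_model)
```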

🧪 Validate Detection and Response at Scale

Writing a rule is easy. Trusting it in production is not. Replay provides a way to test:

  • Detection logic against full telemetry
  • Workflow outcomes for automation or analyst review
  • SOC maturity through repeatable exercises

This lets you go beyond “the rule matched” and ask: Did the team respond correctly? Did the system help or hinder?
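
A regression-style check might look like the following Python sketch. The rule logic, field names, and expected alert count are invented for illustration; in practice they would come from a scenario's ground truth.

```python
from typing import Dict, Iterable, List

# A minimal sketch, assuming detections are plain functions over parsed
# log events. Everything below is hypothetical, not a Fluency rule.

Event = Dict[str, str]

def rule_failed_logins(events: Iterable[Event], threshold: int = 5) -> List[Event]:
    """Example rule: flag a user on their Nth failed login."""
    counts: Dict[str, int] = {}
    hits: List[Event] = []
    for e in events:
        if e.get("action") == "login_failed":
            user = e.get("user", "unknown")
            counts[user] = counts.get(user, 0) + 1
            if counts[user] == threshold:
                hits.append(e)
    return hits

def test_rule_against_replay(replayed_events: List[Event]) -> None:
    """Replay full telemetry through the rule and check expectations."""
    hits = rule_failed_logins(replayed_events)
    assert len(hits) == 1, f"expected exactly 1 alert, got {len(hits)}"

if __name__ == "__main__":
    # Synthetic stand-in for replayed telemetry.
    telemetry = (
        [{"action": "login_failed", "user": "alice"}] * 5
        + [{"action": "login_ok", "user": "bob"}]
    )
    test_rule_against_replay(telemetry)
    print("detection regression passed")
```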

Built for SOCs, MSSPs, and Educators

Replay was born from live cybersecurity training. It now powers:

  • Internal SOC onboarding
  • Managed services quality checks
  • Detection engineering regression testing
  • University-level instruction on real-world logs
  • Cyber range simulations for red-blue team evaluation

Whether you’re teaching a class, tuning AI, or building a stronger SOC, Replay gives you the data, structure, and tools to do it right.