How do AI Workflows differ from SOAR Playbooks?

Impossible Travel Workflow

You have two logins.

One from Spain. One from Switzerland. Eight minutes apart.

The system fires an alert: “Impossible travel.”

But here’s the problem—it’s not.

This is where traditional automation breaks down. The playbook did exactly what it was told: measure the time, measure the distance, flag the event. On paper, it’s a perfect rule. In practice, it’s a false positive.

What went wrong?

The answer reveals something deeper—not just about SOAR, but about the limits of rule-based security altogether. It’s a lesson every security team faces when automation matures: the problem isn’t that your logic failed. It’s that your logic was never designed to handle complexity in the first place.

Hard Logic in SOAR: Why Playbooks Break Down

Security Orchestration, Automation, and Response—SOAR—has become a core concept in modern security operations. Its promise is simple: take repetitive, structured tasks out of the analyst’s hands and automate them. But to understand why this promise often falls short in practice, we need to break SOAR down into its three fundamental functions: enrichment, investigation (analysis), and response.

Enrichment and response are the easier of the three. Both are mechanical tasks. With enrichment, you receive an event—say, a login from an IP address—and you want to add context to it. A SOAR playbook might append geolocation data, identify the ISP, or flag whether the address appears in a threat intelligence feed. These are lookups, and while they can be layered for accuracy (e.g., querying multiple geo-IP databases), they follow a predictable structure. They’re the plumbing of the security stack, and they scale well. Response is similar: the action follows from the analysis, such as quarantining a user or resetting certificates. These are direct requests to remote functions, typically RESTful APIs.
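
As a minimal sketch of that plumbing, with hypothetical internal endpoints standing in for real geo-IP, threat intelligence, and identity provider APIs:

  import requests

  def enrich_login(event: dict) -> dict:
      """Enrichment: bolt context onto the event with simple lookups."""
      ip = event["src_ip"]
      # Hypothetical internal services; a real stack would query geo-IP
      # databases and threat intelligence feeds here.
      event["geo"] = requests.get(f"https://geoip.example.internal/{ip}").json()
      event["threat_listed"] = (
          requests.get(f"https://ti.example.internal/blocklist/{ip}").status_code == 200
      )
      return event

  def respond_quarantine(user: str) -> None:
      """Response: a direct request to a remote function, e.g. a RESTful API."""
      requests.post("https://idp.example.internal/api/quarantine", json={"user": user})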

But enrichment alone doesn’t solve problems. It just decorates the data. The harder—and far more important—job is investigation: figuring out whether an event is an issue and deciding what to do about it.

This is where hard logic dominates. In SOAR, investigation is a playbook, a static set of instructions: If X and Y are true, then consider Z and do A. These playbooks function much like a flowchart written in code. They rely on clear thresholds, rigid conditions, and known paths. And at first, they work—particularly on the obvious cases.

Take “impossible travel,” a classic SOAR scenario. A user logs in from Spain, then eight minutes later logs in from Switzerland, hundreds of kilometers away. A playbook evaluates the distance between the two points and the time between events. If no person could physically make that trip in that window, the response is triggered—maybe an alert, maybe a forced token refresh.
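
A minimal sketch of that hard-logic check, with an illustrative 1,000 km/h speed ceiling (the helper and field names are assumptions, not any vendor’s implementation):

  from math import radians, sin, cos, asin, sqrt

  def haversine_km(lat1, lon1, lat2, lon2):
      """Great-circle distance between two points, in kilometers."""
      lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
      a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
      return 6371 * 2 * asin(sqrt(a))

  def impossible_travel(login_a: dict, login_b: dict, max_speed_kmh: float = 1000) -> bool:
      """Hard logic: flag when the implied speed between logins exceeds the ceiling."""
      hours = abs(login_b["time"] - login_a["time"]) / 3600
      if hours == 0:
          return True  # simultaneous logins from two places
      km = haversine_km(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
      return km / hours > max_speed_kmh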

It feels elegant. It works in the demo. But under real-world complexity, this kind of playbook breaks down, producing a high number of false positives.

The Illusion of Progress

Perhaps the most deceptive aspect of rule-based playbooks is how quickly they appear to work. The low-hanging fruit is easily automated: detecting logins from blocked countries, identifying known malware signatures, or flagging default credentials. These rules are straightforward, and their impact is visible. Alerts drop. Dashboards show success. Teams celebrate.

But this initial wave of automation conceals a deeper problem. Once the obvious scenarios are handled, what remains are the nuanced cases—the edge behaviors, the unpredictable anomalies, the combinations of context that resist easy categorization. And here, hard logic begins to fail.

As teams attempt to expand automation into these more complex areas, each new playbook takes longer to build and is more fragile. Slight changes in data formats, environmental conditions, or user behavior break the logic. The more sophisticated the rule, the more brittle it becomes.

This is the heart of the scaling problem. Hard logic excels at the simple—but complexity is where the real threats hide. The very success of automation at the beginning creates a trap: the illusion that the rest of the problem will yield just as easily. But it doesn’t. And that’s when security teams begin to realize that something else is needed. Something that can handle nuance, contradiction, and uncertainty.

AI Workflows are Soft Logic

To understand why AI Workflows are such a critical advancement in cybersecurity, we need to revisit a familiar problem: impossible travel. The concept is simple—if a user logs in from two geographically distant locations within a time frame that defies physical travel, that’s suspicious. But as straightforward as it sounds, accurately detecting this scenario reveals the limits of rule-based logic and the promise of soft logic.

Hard logic treats impossible travel as a binary condition: calculate the distance between two points, compare it to the time between logins, and flag it if the result exceeds human capabilities. This works well on paper—but the real world is rarely so clean. Users work remotely. VPNs and anonymizers skew geographic indicators. Authentication methods vary. And what looks like an impossibility may be perfectly legitimate.

Soft logic, on the other hand, embraces ambiguity. It works not by asserting rules, but by weighing evidence. Instead of asking “Did this event break a rule?” it asks, “How likely is this event to be legitimate, given what we know?” That shift—from certainty to probability, from rules to judgment—is the heart of the transition from SOAR-style automation to AI-driven reasoning.
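
To make the contrast concrete, here is a minimal sketch of evidence-weighted scoring. The signal names and weights are invented for illustration; a real system would tune or learn them:

  # Each signal votes on legitimacy; no single one is decisive.
  # Names and weights are illustrative assumptions, not tuned values.
  SIGNAL_WEIGHTS = {
      "known_vpn_exit": 0.30,
      "mobile_isp_matches_region": 0.20,
      "known_device": 0.20,
      "strong_auth": 0.15,          # e.g., FIDO2 token
      "asset_is_remote_workstation": 0.15,
  }

  def legitimacy_score(signals: dict) -> float:
      """Weigh evidence instead of asserting a rule; returns 0.0 to 1.0."""
      return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

  # Two supporting signals accumulate evidence without proving anything.
  score = round(legitimacy_score({"mobile_isp_matches_region": True, "strong_auth": True}), 2)  # 0.35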

How Splunk Handles Impossible Travel

Splunk offers one of the more sophisticated implementations of impossible travel detection within the realm of traditional SIEMs. Its logic centers on evaluating login events across a time window—typically 24 hours—and identifying the furthest geolocation pair associated with the same user during that interval. The simplified query below sketches the idea by comparing each login to the previous one for the same user, flagging implied speeds over 1,000 km/h within a two-hour window (the haversine math is spelled out in eval, since SPL has no such built-in function):

  index=auth_logs sourcetype=login_events
    | iplocation src_ip
    | eval login_time=_time
    | fields user src_ip login_time City Country lat lon
    | sort 0 user login_time
    | streamstats current=f last(login_time) as prev_time last(lat) as prev_lat last(lon) as prev_lon last(City) as prev_city last(Country) as prev_country by user
    | eval time_diff = login_time - prev_time
    | eval rlat1 = prev_lat * pi() / 180, rlon1 = prev_lon * pi() / 180, rlat2 = lat * pi() / 180, rlon2 = lon * pi() / 180
    | eval hav = pow(sin((rlat2 - rlat1) / 2), 2) + cos(rlat1) * cos(rlat2) * pow(sin((rlon2 - rlon1) / 2), 2)
    | eval distance_km = round(6371 * 2 * atan2(sqrt(hav), sqrt(1 - hav)), 2)
    | eval speed_kmh = round(distance_km / (time_diff / 3600), 2)
    | where time_diff > 0 AND time_diff < 7200 AND speed_kmh > 1000
    | table user login_time City Country src_ip prev_time prev_city prev_country distance_km time_diff speed_kmh

The workflow goes something like this (a minimal sketch in code follows the list):

  1. Aggregate logins for a given user across a specified time window.
  2. Resolve geolocation (latitude and longitude) from the IP addresses associated with those logins.
  3. Calculate pairwise distances between all unique location combinations.
  4. Identify the maximum separation—the two logins furthest apart geographically.
  5. Compare distance and time: if the time between those two logins is less than the minimum required to travel between them by realistic means, flag the user.
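
In Python, steps 3 through 5 might look like the following sketch; the great-circle helper repeats the one shown earlier, and the 1,000 km/h ceiling is an illustrative assumption:

  from itertools import combinations
  from math import radians, sin, cos, asin, sqrt

  def haversine_km(lat1, lon1, lat2, lon2):
      """Great-circle distance in kilometers (same helper as the earlier sketch)."""
      lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
      a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
      return 6371 * 2 * asin(sqrt(a))

  def flag_impossible_travel(logins: list, max_speed_kmh: float = 1000) -> bool:
      """Steps 3-5: pairwise distances, widest separation, feasibility check."""
      worst = max(
          ((haversine_km(a["lat"], a["lon"], b["lat"], b["lon"]), a, b)
           for a, b in combinations(logins, 2)),
          default=None, key=lambda t: t[0],
      )
      if worst is None:
          return False  # fewer than two logins in the window
      km, a, b = worst
      hours = abs(b["time"] - a["time"]) / 3600
      return hours > 0 and km / hours > max_speed_kmh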

This logic is impressive in scope and surprisingly effective for a rule-based approach. It represents a peak form of hard logic: well-tuned, deterministic, and precise. Yet it’s still constrained by several assumptions:

  • That users log in from IPs tied directly to their physical location.
  • That all logins within the time window are equally relevant or anomalous.
  • That the outermost pair defines the risk, regardless of context between.

And perhaps most importantly, it presumes that distance is the only variable that matters.

What it gains in mathematical clarity, it loses in contextual nuance. It cannot weigh the significance of the ISP, the identity provider, the login method, or the browser. It does not learn from history or refine its approach. It merely repeats a calculation. This is the fork in the road. Hard logic stops here. AI begins.

When the IP Isn’t the Person: The Hidden Complexity of Location

The strength of Splunk’s impossible travel logic lies in its simplicity. But its weakness lies there, too.

At the core of the impossible travel calculation are two fragile assumptions. First, that an IP address with geographic coordinates tells us where the user is. In some cases, that’s true. But in many, it’s not. And the gap between the IP’s “location” and the user’s reality is wide enough to undermine the entire premise. Second, that the properties of the network and how the authentication was performed are irrelevant.

Let’s walk through the layers of this problem.

VPNs and Proxy Tunnels

The most obvious complication is the use of a VPN. A user working remotely in Madrid may connect through a VPN node in New York, meaning their login appears to come from the United States. If, moments later, they connect to a different system without the VPN, their IP resolves to Spain. Hard logic compares these two locations and sees a physical impossibility—New York to Madrid in minutes. But the user never left their desk.

Beyond VPNs are more complex network behaviors: layer two routing, anonymizing proxies, corporate gateways, and Tor nodes. These don’t just obscure location—they sever any consistent link between the IP and the user’s device. The IP becomes an abstraction, sometimes shared by many users, sometimes transient, and always slippery.

It Depends

There’s also a qualitative difference between types of networks:

  • A home IP address may suggest a consistent user location.
  • A business office’s corporate IP address points to a known, fixed location.
  • A hotel network introduces uncertainty—travel may be valid.
  • A university subnet might serve thousands of users.
  • A mobile carrier can shift geographic indicators from moment to moment.

These distinctions matter. Analysts instinctively factor them in when making decisions. So should the system.
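
One way to encode that instinct is to let the network class scale how much the geolocation is trusted. The classes and weights below are illustrative assumptions, not vendor data:

  # How much to trust an IP's geolocation as the user's physical location.
  # Classes and weights are illustrative, not derived from any vendor data.
  LOCATION_TRUST = {
      "home_broadband": 0.9,
      "corporate_office": 0.8,
      "mobile_carrier": 0.6,   # region-bound, but towers shift coordinates
      "hotel": 0.5,
      "university": 0.4,       # one subnet, thousands of users
      "vpn_or_proxy": 0.1,     # the location says little about the person
  }

  def location_confidence(network_class: str) -> float:
      """Default to low trust when the network class is unknown."""
      return LOCATION_TRUST.get(network_class, 0.3)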

Further context includes how the user authenticated. Was it a password? A FIDO2 token? A federated identity like Okta or Azure AD? Different methods carry different implications for certainty and risk. The browser and device involved provide additional signal—was this a known user agent, or something suspicious?

Another point of context comes from the system, protocol versions, and browser type that are sometimes included in the logs.

The more we explore these factors, the more we find ourselves saying: it depends.

That phrase—so natural to human reasoning—is devastating to hard logic. Rules don’t do well with ambiguity. They prefer black and white, not shades of context. This is where soft logic shines.

How AI Is the Analyst

The failure of hard logic is not that it’s wrong—it’s that it’s narrow. It answers only the question it was explicitly told to ask, in precisely the way it was told to ask it. But most real-world problems in cybersecurity, especially those involving human behavior, aren’t that simple. They contain nuance. And nuance resists hard logic.

This is where the distinction between hard and soft logic becomes clear. Hard logic always leads to the same outcome given the same inputs. It is rigid. Soft logic, by contrast, begins with a different posture. It accepts that context matters. That the answer, quite often, is: it depends.

Take the case of impossible travel. A simple rule might say: “If two logins occur within 10 minutes of each other and are more than 1,000 kilometers apart, trigger an alert.” That rule may be precise, but it’s blind to the nuance behind the data. It doesn’t consider that one of the IPs is a known corporate VPN exit, or that it’s part of a wide IPv6 range tied to cloud infrastructure. It doesn’t ask what browser or device was used, whether the authentication method was password or token-based, or whether the destination IP is common for that user or that company.

A human analyst doesn’t just look at distance. They ask: What kind of access is this? Does it align with expectations? Is it justified? These are questions that rarely have binary answers. They depend on many small factors—factors that vary from one case to the next.

That’s why AI is so effective in this space. It does not apply rules in isolation. It evaluates the totality of the data the way an analyst would—by considering context, intent, supporting evidence, and the interplay between variables. Where hard logic says, “true or false,” AI says, “this is likely legitimate, given what I see.” That distinction is not just stylistic. It is foundational to why AI can handle what rule-based systems cannot.

A Real-World Example: Why AI Saw What the Rule Missed

To ground this in a concrete example, let’s return to the concept of impossible travel. Previously, we discussed how Splunk’s detection rule identifies anomalous logins by calculating the maximum distance between geolocated IP addresses within a fixed time range. Mathematically, it’s sound. If the distance exceeds what’s physically possible for a human to travel in that window, the event is flagged.

But as we’ve established, sound mathematics doesn’t always equate to sound judgment. Let’s explore how a real-world example exposes the difference between rule-based detection and AI-driven reasoning.

In this case, a user logged in twice within a short span of time: once from Spain, and once from Switzerland. The computed distance—over 800 kilometers—immediately triggered the hard logic. This, on paper, was a textbook example of impossible travel.

But what happened next is what distinguishes AI from rules.

The AI analyst didn’t stop at the equation. It examined which ISP served each login. The Spanish IP was tied to a mobile carrier, suggesting the user was on a mobile device and physically present in Spain. This aligns with expectations—mobile ISPs are region-bound and reflect actual user location. So far, the impossible travel alert appears justified: the user is physically in Spain.

Then the AI turned to the second login—this one from Switzerland. This IP belonged to a business-class ISP, typically assigned to fixed enterprise infrastructure. This implies that the user was accessing a system physically hosted in Switzerland. On its own, this supports the original alert.

However, the AI went further. It analyzed the asset name and system metadata and discovered that the machine being accessed was tagged as a remote workstation—a resource intentionally deployed in Switzerland for remote access. In other words, this was a valid remote session. The user was in Spain, accessing a machine they were authorized to use in another country. Not only was this not impossible travel—it was entirely normal for the role.

The traditional rule saw a violation. The AI saw the whole picture. It evaluated the ISP class, the authentication method, the system’s tagging, and the role-based context of the access. It answered not just “how far” but “what for” and “what does this mean.”
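
A sketch of that chain of reasoning in code, with the field names (isp_class, asset_tag) invented for illustration:

  def review_impossible_travel(event_a: dict, event_b: dict) -> str:
      """Re-evaluate a distance-based alert with the context an analyst would use."""
      # A mobile carrier IP is region-bound: the user is plausibly where it says.
      user_present = event_a["isp_class"] == "mobile"
      # A business-class IP on a machine tagged for remote access locates the
      # asset, not the user.
      remote_session = (
          event_b["isp_class"] == "business"
          and "REMOTE-WORKSTATION" in event_b.get("asset_tag", "")
      )
      if user_present and remote_session:
          return "dismiss: valid remote session, the user never traveled"
      return "escalate: distance unexplained by context"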

Here’s a simplified view of the data evaluated:

  Event | IP Address | ISP Type       | Geo Location | Device Type   | Asset Tag          | Conclusion
  1     | 77.227.x.x | Mobile (Spain) | Barcelona    | Mobile Device | N/A                | User is physically present
  2     | 46.140.x.x | Business (CH)  | Zurich       | Remote System | REMOTE-WORKSTATION | Remote workstation login

By analyzing these variables holistically, the AI dismissed the alert—not because it failed to understand the rule, but because it understood the context behind the exception.

This example makes one thing clear: AI doesn’t just mimic the analyst—it is the analyst. It follows the same process a skilled human would use: question the assumptions, cross-reference the data, and make decisions based on layered insight. Not rigid triggers, but a measured understanding of risk.

Why Soft Logic Wins — and How It Works in a Workflow

What makes Fluency’s AI different isn’t that it discards traditional logic—it builds on it. We’re not throwing out impossible travel as a detection category. We’re replacing the brittle, narrow methods used to evaluate it with broader, context-aware reasoning. Impossible travel remains a useful trigger, just as in tools like Devo or Splunk. But the analysis no longer depends on a rigid rule. It happens within a workflow, a structure that guides the AI through the type of evaluation we want while still allowing for flexibility in how the decision is made.

This is the shift: from hard logic to soft logic within a structured workflow. A traditional SOAR playbook runs a script of predefined checks. It’s fast, but it’s brittle. The moment something unexpected happens—or a new condition arises—the logic breaks, and we fall back to human analysts. Worse, those playbooks become difficult to maintain and scale. The more exceptions we add, the less manageable they become.

AI-based workflows solve that. They preserve the concept of playbooks but allow for nuance, judgment, and adaptability. They scale where SOARs do not—not just because they automate more, but because they adapt better. Soft logic makes it possible to capture the “it depends” of real-world decision-making without writing thousands of if-else statements. It also means we can maintain and evolve our logic over time without rewriting everything from scratch.
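
As a sketch of what such a workflow could look like in code (nothing here reflects Fluency’s actual implementation; llm_assess stands in for whatever model interface is used, and the thresholds are illustrative):

  def ai_workflow(alert, enrich, llm_assess, analyst_queue):
      """Trigger -> enrichment -> soft-logic assessment -> validation."""
      context = enrich(alert)  # ISP class, asset tags, auth method, device history
      verdict = llm_assess(
          question="How likely is this login activity to be legitimate?",
          alert=alert,
          context=context,
      )
      if verdict.likelihood >= 0.9:
          return "auto-close", verdict.rationale
      if verdict.likelihood <= 0.2:
          return "auto-respond", verdict.rationale
      # Ambiguous cases go to a human, with the AI's reasoning attached.
      analyst_queue.put((alert, verdict))
      return "escalate", verdict.rationale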

At the same time, these workflows create a more productive human environment. Entry-level analysts are still part of the process—but instead of building logic from scratch, they learn by reviewing and validating AI-driven analysis. It’s like handing someone a recipe instead of asking them to invent a dish. This guided process levels up talent faster, turning junior staff into senior contributors more effectively.

Ultimately, the structure remains: triggers, workflows, validation, improvement. But now the workflows are powered by soft logic, not rigid code. Analysts aren’t buried in the noise. They’re elevated above it—working alongside AI to refine detection, strengthen posture, and make security operations truly scalable.

That’s the future. And it’s working right now.