Who Watches the Watchers?

This article addresses a rising skepticism about whether vendors are sufficiently transparent, accountable, and secure.

Who Watches the Watchers?

Addressing Risk in AI Solutions

Two recent publications—an open letter by JPMorgan Chase and an analysis from Dark Reading—have reignited the conversation around third-party risk and transparency in cybersecurity. In both, the call is clear: the burden of trust is shifting, and vendors are now being asked not just to sell products, but to prove them. These discussions point to a rising skepticism about whether vendors are sufficiently transparent, accountable, and secure.

While the JPMorgan letter focuses on operational and contractual accountability in SaaS relationships, the Dark Reading piece extends the concern further—into the realm of AI, questioning the visibility and control enterprises truly have over AI-enabled platforms. Both reflect the growing tension between innovation and oversight, especially in an environment saturated with marketing hype and technological abstraction.

But in the rush to demand transparency, the discourse risks drifting too far into cynicism. We must ask ourselves: when did we stop trusting the watchers? And more importantly, how do we rebuild an infrastructure of assurance that doesn't rely solely on vendor promises or public reputation?

Background: Third-Party Risk, AI Trust, and the Speed of SaaS

JPMorgan’s open letter highlights a fundamental operational challenge: when enterprises onboard SaaS vendors, they inherit the vendor’s security posture. The letter emphasizes the lack of visibility into provider practices, an overreliance on certifications without operational proof, and vendor unresponsiveness during incidents. These are real, grounded issues—not speculative ones.

The Dark Reading article builds on this foundation but pivots the discussion toward AI. It frames the trust gap not just around SaaS vendors generally, but around AI specifically, particularly the way AI is embedded into SaaS at an unprecedented pace. This connection is valid: the rapid integration of AI into business platforms has outpaced our ability to govern it with the same rigor we apply to traditional systems. In most cases, SaaS providers don’t own the LLMs powering their features, introducing a second layer of third-party risk that is often opaque to the customer.

While this doesn’t mean AI systems are inherently untrustworthy, it does raise significant questions. There is widespread misunderstanding around AI security; an issue we’ve addressed in our recent paper on AI infrastructure vulnerabilities. The takeaway here is that AI transparency is now a third-party risk issue in its own right, and the combination of the JPMorgan letter and Dark Reading commentary brings this into sharp relief. AI is typically delivered as part of a SaaS model, and we must critically examine not only what AI is being used, but how it is being governed and validated.

Framing the Path Forward

To address the rising concerns around trust, transparency, and third-party risk, especially in the age of AI, we must take a more grounded approach. It’s easy to demand more transparency or point fingers at vendors, but solving this requires more than slogans. It requires a thoughtful reexamination of how we validate, certify, and govern the systems we depend on. So let’s take a step back and look at this challenge through three essential lenses.

First, we must ask the foundational question: Who do we trust, and who watches the watchers? The integrity of our entire assurance model depends on that answer. Second, we need to confront the reality of transparency in AI, including what can and cannot be made visible without compromising intellectual property or competitive differentiation. And finally, we must explore how standards and certifications offer a more robust framework for assurance than marketing-driven analyst reports ever could.

These three perspectives form the basis for a more mature, realistic, and effective approach to third-party risk and AI governance.

Who Watches the Watchers?

Before we can talk about governance and risk, we need to talk about trust, and more importantly, who earns it. In today’s cybersecurity landscape, too many organizations mistake visibility for validation and marketing presence for operational assurance. This confusion starts at the top of the industry, with events and analyst firms that shape perception more than they shape policy.

Take Black Hat and RSA, for example. These are not neutral technical summits, they are vendor-driven marketing expos. While they play an important role in showcasing innovation and facilitating industry dialogue, we must be honest: they are not forums for independent validation. The same applies to analyst firms like Gartner and Forrester. Despite their claims of independence, their models are financially sustained by the vendors they rank. The now-common reality is that every vendor in Gartner’s Magic Quadrant is also a paying client, including holdouts like SAP, which once stood as a counterexample. Presence in the quadrant may reflect business influence and marketing sophistication, but it is not an assurance of security maturity or risk posture. These analysts do not perform audits, do not verify third-party integrations, and do not assess adherence to operational standards. That’s not their job.

So whose job is it? The answer lies in the certifiers. Independent certification bodies are governed by strict guidelines to ensure they operate objectively, evaluating vendors against published frameworks like SOC 2 Type II, ISO/IEC 27001, or the newer ISO/IEC 42001 for AI governance. These certifications don’t promise perfection—but they do confirm that a vendor has been tested against standards designed by global consensus. That’s more than any analyst or expo booth can provide.

To dismiss these certifications because they don’t reflect personal or organization-specific preferences is to misunderstand the role of standards. Standards exist to provide a common language of assurance. If a product or vendor isn’t certified, that’s a gap worth examining. But if a buyer chooses to prioritize Gartner over SOC and ISO, then the failure isn’t in the system. It’s in who they chose to trust.

The Limits of Transparency

Transparency is essential—but it’s also misunderstood. In the push for visibility into AI behavior, many buyers are now demanding full transparency as a proxy for trust. While that desire is rooted in legitimate risk concerns, the reality is that complete transparency isn’t always possible, and in many cases, it isn’t even useful.

At Fluency, we go to great lengths to provide transparency into how our AI workflows operate: what decisions were made, what actions were taken, and how the analysis evolved. However, we don’t expose everything. The initial prompt generation, data transformations, and protective guardrails that shape the input and output of our AI models are core to our intellectual property. This is not about hiding risk; it’s about protecting the innovation that makes one solution better than another. How workflows are composed, how agent permissions are scoped, how we manage robustness and segmentation. These are competitive differentiators. If we published everything, it would be a blueprint for replication.

Moreover, even if we did open up the source code and internals, most customers couldn’t meaningfully analyze it. Expecting a C-level executive or even a skilled analyst to validate AI logic, guardrails, or vulnerability mitigation strategies is like asking them to audit C++ function calls for memory overflows. It’s not reasonable, and it’s not where risk should be managed.

That’s why the real goal of transparency isn’t exposing everything, it’s exposing enough. Enough to understand the architecture of control: Are AI calls being routed through secure interfaces like MCP? Is the LLM isolated from raw prompts? Is input sanitized, state segmented, and output reviewed before action? These are the kinds of questions that matter—and where transparency provides value.

The recent Cursor vulnerability highlights this well. Even with the use of MCP, if the prompt handling is flawed or if injection isn’t properly segmented, those controls can be bypassed. That’s not a transparency issue, it’s a design and implementation issue. And it’s not something an analyst firm or even a certifier can discover by reading a product brochure.

In fact, one of the greatest risks in the current AI wave isn’t vendor opacity, it’s DIY deployment. We’ve seen alarming examples of do-it-yourself configurations using open-source tools like Wazuh, N8N, and direct LLM access (e.g., ChatGPT or DeepSeek) stitched together to automate security tasks. These setups often lack any meaningful control structure, segmentation, or state handling. They’re brittle, insecure, and built by teams that often don’t fully understand the implications of what they’ve constructed.

And yet, these same organizations will say, “We don’t trust the vendors. We’ll build it ourselves.” That’s the real danger. It’s the cybersecurity equivalent of building a car with no brakes and then showing off how fast it goes.

Transparency is critical, but transparency without discipline is chaos, and transparency without expertise is dangerous. What we need is not full visibility into proprietary codebases, but confidence in the controls, the architecture, and the intent. That’s what certification frameworks aim to provide—and that’s where real governance begins.

Standards, Certification, and the Culture Behind the Product

When it comes time to make a purchasing decision, security leaders and executives are ultimately asking a simple question: How do I trust this product? The answer begins with structure. Before comparing features or outcomes, a vendor should be able to show alignment with recognized frameworks such as SOC 2, ISO 27001, or the newer ISO 42001 for AI. Even if full certification has not yet been achieved, the company should be clearly communicating their adherence to standardized practices and demonstrating their intent to follow structured, well-understood models like MCP for secure AI integration.

Many vendors today focus entirely on results. That focus is understandable, especially when buyers often look only at outputs and functionality. However, this approach can lead to a dangerous imbalance. The more complex and autonomous the technology becomes, the more important it is to evaluate the structure and discipline behind it.

This is where certifiers play a critical role. They provide independent evaluations of whether vendors are following the right processes and controls to manage risk. These certifications should carry greater weight than analyst opinions. Analysts can be helpful when comparing two already-certified solutions, but they are not responsible for evaluating how well a product handles third-party risk. That is outside their scope.

Unfortunately, conversations about these issues often occur at industry events where marketing pressure tends to dominate. Conferences such as Black Hat or RSA are important for visibility, but they are fundamentally sales-driven and vendor-funded. As a result, discussions around governance and certification are often overshadowed by competitive positioning and hype. This environment makes it harder for buyers to have grounded, rational conversations about risk and assurance.

There is also a business reality to consider. Companies need to get products to market in order to generate revenue, and that revenue often funds the cost of certification. Buyers sometimes expect products to be certified before launch, but in practice, most certifications are pursued after a product has already demonstrated viability. This does not reflect a lack of commitment, but rather a practical sequence. If AI had not achieved market success, we would not have standards like ISO 42001 today. The existence of the standard is proof that the market came first.

This is why organizational culture matters. A company that already holds certifications such as SOC 2 or ISO 27001 in other areas of the business, and is extending that discipline to its AI efforts, is demonstrating a security-minded approach. That behavior reflects a culture of responsibility, and it should be viewed as a key signal during product evaluation.

Today, that culture is under pressure. The demand to release AI features quickly has led many vendors to cut corners, both in security and in product design. Features are being rushed into production to meet investor expectations or to respond to market hype. This creates conditions where AI may be misused or poorly integrated, especially in areas like SIEM, where the role of AI is often misunderstood. In some cases, AI is being presented as a tool to help the analyst, when in fact it should be replacing repetitive tasks entirely. This confusion points to a lack of maturity in the vendor’s strategy.

Ultimately, AI is still maturing, and the frameworks designed to govern it are also evolving. The real question for buyers is not whether the product is perfect, but whether the company behind it shows a commitment to building things the right way. Certification is part of the answer, but so is culture. Responsible development starts with intent, and that intent shows up in the processes a company chooses to follow.

Conclusion: Trust, Risk, and Responsibility

At the heart of this discussion is a simple but urgent question: Do we trust the vendors, and who are the watchers? The answer begins with recognizing that the true watchers in this space are not analysts or marketing voices. They are the certifiers—organizations that evaluate systems based on documented best practices rather than popular opinion or hype.

Certifiers are grounded in structured standards, which means they often move more slowly than the market. This is not a flaw, but a function of their purpose. When new technologies emerge quickly, such as with AI, the lack of immediate certification creates a gap. That gap is too often filled by marketing narratives or analyst reports, neither of which are designed to assess third-party risk. This leads to poor assumptions and misplaced trust.

When evaluating AI solutions, the first step is to examine whether the vendor has a history of responsible development. Companies that already hold SOC 2, ISO 27001, or other certifications are more likely to have mature risk management processes. These credentials reflect an internal culture of structure and accountability. They suggest that the organization understands how to manage risk, not just how to build features.

It is also important for buyers to reflect inward. Many internal teams have strong technical skills and a desire to build their own solutions. But too often, these homegrown systems are assembled from open-source components without proper security design or long-term planning. As complexity increases, so does risk. When organizations experiment with AI in this way, without isolation, segmentation, or lifecycle governance, the result is often insecure and difficult to maintain.

The more responsible path is to rely on vendors who demonstrate professional discipline in their AI implementations. Buyers should look for evidence of secure design practices, adoption of standards, and transparent communication around architecture and control. This does not mean expecting full transparency into proprietary methods, but rather clarity around how data is protected and how AI is governed.

AI delivers a significant competitive advantage, and there is every reason for organizations to adopt it. But adoption must come with awareness. Decisions should not be based solely on product demonstrations or analyst endorsements. Instead, they should begin with an understanding of risk and how it is managed. That responsibility ultimately sits with the buyer, particularly at the executive level.

Rather than distrusting the entire ecosystem, organizations should seek out those who operate with integrity. Vendors who build with structure, certifiers who evaluate with care, and buyers who ask the right questions all contribute to a more secure and mature AI landscape. Trust is possible—but only when it is earned, examined, and placed where it belongs.

J.P. Morgan’s Open Letter to Suppliers  https://www.jpmorgan.com/technology/technology-blog/open-letter-to-our-suppliers

Dark Reading: “AI Everywhere, Trust Nowhere”

https://www.darkreading.com/vulnerabilities-threats/rsac-2025-ai-everywhere-trust-nowhere