The More You Know, the Less You Trust AI, and That’s the Problem
A new study shows the more you know about AI, the less you trust it—what that really means for hype, adoption, and the coming AI bubble.

Recently, a paper titled “Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity” attracted wide attention. Both the Wall Street Journal and Futurism covered it, distilling the research into a catchy headline: the more you know about AI, the less you trust it.
But here’s the thing — the paper isn’t just a quick survey or a narrow study. At 58 pages, it’s a substantial piece of academic work, and what makes it stand out is scale. The authors ran seven studies across multiple populations and methods, making it a comprehensive statistical evaluation of the relationship between AI knowledge and trust in AI.
And their conclusion is clear: people with lower AI literacy are more likely to perceive AI as magical, which sparks awe and increases receptivity. By contrast, those who understand AI better lose the sense of wonder, which often translates into skepticism.
When Wayne Mitzen takes an interest in something, he learns it in depth. Whether it’s music, hardware design, or AI, Wayne approaches each field with the same intensity. In one of his blog posts, he makes it plain that he has no awe about large language models (LLMs) or “intelligent” systems. Quite the opposite — he sees them for what they are: engines of prediction mathematics dressed up as something more. I do not see Wayne as a skeptic. Wayne is a realist. And this paper is really about the Waynes of this world.
Here is the problem this research poses for companies: selling an AI future depends on not explaining AI. To maintain the magic and the hype, there need to be fewer Waynes in the world. This shouldn’t surprise anyone. Sales and marketing have always thrived on misrepresentation and false promises.
When I was at McAfee, product names were notoriously dull. McAfee’s IntruShield was renamed IPS. Salespeople joked that McAfee would rename sushi as “dead fish on rice.” If McAfee marketing were to launch an AI Antivirus, they might just call it Most Likely Protection.
This paper suggests that in order for companies to sell AI, they need to keep consumers ignorant. And that points to an “AI bubble” ahead — where companies, consumers, and investors are buying into a fantasy that will never materialize.
Now, should companies invest in and incorporate AI? Yes. As Wayne says, it’s all “math tricks,” but it’s still wonderful, scalable mathematics. When AI writes responses to our questions, we perceive a cognitive answer where there is only a mathematical one. The most likely answer is often the one we need. Whether it’s answering customer service calls or reviewing alerts at one in the morning, AI workflows can often respond faster and with fewer errors than an entry-level employee. What AI does better than people deserves its own blog post. In short, don’t belittle AI just because it isn’t really intelligence. What it is remains impressive.
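Wayne’s “math tricks” framing can be made concrete: what reads as a cognitive answer is a model repeatedly picking the most probable next token. A minimal sketch of that idea, with an invented vocabulary and made-up probabilities (no real model involved):

```python
# Toy illustration, not a real LLM: the "answer" is just the most
# probable continuation under a learned probability distribution.
# These token probabilities are invented for the example.
next_token_probs = {
    "Paris": 0.72,   # hypothetical model output for "The capital of France is ..."
    "Lyon": 0.11,
    "France": 0.09,
    "the": 0.08,
}

# Greedy decoding: select the single most likely next token.
most_likely = max(next_token_probs, key=next_token_probs.get)
print(most_likely)  # prints "Paris"
```

No reasoning happens here; the “right” answer falls out because the distribution was shaped by training data. That is the mathematical answer standing in for a cognitive one.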
So what of this paper? It’s one of the first signs of AI growing up. This reckoning had to happen. As we adopt AI, we as users need to understand what it really is in order to take advantage of it. That means being realistic about what LLMs are actually doing, and what they are not.