This article is republished from The Conversation under a Creative Commons license.
The rapid spread of artificial intelligence has people wondering: Who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy—those who understand how AI works—who are most eager to adopt it.
Surprisingly, our new research, published in the Journal of Marketing, finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
This link shows up across different groups, settings, and even countries. For instance, our analysis of data from market research company Ipsos spanning 27 countries reveals that people in nations with lower average AI literacy are more receptive toward AI adoption than those in nations with higher literacy.
Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to report using it for tasks like academic assignments.
The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response, or plays a musical instrument, it can feel almost magical—like it’s crossing into human territory.
Of course, AI doesn’t actually possess human qualities. A chatbot might generate an empathetic response, but it doesn’t feel empathy. People with more technical knowledge about AI understand this.
They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works), and computational models operate. This makes the technology less mysterious.
On the other hand, those with less understanding may see AI as magical and awe-inspiring. We suggest this sense of magic makes them more open to using AI tools.
Our studies show this lower literacy-higher receptivity link is strongest for using AI tools in areas people associate with human traits, like providing emotional support or counseling. When it comes to tasks that don’t evoke the same sense of humanlike qualities—such as analyzing test results—the pattern flips. People with higher AI literacy are more receptive to these uses because they focus on AI’s efficiency, rather than any “magical” qualities.
It’s Not About Capability, Fear, or Ethics
Interestingly, this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived drawbacks.
This finding offers new insights into why people respond so differently to emerging technologies. Some studies suggest consumers favor new tech, a phenomenon called “algorithm appreciation,” while others show skepticism, or “algorithm aversion.” Our research points to perceptions of AI’s “magicalness” as a key factor shaping these reactions.
These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.