Magical Thinking, New Technology, and AI
A post for spooky season
Magic, Hope, and Delusion
Happy Halloween everyone! As it is spooky season, it is a good time to consume media on ghosts, witches, and magic. And a good Halloween tale, for me, involves things not being as they seem, whether to comedic or terrifying effect. My favorite Halloween movie ever, though, is It’s the Great Pumpkin, Charlie Brown. There’s something alternately heartwarming and inspirational, and tragically pathetic, about Linus’s unwavering faith that the Great Pumpkin will arrive. Even though Snoopy inadvertently tricks him, and even though everyone laughs at him, he still holds out hope that next year, the Great Pumpkin will appear.
Holding out hope and thinking positively can of course be very good things. But magical thinking, the belief that your thoughts or desires can influence real-life events and outcomes, can be quite problematic. Psychologists note that children often exhibit magical thinking as a normal part of development. Children believe in superstitions or in Santa, the Tooth Fairy, or the Great Pumpkin for that matter. But adults can engage in magical thinking as well, and one place where we see this sort of belief in magic emerge is around new technologies. And lately, that means AI (which seems inescapable at this point).
Technology and Magical Thinking – A Long History
New technologies that seem powerful and are poorly understood, whether because of genuine complexity or because of marketing campaigns designed to gloss over issues and convince people that the technology is powerful and amazing, are prone to being framed as magical. Magical thinking about AI is a large part of the AI hype machine, where AI technologies are relentlessly promoted as all-powerful, all-knowing, and able to do practically anything. An entire ecosystem has emerged to promote AI, and many news outlets dutifully parrot what AI companies are espousing. And the magic gets reinforced by an audience who, due to a lack of understanding or due to never being told how something works, end up thinking that AI really is all-powerful and magical.
A key way to cut through the hype and dispel the magic is media and information literacy. Recent studies reveal that the less people know about AI, the more they like it or believe in its “magical” capabilities. That is a concerning phenomenon, and it poses a problem for AI companies: if people learn more about your product and end up distrusting and disliking it, it seems you have a business problem (this coming from a liberal arts major and not an MBA). But I think it is also interesting to consider the relationship between magical thinking and technology in a broader context and to consider what’s driving the magic narrative around AI.
There’s a long history of freak-outs over new technology. People have been upset and disturbed by new technologies seemingly forever, from telephones to trains to the swivel chair (if you’re the Dowager Countess of Grantham). Many people were horrified by electric lighting when it first emerged, thinking it could leap out of walls and electrocute them or else finding it “garish.” Robert Louis Stevenson said electric streetlamps were “a lamp for a nightmare.” And, following the tragic electrocution and death of a New York City lineman in 1889, Harper’s Weekly declared “Nearly every wire you see in the open air is thick enough and strong enough to carry a death-dealing current. As things are at present, there is no safety, and danger lurks all around us.” Thanks for that upbeat take, Harper’s! A lack of understanding, the seemingly powerful and dangerous nature of a given technology, and the resulting concerns and fear are all a recipe for some degree of magical thinking. But oddly enough, a few decades later electric light was seen as clean and modern, and a concerted effort to “electrify” countries was underway. Funny how a combination of education and a shifting media narrative can change things.
As with past technological panics, a combination of media coverage, hype, lack of understanding, and fear can lead to magical thinking about new technologies like genAI, with people believing that a technology is so powerful it can harm them and, simultaneously, so powerful it can change the world. But when it comes to AI, there are some interesting differences to consider as well. For one, much media coverage of AI hypes up AI and its magical tendencies. Yet arguably any narrative infused with magical thinking, whether positive, as we often see with AI, or negative, as with the late 19th-century electric panic pieces, can leave people without the information needed to make informed decisions about using a new technology. And, with AI, it seems that not even the developers know what is going on or how it actually works. The magical thinking in this instance seems to be a widespread and shared phenomenon, amongst developers, experts, and the general public alike.
So, what is driving this magical thinking, especially among those within the AI industry? Part of the question is to what extent people are buying their own hype. Do they know the hype is absurd and are simply riding the AI bandwagon for as long as they can, or have they deluded themselves into accepting the magic as a way to stave off the dawning realization that we seem to be in an AI bubble? As Charlie Brown and Linus teach us, identifying the line between hope and optimism on the one hand and delusion on the other can be tricky.
Magical thinking about AI is part of a longer pattern of magical thinking about technology. But some of the magical thinking about AI also seems to stem from a desperation to make the outlandish claims about its power actually true. Raining on the parade of a magical narrative of belief isn’t always fun, but, whether through education or media coverage, it is necessary if we’re going to discuss AI and determine paths forward in an informed and grounded way.


