After the whirlwind, unprecedented OpenAI drama of the past 10 days — in which the OpenAI board fired CEO Sam Altman; replaced him in the interim with CTO Mira Murati; president Greg Brockman quit; nearly all OpenAI employees threatened to resign; and Altman was reinstated the day before Thanksgiving — I was certain that the US holiday weekend would be the perfect opportunity for Silicon Valley to take a break from AI hype and relax over turkey and stuffing.
It was not to be. At dawn on Thanksgiving Day, Reuters reported that before Altman was temporarily exiled, several OpenAI researchers wrote a letter to the board of directors warning of a “powerful artificial intelligence discovery that they said could threaten humanity.” The news went viral just as people were sitting down to turkey-laden tables from Palo Alto to Cerebral Valley: Apparently the project, called Q*, was believed to be a possible breakthrough in the effort to build AGI (artificial general intelligence, which OpenAI defines as “autonomous systems that surpass humans in most economically valuable tasks”). According to Reuters, the new model “was able to solve certain mathematical problems” and “though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success.”
What’s really behind the nonstop AI hype cycle?
The fact that there was zero pause in the AI news and social media cycle between the OpenAI boardroom soap opera and the Q* discussions — keep in mind, I’m not favoring a six-month pause in AI development, just a day or two to watch the Thanksgiving Day Parade and eat some leftovers in peace — made me wonder: What’s really behind the nonstop AI hype cycle? After all, the Q* excitement was over an algorithm that Nvidia senior AI scientist Jim Fan called a “fantasy.” He said that “in my decade spent on AI, I’ve never seen an algorithm that so many people fantasize about. Just from a name, no paper, no stats, no product.”
Sure, there’s some intellectual excitement, I suppose, as well as the usual media competition for the latest gossip and the next viral headline. There’s likely plenty of anticipatory greed, self-promotional messaging and typical human arrogance at play, too. But I wonder whether the inability to take even the briefest of breaks after the sleep-stealing OpenAI drama is simply about anxiety — through the lens of uncertainty.
AI uncertainty contributes to anxiety
According to a paper from University of Wisconsin researchers, uncertainty about possible future threats “disrupts our ability to avoid it or to mitigate its negative impact, and thus results in anxiety.” The human brain, the paper points out, has been called an “anticipation machine.” “The ability to use past experiences and information about our current state and environment to predict the future allows us to increase the odds of desired outcomes, while avoiding or bracing ourselves for future adversity. This ability is directly related to our level of certainty regarding future events – how likely they are, when they will occur, and what they will be like. Uncertainty diminishes how efficiently and effectively we can prepare for the future, and thus contributes to anxiety.”
This is true not only for non-tech ‘normies,’ but even for the top AI researchers and leaders of our time. The truth is, not even AI ‘godfathers’ like Geoffrey Hinton, Yann LeCun or Andrew Ng really know what is going to happen when it comes to the future of AI — so their Black Friday social media beefs, while based in intellectual arguments, are still nothing but predictions that do little to assuage our anxieties about what’s to come.
Lean into the unknown
With that in mind, we can look at the unceasing noodling, mulling, pondering, weighing, deliberating, deciphering, and analyzing that took place over the past week about OpenAI and Q*— including from me — as an expression of anxiety (and, perhaps, a big dollop of OCD) about how uncertain the future of AI seems right now.
As long as we keep up our excessive information-gathering, our reassurance-seeking, and our repetitive thoughts and worries, we never have to lean into the bottom line when it comes to the future of AI: It is uncertain. It is unknown.
Of course, I’m not saying we don’t need to debate and discuss and plan and prepare and keep pace with the evolution of AI. But surely it can all wait until we have finished our meal and taken a tryptophan-induced nap?
There are now exactly four weeks until Christmas. Perhaps everyone — the Effective Altruists and the Effective Accelerationists, the Techno-Optimists and the Doomers, the industry leaders and the academic researchers, the ‘move fast and break things’ folks and the ‘slow down and move carefully’ cohort — can agree to the briefest of pauses in AI hype for eggnog and cookies? Our mutual anxiety over the future of AI — and the accompanying hype — will still be there after New Year’s. I promise.