AI Utopia? 6% Of Companies Actually Use AI, Experts Predict Imminent Crash
NovumWorld Editorial Team

Only 6% of large companies globally have actually deployed enterprise AI tools, suggesting the AI revolution may be more mirage than reality.
- Gartner projects worldwide AI spending will reach $3.3 trillion by 2029, representing a compound annual growth rate (CAGR) of about 22%, though such projections may be wildly optimistic.
- Goldman Sachs’ Jim Covello estimates that widespread AI implementation could demand a $1 trillion investment in data centers, utilities, and applications, which raises questions about economic viability.
- Former MIT CSAIL Director Rodney Brooks anticipates another AI winter, arguing current large language models (LLMs) lack true imagination and substance, mirroring the cyclical nature of tech hype.
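Gartner's headline figures above imply a rough base-year spending level via the standard compound-growth formula, final = base × (1 + CAGR)^n. A minimal sketch of that arithmetic, assuming a 2024 base year and a five-year horizon to 2029 (both assumptions for illustration; the article does not state the projection's start year):

```python
# Back out the implied base-year spending from the projected 2029 figure.
# The 2024 base year and 5-year horizon are illustrative assumptions.
target = 3.3e12   # Gartner's projected worldwide AI spending in 2029, USD
cagr = 0.22       # reported compound annual growth rate (~22%)
years = 5         # assumed horizon, 2024 -> 2029

base = target / (1 + cagr) ** years
print(f"Implied 2024 spending: ${base / 1e12:.2f} trillion")
```

Under those assumptions, the projection implies roughly $1.2 trillion in current annual spending, which is itself an enormous figure; the skeptics' point is that the growth rate, not the arithmetic, is the speculative part.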
From IBM Watson Health to a $680K Write-Off: When Failed AI Wastes Real Money
The story of IBM Watson Health serves as a cautionary tale about the practical challenges and inflated promises of AI. The project, intended to revolutionize healthcare with AI-powered diagnostic tools, ultimately fell short of its ambitious goals. It highlights a common pitfall: the gap between theoretical potential and real-world implementation. The cost of these failures can be staggering.
Lucia Business Partners stepped in to help a client salvage what they could from 15 failed AI proofs of concept. What did they find? The client had flushed $680,000 down the toilet on AI tools that no one was actually using. These tools were supposed to streamline operations, improve decision-making, and ultimately boost the bottom line. Instead, they sat gathering digital dust, a testament to the overblown promises and underdelivered results that plague many AI initiatives. This isn’t an isolated incident; it’s a symptom of a larger problem in the tech industry.
The disconnect between hype and reality in AI is nothing new. Companies, seduced by the allure of cutting-edge technology, often rush into AI projects without a clear understanding of their needs, capabilities, or the data required to make these systems work. This leads to wasted resources, dashed expectations, and a growing sense of disillusionment with AI’s potential. In many cases, the technology simply isn’t mature enough, or the data infrastructure isn’t robust enough, to support the kinds of transformative applications that are promised. The IBM Watson Health case underscores the importance of approaching AI with a healthy dose of skepticism, grounded in a clear understanding of the technology’s limitations and the specific challenges of the problem being addressed.
Goldman Sachs’ Internal Divide: Is AI Worth the Trillion-Dollar Price Tag?
The debate over AI’s economic viability extends to the highest echelons of the financial world. As MIT Technology Review has reported, a clash of perspectives at Goldman Sachs underscores the uncertainty surrounding AI’s true value. Jim Covello, Head of Stock Research at Goldman Sachs, has voiced concerns about the massive infrastructure costs required to support widespread AI implementation, estimating that it could take a trillion-dollar investment in data centers, utilities, and applications.
This colossal price tag raises serious questions about the return on investment and the economic sustainability of the current AI boom. Is the potential productivity boost worth the exorbitant cost of upgrading infrastructure? Covello’s skepticism isn’t just about the money; it’s about the fundamental value proposition of AI. Are the promised benefits—increased efficiency, improved decision-making, and new revenue streams—truly achievable at scale? Or are they just a mirage, fueled by hype and wishful thinking?
Conversely, George Lee, Co-head of Geopolitical Advisory at Goldman Sachs, presents a more optimistic view. He believes AI will save workers time and increase productivity, ultimately justifying the investment. Lee argues that applications will emerge over time as the technology is refined and made more readily available. This internal division highlights the deep uncertainty surrounding AI’s economic potential. The conflicting viewpoints within Goldman Sachs reflect a broader debate within the industry. Is AI a transformative force that will revolutionize the economy, or is it an overhyped technology that will struggle to deliver on its promises? Only time will tell which perspective ultimately prevails.
Rodney Brooks’s “Hype Cycle”: Why the AI Winter is Coming
Former MIT CSAIL Director Rodney Brooks predicts that 2024 won’t be a golden age for AI, noting the current fanfare is “following a well-worn hype cycle.” He anticipates another AI winter, a period of disillusionment and reduced investment following the current surge of excitement. Brooks believes current LLMs lack true imagination and genuine substance.
Brooks is not alone in his skepticism. Contrarian investor Rajiv Jain of GQG Partners believes that the tech sector and companies involved in the AI infrastructure buildout are exhibiting dotcom levels of overvaluation. This comparison to the dotcom bubble is particularly alarming, suggesting that the current AI boom could be built on shaky foundations. The rapid rise in valuations, fueled by investor enthusiasm and media hype, may not be supported by real-world revenue and profitability.
The “hype cycle” is a well-documented phenomenon in the tech industry. New technologies often go through a period of inflated expectations, followed by a crash as the limitations and challenges become apparent. The current AI boom appears to be following this pattern, with valuations soaring to unsustainable levels and promises of transformative change that may be difficult to deliver. The question is not whether an AI winter will come, but when and how severe it will be. Investors and companies that are aware of this cyclical pattern are better positioned to weather the storm. Those who are caught up in the hype may face a rude awakening.
Air Canada’s $0 Refund: When Chatbots Give Bad Advice
The real-world limitations of AI are often exposed in customer service interactions. The case of Air Canada’s chatbot providing incorrect refund information, leading to a legal dispute, illustrates the potential pitfalls of relying on AI for critical tasks. In this instance, the chatbot, designed to assist customers with inquiries and bookings, provided inaccurate information about the airline’s refund policy.
This error resulted in a customer being denied a refund to which they were rightfully entitled. The customer pursued legal action, arguing that they had relied on the chatbot’s advice in making their travel plans. Air Canada initially argued that it was not responsible for the chatbot’s mistakes, but the tribunal ultimately ruled against the airline, holding it liable for the inaccurate information provided by its AI-powered assistant. This case has significant implications for companies that are increasingly relying on chatbots and other AI-powered tools for customer service. It highlights the importance of ensuring that these systems are accurate, reliable, and transparent.
Companies must also be prepared to take responsibility for the mistakes made by their AI systems. Simply disavowing responsibility, as Air Canada initially attempted to do, is not an acceptable solution. The Air Canada case serves as a wake-up call, reminding companies that AI is not a magic bullet. It requires careful planning, implementation, and oversight. Companies must also be prepared to deal with the consequences when these systems fail.
The AI “So What?”: Biased Data and Job Displacement Remain Untamed
Beyond the hype and the technical challenges, lies a deeper question: what is the real-world impact of AI? The World Economic Forum identifies the proliferation of misinformation and disinformation as the leading short-term risk, magnified by the widespread adoption of generative AI. The ability of AI to generate realistic-sounding but false information has the potential to undermine trust in institutions, erode public discourse, and even incite violence.
Bias in AI systems remains a significant concern. AI models are trained on data, and if that data reflects existing biases, the models will perpetuate and amplify those biases. This can lead to discriminatory outcomes in areas such as hiring, lending, and criminal justice. The potential for job displacement due to AI automation is another major concern. While some argue that AI will create new jobs, others fear that it will lead to widespread unemployment, particularly in low-skill and middle-skill occupations. These “so what” questions are crucial for understanding the true impact of AI.
They require a critical examination of the technology’s potential benefits and risks, as well as a thoughtful consideration of the ethical and societal implications. Jennifer Chayes, Dean of Berkeley’s College of Computing, Data Science, and Society, believes 2024 will be a double-edged sword, with rising AI capabilities alongside increasing issues in AI safety and security, such as deep fakes and voice-cloning scams. Simply embracing AI without addressing these underlying issues is a recipe for disaster.
The Bottom Line
The AI narrative feels a little too close to 17th-century tulip mania. Don’t drink the Kool-Aid.