Why Overinvesting In Generative AI Could Be A Trap

This article was originally published on Forbes.

Generative AI is the hottest property in corporate innovation. It offers such extraordinary potential for transformation that no company feels it can afford to be left out. The pace of investment is so frenzied that AI chip maker Nvidia has seen its revenues explode, pushing its valuation above $1 trillion. But is there a risk of spending too much, too fast? The lesson of earlier innovation trends, such as "big data," is that companies will waste money unless they increase the pace of learning to match the pace of investment.

Disruptive innovation emerges from the fringes, and that makes corporate executives nervous. They want to be the biggest player on the block, not the one playing catch-up. Faced with the threat, or opportunity, of disruptive innovation, corporate executives tend to overinvest to put themselves in the lead, where they feel they belong. A journalist friend of mine quoted his CEO as saying, "We are going to spend $2 billion on AI. I don't know what on, but we are going to invest." That sounds bold, but it could also be a danger sign. Here is why, and what to do about it.

Sin of Overinvesting

Amid the "big data revolution" of the early 2010s, GE launched a bold vision to become a "top ten software company." It saw the possibility of an Industrial Internet of Things, with sensors gathering data about every aspect of a manufacturing plant's operation and feeding it back to the cloud for analysis in real time. GE forecast a market worth $500 billion by 2020 and committed itself to achieving first-mover advantage. It tripled its R&D budget, built a 1,000-person software division, and launched its own big data platform, Predix. Five years later, the effort had failed. The CEO was fired, and the company dropped out of the Dow Jones Industrial Average for the first time. A new CEO cancelled the strategy, the legacy business reasserted control, and GE's ambition to be a software firm folded.

A decade on from the GE Predix strategy, the market is only now slowly starting to mature. GE's problem was that it built a big data platform meant to fit every firm and every type of problem. That generic solution was a mismatch for the diversity of the manufacturing sector: makers of food and beverages, automobiles, and pharmaceuticals have very different needs. In addition, industrial IoT was a totally new category. Nobody knew what it could or couldn't do, why they wanted it, or how much they should invest. GE treated this emerging, uncertain market the same way it treated the mature ones in which it operated, making assumptions about what customers wanted and how it would deliver the service that were rooted in its old business model.

Most of the assumptions GE made were about non-technical topics: the priorities of customers, the similarity between manufacturers, the ease of capturing data, the readiness of IT organizations to support a new role in the business. These critical assumptions turned out to be toxic to the strategy, but they were all knowable in advance. GE lacked the patience, and perhaps the humility, to find out what potential customers thought of its strategy before launching it. Millions of dollars could have been saved by running a series of customer interviews in which GE listened openly to what its target users wanted, then using business experiments to test how they would respond to the offering.

De-Risk Investments

The key to running business experiments is to spend only as much as you need to learn whether your assumptions are correct or, more likely, where you are wrong and need to adapt. Executives making large investments in AI in 2023 need to learn this lesson. Take seriously the task of de-risking the business models that generative AI enables, rather than assuming that the newness of the technology will carry all before it. Instead of leaping into new services, be disciplined about using small, relatively cheap experiments to test your hypotheses before, not after, you commit resources. The typical startup is forced to de-risk its business model because it starts with relatively few resources. That scarcity forces founders to find out what customers want so they can use the evidence in a pitch to venture capitalists. Through successive rounds of funding, entrepreneurs learn that they need hard evidence of market traction to convince investors.

Ironically, corporate managers often resist the start-small, go-slow, and-learn approach on the grounds that it is "not what a startup would do." One chief strategy officer told me recently how he proposed a series of small-scale pilots to find out whether his company could launch a digital-only bank. His chairman objected: "We should act like a startup and move now on the opportunity." A year later, they cancelled the project and wrote off the $500 million investment because the scale of marketing spend required was beyond their means. This was a non-technical risk, knowable before they started.

Henley Business School's Narendra Laljani argues, in a chapter for our new book, Corporate Explorer Fieldbook, that every business has a "mental model" that explains its world: the unconscious, unarticulated, and unexamined assumptions and beliefs about what it takes to be successful. When the pressure is on, a business defaults to this mental model in its decisions, and for corporates the model is often "go big or go home."

The fact that generative AI could be an innovation on the scale of the printing press is not a license for indiscriminate spending. Leaders making investments in AI need to beware the mental model of past successes that seems to justify "go big or go home" decisions. Instead, de-risk your innovations with rapid experiments that test the critical assumptions on which an investment is based. Fortune favors the learner, not the brave.