Signs of an imminent pop in the artificial intelligence (AI) economy bubble are clear to anyone who has been paying attention. Company valuations are wildly out of sync with revenues; financing for AI-related initiatives has shifted from equity to credit models, which analysts regard as riskier; and firms are increasingly relying on dizzying arrangements to prop each other up. If the sector avoids a pop, it will likely be because the United States and other governments pre-emptively backstop what they see as a sector too big and valuable to fail.
But is the AI sector too valuable to fail? Whatever pre-emptive or post-pop measures are considered, governments need to incorporate more than economic value into their policy calculations. Deciding which applications, infrastructure and other assets are worth protecting, and which should be set adrift, depends on assessing their implications for social, cultural and democratic value, in addition to their potential contribution to productivity and economic growth. Viewed through a more robust conception of value, it may be that parts of the AI economy are too big to be allowed to succeed.
Good and Bad Bubbles
Innovation expert William H. Janeway argues that what, if anything, we should do about a speculative bubble depends on its focus: “Do the assets that attract speculation have the potential to boost economic productivity when deployed at scale?” If the assets have productive potential, we might have a case for rescuing key technologies, assets and firms.
For Janeway, the credit bubble of 2004–2007 — which led to the global financial crisis of 2008–2009 — was focused on highly complex credit mechanisms that generated no real value. By contrast, the tech bubble of the 1990s — which led to the dot-com crash of 2000–2001 — was focused on the internet and its underlying physical infrastructure, which ultimately generated substantial gains in productivity and growth. The hard question we face now, as Janeway writes, is: “Where does the AI bubble fit on this spectrum?”
While Janeway helpfully prompts us to recognize that a bubble might have a good or bad focus, his productivity lens is far too narrow a criterion for the decisions we face. We need to consider not only the potential economic benefits and risks of AI, but also its social, cultural, political and environmental dimensions. With an expanded set of criteria, distinguishing between good and bad AI becomes even more fraught.
There may be genuine social and economic value in AI that, for example, assists (but does not replace) physicians performing health diagnostics, improves quality control in manufacturing and helps people learn Indigenous languages at risk of disappearing. On the productivity front, some recent studies suggest that few firms adopting AI are seeing gains, while other studies show positive signs — an uncertainty consistent with past technologies that saw early struggles before substantial returns. Even if the productivity gains fail to emerge before the pop, there may be reason to rescue underlying infrastructure and stranded assets if we think AI can generate other kinds of value.
At the same time, we are awash in AI slop — low-value and often poor-quality images, videos, text and audio produced with minimal effort using generative AI. While cute videos of panda bears walking fashion runways might appeal to some, the applications used to generate them are also used to make content that causes substantial social and political harm. AI has been used to generate deep-fake videos that aim to disrupt elections, victimize women and girls, and undermine our trust in the possibility of a shared reality. Scholars, philosophers and activists have been cataloguing and analyzing these harms for nearly a decade, but their insights are unlikely to get a hearing in discussions about AI bailouts.
What we do in the face of the AI bubble depends on our assessment of the value produced or damaged by different kinds of AI. Applications and underlying infrastructure that make, or have real potential to make, social, cultural or economic contributions might warrant public support. At the same time, because some applications have the potential to both create and destroy value, those we rescue should be subject to better regulation. The key point is that in thinking about whether and how to shepherd the AI economy through its post-bubble phase, democratic communities need to do the hard work of exercising collective judgment, making distinctions and wielding sovereign power to shape the AI future we want.
Too Big to Succeed?
In addition to thinking about what to do with specific AI applications and content, the AI bubble presents both a challenge and an opportunity to think about whether and which tech firms ought to be propped up. Past speculative bubbles have confronted governments with the question of whether certain firms are “too big to fail” because of their importance to the economy. The AI bubble gives rise to the question of whether some firms are too big to succeed because of the nature and scale of their influence on society and democratic politics.
While some firms sincerely aim to develop AI that contributes social and economic value, others knowingly enable malicious state and non-state behaviour, including surveillance, kidnappings and other human rights violations. Democracies pondering what to do with firms in the face of the AI bubble should be mindful not merely of the economic contribution of AI firms, but also of whether their products and services advance or damage human rights and civil society.
Moreover, the fact that some tech firms — whether well intentioned or not — have enormous economic, cultural and political power with which they can undermine public discourse, bend political decision making to their interests and threaten democratic governance is itself a problem. An alliance of competition advocates and friends of democracy might find that the AI bubble offers a golden opportunity to prompt governments to reflect on which tech firms are too big to succeed and to introduce better regulation to limit the economic and political clout of those that survive.
Choosing to See
In his dystopian but hopeful novel Blindness, the late Nobel laureate José Saramago imagines a community experiencing an epidemic of “blindness.” While navigating their perilous condition and the dark side of human nature that the crisis surfaces, some of Saramago’s characters begin to realize that their blindness might be a literal manifestation of the moral blindness that characterized their pre-epidemic lives. One character suggests, “I don’t think we did go blind. I think we are blind. Blind but seeing. Blind people who can see, but do not see.”
For many, the rapidly advancing age of AI has been accompanied by a selective moral blindness. Focused far too much on the potential economic benefits of AI, decision makers have paid too little attention to the voices of those who have witnessed or experienced its real harms. If and when the AI bubble bursts, addressing the financial and economic impact will almost certainly take centre stage. Our lives and communities will be much better off if, in thinking about public bailouts, we incorporate social, cultural and democratic values into the deliberations.