AI firms have pushed the narrative that AI can replace humans in most jobs, but so far, the technology has done little more than assist programmers and copywriters in their work.
In the rush to capitalise on the generative artificial intelligence (genAI) gold rush, one potential outcome is rarely discussed: what if the technology never works well enough to replace human workers, if companies fail to integrate AI effectively, or if the majority of AI startups simply collapse?
Current estimates suggest that major AI firms face an $800 billion revenue shortfall. So far, genAI's productivity gains are minimal and largely limited to developers and copywriters. While genAI offers neat and helpful tools, it has yet to become the engine of a new economic era. This future, in which AI is useful but not revolutionary, differs sharply from the one dominating news headlines and the narrative AI firms push to fuel massive investment.
Indispensable or indefensible? The cost problem
The growing question is how genAI firms will generate sustainable revenue, as running free and cheap subscription services like ChatGPT and Gemini incurs enormous computing costs.
OpenAI CEO Sam Altman has been candid about these expenditures, once quipping that every time ChatGPT says “please” or “thank you,” it costs the firm millions. Altman has stated that even paid “pro” accounts lose money due to the high computational power required for each query.
Like many startups, genAI firms have followed the classic playbook: burn through cash to attract and lock in users with a "killer product." However, most successful tech giants thrive on low-cost products funded primarily by advertising. When companies squeeze their platforms to extract new revenue, the result can be what journalist Cory Doctorow termed "enshittification": the gradual decline of platforms as user experience is sacrificed to profit, which here would likely mean an increase in advertisements to offset the losses from providing free services.
OpenAI is reportedly considering introducing ads to ChatGPT, though it claims it is being "very thoughtful and tasteful". It is too soon to tell whether this model will work, as advertising revenue may not be enough to justify the massive, ongoing infrastructure spending required to power genAI models.
The hidden costs: Copyright and liability
Another looming problem making genAI a financial liability is copyright. Most major AI firms are either facing lawsuits for using copyrighted content without permission or are entering into costly licensing contracts.
GenAI models "learned" by scraping data of dubious provenance, including copyrighted books and nearly all publicly shared content online. For example, one model can reportedly recall 42 per cent of the first Harry Potter novel from memory. Firms face a significant financial headache on two fronts: lobbying for copyright exemptions and paying off publishers and creators to protect their models.
The American AI startup Anthropic proposed paying authors around $3,000 per book to train its models, a settlement that quickly swelled to $1.5 billion before being thrown out by courts for being too simplistic. Anthropic’s current valuation of $183 billion could be quickly consumed by escalating legal costs. The consequence of these mounting costs is that AI risks becoming a toxic asset: something useful but not inherently valuable or easy to own.
The threat of cheap or free genAI
Meta has strategically released its genAI model, Llama, as open source, allowing anyone with a decent computer to run a local version for free. Similarly, the existence of other open models, which are often “good enough” and cheaper than their commercial counterparts, disrupts the high valuations placed on commercial AI firms.
When Chinese firm DeepSeek released an open model that performed on par with commercial models, it momentarily tanked AI stocks. Whether DeepSeek’s motives were competitive or ethical, its success contributes to growing doubts about the perceived value of high-cost genAI.
These open models, the by-products of intense industrial competition, are ubiquitous and increasingly accessible. As they improve, commercial AI firms will find it harder to sell their services against free alternatives, making investors more skeptical and potentially drying up seed money.
Can AI ever be owned?
The idea that genAI may be worth far less than assumed stems from the nature of the knowledge it is trained on. The best models are trained on the world's collective knowledge, a body of information whose true price is impossible to calculate.
Ironically, AI firms’ efforts to commercialise this collective knowledge may be what ultimately damns their products. The systems may be so indebted to collaborative intellectual labor that their outputs cannot truly be owned.
If genAI fails to generate sustainable profits, the consequences will be mixed:
- Creators pursuing licensing deals may be out of luck, as there will be no big checks from companies struggling with liability.
- Progress on genAI could stall, leaving consumers with “good enough” tools that are free to use.
In this scenario, AI firms become less powerful and the technology less threatening, which might be perfectly acceptable. Users would still benefit from accessible, functional tools while being spared from another round of overhyped ventures doomed to fail. The threat of AI being worth less than anticipated may be the best defense against the growing power of big tech today. If the business case for generative AI proves unsustainable, what better place for such an empire to crumble than on the balance sheets?
Inputs from PTI
