An AI sign is seen at the World Artificial Intelligence Conference in Shanghai, July 6, 2023.
Aly Song | Reuters
The buzzy generative artificial intelligence space is due something of a reality check next year, an analyst firm predicted Tuesday, pointing to fading hype, the rising cost of running the technology, and growing calls for regulation as signs that it faces an impending slowdown.
In its annual roundup of top predictions for the future of the technology industry in 2024 and beyond, CCS Insight made several predictions about what lies ahead for AI, a technology that has led to countless headlines surrounding both its promise and pitfalls.
The main forecast CCS Insight has for 2024 is that generative AI “gets a cold shower in 2024” as the reality of the cost, risk and complexity involved “replaces the hype” surrounding the technology.
“The bottom line is, right now, everyone’s talking generative AI, Google, Amazon, Qualcomm, Meta,” Ben Wood, chief analyst at CCS Insight, told CNBC on a call ahead of the predictions report’s release.
“We are big advocates for AI, we think that it’s going to have a huge impact on the economy, we think it’s going to have big impacts on society at large, we think it’s great for productivity,” Wood said.
“But the hype around generative AI in 2023 has just been so immense, that we think it’s overhyped, and there’s lots of obstacles that need to get through to bring it to market.”
Generative AI models such as OpenAI’s ChatGPT, Google Bard, Anthropic’s Claude, and Synthesia rely on huge amounts of computing power to run the complex mathematical models that determine the responses they give to user prompts.
Companies have to acquire high-powered chips to run AI applications. In the case of generative AI, it’s often advanced graphics processing units, or GPUs, designed by U.S. semiconductor giant Nvidia, that large companies and small developers alike rely on to run their AI workloads.
Now, more and more companies, including Amazon, Google, Alibaba, Meta, and, reportedly, OpenAI, are designing their own specific AI chips to run those AI programs on.
“Just the cost of deploying and sustaining generative AI is immense,” Wood told CNBC.
“And it’s all very well for these massive companies to be doing it. But for many organizations, many developers, it’s just going to become too expensive.”
CCS Insight’s analysts also predict that AI regulation in the European Union — often the trendsetter when it comes to legislation on technology — will face obstacles.
The EU will still be the first to introduce specific regulation for AI — but this will likely be revised and redrawn “multiple times” due to the speed of AI advancement, they said.
“Legislation is not finalized until late 2024, leaving industry to take the initial steps at self-regulation,” Wood predicted.
Generative AI has generated huge amounts of buzz this year from technology enthusiasts, venture capitalists and boardrooms alike, as people became captivated by its ability to produce new material in a humanlike way in response to text-based prompts.
The technology has been used to produce everything from song lyrics in the style of Taylor Swift to full-blown college essays.
While it demonstrates AI’s huge promise, it has also prompted growing concern among government officials and the public that the technology has become too advanced and risks putting people out of jobs.
Several governments are calling for AI to become regulated.
In the European Union, work is underway to pass the AI Act, a landmark piece of regulation that would introduce a risk-based approach to AI — certain technologies, like live facial recognition, face being barred altogether.
In the case of large language model-based generative AI tools, like OpenAI’s ChatGPT, the developers of such models must submit them for independent reviews before releasing them to the wider public. This has stirred up controversy among the AI community, which views the plans as too restrictive.
The companies behind several major foundation AI models have said that they welcome regulation, and that the technology should be open to scrutiny and guardrails. But their approaches to how AI should be regulated have varied.
OpenAI’s CEO Sam Altman in June called for an independent government czar to deal with AI’s complexities and license the technology.
Google, on the other hand, said in comments submitted to the National Telecommunications and Information Administration that it would prefer a “multi-layered, multi-stakeholder approach to AI governance.”
A search engine will soon add content warnings to alert users that material they are viewing from a certain web publisher is AI-generated rather than made by people, according to CCS Insight.
A slew of AI-generated news stories are being published every day, often littered with factual errors and misinformation.
According to NewsGuard, a rating system for news and information sites, there are 49 news websites with content that has been entirely generated by AI software.
CCS Insight predicts that such developments will spur an internet search company to add labels to material that is manufactured by AI — known in the industry as “watermarking” — much in the same way that social media firms introduced information labels to posts related to Covid-19 to combat misinformation about the virus.
Next year, CCS Insight predicts, the first arrests will be made of people who commit AI-based identity fraud.
The company says that police will make their first arrest of a person who uses AI to impersonate someone — either through voice synthesis technology or some other kind of “deepfakes” — as early as 2024.
“Image generation and voice synthesis foundation models can be customized to impersonate a target using data posted publicly on social media, enabling the creation of cost-effective and realistic deepfakes,” said CCS Insight in its predictions list.
“Potential impacts are wide-ranging, including damage to personal and professional relationships, and fraud in banking, insurance and benefits.”