We ask chatbots and scour 2025 AI prediction reports to bring you what could be the four most likely advancements in artificial intelligence this year.
What do some of the most popular artificial intelligence (AI) chatbots say the most likely AI advancements will be in 2025? Euronews asked them to find out.
OpenAI’s ChatGPT, Microsoft’s Copilot, Perplexity AI, and Google’s Gemini gave our team very different responses, but with some common threads.
Anthropic’s Claude refused to answer the question, however, because it said its knowledge ends in April 2024.
We also compiled expert analysis to look into the four most likely advancements for AI this year.
‘Workflows fundamentally changed’ with AI agents
Both consulting experts and AI chatbots agree that this year will be the one where businesses take full advantage of AI agents: a type of AI that can make decisions and perform tasks without human intervention.
“In many cases, AI will take over routine or repetitive tasks, freeing up human workers to focus on strategic and creative activities,” according to ChatGPT.
Examples of those tasks could be customer inquiries, first drafts of software code, or turning design ideas into draft prototypes, according to the 2025 AI predictions from audit company PricewaterhouseCoopers (PwC).
According to consulting firm Deloitte’s 2025 AI predictions report, 25 per cent of companies that already use AI will be ready to deploy AI agents by the end of the year. The firm said that this number is expected to grow to 50 per cent by 2027.
The year will see “workflows fundamentally change,” with AI taking on these administrative tasks under human supervision, the report continued.
Commercial expansion in AI agents would follow the wider trend in 2025 of businesses adopting more AI technology.
The International Data Corporation (IDC) estimates that worldwide spending on AI will reach roughly $632 billion (€605.1 billion) by 2028.
Narrow or industry-specific AI
This year will also bring advancements in industry-specific AI, or what’s called “narrow AI,” according to the AI chatbots Perplexity and Copilot.
Catherine Breslin, founder of AI consulting firm Kingfisher Labs, said as AI becomes more advanced, it is time for professionals in fields like law, medicine, and space to think about how it can enhance their work.
“It’s not necessarily hard to make it work in a specific domain,” Breslin said. “It just takes some work to sort of figure out what’s useful in a particular domain”.
AI tools Perplexity and ChatGPT predict that AI will branch out even further in 2025 in the medical field, especially for drug and product development.
So far, Breslin said, AI has been used in medicine to ease some of the administrative burden, such as taking notes.
“What are the other places in medicine [where] it really would be useful?” Breslin said, referencing what professionals will be asking themselves more and more in 2025.
One of the benefits of narrow AI is that companies can train it with small or medium-sized language models, consuming fewer resources over time, according to Kate Devlin, professor of AI and society at King’s College London.
AI in our devices
The next thing to expect in 2025 is even more AI-integrated devices, after multiple companies started rolling out smartphones that use AI last year.
By the end of 2025, generative AI is predicted to feature in roughly 30 per cent of all devices, according to a report from Deloitte.
That number goes up to 50 per cent when talking about AI-enhanced laptops.
Industry-specific AI is also easier to host on phones, Breslin said, so companies can create apps that will not need an internet or data connection to work.
“If you look at some of the models, like ChatGPT or Meta’s Llama, these are really big models that… you need really hefty computing power to work with and not everybody has that computing power,” Breslin said.
“You also need them to be connected to the Internet… so that’s not ideal in a lot of cases”.
This is already starting to happen, Breslin said. She pointed to Microsoft’s work on small language models, like Phi-4, which the company says excels in “complex reasoning” in areas like math or advanced language processing.
The rise of multimodal AI
In 2025, AI models are going to get better at generating different types of content, such as text, images, and speech, at the same time.
This type of AI system, called a multimodal system, processes information from text, images, audio, and video to give users a more well-rounded response to their questions or to produce a piece of media, according to a 2025 predictions blog from Google.
One example of how it could be used is to analyse market commentary videos, taking into account tone of voice and facial expressions to give people a “more nuanced understanding” of how investors are feeling about the market, Google’s report explained.
Multimodal AI could also analyse data like noise and vibrations in a manufacturing plant to proactively identify needs on the factory floor.
The EU got its first taste of multimodal AI this year with the release of the latest version of Google’s AI chatbot, Gemini 2.0, which processes text, images, audio, and video.
There could be issues with getting access to other advanced multimodal AI in Europe in 2025, however, if more companies refuse, as Meta has, to release their new models because of “regulatory unpredictability”.