
Why Agentic AI Could Be Doomed To Fail, and 3 More AI Predictions for 2025

4 min read

According to tech industry prognosticators, 2024 was set to be a banner year for generative AI. Real use cases were emerging, new technology was reducing barriers to entry, and artificial general intelligence was right around the corner.

But is that really how things played out?

Well, sort of. If 2024 was the year of generative AI, 2025 will be the year of setting reasonable expectations.

As we look forward, generative AI will still be at the top of business leaders’ minds, but that conversation is becoming more grounded. In this article, I’ll reflect on how far we’ve come, where we have yet to go, and more hot takes around the future of AI in the New Year.

#1: Agentic AI Is Excellent for Conversation — but Not Deployment

If you’re swimming anywhere near the venture capital ponds these days, you’re likely to hear a couple of terms tossed around pretty regularly: “copilot,” a fancy term for an AI used to complete a single step (“correct my terrible code”), and “agent,” a multistep workflow that can gather information and use it to perform a task (“write a blog about my terrible code and publish it to my WordPress”).
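In code terms, the distinction looks roughly like the sketch below. It’s purely illustrative: `call_llm` is a hypothetical stand-in for whatever LLM API you actually use, and the agent is just a chain of calls where each step consumes the previous step’s output.

```python
# Illustrative sketch only: copilot = one model call; agent = a chained workflow.
# call_llm is a hypothetical stand-in, not any real framework's API.

def call_llm(prompt: str) -> str:
    return f"<model output for: {prompt!r}>"

# Copilot: a single step.
fixed_code = call_llm("Correct my terrible code: ...")

# Agent: multiple steps, each consuming the previous step's output.
problems = call_llm("Summarize the problems in my terrible code: ...")
blog_draft = call_llm(f"Write a blog post about these problems: {problems}")
post = {"title": "My Terrible Code", "body": blog_draft}  # payload for, say, a WordPress API
print(post["title"])
```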

We’ve undoubtedly seen a lot of successful AI copilots in 2024 (just ask GitHub, Snowflake, the Microsoft paperclip, etc.), but what about AI agents?

While “agentic AI” has had fun wreaking havoc on customer support teams, it looks like that’s all it’s destined to do for now. These early AI agents are an essential step forward, but the accuracy of their multistep workflows is still poor.

Accuracy of 75%-90% per step is state-of-the-art for AI; most models perform at roughly the level of a high school student. But chain together three steps at 75%-90% accuracy each, and your end-to-end accuracy lands around 50%.
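The math behind that figure is simple compounding. A minimal sketch, assuming each step succeeds or fails independently:

```python
# Back-of-the-envelope math: end-to-end accuracy of a multistep workflow,
# assuming the steps succeed or fail independently of one another.
per_step_accuracy = 0.80  # middle of the 75%-90% state-of-the-art range
steps = 3

end_to_end = per_step_accuracy ** steps
print(f"{end_to_end:.0%}")  # 51%: roughly the coin flip described above
```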

We’ve trained elephants to paint with better accuracy than that.

Far from driving revenue for organizations, most AI agents would be actively harmful if released into production at their current performance. We need to solve that problem first.

And while it’s easy to talk about agents, no one has had any real success with them outside of a demo. Regardless of how much people in the Valley might love to talk about AI agents, that talk doesn’t translate into performance.

#2: GenAI Will NOT Be a Revenue Driver for Most Organizations in 2025

Like any data product, GenAI’s value comes in two forms: reducing costs or generating revenue.

On the revenue side, you might use GenAI-powered chatbots or recommendations. These tools can generate a large sales pipeline, but that pipeline won’t necessarily be healthy. So, if it’s not generating revenue, AI needs to cut costs — and in that regard, this budding technology has certainly found some footing.

To me, an AI use case presents the opportunity for cost reduction if one of three criteria is met:

  • It’s eliminating or reducing repetitive jobs
  • It’s making up for unfilled roles due to a challenging labor market
  • It’s addressing an urgent hiring need

A great example from the wild of taking advantage of GenAI’s cost-saving potential is the digital banking company Dave, which created an internal chatbot that uses retrieval-augmented generation (RAG) to answer internal team members’ questions about their company data. This allows less technical team members to get solid answers about their data more quickly and saves precious time better spent on, well, helping stakeholders generate revenue.
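Dave hasn’t published its implementation, so the sketch below is only an illustration of the general RAG pattern: retrieve the internal documents most relevant to a question, then pass them to the model as context. The keyword-overlap retrieval and the `call_llm` helper are stand-ins, not real APIs:

```python
# Minimal sketch of the RAG pattern behind an internal docs chatbot.
# Keyword overlap stands in for vector search; call_llm stands in for a real LLM API.
from dataclasses import dataclass

@dataclass
class Doc:
    source: str
    text: str

# A toy "index" of internal company documents.
DOCS = [
    Doc("warehouse/orders.md", "The orders table is refreshed hourly from the billing system."),
    Doc("warehouse/users.md", "The users table joins to orders on user_id."),
]

def retrieve(question: str, k: int = 2) -> list[Doc]:
    """Rank docs by naive keyword overlap (stand-in for embedding search)."""
    q_terms = set(question.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q_terms & set(d.text.lower().split())))
    return scored[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical stub; a real deployment would call a model here.
    return f"(LLM response grounded in:\n{prompt})"

def answer(question: str) -> str:
    """Augment the prompt with retrieved context, then call the (stubbed) LLM."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in retrieve(question))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How often is the orders table refreshed?"))
```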

#3: The Future of AI Will Be About Small Data, Not Giant Models

The open source versus managed debate is a tale as old as time. But that question gets a lot more complicated when it comes to AI.

At the enterprise level, it’s not simply a question of control or interoperability — though that can certainly play a part — it’s a question of operational cost.

The largest B2C companies will use off-the-shelf models, while B2B companies will trend toward smaller, cheaper models of their own instead.

For data leaders at these companies, it’s not all dollars and cents. Small models also improve performance. Large models, like Google’s search engine, are designed to serve a wide variety of use cases. Users can ask a large model about effectively anything, so that model needs to be trained on a large enough corpus of data to deliver a relevant response. Water polo. Chinese history. French toast.

Unfortunately, the more topics a model is trained on, the more likely it is to conflate multiple concepts, and the more erroneous its outputs will become over time.

Furthermore, ChatGPT and other managed solutions are frequently challenged in court over claims that their creators didn’t have legal rights to the data on which those models were trained. In many cases, that’s probably not wrong.

In addition to cost and performance, this legal risk will likely impact the long-term adoption of proprietary models — particularly in highly regulated industries — but the severity of that impact remains uncertain. Of course, proprietary model providers aren’t lying down, either.

Proprietary model providers are already aggressively cutting prices to drive demand. Models like ChatGPT have already seen prices cut by roughly 50%, with another 50% cut expected in the next six months. That cost-cutting could be a much-needed boon for B2C companies hoping to compete in the AI arms race.

#4: The Rise of the Unstructured Data Stack

The idea of leveraging unstructured data in production isn’t new by any means — but in the age of AI, unstructured data has taken on a whole new role.

According to an IDC report, only about half of an organization’s unstructured data is being analyzed.

In 2025, all that is about to change.

The success of enterprise AI depends mainly on the panoply of unstructured data used to train, fine-tune, and augment it. As more organizations look to operationalize AI for enterprise use cases, enthusiasm for unstructured data — and the burgeoning “unstructured data stack” — will also continue to grow.

Some teams are even exploring how they can use additional LLMs to structure unstructured data and increase its usefulness in additional training and analytics use cases.
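A minimal sketch of that idea, with an assumed prompt shape and a stubbed-out `call_llm` helper rather than any specific product’s API: ask the model for JSON in a fixed schema, then validate the output before loading it into analytics tables.

```python
# Sketch: using an LLM to turn unstructured text into analyzable rows.
# call_llm is a hypothetical stub for whatever model the team actually runs.
import json

TICKET = "Customer Jane D. reports checkout fails on mobile Safari since Tuesday."

PROMPT = f"""Extract the following fields from the support ticket as JSON:
product_area, platform, severity (low|medium|high).

Ticket: {TICKET}
JSON:"""

def call_llm(prompt: str) -> str:
    # Canned response for illustration; a real deployment would call a model here.
    return '{"product_area": "checkout", "platform": "mobile Safari", "severity": "high"}'

record = json.loads(call_llm(PROMPT))            # validate the output is well-formed JSON
assert set(record) == {"product_area", "platform", "severity"}  # enforce the schema
print(record)  # now structured enough to load into a warehouse table
```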

Identifying what unstructured first-party data exists within your organization, and how you could activate that data for your stakeholders, is a greenfield opportunity for data leaders looking to demonstrate the business value of their data platform (and, hopefully, secure additional budget for priority initiatives along the way).

If one thing is abundantly clear from this list, it’s that technology leaders aren’t just identifying gaps but also discovering concrete points of value. Expect AI standards and best practices to take shape as the new year unfolds.

Process, value, and scalability will be the priorities in 2025. In 2026, we will only discuss technologies that deliver on that promise.

