It sounds like a robot cartoon: Robots will rule! AI agents will rise! The software will die!
Cue the howling laughter of some oversized, demonic, armor-covered lizard nerd.
But really, are we headed this way?
We know software won’t go poof and evaporate. That sounds more like a serverless joke. It’s ephemeral!
But Mark Hinkle, CEO and founder of Peripety Labs, is bullish on AI agents. On this episode of The New Stack Makers, we discussed what agents mean for software development and how they connect to serverless technologies and other software, such as Infrastructure as Code (IaC) and configuration management tools.
Yes, there are similarities, especially once we start thinking of agents as computational entities. Serverless and IaC each reflect a different level of abstraction, and agents push that abstraction further.
“If we think about the agents as almost a dumb robot — think of them as chopping wood and carrying water,” Hinkle said. “They would be exchanging data, or they would be querying an API and things like that.
“But the brain for those [agents], I think they will still call back to the large language model (LLM) itself. So, the workers, the dumb robots, will probably be serverless functions. It might be something written in Python or JavaScript that does stuff, but the agent itself will still be querying through an API to a large language model.”
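Hinkle's split between "dumb robot" workers and an LLM brain can be sketched as a serverless-style handler. This is a hypothetical illustration, not code from the episode: the handler shape, the `event` fields and the injected `llm` callable are all assumptions standing in for a real function runtime and a real LLM API client.

```python
import json
from typing import Callable

def plan_with_llm(task: str, llm: Callable[[str], str]) -> str:
    """Call back to the LLM 'brain' to decide how to handle the task."""
    return llm(f"Name one tool suited to this task: {task}")

def worker_handler(event: dict, llm: Callable[[str], str]) -> dict:
    """A 'dumb robot' worker, shaped like a serverless function handler.

    The worker does the chopping-wood work locally (exchanging data,
    querying APIs), but every decision is deferred to the LLM via `llm`,
    which in production would wrap an HTTP call to a model API.
    """
    task = event.get("task", "")
    tool = plan_with_llm(task, llm)
    result = {"task": task, "tool": tool, "status": "done"}
    return {"statusCode": 200, "body": json.dumps(result)}
```

In a test you can inject a stub in place of the LLM call, which is also why this separation is convenient: the worker stays a plain, cheap function while the intelligence lives behind one API boundary.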
LLMs Will “Generate Their Own Toolset”
Just think about the nature of LLMs, Hinkle pointed out. They are good at coding in large part because so much open source software was available to learn from. Drawing on that knowledge, an LLM can produce code, and AI agents can use it to create bespoke, on-the-fly tools for specific tasks.
“What we will see is a lot of these large language models are going to auto-generate their own toolset, and they will be bespoke, rather than like we see with HashiCorp in the cloud space, or we would have seen with Chef or Puppet in the configuration management space,” Hinkle said.
“I think that they will generate tooling that is specific to the task, and they might do that on the fly. They may do that and have almost the ability to call functions that are serverless or may just be persistent for CI/CD loops or monitoring or things like that.”
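Generating tooling "specific to the task" on the fly might look like the sketch below: an LLM emits source code for a one-off tool, and the agent turns it into a callable. Everything here is assumed for illustration, including the `build_tool` helper and the canned `generated` string standing in for real model output; a production system would sandbox untrusted generated code rather than `exec` it directly.

```python
def build_tool(source: str, name: str):
    """Compile LLM-generated source into a callable 'bespoke tool'.

    Executes the source in a fresh namespace and returns the named
    function. Sandboxing is deliberately omitted to keep the sketch short.
    """
    namespace: dict = {}
    exec(source, namespace)
    return namespace[name]

# Pretend the LLM returned this source for a one-off deduplication task:
generated = (
    "def dedupe(items):\n"
    "    return sorted(set(items))\n"
)
dedupe = build_tool(generated, "dedupe")
```

The tool exists only for the life of the task, which lines up with Hinkle's point: it can run as a short-lived serverless function, or stick around if a CI/CD or monitoring loop keeps needing it.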
Optimizing agents will become part of managing AI systems, which is why we hear so much about observability and evaluation.
Hinkle summed up, “I think that’s where software is going.”
For more from Hinkle, please check out this latest episode of The New Stack Makers.
The post AI Agents Are Dumb Robots, Calling LLMs appeared first on The New Stack.