
LLMs and AI Agents Evolving Like Programming Languages

In this episode of The New Stack Makers, Yam Marcovitz, CEO of Emcie, likens LLM development to the evolution of programming languages, from punch cards to modern languages like Python.

The emergence of the World Wide Web allowed developers to build tools and platforms on top of it. The advent of LLMs allows the same: developers can now build tools and platforms on top of LLMs. AI agents, for example, offer new ways to interact with LLMs, execute tasks, and make decisions autonomously.

Those tasks and autonomous decisions need verification, and critical reasoning may be one way to address the problem, said Yam Marcovitz, tech lead at Parlant.io and CEO of Emcie.co.

Marcovitz, in this episode of The New Stack Makers, concurs with the view that the internet serves as a platform for development. But he prefers the analogy to programming languages.

Marcovitz said that pioneering technologists started with punch cards, which served as little more than a way to feed inputs into a machine. For context: machine code was followed by assembly languages, which in turn were followed by higher-level languages such as Fortran. The industry later saw the emergence of C, SQL, C++, and Python.

LLMs are following a similar trajectory. They started with small transformer models; BERT and others preceded GPT-3. Instead of just fine-tuning on text and doing auto-completion, dynamic configurations are now emerging, along with better reasoning models that can follow complex instructions.

Parlant provides a customer-facing AI agent using what the team calls “attentive reasoning queries” (ARQs), Marcovitz said. ARQs maintain consistency and coherence through long and complex prompts. The approach uses more tokens, but the added reasoning helps achieve accuracy close to 100%, on the order of 99.999%.
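The mechanics are easier to see in code. The sketch below illustrates the general idea behind an attentive reasoning query, not Parlant’s actual implementation: before the agent replies, the model must answer targeted questions about the conversation in a structured, checkable format. The `ask_llm` stub and the specific questions are assumptions for illustration.

```python
import json

def ask_llm(prompt: str) -> str:
    # Stubbed for illustration; a real implementation would call an
    # LLM provider's API and return the model's raw text response.
    return json.dumps({
        "customer_intent": "requesting a refund",
        "applicable_policy": "refunds allowed within 30 days",
        "proposed_action": "ask for the order number",
    })

# The gist of an ARQ: before responding, the model must explicitly
# answer pointed questions about the conversation state, in a format
# that can be validated before the final reply is generated.
ARQ_TEMPLATE = """\
Before responding, answer these questions as JSON:
{{
  "customer_intent": "<what is the customer asking for?>",
  "applicable_policy": "<which business rule applies, if any?>",
  "proposed_action": "<what should the agent do next?>"
}}

Conversation so far:
{conversation}
"""

def attentive_reasoning_query(conversation: str) -> dict:
    """Run one reasoning pass and parse the structured answers."""
    raw = ask_llm(ARQ_TEMPLATE.format(conversation=conversation))
    return json.loads(raw)  # fails loudly if the model drifts off-format
```

Forcing the reasoning through a structured checkpoint is what consumes the extra tokens, and it is also what lets each step be verified before the agent commits to a response.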

People interpret instructions. For the Parlant team, that meant developing what it calls guidelines. Instructions aren’t built from scratch as in the traditional model, nor do the models get free rein. Instead, the Parlant team takes a sculpting approach, aligning the model’s behavior to the shape it envisions.

Marcovitz said that no matter the size of an LLM, the problems it faces are often a matter of subjectivity. What we want to achieve can be interpreted in different ways: an employee may understand a goal one way, while the manager has a different view.

So, it’s impractical to believe an LLM would act much differently.

“So instead, we came up with this innovation where we have you define guidelines, and each guideline describes two things: the conditions in which some action should hold, as well as the action itself,” Marcovitz said. “We call these atomic guidelines. So instead of just having a very amorphous, large prompt, very complex, you just define it as guideline number one… This is guideline number two.

“The system actually picks and chooses and matches the right guidelines for every specific state or stage of any conversation. It figures out exactly which needs to be activated right now. So, there are multiple moving parts. Once we have all that guidance, we can give specific individual feedback on them. The very fact that they’re atomic and small means that we can apply an informative approach to each one and make sure it is applied accurately.”
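To make the pattern concrete, here is a minimal Python sketch of atomic guidelines, an illustration of the concept rather than Parlant’s actual SDK. The `Guideline` class, the sample guidelines, and the keyword matcher are all hypothetical stand-ins.

```python
import re
from dataclasses import dataclass

@dataclass
class Guideline:
    """An atomic guideline: a condition paired with an action."""
    condition: str  # when this situation holds in the conversation...
    action: str     # ...the agent should take this action

# Each guideline is small and self-contained, so it can be checked
# and given feedback individually.
GUIDELINES = [
    Guideline(
        condition="the customer asks about order status",
        action="look up the order and report its shipping stage",
    ),
    Guideline(
        condition="the customer requests a refund",
        action="confirm the purchase date before offering options",
    ),
]

def match_guidelines(message: str) -> list[Guideline]:
    """Pick the guidelines relevant to the current turn.

    A production system would use an LLM or a trained classifier to
    decide which guidelines apply to the conversation state; naive
    keyword overlap stands in for that here.
    """
    words = set(re.findall(r"[a-z]+", message.lower()))
    return [
        g for g in GUIDELINES
        if words & set(re.findall(r"[a-z]+", g.condition.lower()))
    ]

if __name__ == "__main__":
    for g in match_guidelines("Where is my refund?"):
        print(f"when {g.condition} -> {g.action}")
```

Because each condition-action pair is atomic, accuracy can be evaluated one guideline at a time instead of auditing one sprawling prompt, which is the feedback loop Marcovitz describes.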

For more about Parlant’s approach to AI agents, and how reasoning techniques can help agents handle subjectivity in LLMs, listen to this episode of The New Stack Makers.


