
Announcing Dapr AI Agents

The Dapr project is excited to announce Dapr Agents, a framework that simplifies the creation of AI agents that reason, act, and collaborate using LLMs.


Today, we are excited to announce Dapr Agents, a framework built on top of Dapr that combines stateful workflow coordination with advanced Agentic AI features. Dapr Agents is the best way to build systems of agents fit for enterprise use cases:

  • Reliably runs thousands of agents on a single core
  • Automatically retries complex agentic workflows and guarantees each agent task completes successfully
  • Deploys and operates natively on Kubernetes
  • Enables data loading from documents, databases, and unstructured data directly to an agent
  • Built-in support for easily authoring multi-agent systems that are secure and observable by default
  • Vendor-neutral: mitigates the risks of license changes and IP infringement
  • Built on top of Dapr, a trusted enterprise framework for observability, security, and resiliency at scale, used by governments and thousands of companies worldwide

Build your first Dapr AI Agent

In the Dapr Agents framework, agents are autonomous entities powered by large language models (LLMs) that serve as their reasoning engine. These agents use the LLM’s knowledge to process information, reason in natural language, and interact dynamically with their environment through tools: external capabilities that let an agent perform actions beyond its built-in knowledge and reasoning. Tools allow agents to perform real-world tasks, gather new information, and adapt their reasoning based on feedback.

In the Python code below, we create a single code_review_agent that has access to two tools, get_pr_code and perform_review, which in this case use the GitHub API to fetch code and feed it into an LLM for review. This empowers the agent to identify the right tool for a task, format the necessary arguments, and execute the tool independently. The results are then passed back to the LLM for further processing. By annotating functions with @tool and optionally specifying the argument schema, you transform them into agent tools that can be invoked dynamically during workflows.

When creating an agent, you can specify attributes such as name, role, goal, and instructions while assigning the tools. This equips your agent with a clear purpose and the ability to interact with its environment.

import logging
import asyncio
import requests
from dotenv import load_dotenv

from dapr_agents.llm.dapr import DaprChatClient
from dapr_agents import AssistantAgent, tool

# Load environment variables
load_dotenv()

logging.basicConfig(level=logging.INFO)


@tool
def get_pr_code(repository: str, pr: str) -> str:
    """Get the code for a given PR"""
    response = requests.get(f"https://api.github.com/repos/{repository}/pulls/{pr}/files")
    files = response.json()
    # Map each changed file to its raw contents and return as a single string
    code = {file["filename"]: requests.get(file["raw_url"]).text for file in files}
    return str(code)


@tool
def perform_review(code: str) -> str:
    """Review code"""
    response = DaprChatClient().generate(f"Review the following code: {code}")
    return response.get_content()


# Define Code Review Agent
code_review_agent = AssistantAgent(
    name="CodeReviewAgent",
    role="Review PRs",
    instructions=["Review code in a pull request, then return comments and/or suggestions"],
    tools=[get_pr_code, perform_review],
    message_bus_name="messagepubsub",
    state_store_name="workflowstatestore",
    agents_registry_store_name="agentstatestore",
    service_port=8001,
)


# Start Agent Workflow Service
asyncio.run(code_review_agent.start())

We can start the agentic workflow with the following task:

workflow_url = "http://localhost:8001/RunWorkflow"
task_payload = {"task": "Review PR https://github.com/dapr/dapr/pull/1234"}
requests.post(workflow_url, json=task_payload)

Dapr Agents guarantees that all the different activities performed in the agentic workflow run to completion. Agents are resilient to process crashes, node scaling, and network interruptions. Dapr can distribute thousands of agents transparently across pods and nodes on a Kubernetes cluster or on raw VMs.

Build an LLM Task workflow

LLM-based Task Workflows allow developers to design step-by-step workflows where LLMs provide reasoning and decision-making at defined stages. These workflows are deterministic and structured, enabling the execution of tasks in a specific order, often defined by Python functions. This approach does not rely on event-driven systems or pub/sub messaging but focuses on defining and orchestrating tasks with the help of LLM reasoning when necessary. This is ideal for scenarios that require a predefined flow of tasks enhanced by language model insights.

Dapr Agents introduces tasks, which simplify defining and managing workflows while adding features like tool integration and LLM-powered reasoning. Tasks are built on the concept of workflow activities and bring additional flexibility, including the use of Python function signatures to make them easy to define.

Let’s look at an example.

from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr_agents.types import DaprWorkflowContext
from dotenv import load_dotenv

# Load environment variables
load_dotenv()


# Define Workflow logic
@workflow(name='mytrip_workflow')
def plan_trip(ctx: DaprWorkflowContext, locations: list) -> str:
    # This workflow is composed of two tasks
    # Each task is durable and can be retried
    forecast = yield ctx.call_activity(determine_forecast, input=locations[0])
    trip_details = yield ctx.call_activity(create_itinerary, input=forecast)
    return trip_details


@task(description="Get the weather forecast for the next 5 days in {location}")
def determine_forecast(location: str) -> str:
    pass


@task(description="For each of the days, create an indoor and outdoor itinerary based on the weather {forecast} saying what the weather is like for each day")
def create_itinerary(forecast: str) -> str:
    pass


wfapp = WorkflowApp()
results = wfapp.run_and_monitor_workflow(
    workflow="mytrip_workflow",
    input=["Gotham"],
)
print(f"Your itinerary is: {results}")

In the code above, the plan_trip workflow calls two tasks in sequence. The task decorator accepts a description parameter, which acts as a prompt for the default LLM inference client. Function arguments can be referenced in the description, letting you dynamically format the prompt before it’s sent to the text generation endpoint, in this example with the location and forecast variables. This makes it simple to implement workflows that follow the Dapr task chaining pattern, similar to the previous agent example, but with even more flexibility, and with all the benefits of Dapr Workflows for managing long-running processes and interactions across distributed systems.
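The description-as-prompt mechanism can be illustrated in plain Python. This is a simplified sketch of the idea, not the framework’s actual internals: the decorator captures the description template and fills it from the decorated function’s arguments before handing it to an LLM.

```python
import inspect

def task(description: str):
    """Simplified sketch of a @task decorator: the description acts as a
    prompt template filled in from the decorated function's arguments."""
    def decorator(fn):
        sig = inspect.signature(fn)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            prompt = description.format(**bound.arguments)
            # A real implementation would send `prompt` to an LLM client;
            # here we return it to show the templating step.
            return prompt
        return wrapper
    return decorator

@task(description="Get the weather forecast for the next 5 days in {location}")
def determine_forecast(location: str) -> str:
    pass

print(determine_forecast("Gotham"))
# → Get the weather forecast for the next 5 days in Gotham
```

Because the template is filled from the bound arguments, the same task definition produces a different prompt for each input it receives.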

Multi-agent Workflows

Finally, let’s see how easy it is to build multi-agent workflows using Dapr Agents. 

Agents need to operate as autonomous entities that respond to events dynamically, enabling real-time interactions and collaboration coordinated with workflows. These event-driven agentic workflows take advantage of Dapr’s pub/sub messaging system. This allows agents to communicate, share tasks, and reason through events triggered by their environment.

To achieve this, you create an agentic workflow service that uses Dapr Workflow under the hood to orchestrate communication among agents. This allows you to send messages to agents to trigger their participation and monitor a shared message bus to listen for all messages being passed.

This enables many types of complex, self-reasoning agentic workflows, including:

  • LLM-based: Leverages an LLM to decide which agent to trigger based on the content and context of the task and chat history. This is where you could switch between different LLMs, such as OpenAI, Anthropic, AWS Bedrock, etc. 
  • Random: Distributes tasks to agents randomly, ensuring a non-deterministic selection of participating agents for each task.
  • RoundRobin: Cycles through agents in a fixed order, ensuring each agent has an equal opportunity to participate in tasks.
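The Random and RoundRobin strategies can be sketched in a few lines of plain Python. This is illustrative only; Dapr Agents ships these strategies as ready-made orchestrators.

```python
import itertools
import random

agents = ["Frodo", "Legolas", "Gimli"]

# RoundRobin: cycle through agents in a fixed order, so each agent
# gets an equal opportunity to participate.
round_robin = itertools.cycle(agents)
picks = [next(round_robin) for _ in range(4)]
print(picks)  # → ['Frodo', 'Legolas', 'Gimli', 'Frodo']

# Random: a non-deterministic selection for each task.
print(random.choice(agents))

# LLM-based selection would instead pass the task, the chat history, and
# the agent roster to an LLM and parse its choice; that requires a model
# call and is omitted here.
```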

Here’s an example of defining a multi-agent system.

# Define Agent #1
hobbit_service = AssistantAgent(
   role="Hobbit",
   name="Frodo",
   goal="Carry the One Ring to Mount Doom, resisting its corruptive power while navigating danger and uncertainty.",
   instructions=[
      "Speak with humility, determination, and a growing sense of resolve.",
      "Endure hardships and temptations, staying true to the mission even when faced with doubt."
   ],
   message_bus_name="kafka",
   state_store_name="postgres",
   agents_registry_store_name="postgres",
   service_port=8001,
   daprGrpcPort=50001
)


# Define Agent #2
elf_service = AssistantAgent(
   role="Elf",
   name="Legolas",
   goal="Act as a scout, marksman, and protector, using keen senses and deadly accuracy to ensure the success of the journey.",
   instructions=[
      "Speak with grace, wisdom, and keen observation.",
      "Be swift, silent, and precise, moving effortlessly across any terrain."
   ],
   message_bus_name="kafka",
   state_store_name="postgres",
   agents_registry_store_name="postgres",
   service_port=8002,
   daprGrpcPort=50002
)


await hobbit_service.start()
await elf_service.start()

In this example, we defined two agents whose job is to take the One Ring to Mordor. The AssistantAgent is in charge of connecting the agents to an underlying message broker and database, in this case Kafka and Postgres. 

Now we can create an LLMOrchestrator service which ties all our agents together:

workflow_service = LLMOrchestrator(
   name="LLMOrchestrator",
   message_bus_name="kafka",
   state_store_name="postgres",
   agents_registry_store_name="postgres",
   service_port=8003,
   daprGrpcPort=50003)


await workflow_service.start()

The only thing left is to ask the monumental question and start the agentic workflow:

workflow_url = "http://localhost:8003/RunWorkflow"
task_payload = {"task": "How to get to Mordor?"}
requests.post(workflow_url, json=task_payload)

Combining Choreography and Orchestration yields the best results

Dapr Agents support both deterministic workflows and event-driven interactions. Built on Dapr Workflows, which leverage Dapr’s virtual actors underneath, agents function as self-contained, stateful entities that process messages sequentially, eliminating concurrency concerns. At the same time, Dapr Workflows provide durable, long-running execution, orchestrating agent behavior from simple tasks to complex coordination patterns while ensuring resiliency and recovery in case of failures.

In summary, the Dapr Agents framework seamlessly integrates Dapr’s service invocation and event-driven pub/sub messaging with workflow-based orchestration, leveraging the scalability and state management of virtual actors to enable resilient and adaptive agent interactions.
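The “processes messages sequentially” property can be sketched with a mailbox loop. This is a toy model of a virtual actor, not Dapr’s implementation: messages queue up and are handled one at a time, so the agent’s state is never mutated concurrently.

```python
import asyncio

class ToyActor:
    """Toy model of a virtual actor: private state plus a mailbox that is
    drained one message at a time, so no locking is needed on the state."""
    def __init__(self):
        self.count = 0
        self.mailbox = asyncio.Queue()

    async def run(self):
        while True:
            msg = await self.mailbox.get()
            if msg is None:  # shutdown signal
                return
            self.count += 1  # state mutation always happens in one task

async def main():
    actor = ToyActor()
    runner = asyncio.create_task(actor.run())
    # Many producers can enqueue concurrently; processing stays sequential.
    for i in range(100):
        await actor.mailbox.put(f"msg-{i}")
    await actor.mailbox.put(None)
    await runner
    return actor.count

processed = asyncio.run(main())
print(processed)  # → 100
```

Dapr’s actors add placement, activation, and persistence on top of this idea, which is what lets agents scale to zero and reactivate quickly.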

Under the hood

There are many developer frameworks emerging today for building agentic AI applications. However, “agentic systems” is just another tech industry term for “distributed applications with smarts”. Depending on the complexity of your application, there are key requirements a framework must meet. These are:

  1. Secure and reliable communication. This means agents discover each other, communicate over secure channels, and retry calls in the event of failure. You also want different communication patterns such as request/reply API calls or asynchronous messaging.
  2. Stateful, long-running “objects”, or agents, that encapsulate data (referred to as memory) along with actions on that data (methods). Dapr’s virtual actors are exactly this and are the basis for agentic workflows, handling the reasoning logic and execution of workflow activities abstracted as tasks. This makes agents extremely lightweight in terms of resource consumption, providing fast activation and high scalability (into the millions) using average compute resources. They scale down to zero when unneeded and back up in less than 50ms.
  3. Orchestration and coordination. Despite all discussions on multi-agent coordination, this is no different from any classic workflow engine, which is a state machine. However instead of procedure code or human input being the only thing to affect the workflow steps, now we can incorporate LLMs to make decisions on the flow. Dapr’s developer-friendly code-first workflows—now stable in the latest v1.15 release—are ideal for agent coordination and utilize Dapr actors for durable execution.
  4. Infrastructure and platform independence with production-level observability. Dapr was designed from the beginning to integrate with any infrastructure services in a cloud-neutral way, which has been a major driver of the adoption of its APIs.  

Given these requirements, let’s now look at how Dapr Agents provides many benefits to developers looking to build agentic AI applications:

AI Agents fit for enterprise use: Dapr Agents considerably simplify the development and execution of all agentic patterns using workflows that provide deterministic steps. They ensure reliability and scale and are built for durable, long-running, business-critical operations. Dapr Agents operates alongside any existing software and can run in the cloud as well as on-premises.

Accelerated development: Enables developers to build agentic applications with a highly efficient event-driven architecture that is resilient to failures.  This reduces the time to market and de-risks production deployments of AI workloads. Dapr Agents includes features such as structured outputs, multiple LLM provider integrations, agentic tool selection, contextual memory, prompt flexibility, and more.

Cost-effective AI adoption: Dapr Agents’ “Scale to Zero” design minimizes compute costs and infrastructure demands, making AI agents affordable to adopt. Agents under the hood are represented as virtual actors, allowing users to run thousands of agents on-demand on a single core machine with boot times in double-digit milliseconds.

Data centric: Dapr can securely connect to 50+ enterprise data sources and efficiently load structured and unstructured data from within an agent task. Dapr Agents allows you to start with basic scenarios like PDF extraction and move to loading large amounts of data from SQL and NoSQL databases with minimal code changes.

Vendor Neutral: As a vendor-neutral framework part of the CNCF, Dapr Agents eliminates risks of vendor lock-in, litigation, or intellectual property infringement, offering organizations flexibility and peace of mind.

What makes Dapr Agents different from other frameworks?

First off, Dapr Agents is built on top of Dapr’s full-featured workflow engine. Many other agent and LLM frameworks use homegrown workflow systems that aren’t reliable for production use cases. Dapr Agents uses Dapr’s proven workflow system, which is designed to handle failures, retries, and scaling. This gives you robust and well-integrated workflow capabilities right from the start.

Another big difference is how Dapr Agents handles infrastructure. It abstracts integrations with databases and message brokers using Dapr’s consistent programming model. This means you can easily switch between databases like Postgres, MySQL, AWS DynamoDB and a dozen others without having to rewrite your agent code. In addition, Dapr Agents integrates seamlessly with Kubernetes environments and runs just as well locally or on a VM.
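As a sketch of what that swap looks like in practice, a Dapr component file binds a logical name to a concrete store, so changing the backing database is a configuration change rather than a code change. The component and table names here are illustrative, not taken from the Dapr Agents samples:

```yaml
# Hypothetical Dapr component: the agent code keeps referencing the
# logical name "workflowstatestore" while the backing store changes.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: workflowstatestore
spec:
  type: state.aws.dynamodb   # was, for example: state.postgresql
  version: v1
  metadata:
    - name: table
      value: agent-state
```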

As described above, Dapr Agents is designed for multi-agent interactions where agents communicate through message brokers. This allows for collaborative workflows where agents with different roles can share context. While other frameworks offer multi-agent patterns, they don’t always integrate with a distributed message-driven architecture, leading to less reliable communications and loss of context in a large multi-agent setup.

Dapr Agents offers metrics and tracing out-of-the-box, supporting Prometheus and OpenTelemetry formats respectively.

Finally, Dapr Agents allows for event-driven and non-deterministic execution. This means the next agent to respond can be dynamically determined by an LLM, enabling autonomous and evolving workflows.

Dapr Agents origins

Dapr Agents was originally developed by Roberto “Cyb3rWard0g” Rodriguez, a member of Microsoft’s Security AI & Research organization. None of this would have been possible without his incredible contributions. Roberto developed Dapr Agents (formerly named Floki) as part of his research into Agentic Systems at Microsoft, where he explored the application of Agentic AI to autonomous defense and protection. He was curious about how to build reliable, scalable autonomous systems using proven, production-grade technologies. We are happy to welcome Roberto as a maintainer of Dapr Agents.

How to get started with Dapr Agents

The best way to learn more about Dapr Agents is to write some code. You can find many examples in the repo at https://github.com/dapr/dapr-agents. Start by reading the overview in the docs and then work through the Quickstarts found in that GitHub repo.

Give it a try and join the thriving community on the Dapr Discord channel.

Call for collaboration

We are calling on every company or individual interested in advancing vendor-neutral AI software to collaborate with us for the benefit of developers everywhere.