AI Agents
AI Agents in DYPAI are first-class citizens of your backend. They plug into the same workflow system as everything else, call your existing endpoints as tools, remember conversations, stream responses to your frontend, and work with OpenAI, Anthropic, or Gemini interchangeably.
No extra setup
Agents are just a node type. Drop one into a workflow, pick a provider and model, optionally attach tools, and expose it as an endpoint. Your frontend talks to it with a useChat hook from the SDK.
What makes DYPAI agents different
Tools = your endpoints
Any endpoint marked as a tool becomes callable by the agent. Reuse the same code that powers your UI.
Model-agnostic
Switch between OpenAI, Anthropic, and Gemini by changing one field. No code rewrites.
Built-in memory
Per-session or per-user memory, stored in your project database. No external vector DB needed for simple chat.
Streaming out of the box
The SDK's useChat() hook renders partial responses as they arrive. Plug and play.
How an agent works
Receive a message
Your frontend sends a user message to an agent endpoint via the SDK.
Load memory (optional)
If memory is enabled, past messages for the session or user are loaded automatically.
Think and decide
The LLM reads the system prompt, conversation history, and available tools. It decides whether to answer directly or call a tool.
Call tools if needed
When the agent wants data or wants to perform an action, it calls one of your tool endpoints. The result is fed back into the conversation.
Repeat until done
The loop runs up to max_iterations times (default 5). Each iteration can involve another tool call or a final answer.
Return and persist
The final response is returned to your frontend. If memory is on, the conversation is saved automatically.
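The six steps above can be sketched as a single loop. This is a minimal illustration of the flow, not DYPAI's implementation; the `runAgentLoop`, `decide`, and `tools` names are hypothetical stand-ins for the platform's internals.

```typescript
// Minimal sketch of the agent loop, assuming an LLM "decide" step that
// either answers or requests a tool call. Names are illustrative.
type Message = { role: 'system' | 'user' | 'assistant' | 'tool'; content: string }

// Each turn, the model either answers or asks for a tool call.
type Decision =
  | { kind: 'answer'; content: string }
  | { kind: 'tool_call'; tool: string; args: unknown }

async function runAgentLoop(
  history: Message[],                                          // loaded memory + new message
  decide: (history: Message[]) => Promise<Decision>,           // stands in for the LLM
  tools: Record<string, (args: unknown) => Promise<string>>,   // your tool endpoints
  maxIterations = 5                                            // the documented default
): Promise<string> {
  for (let i = 0; i < maxIterations; i++) {
    const decision = await decide(history)
    if (decision.kind === 'answer') {
      // Final answer: append it to the conversation and stop looping.
      history.push({ role: 'assistant', content: decision.content })
      return decision.content
    }
    // Tool call: run the endpoint and feed the result back into the conversation.
    const result = await tools[decision.tool](decision.args)
    history.push({ role: 'tool', content: result })
  }
  return 'Stopped after max_iterations without a final answer.'
}
```

Note the termination rule: the loop ends either when the model produces a final answer or when `max_iterations` is exhausted, which bounds how many tool calls one request can trigger.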
Quick example
Here's the minimum setup for a chat assistant that can query your tasks table.
1. Add a credential
In the dashboard, go to Credentials → Add new and save your OpenAI (or Anthropic / Gemini) API key.
2. Create a tool endpoint
Build a normal endpoint called list_tasks that returns tasks from your database. In the endpoint header, flip the Tool toggle on and write a short description for the agent:
"Lists all tasks for the current user. Returns an array of
{id, title, done, due_date}."
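As a sketch, the logic behind list_tasks could look like the function below. The in-memory `tasks` array and `currentUserId` parameter are illustrative assumptions; a real endpoint would query your project database for the authenticated user.

```typescript
// Illustrative only: a plain function matching the tool's described return shape.
type Task = { id: number; title: string; done: boolean; due_date: string }

// Stand-in for the project database (user_id is an assumed column).
const tasks: (Task & { user_id: number })[] = [
  { id: 1, title: 'Write report', done: false, due_date: '2025-06-02', user_id: 7 },
  { id: 2, title: 'Review PR', done: true, due_date: '2025-05-30', user_id: 7 },
  { id: 3, title: 'Plan sprint', done: false, due_date: '2025-06-05', user_id: 9 },
]

// Returns {id, title, done, due_date} for the current user,
// exactly the shape the tool description promises the agent.
function listTasks(currentUserId: number): Task[] {
  return tasks
    .filter((t) => t.user_id === currentUserId)
    .map(({ id, title, done, due_date }) => ({ id, title, done, due_date }))
}
```

Keeping the returned shape identical to the tool description matters: the agent plans its answer from that description, so mismatches lead to confused tool use.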
3. Create the agent endpoint
Create an endpoint with a single AI Agent node:
| Field | Value |
|---|---|
| Provider | openai |
| Model | gpt-4o-mini |
| Credential | The OpenAI credential you added |
| System prompt | You are a helpful task assistant. Use the tools available to answer questions about the user's tasks. |
| Tools | Select list_tasks |
| Memory | Per User |
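For reference, the table above corresponds roughly to a node configuration like the following. The field names are guesses based on the table, not DYPAI's exact schema:

```json
{
  "type": "ai_agent",
  "provider": "openai",
  "model": "gpt-4o-mini",
  "credential": "openai-main",
  "system_prompt": "You are a helpful task assistant. Use the tools available to answer questions about the user's tasks.",
  "tools": ["list_tasks"],
  "memory": "per_user"
}
```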
4. Call it from your frontend
```tsx
import { useChat } from '@dypai-ai/client-sdk/react'

function ChatBox() {
  const { messages, sendMessage, isStreaming } = useChat('task_assistant')
  return (
    <div>
      {messages.map((m, i) => (
        <div key={i}>{m.role}: {m.content}</div>
      ))}
      <button onClick={() => sendMessage('What tasks are due this week?')}>
        Ask
      </button>
    </div>
  )
}
```
That's it. The agent will call list_tasks, read the results, and answer in natural language, streaming back to the UI.
Building agents from your IDE
Everything above can be set up through the MCP instead. Just tell your AI assistant:
"Create a chat assistant that can list and create tasks. Use Gemini 2.5 Flash. Turn the `list_tasks` and `create_task` endpoints into tools and attach them to the agent."
It'll configure the credential, mark the endpoints as tools, create the agent endpoint, and test it end-to-end.