# Streaming
Waiting 20 seconds for an agent to finish before seeing anything is a bad experience. DYPAI agents stream their output — text, tool calls, and tool results — in real time, so your UI can render partial responses as they generate.
## With the React SDK (recommended)

The `useChat` hook from `@dypai-ai/client-sdk/react` wraps all the streaming mechanics for you:
```tsx
import { useChat } from '@dypai-ai/client-sdk/react'

function ChatBox() {
  const {
    messages,     // full message history
    sendMessage,  // send a new message
    isStreaming,  // true while the agent is responding
    stop,         // abort the current stream
    error,        // any error that occurred
  } = useChat('my_agent_endpoint')

  return (
    <div>
      <div className="messages">
        {messages.map((m, i) => (
          <div key={i} className={`msg msg-${m.role}`}>
            {m.content}
          </div>
        ))}
        {isStreaming && <div className="typing">...</div>}
      </div>
      <ChatInput
        onSubmit={text => sendMessage(text)}
        disabled={isStreaming}
      />
      {isStreaming && (
        <button onClick={stop}>Stop</button>
      )}
      {error && <div className="error">{error.message}</div>}
    </div>
  )
}
```
That's a full chat UI in one short component. `messages` updates as tokens arrive, triggering React re-renders so users see text appear progressively.
## Session IDs and memory

If your agent uses memory, pass a `sessionId` so the hook loads and persists the right conversation:
```tsx
const { messages, sendMessage } = useChat('my_agent_endpoint', {
  sessionId: 'conv-123',
})
```
Store `sessionId` in state (for a single conversation) or in your database (for a sidebar with multiple conversations). The `useChatList` hook helps with the latter — see Memory.
## What's in a message

Each message in `messages` has:

- `role` — `"user"`, `"assistant"`, or `"tool"`
- `content` — the text shown to the user
- `parts` — the raw Vercel AI SDK parts (`text`, `tool-call`, `tool-result`). Useful if you want to render tool calls specially.
For a typical chat UI, just use `content`. For a richer UI that shows "Agent is searching products…", inspect `parts` and render custom components for each part type.
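As a rough TypeScript sketch of that shape (the field names follow the list above; the exact part unions come from the Vercel AI SDK, so treat this as an approximation rather than the SDK's real types):

```ts
type MessagePart =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolName: string; args: unknown }
  | { type: 'tool-result'; result: unknown }

interface ChatMessage {
  role: 'user' | 'assistant' | 'tool'
  content: string      // the text shown to the user
  parts?: MessagePart[] // raw parts, for custom rendering
}

// content is effectively the text parts joined together,
// with tool-call and tool-result parts skipped:
function textFromParts(parts: MessagePart[]): string {
  return parts
    .filter((p): p is { type: 'text'; text: string } => p.type === 'text')
    .map(p => p.text)
    .join('')
}
```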
## Showing tool calls in the UI

If you want users to see what the agent is doing (e.g., "Searching orders…", "Looking up the user…"), inspect `parts`:
```tsx
messages.map((m, i) => (
  <div key={i}>
    {m.parts?.map((p, j) => {
      if (p.type === 'text') return <span key={j}>{p.text}</span>
      if (p.type === 'tool-call')
        return <ToolBadge key={j} name={p.toolName} args={p.args} />
      if (p.type === 'tool-result')
        return <ToolResult key={j} result={p.result} />
      return null // ignore any other part types
    })}
  </div>
))
```
This pattern gives you ChatGPT-style "thinking" indicators for free.
## Stopping mid-stream

`stop()` cancels the current generation. The partial response is kept in `messages` (truncated), and memory saves whatever was generated up to that point.
Useful for:
- A "Stop generating" button
- Cancelling when the user navigates away
- Rate-limiting a runaway agent
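For the rate-limiting case, a small watchdog works: arm a timer that calls `stop`, and clear it if the stream finishes first. A sketch (`watchdog` is a hypothetical helper, not part of the SDK):

```ts
// Cancels a runaway stream: calls `stop` after `ms` milliseconds
// unless the returned cleanup function runs first.
function watchdog(stop: () => void, ms: number): () => void {
  const timer = setTimeout(stop, ms)
  return () => clearTimeout(timer)
}

// Usage with useChat (assumed wiring):
//   const cancel = watchdog(stop, 30_000) // stop after 30s
//   ...call cancel() when isStreaming flips back to false
```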
## Without the SDK
If you're not using React, or want to build your own UI, call the endpoint directly. Agent endpoints stream using the Vercel AI SDK data stream format (a newline-delimited event stream over HTTP).
```ts
const res = await fetch(`${DYPAI_URL}/api/v0/my_agent_endpoint`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${jwt}`,
  },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Hello' }],
    session_id: 'conv-123',
  }),
})

const reader = res.body.getReader()
const decoder = new TextDecoder()

while (true) {
  const { value, done } = await reader.read()
  if (done) break
  // Each chunk contains one or more stream events.
  // See the Vercel AI SDK docs for the full protocol.
  console.log(decoder.decode(value))
}
```
For most cases, use the SDK — it handles reconnection, error boundaries, and message formatting for you.
## Non-streaming calls

If you need a final response as a plain JSON value (for a server-to-server call, a cron, or a webhook that doesn't handle streams), call the agent like any other endpoint with `dypai.api.post`:
```ts
const { data, error } = await dypai.api.post('my_agent_endpoint', {
  prompt: 'Summarize today\'s orders',
})

console.log(data.content) // final text answer
console.log(data.usage)   // tokens used
console.log(data.steps)   // tool calls made
```
DYPAI automatically waits for the full response and returns the final object.
## Common gotchas

| Problem | Fix |
|---|---|
| Messages not showing up | Make sure the agent endpoint is set to `jwt` auth and the SDK is authenticated. |
| `messages` resets on unmount | `useChat` state is local. For persistent conversations, use memory with a `sessionId`. |
| Streaming stalls | Check `max_iterations`; if the agent is looping on tools, it may just take longer than you'd expect. |
| Tool calls not shown | The SDK only surfaces tool-call parts when you read `message.parts`. `message.content` only includes the final text. |