AI Integration
DYPAI is designed from the ground up for the AI era, with native support for the Model Context Protocol (MCP), intelligent agents that execute multi-step tasks, and seamless integration with leading LLM providers.
Overview
Every DYPAI project is AI-ready out of the box. The platform exposes your database, storage, and edge functions as structured tools that any MCP-compatible LLM can invoke directly. This means you can connect Claude, GPT-4, or any other supported model to your backend and let it query data, upload files, and run functions -- all through a standardized protocol with built-in security and auditing.
The AI integration layer provides five core capabilities:
- MCP Protocol: A standardized interface for LLMs to interact with your DYPAI backend
- Intelligent Agents: Autonomous agents that execute complex, multi-step tasks using available tools
- Multi-model Support: Connect any compatible LLM provider and switch models without changing your integration code
- Custom Tools: Extend the built-in toolset with your own edge functions exposed as MCP tools
- Automated Workflows: Trigger agent actions based on database events, webhooks, or schedules
MCP Protocol
The Model Context Protocol (MCP) is an open standard that defines how LLMs communicate with external tools and data sources. DYPAI implements a full MCP server that exposes your project resources as discoverable, typed tools. Any MCP-compatible client -- including Claude Desktop, Cursor, and custom applications -- can connect and start interacting with your backend immediately.
To connect an MCP client to your DYPAI project, add the following configuration to your client's MCP settings file:
```json
{
  "mcpServers": {
    "dypai": {
      "command": "dypai",
      "args": ["mcp", "serve"],
      "env": {
        "DYPAI_PROJECT_ID": "your-project-id",
        "DYPAI_API_KEY": "your-api-key"
      }
    }
  }
}
```

Once connected, the MCP server automatically discovers your project's schema and exposes the appropriate tools. The LLM can then see the available tables, storage buckets, and edge functions, and use them to fulfill user requests.
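If you want to connect from your own code rather than a desktop client, the same server can be reached programmatically. The sketch below is a minimal example that uses the open-source MCP Python SDK (the mcp package) to launch dypai mcp serve over stdio and list the tools it exposes; the imports shown belong to that SDK, not to the DYPAI SDK, and the environment values are placeholders.

```python
# Minimal sketch: connect to the DYPAI MCP server over stdio and list its tools.
# Requires the open-source MCP Python SDK (pip install mcp); env values are placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="dypai",
    args=["mcp", "serve"],
    env={
        "DYPAI_PROJECT_ID": "your-project-id",
        "DYPAI_API_KEY": "your-api-key",
    },
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```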
Available Tools
When MCP is enabled, the following tools are exposed to connected LLMs. Each tool includes a typed schema that the model uses to construct valid requests, along with descriptions that help the model choose the right tool for each task.
| Tool | Description | Example Use |
|---|---|---|
| database_query | Execute read-only SQL queries against your database | Retrieve sales figures, search records, generate reports |
| database_insert | Insert one or more rows into a table | Create new records from user input or external data |
| database_update | Update existing records matching a condition | Change order status, update user profiles |
| database_delete | Delete records matching a condition with confirmation | Remove expired entries, clean up test data |
| database_schema | Inspect table structures, columns, and relationships | Understand the data model before writing queries |
| storage_upload | Upload a file to a specified bucket and path | Save generated reports, store processed images |
| storage_list | List files in a bucket or folder with metadata | Browse available files, check if a file exists |
| storage_download | Download a file's contents for processing | Read CSV data, parse uploaded documents |
| storage_delete | Remove a file from a bucket permanently | Clean up temporary files, remove outdated assets |
| api_call | Call a REST API endpoint on your project | Trigger custom endpoints, interact with your API layer |
| edge_function | Invoke an edge function with a JSON payload | Run data processing, call external services, execute AI pipelines |
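Continuing the client sketch above, a connected session can invoke any of these tools with call_tool. The example below runs database_query; the argument name sql is an assumption about DYPAI's tool schema, so check the typed schema returned by list_tools for the actual parameter names.

```python
# Sketch: invoke the database_query tool from an active MCP ClientSession.
# The "sql" argument name is illustrative; the tool's typed schema
# (returned by list_tools) is the authoritative source for parameter names.
result = await session.call_tool(
    "database_query",
    arguments={"sql": "SELECT id, total FROM orders ORDER BY total DESC LIMIT 5"},
)

# Tool results come back as MCP content blocks; print the text blocks.
for block in result.content:
    if block.type == "text":
        print(block.text)
```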
Intelligent Agents
DYPAI agents are autonomous programs that use LLMs and tools to accomplish complex tasks. Unlike simple API calls, agents can plan multi-step workflows, maintain context across interactions, and adapt their approach based on intermediate results. You define an agent with a name, a set of instructions, and the tools it is allowed to use.
DYPAI provides predefined agent types optimized for common business scenarios. You can use them directly or customize their instructions to fit your specific needs:
| Agent Type | Default Tools | Description |
|---|---|---|
| sales-analyst | database_query, edge_function | Analyzes sales data, generates revenue reports, identifies trends |
| support-agent | database_query, database_update, api_call | Handles customer inquiries, updates ticket status, escalates issues |
| data-processor | database_query, database_insert, storage_download | Ingests data from files, transforms records, populates tables |
| content-moderator | database_query, database_update, database_delete | Reviews user-generated content, flags violations, enforces policies |
Here is an example of configuring and running an agent using the Python SDK:
```python
import dypai

# Create a sales analyst agent
agent = dypai.Agent(
    name="sales-analyst",
    model="claude-sonnet",
    instructions="""
    You are a sales data analyst. When asked for reports:
    1. Query the orders and products tables to gather raw data.
    2. Calculate key metrics: total revenue, average order value,
       top-selling products, and period-over-period growth.
    3. Format results as a structured summary with numbers.
    4. Highlight any anomalies or notable trends.
    """,
    tools=["database_query", "database_schema", "edge_function"],
    max_tokens=4096,
    temperature=0.1,
)

# Run the agent with a natural language prompt
result = await agent.run(
    "Generate a sales report for January 2026, comparing it to December 2025. "
    "Include top 5 products by revenue."
)

print(result.output)

# The agent will execute multiple tool calls autonomously:
# 1. Inspect the database schema
# 2. Query January 2026 orders
# 3. Query December 2025 orders for comparison
# 4. Calculate metrics and return the formatted report
```

Supported Models
DYPAI's AI integration works with multiple LLM providers. You can specify the model when creating an agent or making direct API calls. All models have access to the same set of MCP tools.
| Model | Provider | Best For |
|---|---|---|
| Claude Sonnet | Anthropic | General-purpose tasks, analysis, and code generation |
| Claude Opus | Anthropic | Complex reasoning, research, and multi-step planning |
| GPT-4 | OpenAI | Broad knowledge, creative writing, and general tasks |
| GPT-4o | OpenAI | Fast responses, multimodal input, and cost-efficient tasks |
To use a specific model, set the model parameter when creating an agent or pass it in the API request. You must configure the corresponding provider API key in your project settings before using a model.
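As an illustration, switching providers is a one-parameter change on the agent definition. The sketch below is hypothetical: the model identifier strings ("claude-opus", "gpt-4o") and the shortened instructions are assumptions, so use the exact model names listed in your project settings.

```python
# Sketch: the same agent definition pointed at two different providers.
# Model identifier strings are illustrative; confirm the exact names
# your project accepts before relying on them.
analyst_opus = dypai.Agent(
    name="sales-analyst",
    model="claude-opus",   # Anthropic: deeper multi-step reasoning
    instructions="You are a sales data analyst.",
    tools=["database_query", "database_schema"],
)

analyst_gpt4o = dypai.Agent(
    name="sales-analyst",
    model="gpt-4o",        # OpenAI: fast, cost-efficient responses
    instructions="You are a sales data analyst.",
    tools=["database_query", "database_schema"],
)
```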
Security
AI integrations introduce unique security considerations. DYPAI provides multiple layers of protection to ensure that LLMs can only access the data and operations you explicitly allow:
- API key scoping: Create dedicated API keys for AI integrations with limited permissions. A key can be restricted to read-only database access or specific buckets.
- Tool permissions: Each agent is configured with an explicit list of allowed tools. An agent cannot use tools outside its granted set, even if the LLM requests them (see the sketch after this list).
- Token limits: Set maximum token budgets per agent invocation to control costs and prevent runaway conversations. The agent stops gracefully when it approaches the limit.
- Audit logging: Every tool invocation is logged with the agent name, model used, input parameters, and timestamp. Logs are available in the dashboard and via the API.
- Row-level security: Database tools respect your existing RLS policies, so an agent running with a user's context can only access rows that user is authorized to see.
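As a concrete example of tool permissions and token limits, here is a sketch of a deliberately constrained support agent: it can read and update records but cannot delete them, touch storage, or call edge functions, and its token budget caps the cost of a single invocation. The constructor parameters mirror the earlier Python SDK example; the specific values are illustrative.

```python
# Sketch: a support agent locked down to the minimum tools it needs.
# The explicit allow-list excludes deletes, storage, and edge functions,
# and max_tokens bounds the cost of any single invocation.
support_agent = dypai.Agent(
    name="support-agent",
    model="claude-sonnet",
    instructions="""
    You handle customer support tickets. Look up the relevant ticket,
    summarize its status, and update it only when explicitly asked.
    Never modify records outside the tickets table.
    """,
    tools=["database_query", "database_update"],  # explicit allow-list
    max_tokens=2048,                              # per-invocation token budget
    temperature=0.0,
)
```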
Next steps
- Configure MCP -- Set up MCP servers and connect your LLM clients
- Create agents -- Build and deploy intelligent agents for your use case
- AI functions in Python -- Write Python edge functions that call LLM APIs