
Chat with AI Agent

POST
/api/v3/organizations/{organisation}/ai/agents/{agentId}/chat

Initiates a chat session with a specific AI agent. The agent’s configuration (system prompt, temperature, model, allowed tools) is applied automatically.

Key Features:
- Session Management: automatic session creation and state tracking
- Multi-turn Conversations: full conversation history maintained server-side
- The agent’s system prompt is prepended to the conversation
- Only the agent’s allowed tools are available
- All tools are auto-executed on cloud (no client confirmation needed)
- Temperature and model come from the agent configuration
- Supports sync, streaming, and async modes

Execution Modes:
- Sync Mode (default): standard JSON response, waits for completion
- Streaming Mode: set stream: true for SSE token-by-token responses
- Async Mode: set async: true for long-running tasks with polling

Async/Durable Mode (async: true):
- Returns immediately with requestId and pollUrl (HTTP 202)
- Uses AWS Lambda Durable Functions for long-running agent tasks
- All tools are auto-executed on cloud (no waiting_callback state)
- Poll /ai/chat/executions/{requestId} for status
- Ideal for agents with slow tools (image generation, web search, etc.)

Session Support:
- Omit sessionId to create a new session automatically
- Include sessionId to continue an existing conversation
- Sessions expire after 60 minutes of inactivity
- Sessions work in all modes (sync, streaming, async)
- Use /sessions/{sessionId} to retrieve the full conversation history
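For illustration, a minimal sync-mode call might look like the sketch below. The base URL (https://api.example.com) and the bearer-token Authorization header are assumptions, not part of this specification; substitute your actual host and credentials.

```ts
// Minimal sync-mode chat request.
// Assumptions: API host and bearer-token auth are placeholders.
const BASE_URL = "https://api.example.com"; // assumption: replace with your API host
const token = "<YOUR_API_TOKEN>";           // assumption: bearer-token authorization

async function chatWithAgent(organisation: string, agentId: string, message: string) {
  const res = await fetch(
    `${BASE_URL}/api/v3/organizations/${organisation}/ai/agents/${agentId}/chat`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ message }),
    }
  );
  if (!res.ok) throw new Error(`Chat failed: ${res.status}`);
  return res.json();
}
```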

Authorizations

Parameters

Path Parameters

organisation (string, required): The organisation ID
agentId (string, format: uuid, required): The agent ID

Request Body required

object

message (string, required): The user’s message to the agent
sessionId (string, format: uuid): Optional session ID to continue a conversation
userId (string): Optional user identifier for session isolation
stream (boolean): Whether to stream the response (SSE)
async (boolean): Enable async/durable execution mode. When true, returns 202 with pollUrl. Use for long-running agent tasks.
system (string): Optional additional system prompt (appended to the agent’s configured prompt)
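As a sketch of the request body shape, the TypeScript interface below mirrors the fields listed above; the UUID and message are placeholders.

```ts
// Request body fields as listed above (only `message` is required).
interface AgentChatRequest {
  message: string;      // the user's message to the agent
  sessionId?: string;   // uuid; continue an existing conversation
  userId?: string;      // user identifier for session isolation
  stream?: boolean;     // request SSE streaming
  async?: boolean;      // async/durable mode; server replies 202 with pollUrl
  system?: string;      // appended to the agent's configured system prompt
}

// Example payload continuing a session and opting into async execution.
const body: AgentChatRequest = {
  message: "What's the weather in Berlin tomorrow?",
  sessionId: "3f2b9a44-1c7e-4d2a-9c1d-8f6e2a7b5c10", // placeholder UUID
  async: true,
};
```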

Responses

200

Agent response generated successfully (sync mode)

object

sessionId (string, format: uuid)
response (object)
  text (string)
  stopReason (string)
  usage (object)
    inputTokens (integer)
    outputTokens (integer)
toolResults (Array<object>)
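Building on the chatWithAgent sketch above, a sync-mode caller can read the generated text and token usage from the 200 body. The typing below follows the schema as listed; the exact placement of toolResults is an assumption.

```ts
// Reading a sync-mode (200) response; placement of toolResults is an assumption.
interface AgentChatResponse {
  sessionId: string;
  response: {
    text: string;
    stopReason: string;
    usage: { inputTokens: number; outputTokens: number };
  };
  toolResults?: object[]; // results of auto-executed tools (placement assumed)
}

const data = (await chatWithAgent(
  "my-org",                               // placeholder organisation ID
  "2c9e7f3a-5d1b-4e8c-9a0f-6b4d2e8c1a57", // placeholder agent UUID
  "Hello"
)) as AgentChatResponse;

console.log(data.response.text);
console.log(`Tokens: ${data.response.usage.inputTokens} in / ${data.response.usage.outputTokens} out`);
// Reuse data.sessionId on the next call to continue the conversation.
```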

202

Async execution started (when async: true in request)

object

requestId (string, required): Unique request identifier for polling
  Example: 90fe0c53-a41b-4b28-b19e-5e900b3df959
agentId (string, format: uuid, required): The agent processing the request
agentName (string): Human-readable agent name
  Example: Weather Assistant
sessionId (string): Session ID (if provided)
status (string, required): Initial status
  Allowed values: queued
  Example: queued
message (string)
  Example: Agent execution started. Poll the status endpoint for updates.
pollUrl (string, required): URL to poll for execution status
  Example: /ai/chat/executions/90fe0c53-a41b-4b28-b19e-5e900b3df959
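In async mode, poll the returned pollUrl until the execution reaches a terminal state. The sketch below assumes the poll endpoint is served under the same API host and that its response carries a status field whose non-queued/running values are terminal; the poll response schema is not documented in this section.

```ts
// Poll an async execution until it leaves the queued/running states.
// Assumptions: pollUrl is relative to the same host; the poll response
// exposes a `status` field with terminal values such as "completed" or "failed".
async function pollExecution(pollUrl: string, intervalMs = 2000): Promise<unknown> {
  while (true) {
    const res = await fetch(`${BASE_URL}${pollUrl}`, {
      headers: { Authorization: `Bearer ${token}` },
    });
    if (!res.ok) throw new Error(`Polling failed: ${res.status}`);
    const execution = (await res.json()) as { status: string };
    if (execution.status !== "queued" && execution.status !== "running") {
      return execution; // terminal state reached
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```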

400

Invalid request parameters

403

Access denied

404

Agent not found

500

Failed to chat with agent