Send chat messages to AI models. This endpoint routes your request to the configured AI provider and returns the response. System context messages are supported for finer control over model behavior.
```json
{
  "model": "string (optional, default: 'google/gemini-2.5-flash')",
  "responsetype": "string (optional, default: 'stream')",
  "messages": [
    {
      "role": "user | system",
      "prompt": "string (required)"
    }
  ]
}
```

- `model`: The AI model to use. Format: `provider/model-name` (e.g., `google/gemini-2.5-flash`).
- `responsetype`: Response format type. Currently supports `"stream"`.
- `messages`: Array of message objects. Each message must have:
  - `role`: `"user"` for user messages that form the main prompt, or `"system"` for system messages used as context/instructions.
  - `prompt`: The message content (required).

Note: System messages are combined and used as context. At least one user message is required.
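The rules above can be sketched as a client-side pre-check. This is an illustrative helper (the name `validate_messages` is hypothetical; the API performs its own validation server-side):

```python
def validate_messages(messages):
    """Check a messages array against the documented rules:
    every entry needs a role of 'user' or 'system' and a non-empty
    prompt, and at least one 'user' message must be present.
    Hypothetical client-side helper, not part of the API."""
    if not isinstance(messages, list) or not messages:
        raise ValueError("messages must be a non-empty array")
    for msg in messages:
        if msg.get("role") not in ("user", "system"):
            raise ValueError(f"invalid role: {msg.get('role')!r}")
        if not msg.get("prompt"):
            raise ValueError("prompt is required")
    if not any(m["role"] == "user" for m in messages):
        raise ValueError("at least one user message is required")
    return True
```

Running this before sending a request surfaces malformed payloads locally instead of burning an API call.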
```shell
curl -X POST https://api.nextcraftai.com/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-2.5-flash",
    "responsetype": "stream",
    "messages": [
      {
        "role": "user",
        "prompt": "What is the date and time today"
      }
    ]
  }'
```

```shell
curl -X POST https://api.nextcraftai.com/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-2.5-flash",
    "responsetype": "stream",
    "messages": [
      {
        "role": "system",
        "prompt": "You are a helpful assistant that provides accurate information."
      },
      {
        "role": "user",
        "prompt": "What is the date and time today"
      }
    ]
  }'
```

System messages provide context and instructions to the AI model. They are combined and prepended to user messages.
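That "combined and prepended" behavior can be illustrated with a small sketch. Note this is an assumption about the effective prompt the model sees: the exact joining format the server uses is not specified, and `build_context` is a hypothetical name.

```python
def build_context(messages):
    """Illustrative only: merge all system prompts into one context
    block and prepend it to the user prompts, as the docs describe.
    The actual server-side joining format is an assumption."""
    system_parts = [m["prompt"] for m in messages if m["role"] == "system"]
    user_parts = [m["prompt"] for m in messages if m["role"] == "user"]
    context = "\n".join(system_parts)
    return (context + "\n\n" if context else "") + "\n".join(user_parts)
```

With the second curl example above, the model would effectively receive the system instruction followed by the user question.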
```json
{
  "id": "7d52dacc-83f9-404f-b1e1-f00d3382453e",
  "created": 1763369195,
  "model": "google/gemini-2.5-flash",
  "responsetype": "stream",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I'm sorry, but I do not have access to real-time information, including the current date and time. My knowledge cut-off is **June 2024**.\n\nTo find out the current date and time, please check your device's clock."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 7,
    "completion_tokens": 56,
    "total_tokens": 63
  }
}
```

- `id`: Unique identifier for this request.
- `created`: Unix timestamp when the response was created.
- `model`: The model that processed the request.
- `responsetype`: The response format that was used.
- `choices`: Array containing the AI's response. Each choice holds a `message` (with `role` and `content`) and a `finish_reason`.
- `usage`: Token usage statistics (`prompt_tokens`, `completion_tokens`, `total_tokens`).
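Given that response shape, pulling out the assistant's text and token count is a couple of dictionary lookups. A minimal parsing sketch (the function name `extract_reply` is hypothetical):

```python
import json

def extract_reply(raw):
    """Parse a /v1/chat response body and return the assistant's
    text plus total token usage, following the documented shape:
    choices[0].message.content and usage.total_tokens."""
    data = json.loads(raw)
    reply = data["choices"][0]["message"]["content"]
    total_tokens = data["usage"]["total_tokens"]
    return reply, total_tokens
```

For a non-streaming read this is all that is needed; a `"stream"` response would instead be consumed incrementally as chunks arrive.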