Text Generation (Chat Completions)
A chat-completion interface built on MiniMax large language models. It is compatible with the OpenAI Chat Completions API format and can be used as a drop-in replacement for OpenAI calls.
API Endpoints
POST
/text/chatcompletion_v2
Create chat completion
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model name, e.g. MiniMax-M2.5 |
| messages | array | Required | Array of conversation messages |
| temperature | number | Optional | Sampling temperature, range 0-2, default 0.7 |
| max_tokens | integer | Optional | Maximum number of tokens to generate |
| stream | boolean | Optional | Whether to use streaming output |
| top_p | number | Optional | Nucleus sampling parameter, range 0-1 |
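The `messages` array carries the conversation history in order; each entry has a `role` (`system`, `user`, or `assistant`) and a `content` string. A short sketch of maintaining a multi-turn history (the conversation text is illustrative):

```python
# Build a multi-turn history for the `messages` parameter.
# "system" sets behavior; "user" and "assistant" alternate turns.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]

# Append the next user turn before sending the follow-up request:
messages.append({"role": "user", "content": "And its population?"})

print(len(messages))         # 4
print(messages[-1]["role"])  # user
```

Because the API is stateless, the full history (including prior assistant replies) must be resent on every request.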
Request Example
{
"model": "MiniMax-M2.5",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello, please introduce yourself"}
],
"temperature": 0.7,
"max_tokens": 1024,
"stream": false
}

Response Example
{
"id": "chatcmpl-xxxxx",
"object": "chat.completion",
"created": 1234567890,
"model": "MiniMax-M2.5",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! I'm an AI assistant from MiniMax..."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 20,
"completion_tokens": 50,
"total_tokens": 70
}
}

Code Examples
import requests
url = "https://your-proxy-domain.com/v1/text/chatcompletion_v2"
headers = {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
data = {
"model": "MiniMax-M2.5",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello"}
],
"temperature": 0.7,
"max_tokens": 1024
}
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # raise on 4xx/5xx instead of silently printing an error body

result = response.json()
print(result["choices"][0]["message"]["content"])  # the assistant's reply
print(result["usage"])  # prompt/completion/total token counts
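When `stream` is true, the endpoint returns incremental chunks rather than one JSON body. The sketch below assumes the OpenAI-compatible Server-Sent-Events format (`data: {...}` lines terminated by `data: [DONE]`); the helper function and the simulated lines are illustrative, not captured server output.

```python
import json

def extract_delta(line: str):
    """Pull the incremental text out of one SSE line, if any.

    Assumes the OpenAI-compatible streaming format: each event is a
    'data: <json>' line whose chunk carries choices[0].delta.content,
    and the stream ends with 'data: [DONE]'.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0].get("delta", {}).get("content")

# Simulated stream; real code would set "stream": True in the request
# body and iterate response.iter_lines() from requests.post(..., stream=True).
lines = [
    'data: {"choices": [{"index": 0, "delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]
reply = "".join(d for d in map(extract_delta, lines) if d)
print(reply)  # Hello!
```

Streaming lets the client render tokens as they arrive instead of waiting for the full completion.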