Text Generation (Chat Completions)

Chat completion interface backed by MiniMax large language models. It is compatible with the OpenAI Chat Completions API format and can be used as a drop-in replacement for OpenAI calls.

API Endpoints

POST /v1/text/chatcompletion_v2

Create chat completion

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Required | Model name, e.g. MiniMax-M2.5 |
| messages | array | Required | Array of conversation messages |
| temperature | number | Optional | Sampling temperature, range 0-2, default 0.7 |
| max_tokens | integer | Optional | Maximum number of tokens to generate |
| stream | boolean | Optional | Whether to use streaming output |
| top_p | number | Optional | Nucleus sampling parameter, range 0-1 |

Request Example

{
  "model": "MiniMax-M2.5",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, please introduce yourself"}
  ],
  "temperature": 0.7,
  "max_tokens": 1024,
  "stream": false
}

Response Example

{
  "id": "chatcmpl-xxxxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "MiniMax-M2.5",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm an AI assistant from MiniMax..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 50,
    "total_tokens": 70
  }
}
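
The fields of interest in the response body are the assistant reply at `choices[0].message.content` and the token counts under `usage`. A minimal sketch of pulling them out, using the sample values from the response above:

```python
# Sample response body (abbreviated from the example above).
resp_body = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! I'm an AI assistant from MiniMax..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 20, "completion_tokens": 50, "total_tokens": 70},
}

# The assistant's text lives in the first choice's message.
reply = resp_body["choices"][0]["message"]["content"]

# Token accounting for billing / budget tracking.
total_tokens = resp_body["usage"]["total_tokens"]

print(reply)
print(total_tokens)
```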

Code Examples

import requests

url = "https://your-proxy-domain.com/v1/text/chatcompletion_v2"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}
data = {
    "model": "MiniMax-M2.5",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"}
    ],
    "temperature": 0.7,
    "max_tokens": 1024
}

response = requests.post(url, headers=headers, json=data, timeout=60)
response.raise_for_status()  # fail fast on HTTP errors
print(response.json())
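
When `stream` is `true`, the response arrives incrementally rather than as a single JSON body. A sketch of consuming such a stream, assuming OpenAI-style server-sent events (`data: {...}` lines with content deltas under `choices[].delta.content`, terminated by `data: [DONE]`) — verify the exact framing against the provider's streaming docs:

```python
import json

def iter_stream_content(lines):
    """Yield assistant content deltas from OpenAI-style SSE lines.

    Each event line is assumed to look like:
        data: {"choices": [{"delta": {"content": "..."}}]}
    and the stream to end with:
        data: [DONE]
    """
    for raw in lines:
        line = raw.decode("utf-8") if isinstance(raw, (bytes, bytearray)) else raw
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and other fields
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                yield content

# Usage with requests (network call, shown for illustration):
# resp = requests.post(url, headers=headers,
#                      json={**data, "stream": True}, stream=True, timeout=60)
# for piece in iter_stream_content(resp.iter_lines()):
#     print(piece, end="", flush=True)
```

Keeping the parsing in a generator makes it easy to test offline and to swap the transport (e.g. `requests` vs. `httpx`) without touching the SSE handling.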