Claude Sonnet 4.6
Anthropic
Code assistant · Long documents · Chat & customer support · Automation & agents
model: claude-sonnet-4-6
Use cases
- Cursor / Claude Code refactors and cross-file Q&A
- 80-page contract Q&A and summarization in one shot
- Multi-turn customer support with bilingual context
Code samples
cURL
curl https://api.sealink.asia/v1/chat/completions \
  -H "Authorization: Bearer $SEALINK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello."}]
  }'
Python (OpenAI SDK)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sealink.asia/v1",
    api_key="<your-sealink-key>",
)
resp = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Hello."}],
)
print(resp.choices[0].message.content)
Node.js (OpenAI SDK)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.sealink.asia/v1",
  apiKey: process.env.SEALINK_API_KEY,
});

const resp = await client.chat.completions.create({
  model: "claude-sonnet-4-6",
  messages: [{ role: "user", content: "Hello." }],
});
console.log(resp.choices[0].message.content);
After you sign in, the base_url and API key in these samples are filled in automatically.
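Since the Capabilities list notes streaming support, passing `stream=True` to the same endpoint should return content as a sequence of delta chunks (server-sent events) instead of one JSON body. A minimal sketch of collecting those deltas in the standard OpenAI chat-completions stream format; the `data:` lines below are illustrative samples, not captured from the real API:

```python
import json

# Illustrative SSE lines in the OpenAI chat-completions streaming shape.
sample_sse = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo."}}]}',
    "data: [DONE]",
]

def collect_stream(lines):
    """Concatenate content deltas from chat-completions SSE lines."""
    out = []
    for line in lines:
        payload = line.removeprefix("data: ")
        if payload == "[DONE]":  # sentinel that ends the stream
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if delta.get("content"):
            out.append(delta["content"])
    return "".join(out)

print(collect_stream(sample_sse))  # → Hello.
```

With the OpenAI SDK you would not parse SSE by hand: iterating the object returned by `client.chat.completions.create(..., stream=True)` yields the same chunks already decoded.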
Performance
Last 30 days, measured by SeaLink's own probes; at launch, values are model-tier estimates until live probe history accumulates.
- TTFT P50: 872 ms
- TTFT P95: 1,857 ms
- Tokens/sec: 76
- 30-day uptime: 99.95%
Capabilities & limits
- Context length: 200K tokens
- Capabilities: tools · streaming · cache
- Status: operational
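The tools capability presumably uses the OpenAI-style function-calling shape, given the gateway's OpenAI-compatible endpoint: you pass a `tools` array with JSON-schema parameters, and the assistant message may come back with `tool_calls` whose arguments are a JSON string. A sketch under that assumption; the function name, schema, and sample response here are illustrative, not from SeaLink's docs:

```python
import json

# Illustrative tool definition in the OpenAI chat-completions `tools` shape.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Illustrative shape of an assistant message that requests a tool call.
sample_message = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Singapore"}'},
    }],
}

call = sample_message["tool_calls"][0]["function"]
args = json.loads(call["arguments"])  # arguments arrive as a JSON string
print(call["name"], args["city"])  # → get_weather Singapore
```

To complete the loop you would run the named function yourself, then send its result back as a `role: "tool"` message referencing the call's `id`.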