BGE-M3
BAAI
Knowledge search
model: bge-m3
Use cases
- Chinese + SEA languages retrieval
- Free-tier RAG
- Cross-lingual search
Code samples
cURL
curl https://api.sealink.asia/v1/embeddings \
  -H "Authorization: Bearer $SEALINK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "bge-m3", "input": "SeaLink routes every model through one API."}'
Python (OpenAI SDK)
from openai import OpenAI

client = OpenAI(base_url="https://api.sealink.asia/v1", api_key="<your-sealink-key>")

resp = client.embeddings.create(
    model="bge-m3",
    input="SeaLink routes every model through one API.",
)
print(resp.data[0].embedding[:5])
Node.js (OpenAI SDK)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.sealink.asia/v1",
  apiKey: process.env.SEALINK_API_KEY,
});

const resp = await client.embeddings.create({
  model: "bge-m3",
  input: "SeaLink routes every model through one API.",
});
console.log(resp.data[0].embedding.slice(0, 5));
Once you sign in, your base_url and API key will be inlined automatically.
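For retrieval use cases like those above, the vectors returned by the embeddings endpoint are typically compared with cosine similarity. A minimal sketch in plain Python, using toy 3-dimensional vectors in place of real bge-m3 embeddings (the model's dense vectors are 1024-dimensional); the document texts here are hypothetical:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of a query and two documents.
query = [0.1, 0.3, 0.5]
doc_a = [0.2, 0.6, 1.0]   # same direction as the query -> similarity 1.0
doc_b = [0.5, -0.3, 0.1]

scores = {"doc_a": cosine_similarity(query, doc_a),
          "doc_b": cosine_similarity(query, doc_b)}
best = max(scores, key=scores.get)
print(best)  # doc_a
```

In practice you would embed the query and each document with the same `bge-m3` model, then rank documents by this score.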
Performance
Last 30 days from SeaLink's own probes; launch values use model-tier estimates until live probe history is available.
TTFT P50
386ms
TTFT P95
787ms
Tokens/sec
150
30d uptime
99.90%
Capabilities & limits
- Context length: 8K tokens
- Capabilities: —
- Status: operational