Docs
Run the Quickstart in five minutes. Use OpenAI-compatible endpoints. Connect Cursor, Claude Code, Dify, and n8n with your SeaLink API key.

Concepts
What are tokens, models, and RAG? A 30-second primer for non-developers.

Quickstart
Make your first chat completions call in 5 minutes.
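A minimal sketch of that first call using only the Python standard library. The base URL and key below are placeholders; substitute the values from your SeaLink dashboard.

```python
import json
import urllib.request

BASE_URL = "https://api.sealink.example/v1"  # placeholder endpoint
API_KEY = "sk-..."                           # placeholder key

payload = {
    "model": "gpt-4o-mini",  # any model available on your plan
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}

# Build an OpenAI-compatible /chat/completions request.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The official OpenAI SDKs build exactly this request for you; see the OpenAI SDK page below.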

Cursor
Add SeaLink as a custom model provider in Cursor settings.

Claude Code
Configure Claude Code via environment variables.

OpenAI SDK
Change only base_url and api_key. Examples in Python, Node.js, and Go.

Dify
Add SeaLink as a model provider in Dify settings.

n8n
Use the HTTP Request node with SeaLink's OpenAI-compatible endpoint.

LangChain
Swap base_url and api_key in ChatOpenAI. Python and JS.

Windsurf
Add SeaLink as a custom model provider in Windsurf settings.

Function calling
Use OpenAI tool syntax with every SeaLink model; SeaLink handles the per-vendor translation.
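As a sketch, this is the OpenAI tool schema the endpoint accepts regardless of the underlying vendor. The function name, parameters, and model id here are illustrative only.

```python
import json

# Illustrative tool definition in the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "claude-sonnet",  # hypothetical model id; any SeaLink model works
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": tools,
}
body = json.dumps(payload)  # send as the POST body to /chat/completions
```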

Vision (image input)
Send images via OpenAI-style image_url to multimodal models.
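A sketch of the OpenAI-style multimodal message shape: content becomes a list of typed parts instead of a string. The model id and image URL are placeholders.

```python
import json

# Illustrative multimodal payload; content is a list of typed parts.
payload = {
    "model": "gpt-4o",  # placeholder; use any multimodal model on your plan
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.png"}},
        ],
    }],
}
body = json.dumps(payload)  # send as the POST body to /chat/completions
```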

Prompt caching
Cut repeat-prompt costs by 50–90% on Claude models; caching is automatic on OpenAI models.

Streaming
Stream tokens over SSE to cut perceived latency by 3–5×.
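The SSE frames follow the OpenAI delta format. This minimal standard-library parser, run here against a canned sample stream, shows how the content chunks are reassembled:

```python
import json

def iter_sse_chunks(lines):
    """Yield content deltas from OpenAI-style 'data: {...}' SSE lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alives, etc.
        data = line[len("data: "):]
        if data == "[DONE]":
            return  # end-of-stream sentinel
        delta = json.loads(data)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Canned sample of a streamed response, for illustration.
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_sse_chunks(sample)))  # → Hello
```

In a real client you would feed this the line iterator of a response opened with stream=True (or the raw HTTP body split on newlines).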

Embeddings
Use text-embedding-3-large for RAG and semantic search.
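Once the /embeddings endpoint returns vectors, retrieval is a similarity ranking. A sketch of cosine similarity over toy 3-dimensional vectors standing in for real text-embedding-3-large outputs:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy vectors for illustration; real embeddings have thousands of dimensions.
query = [0.1, 0.9, 0.2]
doc_a = [0.1, 0.8, 0.3]   # similar direction to the query
doc_b = [0.9, 0.1, 0.0]   # dissimilar direction

print(cosine(query, doc_a) > cosine(query, doc_b))  # → True
```

Ranking documents by this score against the query embedding is the core retrieval step in RAG.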

Rate limits
RPM and TPM limits by plan, rate-limit headers, and correct backoff handling.
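A sketch of exponential backoff with jitter on HTTP 429, demonstrated against a stub; a real client should prefer the Retry-After header when the server sends one. Names and delays here are illustrative.

```python
import random
import time

def with_backoff(call, max_retries=5, base=1.0):
    """Retry `call` on HTTP 429, sleeping base * 2**attempt plus jitter."""
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:
            return body
        time.sleep(base * (2 ** attempt) * (1 + random.random()))
    raise RuntimeError("rate limited after retries")

# Demo with a stub that is throttled once, then succeeds.
responses = iter([(429, None), (200, "ok")])
print(with_backoff(lambda: next(responses), base=0.01))  # → ok
```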

Error codes
12 common HTTP statuses SeaLink returns and how to recover.
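As a sketch, a client usually splits statuses into retryable and fatal. The sets below follow common HTTP convention, not SeaLink's authoritative list; check the error-code reference for that.

```python
# Common convention only; consult the error-code reference for SeaLink's list.
RETRYABLE = {408, 429, 500, 502, 503, 504}  # transient: retry with backoff
FATAL = {400, 401, 403, 404, 413, 422}      # fix the request or key first

def should_retry(status: int) -> bool:
    """True if the status is worth retrying with backoff."""
    return status in RETRYABLE

print(should_retry(429), should_retry(401))  # → True False
```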