Quickstart
This guide gets you to a working integration in minutes, then walks through the first reliability checks you should add.
Start from a server environment where you can keep secrets private. Generate an API key in /dashboard/keys, store it in your secret manager, and expose it to your runtime as SYLICA_API_KEY. Avoid embedding root keys in browser bundles.
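As a minimal sketch, assuming a Node.js runtime where secrets arrive via `process.env`, you can fail fast at startup when the key is missing instead of discovering it on the first unauthenticated request (the helper name `requireApiKey` is illustrative, not part of any SDK):

```typescript
// Fail fast if SYLICA_API_KEY is absent or blank.
// Assumes a Node.js runtime; the function name is our own, not SDK API.
function requireApiKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.SYLICA_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error(
      "SYLICA_API_KEY is not set; generate a key in /dashboard/keys",
    );
  }
  return key;
}
```

Call `requireApiKey()` once during process startup so a misconfigured environment fails loudly before serving traffic.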
- Create and store an API key with least privilege for the environment.
- Set base URL to https://api.sylicaai.com/v1 in your OpenAI-compatible SDK.
- Send a streaming chat completion and confirm first-token latency in your logs.
- Log x-sylica-request-id for every request to simplify support debugging.
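The last item in the checklist above can be sketched with a raw fetch call, since response headers are easiest to read there; the header name `x-sylica-request-id` comes from the checklist, while the helper names are our own:

```typescript
// Pull the request id out of a response for structured logging.
// Falls back to "unknown" when the header is absent.
function requestIdOf(headers: Headers): string {
  return headers.get("x-sylica-request-id") ?? "unknown";
}

// Illustrative wrapper: send a chat completion and log its request id.
async function callWithLogging(body: unknown): Promise<Response> {
  const res = await fetch("https://api.sylicaai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.SYLICA_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  console.log(`sylica request id: ${requestIdOf(res.headers)}`);
  return res;
}
```

Attaching the request id to your own log lines lets support correlate a failing call without you replaying it.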
Quick Verification Request
```bash
curl https://api.sylicaai.com/v1/chat/completions \
  -H "Authorization: Bearer $SYLICA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sylica/auto",
    "messages": [
      {"role": "user", "content": "Write a short haiku about routing."}
    ],
    "stream": true
  }'
```

TypeScript Example
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.sylicaai.com/v1",
  apiKey: process.env.SYLICA_API_KEY,
});

const stream = await client.chat.completions.create({
  model: "anthropic/claude-sonnet-4.5",
  messages: [{ role: "user", content: "Hello from Sylica" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```

Operational Guidance
As soon as your first call works, add timeouts and a retry policy before expanding feature use. Early reliability work prevents hard-to-debug incidents once request volume grows.
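A minimal retry-with-backoff sketch is below; the attempt count and delays are illustrative defaults, not Sylica-mandated values. It retries only on thrown errors (network failures, timeouts), which is a deliberately blunt policy you would refine to check status codes:

```typescript
// Retry a promise-returning function with exponential backoff.
// attempts and baseDelayMs are assumptions for illustration.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Backoff doubles each attempt: 250ms, 500ms, 1000ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

For per-request timeouts, recent Node.js runtimes let you pass `signal: AbortSignal.timeout(30_000)` to `fetch`, so a hung request becomes a thrown error that `withRetry` can catch.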