Sylica

Documentation

Sylica Gateway v1

Last updated: April 20, 2026

(c) 2026 Sylica AI. The Unified Interface For LLMs.

Chat Completions

The chat completion contract is OpenAI-compatible, with optional Sylica routing extensions for provider control.

The required payload is intentionally small: model and messages. Everything else should be treated as workload-specific tuning. Start minimal, measure output quality and latency, then introduce advanced controls one by one.
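To make "start minimal" concrete, the sketch below builds a request carrying only the two required fields. The base URL and bearer-token header are placeholder assumptions, not confirmed values; see the Endpoints and Authentication pages for the real ones.

```python
import json
import urllib.request

# Placeholder assumptions -- substitute the values from the
# Endpoints and Authentication pages.
BASE_URL = "https://api.sylica.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

# The only required fields: model and messages.
payload = {
    "model": "sylica/auto",
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment to actually send
```

Everything beyond this payload is tuning: add one control, measure, then add the next.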

Request Field Contract

Field            | Type          | Required | Notes
model            | string        | yes      | Concrete model slug or meta-model.
messages         | array         | yes      | OpenAI-compatible message array with at least one entry.
stream           | boolean       | no       | Enables SSE token streaming when true.
temperature      | number        | no       | Sampling control from 0 to 2.
top_p            | number        | no       | Nucleus sampling probability from 0 to 1.
max_tokens       | integer       | no       | Maximum generated token count.
tools            | array         | no       | Function/tool schema, OpenAI-compatible.
tool_choice      | string|object | no       | Tool execution policy.
response_format  | object        | no       | Structured output mode (json_object or json_schema).
reasoning_effort | string        | no       | Model-dependent reasoning controls.
provider         | object        | no       | Sylica routing policy controls.
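The contract above can be enforced client-side before a request ever leaves your service, which turns a 400 from the gateway into an immediate local error. A minimal sketch; the helper name and validation rules are ours, derived from the table, not part of any SDK:

```python
def build_chat_request(model, messages, **options):
    """Build a chat-completion payload, enforcing the field contract.

    Raises ValueError for missing required fields or out-of-range
    sampling controls. (Illustrative helper -- not part of the API.)
    """
    if not model:
        raise ValueError("model is required")
    if not messages:
        raise ValueError("messages must contain at least one entry")

    temperature = options.get("temperature")
    if temperature is not None and not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")

    top_p = options.get("top_p")
    if top_p is not None and not 0 <= top_p <= 1:
        raise ValueError("top_p must be between 0 and 1")

    return {"model": model, "messages": messages, **options}
```

For example, `build_chat_request("sylica/auto", [{"role": "user", "content": "Hi"}], temperature=0.2)` returns a payload ready to serialize, while passing an empty messages array fails fast.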

Advanced Request Example

This request demonstrates structured output plus provider constraints. Use this pattern for business workflows where determinism and fallback policy both matter.

```json
{
  "model": "sylica/auto",
  "messages": [
    { "role": "system", "content": "Return concise JSON." },
    { "role": "user", "content": "Top 3 actions for onboarding latency." }
  ],
  "stream": true,
  "temperature": 0.2,
  "response_format": { "type": "json_object" },
  "provider": {
    "order": ["openai", "anthropic"],
    "require": ["openai", "anthropic", "google"],
    "allow_fallbacks": true
  }
}
```
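Because this request sets stream to true, the body arrives as OpenAI-style server-sent events: a series of `data:` lines, ending with a `data: [DONE]` sentinel (see the Streaming page for the full protocol). A minimal line parser, sketched under the assumption of that OpenAI-compatible framing:

```python
import json

def parse_sse_line(line):
    """Parse one SSE line from a streaming chat completion.

    Returns the decoded chunk dict, None for blank or non-data lines,
    and the string "DONE" when the stream is finished.
    (Sketch only -- assumes OpenAI-compatible SSE framing.)
    """
    line = line.strip()
    if not line or not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return "DONE"
    return json.loads(data)
```

In practice you would feed this each line of the response body and append `chunk["choices"][0]["delta"].get("content", "")` as chunks arrive.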

Non-Streaming Response Shape

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1760000000,
  "model": "openai/gpt-5-mini",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello!" },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 18,
    "completion_tokens": 9,
    "total_tokens": 27
  }
}
```
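Reading this shape back is mechanical: the assistant text lives at `choices[0].message.content`, and `usage.total_tokens` should equal prompt tokens plus completion tokens, which makes a cheap cross-check when reconciling against Billing and Credits. A small defensive-extraction sketch; the helper name is ours:

```python
def extract_completion(response):
    """Return (text, total_tokens) from a non-streaming completion.

    Sanity-checks that the usage totals add up before trusting them.
    (Illustrative helper -- not part of the API.)
    """
    choice = response["choices"][0]
    usage = response.get("usage", {})
    expected = usage.get("prompt_tokens", 0) + usage.get("completion_tokens", 0)
    if usage and usage.get("total_tokens") != expected:
        raise ValueError("usage totals do not add up")
    return choice["message"]["content"], usage.get("total_tokens", 0)
```

Applied to the example above, it yields the text "Hello!" and a total of 27 tokens; a `finish_reason` of "length" rather than "stop" would indicate truncation at max_tokens.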