The request schema, represented as a TypeScript type, describes the body of POST requests to the /api/v1/chat/completions endpoint. An example can be found in the Quick Start above.
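For orientation, here is a minimal sketch of such a request. The host placeholder and the Bearer-token authorization header are assumptions for illustration, not taken from this section; substitute the values from your own 4EVERLAND setup:

```typescript
// Minimal sketch of a non-streaming chat completion request.
// Assumptions: the host and Bearer-token auth scheme are placeholders.
const apiKey = 'YOUR_API_KEY'; // hypothetical placeholder

const res = await fetch('https://<your-4everland-host>/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`, // assumed auth scheme
  },
  body: JSON.stringify({
    model: 'openai/gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Say hello!' }],
    stream: false, // true would switch choices to the StreamingChoice shape
  }),
});

const completion = await res.json();
console.log(completion.choices[0].message.content);
```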
The responses align closely with the OpenAI Chat API: choices is always an array, even when the model returns only one completion. Each choice includes a delta property if a stream was requested, and a message property otherwise, so the same response-handling code works across all models (see the narrowing sketch after the schema below).
4EVERLAND normalizes this schema across models and providers, so there is only one schema to learn.
Response Body
Note that finish_reason may vary by model provider. The model property reports which model the underlying API actually used.
Here's the response schema, represented as a TypeScript type:
```typescript
type Response = {
  id: string;
  // Depending on whether you set "stream" to "true" and
  // whether you passed in "messages" or a "prompt", you
  // will get a different output shape
  choices: (NonStreamingChoice | StreamingChoice | NonChatChoice | Error)[];
  created: number; // Unix timestamp
  model: string;
  object: 'chat.completion' | 'chat.completion.chunk';
  // For non-streaming responses only. For streaming responses,
  // see "Querying Cost and Stats" below.
  usage?: {
    completion_tokens: number; // Equivalent to "native_tokens_completion" in the /generation API
    prompt_tokens: number; // Equivalent to "native_tokens_prompt"
    total_tokens: number; // Sum of the above two fields
    total_cost: number; // Number of credits used by this generation
  };
};

// Subtypes:

type NonChatChoice = {
  finish_reason: string | null;
  text: string;
};

type NonStreamingChoice = {
  finish_reason: string | null; // Depends on the model. Ex: 'stop' | 'length' | 'content_filter' | 'tool_calls' | 'function_call'
  message: {
    content: string | null;
    role: string;
    tool_calls?: ToolCall[];
    // Deprecated, replaced by tool_calls
    function_call?: FunctionCall;
  };
};

type StreamingChoice = {
  finish_reason: string | null;
  delta: {
    content: string | null;
    role?: string;
    tool_calls?: ToolCall[];
    // Deprecated, replaced by tool_calls
    function_call?: FunctionCall;
  };
};

type Error = {
  code: number; // See "Error Handling" section
  message: string;
};

type FunctionCall = {
  name: string;
  arguments: string; // JSON format arguments
};

type ToolCall = {
  id: string;
  type: 'function';
  function: FunctionCall;
};
```
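Since a choice can be any of the four subtypes above, client code typically narrows by shape before reading content. A minimal sketch, assuming the types above are in scope (the readChoice helper name is hypothetical):

```typescript
// Sketch: extract the text from one choice, narrowing by shape.
function readChoice(
  choice: NonStreamingChoice | StreamingChoice | NonChatChoice | Error,
): string | null {
  if ('code' in choice) {
    // Error variant; codes are documented in the "Error Handling" section
    throw new Error(`Generation failed (${choice.code}): ${choice.message}`);
  }
  if ('delta' in choice) {
    // Streaming chunk: deltas are partial and must be concatenated by the caller
    return choice.delta.content;
  }
  if ('text' in choice) {
    // Non-chat (prompt-based) completion
    return choice.text;
  }
  // Otherwise this is a non-streaming chat completion
  return choice.message.content;
}
```

The error check comes first because both Error and NonStreamingChoice carry a message property, so testing for message alone would not distinguish them.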
Here's an example:
```json
{
  "id": "gen-xxxxxxxxxxxxxx",
  "choices": [
    {
      "finish_reason": "stop", // Different models provide different reasons here
      "message": {
        // will be "delta" if streaming
        "role": "assistant",
        "content": "Hello there!"
      }
    }
  ],
  "model": "openai/gpt-3.5-turbo" // Could also be "anthropic/claude-2.1", etc, depending on the "model" that ends up being used
}
```
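When stream is set to true, object becomes "chat.completion.chunk" and content arrives incrementally in delta. Assuming the stream follows the OpenAI-style server-sent-events convention (`data: <json>` lines, terminated by `data: [DONE]`), which the API's OpenAI alignment suggests but this section does not spell out, accumulating the deltas might look like this sketch:

```typescript
// Sketch: accumulate streamed deltas into a full message.
// Assumption: OpenAI-style SSE framing ("data: <json>" lines, ending with "data: [DONE]").
async function collectStream(body: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  const reader = body.getReader();
  let buffer = '';
  let text = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep any partial line for the next read
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice('data: '.length);
      if (payload === '[DONE]') return text;
      const chunk = JSON.parse(payload); // a Response with object "chat.completion.chunk"
      text += chunk.choices[0]?.delta?.content ?? '';
    }
  }
  return text;
}
```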