Documentation
Overview ¶
Package gai provides a unified interface for interacting with various Large Language Model providers.
The gai package abstracts away the specific API details of each LLM provider, making it easy to switch between different models and providers.
Features ¶
- Provider-agnostic interface for LLMs
- Automatic model-to-provider routing with pluggable router system
- Model aliases with optional whitelist mode for access control
- Support for both regular and streaming completions
- Function/tool calling support with a registry system
- Multi-modal (text + image) content support
- Configurable with functional options pattern
- Easy to extend with new providers
Available Providers ¶
Currently implemented providers:
- openai: OpenAI's API (GPT-3.5, GPT-4, etc.) using the official openai-go SDK
- openrouter: OpenRouter's unified API, providing access to multiple LLM providers through an OpenAI-compatible interface
Adding New Providers ¶
To add a new provider, implement the llm.ProviderClient interface:
type ProviderClient interface {
	ID() string
	Name() string
	ListModels(ctx context.Context) ([]Model, error)
	GetModel(ctx context.Context, modelID string) (*Model, error)
	Generate(ctx context.Context, req GenerateRequest) (*Response, error)
	GenerateStream(ctx context.Context, req GenerateRequest) (ResponseStream, error)
}
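For illustration, here is a minimal sketch of a provider as seen from a consuming package. The echoProvider type, its "echo" ID, and the canned response are invented for this sketch and are not part of the package; timestamps and real model data are omitted. Once written, it is registered with client.AddProvider(&echoProvider{}).
// echoProvider is a hypothetical provider used only to illustrate the interface.
type echoProvider struct{}

func (p *echoProvider) ID() string   { return "echo" }
func (p *echoProvider) Name() string { return "Echo" }

func (p *echoProvider) ListModels(ctx context.Context) ([]gai.Model, error) {
	return []gai.Model{{ID: "echo-1", Name: "Echo 1", Provider: "echo"}}, nil
}

func (p *echoProvider) GetModel(ctx context.Context, modelID string) (*gai.Model, error) {
	return &gai.Model{ID: modelID, Name: modelID, Provider: "echo"}, nil
}

func (p *echoProvider) Generate(ctx context.Context, req gai.GenerateRequest) (*gai.Response, error) {
	return &gai.Response{
		ID:      "resp-echo",
		ModelID: req.ModelID,
		Status:  gai.StatusCompleted,
		Output:  []gai.OutputItem{gai.TextOutput{Text: "echoed input"}},
	}, nil
}

func (p *echoProvider) GenerateStream(ctx context.Context, req gai.GenerateRequest) (gai.ResponseStream, error) {
	// Streaming is intentionally unsupported in this sketch.
	return nil, gai.ErrNotImplemented
}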
Multi Modality ¶
You have three flexible patterns for multi-modal content:
Simple multi-modal request:
req := gai.GenerateRequest{
	Input: gai.MultiModalInput{
		Content: []gai.MessageContent{
			gai.TextInput{Text: "What's in this image?"},
			gai.ImageInput{URL: "image.jpg"},
		},
	},
}
Traditional conversation (separate messages):
conversation := gai.Conversation{
	Messages: []gai.Message{
		{Role: gai.RoleUser, Content: gai.TextInput{Text: "What's in this image?"}},
		{Role: gai.RoleUser, Content: gai.ImageInput{URL: "image.jpg"}},
	},
}
Multi-modal content within conversation messages:
conversation := gai.Conversation{
	Messages: []gai.Message{
		{
			Role: gai.RoleUser,
			Content: gai.MultiModalInput{
				Content: []gai.MessageContent{
					gai.TextInput{Text: "Compare this image with the document"},
					gai.ImageInput{URL: "chart.jpg"},
					gai.FileInput{FileID: "file123", Filename: "report.pdf"},
				},
			},
		},
	},
}
This gives you maximum flexibility while still preventing circular references (no nested conversations).
Index ¶
- Variables
- type Annotation
- type Client
- func (c *Client) AddProvider(provider ProviderClient) error
- func (c *Client) Generate(ctx context.Context, req GenerateRequest) (*Response, error)
- func (c *Client) GenerateStream(ctx context.Context, req GenerateRequest) (ResponseStream, error)
- func (c *Client) GetModelRouter() ModelRouter
- func (c *Client) GetProvider(providerID string) (ProviderClient, error)
- func (c *Client) ListModels(ctx context.Context) ([]Model, error)
- func (c *Client) ListProviders() []ProviderClient
- type ClientOption
- type Conversation
- type FileInput
- type FinishReason
- type Function
- type GenerateRequest
- type ImageInput
- type Input
- type MCPToolExecutor
- type Message
- type MessageContent
- type Model
- type ModelCapabilities
- type ModelNotFoundError
- type ModelPricing
- type ModelRouter
- type MultiModalInput
- type OutputDelta
- type OutputItem
- type ProviderClient
- type Response
- type ResponseChunk
- type ResponseError
- type ResponseFormat
- type ResponseStatus
- type ResponseStream
- type Role
- type StandardModelRouter
- func (r *StandardModelRouter) AddModelAlias(alias, actualModelID string)
- func (r *StandardModelRouter) IsAliasOnlyMode() bool
- func (r *StandardModelRouter) ListModelAliases() map[string]string
- func (r *StandardModelRouter) ListModels(ctx context.Context) ([]Model, error)
- func (r *StandardModelRouter) ListProviders() []ProviderClient
- func (r *StandardModelRouter) RegisterProvider(ctx context.Context, provider ProviderClient) error
- func (r *StandardModelRouter) RemoveModelAlias(alias string)
- func (r *StandardModelRouter) RouteModel(ctx context.Context, modelID string) (ProviderClient, error)
- func (r *StandardModelRouter) SetAliasOnlyMode(enabled bool)
- type StandardToolExecutor
- type TextInput
- type TextOutput
- type TokenUsage
- type Tool
- type ToolCall
- type ToolCallDelta
- type ToolCallFunction
- type ToolCallOutput
- type ToolChoice
- type ToolChoiceFunction
- type ToolExecutor
- type ToolRegistry
- type ToolRegistryOption
- type ToolResult
Examples ¶
- Client (AliasOnlyMode)
- Client (Basic)
- Client (CustomRouting)
- Client (DefaultRouting)
- Client (ModelAliases)
- Client (MultiModal)
- Client (OpenRouter)
- Client (Streaming)
- Client (ToolCalling)
- Client.Generate
- Client.Generate (MultiModal)
- Client.Generate (WithTools)
- Client.GenerateStream
- MultiModalInput
- WithAliasOnlyMode
Constants ¶
This section is empty.
Variables ¶
var ErrNotImplemented = errors.New("not implemented")
ErrNotImplemented is returned when requested functionality is not implemented. This is particularly useful when implementing a provider whose API does not support some of the functionality.
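For example, a caller can detect this error and fall back to non-streaming generation. This is only a sketch: it assumes the error is propagated (directly or wrapped) to the caller, that the errors package is imported, and that client and req come from the earlier examples.
stream, err := client.GenerateStream(ctx, req)
if errors.Is(err, gai.ErrNotImplemented) {
	// The provider cannot stream; fall back to a regular request.
	resp, genErr := client.Generate(ctx, req)
	_ = resp
	_ = genErr
}
_ = stream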
var ErrProviderNotFound = errors.New("provider not found")
ErrProviderNotFound is returned when a requested provider is not registered.
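A typical check, sketched here with errors.Is and a client from the earlier examples:
provider, err := client.GetProvider("openai")
if errors.Is(err, gai.ErrProviderNotFound) {
	// The provider was never registered with AddProvider.
}
_ = provider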
var GlobalClientOptions []ClientOption
GlobalClientOptions is a list of options that are applied to all clients created.
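As a sketch, process-wide defaults can be appended before any client is constructed; this assumes the global options are applied by New as the variable's description states, and that log/slog is imported.
// Every client created after this point uses the default slog logger.
gai.GlobalClientOptions = append(gai.GlobalClientOptions,
	gai.WithClientLogger(slog.Default()),
)
client := gai.New()
_ = client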
Functions ¶
This section is empty.
Types ¶
type Annotation ¶
type Annotation struct {
Type string
Text string
StartPos int
EndPos int
// Additional annotation data can be stored here
Data map[string]any
}
Annotation represents annotations on text output.
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
Client provides a unified way to interact with various LLM providers.
Example (AliasOnlyMode) ¶
ExampleClient_aliasOnlyMode demonstrates alias-only (whitelist) mode.
package main
import (
"context"
"codeberg.org/gai-org/gai"
)
func main() {
ctx := context.Background()
// Whitelist mode - ONLY aliased models can be accessed
client := gai.New(
gai.WithModelAliases(map[string]string{
"approved": "openai/dummy-model",
"budget": "openai/dummy-model-lite",
}),
gai.WithAliasOnlyMode(true), // Enable whitelist mode
)
// This works - using an alias
resp, err := client.Generate(ctx, gai.GenerateRequest{
ModelID: "approved", // ✅ Allowed
Input: gai.TextInput{Text: "Hello"},
})
_ = resp
_ = err
// This fails - direct model ID not allowed
resp, err = client.Generate(ctx, gai.GenerateRequest{
ModelID: "openai/dummy-model", // ❌ Rejected in alias-only mode
Input: gai.TextInput{Text: "Hello"},
})
_ = resp
_ = err
// Dynamic control - cast to StandardModelRouter to access alias methods
router := client.GetModelRouter()
if standardRouter, ok := router.(*gai.StandardModelRouter); ok {
standardRouter.SetAliasOnlyMode(false) // Disable whitelist mode
standardRouter.AddModelAlias("new", "new-model") // Add new alias
}
}
Example (Basic) ¶
ExampleClient_basic demonstrates basic usage of the gai package.
package main
import (
"context"
"fmt"
"codeberg.org/gai-org/gai"
"codeberg.org/gai-org/gai/internal/dummy"
)
func main() {
// Create an LLM client
client := gai.New()
// Create and register a provider (dummy provider for example)
provider := dummy.NewProvider("openai", "OpenAI")
err := client.AddProvider(provider)
if err != nil {
// Handle error
panic(err)
}
// Create a generate request
req := gai.GenerateRequest{
ModelID: "openai/dummy-model",
Instructions: "You are a helpful assistant.",
Input: gai.TextInput{Text: "What's the weather like today?"},
Temperature: 0.7,
}
// Send the generate request
resp, err := client.Generate(context.Background(), req)
if err != nil {
// Handle error
panic(err)
}
// Use the response
for _, output := range resp.Output {
if textOutput, ok := output.(gai.TextOutput); ok {
fmt.Println(textOutput.Text)
}
}
}
Example (CustomRouting) ¶
ExampleClient_customRouting demonstrates using a custom router.
package main
import (
"context"
"codeberg.org/gai-org/gai"
)
// CustomRouter is an example custom router implementation.
type CustomRouter struct {
}
// RegisterProvider implements the ModelRouter interface.
func (r *CustomRouter) RegisterProvider(ctx context.Context, provider gai.ProviderClient) error {
return nil
}
// RouteModel implements the ModelRouter interface.
func (r *CustomRouter) RouteModel(ctx context.Context, modelID string) (gai.ProviderClient, error) {
return nil, nil
}
// ListModels implements the ModelRouter interface.
func (r *CustomRouter) ListModels(ctx context.Context) ([]gai.Model, error) {
return nil, nil
}
// ListProviders implements the ModelRouter interface.
func (r *CustomRouter) ListProviders() []gai.ProviderClient {
return nil
}
func main() {
// Use custom router
customRouter := &CustomRouter{}
client := gai.New(gai.WithModelRouter(customRouter))
_ = client // Use the client
}
Example (DefaultRouting) ¶
ExampleClient_defaultRouting demonstrates using default routing.
package main
import (
"context"
"codeberg.org/gai-org/gai"
"codeberg.org/gai-org/gai/internal/dummy"
)
func main() {
ctx := context.Background()
// Use default routing (StandardModelRouter)
client := gai.New()
// Register a provider for routing to work
provider := dummy.NewProvider("openai", "OpenAI")
err := client.AddProvider(provider)
_ = err
// Now models will be routed to the appropriate provider
req := gai.GenerateRequest{
ModelID: "openai/dummy-model",
Input: gai.TextInput{Text: "Hello"},
}
resp, err := client.Generate(ctx, req)
_ = resp
_ = err
}
Example (ModelAliases) ¶
ExampleClient_modelAliases demonstrates basic model aliases usage.
package main
import (
"codeberg.org/gai-org/gai"
)
func main() {
// Basic aliases - models can be accessed by alias or actual ID
client := gai.New(
gai.WithModelAliases(map[string]string{
"fast": "openai/dummy-model",
"smart": "openai/dummy-model-pro",
"claude": "anthropic/dummy-model",
}),
)
// Use alias in requests
req := gai.GenerateRequest{
ModelID: "fast", // Resolves to "openai/gpt-3.5-turbo"
Input: gai.TextInput{Text: "Hello!"},
}
_ = req // Use the request
_ = client // Use the client
}
Example (MultiModal) ¶
ExampleClient_multiModal demonstrates multi-modal input usage.
package main
import (
"codeberg.org/gai-org/gai"
)
func main() {
client := gai.New()
// Create a multi-modal conversation
conversation := gai.Conversation{
Messages: []gai.Message{
{
Role: gai.RoleUser,
Content: gai.TextInput{Text: "What's in this image?"},
},
{
Role: gai.RoleUser,
Content: gai.ImageInput{URL: "https://example.com/image.jpg"},
},
},
}
// Create request with multi-modal conversation
req := gai.GenerateRequest{
ModelID: "dummy-vision-model", // Must be a vision model
Input: conversation,
}
_ = req // Use the request
_ = client // Use the client
}
Example (OpenRouter) ¶
ExampleClient_openRouter demonstrates using OpenRouter provider.
package main
import (
"context"
"codeberg.org/gai-org/gai"
"codeberg.org/gai-org/gai/internal/dummy"
)
func main() {
// Create OpenRouter provider (dummy for example)
provider := dummy.NewProvider("openrouter", "OpenRouter")
client := gai.New()
err := client.AddProvider(provider)
if err != nil {
// Handle error
panic(err)
}
// Use any model available on OpenRouter
req := gai.GenerateRequest{
ModelID: "openrouter/dummy-model", // OpenRouter model format
Instructions: "You are a helpful assistant.",
Input: gai.TextInput{Text: "Hello from OpenRouter!"},
}
resp, err := client.Generate(context.Background(), req)
if err != nil {
// Handle error
panic(err)
}
_ = resp // Use the response
}
Example (Streaming) ¶
ExampleClient_streaming demonstrates streaming API usage.
package main
import (
"context"
"fmt"
"io"
"codeberg.org/gai-org/gai"
"codeberg.org/gai-org/gai/internal/dummy"
)
func main() {
// Create an LLM client
client := gai.New()
// Create and register a provider
provider := dummy.NewProvider("openai", "OpenAI")
err := client.AddProvider(provider)
if err != nil {
// Handle error
panic(err)
}
// Create a streaming generate request
req := gai.GenerateRequest{
ModelID: "openai/dummy-model",
Instructions: "You are a helpful assistant.",
Input: gai.TextInput{Text: "Write a short poem about clouds."},
Stream: true,
}
// Send the streaming generate request
stream, err := client.GenerateStream(context.Background(), req)
if err != nil {
// Handle error
panic(err)
}
defer func() {
if err := stream.Close(); err != nil {
// Handle close error appropriately
}
}()
// Process the stream
for {
chunk, err := stream.Next()
if err == io.EOF {
break
}
if err != nil {
// Handle error
break
}
// Process the chunk
fmt.Print(chunk.Delta.Text)
}
}
Example (ToolCalling) ¶
ExampleClient_toolCalling demonstrates tool calling functionality.
// SPDX-FileCopyrightText: 2025 Mads R. Havmand <[email protected]>
//
// SPDX-License-Identifier: MIT

package main

import (
	"encoding/json"
	"fmt"
)

// ExampleClient_toolCalling demonstrates tool calling functionality.
func main() {
	client := New()

	// Define a tool
	tools := []Tool{
		{
			Type: "function",
			Function: Function{
				Name:        "get_weather",
				Description: "Get the current weather for a location",
				Parameters: map[string]any{
					"type": "object",
					"properties": map[string]any{
						"location": map[string]any{
							"type":        "string",
							"description": "City name",
						},
					},
					"required": []string{"location"},
				},
			},
		},
	}

	// Create a tool registry and register the weather tool
	registry := NewToolRegistry()
	registry.RegisterTool(tools[0], &weatherExecutor{})

	// Create a request with tools
	req := GenerateRequest{
		ModelID: "dummy-model",
		Input:   TextInput{Text: "What's the weather like in Paris?"},
		Tools:   tools,
	}

	// Simulate model response with tool call
	toolCall := ToolCall{
		ID:   "call_123",
		Type: "function",
		Function: ToolCallFunction{
			Name:       "get_weather",
			Arguments:  `{"location": "Paris"}`,
			IsComplete: true,
		},
	}

	// Execute the tool call
	result, err := registry.ExecuteTool(toolCall)
	if err != nil {
		return
	}

	// Create tool result to pass back to model
	toolResult := ToolResult{
		ID:       toolCall.ID,
		ToolName: toolCall.Function.Name,
		Content:  result,
		IsError:  false,
	}

	_ = req        // Use the request
	_ = client     // Use the client
	_ = toolResult // Use the tool result
}

// weatherExecutor implements ToolExecutor for the weather tool.
type weatherExecutor struct{}

func (e *weatherExecutor) Execute(toolCall ToolCall) (string, error) {
	// Parse the arguments to get the location
	args := make(map[string]any)
	if err := json.Unmarshal([]byte(toolCall.Function.Arguments), &args); err != nil {
		return "", fmt.Errorf("failed to parse arguments: %w", err)
	}

	location, ok := args["location"].(string)
	if !ok {
		return "", fmt.Errorf("location parameter missing or invalid")
	}

	// Simulate weather API call
	weatherData := map[string]any{
		"location":    location,
		"temperature": 22,
		"condition":   "sunny",
		"humidity":    65,
	}

	// Return weather data as JSON
	result, err := json.Marshal(weatherData)
	if err != nil {
		return "", fmt.Errorf("failed to marshal weather data: %w", err)
	}

	return string(result), nil
}
func New ¶
func New(options ...ClientOption) *Client
New creates a new LLM client with the given options.
func (*Client) AddProvider ¶
func (c *Client) AddProvider(provider ProviderClient) error
AddProvider registers a new provider with this client and the model router.
func (*Client) Generate ¶
func (c *Client) Generate(ctx context.Context, req GenerateRequest) (*Response, error)
Generate sends a response generation request, automatically discovering the provider for the model.
Example ¶
Example demonstrates basic usage of gAI with OpenAI provider.
package main
import (
"context"
"errors"
"fmt"
"log"
"strings"
"time"
llm "codeberg.org/gai-org/gai"
)
// mockProvider is a mock implementation of ProviderClient for testing.
type mockProvider struct {
id string
name string
}
func (m *mockProvider) ID() string {
return m.id
}
func (m *mockProvider) Name() string {
return m.name
}
func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) {
models := []llm.Model{
{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
},
{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
},
}
if m.id == "openai" {
models = append(models,
llm.Model{
ID: "gpt-4",
Name: "GPT-4",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: false,
},
},
llm.Model{
ID: "gpt-4-vision-preview",
Name: "GPT-4 Vision Preview",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: true,
},
},
)
}
return models, nil
}
func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) {
if modelID == "model1" {
return &llm.Model{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
}, nil
}
if modelID == "model2" {
return &llm.Model{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
}, nil
}
if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") {
baseModel := modelID
if after, found := strings.CutPrefix(modelID, "openai/"); found {
baseModel = after
}
vision := baseModel == "gpt-4-vision-preview"
return &llm.Model{
ID: modelID,
Name: baseModel,
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: vision,
},
}, nil
}
return nil, errors.New("model not found")
}
func (m *mockProvider) Generate(ctx context.Context, req llm.GenerateRequest) (*llm.Response, error) {
return &llm.Response{
ID: "resp1",
ModelID: req.ModelID,
Status: llm.StatusCompleted,
Output: []llm.OutputItem{
llm.TextOutput{
Text: "Hello from " + m.id + " (new interface)",
},
},
Usage: &llm.TokenUsage{
PromptTokens: 10,
CompletionTokens: 5,
TotalTokens: 15,
},
CreatedAt: time.Now(),
}, nil
}
func (m *mockProvider) GenerateStream(ctx context.Context, req llm.GenerateRequest) (llm.ResponseStream, error) {
return nil, errors.New("streaming not implemented in mock")
}
func main() {
// Create an LLM client
client := llm.New()
// Add a provider (OpenAI in this example)
// Note: Replace "your-api-key-here" with your actual API key
provider := &mockProvider{id: "openai", name: "OpenAI"}
_ = client.AddProvider(provider)
// Create a generate request
req := llm.GenerateRequest{
ModelID: "gpt-4",
Instructions: "You are a helpful assistant.",
Input: llm.TextInput{Text: "What's the weather like today?"},
Temperature: 0.7,
}
// Send the generate request
resp, err := client.Generate(context.Background(), req)
if err != nil {
log.Fatal(err)
}
// Use the response
for _, output := range resp.Output {
if textOutput, ok := output.(llm.TextOutput); ok {
fmt.Println(textOutput.Text)
}
}
}
Output: Hello from openai (new interface)
Example (MultiModal) ¶
Example demonstrates multi-modal input with text and images.
package main
import (
"context"
"errors"
"fmt"
"log"
"strings"
"time"
llm "codeberg.org/gai-org/gai"
)
// mockProvider is a mock implementation of ProviderClient for testing.
type mockProvider struct {
id string
name string
}
func (m *mockProvider) ID() string {
return m.id
}
func (m *mockProvider) Name() string {
return m.name
}
func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) {
models := []llm.Model{
{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
},
{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
},
}
if m.id == "openai" {
models = append(models,
llm.Model{
ID: "gpt-4",
Name: "GPT-4",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: false,
},
},
llm.Model{
ID: "gpt-4-vision-preview",
Name: "GPT-4 Vision Preview",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: true,
},
},
)
}
return models, nil
}
func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) {
if modelID == "model1" {
return &llm.Model{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
}, nil
}
if modelID == "model2" {
return &llm.Model{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
}, nil
}
if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") {
baseModel := modelID
if after, found := strings.CutPrefix(modelID, "openai/"); found {
baseModel = after
}
vision := baseModel == "gpt-4-vision-preview"
return &llm.Model{
ID: modelID,
Name: baseModel,
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: vision,
},
}, nil
}
return nil, errors.New("model not found")
}
func (m *mockProvider) Generate(ctx context.Context, req llm.GenerateRequest) (*llm.Response, error) {
return &llm.Response{
ID: "resp1",
ModelID: req.ModelID,
Status: llm.StatusCompleted,
Output: []llm.OutputItem{
llm.TextOutput{
Text: "Hello from " + m.id + " (new interface)",
},
},
Usage: &llm.TokenUsage{
PromptTokens: 10,
CompletionTokens: 5,
TotalTokens: 15,
},
CreatedAt: time.Now(),
}, nil
}
func (m *mockProvider) GenerateStream(ctx context.Context, req llm.GenerateRequest) (llm.ResponseStream, error) {
return nil, errors.New("streaming not implemented in mock")
}
func main() {
client := llm.New()
provider := &mockProvider{id: "openai", name: "OpenAI"}
_ = client.AddProvider(provider)
// Create a multi-modal request
req := llm.GenerateRequest{
ModelID: "gpt-4-vision-preview", // Must be a vision model
Instructions: "You are a helpful assistant that can analyze images.",
Input: llm.Conversation{
Messages: []llm.Message{
{
Role: llm.RoleUser,
Content: llm.TextInput{Text: "What's in this image?"},
},
{
Role: llm.RoleUser,
Content: llm.ImageInput{URL: "https://example.com/image.jpg"},
},
},
},
}
resp, err := client.Generate(context.Background(), req)
if err != nil {
log.Fatal(err)
}
for _, output := range resp.Output {
if textOutput, ok := output.(llm.TextOutput); ok {
fmt.Println(textOutput.Text)
}
}
}
Output: Hello from openai (new interface)
Example (WithTools) ¶
Example demonstrates tool calling functionality.
package main
import (
"context"
"errors"
"fmt"
"log"
"strings"
"time"
llm "codeberg.org/gai-org/gai"
)
// mockProvider is a mock implementation of ProviderClient for testing.
type mockProvider struct {
id string
name string
}
func (m *mockProvider) ID() string {
return m.id
}
func (m *mockProvider) Name() string {
return m.name
}
func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) {
models := []llm.Model{
{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
},
{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
},
}
if m.id == "openai" {
models = append(models,
llm.Model{
ID: "gpt-4",
Name: "GPT-4",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: false,
},
},
llm.Model{
ID: "gpt-4-vision-preview",
Name: "GPT-4 Vision Preview",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: true,
},
},
)
}
return models, nil
}
func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) {
if modelID == "model1" {
return &llm.Model{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
}, nil
}
if modelID == "model2" {
return &llm.Model{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
}, nil
}
if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") {
baseModel := modelID
if after, found := strings.CutPrefix(modelID, "openai/"); found {
baseModel = after
}
vision := baseModel == "gpt-4-vision-preview"
return &llm.Model{
ID: modelID,
Name: baseModel,
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: vision,
},
}, nil
}
return nil, errors.New("model not found")
}
func (m *mockProvider) Generate(ctx context.Context, req llm.GenerateRequest) (*llm.Response, error) {
return &llm.Response{
ID: "resp1",
ModelID: req.ModelID,
Status: llm.StatusCompleted,
Output: []llm.OutputItem{
llm.TextOutput{
Text: "Hello from " + m.id + " (new interface)",
},
},
Usage: &llm.TokenUsage{
PromptTokens: 10,
CompletionTokens: 5,
TotalTokens: 15,
},
CreatedAt: time.Now(),
}, nil
}
func (m *mockProvider) GenerateStream(ctx context.Context, req llm.GenerateRequest) (llm.ResponseStream, error) {
return nil, errors.New("streaming not implemented in mock")
}
func main() {
client := llm.New()
provider := &mockProvider{id: "openai", name: "OpenAI"}
_ = client.AddProvider(provider)
// Define a tool
tools := []llm.Tool{
{
Type: "function",
Function: llm.Function{
Name: "get_weather",
Description: "Get the current weather for a location",
Parameters: map[string]any{
"type": "object",
"properties": map[string]any{
"location": map[string]any{
"type": "string",
"description": "City name",
},
},
"required": []string{"location"},
},
},
},
}
// Create a request with tools
req := llm.GenerateRequest{
ModelID: "gpt-4",
Instructions: "You are a helpful assistant with access to weather data.",
Input: llm.TextInput{Text: "What's the weather like in Paris?"},
Tools: tools,
}
resp, err := client.Generate(context.Background(), req)
if err != nil {
log.Fatal(err)
}
for _, output := range resp.Output {
if textOutput, ok := output.(llm.TextOutput); ok {
fmt.Println(textOutput.Text)
}
}
}
Output: Hello from openai (new interface)
func (*Client) GenerateStream ¶
func (c *Client) GenerateStream(ctx context.Context, req GenerateRequest) (ResponseStream, error)
GenerateStream sends a streaming response generation request, automatically discovering the provider for the model.
Example ¶
Example demonstrates streaming API usage.
package main
import (
"context"
"errors"
"fmt"
"strings"
"time"
llm "codeberg.org/gai-org/gai"
)
// mockProvider is a mock implementation of ProviderClient for testing.
type mockProvider struct {
id string
name string
}
func (m *mockProvider) ID() string {
return m.id
}
func (m *mockProvider) Name() string {
return m.name
}
func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) {
models := []llm.Model{
{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
},
{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
},
}
if m.id == "openai" {
models = append(models,
llm.Model{
ID: "gpt-4",
Name: "GPT-4",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: false,
},
},
llm.Model{
ID: "gpt-4-vision-preview",
Name: "GPT-4 Vision Preview",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: true,
},
},
)
}
return models, nil
}
func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) {
if modelID == "model1" {
return &llm.Model{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
}, nil
}
if modelID == "model2" {
return &llm.Model{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
}, nil
}
if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") {
baseModel := modelID
if after, found := strings.CutPrefix(modelID, "openai/"); found {
baseModel = after
}
vision := baseModel == "gpt-4-vision-preview"
return &llm.Model{
ID: modelID,
Name: baseModel,
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: vision,
},
}, nil
}
return nil, errors.New("model not found")
}
func (m *mockProvider) Generate(ctx context.Context, req llm.GenerateRequest) (*llm.Response, error) {
return &llm.Response{
ID: "resp1",
ModelID: req.ModelID,
Status: llm.StatusCompleted,
Output: []llm.OutputItem{
llm.TextOutput{
Text: "Hello from " + m.id + " (new interface)",
},
},
Usage: &llm.TokenUsage{
PromptTokens: 10,
CompletionTokens: 5,
TotalTokens: 15,
},
CreatedAt: time.Now(),
}, nil
}
func (m *mockProvider) GenerateStream(ctx context.Context, req llm.GenerateRequest) (llm.ResponseStream, error) {
return nil, errors.New("streaming not implemented in mock")
}
func main() {
// Create an LLM client
client := llm.New()
// Add a provider
provider := &mockProvider{id: "openai", name: "OpenAI"}
_ = client.AddProvider(provider)
// Create a streaming generate request
req := llm.GenerateRequest{
ModelID: "gpt-4",
Instructions: "You are a helpful assistant.",
Input: llm.TextInput{Text: "Write a short poem about clouds."},
Stream: true,
}
// Note: This example shows the API structure
// Real streaming implementation would require a proper provider
_, err := client.GenerateStream(context.Background(), req)
if err != nil {
fmt.Println("Streaming not implemented in mock provider")
}
}
Output: Streaming not implemented in mock provider
func (*Client) GetModelRouter ¶
func (c *Client) GetModelRouter() ModelRouter
GetModelRouter returns the model router used by this client. This allows access to router-specific features like adding aliases.
func (*Client) GetProvider ¶
func (c *Client) GetProvider(providerID string) (ProviderClient, error)
GetProvider returns a provider by its ID.
func (*Client) ListModels ¶
func (c *Client) ListModels(ctx context.Context) ([]Model, error)
ListModels returns all available models across all providers.
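A short sketch enumerating everything the client can route to, assuming client and ctx from the earlier examples and an fmt import:
models, err := client.ListModels(ctx)
if err != nil {
	// Handle error
	return
}
for _, m := range models {
	fmt.Printf("%s: %s (%s)\n", m.Provider, m.ID, m.Name)
}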
func (*Client) ListProviders ¶
func (c *Client) ListProviders() []ProviderClient
ListProviders returns all registered providers.
type ClientOption ¶
type ClientOption interface {
// contains filtered or unexported methods
}
ClientOption configures a Client.
func WithAliasOnlyMode ¶
func WithAliasOnlyMode(enabled bool) ClientOption
WithAliasOnlyMode enables alias-only mode where only models with aliases can be accessed. This provides a whitelist approach for controlling which models are available.
Example ¶
Example demonstrates alias-only mode for access control.
package main
import (
"context"
"errors"
"fmt"
"strings"
"time"
llm "codeberg.org/gai-org/gai"
)
// mockProvider is a mock implementation of ProviderClient for testing.
type mockProvider struct {
id string
name string
}
func (m *mockProvider) ID() string {
return m.id
}
func (m *mockProvider) Name() string {
return m.name
}
func (m *mockProvider) ListModels(ctx context.Context) ([]llm.Model, error) {
models := []llm.Model{
{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
},
{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
},
}
if m.id == "openai" {
models = append(models,
llm.Model{
ID: "gpt-4",
Name: "GPT-4",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: false,
},
},
llm.Model{
ID: "gpt-4-vision-preview",
Name: "GPT-4 Vision Preview",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: true,
},
},
)
}
return models, nil
}
func (m *mockProvider) GetModel(ctx context.Context, modelID string) (*llm.Model, error) {
if modelID == "model1" {
return &llm.Model{
ID: "model1",
Name: "Test Model 1",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
},
}, nil
}
if modelID == "model2" {
return &llm.Model{
ID: "model2",
Name: "Test Model 2",
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsJSON: true,
},
}, nil
}
if m.id == "openai" && (modelID == "gpt-4" || modelID == "gpt-4-vision-preview" || modelID == "openai/gpt-4" || modelID == "openai/gpt-4-vision-preview" || modelID == "openai/gpt-3.5-turbo") {
baseModel := modelID
if after, found := strings.CutPrefix(modelID, "openai/"); found {
baseModel = after
}
vision := baseModel == "gpt-4-vision-preview"
return &llm.Model{
ID: modelID,
Name: baseModel,
Provider: m.id,
Capabilities: llm.ModelCapabilities{
SupportsStreaming: true,
SupportsJSON: true,
SupportsTools: true,
SupportsVision: vision,
},
}, nil
}
return nil, errors.New("model not found")
}
func (m *mockProvider) Generate(ctx context.Context, req llm.GenerateRequest) (*llm.Response, error) {
return &llm.Response{
ID: "resp1",
ModelID: req.ModelID,
Status: llm.StatusCompleted,
Output: []llm.OutputItem{
llm.TextOutput{
Text: "Hello from " + m.id + " (new interface)",
},
},
Usage: &llm.TokenUsage{
PromptTokens: 10,
CompletionTokens: 5,
TotalTokens: 15,
},
CreatedAt: time.Now(),
}, nil
}
func (m *mockProvider) GenerateStream(ctx context.Context, req llm.GenerateRequest) (llm.ResponseStream, error) {
return nil, errors.New("streaming not implemented in mock")
}
func main() {
// Create client with whitelist mode enabled
client := llm.New(
llm.WithModelAliases(map[string]string{
"fast": "openai/gpt-3.5-turbo",
"smart": "openai/gpt-4",
"vision": "openai/gpt-4-vision-preview",
}),
llm.WithAliasOnlyMode(true), // Only aliased models allowed
)
provider := &mockProvider{id: "openai", name: "OpenAI"}
_ = client.AddProvider(provider)
// This works - using an alias
req := llm.GenerateRequest{
ModelID: "smart", // Resolves to "openai/gpt-4"
Input: llm.TextInput{Text: "Hello!"},
}
resp, err := client.Generate(context.Background(), req)
if err == nil && resp != nil {
fmt.Println("Alias request succeeded")
}
// This would fail - direct model ID not allowed in alias-only mode
req.ModelID = "openai/gpt-4" // Direct ID rejected
_, err = client.Generate(context.Background(), req)
if err != nil {
fmt.Println("Direct model ID rejected in alias-only mode")
}
}
Output: Alias request succeeded
Direct model ID rejected in alias-only mode
func WithClientLogger ¶
func WithClientLogger(logger *slog.Logger) ClientOption
WithClientLogger sets the logger for the client.
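A sketch wiring a standard library slog logger into the client (assumes log/slog and os are imported):
logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelDebug}))
client := gai.New(gai.WithClientLogger(logger))
_ = client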
func WithModelAliases ¶
func WithModelAliases(aliases map[string]string) ClientOption
WithModelAliases sets up model aliases on the default StandardModelRouter. If a custom router is already set, this option is ignored. aliases should be a map of alias name to actual model ID. Example: WithModelAliases(map[string]string{"gpt4": "openai/gpt-4", "claude": "anthropic/claude-3-sonnet"})
func WithModelRouter ¶
func WithModelRouter(router ModelRouter) ClientOption
WithModelRouter sets a custom model router for the client.
type Conversation ¶ added in v0.5.0
type Conversation struct {
Messages []Message
}
Conversation represents a multi-turn conversation as input.
func (Conversation) InputType ¶ added in v0.5.0
func (c Conversation) InputType() string
InputType implements the Input interface.
type FinishReason ¶
type FinishReason string
FinishReason defines why a language model's generation turn ended.
const (
	// FinishReasonUnspecified indicates an unspecified reason.
	FinishReasonUnspecified FinishReason = ""
	// FinishReasonStop indicates generation stopped due to a stop sequence.
	FinishReasonStop FinishReason = "stop"
	// FinishReasonLength indicates generation stopped due to reaching max length.
	FinishReasonLength FinishReason = "length"
	// FinishReasonToolCalls indicates generation stopped to make tool calls.
	FinishReasonToolCalls FinishReason = "tool_calls"
	// FinishReasonError indicates generation stopped due to an error.
	FinishReasonError FinishReason = "error"
	// FinishReasonContentFilter indicates generation stopped due to content filtering.
	FinishReasonContentFilter FinishReason = "content_filter"
)
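A sketch of how a caller might branch on the finish reason, where resp is a *Response returned by an earlier Generate call:
switch resp.FinishReason {
case gai.FinishReasonStop:
	// Normal completion.
case gai.FinishReasonLength:
	// Output was truncated; consider raising MaxOutputTokens.
case gai.FinishReasonToolCalls:
	// Execute the returned tool calls and send the results back.
case gai.FinishReasonContentFilter, gai.FinishReasonError:
	// Inspect resp.Error or treat the generation as failed.
}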
type Function ¶
type Function struct {
// Name is the name of the function.
Name string `json:"name"`
// Description is a human-readable description of the function.
Description string `json:"description,omitempty"`
// Parameters represents the function's parameters in JSON schema format.
Parameters map[string]any `json:"parameters,omitempty"`
}
Function represents a callable function definition.
type GenerateRequest ¶ added in v0.7.0
type GenerateRequest struct {
// ModelID is the model to use for generation.
ModelID string
// Instructions (system/developer message) - this is a system-level message provided to the model.
Instructions string
// Input - the actual content to process.
Input Input
// MaxOutputTokens is the maximum number of tokens to generate.
MaxOutputTokens int
// Temperature controls randomness in generation (0-1).
Temperature float32
// TopP controls diversity of generation (0-1).
TopP float32
// Tools is a list of tools the model can call.
Tools []Tool
// ToolChoice controls how the model uses tools.
ToolChoice *ToolChoice
// ResponseFormat specifies the format of the response.
ResponseFormat *ResponseFormat
// Stream indicates if responses should be streamed.
Stream bool
// StopSequences defines custom sequences that will stop generation when encountered.
StopSequences []string
}
GenerateRequest represents a request to generate a model response.
type ImageInput ¶ added in v0.5.0
ImageInput represents image content input.
func (ImageInput) InputType ¶ added in v0.5.0
func (i ImageInput) InputType() string
InputType implements the Input interface.
type Input ¶
type Input interface {
InputType() string
}
Input represents the input content for a response request.
type MCPToolExecutor ¶
type MCPToolExecutor struct {
// contains filtered or unexported fields
}
MCPToolExecutor executes tools via the MCP protocol.
func NewMCPToolExecutor ¶
func NewMCPToolExecutor(client *mcp.Client, logger *slog.Logger) *MCPToolExecutor
NewMCPToolExecutor creates a new MCP tool executor
type Message ¶
type Message struct {
// Role is who sent this message (user, assistant, system, etc).
Role Role `json:"role"`
// Content is the content of the message using the MessageContent interface.
// Only TextInput, ImageInput, and FileInput are allowed (prevents nested conversations).
Content MessageContent `json:"content"`
// Name is an optional identifier for the sender, used in some providers.
Name string `json:"name,omitempty"`
// ToolCalls contains any tool calls made by the assistant in this message.
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
// ToolCallID is the ID of the tool call this message is responding to.
ToolCallID string `json:"tool_call_id,omitempty"`
}
Message represents a single message in a chat conversation.
type MessageContent ¶
type MessageContent interface {
Input
// contains filtered or unexported methods
}
MessageContent defines valid content types for messages. It excludes Conversation to prevent circular references.
type Model ¶
type Model struct {
// ID is the unique identifier for this model within its provider.
ID string
// Name is a human-readable name for the model.
Name string
// Provider is the provider ID this model belongs to.
Provider string
// Pricing contains information related to how this model is priced.
Pricing ModelPricing
// Capabilities describe what the model can do.
Capabilities ModelCapabilities
// Metadata contains provider-specific metadata for this model.
Metadata map[string]any
}
Model represents a specific model from a provider.
type ModelCapabilities ¶
type ModelCapabilities struct {
// SupportsStreaming indicates if the model can stream responses.
SupportsStreaming bool
// SupportsJSON indicates if the model supports JSON mode responses.
SupportsJSON bool
// SupportsTools indicates if the model supports function calling.
SupportsTools bool
// SupportsVision indicates if the model supports analyzing images.
SupportsVision bool
// SupportsReasoning indicates if the model supports reasoning or thinking.
SupportsReasoning bool
// MaxInputTokens indicates how many tokens the model can process as input.
MaxInputTokens int
// MaxOutputTokens indicates how many tokens the model can output.
MaxOutputTokens int
}
ModelCapabilities describes the capabilities of a model.
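Capabilities can be used to pick a suitable model before sending a request; a sketch assuming client and ctx from the earlier examples:
models, err := client.ListModels(ctx)
if err != nil {
	return
}
var visionModel string
for _, m := range models {
	if m.Capabilities.SupportsVision && m.Capabilities.SupportsStreaming {
		visionModel = m.ID
		break
	}
}
_ = visionModel // use as GenerateRequest.ModelID if non-empty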
type ModelNotFoundError ¶
type ModelNotFoundError struct {
ModelID string
}
ModelNotFoundError is returned when no provider supports the requested model.
func (*ModelNotFoundError) Error ¶
func (e *ModelNotFoundError) Error() string
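Callers can distinguish this error with errors.As; a sketch assuming client, ctx, and req from the earlier examples:
resp, err := client.Generate(ctx, req)
var notFound *gai.ModelNotFoundError
if errors.As(err, &notFound) {
	fmt.Println("no provider serves model:", notFound.ModelID)
}
_ = resp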
type ModelPricing ¶ added in v0.4.0
ModelPricing contains pricing information related to a model.
type ModelRouter ¶
type ModelRouter interface {
// RegisterProvider registers a provider with the router for the models it supports.
RegisterProvider(ctx context.Context, provider ProviderClient) error
// RouteModel returns the provider that should handle requests for the given model.
RouteModel(ctx context.Context, modelID string) (ProviderClient, error)
// ListModels returns all models available across all registered providers.
ListModels(ctx context.Context) ([]Model, error)
// ListProviders returns all registered providers.
ListProviders() []ProviderClient
}
ModelRouter is responsible for routing model requests to the appropriate provider. This interface allows for custom routing logic to be implemented.
type MultiModalInput ¶ added in v0.7.0
type MultiModalInput struct {
Content []MessageContent
}
MultiModalInput represents multiple content items as a single input. This allows multi-modal content without requiring a conversation structure.
Example ¶
ExampleMultiModalInput demonstrates how to use MultiModalInput for simple multi-modal requests
package main
import (
gai "codeberg.org/gai-org/gai"
)
func main() {
// Create multi-modal content without conversation structure
req := gai.GenerateRequest{
ModelID: "dummy-vision-model",
Input: gai.MultiModalInput{
Content: []gai.MessageContent{
gai.TextInput{Text: "What's in this image?"},
gai.ImageInput{
URL: "https://example.com/image.jpg",
Detail: "high",
},
},
},
}
// This demonstrates the structure - in real usage you'd pass to client.Generate()
_ = req
}
func (MultiModalInput) InputType ¶ added in v0.7.0
func (m MultiModalInput) InputType() string
InputType implements the Input interface.
type OutputDelta ¶
type OutputDelta struct {
// Text contains new text content, if any.
Text string
// ToolCall contains tool call information, if any.
ToolCall *ToolCallDelta
}
OutputDelta represents the content delta in a streaming response chunk.
type OutputItem ¶
type OutputItem interface {
OutputType() string
}
OutputItem represents different types of output content.
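Responses can mix output kinds, so callers typically type-switch over each item; a sketch over a *Response named resp:
for _, item := range resp.Output {
	switch out := item.(type) {
	case gai.TextOutput:
		fmt.Println(out.Text)
	case gai.ToolCallOutput:
		// Execute the requested tool and send a ToolResult back to the model.
	default:
		// Unknown output type; inspect item.OutputType().
		_ = out
	}
}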
type ProviderClient ¶
type ProviderClient interface {
// ID returns the unique identifier for this provider.
ID() string
// Name returns a human-readable name for this provider.
Name() string
// ListModels returns all available models from this provider.
ListModels(ctx context.Context) ([]Model, error)
// GetModel returns information for a specific model.
GetModel(ctx context.Context, modelID string) (*Model, error)
// Generate sends a response generation request.
Generate(ctx context.Context, req GenerateRequest) (*Response, error)
// GenerateStream sends a streaming response generation request.
GenerateStream(ctx context.Context, req GenerateRequest) (ResponseStream, error)
}
ProviderClient is an interface that must be implemented by all LLM provider clients.
type Response ¶
type Response struct {
// ID uniquely identifies this response.
ID string
// ModelID is the model that generated this response.
ModelID string
// Status indicates the current status of the response.
Status ResponseStatus
// Error contains error information if Status is StatusFailed.
Error *ResponseError
// IncompleteReason explains why the response is incomplete if Status is StatusIncomplete.
IncompleteReason *string
// Output contains the generated content.
Output []OutputItem
// Usage contains token usage information.
Usage *TokenUsage
// FinishReason indicates why the response generation ended.
FinishReason FinishReason
// StopSequence contains the stop sequence that triggered completion, if any.
StopSequence *string
// CreatedAt is when this response was created.
CreatedAt time.Time
}
Response represents a model response.
type ResponseChunk ¶
type ResponseChunk struct {
// ID uniquely identifies the response this chunk belongs to.
ID string
// Delta contains the new content in this chunk.
Delta OutputDelta
// Finished indicates if this is the final chunk in the stream.
Finished bool
// Status indicates the current status of the response.
Status ResponseStatus
// FinishReason indicates why the response generation ended, if finished.
FinishReason FinishReason
// StopSequence contains the stop sequence that triggered completion, if any.
StopSequence *string
// Usage contains token usage information for this chunk.
Usage *TokenUsage
}
ResponseChunk represents a single chunk of a streaming response.
type ResponseError ¶
type ResponseError struct {
// Code is the error code.
Code string
// Message is the error message.
Message string
// Details contains additional error details.
Details map[string]any
}
ResponseError represents an error in a response.
type ResponseFormat ¶
type ResponseFormat struct {
// Type is the response format type, e.g., "json_object".
Type string `json:"type"`
}
ResponseFormat specifies the format of the response.
type ResponseStatus ¶
type ResponseStatus string
ResponseStatus represents the status of a response.
const (
	StatusCompleted  ResponseStatus = "completed"
	StatusFailed     ResponseStatus = "failed"
	StatusInProgress ResponseStatus = "in_progress"
	StatusIncomplete ResponseStatus = "incomplete"
)
Response status constants.
type ResponseStream ¶
type ResponseStream interface {
// Next returns the next chunk in the stream.
Next() (*ResponseChunk, error)
// Close closes the stream.
Close() error
// Err returns any error that occurred during streaming, other than io.EOF.
Err() error
}
ResponseStream represents a streaming response.
type StandardModelRouter ¶
type StandardModelRouter struct {
// contains filtered or unexported fields
}
StandardModelRouter is the default implementation of ModelRouter. It routes models to the first provider that supports them and supports model aliases.
func NewStandardModelRouter ¶
func NewStandardModelRouter(logger *slog.Logger) *StandardModelRouter
NewStandardModelRouter creates a new standard model router.
func (*StandardModelRouter) AddModelAlias ¶
func (r *StandardModelRouter) AddModelAlias(alias, actualModelID string)
AddModelAlias adds an alias that maps a friendly name to an actual model ID.
func (*StandardModelRouter) IsAliasOnlyMode ¶
func (r *StandardModelRouter) IsAliasOnlyMode() bool
IsAliasOnlyMode returns whether alias-only mode is currently enabled.
func (*StandardModelRouter) ListModelAliases ¶
func (r *StandardModelRouter) ListModelAliases() map[string]string
ListModelAliases returns all currently configured model aliases.
func (*StandardModelRouter) ListModels ¶ added in v0.6.0
func (r *StandardModelRouter) ListModels(ctx context.Context) ([]Model, error)
ListModels returns all models available across all providers. In alias-only mode, only returns models that have aliases (using alias names as model IDs).
func (*StandardModelRouter) ListProviders ¶ added in v0.6.0
func (r *StandardModelRouter) ListProviders() []ProviderClient
ListProviders returns all registered providers.
func (*StandardModelRouter) RegisterProvider ¶
func (r *StandardModelRouter) RegisterProvider(ctx context.Context, provider ProviderClient) error
RegisterProvider registers a provider and maps all its supported models.
func (*StandardModelRouter) RemoveModelAlias ¶
func (r *StandardModelRouter) RemoveModelAlias(alias string)
RemoveModelAlias removes a model alias.
func (*StandardModelRouter) RouteModel ¶
func (r *StandardModelRouter) RouteModel(ctx context.Context, modelID string) (ProviderClient, error)
RouteModel returns the provider that should handle the given model.
func (*StandardModelRouter) SetAliasOnlyMode ¶
func (r *StandardModelRouter) SetAliasOnlyMode(enabled bool)
SetAliasOnlyMode enables or disables alias-only mode.
type StandardToolExecutor ¶
type StandardToolExecutor struct {
// contains filtered or unexported fields
}
StandardToolExecutor is a simple implementation of ToolExecutor for standard tools
type TextOutput ¶
type TextOutput struct {
Text string
Annotations []Annotation
}
TextOutput represents text content in the response.
func (TextOutput) OutputType ¶
func (t TextOutput) OutputType() string
OutputType implements the OutputItem interface.
type TokenUsage ¶
type TokenUsage struct {
// PromptTokens is the number of tokens in the prompt.
PromptTokens int
// CompletionTokens is the number of tokens in the completion.
CompletionTokens int
// TotalTokens is the total number of tokens used.
TotalTokens int
}
TokenUsage provides token count information.
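Usage is optional on a Response, so a nil check is advisable before reading it; a sketch where resp is a *Response:
if resp.Usage != nil {
	fmt.Printf("prompt=%d completion=%d total=%d\n",
		resp.Usage.PromptTokens, resp.Usage.CompletionTokens, resp.Usage.TotalTokens)
}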
type Tool ¶
type Tool struct {
// Type is the type of the tool.
Type string `json:"type"`
// Function is the function definition if Type is "function".
Function Function `json:"function"`
}
Tool represents a tool that can be called by the model.
type ToolCall ¶
type ToolCall struct {
// ID uniquely identifies this tool call.
ID string `json:"id"`
// Type is the type of tool being called.
Type string `json:"type"`
// Function contains details about the function call if Type is "function".
Function ToolCallFunction `json:"function"`
}
ToolCall represents a call to a tool by the model.
type ToolCallDelta ¶
ToolCallDelta represents tool call information in a response delta.
type ToolCallFunction ¶
type ToolCallFunction struct {
// Name is the name of the function to call.
Name string `json:"name"`
// Arguments is a JSON string containing the function arguments.
Arguments string `json:"arguments"`
// IsComplete indicates if the arguments field contains complete valid JSON.
// This is particularly important during streaming where arguments may come in fragments.
IsComplete bool `json:"-"`
}
ToolCallFunction represents a function call made by the model.
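During streaming the Arguments string may arrive in fragments, so decode it only once IsComplete is true. A sketch, where call is a gai.ToolCall received from the model and encoding/json is imported:
if call.Function.IsComplete {
	var args map[string]any
	if err := json.Unmarshal([]byte(call.Function.Arguments), &args); err != nil {
		// Arguments were marked complete but are not valid JSON.
		return
	}
	_ = args // dispatch to the matching tool by call.Function.Name
}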
type ToolCallOutput ¶
ToolCallOutput represents a tool call in the response.
func (ToolCallOutput) OutputType ¶
func (t ToolCallOutput) OutputType() string
OutputType implements the OutputItem interface.
type ToolChoice ¶
type ToolChoice struct {
// Type is the tool choice type ("none", "auto", or "function").
Type string `json:"type"`
// Function is required if Type is "function".
Function *ToolChoiceFunction `json:"function,omitempty"`
}
ToolChoice controls how the model chooses to call tools.
type ToolChoiceFunction ¶
type ToolChoiceFunction struct {
// Name is the name of the function.
Name string `json:"name"`
}
ToolChoiceFunction specifies a function the model should use.
type ToolExecutor ¶
type ToolExecutor interface {
// Execute executes a tool call and returns the result
Execute(toolCall ToolCall) (string, error)
}
ToolExecutor is responsible for executing tools and returning their results
type ToolRegistry ¶
type ToolRegistry struct {
// contains filtered or unexported fields
}
ToolRegistry maintains a registry of available tools and their implementations
func NewToolRegistry ¶
func NewToolRegistry(opts ...ToolRegistryOption) *ToolRegistry
NewToolRegistry creates a new tool registry with optional configuration
func (*ToolRegistry) ExecuteTool ¶
func (tr *ToolRegistry) ExecuteTool(toolCall ToolCall) (string, error)
ExecuteTool executes a tool call and returns the result
func (*ToolRegistry) GetTools ¶
func (tr *ToolRegistry) GetTools() []Tool
GetTools returns all registered tools
func (*ToolRegistry) RegisterTool ¶
func (tr *ToolRegistry) RegisterTool(tool Tool, executor ToolExecutor)
RegisterTool adds a tool to the registry
type ToolRegistryOption ¶
type ToolRegistryOption func(*ToolRegistry)
ToolRegistryOption defines functional options for configuring ToolRegistry
func WithToolRegistryLogger ¶
func WithToolRegistryLogger(logger *slog.Logger) ToolRegistryOption
WithToolRegistryLogger sets the logger for the ToolRegistry
func WithToolRegistryTool ¶
func WithToolRegistryTool(tool Tool, executor ToolExecutor) ToolRegistryOption
WithToolRegistryTool registers a tool during initialization
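Options can be combined at construction time; a sketch in which weatherTool and weatherExecutor are hypothetical stand-ins for your own tool definition and executor:
registry := gai.NewToolRegistry(
	gai.WithToolRegistryLogger(slog.Default()),
	gai.WithToolRegistryTool(weatherTool, &weatherExecutor{}),
)
fmt.Println(len(registry.GetTools())) // 1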
type ToolResult ¶
type ToolResult struct {
// ID is the unique identifier of the tool call this result corresponds to.
ID string `json:"id"`
// ToolName is the name of the tool that was called.
// While the LLM knows this from the original ToolCall, including it can be useful for logging and consistency.
ToolName string `json:"tool_name"`
// Content is the result of the tool's execution, typically a string (e.g., JSON output).
Content string `json:"content"`
// IsError indicates whether the tool execution resulted in an error.
// If true, Content might contain an error message or a serialized error object.
IsError bool `json:"is_error,omitempty"`
}
ToolResult represents the result of a tool's execution. This is sent back to the LLM to inform it of the outcome of a tool call.