Model Configuration
Configure AI models and provider credentials for your platform.
Overview
The Model Configuration section allows you to add and manage AI models that users can access in your platform. You can configure models from various providers or add custom models with your own endpoints.
Available Model Providers
| Provider | Models Available | Access Method |
|---|---|---|
| OpenAI | GPT-5, GPT-4.1, GPT-4.1-mini, GPT-4o, GPT-4o-mini | via Microsoft Azure |
| Meta | Llama 4 Scout, Llama 4 Maverick, Llama 3.1 405B, Llama 3.3 70B | via Microsoft Azure / via IONOS |
| Mistral | Mistral Large, Ministral 8B | via Mistral |
| OpenAI Open Source | gpt-oss-120b | via Microsoft Azure / via IONOS |
| DeepSeek | DeepSeek R1, DeepSeek V3 | via Microsoft Azure |
| xAI | Grok 3, Grok 3 Mini | via Microsoft Azure |
| Microsoft | Phi-4, Phi-4-mini | via Microsoft Azure / via IONOS |
Managing Models
Adding a New Model
- Click the "Add Model" button
- Fill in the required information (a complete example entry is sketched after this list):
  - Name: The display name that users will see in model selection dropdowns (e.g., "GPT-4 Turbo", "Claude 3 Sonnet")
  - Model: The exact model identifier/slug used by the API endpoint or deployed resource (e.g., "gpt-4-turbo-preview", "claude-3-sonnet-20240229", "llama-2-70b-chat")
  - Provider: Select from the supported providers or "Custom"
  - Credentials: Configure provider-specific authentication
- Configure optional settings:
  - Max Tokens: Set the token limit for responses
  - Accepts Images: Enable or disable image input capability
  - Smartness Score: Rate the model's intelligence (1-10) as guidance for users
  - Price Score: Rate the model's cost efficiency (1-10) as guidance for users
  - Description: Add details about the model's capabilities
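To make the fields concrete, here is a hypothetical model entry written out as a Python dict. The field names mirror the form above; every value (names, endpoint, key, scores) is illustrative only and not a real credential.

```python
# Hypothetical model entry -- field names mirror the "Add Model" form above;
# all values are illustrative placeholders.
example_model = {
    "name": "GPT-4o (Azure)",             # display name users see in the dropdown
    "model": "gpt-4o",                     # exact identifier/slug of the deployed resource
    "provider": "Azure OpenAI",
    "credentials": {
        "api_endpoint": "https://your-resource.openai.azure.com/",
        "instance_name": "your-deployment-instance",
        "api_key": "YOUR_AZURE_OPENAI_API_KEY",
        "api_version": "2024-02-01",
    },
    # Optional settings
    "max_tokens": 4096,
    "accepts_images": True,
    "smartness_score": 8,    # 1-10, shown to users as guidance
    "price_score": 6,        # 1-10, shown to users as guidance
    "description": "General-purpose multimodal chat model hosted on Azure.",
}
```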
Editing Models
- Click any model row in the table to edit its configuration
- Update credentials, settings, or metadata as needed
- Changes are applied immediately after saving
Deleting Models
- Open a model for editing and click the Delete button
- Type the exact model name to confirm deletion
- This action cannot be undone
Provider Configuration
Azure OpenAI
Configure Azure OpenAI models using your Azure deployment. Supports both text generation and image generation models.
Required Credentials:
- API Endpoint: Your Azure OpenAI service endpoint (e.g., `https://your-resource.openai.azure.com/`)
- Instance Name: Your deployment instance name
- API Key: Your Azure OpenAI API key
- API Version: The Azure OpenAI API version to use (e.g., `2024-02-01`)
Setup Instructions:
- Create an Azure OpenAI resource
- Deploy your desired models
- Copy the endpoint, deployment name, and API key to your model configuration
Supported Model Types:
- Text Generation: GPT-4, GPT-3.5, and other chat models
- Image Generation: DALL-E 3 and GPT-Image-Gen (GPT-4o based image generation)
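If you want to sanity-check an Azure configuration before saving it, a short script with the official `openai` Python package can exercise the same credentials. The endpoint, key, API version, and deployment name below are placeholders for your own values.

```python
from openai import AzureOpenAI  # pip install openai

# Placeholder values -- substitute your own Azure OpenAI resource details.
client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com/",
    api_key="YOUR_AZURE_OPENAI_API_KEY",
    api_version="2024-02-01",
)

# "model" must be the deployment name you created in Azure,
# which is what goes into the "Model" field of the configuration.
response = client.chat.completions.create(
    model="your-deployment-name",
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```

A successful reply confirms that the endpoint, key, API version, and deployment name all line up.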
Mistral AI
Configure Mistral AI models using the Mistral API.
Required Credentials:
- API Key: Your Mistral API key
Setup Instructions:
- Sign up for Mistral AI
- Generate an API key
- Add the API key to your model configuration
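To verify a new key before adding it, you can call Mistral's chat completions endpoint directly over HTTP. The model alias `mistral-large-latest` is used purely as an example; substitute the slug you plan to configure.

```python
import requests  # pip install requests

# Placeholder key -- use the key generated in the Mistral console.
API_KEY = "YOUR_MISTRAL_API_KEY"

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistral-large-latest",  # example alias; match your configured model slug
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 5,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```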
IONOS Cloud
Configure IONOS LLM models using IONOS Cloud services.
Required Credentials:
- Token: Your IONOS API token
- Base URL: IONOS inference endpoint (default: `https://openai.inference.de-txl.ionos.com/v1`)
Setup Instructions:
- Create an IONOS Cloud account
- Generate an API token
- Configure the token and endpoint in your model settings
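Since the default endpoint above follows the OpenAI API shape, one way to test the token is to point the standard `openai` Python client at it. This is a sketch that assumes the IONOS token is accepted as a bearer API key; the model slug is a placeholder for whatever you configured.

```python
from openai import OpenAI  # pip install openai

# Placeholder values -- substitute your IONOS token and configured model slug.
client = OpenAI(
    base_url="https://openai.inference.de-txl.ionos.com/v1",
    api_key="YOUR_IONOS_API_TOKEN",
)

response = client.chat.completions.create(
    model="your-configured-model-slug",  # hypothetical; use the slug from your model entry
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```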
Custom Provider
Configure any OpenAI-compatible API endpoint.
Required Credentials:
- API Key: Authentication key for your service
- Endpoint: Base URL for your API (e.g., `https://api.example.com/v1`)
Setup Instructions:
- Ensure your API is OpenAI-compatible
- Obtain your API credentials from your provider
- Configure the endpoint URL and API key
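A quick compatibility check is to list the endpoint's models and send one short chat completion with the standard `openai` Python client; if both calls succeed, the endpoint should work when configured as a Custom provider. The URL, key, and model id below are placeholders.

```python
from openai import OpenAI  # pip install openai

# Placeholder values -- point these at your own OpenAI-compatible service.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key="YOUR_API_KEY",
)

# 1. The /models listing is part of the OpenAI API surface most
#    compatible servers implement.
for model in client.models.list():
    print(model.id)

# 2. A minimal chat completion against one of the listed models.
response = client.chat.completions.create(
    model="your-model-id",  # replace with an id printed above
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```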