

Unlock the power of multiple AI models with Potpie’s Multi-LLM Support feature. Seamlessly integrate OpenAI, Gemini, Claude, and more to optimize performance, cost, and flexibility. Learn how to set up and configure your preferred LLM for smarter AI Agents today!
Every day at Potpie, we’re focused on making our AI Agents smarter, faster, and more adaptable to real-world developer needs. We continuously refine their capabilities, ensuring they stay ahead with modern and more powerful features that provide greater flexibility and efficiency.
With that goal in mind, we're excited to introduce Multi-LLM Support, a new feature that allows our AI Agents to work with multiple Large Language Models from different providers, making them even more versatile and customizable for your specific needs.
This upgrade enables AI Agents to go beyond a single model provider by dynamically connecting to various LLMs based on your preferences. Whether it's leveraging OpenAI's capabilities, exploring Claude's code understanding, or using cost-effective alternatives like DeepSeek, Multi-LLM Support ensures that you can select the best model for your specific use case, giving developers far greater flexibility.
This feature is available in our latest release (v0.1.1), which you can access here: https://github.com/potpie-ai/potpie/releases/tag/v0.1.1
To power this feature, we've integrated LiteLLM, a lightweight framework that standardizes API calls across multiple AI providers. This allows Potpie to seamlessly route requests to different LLMs while handling request formatting and model-specific optimizations under the hood.
The Multi-LLM Support feature allows users to integrate their preferred language model provider based on their specific needs for cost, performance, and capabilities.
With this update, AI Agents can now use different LLMs while keeping their full capabilities, providing accurate and informed responses from your chosen model.
- Provider Flexibility: Users can select from multiple LLM providers including OpenAI, Anthropic, Google, Meta, and DeepSeek.
- Model Selection: Configure different models for lightweight tasks versus complex reasoning, optimizing for both performance and cost.
- Seamless Integration: All existing Potpie features continue to work regardless of which underlying LLM powers your agents.
- API Key Management: Securely use your own API keys for any supported provider.
This feature enhances the AI Agent's ability to adapt to different requirements while reducing dependency on a single model provider.
Several AI development platforms offer multi-model support, but Potpie's implementation stands out for its seamless integration with custom AI Agents. Unlike simpler implementations that just route prompts to different providers, Potpie's Multi-LLM Support maintains context awareness and specialized capabilities across different providers.
The Multi-LLM Support feature is built using LiteLLM, a lightweight framework that makes it easy to connect with different AI providers. This open-source tool ensures that all API calls follow a standard format, allowing Potpie to seamlessly work with multiple LLMs without compatibility issues.
LiteLLM provides a seamless way to interact with different LLMs, enabling our Multi-LLM Support feature to:
- Standardize API Calls: Maintain consistent API formats across different providers
- Handle Provider-Specific Optimizations: Adjust parameters for each model's requirements
- Manage Authentication: Handle different API key formats securely
- Support Streaming: Enable streaming capabilities for interactive responses
- Structure Outputs: Work with structured data using the Instructor library
By incorporating LiteLLM, we ensure that our Multi-LLM Support feature delivers consistent, high-quality responses regardless of the underlying provider, enhancing the AI Agent's flexibility without compromising functionality.
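As a rough illustration of what this standardization buys: with LiteLLM, the same chat payload can be sent to any backend just by changing the model identifier string. The sketch below is illustrative, not Potpie's actual code; the provider-to-model mapping is hypothetical, and the live `completion` call is guarded behind a made-up `RUN_LIVE_DEMO` flag so it only runs when you have `litellm` installed and credentials configured.

```python
import os

# One payload, many providers: LiteLLM accepts "<provider>/<model>"-style
# identifiers and normalizes request and response formats for each backend.
MESSAGES = [{"role": "user", "content": "Explain what a mutex is in one sentence."}]

# Hypothetical mapping of providers to LiteLLM model identifiers.
CANDIDATES = {
    "openai": "gpt-4o-mini",
    "anthropic": "claude-3-5-sonnet-20240620",
    "openrouter": "openrouter/deepseek/deepseek-chat",
}

def pick_model(provider: str) -> str:
    """Return the LiteLLM model string for a provider (illustrative mapping)."""
    if provider not in CANDIDATES:
        raise ValueError(f"unsupported provider: {provider}")
    return CANDIDATES[provider]

if os.environ.get("RUN_LIVE_DEMO"):
    # Only attempted when explicitly enabled; requires `pip install litellm`
    # and a valid API key for the chosen provider in the environment.
    from litellm import completion
    resp = completion(model=pick_model("openai"), messages=MESSAGES)
    print(resp.choices[0].message.content)
```

Swapping `"openai"` for `"anthropic"` or `"openrouter"` in the final call is the only change needed to target a different provider; the message format and response handling stay the same.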
The Multi-LLM Support feature operates using a provider service architecture that dynamically selects and configures LLMs based on user preferences and task requirements:
Model Configuration
The system maintains pre-configured settings for popular LLMs, including:
- Small Models: Optimized for lightweight, faster tasks
- Large Models: Designed for complex reasoning and advanced capabilities
- Custom Configurations: Support for user-defined models and settings
Dynamic Provider Selection
When an AI Agent needs to generate a response, the system:
1. Checks user preferences for their default provider
2. Determines if the task requires a "small" or "large" model
3. Retrieves the appropriate API key
4. Configures the correct parameters for the selected model
5. Routes the request to the appropriate provider
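The five steps above can be sketched in plain Python. Everything here (the configuration table, the function name, the model identifiers) is an illustrative stand-in, not Potpie's actual internals:

```python
from dataclasses import dataclass

# Illustrative pre-configured model table: each provider maps a task "size"
# to a concrete model identifier (names are examples, not Potpie's real table).
MODEL_CONFIG = {
    "openai":     {"small": "gpt-4o-mini", "large": "gpt-4o"},
    "anthropic":  {"small": "claude-3-haiku-20240307",
                   "large": "claude-3-5-sonnet-20240620"},
    "openrouter": {"small": "openrouter/deepseek/deepseek-chat",
                   "large": "openrouter/deepseek/deepseek-r1"},
}

@dataclass
class ResolvedModel:
    provider: str
    model: str
    api_key: str

def resolve_model(user_prefs: dict, task_size: str, keyring: dict) -> ResolvedModel:
    """Steps 1-5: read preference, classify task, fetch key, configure, route."""
    provider = user_prefs.get("default_provider", "openai")  # 1. user preference
    if task_size not in ("small", "large"):                  # 2. task size
        raise ValueError("task_size must be 'small' or 'large'")
    api_key = keyring[provider]                              # 3. API key lookup
    model = MODEL_CONFIG[provider][task_size]                # 4. model parameters
    return ResolvedModel(provider, model, api_key)           # 5. ready to route

# Example: a user who prefers OpenRouter, running a complex-reasoning task.
choice = resolve_model({"default_provider": "openrouter"}, "large",
                       {"openrouter": "sk-or-demo"})
print(choice.model)  # openrouter/deepseek/deepseek-r1
```

The key design point is that callers only declare *what* they need ("small" or "large" reasoning); the provider service resolves that to a concrete model, so switching providers never touches agent code.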
Potpie's Multi-LLM Support feature comes with built-in support for the following providers:
1. OpenAI's GPT – Versatile language models with strong natural language understanding and code generation capabilities
2. Google's Gemini – Optimized for multimodal tasks with advanced reasoning and contextual awareness
3. Anthropic's Claude – Specialized in code understanding and generation with advanced capabilities for technical problem-solving
4. Meta's Llama – Open-source foundation models known for strong performance with greater customization options
5. DeepSeek – Cost-efficient models with strong reasoning capabilities, ideal for complex problem-solving
We will continue expanding support for more LLMs based on community feedback and emerging advancements in AI models, ensuring developers always have access to the best tools for their needs.
There are two main ways to use the Multi-LLM Support feature: using Potpie's dashboard with the default supported providers or setting up Potpie locally for maximum customization.
To leverage the benefits of multiple LLMs in your AI Agents via your Potpie dashboard, simply select your preferred LLM provider in your settings and enter the corresponding API key.
With this flexibility, you can experiment with different LLMs and optimize performance based on your project’s specific requirements.
Note: If you select DeepSeek, Meta-Llama, or Gemini as your LLM model, you will need to enter an OpenRouter API key. For other models, enter their respective provider API keys.
Setting up Potpie locally is straightforward and provides full control over your AI Agents' LLM configurations:
1. First, install Potpie on your local machine following our Getting Started Guide
2. Configure your preferred LLM by setting these four key parameters:
- `LLM_PROVIDER` – The name of your AI provider (e.g., OpenRouter, Ollama)
- `LLM_API_KEY` – Your API key for the chosen provider
- `LOW_REASONING_MODEL` – The model used for lightweight tasks
- `HIGH_REASONING_MODEL` – The model used for complex reasoning
Here's an example configuration using OpenRouter with DeepSeek models:
```
LLM_PROVIDER=openrouter
LLM_API_KEY=sk-or-your-key
LOW_REASONING_MODEL=openrouter/deepseek/deepseek-chat
HIGH_REASONING_MODEL=openrouter/deepseek/deepseek-chat
```
For Ollama users, the configuration would look like this:
```
LLM_PROVIDER=ollama
LLM_API_KEY=ollama
LOW_REASONING_MODEL=ollama_chat/qwen2.5-coder:7b
HIGH_REASONING_MODEL=ollama_chat/qwen2.5-coder:7b
```
For those using any of our default supported LLMs (Anthropic, Gemini, OpenAI, DeepSeek, and Meta-Llama), you don't need to specify reasoning models; Potpie automatically assigns the best-suited model based on our tested configurations.
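To make the fallback behavior concrete, here is a small sketch of how the four environment variables might be resolved, with explicit settings taking precedence over built-in defaults. The variable names come from the configuration above; the `DEFAULTS` table and function are hypothetical stand-ins for Potpie's tested per-provider configurations.

```python
import os

# Illustrative defaults table (low-reasoning model, high-reasoning model);
# Potpie's real tested configurations may differ.
DEFAULTS = {
    "anthropic": ("claude-3-haiku-20240307", "claude-3-5-sonnet-20240620"),
}

def load_llm_config(env=os.environ) -> dict:
    provider = env.get("LLM_PROVIDER", "openai")
    low, high = DEFAULTS.get(provider, (None, None))
    return {
        "provider": provider,
        "api_key": env.get("LLM_API_KEY", ""),
        # Explicit settings always win; otherwise fall back to the defaults.
        "low_model": env.get("LOW_REASONING_MODEL", low),
        "high_model": env.get("HIGH_REASONING_MODEL", high),
    }

# A default-supported provider needs only LLM_PROVIDER and LLM_API_KEY.
cfg = load_llm_config({"LLM_PROVIDER": "anthropic", "LLM_API_KEY": "key"})
print(cfg["high_model"])  # claude-3-5-sonnet-20240620
```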
Let's see how you might use different models for different use cases:
Using Claude for Advanced Code Understanding
Claude excels at code comprehension and technical problem-solving:
```
LLM_PROVIDER=anthropic
LLM_API_KEY=your-anthropic-key
```
With this configuration, your AI Agent will leverage Claude's strong code understanding capabilities to provide more accurate technical assistance.
Using DeepSeek for Cost-Efficient Operations
DeepSeek offers powerful reasoning at a lower cost:
```
LLM_PROVIDER=openrouter
LLM_API_KEY=your-openrouter-key
LOW_REASONING_MODEL=openrouter/deepseek/deepseek-chat
HIGH_REASONING_MODEL=openrouter/deepseek/deepseek-r1
```
This setup helps you balance cost and performance for projects with budget constraints.
You can now integrate your preferred AI models directly into Potpie. Whether you want to leverage the power of OpenAI, explore the capabilities of Gemini, or experiment with Claude, Potpie gives you complete control over which LLM powers your AI Agents.
Set up Potpie locally today and start building with the AI model that best suits your needs:
- Check out the Potpie GitHub Repository
- Read the Getting Started Guide
Got stuck anywhere? Join our Discord Server for community support!