Supported Models
ChainForge supports the following model providers out-of-the-box:
Model Provider | Example Models | Environment variables to set for API keys |
---|---|---|
OpenAI (chat and text) | `gpt-4`, `gpt-3.5-turbo`, etc. | `OPENAI_API_KEY` |
Anthropic (Claude) | `claude-2`, `claude-1`, etc. | `ANTHROPIC_API_KEY` |
Google Gemini, PaLM2 | `gemini-pro`, `chat-bison-001`, `text-bison-001` | `PALM_API_KEY` |
HuggingFace Inference and Inference Endpoints | `falcon.7b`, `starcoder`, etc., or any custom text or chat completions model endpoint | `HUGGINGFACE_API_KEY` |
Microsoft Azure OpenAI Services | (All OpenAI models) | Set two keys, `AZURE_OPENAI_KEY` and `AZURE_OPENAI_ENDPOINT`. Note that the endpoint should look like a base URL. For examples of what these keys look like, see the Azure OpenAI documentation. |
Amazon Bedrock Endpoints | (various models) | Set up to four keys: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, and `AWS_SESSION_TOKEN`. |
Ollama (locally-hosted models) | `llama2`, `mistral`, etc. | (none) |
For details on how to set API keys as environment variables or get access to specific model providers, see How to Install.
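If you are unsure whether your keys are actually set, a quick check from Python can save a debugging round-trip. The snippet below is a minimal sketch, not part of ChainForge itself, and the list of key names is only an example; trim it to the providers you plan to use:

```python
import os

# Hypothetical helper: confirm the API keys for your chosen providers are
# visible to the process that will run ChainForge.
KEYS_TO_CHECK = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "PALM_API_KEY", "HUGGINGFACE_API_KEY"]

for key in KEYS_TO_CHECK:
    print(f"{key}: {'set' if os.environ.get(key) else 'MISSING'}")
```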
Add Custom Providers
On a locally installed version of ChainForge, you can also call custom providers by writing completion functions in Python and dropping them into the Custom Providers tab of the global Settings window.
Here's an extremely simple provider that just reverses the prompt it's given:
```python
from chainforge.providers import provider

# A 'dummy' response provider that just returns the prompt reversed
@provider(name="Mirror", emoji="🪞")
def mirror_the_prompt(prompt: str, **kwargs) -> str:
    return prompt[::-1]  # just reverse the prompt
```
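A custom provider can do anything a Python function can, including calling out to a model you host yourself. The sketch below forwards the prompt to a hypothetical local HTTP endpoint; the URL, JSON shape, and response field name are assumptions for illustration, not part of ChainForge's API. Only the `@provider` decorator and the prompt-in, string-out signature come from the example above, and any extra arguments ChainForge passes simply arrive in `**kwargs`:

```python
import requests

from chainforge.providers import provider

# A sketch of a custom provider that forwards the prompt to a locally hosted
# model server. The endpoint URL and JSON response shape are hypothetical
# placeholders; swap in whatever your own server expects.
@provider(name="My Local Model", emoji="🖥️")
def my_local_model(prompt: str, **kwargs) -> str:
    resp = requests.post(
        "http://localhost:8000/generate",  # hypothetical endpoint
        json={"prompt": prompt},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("text", "")  # assumed response field name
```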
For more details, see Adding Custom Providers.
Request to add a model or provider
To request native support for a new model or provider, open an Issue on our GitHub. The request should be for a model or provider that is in high demand or that you believe ChainForge needs to support. (If your model or provider is idiosyncratic, like a custom model you are running on your own machine, you should instead write a custom provider function.) For new provider requests, turnaround time to implementation is generally one week.