Installation Guide

You can install ChainForge locally or try it out on the web. This document covers local installation.

Installing ChainForge on your local machine lets you load API keys from environment variables, run Python evaluator nodes, and query Ollama-hosted models. If you are a developer looking to run ChainForge from source to modify or extend it, see the For Developers page.

Installation

Step 1. Install on your machine

The simplest and safest way to install the latest public build of ChainForge is to:

1. Create a new directory and cd into it

2. (Optional, but recommended!) Create a virtual environment. On macOS, you can run:

python -m venv venv 
source venv/bin/activate

3. Install chainforge via pip

pip install chainforge 

4. Run ChainForge

chainforge serve

5. Open localhost:8000 in a recent version of Chrome, Firefox, Edge, or Brave.
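Once the server is running, you can sanity-check from a second terminal that something is answering on the default address. This is just a sketch; it assumes curl is available and that you kept the default host and port:

```shell
# Check whether the ChainForge server is answering on the default port.
# Assumes `chainforge serve` is running in another terminal.
if curl -s -o /dev/null http://localhost:8000/; then
  STATUS="up"
else
  STATUS="down"
fi
echo "localhost:8000 is $STATUS"
```

If the status is "down", confirm that chainforge serve is still running and that you did not pass a different --host or --port.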

Note

If you'd like to run ChainForge on a different hostname and port, specify --host and --port. For instance, chainforge serve --host 0.0.0.0 --port 3400

The ChainForge beta does not currently support other browsers. If you want support for another browser, please open an Issue or make a Pull Request on our GitHub. The main barrier is that CSS renders slightly differently in Safari and others.

Step 2. Get and set API keys for certain model providers

Though you can run ChainForge, you can't do anything with it without the ability to call an LLM. We currently support the following model providers:

  • OpenAI models GPT-3.5 and GPT-4, including all variants and function calling
  • HuggingFace models (via the HuggingFace Inference and Inference Endpoints API)
  • Anthropic models (Claude 2, etc.)
  • Google Gemini and PaLM2 (chat and text bison models)
  • Microsoft Azure OpenAI Endpoints
  • Amazon Bedrock Endpoints
  • (Locally run) models hosted via Ollama. Install Ollama, download the models you want to try, and use ollama serve in the console. Add Ollama in the "Model" section of Prompt or Chat Nodes, then set the model name to the appropriate Ollama model name in the provider settings window.
  • ...and any other provider through custom provider scripts!
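The Ollama workflow described above can be sketched as follows. This is a hedged sketch, guarded so it is a no-op when Ollama is not installed; "llama3" is just an example model name, so substitute any model from the Ollama library:

```shell
# Sketch of the Ollama setup described above. "llama3" is an example
# model name; the script skips the commands if Ollama is not installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3     # download the model weights locally
  ollama serve &         # start the local Ollama server (default port 11434)
  STATUS="started"
else
  STATUS="not installed"
fi
echo "ollama: $STATUS"
```

Whatever model name you pull here is the name you should enter in the provider settings window of the Prompt or Chat Node.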

How to Set API keys for specific model providers

To use a model provider with a black-boxed LLM API, you need to do two things:

1. Get an API key. HuggingFace API keys are free. OpenAI API keys are easy to obtain, and you can even get one for free during a trial period. For other providers, see their pages and sign up for access.

2. Set the relevant API key in ChainForge. You can input your API keys manually via the Settings button in the top-right corner. However, this can become tedious fast. If you'd prefer not to be bothered every time you load ChainForge, you can set them as environment variables. To do so, follow this guide, section 3, "Use Environment Variables in place of your API key." When following the instructions, swap OPENAI_API_KEY for the alias of your specific model provider, listed below:

  • OpenAI: OPENAI_API_KEY
  • HuggingFace: HUGGINGFACE_API_KEY
  • Anthropic: ANTHROPIC_API_KEY
  • Google (Gemini or PaLM2): PALM_API_KEY
  • Azure OpenAI: Set two keys, AZURE_OPENAI_KEY and AZURE_OPENAI_ENDPOINT. Note that the endpoint should look like a base URL. For examples on what these keys look like, see the Azure OpenAI documentation.
  • Amazon Bedrock: See the Supported Models page.
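For example, here is how you might set the two Azure OpenAI variables for the current shell session. The values shown are placeholders; substitute your actual key and your resource's endpoint URL:

```shell
# Placeholder values; replace with your actual Azure OpenAI key and endpoint.
export AZURE_OPENAI_KEY='your-azure-key'
export AZURE_OPENAI_ENDPOINT='https://your-resource-name.openai.azure.com/'

# Confirm both are set before running `chainforge serve`.
echo "key set: $([ -n "$AZURE_OPENAI_KEY" ] && echo yes || echo no)"
echo "endpoint: $AZURE_OPENAI_ENDPOINT"
```

Exports like these last only for the current session; to persist them across sessions, append the export lines to your shell profile, as shown for OPENAI_API_KEY below.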

When you are done setting the API key(s), reopen your terminal. (The terminal loads environment variables when it is first opened, so it must be restarted before running chainforge serve.)

For instance, to set an OpenAI API key as an environment variable on macOS, run the following from the terminal:

echo "export OPENAI_API_KEY='yourkey'" >> ~/.zshrc
source ~/.zshrc
echo $OPENAI_API_KEY
Then, make sure to reopen your terminal before running chainforge serve.

Step 3. Check out Examples!

Click Example Flows to get a sense of what ChainForge is capable of. A popular choice is ground truth evaluations, which use Tabular Data nodes.