Getting Started#

You can run your own OpenPlugin server, or use public instances that are already hosted in the cloud.

Setup Environment Variables#

The OpenPlugin server supports multiple function-calling LLM models. To enable a model on your server, you must provide the API key for its provider, which is done through environment variables.

Note: Make sure to replace <YOUR KEY> with your API key.

Note: You only need to set the keys for the models you intend to use. For example, if you only intend to use OpenAI’s ChatGPT, you only need to set the OPENAI_API_KEY variable.
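As a quick sanity check, a small sketch like the following can report which providers your environment enables. The variable names come from this page's examples; the helper itself is hypothetical and not part of OpenPlugin:

```python
import os

# Environment variable expected for each provider (names taken from the
# examples on this page; extend the map for any other providers you use).
PROVIDER_KEYS = {
    "OpenAI": "OPENAI_API_KEY",
    "Cohere": "COHERE_API_KEY",
    "Google": "GOOGLE_APPLICATION_CREDENTIALS",
}

def enabled_providers(env=None):
    """Return the names of providers whose API key is set."""
    env = os.environ if env is None else env
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]

# With only the OpenAI key set, only OpenAI is reported as enabled.
print(enabled_providers({"OPENAI_API_KEY": "sk-..."}))
```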

The content of your .env file depends on which providers you use:



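A sketch of a .env file, assuming the three providers that appear in the Docker example below; include only the keys for the models you intend to use:

```
OPENAI_API_KEY=<YOUR KEY>
COHERE_API_KEY=<YOUR KEY>
GOOGLE_APPLICATION_CREDENTIALS=<YOUR KEY>
```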
There are different ways to start the OpenPlugin API server.

NOTE: OpenPlugin is built with Python 3.9.

Start the OpenPlugin server using the Python library from PyPI#

pip install openplugin
openplugin --help
openplugin start-server /path/to/your/.env

Start the OpenPlugin server from source using Poetry#

git clone <openplugin-repo-url>
cd openplugin
# install poetry on the machine
poetry install
# add a .env file with the required API keys
poetry run python <server-start-script>

NOTE: The script reads the .env file to set up the keys.

Start the OpenPlugin server using Docker#

# Passing environment variables in the startup script
docker run --name openplugin_container -p 8006:8006 -e "OPENAI_API_KEY=<your_key>" -e "COHERE_API_KEY=<your_key>" -e "GOOGLE_APPLICATION_CREDENTIALS=<your_key>" -d shrikant14/openplugin:latest

# Passing environment variables as a file
# Create an env file (e.g. my_env.env) with one KEY=value pair per line,
# then pass it to docker with --env-file:
docker run --name openplugin_container -p 8006:8006 --env-file my_env.env -d shrikant14/openplugin:latest
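One way to create that env file non-interactively (a sketch; the file name `my_env.env` matches the command above, and the placeholder values must be replaced with real keys):

```shell
# Write my_env.env with one KEY=value per line; replace the placeholders
# with real keys before starting the container.
cat > my_env.env <<'EOF'
OPENAI_API_KEY=your_openai_key
COHERE_API_KEY=your_cohere_key
GOOGLE_APPLICATION_CREDENTIALS=your_google_credentials
EOF

# Confirm the file lists all three keys.
grep -c '=' my_env.env
```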

API Hosted by Imprompt#

Hosted API Spec:

NOTE: Host your own instance of the service, or request a key from jeffrschneider[at]gmail[dot]com to access the hosted service.