Using Commercial Models

This is the third part of my series on running AI models locally with Simon Willison’s LLM tool. The first two parts are here:

As I mentioned in the previous post, you can only go so far with models running locally. When you outgrow those models, you’ll probably want to step up to a paid model, such as one from OpenAI. This is easy to set up.

OpenAI Setup

First, go to openai.com and log in to the API Platform. If this is your first login, you’ll need to set up an organization (I used my name for this) and a project (mine is called Default project).

Next, go to the Billing section to add a Credit Balance. You can start off small here; I added $10.00. Check the Pricing page in the OpenAI docs to see the cost for each model. The prices are per million tokens, so even $10.00 should last a while.
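To get a feel for the numbers, here’s a quick back-of-the-envelope calculation. The rates below are placeholders I made up for illustration, not current OpenAI prices; check the Pricing page for the real figures.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.15, output_rate: float = 0.60) -> float:
    """Estimate the cost in dollars of one API call.

    Rates are dollars per million tokens. The defaults here are
    hypothetical examples, not OpenAI's actual prices.
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A short prompt and response use only a few hundred tokens,
# so a single call costs a tiny fraction of a cent.
print(f"${estimate_cost(50, 200):.6f}")
```

Even at higher real-world rates, casual experimentation barely dents a $10.00 balance.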

Finally, go to API Keys and create a new key. You’ll want to save this in a secure location. I added mine to 1Password. To use the API key, you’ll need to make it available to your Python program.

Environment Variables

The easiest way to use your API key would be to enter it directly into the code, but this is a bad idea if you host your code somewhere like GitHub. Anyone could take your key and use up all your credits.

The better (and default) option is to store your API key in the environment variable OPENAI_API_KEY. To set this variable, add export OPENAI_API_KEY=<your key> to your ~/.zshrc file and restart your terminal. You might also want to look into python-dotenv for loading environment variables.
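To fail fast when the variable is missing, a small helper like this (my own sketch, not part of either library) can check for the key before you make any API calls:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named API key from the environment, or exit with a hint."""
    key = os.environ.get(name)
    if not key:
        # Exit with a helpful message instead of failing later on an API call
        raise SystemExit(
            f"{name} is not set; export it in your shell profile "
            "or load it with python-dotenv"
        )
    return key
```

Calling require_api_key() at startup turns a missing key into a clear error message rather than a confusing failure deep inside a library call.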

If you’re using the LLM command-line app, set the key with this command (it will prompt you to paste the key):

llm keys set openai

LLM will also pick up the OPENAI_API_KEY environment variable if it is set.

Using LLM

Now we’re finally ready to write some code. Let’s start by using LLM. The Python API documentation covers all of the features. Here’s a simple example:

Initialize a new project with uv and add the llm library:

uv init llm-openai
cd llm-openai
uv add llm

Note that we didn’t add llm-mlx this time; we’re using a model run by OpenAI, not a local one. Here’s a simple program to generate a haiku. Add this to main.py:

import llm

# Look up the OpenAI model by name (uses your saved key or OPENAI_API_KEY)
model = llm.get_model("gpt-4o-mini")
response = model.prompt("Write an inspiring haiku about AI.")
print(response.text())

Now run this with uv run main.py.

Using the OpenAI Python Library

You can also use the official OpenAI Python library if you like. The steps are basically the same.

Initialize a new project and add the openai library:

uv init openai-api
cd openai-api
uv add openai

The program is very similar. Add this to main.py:

import os
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default;
# passing api_key explicitly just makes that visible.
client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
)
response = client.responses.create(
    model="gpt-4o-mini",
    input="Write an inspiring haiku about AI.",
)
print(response.output_text)

Loading the API key from OPENAI_API_KEY is the default, so you can leave the api_key argument out if you want. Run this with uv as before:

uv run main.py

You’re now well equipped to write code using both locally run models and commercial models from OpenAI. I hope you learned something from this series. I can’t wait to see what you build.