Configuration Setup
Automatic environment variable loading is currently not working. While we work on fixing this, please use the model object format, passing the model name and API key explicitly.
Quick Start
Get started with Google Gemini (recommended for speed and cost).
First Class Models
Use any model from the following supported providers:
- Google
- Anthropic
- OpenAI
- Azure
- Cerebras
- DeepSeek
- Groq
- Mistral
- Ollama
- Perplexity
- TogetherAI
- xAI
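For any first class provider, configuration is a provider-prefixed model string plus an API key. Below is a minimal configuration sketch assuming Stagehand v3's model object format; the `modelName` and `apiKey` field names are assumptions, so verify them against the Stagehand reference.

```typescript
// Configuration sketch: first class model via the model object format.
// Field names (modelName, apiKey) are assumptions; verify against the Stagehand docs.
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  model: {
    modelName: "google/gemini-2.5-flash", // provider-prefixed model string
    apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY, // passed explicitly while auto-loading is broken
  },
});

await stagehand.init();
```

Top-level `await` requires an ES module context; in CommonJS, wrap the `init()` call in an async function.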
Custom Models
Amazon Bedrock, Cohere, all first class models, and any model from the Vercel AI SDK are supported. Use this configuration for custom endpoints and custom retry or caching logic. We’ll use Amazon Bedrock and Google as examples below.
Amazon Bedrock
1. Install dependencies: install the Vercel AI SDK package for your provider with npm, pnpm, yarn, or bun.
2. Import, create the provider, and create the client.
3. Pass the client to Stagehand.
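The three steps above can be sketched as follows for Amazon Bedrock. The `AISdkClient` wrapper shape and the Bedrock model ID are illustrative assumptions; check the Stagehand and AI SDK docs for your exact versions.

```typescript
// Step 1 (any package manager works):
//   npm install @browserbasehq/stagehand @ai-sdk/amazon-bedrock ai
import { Stagehand, AISdkClient } from "@browserbasehq/stagehand";
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock";

// Step 2: create the provider and wrap a model in Stagehand's AI SDK client.
const bedrock = createAmazonBedrock({
  region: process.env.AWS_REGION,
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});
const llmClient = new AISdkClient({
  model: bedrock("anthropic.claude-3-5-sonnet-20240620-v1:0"), // illustrative Bedrock model ID
});

// Step 3: pass the client to Stagehand.
const stagehand = new Stagehand({ llmClient });
```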
Google
1. Install dependencies: install the Vercel AI SDK package for your provider with npm, pnpm, yarn, or bun.
2. Import, create the provider, and create the client.
3. Pass the client to Stagehand.
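The same three steps for Google look roughly like the sketch below; the `AISdkClient` wrapper shape is an assumption based on Stagehand's AI SDK integration.

```typescript
// Step 1:
//   npm install @browserbasehq/stagehand @ai-sdk/google ai
import { Stagehand, AISdkClient } from "@browserbasehq/stagehand";
import { createGoogleGenerativeAI } from "@ai-sdk/google";

// Step 2: create the provider and the client.
const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
});

// Step 3: pass the client to Stagehand.
const stagehand = new Stagehand({
  llmClient: new AISdkClient({ model: google("gemini-2.5-flash") }),
});
```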
All Providers
To implement a custom model, follow the steps for the provider you are using. See the Amazon Bedrock and Google examples above. All supported providers and models are in the Vercel AI SDK.
1. Install dependencies: install the Vercel AI SDK package for your provider.
2. Import, create the provider, and create the client.
3. Pass the client to Stagehand.
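The pattern is the same for every AI SDK provider; here is a sketch using Mistral as an example (the provider package and model name are illustrative, and the `AISdkClient` wrapper shape is an assumption).

```typescript
// Same three-step pattern for any AI SDK provider; Mistral shown as an example.
//   npm install @browserbasehq/stagehand @ai-sdk/mistral ai
import { Stagehand, AISdkClient } from "@browserbasehq/stagehand";
import { createMistral } from "@ai-sdk/mistral";

const mistral = createMistral({ apiKey: process.env.MISTRAL_API_KEY });

const stagehand = new Stagehand({
  llmClient: new AISdkClient({ model: mistral("mistral-large-latest") }),
});
```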
Choose a Model
Different models excel at different tasks. Consider speed, accuracy, and cost for your use case.
Model Selection Guide
Find detailed model comparisons and recommendations on our Model Evaluation page.
| Use Case | Recommended Model | Why |
|---|---|---|
| Production | google/gemini-2.5-flash | Fast, accurate, cost-effective |
| Intelligence | google/gemini-2.5-pro | Best accuracy on hard tasks |
| Speed | google/gemini-2.5-flash | Fastest response times |
| Cost | google/gemini-2.5-flash | Best value per token |
| Local/offline | ollama/qwen3 | No API costs, full control |
Advanced Options
Agent Models (with CUA Support)
Automatic environment variable loading is currently not working. While we work on fixing this, please use the model object format.
Agents accept a model parameter, which accepts any first class model, including computer use agents (CUA). This is useful when you’d like the agent to use a different model than the one passed to Stagehand.
- Google CUA
- Anthropic CUA
- OpenAI CUA
- Example First Class Model
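A sketch of giving the agent its own CUA model is below; the `agent()` option shape is an assumption, so check the Agent Reference for the exact signature.

```typescript
// Sketch: the agent uses a CUA model different from the one passed to Stagehand.
// The agent() option shape is an assumption; see the Agent Reference.
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  model: {
    modelName: "google/gemini-2.5-flash",
    apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
  },
});
await stagehand.init();

const agent = stagehand.agent({
  model: "google/gemini-2.5-computer-use-preview-10-2025", // any supported CUA model
});
await agent.execute("Find the pricing page");
```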
All Supported CUA Models
| Provider | Model |
|---|---|
| Google | google/gemini-2.5-computer-use-preview-10-2025 |
| Anthropic | anthropic/claude-3-7-sonnet-latest |
| Anthropic | anthropic/claude-haiku-4-5-20251001 |
| Anthropic | anthropic/claude-sonnet-4-20250514 |
| Anthropic | anthropic/claude-sonnet-4-5-20250929 |
| OpenAI | openai/computer-use-preview |
| OpenAI | openai/computer-use-preview-2025-03-11 |
For overriding the agent API key, using a corporate proxy, adding provider-specific options, or other advanced use cases, the agent model can also take the form of an object. To learn more, see the Agent Reference.
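As a configuration sketch, the object form might look like the following; every field name here is an assumption, so consult the Agent Reference before relying on it.

```typescript
// Sketch: agent model as an object (all field names are assumptions; see the Agent Reference).
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  model: {
    modelName: "google/gemini-2.5-flash",
    apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
  },
});
await stagehand.init();

const agent = stagehand.agent({
  model: {
    modelName: "openai/computer-use-preview",
    apiKey: process.env.OPENAI_API_KEY, // override the agent's API key
    baseURL: process.env.CORPORATE_PROXY_URL, // e.g. route through a corporate proxy
  },
});
```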
Custom Endpoints
Use this if you need Azure OpenAI deployments or other enterprise deployments.
- OpenAI
- Anthropic
- All Other Providers
For OpenAI, you can pass configuration directly with the model parameter, without using llmClient:
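A configuration sketch for pointing the model parameter at an OpenAI-compatible endpoint follows; the `baseURL` and `apiKey` field names are assumptions, and the endpoint URL is a placeholder you would replace with your deployment's URL.

```typescript
// Sketch: OpenAI-compatible custom endpoint via the model parameter.
// Field names (modelName, apiKey, baseURL) are assumptions; verify against the Stagehand docs.
import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  model: {
    modelName: "openai/gpt-4.1",
    apiKey: process.env.AZURE_API_KEY,
    baseURL: process.env.AZURE_OPENAI_ENDPOINT, // your Azure OpenAI deployment URL
  },
});
```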
Extending the AI SDK Client
For advanced use cases like custom retries or caching logic, you can extend the AISdkClient:
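One possible shape is a subclass that memoizes completions; the `createChatCompletion` override point is an assumption based on Stagehand's LLM client interface, so check the exported types before using this pattern.

```typescript
// Sketch: an AISdkClient subclass that caches completions in memory.
// The createChatCompletion override point is an assumption; verify against Stagehand's types.
import { AISdkClient } from "@browserbasehq/stagehand";

class CachingAISdkClient extends AISdkClient {
  private cache = new Map<string, unknown>();

  async createChatCompletion(options: any): Promise<any> {
    const key = JSON.stringify(options); // naive cache key; refine for production use
    if (this.cache.has(key)) {
      return this.cache.get(key);
    }
    const result = await super.createChatCompletion(options);
    this.cache.set(key, result);
    return result;
  }
}
```

Pass an instance of the subclass wherever you would pass an `AISdkClient` (for example, as `llmClient` in the Stagehand constructor).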
Legacy Model Format
The following models work without the provider/ prefix in the model parameter as part of legacy support:
Google
- gemini-2.5-flash-preview-04-17
- gemini-2.5-pro-preview-03-25
- gemini-2.0-flash
- gemini-2.0-flash-lite
- gemini-1.5-flash
- gemini-1.5-flash-8b
- gemini-1.5-pro
Anthropic
- claude-3-7-sonnet-latest
- claude-3-7-sonnet-20250219
- claude-3-5-sonnet-latest
- claude-3-5-sonnet-20241022
- claude-3-5-sonnet-20240620
OpenAI
- gpt-4o
- gpt-4o-mini
- o1
- o1-mini
- o3
- o3-mini
- gpt-4.1
- gpt-4.1-mini
- gpt-4.1-nano
- o4-mini
- gpt-4.5-preview
- gpt-4o-2024-08-06
- o1-preview
Cerebras
- cerebras-llama-3.3-70b
- cerebras-llama-3.1-8b
Groq
- groq-llama-3.3-70b-versatile
- groq-llama-3.3-70b-specdec
- moonshotai/kimi-k2-instruct
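A short configuration sketch comparing the legacy and preferred formats; the model object field names are assumptions, so verify them against the Stagehand docs.

```typescript
// Sketch: legacy (unprefixed) vs preferred (provider-prefixed) model names.
// The modelName/apiKey field names are assumptions.
import { Stagehand } from "@browserbasehq/stagehand";

// Legacy format (still accepted for the models listed above):
const legacy = new Stagehand({
  model: { modelName: "gemini-2.5-flash", apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY },
});

// Preferred provider-prefixed format:
const preferred = new Stagehand({
  model: { modelName: "google/gemini-2.5-flash", apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY },
});
```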
Troubleshooting
Error: API key not found
Error: API key not found
Solutions:
- Check that your .env file has the correct variable name for the provider you are using
- Ensure environment variables are loaded (for example, with dotenv)
- Restart your application after updating the .env file
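Loading the .env file explicitly with dotenv can be sketched as follows; the model object field names are assumptions.

```typescript
// Load variables from .env before any code reads process.env.
//   npm install dotenv
import "dotenv/config"; // side-effect import; must come first

import { Stagehand } from "@browserbasehq/stagehand";

const stagehand = new Stagehand({
  model: { modelName: "openai/gpt-4.1", apiKey: process.env.OPENAI_API_KEY },
});
```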
| Provider | Environment Variable |
|---|---|
| Google | GOOGLE_GENERATIVE_AI_API_KEY or GEMINI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| Azure | AZURE_API_KEY |
| Cerebras | CEREBRAS_API_KEY |
| DeepSeek | DEEPSEEK_API_KEY |
| Groq | GROQ_API_KEY |
| Mistral | MISTRAL_API_KEY |
| Ollama | None (local) |
| Perplexity | PERPLEXITY_API_KEY |
| TogetherAI | TOGETHER_AI_API_KEY |
| xAI | XAI_API_KEY |
Error: Model not supported
Error: Unsupported model
Solutions:
- Use the provider/model format: openai/gpt-5
- Verify the model name exists in the provider’s documentation
- Check that the model name is spelled correctly
- Ensure your model API key can access the model
Model doesn't support structured outputs
Error: Model does not support structured outputs
Solutions:
- Check our Model Evaluation page for recommended models
High costs or slow performance
Solutions:
- Consider a faster, lower-cost model such as google/gemini-2.5-flash (see the Choose a Model table above)
Python support for Stagehand v3 or custom models
We are working on Python support for Stagehand v3 and custom models.
Solutions:
- Request updates on Python support from our support team at support@browserbase.com
Need Help? Contact Support
Can’t find a solution? Have a question? Reach out to our support team:
Contact Support
Email us at support@browserbase.com

