How to Configure Ollama and OpenClaw Locally: Browser Control, Telegram, and Best Models for Mac
A complete guide to running OpenClaw locally with Ollama on your MacBook or Mac Mini.
Dzianis Vashchuk
5 min read
Running AI agents entirely on your local hardware gives you privacy, zero API costs, and total control over your system. With the integration between Ollama and OpenClaw, getting a highly capable local AI assistant up and running has never been easier.
In this post, we'll walk through how to launch OpenClaw with Ollama, set up browser control, connect it to a Telegram channel, and choose the right local models based on your Mac's RAM.
1. Launching OpenClaw with Ollama
Ollama has made it incredibly simple to run local LLMs, and now it officially supports launching OpenClaw directly. You don't need to manually configure API keys or base URLs — Ollama handles the handshake.
The One-Command Setup
If you already have Ollama installed, simply run:
ollama launch openclaw
Here's what happens under the hood:
- Installation: If you don't have OpenClaw installed yet, Ollama will prompt you to install it via npm.
- Security Notice: On the first launch, Ollama shows a security notice explaining the risks of giving an AI agent tool access. Read it carefully.
- Model Selection: You'll be prompted to pick a model from a selector — local or cloud.
- Onboarding: Ollama configures the provider, installs the Gateway daemon, sets your model as the primary, and automatically installs the web search and fetch plugin (giving your local model the ability to search the web).
- Gateway Startup: The Gateway starts in the background and the OpenClaw TUI opens.
Once started, your local agent is ready to work.
Important: Local models require a context window of at least 64k tokens to work properly with OpenClaw. Ollama handles this for recommended models, but if you're using a custom model, make sure to set the context length appropriately. See Ollama's context length docs for details.
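If you do bring a custom model, one way to pin the context length is a Modelfile with Ollama's num_ctx parameter. A minimal sketch (the base model and the variant name here are just examples):

```shell
# Sketch: build a 64k-context variant of a model via an Ollama Modelfile.
# num_ctx is Ollama's context-length parameter; the base model is an example.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:14b
PARAMETER num_ctx 65536
EOF

# Build the variant (requires Ollama and the base model to be present):
if command -v ollama >/dev/null 2>&1; then
  ollama create qwen2.5-coder-14b-64k -f Modelfile
fi
```

Then point OpenClaw at the new variant name instead of the base model.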
Non-Interactive (Headless) Mode
For scripts, CI/CD, or Docker:
ollama launch openclaw --model qwen2.5-coder:14b --yes
The --yes flag auto-pulls the model, skips all interactive selectors, and requires --model to be specified.
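In a CI or Docker script you might wrap this so the model is configurable. A minimal sketch (the OPENCLAW_MODEL variable is an assumption of this example, not an OpenClaw convention):

```shell
# Minimal CI wrapper: model name comes from an env var, defaulting to the
# 16 GB recommendation; --yes keeps the run fully non-interactive.
MODEL="${OPENCLAW_MODEL:-qwen2.5-coder:14b}"
if command -v ollama >/dev/null 2>&1; then
  ollama launch openclaw --model "$MODEL" --yes
else
  echo "ollama not found on PATH" >&2
fi
```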
Changing the Model Without Relaunching
To switch models without starting the gateway and TUI:
ollama launch openclaw --config
Or specify a model directly:
ollama launch openclaw --model kimi-k2.5:cloud
If the gateway is already running, it restarts automatically to pick up the new model.
Web Search for Local Models
The web search plugin is installed automatically when you launch via Ollama. If you need to install it separately:
openclaw plugins install @ollama/openclaw-web-search
Note: Web search for local models requires signing in to Ollama first:
ollama signin
Cloud models handle this automatically.
2. Configuring Browser Control
For most macOS users, the browser setup you want is the built-in user
profile. It lets OpenClaw attach to your normal Google Chrome session
through Chrome DevTools MCP, so the agent can use tabs and logins you
already have open.
For this setup:
- do not set cdpUrl
- do not launch Chrome with --remote-debugging-port
- do not use the extension or relay flow
This path uses Chrome DevTools MCP, not the managed Playwright browser.
macOS Setup
- Open your normal Google Chrome.
- Open chrome://inspect/#remote-debugging.
- Turn on remote debugging.
- Keep Chrome open.
- If Chrome asks for permission, approve the attach prompt.
Then verify that OpenClaw can see your live browser session:
openclaw browser --browser-profile user start
openclaw browser --browser-profile user status
openclaw browser --browser-profile user tabs
What success looks like:
- status shows driver: existing-session
- status shows transport: chrome-mcp
- tabs shows your current Chrome tabs
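These checks can be folded into one guarded smoke test. A sketch, assuming openclaw is on your PATH; the grep patterns simply match the two status values listed above:

```shell
# Smoke test: grep the status output for the attach markers.
# Falls back to a hint if openclaw is missing or the attach failed.
openclaw browser --browser-profile user status 2>/dev/null \
  | grep -E 'existing-session|chrome-mcp' \
  || echo "Attach not confirmed; re-check chrome://inspect/#remote-debugging"
```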
If attach does not work:
- make sure remote debugging is still enabled at chrome://inspect/#remote-debugging
- make sure Google Chrome is still open
- restart the OpenClaw Gateway and try again
When you want the agent to use your signed-in Chrome session, use
profile="user".
Warning: The user profile gives the agent access to your signed-in browser session. Only use it when you're at the computer and can monitor what it is doing.
3. Connecting OpenClaw to Telegram
If you want to chat with your local OpenClaw setup from your phone, connect it to Telegram. For most people, a simple DM bot is enough.
- Create a bot with @BotFather and copy the token.
- Configure Telegram in OpenClaw. The easiest way is:
openclaw configure --section channels
If you prefer editing config directly, the minimum setup is:
{
channels: {
telegram: {
enabled: true,
botToken: "123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11",
dmPolicy: "pairing"
}
}
}
- Start the gateway:
openclaw gateway
- Message your bot in Telegram. The first time, approve the pairing code:
openclaw pairing list telegram
openclaw pairing approve telegram <CODE>
That is enough for a private Telegram DM setup. Group policies, allowlists, and forum-topic routing are advanced options and do not need to be part of your first local setup.
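Before wiring the token into OpenClaw, you can sanity-check it against Telegram's Bot API directly: the real getMe endpoint returns the bot's identity when the token is valid. A sketch, reusing the placeholder token from the config above:

```shell
# Verify a bot token with Telegram's getMe endpoint.
# Replace the placeholder with your real token from @BotFather.
BOT_TOKEN="123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11"
URL="https://api.telegram.org/bot${BOT_TOKEN}/getMe"
curl -s --max-time 5 "$URL" || echo "network check skipped"
```

A valid token returns a JSON object with "ok": true and your bot's username.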
4. Which Model Works Best on Mac?
Choosing the right model is crucial for local performance. If a model is too large, it will swap to disk and run incredibly slowly. If it's too small, the agent won't be smart enough to use tools or browser control effectively.
Mac Hardware Matrix
Here is our recommended matrix for MacBooks and Mac Minis running Apple Silicon (M1/M2/M3/M4). Remember: local models need at least 64k context to work properly with OpenClaw.
| Mac RAM | Recommended Model | Approx. Size (4-bit) | Use Case & Performance | Command |
|---|---|---|---|---|
| 8 GB | qwen2.5-coder:7b | ~4.7 GB | Basic coding and chat. Browser control might struggle with complex pages. Tight on memory. | ollama pull qwen2.5-coder:7b |
| 16 GB | qwen2.5-coder:14b | ~9.0 GB | Sweet Spot. Leaves enough RAM for macOS and browser. Excellent tool use and reasoning. | ollama pull qwen2.5-coder:14b |
| 16 GB | deepseek-r1:14b | ~9.0 GB | Great for deep reasoning tasks, slightly slower output than Qwen. | ollama pull deepseek-r1:14b |
| 32 GB | glm-4.7-flash | ~25 GB | Ollama's recommended local model. Strong reasoning and code generation. | ollama pull glm-4.7-flash |
| 32 GB+ | qwen2.5-coder:32b | ~20 GB | Desktop-class power. Competes closely with cloud models for coding and browser tasks. | ollama pull qwen2.5-coder:32b |
| 64 GB+ | llama3.3:70b | ~43 GB | Ultimate local intelligence. Requires serious hardware but offers world-class performance. | ollama pull llama3.3:70b |
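The table above collapses into a small helper you can keep in your shell profile. A sketch: the thresholds mirror the table, and on macOS sysctl hw.memsize reports total RAM in bytes.

```shell
# Map installed RAM (in GB) to a recommended model from the table above.
suggest_model() {
  ram_gb=$1
  if   [ "$ram_gb" -ge 64 ]; then echo "llama3.3:70b"
  elif [ "$ram_gb" -ge 32 ]; then echo "qwen2.5-coder:32b"
  elif [ "$ram_gb" -ge 16 ]; then echo "qwen2.5-coder:14b"
  else                            echo "qwen2.5-coder:7b"
  fi
}

# macOS reports total RAM in bytes via hw.memsize (fallback: assume 16 GB).
RAM_GB=$(( $(sysctl -n hw.memsize 2>/dev/null || echo 17179869184) / 1073741824 ))
echo "Suggested model: $(suggest_model "$RAM_GB")"
```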
If your Mac does not have enough RAM for a good local model, use a cloud model with the same workflow:
ollama signin
ollama launch openclaw --model kimi-k2.5:cloud
Wrapping Up
The shortest path is simple: install Ollama, run ollama launch openclaw,
pick a model that matches your Mac, and only add browser or Telegram setup if
you actually need those capabilities.
For most people:
- start with ollama launch openclaw
- use qwen2.5-coder:14b on a 16GB Mac or glm-4.7-flash on a 32GB Mac
- use profile="user" only when you need your real signed-in Chrome session
- add Telegram only if you want to chat with OpenClaw from your phone