↳ Support & Documentation
Help Centre
Everything you need to get Samantha running and make the most of your AI agent.
↳ Frequently Asked Questions (13 articles)
What is Samantha?
Samantha is an open-source, voice-first AI agent built by ZenonAI. It runs entirely on your local machine using Ollama — no cloud, no subscriptions, no API keys required. You can control your desktop, browse the web, search Wikipedia, send emails, and more, all with a single voice command.
What are the system requirements?
Python 3.9 or later, Git, and curl are required. For voice mode you also need a microphone. We recommend at least 8 GB of RAM (16 GB for larger models). Samantha runs on macOS, Linux, and Windows (WSL2). A discrete GPU speeds up inference but is not mandatory.
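If you're unsure whether your interpreter meets the 3.9+ requirement, a quick check from Python itself (the helper name is illustrative, not part of Samantha):

```python
import sys

def meets_python_requirement(version_info=sys.version_info):
    """Return True when the interpreter satisfies Samantha's 3.9+ requirement."""
    # Compare only (major, minor); the patch level does not matter here.
    return tuple(version_info[:2]) >= (3, 9)

if __name__ == "__main__":
    print("Python OK" if meets_python_requirement() else "Python too old: need 3.9+")
```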
How do I install Samantha?
Run the one-liner installer: curl -fsSL https://zenon.ai/install.sh | sh — it handles everything for you. Alternatively, follow the manual four-step guide on the home page: install Ollama, pull a model, clone the repo, and run python main.py.
Which models does Samantha support?
Samantha supports any model available through Ollama, including Mistral 7B (recommended), LLaMA 3 8B, Falcon 7B, Phi-3 Mini, Gemma 2 9B, and Neural Chat. Switch models by editing the OLLAMA_MODEL value in your .env file — no code changes needed.
How do I switch to a different model?
Open ~/samantha/.env and change OLLAMA_MODEL to your preferred model name (e.g. llama3, phi3, gemma2). Then run ollama pull <model-name> to download it. Restart Samantha and it will use the new model automatically.
Does Samantha send my data to the cloud?
No. All processing happens locally on your machine. Your voice is transcribed by Whisper locally, your queries are handled by Ollama locally, and your conversation history is stored in a local SQLite database. Nothing leaves your machine.
How does voice mode work?
Samantha uses OpenAI Whisper (running locally) for speech-to-text transcription and pyttsx3 for text-to-speech. Run python main.py without any flags to enable voice mode. Use --text for text-only mode if you prefer to type.
What can Samantha actually do?
Samantha can: browse any URL, perform Google searches, compose and send Gmail messages, open desktop applications, take screenshots, send keyboard shortcuts, query Wikipedia, and recall previous conversations from memory. More capabilities are added regularly.
How does Samantha's memory work?
Samantha stores your conversation history in a local SQLite database (~/samantha/memory.db). When you ask it what it remembers, it retrieves relevant past interactions and uses them to give contextual, personalised answers across sessions.
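A store-and-recall loop like this can be sketched in a few lines of Python's built-in sqlite3 module. Note the schema, table name, and function names below are illustrative assumptions — the real memory.db layout may differ:

```python
import sqlite3

def open_memory(db_path=":memory:"):
    """Open (or create) a conversation store. Schema is illustrative only."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS conversations "
        "(id INTEGER PRIMARY KEY, role TEXT NOT NULL, content TEXT NOT NULL)"
    )
    return conn

def remember(conn, role, content):
    """Append one message (role is 'user' or 'assistant') to the history."""
    conn.execute(
        "INSERT INTO conversations (role, content) VALUES (?, ?)", (role, content)
    )
    conn.commit()

def recall(conn, keyword):
    """Fetch past messages mentioning keyword, oldest first."""
    rows = conn.execute(
        "SELECT role, content FROM conversations WHERE content LIKE ? ORDER BY id",
        (f"%{keyword}%",),
    )
    return list(rows)
```

A keyword LIKE query is the simplest possible retrieval; the real agent could equally rank past rows by relevance before handing them to the model.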
Samantha won't start or isn't responding. What should I check?
First, make sure Ollama is running (ollama serve). Check that you have pulled a model (ollama list). Ensure your .env has the correct OLLAMA_MODEL value. If using voice mode, check your microphone permissions. Run python main.py --text to isolate audio issues.
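The first check above can be scripted: Ollama listens on port 11434 by default, and its /api/tags endpoint lists pulled models, so a simple HTTP probe tells you whether the server is up. A minimal sketch (the helper name is an illustration, not part of Samantha):

```python
import urllib.error
import urllib.request

def ollama_reachable(base_url="http://localhost:11434", timeout=2.0):
    """Return True if an Ollama server answers at base_url.

    Probes /api/tags, which lists the models you have pulled; any
    successful response means `ollama serve` is running.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: server not reachable.
        return False
```

If this returns False, start the server with ollama serve before launching Samantha.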
How do I update Samantha?
Navigate to ~/samantha, activate your virtual environment (source .venv/bin/activate), then run git pull to get the latest code, followed by pip install -r requirements.txt to update dependencies.
How can I contribute?
Samantha is fully open source on GitHub (github.com/hellowaste344). You can open issues, submit pull requests, improve documentation, or suggest new capabilities. All contributions are welcome — see the README for the contribution guide.
Is Samantha free?
Yes, Samantha is completely free and open source under the MIT licence. If you find it useful, consider supporting the project via the Donate page to help fund ongoing development.
Still stuck?
Open a GitHub issue
The community and maintainer (x-ashe) are active on GitHub. Describe your issue and someone will help you out.