Installation
Prerequisites
- Rust 1.70+ - Install from rustup.rs
- An LLM API key (at least one of):
  - OpenAI API key
  - Anthropic API key
  - Local Ollama installation
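If you are not sure which toolchain you have, the standard rustup installer (from rustup.rs) and a version check look like this; neither command is specific to localgpt:
# Install or update the Rust toolchain via rustup, then confirm it is 1.70 or newer
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustc --version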
Install from crates.io
# Full install (includes desktop GUI)
cargo install localgpt
# Headless install (no desktop GUI — for servers, Docker, CI)
cargo install localgpt --no-default-features
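cargo install places the binary in Cargo's bin directory (typically ~/.cargo/bin), so make sure that directory is on your PATH; for example:
# Ensure Cargo's bin directory is on your PATH, then confirm the binary resolves
export PATH="$HOME/.cargo/bin:$PATH"
which localgpt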
Building from Source
# Clone the repository
git clone https://github.com/localgpt-app/localgpt.git
cd localgpt
# Build release binary (includes desktop GUI)
cargo build --release
# Build headless (no desktop GUI — skips eframe/egui/winit)
cargo build --release --no-default-features
# The binary will be at target/release/localgpt
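As a quick sanity check, you can run the freshly built binary in place before installing it anywhere:
# Run the release build directly from the build directory
./target/release/localgpt --help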
Docker / Headless Server
For Docker or headless environments where display servers (X11/Wayland) are unavailable, build without the desktop feature to avoid winit compilation issues:
cargo build --release --no-default-features
Or in a Dockerfile:
FROM rust:1.83 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release --no-default-features
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/localgpt /usr/local/bin/
CMD ["localgpt", "daemon", "start", "--foreground"]
The headless binary includes all features except the desktop GUI: CLI, web UI, HTTP API, WebSocket, daemon mode, and heartbeat.
Install the Binary
Copy the binary to a directory on your PATH:
# Option 1: Install to /usr/local/bin
sudo cp target/release/localgpt /usr/local/bin/
# Option 2: Install to ~/.local/bin (no sudo required)
mkdir -p ~/.local/bin
cp target/release/localgpt ~/.local/bin/
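If you use the ~/.local/bin option and that directory is not already on your PATH, add it in your shell profile (bash shown; adjust for your shell):
# Make ~/.local/bin visible to new shells (bash example)
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc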
Initial Setup
- Create the configuration directory:
mkdir -p ~/.localgpt/workspace/memory
- Create the configuration file:
cp config.example.toml ~/.localgpt/config.toml
- Edit the configuration with your API key:
# Set your API key in the environment or edit config.toml
export OPENAI_API_KEY="your-api-key"
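To avoid re-exporting the key in every new shell, you can persist it in your shell profile (bash shown; the variable name matches the example above and assumes the OpenAI provider):
# Persist the API key so new shells pick it up automatically (bash example)
echo 'export OPENAI_API_KEY="your-api-key"' >> ~/.bashrc
source ~/.bashrc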
Verify Installation
# Check that the CLI is installed and list available commands
localgpt --help
# Test with a simple question
localgpt ask "What is 2+2?"
Using with Ollama (Local Models)
If you prefer fully local operation with Ollama:
- Install Ollama from ollama.ai
- Pull a model:
ollama pull llama3
- Update your config:
[agent]
default_model = "llama3"
[providers.ollama]
endpoint = "http://localhost:11434"