Self-Hosted Local AI Tools
You can run AI locally with two parts:
- Backend: the engine that loads and runs models.
- Frontend: the UI you click on. Historically, frontends just talked to a backend and had no say in GPU use or VRAM. In 2025 many ship an embedded runner or RAG engine, but performance still comes from the backend you configure.
For example, Open WebUI can run fully offline, connect to Ollama / llama.cpp / vLLM / TGI / LM Studio / other OpenAI-style APIs, and now includes multi-user channels, DMs, knowledge bases + RAG, and tool integrations. (Open WebUI)
Updated December 2025 for GPT-OSS, newer Open WebUI releases, AMD Gaia, and recent Ollama/LM Studio changes.
Quick start
- Pick a backend that fits your hardware and model format.
- Pick a frontend to chat or manage models.
- Or choose an all-in-one app that bundles both.
- Download a model from a catalog you trust.
- Check licenses (model + app). Bind services to `localhost` unless you add auth (see the sketch below).
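A minimal end-to-end sketch of those steps, assuming Ollama as the backend; any backend with an OpenAI-style endpoint works the same way, only the URL and model name change:

```bash
# Pull a small model, chat once, then confirm the API is only listening locally.
ollama pull llama3                       # download from the Ollama Library
ollama run llama3 "Say hello in one sentence."
curl http://127.0.0.1:11434/api/tags     # lists locally installed models
```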
Windows vs Linux: what to install first
Windows
Discrete NVIDIA GPU (desktop/server)
- Use Ollama, LM Studio, llama.cpp, or text-generation-webui.
- Ollama now ships an official desktop app for Windows/macOS with GUI chat, history, and drag-and-drop files, while still exposing the same local API/CLI.
- For high-throughput multi-model serving, run vLLM on WSL2 or a Linux host and expose an OpenAI-style endpoint.
Laptops / mini PCs with iGPU (AMD/Intel) or Intel Arc
- Prefer LM Studio or llama.cpp / KoboldCpp builds that use Vulkan. LM Studio can offload layers to AMD and Intel iGPUs via Vulkan, which is significantly faster than CPU-only on integrated-graphics systems.
- Start with 7B–13B GGUF models; 20B (e.g., gpt-oss-20b) is realistic on 16 GB-class machines with good offload.
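A hedged llama.cpp example for such a machine, assuming a Vulkan build of `llama-server` and a GGUF file you have already downloaded (the filename is a placeholder; tune `-ngl` to what your iGPU memory allows):

```bash
# Serve a quantized model with partial GPU offload over Vulkan.
# -ngl = number of layers to offload to the GPU, -c = context length.
llama-server -m ./models/model-7b-q4_k_m.gguf -ngl 20 -c 4096 \
  --host 127.0.0.1 --port 8080
# The OpenAI-style endpoint is then at http://127.0.0.1:8080/v1
```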
Ryzen AI laptops / NPUs
- Consider AMD Gaia: GUI + CLI that runs local LLM agents on Ryzen AI NPU + iGPU, using a RAG pipeline and an OpenAI-compatible REST API. It also runs on non-Ryzen systems (at lower performance).
Frontends
- Open WebUI or LibreChat pointed at your local endpoint. Open WebUI is now closer to a self-hosted “ChatGPT + team chat” for your own models (channels, DMs, knowledge bases, tools) and uses a custom BSD-3-based license with a branding clause in v0.6.x.
- Page-level browser extensions (e.g., Page Assist) can talk to Ollama/LM Studio APIs if you prefer in-browser chat.
Linux
NVIDIA (server / homelab)
- vLLM (v1) and Text Generation Inference (TGI) 3.x are the standard high-throughput OpenAI-style servers. vLLM focuses on efficient serving; recent releases add architectural speed-ups and improved multimodal support. TGI adds multi-backend support (TensorRT-LLM, vLLM, etc.).
- For simpler setups, Ollama or llama.cpp remain practical single-node servers.
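A rough vLLM starting point, assuming Docker with the NVIDIA container toolkit; the image tag, model ID, and flags are illustrative, so check the vLLM docs for current options:

```bash
# Serve an open-weight model with an OpenAI-compatible API on port 8000.
docker run -d --gpus all --ipc=host \
  -p 127.0.0.1:8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model Qwen/Qwen2.5-7B-Instruct
```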
AMD GPUs
- Use ROCm builds when available (vLLM/TGI/llama.cpp), or KoboldCpp / llama.cpp with Vulkan.
- Gaia has Linux support as well, but its sweet spot is Ryzen AI laptops with NPU + iGPU.
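If you go the KoboldCpp route on an AMD box, a typical Vulkan launch looks roughly like this; the model path is a placeholder, and flag names should be verified against `koboldcpp --help` for your build:

```bash
# Launch KoboldCpp with the Vulkan backend and a local GGUF model.
./koboldcpp --model ./models/model-13b-q4_k_m.gguf --usevulkan \
  --contextsize 4096 --host 127.0.0.1 --port 5001
```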
Use Docker for servers. Bind to `127.0.0.1` and put a UI (Open WebUI, LibreChat, etc.) in front.
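For example, a sketch of the usual Open WebUI container, bound to loopback only and pointed at an Ollama instance on the same host (flags follow the Open WebUI quick start; adjust for your setup):

```bash
# UI reachable only from this machine at http://127.0.0.1:3000
docker run -d --name open-webui --restart always \
  -p 127.0.0.1:3000:8080 \
  -v open-webui:/app/backend/data \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  ghcr.io/open-webui/open-webui:main
```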
Rule of thumb: VRAM caps the model size. Smaller quantized models still beat larger high-precision models on weak hardware. Start with 7B, then move up.
Frontend
Frontends are thin clients from the user’s perspective. They connect to whatever backend you run (local engines, remote APIs, or both). Some now bundle light backends, RAG, and multi-user features, but you still point them at LLM endpoints.
| App | OS | Connects to | Notes | Ref |
|---|---|---|---|---|
| Open WebUI | Win, macOS, Linux | OpenAI-style endpoints, Ollama, llama.cpp, vLLM, TGI, LM Studio | Extensible default; channels + DMs, KB/RAG, tools; custom BSD-3-based license (v0.6.6+). | (Open WebUI ) |
| SillyTavern | Win, macOS, Linux | KoboldAI/KoboldCpp, text-gen-webui, Ollama, OpenAI-style | RP and character tools | |
| LibreChat | Win, macOS, Linux | OpenAI-style endpoints | Team features; custom endpoints | (LibreChat ) |
| Kobold Lite | Any browser | KoboldAI/KoboldCpp, AI Horde | Zero-install client | (lite.koboldai.net , GitHub ) |
| KoboldAI Client | Win, macOS, Linux | Local or remote LLM backends | Story-writing UI | (GitHub ) |
| AnythingLLM | Win, macOS, Linux | Ollama or APIs | Built-in RAG, project-style workspaces | (anythingllm.com , GitHub ) |
| LM Studio (UI) | Win, macOS | Built-in local server, OpenAI-style | Catalog for GPT-OSS/Qwen3/Gemma3/DeepSeek; Vulkan iGPU offload; exposes local OpenAI API; SDKs | (LM Studio ) |
| Jan | Win, macOS, Linux | Built-in local server, OpenAI-style | Offline-first desktop app, supports modern open-weight models | (Jan ) |
| GPT4All Desktop | Win, macOS, Linux | Built-in local server | Private, on-device; large local model catalog | (Nomic AI , docs.gpt4all.io ) |
Backend
Engines that load models and expose a local API.
| App | OS | GPU accel | VRAM (typical) | Models / Formats |
|---|---|---|---|---|
| llama.cpp (llama-server) | Win, macOS, Linux | CUDA, Metal, HIP/ROCm, Vulkan, SYCL | 7B q4 ≈ 4 GB; 13B q4 ≈ 8 GB | GGUF, OpenAI-style server. GGUF is the native format. (GitHub ) |
| vLLM | Linux, Win WSL | CUDA, ROCm | Model dependent | Transformers; high-throughput OpenAI-style server; 2025 v1 architecture improves throughput + multimodal. (VLLM Documentation ) |
| Text Generation Inference (TGI) | Linux | CUDA, ROCm | Model dependent | HF production server; 3.x adds multi-backend support (TensorRT-LLM, vLLM) and mature deployment tooling. (Hugging Face ) |
| KoboldCpp | Win, macOS, Linux | CUDA, ROCm, Metal, Vulkan | 7B q4 ≈ 4 GB | GGUF, Kobold API; focus on story/RP workloads. (GitHub ) |
| MLX LLM | macOS (Apple Silicon) | Apple MLX | Model dependent | MLX or GGUF-converted |
| TensorRT-LLM | Linux | NVIDIA TensorRT | High for fp16 | Transformers; max-throughput NVIDIA deployment |
Both: frontend + backend in one
| App | Form | OS | GPU accel | VRAM (typical) | Models / Formats |
|---|---|---|---|---|---|
| Ollama | CLI + API + GUI | Win, macOS, Linux | CUDA, ROCm, Metal | Follows llama.cpp | GGUF, local API; official desktop app; one-command pulls via Library ; optional Turbo cloud for large GPT-OSS models. (GitHub , Ollama ) |
| LM Studio | Standalone UI | Win, macOS | CUDA, Metal, Vulkan | Model dependent | GGUF; catalog for GPT-OSS, Qwen3, Gemma3, DeepSeek; local OpenAI-style API; JS/Python SDKs. (LM Studio ) |
| GPT4All Desktop | Standalone UI | Win, macOS, Linux | Embedded llama.cpp | Model dependent | GGUF, local API. (Nomic AI ) |
| Jan | Standalone UI | Win, macOS, Linux | Embedded | Model dependent | GGUF / other formats via runners; local API. (Jan ) |
| text-generation-webui | Standalone UI | Win, macOS, Linux | CUDA, CPU, AMD, Apple Silicon | Model dependent | Transformers, ExLlamaV2/V3, AutoGPTQ, AWQ, GGUF. (GitHub ) |
| Llamafile | Standalone UI | Win, macOS, Linux | Via embedded llama.cpp | Follows llama.cpp | Single-file executables, local API. (GitHub ) |
| Tabby (TabbyML) | Standalone UI | Win, macOS, Linux | CUDA, ROCm, Vulkan | ~8 GB for 7B int8 | Self-hosted code assistant; IDE plugins; REST API. (tabbyml.com , tabby.tabbyml.com ) |
| AMD Gaia | Standalone UI | Win, Linux | Ryzen AI NPU + AMD iGPU/CPU | Model dependent | Multi-agent RAG app around local LLMs (Llama, Phi, etc.), optimized for Ryzen AI PCs; exposes OpenAI-style API and MCP. |
Image / video UIs
| App | Form | OS | GPU accel | VRAM (typical) | Models / Formats |
|---|---|---|---|---|---|
| ComfyUI | Standalone UI | Win, macOS, Linux | CUDA, ROCm, Apple MPS | SD1.5 ≈ 8 GB, SDXL ≈ 12 GB | Node-graph pipelines; 2025 Node 2.0 UI and rich video flows. (GitHub ) |
| AUTOMATIC1111 SD WebUI | Standalone UI | Win, Linux (macOS unofficial) | CUDA, ROCm, DirectML | 4–6 GB workable; more for SDXL | SD1.5/SDXL, many extensions. (GitHub ) |
| InvokeAI | Standalone UI | Win, macOS, Linux | CUDA, AMD via Docker, Apple MPS | 4 GB+ | SD1.5, SDXL, node workflows. (Invoke AI ) |
| Fooocus | Standalone UI | Win, Linux, macOS | CUDA, AMD, Apple MPS | ≥4 GB (NVIDIA) | SDXL presets |
| Stable Video Diffusion | Model + demo | Win, Linux | CUDA | ~14–24 GB common | SVD and SVD-XT image-to-video. (Hugging Face ) |
Hardware sizing (plain rules)
These are still rough, but align with 2025 open-weight releases:
- 7B q4: ~4 GB VRAM/RAM.
- 13B q4: ~8 GB.
- 20B (e.g., gpt-oss-20b): ~16 GB VRAM or a mix of VRAM + fast RAM.
- 70B in heavy quant: ≥24 GB VRAM, often more.
- Bigger context windows need more memory. Prioritize VRAM (or NPU-accessible RAM) over raw GPU cores for LLMs.
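The back-of-the-envelope formula behind these numbers: weight memory ≈ parameters × bits-per-weight ÷ 8, plus headroom for the KV cache and runtime overhead. A quick check:

```bash
# 7B parameters at 4-bit quantization:
echo "7 * 4 / 8" | bc -l    # ≈ 3.5 GB of weights -> ~4 GB with overhead
# 20B at 4-bit (e.g., gpt-oss-20b):
echo "20 * 4 / 8" | bc -l   # ≈ 10 GB of weights -> realistic on 16 GB-class machines
```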
Model formats you’ll see
| Format | Use with | Notes |
|---|---|---|
| GGUF | llama.cpp, Ollama, LM Studio, KoboldCpp | Quantized, CPU/GPU-friendly. Native for llama.cpp. (GitHub ) |
| GPTQ | ExLlama, text-generation-webui | NVIDIA-focused, good chat speed |
| AWQ | vLLM, TGI, text-generation-webui | Activation-aware quantization |
| EXL2 | ExLlamaV2/V3 | Optimized GPTQ variant for Llama-family |
| ONNX | Gaia, custom runtimes, some TGI/vLLM | Framework-agnostic; often used for NPU / DirectML / Ryzen AI / edge deployments via SDKs |
Modern “flagship” open-weight families like GPT-OSS-20B/120B, Qwen3, Gemma 3, and DeepSeek R-series/V-series usually ship HF safetensors plus community quantizations in GGUF/GPTQ/AWQ/EXL2.
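To grab a specific community quantization rather than the full safetensors release, something like the following works, assuming the Hugging Face CLI is installed; the repository name and filename pattern are placeholders for whichever quant you actually pick:

```bash
# Download only the Q4_K_M GGUF file from a (hypothetical) quantized repo.
huggingface-cli download some-org/some-model-GGUF \
  --include "*Q4_K_M.gguf" --local-dir ./models
```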
Local RAG building blocks
| Tool | Type | Local friendly |
|---|---|---|
| Chroma | Embedded vector DB | Yes |
| Qdrant | Vector DB | Yes |
| LanceDB | Vector DB on Arrow | Yes |
| SQLite + sqlite-vec | Embedded | Yes |
Tip: keep chunks ~500–1000 tokens, store sources, and version your indexes. Many frontends (Open WebUI, AnythingLLM, Gaia) now have built-in RAG layers using one of these patterns plus an embedding model.
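On the vector-store side, a local Qdrant is a one-liner; a sketch bound to loopback (the volume name is a placeholder):

```bash
# Qdrant REST API on http://127.0.0.1:6333, data persisted in a named volume.
docker run -d --name qdrant \
  -p 127.0.0.1:6333:6333 \
  -v qdrant_storage:/qdrant/storage \
  qdrant/qdrant
```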
Speech and media blocks
| Task | Tool | Notes |
|---|---|---|
| ASR | faster-whisper | CPU or GPU. Local. |
| TTS | Piper | Small, offline. |
| Diarization | pyannote.audio | Multi-speaker audio. |
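As a taste of how small these blocks are, Piper runs entirely offline from the command line; this sketch assumes you have downloaded one of its published voices (en_US-lessac-medium here):

```bash
# Synthesize a WAV file locally with Piper.
echo "All of this runs on my own machine." | \
  piper --model ./en_US-lessac-medium.onnx --output_file hello.wav
```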
60-second installs
Windows (NVIDIA, beginner-friendly)
- Install Ollama (desktop app includes CLI + GUI).
- Open a terminal: `ollama run gpt-oss:20b` or `ollama run llama3` to test.
- Install Open WebUI or LibreChat. Point it to `http://localhost:11434`. (GitHub, Ollama, Open WebUI, LibreChat)
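To confirm the endpoint before wiring up a frontend, a quick check against Ollama's OpenAI-compatible routes:

```bash
# Should list installed models; this is the URL Open WebUI/LibreChat will use.
curl http://localhost:11434/v1/models
```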
Windows (laptop / mini PC, no big GPU)
- Install LM Studio.
- Use its model browser to download a 7B–20B GGUF model (e.g., GPT-OSS-20B, Gemma 3 12B, Qwen3-Coder).
- In the model settings, enable GPU offload to your AMD/Intel iGPU, then enable the local API if you want to connect Open WebUI/LibreChat.
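Once the local server is enabled, you can sanity-check it the same way. LM Studio defaults to port 1234; the `lms` CLI commands below exist in recent releases, but the GUI toggle works just as well:

```bash
# Start LM Studio's local server from the CLI (optional; the GUI has a toggle).
lms server start
# Verify the OpenAI-style endpoint your frontend will use.
curl http://localhost:1234/v1/models
```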
Windows (Ryzen AI)
- Install AMD Gaia using the Hybrid installer on a Ryzen AI PC.
- Choose a built-in agent (chat, YouTube Q&A, code) and attach your documents or repos.
- Optionally call Gaia via its REST API or MCP interface from tools that speak OpenAI-style APIs.
Linux (NVIDIA, server)
- Run vLLM or TGI via Docker to expose an OpenAI-style endpoint.
- Put Open WebUI or LibreChat in front for your UI. (VLLM Documentation , Hugging Face , Open WebUI , LibreChat )
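A comparable TGI starting point, again hedged: the image tag and model ID are illustrative (the model is the one used in the TGI docs), and flags change between releases:

```bash
# TGI serving on http://127.0.0.1:8080 (includes OpenAI-style chat routes in 3.x).
docker run -d --gpus all --shm-size 1g \
  -p 127.0.0.1:8080:80 \
  -v $PWD/tgi-data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id HuggingFaceH4/zephyr-7b-beta
```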
Windows or Linux (desktop GUI)
Use LM Studio or GPT4All. Download a 7B GGUF, enable the local API, then connect your frontend if needed. (LM Studio , Nomic AI )
API interop map
- OpenAI-style servers: llama.cpp server, vLLM, TGI, Ollama, LM Studio, AMD Gaia. (GitHub, VLLM Documentation, Hugging Face, LM Studio, Ollama)
- Kobold API: KoboldCpp, KoboldAI backends. (GitHub )
- text-generation-webui: adapters for multiple backends. (GitHub )
Most 2025 frontends expect an OpenAI-style API; if your backend exposes one, you can usually swap it in without changing the UI.
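Concretely, the same request body works against any of them; only the base URL (and sometimes the model name) changes. Assuming a llama.cpp `llama-server` on its default port:

```bash
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Hello"}]}'
# Swap the host/port (11434 for Ollama, 1234 for LM Studio, 8000 for vLLM)
# and the same call hits a different backend.
```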
Model catalogs
| Site | Focus | What you get | Ref |
|---|---|---|---|
| Hugging Face Hub | LLMs (GPT-OSS, Qwen3, Gemma 3, DeepSeek, etc.), vision, audio, SD/SVD | Large model zoo with tooling (Spaces, Inference, datasets) | (Hugging Face ) |
| Civitai | Image/video models, LoRAs, embeddings | Community checkpoints and LoRAs | |
| ModelScope | Broad model repository | Direct downloads and SDK | (ModelScope ) |
| Ollama Library | Local LLM manifests | One-command pulls for Ollama | (Ollama ) |
| LM Studio Discover | Local LLM catalog in-app | Browse and download for local use | (LM Studio ) |
| Stability AI releases | SD, SDXL, SVD | Reference weights and licenses | (Hugging Face ) |
Security and licensing
- Bind to `127.0.0.1` by default.
- If you must expose a port, use a reverse proxy and auth (at minimum HTTP auth, preferably SSO or VPN); an SSH tunnel (sketch after this list) avoids exposing anything at all.
- Read model licenses before commercial use (GPT-OSS is Apache-2.0; Qwen3 and DeepSeek are broadly permissive; Gemma ships under its own terms).
- Read app licenses too. Open WebUI v0.6.6+ uses a custom BSD-3-based license with a branding clause; white-labelling or rebranding may require an enterprise license.
- Store API keys in env vars or a secrets manager.
- Treat LLM backends as you would any network service:
- Run them under non-privileged users.
- Keep them patched.
- Only install images/binaries from reputable sources.
- Be aware that local AI stacks can be abused. For example, security researchers have already shown ransomware using GPT-OSS-20B locally via the Ollama API to generate and run attack scripts. If you allow untrusted code to talk to your local LLM stack, include that in your threat model.
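If you do need remote access without opening a port, an SSH tunnel is the lowest-effort option (hostnames here are placeholders):

```bash
# Forward the UI from the homelab box to your laptop; nothing is exposed publicly.
ssh -N -L 3000:127.0.0.1:3000 you@homelab.example
# Then open http://localhost:3000 on the client machine.
```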
Common pitfalls
- Pulling a model larger than your VRAM/RAM. Start small.
- Mixing GPU drivers or CUDA/ROCm versions. Keep them clean.
- Exposing services publicly with no auth.
- Assuming the frontend controls performance. It does not; the backend and model choice do.
- Forgetting that many “frontends” (Open WebUI, AnythingLLM, Gaia) now include RAG indexes and agents; back them up and secure them like any other data store.
References
- Open WebUI: Home
- Quick Start
- KoboldAI Lite
- LostRuins/lite.koboldai.net
- GitHub - KoboldAI/KoboldAI-Client: For GGUF support ...
- AnythingLLM | The all-in-one AI application for everyone
- Mintplex-Labs/anything-llm
- Get started with LM Studio | LM Studio Docs
- Jan.ai
- GPT4All – The Leading Private AI Chatbot for Local ...
- GPT4All
- ggml-org/llama.cpp: LLM inference in C/C++
- vLLM
- Text Generation Inference
- LostRuins/koboldcpp: Run GGUF models easily with a ...
- ollama/ollama: Get up and running with OpenAI gpt-oss, ...
- library
- LM Studio - Discover, download, and run local LLMs
- oobabooga/text-generation-webui: LLM UI with advanced ...
- Mozilla-Ocho/llamafile: Distribute and run LLMs with a ...
- Tabby - Opensource, self-hosted AI coding assistant
- What's Tabby
- comfyanonymous/ComfyUI: The most powerful and ...
- Stable Diffusion web UI
- Invoke
- stabilityai/stable-video-diffusion-img2vid
- The Model Hub
- ModelScope
- Model Catalog
- AMD Gaia