LM Studio documentation: reference index for the desktop app

Every topic on this site, organized by silo and audience — so you can jump directly to the page that matches what you are doing right now.

Field Notes

LM Studio documentation spans four silos on this site. If you are new, the quickstart and tutorial cover the first session end-to-end. If you know exactly what you need, use the index table below to jump to the right page directly.

How the LM Studio documentation is structured here

Four silos — Platforms, Capabilities, Resources, and Support — map every LM Studio topic to the question a reader is most likely to be asking at that moment.

This site organizes LM Studio documentation around reader intent rather than product features. Someone who just downloaded the application for the first time is asking a different question than a developer wiring up an agent framework, and those two readers should land on different pages. The four-silo model solves that by separating installation help from capability deep-dives, and both of those from the comparison and troubleshooting material that serves readers who are already running the app.

The Platforms silo answers "how do I install this on my machine?" with separate pages for Windows, macOS, Linux, and the portable build. System requirements live here too, because the decision about whether to download at all is tied to what the hardware can realistically run. Each platform page notes the GPU acceleration path for that operating system: CUDA on Windows with an NVIDIA card, Metal on macOS with Apple Silicon, ROCm for AMD on Linux, and Vulkan as a cross-platform fallback.

The Capabilities silo answers "how do I use this feature?" The local LLM page covers the model loading flow, context window options, and chat presets. The server mode page walks through enabling the built-in HTTP server and choosing a port. The API page documents the OpenAI-compatible endpoint that server mode exposes. The model library page explains how the in-app browser surfaces quantized variants with hardware-fit hints. The performance page covers the GPU layer-offload slider and the RAM thresholds that determine which model classes are realistic on a given machine.
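Once server mode is enabled, the OpenAI-compatible endpoint can be exercised from a few lines of Python. This is a minimal sketch, not LM Studio's official client: it assumes the server is on localhost with the port you chose in the server tab (1234 is the app's usual default), and the `"local-model"` name is a placeholder for whatever model identifier your loaded model reports.

```python
import json
import urllib.request

# Assumed defaults: localhost, port 1234 (adjust to match your server tab).
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,  # placeholder — use your loaded model's identifier
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the payload to the local endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint follows the OpenAI wire format, any OpenAI-compatible SDK pointed at the local base URL should work the same way.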

The Resources silo contains the pages you are most likely to share: the quickstart for a colleague who is brand new, the tutorial for a structured first-session walkthrough, the vs-Ollama comparison for someone evaluating alternatives, the alternatives roundup for a wider survey, and the GitHub presence overview for anyone tracking open-source activity. This documentation index is itself part of Resources.

The Support silo covers what happens when something goes wrong: the troubleshooting page lists common errors and fixes, the security policy explains the data-handling guarantees, the support hub aggregates help links, and the contact page provides a direct channel for issues that need escalation.

What each silo covers

Six core topics account for most of what people actually look up: install, model loading, server mode, the API, troubleshooting, and comparisons with other tools.

LM Studio documentation topic index — topic, page, and intended audience
Topic | Page | Audience
Platform install (Windows, macOS, Linux, portable) | windows.html, mac.html, linux.html, portable.html | First-time installers
First session from download to first prompt | quickstart.html, tutorial.html | New users
Server mode and local OpenAI-compatible API | server.html, api.html | Developers, agent builders
Model library, quantization, hardware fit | models-library.html, performance.html | Power users, researchers
Comparisons with Ollama and other alternatives | vs-ollama.html, alternative.html | Evaluators, switchers
Troubleshooting, security, and support | troubleshooting.html, security-policy.html | All users
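The hardware-fit hints mentioned in the model library row come down to simple arithmetic: a quantized model's weights occupy roughly parameters × bits-per-weight ÷ 8 bytes, plus overhead for the context cache and runtime buffers. The sketch below is an assumed rule of thumb for that estimate, not the formula LM Studio itself uses, and the 20% overhead multiplier is a rough guess.

```python
def estimated_ram_gb(params_billions: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Back-of-envelope memory footprint of a quantized model.

    params_billions: parameter count in billions (e.g. 7 for a 7B model)
    bits_per_weight: quantization level (e.g. 4 for a Q4 variant)
    overhead: assumed multiplier for KV cache and runtime buffers
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization: 7e9 * 4/8 ≈ 3.5 GB of weights,
# ~4.2 GB with overhead — comfortable headroom on a 16 GB machine.
```

The same arithmetic explains why the performance page pairs quantization with RAM thresholds: halving bits-per-weight roughly halves the memory a given model class needs.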

Where to start depending on your situation

Your next click depends on one question: have you already installed the application, or are you still deciding whether to download it?

If the application is not yet on your machine, open the download page and pick the installer for your platform. Windows users should grab the .exe installer; macOS users can choose between the Apple Silicon and Intel builds; Linux users get an AppImage that runs on most major distributions without a package manager. After the download completes, the quickstart page walks through the first ten minutes step by step, including loading your first model and running a prompt.

If LM Studio is already installed but you want to go deeper, the tutorial expands the quickstart into a six-step walkthrough that includes server mode activation, a test API call with curl, and configuring a chat preset for a specific use case. The tutorial is the single most complete worked example on this site.

If you are comparing LM Studio against another tool, the vs-Ollama comparison breaks down the key differences across eight features with a side-by-side table. The alternatives page casts a wider net, covering Jan, GPT4All, llama.cpp, KoboldCpp, and text-generation-webui.

If something is broken, the troubleshooting page lists the five most common failure modes — model won't load, GPU not detected, slow inference, server port conflict, and AppImage permissions on Linux — each with a cause and a concrete fix. For issues not covered there, the support hub aggregates community and official channels.
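One of those failure modes, the server port conflict, can be diagnosed from a script before digging into the app. A minimal sketch, assuming a plain TCP check on localhost is sufficient; the port number is whatever you configured in server mode:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is accepting TCP connections on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on success, i.e. something is already listening
        return s.connect_ex((host, port)) != 0

# If this prints False, another process holds the port and the
# LM Studio server needs a different one.
print(port_is_free(1234))
```

A False result usually means another local server (or a previous LM Studio instance) is still bound to the port; picking an unused port in the server settings resolves the conflict.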

External standards this documentation references

Two external sources inform the way this site discusses AI system evaluation and data handling: NIST's AI Risk Management Framework and MIT's open courseware on machine learning systems.

Where this LM Studio documentation touches on evaluating model quality or responsible deployment, it references NIST's AI Risk Management Framework as a neutral standard for thinking about risk categories. For readers who want a deeper grounding in how language models actually work before evaluating them in LM Studio, the MIT OpenCourseWare catalog includes relevant machine learning courses that are freely available.

Frequently asked questions

Five questions readers most often have when arriving at the LM Studio documentation index.