GitHub presence
The full release archive, issue tracker, and community discussions for LM Studio are maintained on the public GitHub presence page.
How LM Studio release notes are structured, where to find them, how to interpret version numbers, when to stay on stable versus switch to beta, and how to roll back safely if a new build causes issues.
LM Studio release notes live on the GitHub releases page and are structured by version with categorised sections: new features, bug fixes, model runtime updates, and breaking changes. Stable builds ship every few weeks; beta builds appear more frequently. Rolling back is safe — model files are unaffected by app version changes. Version numbers follow semantic versioning with a build suffix.
Each release entry follows a consistent structure that lets you quickly identify what changed and whether the update is relevant to your workflow.
Every LM Studio release entry includes a version number at the top in semantic versioning format (MAJOR.MINOR.PATCH), a release date, and a structured changelog divided into labelled sections. The most common sections are:

- New features — capabilities that did not exist in the previous build.
- Improvements — refinements to existing behavior, such as faster model loading or a redesigned UI element.
- Bug fixes — specific issues resolved, often with a reference to the GitHub issue number.
- Runtime updates — changes to the bundled llama.cpp or inference engine version.
- Breaking changes — items that may affect existing workflows, such as configuration format changes or removed options.
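Versions in the MAJOR.MINOR.PATCH format can be compared programmatically, which is handy for scripts that decide whether an update is available. A minimal Python sketch; the three-part numeric core comes from the format above, while the handling of an optional `-beta.N` style suffix is an assumed convention, not something the release notes specify:

```python
def parse_version(version: str) -> tuple:
    """Split 'MAJOR.MINOR.PATCH[-suffix]' into a comparable tuple.

    The three-part numeric core follows semantic versioning; the
    optional pre-release suffix handling is an assumed convention.
    """
    core, _, suffix = version.partition("-")
    major, minor, patch = (int(part) for part in core.split("."))
    # A build with no suffix (stable) sorts after one with a suffix (beta),
    # matching the usual semver pre-release ordering.
    return (major, minor, patch, suffix == "", suffix)

def is_newer(candidate: str, current: str) -> bool:
    """True if `candidate` is a later release than `current`."""
    return parse_version(candidate) > parse_version(current)
```

With this ordering, `is_newer("0.3.8", "0.3.8-beta.1")` is true, reflecting that a stable build supersedes its own beta candidates.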
The runtime updates section is particularly important for users tracking model compatibility. When the bundled inference engine updates, it may add support for new model architectures, change quantization behavior, or shift performance characteristics for existing models. Reading this section before updating helps you anticipate any changes in token output or speed.
If a release note entry says "breaking change," read it carefully before updating on a machine where LM Studio is part of a production workflow. Breaking changes are rare but can include renamed API endpoints, configuration key changes, or modified default sampling parameters that affect scripted workflows.
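One way to insulate a scripted workflow from the "modified default sampling parameters" class of breaking change is to send every parameter explicitly rather than relying on server defaults. A hedged sketch of building a request payload for an OpenAI-compatible chat endpoint; the model name and parameter values are illustrative placeholders, not recommendations:

```python
import json

def build_request(prompt: str) -> dict:
    """Build a chat-completion payload with all sampling parameters pinned.

    Pinning temperature, top_p, and max_tokens means a new app version
    that changes its *defaults* cannot silently alter this workflow's
    output distribution. Values and model id here are illustrative.
    """
    return {
        "model": "local-model",   # placeholder; use your loaded model's id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "top_p": 0.95,
        "max_tokens": 512,
        "seed": 42,               # fixed seed, where the backend supports it
    }

payload = json.dumps(build_request("Summarise the changelog."))
```

A renamed endpoint or configuration key still requires reading the breaking-changes section, but pinned parameters remove one common source of silent drift.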
Release notes are maintained in a single authoritative location on GitHub, with secondary references on the download page for the most recent build.
The canonical home for LM Studio release notes is the GitHub releases page, accessible from the GitHub presence page on this site. Each release appears as a tagged entry with the full changelog text. Older releases are preserved indefinitely — you can scroll back through the full version history from the current build to the earliest public release.
The download page on this site surfaces the changelog highlights for the most recent stable build. For the complete history and older installer files, the GitHub releases page is the place to go. A changelog RSS feed is also available from the GitHub releases page if you prefer to track updates in a feed reader without checking manually.
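GitHub serves each repository's releases as an Atom feed at `<repository URL>/releases.atom`, which is what a feed reader consumes. A minimal sketch of extracting release titles from such a feed with the Python standard library; the feed snippet below is invented for illustration, and the real repository URL is whatever the GitHub presence page links to:

```python
import xml.etree.ElementTree as ET

ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}

def release_titles(atom_xml: str) -> list:
    """Return the entry titles (release names) from a releases Atom feed."""
    root = ET.fromstring(atom_xml)
    return [entry.findtext("atom:title", namespaces=ATOM_NS)
            for entry in root.findall("atom:entry", ATOM_NS)]

# Illustrative feed snippet; a live feed would be fetched from
# https://github.com/<owner>/<repo>/releases.atom with urllib.request.
sample = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Release notes</title>
  <entry><title>0.3.8</title></entry>
  <entry><title>0.3.7</title></entry>
</feed>"""
```

Comparing the newest title against the installed version is then a one-liner on top of any version-comparison helper.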
In-app update notifications appear when LM Studio detects a newer stable build available. The notification includes a one-click link to the changelog before you commit to downloading, which is a useful way to review what changed without leaving the app.
The channel you run on determines the trade-off between feature access and reliability — stable is conservative, beta gives early access at the cost of occasional rough edges.
The stable channel receives builds that have completed a full testing cycle. These are the builds recommended for everyday use, for machines used in production environments, and for anyone who wants consistent behavior between sessions. Stable releases ship roughly every two to four weeks, though the cadence varies with the scope of changes in each cycle.
The beta channel ships candidate builds earlier in the release cycle. Beta builds include new features, experimental inference backends, and updated model runtime versions before they land in stable. They are useful if you want to test support for a newly released model architecture or try an experimental feature before it is finalized. Beta builds occasionally have rough edges — a settings UI that is not yet fully polished, or a model that loads slightly differently than expected. The trade-off is early access versus stability.
To switch channels, open Settings in LM Studio and look for the "Update channel" setting. Switching from stable to beta and back is safe — neither direction modifies your model files or settings beyond updating the application binary. If a beta build causes an issue, rolling back to the most recent stable build resolves it.
Rolling back is safe because model files and settings are stored independently from the application binary.
If a new LM Studio release introduces a regression — a feature that stopped working, an inference speed slowdown, or a server mode change that broke a connected client — rolling back to the previous version is straightforward. The model files in the cache directory are untouched by the update; they are plain GGUF files on disk that any version of LM Studio can read.
On Windows: download the older installer from the GitHub releases archive and run it. The installer will overwrite the current version. Settings in %APPDATA%\LM Studio persist unless the older version uses an incompatible configuration format, which is rare.
On macOS: download the older .dmg, open it, and drag the app to Applications, replacing the current version. The Gatekeeper prompt may appear again for the older build if it was signed with a different certificate revision.
On Linux: replace the AppImage file with the older version from the GitHub releases archive. Since AppImages are self-contained, you can keep multiple versions on disk simultaneously and switch between them by launching the appropriate file.
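The Linux approach above (multiple self-contained AppImages side by side) can be managed with a small shell pattern: keep versioned files on disk and point a stable symlink at whichever build you want to launch. The directory layout and file names here are hypothetical examples:

```shell
#!/bin/sh
# Keep several versioned AppImages and switch builds via a symlink.
# Paths and file names below are hypothetical examples.
mkdir -p "$HOME/apps/lmstudio"
cd "$HOME/apps/lmstudio"

# After downloading e.g. LM-Studio-0.3.8.AppImage and
# LM-Studio-0.3.7.AppImage from the releases archive:
# chmod +x LM-Studio-*.AppImage

# Point the launcher symlink at the version you want to run.
ln -sf "LM-Studio-0.3.7.AppImage" current   # roll back to the older build
# ./current   # launches whichever version the symlink targets
```

Switching back to the newer build is the same `ln -sf` with the other file name, so a rollback and a re-upgrade are both instant and non-destructive.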
The inference engine bundled with LM Studio updates independently from the app UI, and those updates often matter more than UI changes for model compatibility and speed.
LM Studio bundles a version of the llama.cpp inference engine (or a compatible fork) inside each release. When the bundled runtime updates, the behavior of model loading, tokenization, and sampling can shift. New model architectures — a freshly released Qwen, Mistral, or Phi variant — typically require a runtime update before LM Studio can load them.
The release notes "Runtime updates" section lists the bundled engine version and any notable changes. If you load a model after an update and notice it produces different outputs than before, check the runtime version in the changelog. A change in the default sampling parameters or a corrected tokenizer implementation can shift model behavior in ways that are technically improvements but feel surprising on first use.
For users who maintain fixed prompting workflows — where consistent outputs across sessions matter — it is worth keeping a note of which LM Studio version and runtime version produced a baseline output, so you can attribute output changes to the correct cause when they appear.
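The baseline note described above can be kept as a small structured record: app version, runtime version, prompt, and a hash of the output, so that a later divergence can be attributed to the right cause. A sketch; the field names and example versions are illustrative, not an LM Studio format:

```python
import hashlib

def baseline_record(app_version: str, runtime_version: str,
                    prompt: str, output: str) -> dict:
    """Capture enough context to attribute a future output change."""
    return {
        "app_version": app_version,          # e.g. from the About dialog
        "runtime_version": runtime_version,  # from the changelog's runtime section
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

def output_changed(record: dict, new_output: str) -> bool:
    """True if the same prompt now produces a different output."""
    new_hash = hashlib.sha256(new_output.encode()).hexdigest()
    return new_hash != record["output_sha256"]
```

If `output_changed` fires after an update, the recorded runtime version tells you whether to look in the app changelog or the runtime-updates section for the cause.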
| Release | Date | Highlight |
|---|---|---|
| 0.3.8 (stable) | 2026-03-14 | llama.cpp r1820 — adds native support for Qwen2.5-VL vision models; CUDA 12.4 compatibility |
| 0.3.7 (stable) | 2026-02-22 | Server mode: streaming response headers now include X-Model-Name; improved token-count display in chat |
| 0.3.6 (stable) | 2026-01-30 | ROCm 6.1 support for AMD RX 7000 series on Linux; reduced VRAM overhead for Q8 models |
| 0.3.5 (beta) | 2026-01-08 | Experimental: multi-model load — keep two models resident simultaneously; early preview, beta only |
| 0.3.4 (stable) | 2025-12-11 | macOS 15 Sequoia compatibility fixes; Metal backend throughput improvements on M4 Max |
Get the current stable build for Windows, macOS, or Linux, with a changelog summary for the most recent release, on the downloads page.
Common post-update issues — models not loading, server mode changes, and performance regressions — are covered with step-by-step fixes on the troubleshooting page.
Common questions about LM Studio version management, changelogs, and update channels are answered below.
Release notes for each build are published on the LM Studio GitHub releases page, accessible from the GitHub presence page on this site. Each entry lists the version number, release date, and a categorised changelog covering new features, bug fixes, and model runtime updates. The full archive goes back to the earliest public release.
The stable channel receives fully tested builds on a two-to-four-week cadence, recommended for everyday and production use. The beta channel ships earlier builds with new features and runtime experiments before they reach stable — useful for testing upcoming model support or new inference backends, but with occasional rough edges. Switch channels in Settings under "Update channel."
Download the specific older installer from the GitHub releases archive. On Windows, run it over the current version or uninstall first. On macOS, drag the older app bundle to Applications to replace the current one. On Linux, swap the AppImage file. Model files and settings are stored separately and are not affected by any version change in either direction.
Application updates do not modify, delete, or re-download model files. The model cache directory is completely separate from the app binary and its bundled runtime. Updating LM Studio replaces only the application and bundled inference libraries — your GGUF files remain exactly where they were, in exactly the same state.