Core Findings
The team is independent of the LM Studio upstream project, carries no commercial relationships with model vendors or hardware manufacturers, and evaluates each release on real hardware before updating any page on this site.
Who we are
Four contributors with complementary backgrounds cover the platform, API, performance, and documentation corners of LM Studio — each writes in their area of hands-on experience.
The team came together because none of its members could find a single reference site for LM Studio that was both accurate and written by people who actually ran the software. Vendor documentation is necessarily promotional. Third-party reviews are often written from a single session on one platform. Neither serves someone who needs to know, for instance, exactly how the GPU layer slider interacts with VRAM on an NVIDIA RTX 3060 Ti, or what happens to the API response schema when you load a model with a non-standard chat template.
The answer was to build the reference they wanted to use themselves. Each contributor takes responsibility for a domain area, tests claims before publishing them, and updates pages when a new LM Studio release changes the behaviour being described. The editorial culture is practical: if it can't be reproduced on real hardware, it doesn't go on the page as a fact.
The lead editor
The lead editor sets editorial standards, coordinates release testing, and makes final calls on scope — what gets covered, how deeply, and when a page is ready to publish.
Morgan K. Reeves has been working with local inference tooling since the earliest llama.cpp releases. The LM Studio coverage on this site started as internal notes shared within a small engineering team and grew into a public resource when it became clear others were asking the same questions. Morgan focuses on cross-platform consistency testing — verifying that claims made on a macOS install hold true on Windows and Linux — and on keeping the API documentation current across LM Studio's server-mode releases.
Correction requests and editorial feedback go through the shared inbox at hello@lmstudio.co.com and land in Morgan's review queue first. Urgent corrections are turned around within two business days; deeper factual investigations may take longer if hardware testing is required.
How the team evaluates LM Studio releases
Each new LM Studio release is tested on at least two platforms against a fixed prompt suite before any page is updated — opinion and hands-on result are always labelled separately.
When a new LM Studio version ships, the team's process runs in sequence. First, a clean install on Windows and macOS. Second, a run of the standard prompt suite — twelve prompts that exercise instruction following, code generation, summarisation, and multi-turn coherence — using the same model file at the same quantization level as the previous round. Any difference in output quality, latency, or error behaviour gets noted. Third, the server tab is exercised: the OpenAI-compatible endpoint is queried via curl and via a Python script to verify the response schema hasn't changed. Finally, any changelog items that claim UI or UX improvements are reproduced step-by-step in the UI.
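The schema check in that third step can be sketched as a small validator. This is illustrative, not the team's actual script: the base URL assumes LM Studio's default local server address, and the field names are the subset of the OpenAI chat-completion response shape a typical test suite relies on.

```python
import json
from urllib.request import Request, urlopen

# Default address of LM Studio's OpenAI-compatible local server
# (an assumption; adjust if the server is configured differently).
BASE_URL = "http://localhost:1234/v1"

# Fields the test suite expects to remain stable across releases.
REQUIRED_TOP = ("id", "object", "created", "model", "choices", "usage")
REQUIRED_CHOICE = ("index", "message", "finish_reason")

def schema_drift(response: dict) -> list[str]:
    """Return the names of expected fields missing from a chat-completion response."""
    missing = [k for k in REQUIRED_TOP if k not in response]
    for i, choice in enumerate(response.get("choices", [])):
        missing += [f"choices[{i}].{k}" for k in REQUIRED_CHOICE if k not in choice]
    return missing

def query_local_server(prompt: str) -> dict:
    """POST one chat completion to the local server and return the parsed JSON."""
    body = json.dumps({
        "model": "local-model",  # LM Studio serves whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = Request(f"{BASE_URL}/chat/completions", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

# Usage (requires a running local server with a model loaded):
#   drift = schema_drift(query_local_server("Say hello in one word."))
#   if drift: print("schema changed, missing:", drift)
```

Running the same check before and after an update turns "the schema hasn't changed" from an impression into a pass/fail result.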
Pages are updated to reflect verified findings. If a claimed improvement can't be reproduced, it doesn't appear in the "what changed" section of the relevant page. If a regression is found that isn't mentioned in the changelog, it gets noted with the version number so readers know what to watch for when they update.
Contributor roles and focus areas
The table below lists each contributor role, their primary focus, and approximate years of experience with local LLM tooling.
| Contributor role | Focus area | Experience (years) |
|---|---|---|
| Lead Editor (M. K. Reeves) | Cross-platform testing, API documentation, editorial standards | 6 |
| Platform Specialist | Windows and Linux install guides, system requirements, portable builds | 4 |
| Model & Performance Analyst | Quantization comparisons, GPU tuning, hardware fit guides | 5 |
| Developer Integration Writer | Server mode, OpenAI-compatible API, SDK integration examples | 4 |
Editorial independence and conflict of interest policy
No contributor holds a financial interest in any model vendor, GPU manufacturer, or cloud provider whose products appear in LM Studio's ecosystem.
Before joining the team, each contributor discloses whether they have employment, advisory, or equity relationships with companies whose products might appear on this site. The current team has none. That disclosure is revisited annually. If a conflict emerged — for instance, a contributor joined the board of a company whose product is compared on a page here — the affected pages would be flagged and reviewed by a contributor without the conflict before remaining live.
The site accepts no payment for coverage, no free hardware from GPU manufacturers, and no early access arrangements that come with publication obligations. Early access to LM Studio releases through public beta channels is used when available, but those beta periods come with no editorial strings.
"Reading the API page here saved me an hour of trial-and-error. The specific note about the chat template header was exactly the detail missing from every other source I found."
Caleb J. Whitford · DevOps Lead, Greylock Systems, Raleigh NC
Contributing to this site
The team occasionally brings in contributors with deep hands-on experience in a specific area — the process starts with a brief pitch, not a full draft.
If you have spent significant time with a part of LM Studio that isn't well-covered here — a niche hardware configuration, a specific integration with an agent framework, or a workflow the existing pages don't address — the team wants to hear about it. Email hello@lmstudio.co.com with a paragraph describing your experience and the gap you'd fill. Two sentences about your background and two about what you'd write are enough to start the conversation.
The team reviews contribution pitches on a rolling basis. There is no editorial calendar pressure, so a pitch that arrives in the middle of a busy release cycle may sit for a few weeks before getting a response. Persistence is welcome; follow-ups after two weeks are reasonable.