# Local LLM

Run a local LLM inference server.
## Notes

- The server is intended to be OpenAI-compatible.
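
Because the server aims to be OpenAI-compatible, a standard OpenAI client can be pointed at it once it is running. The sketch below assumes the `openai` npm package; the base URL, port, and model name are placeholders, not values defined by this plugin, so substitute whatever the running server actually reports.

```ts
// Sketch only: the base URL, port, and model name are assumptions,
// not values guaranteed by this plugin.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8080/v1", // hypothetical local server address
  apiKey: "unused", // local servers typically ignore the API key
});

const completion = await client.chat.completions.create({
  model: "local-model", // hypothetical model identifier
  messages: [{ role: "user", content: "Hello from the local LLM server" }],
});

console.log(completion.choices[0].message.content);
```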
## Commands

```ts
import { commands } from "@hypr/plugin-local-llm";
```

- downloadModel
- isModelDownloaded
- isModelDownloading
- isServerRunning
- listSupportedModels
- modelsDir
- restartServer
- startServer
- stopServer
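
A typical flow is to check whether the server is running, start it if needed, and query the supported models. The command names below come from the list above, but their argument and return shapes are assumptions; consult the generated bindings in `@hypr/plugin-local-llm` for the actual signatures.

```ts
// Sketch only: argument and return types are assumed, not taken from the bindings.
import { commands } from "@hypr/plugin-local-llm";

async function ensureServer(): Promise<void> {
  // Start the server if it is not already running.
  if (!(await commands.isServerRunning())) {
    await commands.startServer();
  }

  // List the models the plugin knows how to download.
  const models = await commands.listSupportedModels();
  console.log("supported models:", models);
}

ensureServer();
```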