Add Developer settings section with local API toggle
New "Developer" category in Settings (between Security and Enterprise)
with UI to enable/disable the local HTTP API server:
- Enable toggle that starts/stops the Kestrel server immediately
- Port configuration field (default 9720)
- API key display with Copy and Regenerate buttons
- Status indicator showing running state
- Quick test curl command for easy verification
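The verification flow the quick-test command enables might look like the following C# sketch. The `X-Api-Key` header name and the placeholder key are assumptions based on this changelog, not confirmed server details; per the notes below, `/api/v1/status` is served without authentication.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Illustrative only: header name and route shapes are inferred from this
// changelog; adjust to whatever the Developer settings page displays.
class LocalApiQuickTest
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("http://127.0.0.1:9720") };

        // Status endpoint requires no auth, so it works as a liveness check.
        Console.WriteLine(await http.GetStringAsync("/api/v1/status"));

        // Other endpoints expect the API key shown in Developer settings.
        http.DefaultRequestHeaders.Add("X-Api-Key", "<key-from-settings>");
        Console.WriteLine(await http.GetStringAsync("/api/v1/routes"));
    }
}
```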
Document local HTTP API in APP_CONTEXT.md
Add IApiProvider to the capability table and describe the local API server
in the shell features section (port, auth, endpoints, opt-in setting).
Add IApiProvider SDK capability and local HTTP API server
New SDK capability interface (IApiProvider) lets plugins declare HTTP API
routes and handle requests via pure SDK DTOs — no Kestrel dependency in
plugins. The shell hosts a Kestrel minimal API server on 127.0.0.1:9720
(configurable) that discovers providers and maps routes.
- IApiProvider interface + ApiProviderModels DTOs (ApiMethod, ApiRouteDescriptor,
ApiRequest, ApiResponse with static factory methods)
- LocalApiServer: WebApplication.CreateSlimBuilder(), API key auth middleware
(constant-time compare), per-provider route mapping, /api/v1/status (no auth)
and /api/v1/routes shell endpoints
- AppSettings: ApiEnabled (default false), ApiPort (default 9720), ApiKey
(auto-generated base64url on first enable)
- ServiceRegistration: LocalApiServer singleton
- App.axaml.cs: conditional server start in deferred background services
- PluginRegistry.ActivatePlugin: auto-register IApiProvider to CapabilityBroker
- FrameworkReference Microsoft.AspNetCore.App in Desktop csproj
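A plugin-side provider might look like this sketch. The exact member names on IApiProvider, ApiRouteDescriptor, ApiRequest, and ApiResponse are assumptions inferred from the DTO names listed above, not the real SDK signatures:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical plugin capability: declares one GET route and answers it with
// pure SDK DTOs, so the plugin never references Kestrel/ASP.NET types.
// Member names below are illustrative guesses at the SDK surface.
public sealed class NotesApiProvider : IApiProvider
{
    public IReadOnlyList<ApiRouteDescriptor> Routes { get; } = new[]
    {
        new ApiRouteDescriptor(ApiMethod.Get, "/api/v1/notes"),
    };

    public Task<ApiResponse> HandleAsync(ApiRequest request, CancellationToken ct)
    {
        // ApiResponse exposes static factory methods per the notes above;
        // the factory name used here is an assumption.
        return Task.FromResult(ApiResponse.Json("""{"notes":[]}"""));
    }
}
```

The shell discovers such providers through the CapabilityBroker at plugin activation and maps each descriptor onto the Kestrel minimal-API pipeline, so route registration stays declarative on the plugin side.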
Add stop_reason/finish_reason validation to all AI providers
All cloud AI providers (Anthropic, OpenAI, Gemini, Groq, Mistral) now
check the API's stop/finish reason field to detect token-limit truncation.
Previously, truncated responses were returned as Success=true with
partial content, causing garbled output or literal "MAX_TOKENS" error
strings to leak through to the user.
Changes:
- AiResponse: add WasTruncated property for consumer-side handling
- AnthropicProvider: check stop_reason == "max_tokens"
- OpenAiProvider: check finish_reason == "length"
- OpenAiCompatibleProviderBase: check finish_reason == "length" (covers
Groq, Mistral, and any future OpenAI-compatible providers)
- GeminiProvider: check finishReason == "MAX_TOKENS", log warning
- All providers log a warning with the MaxTokens limit when truncated
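For the OpenAI-compatible case, the check might look like this sketch (the response-model shape, logger field, and property names are illustrative, not the actual provider code):

```csharp
// Hypothetical excerpt of a response-parsing step: detect token-limit
// truncation via finish_reason and surface it, instead of returning
// partial content as a clean success.
var choice = completion.Choices[0];
var truncated = choice.FinishReason == "length";

if (truncated)
{
    _logger.LogWarning(
        "Response truncated at MaxTokens={MaxTokens}; content may be incomplete.",
        request.MaxTokens);
}

return new AiResponse
{
    Success = true,
    Content = choice.Message.Content,
    WasTruncated = truncated, // consumers can retry with a higher limit
};
```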
Add BoxShadows elevation token system to SharedTokens
Define a 5-tier elevation shadow system as BoxShadows resources:
ThemeShadowXs — cards at rest, subtle depth cue
ThemeShadowSm — surface cards, raised sections
ThemeShadowMd — elevated cards, hover lift, floating toolbars
ThemeShadowLg — menus, popovers, dropdowns, toasts
ThemeShadowXl — modals, dialogs, full-screen overlays
Plus two hover-state tokens (ThemeShadowXsHover, ThemeShadowSmHover)
for cards that lift on pointer-over.
Migrate all theme style classes (card, surface-card, elevated-card,
stat-card, item-card, modal, shadow-sm/md/lg, hoverable) and the
menu/context-menu templates to reference these tokens instead of
hardcoded values. Also migrate inline shadows in MainWindow,
UniversalSearchDropdown, and ToastContainer.
Previously there were 80+ hardcoded shadow values with inconsistent
opacity levels across the codebase. This establishes a single source
of truth that can be tuned in one place.
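Defining and consuming one of these tokens in Avalonia might look like the following sketch; the shadow values shown are placeholders, not the actual tuned values, and the style selector is only an example of the migrated classes.

```xml
<!-- SharedTokens (illustrative values only) -->
<BoxShadows x:Key="ThemeShadowSm">0 1 3 0 #26000000</BoxShadows>
<BoxShadows x:Key="ThemeShadowSmHover">0 4 8 0 #33000000</BoxShadows>

<!-- A style class referencing the token instead of a hardcoded shadow -->
<Style Selector="Border.surface-card">
    <Setter Property="BoxShadow" Value="{StaticResource ThemeShadowSm}" />
</Style>
```

Because every style class resolves the same keys, retuning the elevation scale is a change to the token definitions alone.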