Fix ACTION block parser failing on JSON with arrays/nested objects
Details
The regex-based ACTION block parser used [^[]*, which broke on any JSON
containing literal [ characters (e.g. checklist arrays, tags arrays).
The result was zero parsed blocks despite valid-looking [ACTION] output,
silently dropping AI actions that would have updated tasks.
Replaced the regex approach with brace-depth counting that handles nested
{}, [], and quoted strings correctly. Also added FlattenSlotValue(), which
converts JSON arrays to newline-separated strings, so when the AI sends
"add_checklist": ["item1", "item2"], the value becomes the newline-delimited
format the intent handler expects.
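As an illustration, a minimal sketch of the brace-depth scan and the array
flattening, assuming System.Text.Json; only the FlattenSlotValue name comes
from this change, everything else is hypothetical:

```csharp
using System.Linq;
using System.Text.Json;

static class ActionBlockParsing
{
    // Scans from the index of the opening '{' and returns the complete JSON
    // object, tracking {}/[] depth and skipping quoted strings (incl. \" escapes).
    public static string? ExtractJsonObject(string text, int start)
    {
        int depth = 0;
        bool inString = false;
        for (int i = start; i < text.Length; i++)
        {
            char c = text[i];
            if (inString)
            {
                if (c == '\\') i++;                  // skip the escaped character
                else if (c == '"') inString = false;
            }
            else if (c == '"') inString = true;
            else if (c == '{' || c == '[') depth++;
            else if (c == '}' || c == ']')
            {
                depth--;
                if (depth == 0) return text.Substring(start, i - start + 1);
            }
        }
        return null;                                 // unbalanced block
    }

    // Converts a JSON array slot into the newline-delimited string the intent
    // handler expects; scalar values pass through unchanged.
    public static string FlattenSlotValue(JsonElement value) =>
        value.ValueKind == JsonValueKind.Array
            ? string.Join("\n", value.EnumerateArray().Select(e => e.ToString()))
            : value.ToString();
}
```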
Added guidance to the ACTION format header covering slot naming
(add_checklist, not checklist) and array support.
Version: 1.65.1 -> 1.65.2
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Fix cloud AI token limits causing MAX_TOKENS truncation
Details
Cloud providers handle their own context limits and bill per token, so
artificially capping them at 200/1200/2500 caused truncation errors
(MAX_TOKENS from Gemini). Raised cloud token budgets to 1024/4096/8192
— the system prompt's length guidance still controls actual verbosity.
Also bumped AiMemoryExtractor MaxTokens from 100 to 256 to prevent
JSON truncation that caused the JsonReaderException on memory extraction.
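A hedged sketch of the new budget mapping; the enum and method names are
illustrative, only the 1024/4096/8192 values come from this change:

```csharp
public enum ResponseLength { Short, Medium, Long }

public static class CloudTokenBudgets
{
    // Cloud providers enforce their own context limits and bill per token,
    // so these are generous ceilings; the system prompt's length guidance
    // still controls actual verbosity.
    public static int MaxTokensFor(ResponseLength length) => length switch
    {
        ResponseLength.Short  => 1024,
        ResponseLength.Medium => 4096,
        ResponseLength.Long   => 8192,
        _                     => 4096,
    };
}
```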
Version: 1.65.0 -> 1.65.1
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add hardware-aware AI recommendations with privacy-first cloud providers
Details
Hardware profiling: GPU detection (CUDA/ROCm/Metal), CPU/SIMD capability
assessment (AVX2/AVX-512), and available memory detection. Composite
fitness scoring (0-100) with Green/Yellow/Red tiers determines local AI
viability and drives cloud provider recommendations.
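A sketch of how a composite 0-100 score could feed the tiers; the weights
and thresholds below are illustrative placeholders, not the shipped values:

```csharp
using System;

public enum FitnessTier { Green, Yellow, Red }

public record HardwareProfile(bool HasGpu, bool HasAvx2, double AvailableMemoryGb);

public static class LocalAiFitness
{
    // Composite 0-100 score; the weights here are placeholders.
    public static int Score(HardwareProfile hw)
    {
        int score = 0;
        if (hw.HasGpu)  score += 50;                           // CUDA/ROCm/Metal accelerator
        if (hw.HasAvx2) score += 20;                           // AVX2/AVX-512-capable CPU
        score += (int)Math.Min(30, hw.AvailableMemoryGb * 2);  // memory headroom
        return Math.Min(100, score);
    }

    // Green/Yellow/Red determines local AI viability and whether the UI
    // recommends a cloud provider instead.
    public static FitnessTier TierFor(int score) =>
        score >= 70 ? FitnessTier.Green :
        score >= 40 ? FitnessTier.Yellow :
                      FitnessTier.Red;
}
```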
New cloud providers: Mistral AI (EU-based, GDPR-native, HighPrivacy) and
Groq (fast inference, StandardApi). Both use OpenAiCompatibleProviderBase
which extracts shared OpenAI chat format logic.
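A sketch of the shared-base pattern; apart from OpenAiCompatibleProviderBase
and PrivacyTier, the member names are illustrative (the endpoints are each
provider's public OpenAI-compatible base URL):

```csharp
using System.Collections.Generic;
using System.Linq;

public enum PrivacyTier { HighPrivacy, StandardApi }   // mirrors the SDK type

public abstract class OpenAiCompatibleProviderBase
{
    protected abstract string BaseUrl { get; }
    public abstract PrivacyTier PrivacyTier { get; }

    // Shared OpenAI-style chat payload; subclasses only supply endpoint details.
    protected object BuildChatRequest(string model, IEnumerable<(string Role, string Content)> messages) =>
        new
        {
            model,
            messages = messages.Select(m => new { role = m.Role, content = m.Content }),
        };
}

public sealed class MistralProvider : OpenAiCompatibleProviderBase
{
    protected override string BaseUrl => "https://api.mistral.ai/v1";
    public override PrivacyTier PrivacyTier => PrivacyTier.HighPrivacy;  // EU-based, GDPR-native
}

public sealed class GroqProvider : OpenAiCompatibleProviderBase
{
    protected override string BaseUrl => "https://api.groq.com/openai/v1";
    public override PrivacyTier PrivacyTier => PrivacyTier.StandardApi;  // fast inference
}
```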
Privacy tier system: PrivacyTier property on all providers — HighPrivacy
(Anthropic, Mistral, Local) vs StandardApi (OpenAI, Gemini, Groq).
Provider dropdown now shows "Privacy-First" labels.
Smart GPU offload: LocalLlamaProvider uses PlatformDetector.DetectGpu()
to set GpuLayerCount to -1 (all layers) when GPU is available, or 0
(CPU-only) when no accelerator is detected — prevents crashes and
unusable performance on CPU-only systems.
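A minimal sketch of that decision; only GpuLayerCount and the -1/0 values
come from this change, the helper shape is illustrative:

```csharp
// Returns the llama GPU layer count based on whether an accelerator was found.
static int ChooseGpuLayerCount(bool gpuAvailable) =>
    gpuAvailable
        ? -1   // offload all layers to the detected CUDA/ROCm/Metal GPU
        : 0;   // no accelerator: run CPU-only to avoid crashes

// usage: modelParams.GpuLayerCount = ChooseGpuLayerCount(gpuWasDetected);
```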
Settings UI: a Hardware Report Card replaces the single RAM recommendation
line; it shows RAM/GPU/CPU details, a fitness score badge, and a
cloud-recommended banner for Yellow/Red tier systems.
Desktop version bumped from 1.64.3 to 1.65.0.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add PrivacyTier enum and property to AiProviderInfo SDK type
Details
Introduces PrivacyTier enum (HighPrivacy, StandardApi) to classify AI
providers by their data handling guarantees. Adds nullable PrivacyTier
property to AiProviderInfo record. Bumps SDK version from 1.64.0 to
1.65.0 to publish updated types before dependent desktop changes.
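A sketch of the new SDK surface; PrivacyTier and the nullable property are
the real additions, AiProviderInfo's other members shown here are
placeholders:

```csharp
public enum PrivacyTier
{
    HighPrivacy,   // e.g. Anthropic, Mistral, local models
    StandardApi    // e.g. OpenAI, Gemini, Groq
}

public record AiProviderInfo
{
    public string Id { get; init; } = "";
    public string DisplayName { get; init; } = "";

    // Nullable so existing providers keep working without declaring a tier.
    public PrivacyTier? PrivacyTier { get; init; }
}
```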
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move wiki-link resolution and conversation history off UI thread
Details
ResolveWikiLinksAsync was running on the UI thread before Task.Run,
performing up to 5 async database lookups via
ILinkableItemProvider.GetItemByIdAsync, each one blocking the UI.
Moved it inside Task.Run.
BuildConversationHistory was also running on the UI thread, doing LINQ
over the ObservableCollection. Replaced it with a snapshot taken on the
UI thread (fast: just a Select/ToList), with the filtering/trimming logic
moved inside Task.Run.
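A sketch of the snapshot-then-offload pattern; BuildConversationHistory and
ResolveWikiLinksAsync come from this change, the surrounding names are
illustrative:

```csharp
// Cheap copy on the UI thread (ObservableCollection is not thread-safe).
var snapshot = Messages
    .Select(m => (m.Role, m.Content))
    .ToList();

var reply = await Task.Run(async () =>
{
    // Heavier work off the UI thread: trimming/filtering the history and
    // resolving wiki links via async database lookups.
    var history = BuildConversationHistory(snapshot);
    var resolvedLinks = await ResolveWikiLinksAsync(prompt);
    return await _aiClient.SendAsync(history, resolvedLinks, prompt);
});
```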
This eliminates the brief UI freeze between pressing Send and seeing
the streaming response begin. Bumps Desktop to 1.64.3.