
Changelog

Every improvement, automatically tracked from our commit history.

Page 49 of 139
February 22, 2026
patch Core

Add NotEquals variant to FilterOperator enum

Core 1.15.0 → 1.15.1 | 6510ac87

The dataset view filter system was missing a NotEquals operator, causing deserialization failures when views used not-equals filters. Added the variant to the FilterOperator enum so it serializes as "not_equals" via serde's rename_all = "snake_case". Bumped the workspace version to 1.15.1.
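The fix is a one-variant addition. A minimal Rust sketch of the shape described above; the real enum lives in Core and derives serde's Serialize/Deserialize with #[serde(rename_all = "snake_case")], so here the wire names are spelled out by hand to keep the sketch dependency-free, and the variants other than Equals/NotEquals are placeholders:

```rust
// Sketch of FilterOperator after the fix. Only Equals and NotEquals are
// from the changelog; Contains is a placeholder for the enum's other
// operators. The real type derives serde traits with
// #[serde(rename_all = "snake_case")] instead of hand-written names.
#[derive(Debug, Clone, Copy, PartialEq)]
enum FilterOperator {
    Equals,
    NotEquals, // the newly added variant
    Contains,  // placeholder
}

impl FilterOperator {
    // The snake_case wire name that rename_all = "snake_case" emits.
    fn wire_name(self) -> &'static str {
        match self {
            FilterOperator::Equals => "equals",
            FilterOperator::NotEquals => "not_equals",
            FilterOperator::Contains => "contains",
        }
    }
}

fn main() {
    // Views with not-equals filters now have a name to round-trip through.
    assert_eq!(FilterOperator::NotEquals.wire_name(), "not_equals");
    println!("{}", FilterOperator::NotEquals.wire_name());
}
```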

patch Desktop Shell

Fix RAG index never starting on app launch

Desktop 1.60.0 → 1.60.1 | 99cdf5c0

RagIndexService was registered as a lazy singleton but never resolved during startup; it was only constructed when downloading the embedding model or seeding. Its constructor contains a 5-second delayed auto-init task that calls StartFullIndexAsync, but since the singleton was never constructed, the RAG pipeline was completely dead on normal launches: Duncan had zero indexed context for every chat message.

Eagerly resolve RagIndexService in the deferred background services block so its constructor runs and the auto-init fires. Bump version to 1.60.1.
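The app is a .NET shell, but the bug class is language-agnostic: a lazily constructed singleton whose constructor does real work never runs if nothing ever resolves it. A Rust sketch of the same pattern and fix using OnceLock (all names here are hypothetical stand-ins, not the app's API):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::OnceLock;

// Records whether the constructor's side effect ever happened.
static AUTO_INIT_FIRED: AtomicBool = AtomicBool::new(false);

struct RagIndexService;

impl RagIndexService {
    fn new() -> Self {
        // Stand-in for the real constructor's 5-second delayed task
        // that calls StartFullIndexAsync. The side effect only exists
        // if the constructor actually runs.
        AUTO_INIT_FIRED.store(true, Ordering::SeqCst);
        RagIndexService
    }
}

// Lazy singleton: construction is deferred until first access.
static RAG: OnceLock<RagIndexService> = OnceLock::new();

fn rag() -> &'static RagIndexService {
    RAG.get_or_init(RagIndexService::new)
}

// The fix: eagerly touch the singleton in the startup path so the
// constructor (and its auto-init) runs on every launch.
fn start_background_services() {
    let _ = rag();
}

fn main() {
    // Before the fix nothing called rag() on a normal launch, so the
    // flag stayed false and the RAG pipeline was dead.
    start_background_services();
    assert!(AUTO_INIT_FIRED.load(Ordering::SeqCst));
    println!("auto-init fired: {}", AUTO_INIT_FIRED.load(Ordering::SeqCst));
}
```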

minor Desktop Shell SDK

Add conversation history support to local LLM provider

Desktop 1.59.0 → 1.60.0 | af811ab0

Local models were running every message as a standalone prompt with zero conversational context; follow-up questions like "but my income is 4700" had no reference to the prior exchange. This made multi-turn chat useless.

FormatPrompt now accepts ConversationHistory and builds proper multi-turn chat templates for all three model families (Llama 3.x, Mistral, Phi-3), including correct turn delimiters and system prompt positioning. History is capped at 6 turns to fit within local context windows.

The chat view model now passes BuildConversationHistory() to local model requests, giving Duncan the same conversational continuity as cloud models. Bump version to 1.60.0.
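The assembly described above (system prompt first, capped history, then the current turn) can be sketched for one of the three families. This is a hypothetical Rust sketch using the publicly documented Llama 3 chat delimiters; the Turn struct, function names, and cap handling are assumptions, not the app's real FormatPrompt:

```rust
// Hypothetical sketch of multi-turn prompt assembly with a 6-turn cap,
// using Llama 3-style turn delimiters. Names are illustrative only.
struct Turn {
    role: &'static str, // "user" or "assistant"
    text: String,
}

const MAX_TURNS: usize = 6; // cap history to fit local context windows

fn format_prompt(system: &str, history: &[Turn], user_msg: &str) -> String {
    let mut p = String::from("<|begin_of_text|>");
    // System prompt is positioned first, before any turns.
    p.push_str(&format!(
        "<|start_header_id|>system<|end_header_id|>\n{}<|eot_id|>",
        system
    ));
    // Keep only the most recent MAX_TURNS turns.
    let start = history.len().saturating_sub(MAX_TURNS);
    for t in &history[start..] {
        p.push_str(&format!(
            "<|start_header_id|>{}<|end_header_id|>\n{}<|eot_id|>",
            t.role, t.text
        ));
    }
    // Current user message, then cue the assistant to respond.
    p.push_str(&format!(
        "<|start_header_id|>user<|end_header_id|>\n{}<|eot_id|>",
        user_msg
    ));
    p.push_str("<|start_header_id|>assistant<|end_header_id|>\n");
    p
}

fn main() {
    let history = vec![
        Turn { role: "user", text: "My rent is 1800.".into() },
        Turn { role: "assistant", text: "Noted.".into() },
    ];
    let prompt = format_prompt(
        "Use the provided context.", &history, "but my income is 4700",
    );
    // The follow-up now carries the prior exchange with it.
    assert!(prompt.contains("My rent is 1800."));
    println!("{} chars", prompt.len());
}
```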

minor Desktop Shell

Improve Duncan AI response quality and add model recommendation

Desktop 1.58.8 → 1.59.0 | 9cf80968

Three fixes for Duncan giving hallucinated, context-unaware responses:

1. The system prompt now instructs Duncan to use the provided data context and never fabricate numbers; this applies to both local and cloud prompts.

2. The local RAG context budget increased from 200 to 400 characters per chunk (6 chunks max, up from 5) so budget data isn't truncated to nothing. The Mistral 7B context window increased from 4096 to 8192 tokens to accommodate the richer context.

3. System RAM detection via GC.GetGCMemoryInfo().TotalAvailableMemoryBytes drives a model recommendation in AI settings: 16+ GB gets Mistral 7B, 8+ GB gets Phi-3 Mini, and smaller systems get Llama 3.2 1B. The recommended model is labeled in the dropdown and an explanation is shown below it.

Bump version to 1.59.0.
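The tier logic in fix 3 is plain threshold selection. A Rust sketch, assuming the byte count has already been read (the real code gets it from .NET's GC.GetGCMemoryInfo().TotalAvailableMemoryBytes; the function name here is hypothetical):

```rust
// Hypothetical sketch of the RAM-tier model recommendation from fix 3.
// In the app, total_ram_bytes comes from .NET's
// GC.GetGCMemoryInfo().TotalAvailableMemoryBytes.
fn recommend_model(total_ram_bytes: u64) -> &'static str {
    const GIB: u64 = 1024 * 1024 * 1024;
    if total_ram_bytes >= 16 * GIB {
        "Mistral 7B" // enough headroom for the 7B model
    } else if total_ram_bytes >= 8 * GIB {
        "Phi-3 Mini"
    } else {
        "Llama 3.2 1B" // smallest fallback for low-RAM systems
    }
}

fn main() {
    const GIB: u64 = 1024 * 1024 * 1024;
    assert_eq!(recommend_model(32 * GIB), "Mistral 7B");
    assert_eq!(recommend_model(8 * GIB), "Phi-3 Mini");
    assert_eq!(recommend_model(4 * GIB), "Llama 3.2 1B");
    println!("ok");
}
```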

patch Desktop Shell

Fix </s> stop tokens leaking into Duncan chat responses

Desktop 1.58.7 → 1.58.8 | 9474b487

Two bugs: (1) the ChatTokenPattern regex only matched the <|token|> format, not the bare </s> emitted by Mistral models; added "s" to the alternation group. (2) Cloud provider responses bypassed Sanitize() entirely, allowing stray tokens and formatting artifacts through. Now all responses go through the same sanitization pipeline regardless of provider.

Bump version to 1.58.8.
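The widened sanitization can be sketched without the actual ChatTokenPattern regex, whose full contents aren't in the changelog. A hand-rolled Rust stand-in that strips <|...|> markers and the bare </s>, avoiding a regex dependency:

```rust
// Hypothetical stand-in for the widened Sanitize(): removes <|token|>
// markers and the bare </s> that Mistral models emit. The real code
// uses the ChatTokenPattern regex; this version is dependency-free.
fn sanitize(s: &str) -> String {
    // Bug (1): bare </s> was previously not matched at all.
    let mut out = s.replace("</s>", "");
    // Strip every <|...|> marker.
    while let Some(start) = out.find("<|") {
        match out[start..].find("|>") {
            Some(end) => {
                out.replace_range(start..start + end + 2, "");
            }
            None => break, // unterminated marker: leave it
        }
    }
    // Bug (2): every provider's response must pass through here,
    // cloud included.
    out.trim().to_string()
}

fn main() {
    assert_eq!(
        sanitize("Sure, here you go.<|eot_id|></s>"),
        "Sure, here you go."
    );
    println!("{}", sanitize("Hello</s> world"));
}
```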

