Changelog

Every improvement, automatically tracked from our commit history.

Page 28 of 139
February 27, 2026
patch Desktop Shell

Inject compact action catalog when embedded datasets are present

When a note has embedded dataset tables, always inject the ActionFormatHeader plus a compact DatasetActionReference (~188 tokens) listing notes.create_note, notes.update_note, and data.generate_insights with chart block syntax. This ensures Duncan knows how to use [ACTION] blocks for note operations without relying on RAG index freshness.

Collapsed the reference from ~481 tokens to ~188 by removing duplicated format rules (ActionFormatHeader already covers those).
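The injection rule can be sketched roughly as follows. This is a minimal Python illustration, not the Desktop Shell code (which this changelog does not show); the constant contents and the function name are assumptions.

```python
# Hypothetical sketch of "always inject the action catalog when embedded
# datasets are present". All names and string contents are illustrative.

ACTION_FORMAT_HEADER = "Actions are emitted as [ACTION]{...}[/ACTION] blocks."

# Compact catalog: action names plus chart block syntax only, with no
# duplicated format rules (the header above already covers those).
DATASET_ACTION_REFERENCE = (
    "Available actions: notes.create_note, notes.update_note, "
    "data.generate_insights. Charts use [CHART:<dataset-id>] blocks."
)

def build_system_context(base_context: str, has_embedded_datasets: bool) -> str:
    """Prepend the header and compact reference whenever the note embeds
    dataset tables; otherwise leave the context untouched."""
    if not has_embedded_datasets:
        return base_context
    return "\n\n".join([ACTION_FORMAT_HEADER, DATASET_ACTION_REFERENCE, base_context])
```

Because the catalog is injected unconditionally whenever datasets are embedded, the model's knowledge of [ACTION] syntax no longer depends on what the RAG index happens to retrieve.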

patch Desktop Shell

Include dataset ID in embedded context metadata

Add the dataset UUID to the context injected for embedded dataset-backed tables, so Duncan can reference it in [CHART:] markers when creating notes with chart blocks.
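The shape of that injected metadata might look like the sketch below. This is a hypothetical Python rendering; the function name, wording, and layout are assumptions, not the Desktop Shell's actual format.

```python
import uuid

def embedded_table_context(name: str, dataset_id: uuid.UUID, columns: list[str]) -> str:
    """Context line for a dataset-backed table, now carrying the dataset
    UUID so the model can echo it back in [CHART:<dataset-id>] markers."""
    return (
        f'Embedded dataset "{name}" (id: {dataset_id})\n'
        f"Columns: {', '.join(columns)}\n"
        f"Reference it in chart blocks as [CHART:{dataset_id}]."
    )
```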

patch Desktop Shell

Enable Duncan to dynamically query datasets via SQL

Add [QUERY] block support to Duncan's cloud chat, allowing dynamic SQL execution against embedded datasets. When a note contains dataset-backed tables, Duncan now receives full schema metadata (column names, types, row count) and can emit [QUERY]SELECT ... FROM source:"Name"[/QUERY] blocks. The system executes queries via IDatasetService.ExecuteSqlV2Async, injects the results into conversation history, and triggers a follow-up AI call for analysis.

Safety: read-only validation (SELECT/WITH only), a 100-row cap per query, a maximum of 3 queries per response, and cloud-only execution (local models excluded).
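The safety gate described above can be sketched as follows. This is hypothetical Python; the real checks live behind IDatasetService.ExecuteSqlV2Async in the shell and may be implemented differently (in particular, the row cap may be enforced by the service rather than by rewriting the SQL).

```python
import re

MAX_ROWS_PER_QUERY = 100
MAX_QUERIES_PER_RESPONSE = 3

QUERY_BLOCK = re.compile(r"\[QUERY\](.*?)\[/QUERY\]", re.DOTALL)

def extract_queries(response: str) -> list[str]:
    """Pull [QUERY]...[/QUERY] bodies, enforcing the per-response cap."""
    return [m.strip() for m in QUERY_BLOCK.findall(response)][:MAX_QUERIES_PER_RESPONSE]

def is_read_only(sql: str) -> bool:
    """Read-only validation: only statements starting with SELECT or WITH pass."""
    first = sql.lstrip().split(None, 1)
    return bool(first) and first[0].upper() in ("SELECT", "WITH")

def cap_rows(sql: str) -> str:
    """One way to enforce the row cap: wrap the query in an outer LIMIT."""
    return f"SELECT * FROM ({sql.rstrip().rstrip(';')}) LIMIT {MAX_ROWS_PER_QUERY}"
```

Keyword-prefix validation of this kind is a coarse filter; a production gate would also reject multi-statement payloads and CTEs that wrap writes, which is presumably why execution is additionally restricted to the dataset service's own read path.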

patch Desktop Shell

Remove artificial token limits and sentence truncation for cloud models

Cloud models use the user's own API key, so they should speak freely. The tier-based token ceilings (1024/4096/8192) and sentence-count truncation were designed for local models that can't self-regulate.

Changes:

  • CloudMaxTokensFor now returns a flat 16384 regardless of tier
  • Sanitize() accepts an isCloud flag: skips sentence truncation and markdown stripping for cloud responses
  • Local model path unchanged (still gets tight limits + cleanup)

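A rough Python analogue of the change: the names mirror CloudMaxTokensFor and Sanitize from the changelog, but the bodies, tier keys, and sentence heuristic are assumptions for illustration.

```python
CLOUD_MAX_TOKENS = 16384
LOCAL_TIER_CEILINGS = {"small": 1024, "medium": 4096, "large": 8192}  # assumed tier names

def max_tokens_for(tier: str, is_cloud: bool) -> int:
    """Cloud models get one flat ceiling; local tiers keep their tight limits."""
    if is_cloud:
        return CLOUD_MAX_TOKENS
    return LOCAL_TIER_CEILINGS.get(tier, 1024)

def sanitize(text: str, is_cloud: bool, max_sentences: int = 6) -> str:
    """Skip sentence truncation for cloud responses; clamp local ones."""
    if is_cloud:
        return text  # user's own API key: let the model speak freely
    sentences = text.split(". ")
    return ". ".join(sentences[:max_sentences])
```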
patch Desktop Shell

Bump memory extractor MaxTokens from 256 to 512

Gemini 2.5 Pro tends to be verbose in its JSON responses, causing truncation at 256 tokens, which breaks JSON parsing. 512 gives enough headroom for the memory extraction response without significant cost.
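The failure mode is easy to reproduce: cutting a JSON response mid-stream leaves an unparseable document, which is what a too-low MaxTokens cap effectively does. The payload below is invented for illustration, not the extractor's real schema.

```python
import json

# A complete extractor-style response parses fine; any truncated prefix does not.
full = json.dumps({"memories": ["User prefers dark mode", "User lives in Oslo"]})
truncated = full[: len(full) // 2]  # simulate the token-cap cutoff

try:
    json.loads(truncated)
    parsed = True
except json.JSONDecodeError:
    parsed = False
```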
