Fix RecurrencePattern deserialization crash for missing type discriminator
Task #3 had a recurrence.pattern JSON object without a "type" field,
causing NotSupportedException with [JsonPolymorphic]. Replaced the
attribute-based approach with a custom RecurrencePatternConverter that
handles missing/unknown discriminators gracefully (defaults to daily).
This matches the pattern already used by RecurrenceEnd and is resilient
to data written by older versions or external sync sources.
Also added RustWeekdayConverter.TryParse for use by the converter.
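The fallback behaviour can be sketched as follows (in Python for illustration; the actual code is a C# System.Text.Json converter, and the set of pattern types is an assumption):

```python
# Sketch of the graceful discriminator fallback described above.
# PATTERN_TYPES is a hypothetical set; the real converter dispatches
# to concrete RecurrencePattern subtypes.
PATTERN_TYPES = {"daily", "weekly", "monthly", "yearly"}

def parse_recurrence_pattern(obj: dict) -> dict:
    """Read the 'type' discriminator; default to 'daily' when it is
    missing or unknown, instead of throwing the way the attribute-based
    [JsonPolymorphic] approach did."""
    kind = obj.get("type")
    if kind not in PATTERN_TYPES:
        kind = "daily"  # resilient default for old or externally synced data
    return {**obj, "type": kind}
```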
Remove artificial token limits and sentence truncation for cloud models
Cloud models use the user's own API key — they should speak freely.
The tier-based token ceilings (1024/4096/8192) and sentence-count
truncation were designed for local models that can't self-regulate.
Changes:
- CloudMaxTokensFor now returns a flat 16384 regardless of tier
- Sanitize() accepts isCloud flag: skips sentence truncation and
markdown stripping for cloud responses
- Local model path unchanged (still gets tight limits + cleanup)
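The token-ceiling split can be sketched like this (Python for illustration; the tier names and the lookup shape are assumptions, only the 1024/4096/8192 and 16384 values come from the change description):

```python
# Sketch of the cloud-vs-local ceiling logic described above.
LOCAL_TIER_CEILINGS = {"small": 1024, "medium": 4096, "large": 8192}
CLOUD_MAX_TOKENS = 16384  # flat ceiling: cloud models bill the user's own key

def max_tokens_for(tier: str, is_cloud: bool) -> int:
    if is_cloud:
        return CLOUD_MAX_TOKENS        # no tier-based throttling for cloud
    return LOCAL_TIER_CEILINGS.get(tier, 1024)  # local path unchanged
```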
Bump memory extractor MaxTokens from 256 to 512
Gemini 2.5 Pro tends to be verbose in its JSON responses, causing
truncation at 256 tokens, which breaks JSON parsing. 512 gives enough
headroom for the memory extraction response without significant cost.
Fix embedded dataset extraction — drill through PageDocument wrapper
The note entity JSON nests blocks at content.content (PageDocument has
{ "type": "doc", "content": [...blocks...] }), not at content directly.
The previous fix checked content.ValueKind == Array, but the value was
an Object (the PageDocument wrapper), so it fell through to the else
branch and returned without finding any dataset references.
Now drills content → content.content to find the block array.
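The drill-through can be sketched as follows (Python on already-parsed JSON for illustration; the real code walks JsonElement values in C#):

```python
# Sketch of the content -> content.content drill described above.
def extract_blocks(entity: dict) -> list:
    """'content' holds a PageDocument object ({"type": "doc",
    "content": [...blocks...]}), so the block array sits one level
    deeper than the old Array check assumed."""
    content = entity.get("content")
    if isinstance(content, list):      # legacy shape: blocks stored directly
        return content
    if isinstance(content, dict):      # PageDocument wrapper
        inner = content.get("content")
        if isinstance(inner, list):
            return inner
    return []                          # nothing recognizable: no blocks
```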
Fix Duncan not seeing embedded datasets in notes pages
AppendEmbeddedDatasetContentAsync expected note content to be a JSON
string and scanned for "dataset_id" via text search. Content is actually
a JSON array of typed block objects, so the method bailed at the
type check and never found any dataset references.
Rewritten to:
- Parse content as a block array, iterating table/chart blocks
- Extract dataset_id (Data plugin datasets) and provider_plugin_id +
provider_query_key (cross-plugin queries from Tasks, Contacts, etc.)
- Query via IPluginDataSourceProvider using the correct provider for
each backing mode
- Fetch first 20 rows and inject columns + data into Duncan's context
- Legacy string fallback preserved for older content formats
- Capped at 3 embedded data sources per note to stay within token limits
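The scan over the block array can be sketched like this (Python for illustration; where the reference fields live on a block, here an "attrs" object, is an assumption, and provider querying plus the 20-row fetch are elided):

```python
# Sketch of the block scan and the 3-source cap described above.
MAX_EMBEDDED_SOURCES = 3  # token-budget cap from the change description

def find_dataset_refs(blocks: list) -> list:
    refs = []
    for block in blocks:
        if block.get("type") not in ("table", "chart"):
            continue  # only table/chart blocks can embed data sources
        attrs = block.get("attrs", {})  # field location is an assumption
        if "dataset_id" in attrs:
            # Data plugin dataset
            refs.append(("dataset", attrs["dataset_id"]))
        elif "provider_plugin_id" in attrs and "provider_query_key" in attrs:
            # cross-plugin query (Tasks, Contacts, etc.)
            refs.append(("provider", attrs["provider_plugin_id"],
                         attrs["provider_query_key"]))
        if len(refs) >= MAX_EMBEDDED_SOURCES:
            break
    return refs
```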