Add DatasetInsightRequestMessage, bump SDK to 1.53.0
New SDK message type for dataset insight generation. The Data plugin
sends this message with column metadata and sample rows when the user
clicks "Generate Insights". The shell's DatasetInsightOrchestrator
subscribes and runs AI analysis.
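The message schema itself isn't shown in these notes; as a rough illustration of a request carrying column metadata plus sample rows, a sketch follows (in Python rather than the SDK's own language, and every field name here is an assumption):

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical shape of a dataset-insight request; field names are
# assumptions for illustration, not the real SDK contract.
@dataclass
class DatasetInsightRequest:
    dataset_id: str
    columns: list[dict[str, str]]  # e.g. [{"name": "revenue", "type": "number"}]
    sample_rows: list[dict[str, Any]] = field(default_factory=list)

req = DatasetInsightRequest(
    dataset_id="ds-123",
    columns=[{"name": "region", "type": "string"},
             {"name": "revenue", "type": "number"}],
    sample_rows=[{"region": "EU", "revenue": 1200}],
)
```

An orchestrator on the shell side would receive this payload and hand the columns and sample rows to the AI analysis step.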
Enhance AI context: email/web clips, embedded datasets, history tab fix
Add email_message and web_clip to EntityTypeMap so the AI tray sends
context for those entity types. For notes containing embedded
datasets/tables, parse dataset_id references from content and fetch
actual row data via IPluginDataSourceProvider. Add fallback entity
fetching through DataSourceProvider for unmapped types like dataset_row.
Fix the history tab: clicking a previous conversation now switches to the
chat view, achieved by subscribing to SelectedTabIndex PropertyChanged in
the code-behind.
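The dataset-reference parsing step above could look roughly like this. The embed syntax and helper name are assumptions for illustration; the real plugin may store embeds differently:

```python
import re

# Assumed embed syntax: note content references datasets as dataset_id="<id>".
# The actual storage format is a guess made for this sketch.
DATASET_REF = re.compile(r'dataset_id="([0-9a-f\-]+)"')

def extract_dataset_ids(note_content: str) -> list[str]:
    """Return unique dataset ids in first-seen order."""
    seen: list[str] = []
    for ds_id in DATASET_REF.findall(note_content):
        if ds_id not in seen:
            seen.append(ds_id)
    return seen

content = 'Report: <embed dataset_id="a1b2-c3d4"/> and <embed dataset_id="a1b2-c3d4"/>'
extract_dataset_ids(content)  # → ["a1b2-c3d4"]
```

Each extracted id would then be resolved to actual row data through IPluginDataSourceProvider before being added to the AI context.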
Fetch full entity data for active context injection in AI chat
When the user views an item and chats with the AI, cloud models now receive
the full entity JSON (fetched via SDK Read) instead of just the title. This
gives the AI complete knowledge of the note body, task description, contact
details, etc. for the item the user is looking at.
Entity data is fetched asynchronously on active-item change via an SdkMessage
Read, serialized as indented JSON, and capped at 8K characters to avoid prompt bloat.
Local models still get the short "Currently viewing: Title (type)" summary
since their context windows are too small for full entity payloads.
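Both behaviors reduce to a small routing-and-truncation step. A minimal sketch, assuming the 8K cap and summary wording from the notes above (the helper name and entity fields are made up):

```python
import json

MAX_ENTITY_CHARS = 8_000  # cap stated in the changelog entry

def build_entity_context(entity: dict, is_cloud_model: bool) -> str:
    """Full indented JSON for cloud models; short one-line summary for local ones."""
    if not is_cloud_model:
        return f'Currently viewing: {entity.get("title", "?")} ({entity.get("type", "?")})'
    payload = json.dumps(entity, indent=2)
    return payload[:MAX_ENTITY_CHARS]  # truncate rather than blow the prompt budget

entity = {"type": "note", "title": "Q3 plan", "body": "x" * 10_000}
build_entity_context(entity, is_cloud_model=False)  # → 'Currently viewing: Q3 plan (note)'
```

Truncating at a fixed character count is a blunt instrument; a real implementation might prefer dropping whole fields, but the notes only specify a character cap.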
Add tabbed AI panel, conversation persistence, token limits & active context
Tabbed AI Panel: Split the single-panel AI tray into Chat, Intents, and History
tabs. Chat shows free-form conversation, Intents shows intent suggestions and
content suggestion cards, History shows browseable past sessions.
Conversation Persistence: New AiConversationStore saves chat sessions to disk
(ai-conversations.json) with debounced writes. Sessions are auto-titled from the
first user message, capped at 100 with oldest auto-pruned.
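The auto-titling and pruning rules can be sketched as follows (the debounced disk write is omitted; the 40-character title cutoff and the session shape are assumptions, only the 100-session cap comes from the notes):

```python
MAX_SESSIONS = 100  # cap stated in the changelog entry

def auto_title(first_user_message: str, max_len: int = 40) -> str:
    """Derive a session title from the first user message; cutoff length is an assumption."""
    title = " ".join(first_user_message.split())
    return title if len(title) <= max_len else title[:max_len].rstrip() + "…"

def prune(sessions: list[dict]) -> list[dict]:
    """Keep only the newest MAX_SESSIONS sessions, dropping the oldest first."""
    if len(sessions) <= MAX_SESSIONS:
        return sessions
    return sorted(sessions, key=lambda s: s["updated_at"])[-MAX_SESSIONS:]
```

With debounced writes layered on top, repeated edits within a short window collapse into a single save of the pruned session list.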
Token Tracking: Each AI provider now declares ContextWindowTokens per model. The
chat tracks estimated token usage against the active model's context window. At
80% capacity, a warning banner suggests starting a new chat.
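The 80% warning check is a one-liner once a token estimate exists. The ~4-characters-per-token heuristic below is an assumption for the sketch; the notes don't say how the estimate is computed:

```python
WARN_RATIO = 0.8  # warning threshold stated in the changelog entry

def estimate_tokens(text: str) -> int:
    """Rough heuristic (~4 chars per token); the real estimator is unspecified."""
    return max(1, len(text) // 4)

def should_warn(used_tokens: int, context_window_tokens: int) -> bool:
    # A context window of 0 means "unknown", so never warn in that case.
    return context_window_tokens > 0 and used_tokens >= WARN_RATIO * context_window_tokens

should_warn(estimate_tokens("x" * 26_000), 8_000)  # 6500 tokens vs 6400 threshold → True
```

Guarding against a zero context window matters because, per the SDK entry below in these notes, ContextWindowTokens defaults to 0 for providers that haven't declared a value.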
Active Context Injection: The AI system prompt now includes what the user is
currently viewing (active plugin item from InfoPanelService) for more relevant
responses. Cloud models get full detail fields, local models get a short summary.
Split ViewModel: AiSuggestionTrayViewModel refactored into partial files — main
orchestrator, Chat (persistence + tokens), Intents (suggestion handlers), and
History (session browsing/deletion).
Version bump: Desktop 1.51.1 → 1.52.0
Add ContextWindowTokens to AiModelInfo, bump SDK to 1.52.0
Added an int ContextWindowTokens property to the AiModelInfo record in PrivStack.Sdk
to expose per-model context window sizes. This enables the desktop AI chat tray
to track token usage against model limits and warn users when approaching context
exhaustion. Backward-compatible: defaults to 0 for existing consumers.
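The backward-compatibility story is simply a defaulted field. Sketched in Python rather than as the actual C# record (the model id and its value are hypothetical; only the new field and its 0 default come from the notes):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AiModelInfo:
    # Only an id plus the new field are shown; the record's other members are omitted.
    model_id: str
    context_window_tokens: int = 0  # default keeps pre-1.52.0 call sites working

old_style = AiModelInfo("some-model")   # constructed without the new field
old_style.context_window_tokens  # → 0
```

Consumers that don't pass the new field see 0, which downstream code can treat as "context window unknown".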