Add grouped aggregate SDK types for multi-series chart support
Introduces GroupedAggregateQuery, AggregateSeriesResult, and GroupedAggregateResult
records to the SDK dataset models. Adds AggregateGroupedAsync method to both
IDatasetService and IDataObjectProvider interfaces, enabling plugins to request
multi-series chart data where a group_column produces separate value series sharing
a common x-axis. Bumps SDK version from 1.53.0 to 1.54.0.
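A minimal sketch of the new grouped-aggregate surface. Only the type and method names (GroupedAggregateQuery, AggregateSeriesResult, GroupedAggregateResult, AggregateGroupedAsync) come from this entry; every field, parameter, and signature detail is an illustrative assumption:

```csharp
// Assumed field shapes for the SDK 1.54.0 grouped-aggregate types.
public record GroupedAggregateQuery(
    string XColumn,          // assumed: shared x-axis column
    string ValueColumn,      // assumed: column being aggregated
    string GroupColumn,      // produces one value series per distinct group value
    string AggregateFunction // assumed: e.g. "sum", "avg", "count"
);

public record AggregateSeriesResult(
    string GroupValue,            // series label, one per distinct group value
    IReadOnlyList<double?> Values // y-values aligned to the shared x-axis
);

public record GroupedAggregateResult(
    IReadOnlyList<string> XValues,               // x-axis common to all series
    IReadOnlyList<AggregateSeriesResult> Series);

public interface IDatasetService
{
    // New in SDK 1.54.0; the signature beyond the method name is assumed.
    Task<GroupedAggregateResult> AggregateGroupedAsync(
        GroupedAggregateQuery query,
        CancellationToken cancellationToken = default);
}
```

The same method would be mirrored on IDataObjectProvider, per the entry above.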
Add DatasetInsightOrchestrator for AI-powered dataset analysis
Shell-side orchestrator that receives DatasetInsightRequestMessage from the
Data plugin, calls AiService.CompleteAsync with a structured analysis prompt,
parses the response into sections, and saves the results as Notes pages under a
"Duncan Generated Data Insights" parent page with a table of contents.
New files:
- DatasetInsightOrchestrator.cs: Message handler + AI call + note creation
- DatasetInsightModels.cs: Result/section record types
- InsightPageBuilder.cs: JSON payload builder for Notes page creation
DI wiring in ServiceRegistration + eager activation in MainWindowViewModel.
Desktop version bumped to 1.53.0.
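The message-handling flow described above can be sketched roughly as follows. Beyond the class name, DatasetInsightRequestMessage, and AiService.CompleteAsync, every signature and helper (including the InsightSection type) is an assumption for illustration:

```csharp
public sealed class DatasetInsightOrchestrator
{
    private readonly AiService _aiService;

    public DatasetInsightOrchestrator(AiService aiService) => _aiService = aiService;

    // Invoked when the Data plugin publishes a DatasetInsightRequestMessage.
    public async Task HandleAsync(DatasetInsightRequestMessage message)
    {
        // 1. Build a structured analysis prompt from column metadata + sample rows.
        string prompt = BuildAnalysisPrompt(message);

        // 2. Run the AI completion (exact CompleteAsync signature is assumed).
        string response = await _aiService.CompleteAsync(prompt);

        // 3. Parse the response into titled sections, then save them as Notes
        //    pages under the "Duncan Generated Data Insights" parent page.
        IReadOnlyList<InsightSection> sections = ParseSections(response);
        await SaveInsightPagesAsync(message, sections);
    }

    // Placeholders for the steps the release notes describe.
    private static string BuildAnalysisPrompt(DatasetInsightRequestMessage m)
        => throw new NotImplementedException();
    private static IReadOnlyList<InsightSection> ParseSections(string response)
        => throw new NotImplementedException();
    private Task SaveInsightPagesAsync(
        DatasetInsightRequestMessage m, IReadOnlyList<InsightSection> s)
        => throw new NotImplementedException();
}
```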
Add DatasetInsightRequestMessage, bump SDK to 1.53.0
New SDK message type for dataset insight generation. The Data plugin
sends this message with column metadata and sample rows when the user
clicks "Generate Insights". The shell's DatasetInsightOrchestrator
subscribes and runs AI analysis.
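A sketch of what this message might carry; only the type name DatasetInsightRequestMessage appears in the notes, while the field shapes and the ColumnMetadata helper record are illustrative assumptions:

```csharp
// Assumed helper record: one entry per dataset column.
public record ColumnMetadata(string Name, string DataType);

// Assumed shape of the SDK 1.53.0 message sent on "Generate Insights".
public record DatasetInsightRequestMessage(
    string DatasetName,                    // assumed dataset identifier
    IReadOnlyList<ColumnMetadata> Columns, // column metadata for the prompt
    IReadOnlyList<IReadOnlyDictionary<string, string>> SampleRows);
```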
Add tabbed AI panel, conversation persistence, token limits & active context
Tabbed AI Panel: Split the single-panel AI tray into Chat, Intents, and History
tabs. Chat shows the free-form conversation, Intents shows intent suggestions and
content suggestion cards, and History shows browsable past sessions.
Conversation Persistence: New AiConversationStore saves chat sessions to disk
(ai-conversations.json) with debounced writes. Sessions are auto-titled from the
first user message, capped at 100 with oldest auto-pruned.
Token Tracking: Each AI provider now declares ContextWindowTokens per model. The
chat tracks estimated token usage against the active model's context window. At
80% capacity, a warning banner suggests starting a new chat.
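The 80%-capacity check might look like the sketch below. ContextWindowTokens comes from the SDK entry further down; the estimator and variable names are assumptions:

```csharp
// Guard against window == 0 (models that don't declare a context window,
// which is the backward-compatible default) before computing the threshold.
int window = activeModel.ContextWindowTokens;
int estimated = EstimateTokens(messages); // assumed heuristic, e.g. chars / 4
bool showCapacityWarning = window > 0 && estimated >= window * 8 / 10;
```

Integer arithmetic (`window * 8 / 10`) avoids a float round-trip for the threshold comparison.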
Active Context Injection: The AI system prompt now includes what the user is
currently viewing (active plugin item from InfoPanelService) for more relevant
responses. Cloud models get full detail fields, local models get a short summary.
Split ViewModel: AiSuggestionTrayViewModel refactored into partial files — main
orchestrator, Chat (persistence + tokens), Intents (suggestion handlers), and
History (session browsing/deletion).
Version bump: Desktop 1.51.1 → 1.52.0
Add ContextWindowTokens to AiModelInfo, bump SDK to 1.52.0
Added int ContextWindowTokens property to AiModelInfo record in PrivStack.Sdk
to expose per-model context window sizes. This enables the desktop AI chat tray
to track token usage against model limits and warn users when approaching context
exhaustion. Backward-compatible — defaults to 0 for existing consumers.
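A sketch of the extended record; the other AiModelInfo properties shown are placeholders, since only ContextWindowTokens and its default are stated above:

```csharp
public record AiModelInfo(
    string Id,          // assumed existing property
    string DisplayName) // assumed existing property
{
    // New in SDK 1.52.0; defaults to 0 so existing consumers compile and
    // behave unchanged (the chat tray skips tracking when the value is 0).
    public int ContextWindowTokens { get; init; } = 0;
}
```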