Fix HttpClient timeouts due to broken IPv6 connectivity
Root cause: privstack.io has an AAAA (IPv6) record (2a02:4780:4:99db::1)
alongside its A (IPv4) record. .NET's SocketsHttpHandler uses Happy
Eyeballs (RFC 8305), which tries IPv6 first. On this Mac, IPv6
connectivity to privstack.io is completely broken (curl -6 hangs for 5s+)
while IPv4 works in <200ms. The HttpClient was hanging on the IPv6
attempt for the full Timeout duration (30s) before falling back.
Fix: Use SocketsHttpHandler.ConnectCallback to force IPv4 (AF_INET)
connections on all HttpClient instances that connect to privstack.io.
This bypasses the broken IPv6 path entirely.
Verified: curl -4 returns in 157ms, curl -6 times out at 5s.
Affected services:
- PrivStackApiClient (update checks, plugin registry, auth, OAuth)
- PluginInstallService (online check, plugin downloads)
- RegistryUpdateService (update downloads)
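The ConnectCallback fix described above can be sketched roughly as follows. This is a minimal illustration of the public SocketsHttpHandler API, not the actual PrivStack implementation:

```csharp
using System.Net.Http;
using System.Net.Sockets;

// Sketch: a handler whose ConnectCallback opens an AF_INET (IPv4-only)
// socket, so Happy Eyeballs never attempts the broken IPv6 path.
var handler = new SocketsHttpHandler
{
    ConnectCallback = async (context, cancellationToken) =>
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Stream, ProtocolType.Tcp)
        {
            NoDelay = true
        };
        try
        {
            // ConnectAsync on an AF_INET socket resolves the host name
            // and connects using IPv4 addresses only.
            await socket.ConnectAsync(context.DnsEndPoint, cancellationToken);
            return new NetworkStream(socket, ownsSocket: true);
        }
        catch
        {
            socket.Dispose();
            throw;
        }
    }
};

var client = new HttpClient(handler);
```

A per-host variant (forcing IPv4 only when `context.DnsEndPoint.Host` is privstack.io) would narrow the blast radius, at the cost of a branch in the callback.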
Filter Graph/Backlink data by active plugins and add memory diagnostics
BacklinkService and GraphDataService legacy fallbacks loaded ALL entity
types from hardcoded lists regardless of plugin activation state. When
plugins were disabled, their IGraphDataProvider wasn't available, so every
entity type fell into the legacy path, loading all data into memory and
rendering it in the Graph. Now both services build a set of entity types
from ActivePlugins and filter legacy loading against it.
Added detailed memory diagnostics:
- SystemMetricsService.GetDetailedMemoryDiagnostic() returns GC generation
sizes, native memory estimate, loaded assemblies, thread count, and
active plugin counts
- Dashboard memory card now shows "X GC heap · Y native" with full
breakdown on tooltip hover
- Startup logs a [MemoryDiag] line after deferred services complete,
showing the full breakdown for post-mortem analysis
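A diagnostic like the one described can be assembled from standard runtime APIs. The following is a sketch under that assumption; variable names and the subtraction-based native estimate are illustrative, not the actual SystemMetricsService code:

```csharp
using System;
using System.Diagnostics;

// Sketch: gather the figures the dashboard card and the [MemoryDiag]
// startup line report.
var gcInfo = GC.GetGCMemoryInfo();
long gcHeapBytes = gcInfo.HeapSizeBytes;              // managed (GC) heap
long workingSetBytes = Environment.WorkingSet;        // total process memory
long nativeEstimate = workingSetBytes - gcHeapBytes;  // rough: non-GC remainder
int assemblyCount = AppDomain.CurrentDomain.GetAssemblies().Length;
int threadCount = Process.GetCurrentProcess().Threads.Count;

Console.WriteLine(
    $"[MemoryDiag] {gcHeapBytes / (1024 * 1024)} MB GC heap · " +
    $"{nativeEstimate / (1024 * 1024)} MB native · " +
    $"{assemblyCount} assemblies · {threadCount} threads");
```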
Fix HttpClient timeouts caused by ASP.NET Core framework reference
After the architecture split (ec98aeb), PrivStackApiClient moved from
PrivStack.Desktop into PrivStack.Services, which has a FrameworkReference
to Microsoft.AspNetCore.App (needed for LocalApiServer/Kestrel). When
ASP.NET Core is loaded, bare `new HttpClient()` can pick up a different
default HttpMessageHandler from ASP.NET Core's infrastructure rather
than the standard SocketsHttpHandler, causing outgoing HTTPS connections
to silently fail on macOS (30s timeout despite the server responding
in <400ms from curl).
Fix: Explicitly construct all static HttpClients with a SocketsHttpHandler
to bypass any ASP.NET Core handler injection. Also added ConnectTimeout
(10s) and PooledConnectionLifetime (10min) for healthier connection
management.
Affected services:
- PrivStackApiClient (update checks, plugin registry, auth)
- PluginInstallService (online check, plugin downloads)
- RegistryUpdateService (update downloads)
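The explicit construction looks roughly like this (class and field names are illustrative; the handler settings match those described above):

```csharp
using System;
using System.Net.Http;

internal static class ApiHttp
{
    // Sketch: build the static client on an explicit SocketsHttpHandler so
    // no ambient ASP.NET Core handler infrastructure can be picked up.
    public static readonly HttpClient Client = new HttpClient(new SocketsHttpHandler
    {
        // Fail fast on dead network paths instead of waiting out the
        // full 30s request timeout.
        ConnectTimeout = TimeSpan.FromSeconds(10),
        // Recycle pooled connections periodically so DNS changes and
        // stale sockets are picked up.
        PooledConnectionLifetime = TimeSpan.FromMinutes(10),
    });
}
```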
Skip initialization for disabled plugins and gate AI startup behind AiEnabled
Two bugs caused the app to load everything at startup even when disabled:
1. PluginRegistry initialized ALL discovered plugins (vault unlock, host
creation, InitializeAsync with SDK queries) before checking disabled
state in Phase 3. Now the disabled check runs before Phase 1 — disabled
plugins only get lightweight schema registration (needed for backlinks
and sync data integrity) and skip vault unlock, host creation, and
InitializeAsync entirely.
2. RagIndexService unconditionally loaded the ONNX embedding model and ran
a full RAG index 5 seconds after startup, regardless of AiEnabled
setting. Now gated at two levels: App.axaml.cs only resolves the service
when AiEnabled is true, and the service's auto-init task checks the
setting as defense-in-depth.
With everything disabled, initialized plugins drop from 16 to just the
core set, and the ~800MB embedding model stays unloaded.
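The defense-in-depth gate on the RAG side can be sketched as below. RagIndexService, AiEnabled, and the 5-second delay come from the change description; the settings type and LoadModelAndIndexAsync are hypothetical stand-ins:

```csharp
using System;
using System.Threading.Tasks;

public sealed class AppSettings
{
    public bool AiEnabled { get; set; }
}

public sealed class RagIndexService
{
    private readonly AppSettings _settings;

    public RagIndexService(AppSettings settings) => _settings = settings;

    // Second-level gate: even if the service was resolved, the auto-init
    // task re-checks the setting before touching the ONNX model.
    public Task StartAutoInit() => Task.Run(async () =>
    {
        await Task.Delay(TimeSpan.FromSeconds(5));
        if (!_settings.AiEnabled)
            return; // ~800MB embedding model stays unloaded
        await LoadModelAndIndexAsync();
    });

    private Task LoadModelAndIndexAsync() => Task.CompletedTask; // placeholder
}
```

The first-level gate (App.axaml.cs never resolving the service when AiEnabled is false) means this path is normally never reached; the in-service check only guards against other call sites resolving it directly.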
Fix thread pool starvation blocking HTTP calls during startup
Root cause: DiscoverAndInitialize() (sync) parallelized plugin init
via Task.WhenAll but blocked the calling thread pool thread with
.GetAwaiter().GetResult(). All call sites wrapped this in Task.Run(),
so one thread pool thread was blocked waiting for ~15 async plugin
init tasks that themselves needed thread pool threads to run their
continuations. This starved the pool (default min = CPU count ≈ 8-10),
causing unrelated HttpClient calls (update check, plugin registry) to
time out at 30s despite the server responding in <400ms.
Fix: Switch all call sites from Task.Run(() => sync DiscoverAndInitialize)
to the properly async DiscoverAndInitializeAsync(). Also added the
parallel Task.WhenAll init pattern to the async version (it previously
initialized plugins sequentially) so startup time is not regressed.
The sync DiscoverAndInitialize() is retained for Reinitialize() (called
from the UI thread during workspace switches) but ReinitializeAsync()
now also uses the async path.
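The shape of the fix can be sketched as follows; IPlugin and DiscoverPlugins() are illustrative stand-ins for the real registry types:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public interface IPlugin
{
    Task InitializeAsync();
}

public sealed class PluginRegistry
{
    // Sketch: fully async path. Awaiting Task.WhenAll (instead of calling
    // .GetAwaiter().GetResult() inside Task.Run) returns this thread to
    // the pool, so the ~15 plugin inits can schedule their continuations
    // and unrelated HttpClient calls are never starved.
    public async Task DiscoverAndInitializeAsync()
    {
        IReadOnlyList<IPlugin> plugins = DiscoverPlugins();
        await Task.WhenAll(plugins.Select(p => p.InitializeAsync()));
    }

    private IReadOnlyList<IPlugin> DiscoverPlugins() => new List<IPlugin>(); // stand-in
}
```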