
Changelog

Every improvement, automatically tracked from our commit history.

March 1, 2026
patch Core

Implement two-phase FFI initialization for SQLCipher encryption

Core 1.15.3 | 72d8000e

The database can no longer be opened immediately at init time because SQLCipher requires a password-derived key. This commit restructures the FFI layer into a two-phase initialization:

Phase 1 (privstack_init): If a salt file exists (encrypted DB), opens an in-memory placeholder so the handle exists for non-DB operations. If no salt file exists (first run / legacy), opens unencrypted as before.

Phase 2 (auth_initialize / auth_unlock): Opens the real encrypted database and swaps the Connection inside the shared Arc<Mutex<Connection>>. All stores (entity, event, blob, vault) automatically use the new connection on their next operation since they share the same Arc.

Key changes:

  • PrivStackHandle gains a main_conn field holding the shared connection Arc
  • PrivStackHandle::swap_connection() replaces the inner Connection and re-initializes all store schemas on the new database
  • EntityStore, EventStore, BlobStore gain reinitialize_schema() methods for re-running CREATE TABLE IF NOT EXISTS after a connection swap
  • VaultManager gains reinitialize_vaults() to clear cached Vault instances so they re-create tables on the new connection
  • auth_initialize: generates salt, derives Argon2id key, creates encrypted DB via open_db(), swaps connection, writes salt file, then inits vault
  • auth_unlock: reads salt, derives key, opens encrypted DB, swaps connection, then unlocks vault
  • auth_lock: swaps back to in-memory placeholder for encrypted DBs
  • auth_change_password: rekeys DB with PRAGMA rekey + fresh salt
  • Recovery functions: rekey DB after vault password reset
  • auth_is_initialized: checks salt file existence as primary indicator
  • Salt file: 16-byte random salt at <db_path>.privstack.salt
  • derive_db_key helper: Argon2id -> raw hex key for SQLCipher
patch Core

Clean up remaining DuckDB references in comments

Core 1.15.3 | 5427eacd

Replace stale DuckDB references with SQLite in comments across cloud sync engine, datasets preprocessor/mutations/helpers, and FFI dataset queries. No functional changes.

patch Core

Consolidate 5 database files into 2 in FFI initialization

Core 1.15.3 | a23fd134

Replace 4 separate database opens (vault.db, blobs.db, entities.db, events.db) with a single shared connection to privstack.db. All stores (VaultManager, BlobStore, EntityStore, EventStore) now use open_with_conn() to share one Arc<Mutex<Connection>>. datasets.db remains separate and unchanged.

Updated both init_core() and init_with_plugin_host_builder() cfg-gated init functions. Updated privstack_db_diagnostics() and privstack_compact_databases() to reflect the new file layout: scanning privstack.db + datasets.db instead of the old 5-file set.

patch Core

Remove duckdb from workspace and update FFI file paths

Core 1.15.3 | ae11fc24

Phase 7 (Consolidation, partial):

  • Remove duckdb workspace dependency from Cargo.toml entirely
  • Update all FFI database file extensions: .duckdb → .db
  • Update FFI comments: DuckDB → SQLite throughout
  • Full workspace compiles cleanly with zero duckdb references
  • All tests pass except 2 pre-existing P2P relay timing tests (dht_sync_code_bidirectional, multi_entity_divergence_real_p2p), which are network-timing-sensitive and unrelated to the storage migration

Remaining Phase 7 work (separate commit):

  • Consolidate 5 separate .db files into 2 (privstack.db + datasets.db) with shared connections; this requires reworking the FFI startup flow
  • Implement one-time DuckDB→SQLite data migration for existing users
  • Wire up SQLCipher-authenticated startup flow
patch Core

Migrate privstack-datasets from DuckDB to SQLite (privstack-db)

Core 1.15.3 | ac2700b4

Phase 6 of the DuckDB -> SQLite migration. This is the most complex migration because datasets used DuckDB for OLAP features.

Key changes:

  • Replace duckdb dependency with privstack-db + csv crate
  • CSV import: replace DuckDB's read_csv_auto() with Rust-side csv crate parsing with automatic type inference (Integer -> Float -> Text widening)
  • Column introspection: replace information_schema.columns with PRAGMA table_info()
  • ALTER COLUMN SET DATA TYPE: replaced with table rebuild pattern (create new table -> copy with CAST -> drop old -> rename) since SQLite doesn't support ALTER COLUMN
  • DESCRIBE SELECT: replaced with stmt.columns() using rusqlite's column_decltype feature for type introspection
  • ILIKE -> LIKE (SQLite's LIKE is case-insensitive for ASCII by default)
  • BOOLEAN columns stored as INTEGER (0/1) with conversion in read/write
  • Dry-run mutations use SAVEPOINT/ROLLBACK TO instead of DuckDB's explicit BEGIN/ROLLBACK transaction pattern
  • DDL types normalized: VARCHAR -> TEXT, BIGINT -> INTEGER, BOOLEAN -> INTEGER
  • open_datasets_db() removed; replaced by privstack_db::open_db_unencrypted()
  • Added open_with_conn() constructor for external connection management

All 73 tests pass, zero warnings. The privstack-ffi crate (which depends on this crate) compiles cleanly.

