Lorevox · Private AI memoir studio
Local-first · Private by design · In active development

Preserve a life story with care.

Lorevox helps older adults preserve their life stories through guided conversation, strengthening memory, identity, and family connection. A conversational guide named Lori conducts structured life-story interviews, extracting biographical facts from natural speech and organizing them into a living archive that grows into a publishable memoir.

Public GitHub repo · dev@lorevox.com

Lorevox is currently in development and is not a public online app. It is being built for local hosting, with a tightly scoped private narrator universe.

Core model
ARCHIVE → HISTORY → MEMOIR

Lorevox preserves original source material, builds a structured historical layer from what is said, and assembles a memoir draft — never collapsing those layers, never crossing them without human review.

Archive

The preserved source record: transcripts, audio, photos, scans, and session material. Nothing is deleted; everything is timestamped and source-tagged.

History

The structured layer: facts, claims, contradictions, relationships, and timeline events, extracted from the archive and verified by a human reviewer.

Memoir

The narrative draft: assembled with AI assistance, but always treated as editable writing rather than a published claim.

What Lorevox is

  • A memoir and life-story platform centered on guided conversation.
  • A local-first and privacy-conscious system.
  • A way to turn spoken memory into structured history.
  • A writing environment for human-reviewed memoir drafts.

What Lorevox is not

  • Not a public social platform.
  • Not a cloud memory-mining product.
  • Not a medical records system.
  • Not a replacement for the person's own voice and judgment.

Why this direction

Most personal AI tools are built for productivity or conversation, not for preserving a life with dignity. Lorevox starts from the opposite premise: the person is the author, the archive is sacred, and the AI is a careful guide.

Structured knowledge extraction

Turning conversation into structured biography.

When a narrator speaks, Lorevox processes their response through a multi-layer extraction pipeline that produces structured biographical claims with confidence scores, schema-bound field paths, and explicit contradictions — never silently overwritten, never quietly inferred.
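
The shape of one such claim can be sketched as a small data structure. A minimal sketch only: field names like `field_path` and `contradicts` are illustrative assumptions, not the actual Lorevox schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    """One structured biographical claim extracted from a narrator turn.
    Field names here are illustrative, not the real Lorevox schema."""
    field_path: str                    # schema-bound path, e.g. "narrator.birth.place"
    value: str                         # the extracted value
    confidence: float                  # extractor confidence in [0, 1]
    source_turn_id: str                # which archived turn this came from
    contradicts: Optional[str] = None  # id of an earlier claim it conflicts with

# A contradiction is recorded explicitly, never silently resolved:
earlier = Claim("narrator.birth.place", "Toledo, Ohio", 0.92, "turn-014")
later = Claim("narrator.birth.place", "Columbus, Ohio", 0.61, "turn-233",
              contradicts="claim-014-birthplace")
```

Keeping the conflicting claim alongside its predecessor, rather than overwriting it, is what lets a human reviewer adjudicate later.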

LLM Extraction

A local language model parses conversational responses into structured field-value pairs with confidence scores. No external API calls, ever.

Multi-Stage Validation

Extracted claims pass through field-path validation, relation allowlists, confidence floors, and negation guards before reaching the proposal layer.
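
Those guard stages could be chained as simple predicates. The allowlist contents, threshold, and negation cues below are assumptions for illustration, not Lorevox's actual configuration.

```python
ALLOWED_FIELDS = {"narrator.birth.place", "narrator.spouse.name"}  # assumed allowlist
ALLOWED_RELATIONS = {"spouse", "child", "sibling", "parent"}       # assumed allowlist
CONFIDENCE_FLOOR = 0.5                                             # assumed threshold
NEGATION_CUES = ("never", "didn't", "wasn't", "not ")              # assumed cues

def passes_guards(field_path, relation, confidence, source_text):
    """Return (ok, reason): a claim must clear every guard to reach proposals."""
    if field_path not in ALLOWED_FIELDS:
        return False, "unknown field path"
    if relation is not None and relation not in ALLOWED_RELATIONS:
        return False, "relation not allowlisted"
    if confidence < CONFIDENCE_FLOOR:
        return False, "below confidence floor"
    # Crude substring check; a real negation guard would be linguistic, not lexical.
    if any(cue in source_text.lower() for cue in NEGATION_CUES):
        return False, "negation guard tripped"
    return True, "ok"
```

A claim like `passes_guards("narrator.birth.place", None, 0.9, "I was born in Toledo")` clears every stage, while "I was never in Toledo" is stopped by the negation guard rather than quietly inverted.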

Compound Entities

Narrators naturally mention multiple people, places, and events in a single response. The pipeline disambiguates and routes each claim to the right entity.
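
One simple way to route claims from a compound response is to key each claim on the entity it mentions, as in this sketch. The exact-match resolver and claim tuples are hypothetical; a real disambiguator would handle nicknames and partial names.

```python
def route_claims(claims, known_entities):
    """Group (entity_mention, field, value) claims under known entities.
    Unmatched mentions are set aside for human review rather than guessed."""
    routed, unresolved = {}, []
    for mention, field_name, value in claims:
        match = next((e for e in known_entities
                      if mention.lower() == e.lower()), None)
        if match is None:
            unresolved.append((mention, field_name, value))
        else:
            routed.setdefault(match, []).append((field_name, value))
    return routed, unresolved

# A single response mentioning several people splits cleanly:
claims = [("Janice", "occupation", "teacher"),
          ("Kent", "birth_year", "1962"),
          ("Aunt Ruth", "hometown", "Dayton")]
routed, unresolved = route_claims(claims, known_entities=["Chris", "Kent", "Janice"])
```

Setting aside "Aunt Ruth" rather than forcing a match mirrors the pipeline's rule that nothing is quietly inferred.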

Evaluation

A 104-case evaluation suite benchmarks extraction accuracy across three real older-adult narrators, covering single, compound, and narrative response shapes.

What gets captured

Voice, video, and synchronized time — all anchored together.

Every session preserves more than words. Voice, video, and synchronized timestamps stay aligned so a moment in a life story isn't just text — it's something you can return to. A grandchild who never met the narrator can click an extracted fact and hear her say it, see her face when she said it, feel the pause before the word.

⦿ Audio archive

Every session, archived

Per-turn audio captured locally and kept beside the transcript. Two-sided text + audio for every conversation, in the narrator's own voice. Operator can export the full archive as a single zip at any time.

▶ Video moments

Curated video, on the narrator's terms

Audio is the default. Video is opt-in per session — for moments the family wants to preserve visually. Like audio, every frame stays on the narrator's machine. Family decides what gets kept.

⏱ Shared-clock timestamps

Anchored in time

Audio, video, transcript, and facial-expression signal all share the same capture clock. A weight in the voice, a pause before a memory, a softening of expression — anchored to the exact words that produced them. Memoir becomes multimodal.
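
Because every stream is stamped against the same capture clock, finding the audio and video around an extracted fact is a single offset calculation. The helper below is an illustrative sketch, not Lorevox's actual API.

```python
def moment_window(fact_ts: float, session_start: float, pad_s: float = 2.0):
    """Translate a fact's shared-clock timestamp into a playback window.
    All streams share one capture clock, so one offset locates the same
    moment in audio, video, transcript, and expression signal alike."""
    offset = fact_ts - session_start
    return max(0.0, offset - pad_s), offset + pad_s

# A fact captured 754.2 s into the session maps to a ~4 s window in every stream:
start, end = moment_window(fact_ts=1000754.2, session_start=1000000.0)
```

With per-stream clocks this would instead require drift correction between recordings; the shared clock is what makes the anchoring trivial.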

Technology

Local-first, private by design.

FastAPI backend with local LLM inference (GPU-accelerated), SQLite storage, browser-based WebRTC audio capture, and on-device facial signal processing. Nothing leaves the device. No external API for any modality, including facial recognition. Audio is processed locally and never transmitted.

Speech & Transcription

Whisper variants run locally for transcription. Web Speech API for browser-side capture. The narrator's voice never reaches a hosted service.

Language Model

Llama 3.1 8B (4-bit quantized) on a local GPU. Hermes 3 / Qwen swappable as hardware advances. The fusion contract stays stable; only the upstream extractor changes.
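
One way to keep the fusion contract stable while extractors change is a minimal interface, sketched below. The `Extractor` protocol, method signature, and placeholder classes are assumptions for illustration, not Lorevox's actual code.

```python
from typing import Protocol

class Extractor(Protocol):
    """Anything that turns a narrator turn into (field, value, confidence)
    triples satisfies the contract; fusion never sees which model ran."""
    def extract(self, turn_text: str) -> list[tuple[str, str, float]]: ...

class LlamaExtractor:
    def extract(self, turn_text):
        # Placeholder: a real build would call the local quantized model here.
        return [("narrator.birth.place", "Toledo", 0.9)]

def fuse(extractor: Extractor, turn_text: str):
    """Downstream fusion depends only on the contract, so Llama, Hermes,
    or Qwen can be swapped in without touching anything below this line."""
    return [c for c in extractor.extract(turn_text) if c[2] >= 0.5]
```

Swapping models then means writing one new class, not reworking the pipeline.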

Facial & Acoustic Signal

MediaPipe FaceMesh in the browser; only derived affect labels (steady / engaged / reflective / moved) leave the camera-preview boundary. No video, no landmarks, no raw vectors.

Storage & Export

SQLite + filesystem on the narrator's machine. Per-session zip export, two-sided text transcripts, per-turn audio archive. Family controls the data.
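
A per-session export can be as simple as walking the session directory into a zip on the narrator's own machine. A minimal sketch, assuming a flat session-directory layout; the real export format is not specified here.

```python
import zipfile
from pathlib import Path

def export_session(session_dir: str, out_zip: str) -> int:
    """Zip a session's transcript, per-turn audio, and metadata in place.
    Returns the number of files written. Nothing leaves the machine."""
    root = Path(session_dir)
    count = 0
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(root))  # keep relative layout
                count += 1
    return count
```

Because the archive is plain files plus SQLite, the export needs no server round-trip and the family can open the result with any unzip tool.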

Speech ➜ local Whisper
LLM ➜ local Llama 3.1
Facial signal ➜ local MediaPipe
Acoustic ➜ local librosa
TTS ➜ local Coqui
Zero hosted APIs

Lab to gold · By deliberate decision

Hornelore is the crucible. Lorevox is the gold.

Every feature is exercised against three real older-adult narrators in the family-locked Hornelore R&D fork before being considered for promotion to the public Lorevox product. The relationship is one-way and deliberate: features move only by promotion, after they prove themselves with real narrators — never by file-parity backport.

R&D crucible

Hornelore

The family-locked private build. Three real older-adult narrators (Chris, Kent, Janice). Heavy heritage language: coin, metal, stone, runic border. Where every feature meets actual aging-parent use before anything moves forward.

  • Closed narrator universe — no add/delete
  • Pre-seeded family identity templates
  • Bug Panel + UI Health Check harness
  • Photo intake, document archive, audio archive
  • Adaptive silence ladder + WO-10C cognitive support mode

Distilled product

Lorevox

The public-facing memory archive and memoir platform. Local-first, private, careful, human-authored. Inherits only what earns the move — the surviving capability gets generalized for arbitrary narrators and the family-specific scaffolding stays behind.

  • Generalized narrator universe
  • Identity-first onboarding for any older adult
  • Three-pass interview model (seed · spine · scenes)
  • Four-layer truth pipeline (shadow → propose → review → promote)
  • Cognitive Support Model — dementia-safe pacing as a first-class behavior
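
The four-layer truth pipeline above can be sketched as a tiny state machine. The stage names come from the pipeline itself; the `advance` helper and its approval rule are illustrative assumptions.

```python
STAGES = ("shadow", "propose", "review", "promote")  # stage names from the pipeline

def advance(claim_stage: str, human_approved: bool) -> str:
    """A claim moves one stage at a time. The review step is the gate:
    nothing reaches 'promote' without explicit human approval."""
    if claim_stage == "review" and not human_approved:
        return "review"  # parked until a reviewer acts
    i = STAGES.index(claim_stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]
```

Modeling promotion as the only human-gated transition keeps automated extraction fast while preserving the rule that the history layer is human-verified.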

Lab → Gold · By deliberate decision · Never by file parity

Why I'm building Lorevox

Occupational therapy shaped this project.

Lorevox is being developed by Christopher Horne, OTR/L. After 40 years in pediatrics, school-based occupational therapy, and family collaboration, I'm retiring January 1, 2026 — and starting Lorevox full-time. The values behind it are closely aligned with OT: listening carefully, meeting the person where they are, and treating partial progress as real.

I'm also testing Lorevox directly with my own parents — both 86, with memories starting to slip in different ways. My dad tells long, meandering stories with sudden cynical humor. My mom rarely initiates but with the right cue opens into vivid, passionate stories. They are, in a real sense, the people Lorevox is being built for.

Collaborate

Feedback, ideas, and thoughtful collaboration are welcome.

Lorevox is in active development. If you work in memoir design, privacy-first AI, family history, occupational therapy, life review with older adults, or open-source local AI — I'd be glad to talk.