The continuity layer for AI agents.
Alice helps agents remember what matters, resume interrupted work, explain why something is true, and improve when corrected.
v0.5.1 is the current pre-1.0 public release.
This working tree also contains the Alice vNext public-preview seed described in
docs/vnext/README.md. The vNext preview uses the pre-release tag
v0.5.1-vnext-preview; v0.5.1 remains the current stable pre-1.0 public release.
Most assistants are still good only in the moment. They can answer the current prompt, but they struggle to preserve decisions, track open loops, recover context across sessions, and stay aligned after memory corrections.
Alice fixes that.
It provides a local-first memory and continuity engine for capture, recall, resumption, open-loop tracking, and correction-aware, trust-aware memory, so you do not have to rebuild context from scratch every time work resumes.
Bring your own models, keep one continuity layer.
Works across local, self-hosted, enterprise, and external-agent workflows via CLI, MCP, provider runtime, OpenClaw import, and Hermes integration.
Alice vNext is the next release candidate for the true second-brain product. It is organized into three layers:
- Alice Core: local-first storage, provenance, policy, event logging, revisions, graph objects, sources, and connector evidence.
- Alice Brain: user-facing second-brain workflows such as daily briefs, weekly syntheses, context packs, contradiction reports, project updates, open loops, and reviewable artifacts.
- Alice Agent Memory: CLI, API, and MCP surfaces that let agents capture, retrieve, resume, explain, generate context, propose memory, and trigger governed workflows without owning the memory store.
The vNext preview currently includes:

- deterministic source capture, retrieval/context packs, and queue/artifact workflows
- daily and weekly brain artifacts, plus connection, contradiction, project, and open-loop workflows
- model-backed, source-grounded synthesis with human artifact quality ratings, deterministic-vs-model comparison controls, and synthetic evals
- live local capture connectors for Telegram, local folders/Obsidian notes, browser clips, and Hermes/OpenClaw-style agent outputs
- dedicated connector settings/state storage, encrypted local secret references, and deterministic document connector payload ingestion
- agent identity/policy auditing and policy telemetry
- a governed local scheduler with due scans and a local scheduler daemon
- dogfooding readiness telemetry, doctor/readiness checks, and capture-to-brief traceability
- a live/fixture-backed /vnext operator workspace with source review, memory review, artifact review, project, open-loop, scheduler, connector, and doctor controls
Alice is a local-first memory and continuity layer for humans and agents. It lets agents like Hermes, OpenClaw, or your own custom agents request scoped context, submit outputs, propose memories, create open loops, and generate reviewable artifacts without giving them direct write access to trusted memory. The /vnext workspace is the operator console for review, audit, configuration, and troubleshooting.
Alice is not a notes app, an Obsidian clone, a chatbot with memory, hosted SaaS, or automatic memory autopilot. The public alpha is a technical local alpha for design partners and agent builders.
Fast path:
```shell
git clone https://github.com/samrusani/AliceBot.git
cd AliceBot
make setup
make migrate
make doctor
make dev
```

Then open:

```
http://localhost:3000/vnext
```
Load safe synthetic demo data and run the public alpha readiness gate:
```shell
alicebot vnext demo load --reset
alicebot vnext smoke agent-integration-pack
alicebot vnext alpha check
```

Agent integration starts with docs/alpha/agent-integration.md, docs/alpha/mcp-tools.md, docs/alpha/hermes-skill.md, and docs/alpha/openclaw-skill.md. Security, privacy, and limitations are documented in docs/alpha/security-and-privacy.md and docs/alpha/known-limitations.md.
Headless Ubuntu/Hermes dogfood starts with docs/alpha/headless-ubuntu-install.md and docs/alpha/hermes-dogfood-ubuntu.md. The secure default is localhost binding plus SSH tunneling:
```shell
ssh -L 3000:127.0.0.1:3000 -L 8000:127.0.0.1:8000 user@server
```

Start with:
- Public alpha docs
- Public alpha quickstart
- First-run checklist
- Headless Ubuntu install
- Hermes dogfood on Ubuntu
- Agent integration pack
- vNext overview
- vNext quickstart
- vNext architecture
- vNext local runtime
- vNext security and privacy
- Example ALICE.md
- vNext demo video script
- vNext release checklist
- vNext preview release notes
- vNext preview tag plan
- Agentic control plane CTO summary
- Local runtime CTO summary
- Model-backed intelligence CTO summary
- Live capture connectors CTO summary
- Dogfood hardening CTO summary
- Live-backed operator console CTO summary
- Public alpha packaging CTO summary
- Headless Ubuntu packaging CTO summary
- Dogfood daily checklist
Completed baseline included in this pre-1.0 public release:
- Phase 9 continuity core and deterministic local CLI/MCP/importer seams
- Phase 10 hosted/product layer
- Phase 11 provider runtime, adapters, and model packs
- Bridge `B1` through `B4`: provider contract, auto-capture flow, review/explainability flow, and bridge docs/smoke validation
- Phase 12 retrieval quality stack:
- hybrid retrieval and reranking
- explicit memory mutation operations
- contradiction detection and trust calibration
- public eval harness and baseline reports
- task-adaptive briefing
- Phase 13 adoption surfaces:
- one-call continuity across API, CLI, and MCP
- Alice Lite one-command local profile
- memory hygiene visibility
- conversation/thread health visibility
- Phase 14 platform surfaces:
- provider/runtime portability across OpenAI-compatible, Ollama, llama.cpp, vLLM, and Azure-backed paths
- first-party `llama`, `qwen`, `gemma`, and `gpt-oss` model packs with provider-aware bindings
- Hermes and OpenClaw reference integrations plus generic Python/TypeScript examples
- design-partner launch/admin surface and launch evidence artifacts
- `HF-001` logging safety hardening:
  - stdout-by-default local/Lite logging
  - access logs disabled by default in local/Lite profile
  - bounded opt-in file logging with rotation
Historical planning/control artifacts remain available in: docs/archive/planning/2026-04-08-context-compaction/README.md
AI assistants still fail in the same places:
- important decisions disappear into old chats
- interrupted work is hard to resume
- blockers and waiting-fors get lost
- memory corrections do not reliably improve future behavior
- "memory" often means vague summaries with unclear provenance
Alice is built to solve those problems directly.
Use Alice if you want your agents or workflows to:
- remember decisions, commitments, and context across sessions
- resume work without rereading long threads
- track waiting-fors, blockers, and unresolved follow-ups
- improve deterministically when memory is corrected
- stay portable across CLI, MCP, and imported workflow data
Alice does not treat memory as a pile of chat history or loose summaries. It stores typed continuity objects, revisions, provenance, and open loops so context can be reused operationally.
Most memory tools help you find something. Alice is designed to answer the higher-value questions:
- What did we decide?
- What changed?
- What am I waiting on?
- What should happen next?
Alice supports explicit review, correction, and supersession so future answers improve in a traceable way instead of drifting based on hidden summarization.
Alice does not treat every memory as equally reliable. Memories carry trust classification and promotion eligibility, so agents can search broadly without promoting weak, single-source AI-extracted facts into durable truth by default.
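One way to picture "promotion eligibility" is as a gate between broad search and durable truth. The following sketch is a hypothetical policy written for this README, not Alice's actual rules; the trust labels and thresholds are assumptions.

```python
# Hypothetical trust-gating sketch; labels and rules are illustrative,
# not Alice's actual promotion policy.
def promotion_eligible(trust: str, source_count: int, human_confirmed: bool) -> bool:
    """Decide whether a memory may be promoted into durable truth."""
    if human_confirmed:
        return True                  # explicit human review always suffices
    if trust == "ai_extracted" and source_count < 2:
        return False                 # weak single-source extraction stays searchable but unpromoted
    return trust in {"verified", "corroborated"}

def recall_scope(include_unpromoted: bool) -> str:
    # Agents can still search broadly without promoting anything.
    return "all_memories" if include_unpromoted else "promoted_only"
```

The design point it illustrates: recall breadth and promotion are separate decisions, so an agent can see a weak fact without that fact becoming load-bearing by default.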
Recall, resumption, open-loop review, and explain output all expose a shared explanation model with:
- source facts
- trust posture
- evidence segments
- supersession notes
- timestamps
That makes it easier to audit why an answer appeared, how it was derived, and how corrections changed the explanation chain over time.
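The shared explanation model above can be pictured as one structured record. This sketch mirrors the five listed fields; the key names and nesting are assumptions for illustration, not Alice's wire format.

```python
from datetime import datetime, timezone

# Hypothetical explanation record mirroring the fields listed above;
# not Alice's actual output format.
def build_explanation(answer: str) -> dict:
    return {
        "answer": answer,
        "source_facts": ["dec-42: Ship v0.5 on Friday"],
        "trust_posture": "verified",
        "evidence_segments": [{"source": "meeting notes", "span": "we agreed to ship Friday"}],
        "supersession_notes": [],          # filled in when a correction replaces a fact
        "timestamps": {"derived_at": datetime.now(timezone.utc).isoformat()},
    }

explanation = build_explanation("v0.5 ships on Friday")
```

Because recall, resumption, open-loop review, and explain all emit the same shape, a correction that adds a supersession note changes every downstream surface at once.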
Alice Core runs locally and exposes the same continuity semantics through the CLI and MCP, so you can use it with your own workflows instead of being locked into a closed assistant product.
Alice is now model-flexible. You can switch or standardize model backends across local, self-hosted, enterprise, and external-agent environments without rewriting Alice's continuity, memory, approval, or provenance behavior.
Alice is designed to be a continuity layer, not a closed assistant silo.
It already supports:
- MCP-based integrations
- OpenClaw import and augmentation
- Hermes provider-plus-MCP bridge for always-on continuity
- Hermes external memory provider with lifecycle automation and auto-capture
- Provider runtime abstraction for workspace-scoped model/provider integration
- Local, self-hosted, enterprise, and external-agent deployment paths
- imported workflow data from Markdown and ChatGPT exports
That means you can use Alice as shared continuity infrastructure across providers and frameworks instead of rebuilding memory behavior per runtime.
The current open-source surface includes:
- Alice Core
- deterministic CLI workflows
- MCP server
- trust-aware memory classification and promotion controls
- shared explainability across recall, resume, open-loop review, and explain surfaces
- scheduled archive maintenance, ops status reporting, and failure alerting
- Hermes bridge with provider lifecycle hooks, always-on continuity prefetch, turn auto-capture, policy-based commit modes (`manual`/`assist`/`auto`), and reviewable, explainable candidate-memory flows
- provider runtime abstraction with workspace-scoped provider registration, capability snapshots, OpenAI-compatible base adapter, local Ollama/llama.cpp, self-hosted vLLM, enterprise Azure, model packs, and external-agent integration paths
- provider-aware model-pack bindings and first-party `llama`/`qwen`/`gemma`/`gpt-oss` pack defaults
- hybrid retrieval with persisted retrieval traces and debug visibility
- explicit memory mutation operations with auditability and idempotent replay behavior
- contradiction detection, contradiction-aware ranking penalties, and trust-signal inspection
- public eval harness with fixture catalog and checked-in baseline report support
- task-adaptive briefing for user recall, resume, worker subtask, and agent handoff
- one-call continuity through `POST /v1/continuity/brief`, `alice brief`, and `alice_brief`
- Alice Lite one-command local startup profile
- memory hygiene and thread-health dashboards across API, CLI, and web
- importers for OpenClaw, Markdown, and ChatGPT exports
- OpenClaw adapter and demo path
- generic Python and TypeScript reference agent examples and reproducible demos
- design-partner launch/admin surface and anonymized launch evidence
- stdout-by-default local/Lite logging with bounded opt-in file logging
- evaluation harness and integration docs
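The one-call continuity surface listed above is reachable over HTTP at `POST /v1/continuity/brief`. The sketch below only builds the request so it stays self-contained; the payload keys (`brief_type`, `query`) are assumptions inferred from the CLI flags, and the host/port are illustrative local defaults, not a confirmed API contract.

```python
import json
from urllib import request

# Build a one-call continuity request against the documented endpoint.
# Payload keys mirror the CLI flags (--brief-type, --query) and are
# assumptions, not a confirmed contract; host/port are illustrative.
def build_brief_request(base_url: str, brief_type: str, query: str) -> request.Request:
    body = json.dumps({"brief_type": brief_type, "query": query}).encode()
    return request.Request(
        url=f"{base_url}/v1/continuity/brief",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_brief_request("http://127.0.0.1:8000", "general", "local-first startup path")
# response = request.urlopen(req)  # uncomment against a running local Alice instance
```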
Alice Lite is the lighter local/dev deployment profile. It uses the same continuity semantics and the same one-call continuity surface as the full baseline. It is a deployment profile, not a separate product.
Clone the repo and install the local runtime:
```shell
git clone https://github.com/samrusani/AliceBot.git
cd AliceBot
make setup
```

Start Alice Lite with one command:

```shell
./scripts/alice_lite_up.sh
```

In another terminal, bootstrap the sample workspace flow and request the default one-call continuity result:

```shell
./.venv/bin/python scripts/bootstrap_alice_lite_workspace.py
```

Or stay on the direct local CLI path and use the shipped one-call continuity entrypoint:

```shell
./.venv/bin/python -m alicebot_api brief --brief-type general --query "local-first startup path"
```

Inspect runtime status:

```shell
./.venv/bin/python -m alicebot_api status
```

Capture something new:

```shell
./.venv/bin/python -m alicebot_api capture "Remember that the Q3 board pack is due on Thursday."
```

Inspect why something is in memory:

```shell
./.venv/bin/python -m alicebot_api explain <continuity_object_id>
```

Run the Lite smoke check:

```shell
./.venv/bin/python scripts/run_alice_lite_smoke.py
```

For the full local/dev stack with Redis and MinIO, keep using `./scripts/dev_up.sh` plus the existing `./scripts/load_sample_data.sh` and `APP_RELOAD=false ./scripts/api_dev.sh` flow. `dev_up.sh` validates `.env`, derives compose Postgres credentials from the active env, and then runs migrations.
See the full local setup walkthrough in docs/quickstart/local-setup-and-first-result.md.
Alice exposes a narrow MCP surface for continuity workflows:
- `alice_capture`
- `alice_recall`
- `alice_resume`
- `alice_open_loops`
- `alice_recent_decisions`
- `alice_recent_changes`
- `alice_memory_review`
- `alice_memory_correct`
- `alice_context_pack`
This makes it straightforward to plug Alice into MCP-capable assistants and development environments without changing the underlying continuity model.
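To make the MCP surface concrete, here is a sketch of the JSON-RPC `tools/call` message an MCP client would send to invoke `alice_capture`. The envelope follows the standard MCP tool-call convention; the argument key (`text`) is an illustrative assumption, since Alice's exact tool schemas are defined in its docs.

```python
import json

# Standard MCP tools/call envelope (JSON-RPC 2.0). The tool name comes from
# the list above; the argument keys are illustrative assumptions.
def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = mcp_tool_call(1, "alice_capture", {"text": "Q3 board pack is due Thursday."})
```

Any MCP-capable assistant that can emit this envelope can use the tools above without knowing anything about Alice's underlying continuity model.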
See:
- docs/integrations/hermes-bridge-operator-guide.md
- docs/integrations/hermes-provider-plus-mcp-why.md
- docs/integrations/mcp.md
- docs/integrations/hermes.md
- docs/integrations/hermes-memory-provider.md
- docs/integrations/hermes-skill-pack.md
- docs/integrations/phase11-local-provider-adapters.md
- docs/integrations/phase11-azure-autogen.md
Recommended Hermes architecture is provider plus MCP, with MCP-only as a fallback.
One-command Hermes bridge demo:
```shell
./.venv/bin/python scripts/run_hermes_bridge_demo.py
```

Hermes runtime smoke tests:

```shell
./.venv/bin/python scripts/run_hermes_memory_provider_smoke.py
./.venv/bin/python scripts/run_hermes_mcp_smoke.py
```

If you use Hermes, run provider plus MCP as the recommended mode, add the skill pack for policy guidance, and keep MCP-only available as a fallback.
Alice includes importer paths for existing memory and conversation data so you can upgrade an existing workflow instead of starting from zero.
With the current integration surface, you can:
- import OpenClaw memory into Alice
- normalize imported data into Alice continuity objects
- run recall and resumption against imported work
- add Alice MCP workflows on top of an existing setup
OpenClaw demo:
```shell
./scripts/use_alice_with_openclaw.sh
```

See:
ChatGPT memory is convenient. Alice is structured, explainable, correctable, and portable across agent stacks, with explicit provenance, trust, resumption, and open-loop workflows.
- keep strategic decisions from disappearing into old chats
- resume fundraising, hiring, or product threads quickly
- stay on top of commitments and follow-ups
- preserve client-specific decisions and context
- restart project work without reconstructing the last week
- maintain open loops without building a manual CRM ritual
- add durable continuity to an existing agent stack
- improve recall and resumption without rebuilding your runtime
- keep correction and provenance explicit
Alice is built around a shared continuity core with:
- structured memory revisions
- provenance- and trust-aware recall
- shared explanation chains across recall-derived workflows
- deterministic archive maintenance with ops-visible health summaries
- deterministic resumption briefs
- open-loop objects
- CLI and MCP surfaces on the same semantics
That means the system behaves consistently across local workflows, MCP-connected agents, and imported data sources.
Included in the v0.5.1 release:
- local-first continuity core
- CLI and MCP surfaces
- importer paths (OpenClaw, Markdown, ChatGPT exports)
- provider runtime and model-pack support from Phase 11
- Hermes provider-plus-MCP bridge path with MCP-only fallback
- Phase 12 retrieval, mutation, contradiction/trust, public eval, and task-adaptive briefing surfaces
- Phase 13 one-call continuity, Alice Lite, and hygiene/thread-health visibility surfaces
- Phase 14 provider/runtime portability, first-party model packs, reference integrations, and design-partner launch/admin surfaces
- `HF-001` logging safety and disk-guardrail defaults
Deferred beyond v0.5.1:
- `v1.0.0` compatibility/support guarantees
- managed cloud/SLA commitments
- new integrations/channels beyond already shipped baseline
- vNext Preview
- Public Alpha
- Public Alpha Quickstart
- Public Alpha Agent Integration
- Headless Ubuntu Install
- Hermes Dogfood On Ubuntu
- Public Alpha Known Limitations
- vNext Quickstart
- vNext Architecture
- vNext Local Runtime
- vNext Security and Privacy
- Agentic Control Plane CTO Summary
- Local Runtime CTO Summary
- Model-Backed Intelligence CTO Summary
- Live-Backed Operator Console CTO Summary
- Quickstart
- Architecture
- MCP
- Hermes Guide
- Hermes Memory Provider
- Hermes Skill Pack
- Importers
- OpenClaw Guide
- Examples
Issues, adapters, importers, eval contributions, and integration examples are welcome.
See CONTRIBUTING.md.
If you discover a security issue, follow the process in SECURITY.md.
See LICENSE.