Not lost in chat.
You decide together. AI drifts as context grows.
Graven turns decisions into guardrails.
Let’s not use Redis. LRU in-memory, 5-min TTL. We don’t need the complexity.
Agreed. LRU with a Map-based approach, 5-min TTL.
· · · 130 lines later · · ·
Setting up Redis with ioredis and a connection pool
This happens every session. Every context switch. Every new chat.
Decision graph
An internal engine extracts relationships between decisions and assigns a confidence weight to every edge. The graph returns not just what depends on a decision, but how strongly, scored with semantic similarity, co-occurrence frequency, and explicit references.
Change a decision — the agent sees ranked downstream impact before a single line of code moves.
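The scoring itself is internal to Graven, but the idea is simple to sketch. Assuming hypothetical edge fields and illustrative weights (none of these names or numbers come from the product), a confidence-weighted edge and a ranked-impact query might look like this:

```typescript
// Hypothetical edge shape: all field names and weights are illustrative.
type Edge = {
  from: string;          // decision that was changed
  to: string;            // downstream decision
  similarity: number;    // semantic similarity, 0..1
  coOccurrence: number;  // how often the two appear together, 0..1
  explicitRef: boolean;  // one decision explicitly cites the other
};

// Combine the three signals into a single edge confidence.
// Explicit references dominate; similarity and co-occurrence fill in the rest.
function edgeWeight(e: Edge): number {
  const refBoost = e.explicitRef ? 0.4 : 0;
  return Math.min(1, refBoost + 0.4 * e.similarity + 0.2 * e.coOccurrence);
}

// Return downstream decisions of `changed`, strongest impact first.
function rankedImpact(changed: string, edges: Edge[]): [string, number][] {
  return edges
    .filter((e) => e.from === changed)
    .map((e) => [e.to, edgeWeight(e)] as [string, number])
    .sort((a, b) => b[1] - a[1]);
}

// Example: changing the cache strategy ranks the session store
// (explicitly referenced, highly similar) above the rate limiter.
const edges: Edge[] = [
  { from: "cache-strategy", to: "session-store", similarity: 0.9, coOccurrence: 0.5, explicitRef: true },
  { from: "cache-strategy", to: "rate-limiter", similarity: 0.6, coOccurrence: 0.3, explicitRef: false },
];
const impact = rankedImpact("cache-strategy", edges);
```

A sketch like this is why the agent can see ranked impact rather than a flat dependency list: every edge carries a score, so "what breaks first" falls out of a sort.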
Not in a doc nobody reads. In a system your agent checks before every change.
What changes
The agent works inside the lines your team already set. Decisions, assumptions, constraints — one system.
Two contradicting decisions? Flagged the moment they’re recorded. Not three sprints later.
Your agent queries the exact decision it needs. No fifty-page specs in context.
Blast radius, conflicts, downstream effects — visible before a single line changes.
Agent opens a fresh session and instantly knows every decision, constraint, and where you left off.
Every assumption carries invalidation triggers. When the world changes, the system flags it.
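The conflict flagging above can be sketched in a few lines. Graven's real matching is semantic; this toy version (the `Decision` shape and `findConflicts` helper are hypothetical) just compares decisions recorded on the same topic at write time, which is the point: the check runs the moment a decision is recorded, not three sprints later.

```typescript
// Hypothetical decision record: shape is illustrative, not Graven's schema.
type Decision = { id: string; topic: string; choice: string };

// Flag earlier decisions on the same topic that chose something different.
// Runs at record time, so contradictions surface immediately.
function findConflicts(log: Decision[], incoming: Decision): Decision[] {
  return log.filter(
    (d) => d.topic === incoming.topic && d.choice !== incoming.choice
  );
}

// Example: the team already decided against Redis for caching.
const log: Decision[] = [
  { id: "d1", topic: "caching", choice: "in-memory LRU, 5-min TTL" },
  { id: "d2", topic: "auth", choice: "session cookies" },
];
const incoming: Decision = { id: "d3", topic: "caching", choice: "Redis with ioredis" };
const conflicts = findConflicts(log, incoming);
```

Recording `d3` immediately surfaces `d1`, which is exactly the Redis drift from the chat at the top of the page, caught before any code moves.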
Pricing
Free
$0/mo
Basic memory.
Pro
$39/mo
Intelligent memory.
Team
$149/mo
Shared memory.
Enterprise
Custom
Build on top.