The standard approach to AI memory is a vector store and a similarity search. You embed your data, retrieve the closest matches, stuff them into the context window, and call it memory. That's not memory. That's a library with a bad index.
LOOP is a different bet: what if the constraints of human memory aren't the problem, but the mechanism? We forget things for reasons. We make unexpected connections between things we only partially recall. The limitations aren't bugs in human cognition. They're what makes it work.
LOOP runs three layers. Working memory is deliberately small. It forces prioritization, because a system that can hold everything has no reason to decide what matters. Associative memory links concepts by semantic similarity, temporal proximity, and contextual relevance. Recall spreads through connections. Long-term memory consolidates what survives, building a model of what the agent actually knows versus what it looked up once.
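The first two layers can be sketched in a few dozen lines. This is an illustration of the ideas described above, not LOOP's actual API: the class names, the eviction rule, and the spreading-activation threshold are all assumptions. Long-term consolidation would sit behind these, promoting whatever survives repeated use.

```python
class WorkingMemory:
    """Deliberately small buffer. Adding beyond capacity forces a choice:
    the least relevant item is evicted. (Capacity of 7 is illustrative.)"""
    def __init__(self, capacity=7):
        self.capacity = capacity
        self.items = {}  # key -> relevance score

    def add(self, key, relevance):
        self.items[key] = relevance
        if len(self.items) > self.capacity:
            # forced prioritization: drop whatever matters least
            weakest = min(self.items, key=self.items.get)
            del self.items[weakest]


class AssociativeMemory:
    """Concepts linked by weighted edges (semantic, temporal, contextual).
    Recall spreads through the links, attenuating as it goes."""
    def __init__(self):
        self.edges = {}  # concept -> {neighbor: link strength in (0, 1]}

    def link(self, a, b, strength):
        self.edges.setdefault(a, {})[b] = strength
        self.edges.setdefault(b, {})[a] = strength

    def recall(self, seed, threshold=0.3):
        # spreading activation: follow chains of links until the
        # accumulated activation falls below the recall threshold
        seen, frontier = {seed}, [(seed, 1.0)]
        while frontier:
            node, activation = frontier.pop()
            for neighbor, weight in self.edges.get(node, {}).items():
                if neighbor not in seen and activation * weight >= threshold:
                    seen.add(neighbor)
                    frontier.append((neighbor, activation * weight))
        return seen
```

The interesting property falls out of the multiplication: strongly linked chains stay recallable several hops out, while weak links cut recall off quickly, which is one way "unexpected connections between things we only partially recall" can emerge.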
Bayesian weighting runs through the whole thing. Every piece of information has a relevance score that decays, strengthens, or shifts based on how the agent uses it. Human memory has been solving for useful recall for a few hundred thousand years. LOOP borrows the same constraint.
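The post doesn't give the formula, but one minimal Bayesian reading of "decays, strengthens, or shifts based on use" is a Beta-Bernoulli belief over whether a memory is useful: each use or ignore is an observation, and forgetting discounts old evidence. A sketch under those assumptions (not LOOP's actual weighting):

```python
class Relevance:
    """Beta-Bernoulli belief that a memory is useful.
    Hypothetical sketch; LOOP's actual update rule may differ."""
    def __init__(self, alpha=1.0, beta=1.0):
        # uniform prior: no evidence yet about usefulness
        self.alpha, self.beta = alpha, beta

    def observe(self, useful):
        # Bayesian update: each use (or ignore) is one observation
        if useful:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def decay(self, factor=0.95):
        # forgetting as evidence discounting: old observations
        # count less, and the belief drifts back toward the prior
        self.alpha = 1.0 + (self.alpha - 1.0) * factor
        self.beta = 1.0 + (self.beta - 1.0) * factor

    @property
    def score(self):
        # posterior mean probability that this memory is useful
        return self.alpha / (self.alpha + self.beta)
```

The design point is that decay here isn't deletion: an unused memory slides back toward uncertainty rather than being erased, so a single fresh use can revive it.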
One feature worth calling out: the hypnotize command. Traditional memory is passive. The agent learns things over time as you tell it. Hypnotize is active. You inject a directive directly into long-term memory at maximum importance, and the agent accepts it as ground truth. Behavioral rules, hard limits, identity anchors, critical facts. It doesn't weigh them against other evidence. It just holds them.
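Mechanically, hypnotize amounts to pinning an entry at maximum importance and exempting it from the reweighting that everything else is subject to. A sketch, with all names illustrative rather than LOOP's real interface:

```python
MAX_IMPORTANCE = 1.0

class LongTermStore:
    """Illustrative sketch of the hypnotize idea."""
    def __init__(self):
        self.entries = {}  # fact -> (importance, pinned)

    def learn(self, fact, importance):
        # normal path: learned facts compete and decay like anything else
        self.entries[fact] = (importance, False)

    def hypnotize(self, directive):
        # active injection: maximum importance, pinned as ground truth,
        # never weighed against other evidence, never decays
        self.entries[directive] = (MAX_IMPORTANCE, True)

    def decay_all(self, factor=0.9):
        # ordinary entries lose importance over time; pinned ones hold
        self.entries = {
            fact: (imp if pinned else imp * factor, pinned)
            for fact, (imp, pinned) in self.entries.items()
        }
```

Behavioral rules and hard limits fit this shape because they are exactly the entries you never want the evidence-weighing machinery to erode.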
LOOP is built and open source. A working proof of concept, not a whitepaper. Several of my other projects use it. The question it asks — whether intelligence is something you scale up or something that emerges from the right constraints — is one I keep finding reasons to take seriously.