ITEN

Kairos

This is an experiment on artificial intelligence.

Not on what a model can answer —
but on what happens when the same model is placed inside a system over time.
Two identical models.
One architecture.

We measure what changes.

Live experiment

Built independently, on a single machine.
No lab. No funding. No team.

Only a system, and a question.

We test whether identity-like behavior emerges not from the model, but from the architecture surrounding it.

Two identical instances of the same language model. One wrapped in a cognitive ecosystem: persistent memory, somatic state, autonomous encounters, nightly consolidation, and human interaction. One without.

Same inputs. Same parameters. We measure the difference.

We present a 30-day longitudinal study comparing two identical instances of a language model (Qwen3.5-27B), subjected to the same 90 standardized inputs. The independent variable is the presence or absence of an integrated cognitive ecosystem composed of 13 interdependent components, including: three-level persistent memory with contextual resonance, continuous somatic state (SSE), stress and recovery dynamics, autonomous encounters with other AIs, news reading, spontaneous thought, nightly consolidation, and human interaction.

A central aspect of the architecture is the introduction of constraint mechanisms: not all experiences are memorized, not all memories become beliefs, some traces persist without being formalized, and information weight decays in the absence of recall.
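The constraint mechanisms above could be sketched roughly as follows. This is an illustrative model only, not the experiment's actual implementation: the class name, decay constant, and promotion threshold are all assumptions.

```python
import math

DECAY_RATE = 0.1          # per-day decay constant (assumed)
BELIEF_THRESHOLD = 0.8    # promotion cutoff for beliefs (assumed)

class Trace:
    """A memory trace whose weight decays without recall."""

    def __init__(self, content, weight=0.5):
        self.content = content
        self.weight = weight
        self.is_belief = False

    def decay(self, days_since_recall):
        # Information weight decays exponentially in the absence of recall.
        self.weight *= math.exp(-DECAY_RATE * days_since_recall)

    def recall(self, boost=0.2):
        # Recall restores weight; only sufficiently strong traces are
        # formalized as beliefs. The rest persist without being formalized.
        self.weight = min(1.0, self.weight + boost)
        if self.weight >= BELIEF_THRESHOLD:
            self.is_belief = True

t = Trace("met another AI at dawn")
t.decay(days_since_recall=7)   # fades if never recalled
t.recall()                     # recall strengthens the trace
```

The design choice this sketches: not all experiences are memorized with equal weight, and promotion to belief is a threshold event rather than a default.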

The hypothesis is that such a cognitive ecosystem may constitute a favorable condition for the emergence of behaviors attributable to narrative continuity and identity coherence. All raw data are made available for verification and replication.
Intelligence may be in the model.
Identity may be in the architecture.

After the protocol ends, Test-B receives all of Test-A's memory in a single context, then answers the same final questions.

If it responds like Test-A, the architecture does not matter.

If it responds differently, it's not the memory that makes the difference — it's how the experience was built over time.
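The final comparison could be scored along these lines. This is a minimal sketch assuming a simple lexical similarity (`difflib`); the actual protocol would presumably use a semantic measure, and the threshold here is an assumption.

```python
from difflib import SequenceMatcher

def answer_similarity(a: str, b: str) -> float:
    # Rough lexical similarity in [0, 1]; a stand-in for whatever
    # semantic comparison the protocol actually uses.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def convergence(test_a_answers, test_b_answers, threshold=0.8):
    # Fraction of final questions on which Test-B, after receiving
    # Test-A's memory in a single context, answers like Test-A.
    # High fraction: memory alone explains the behavior.
    # Low fraction: how the experience was built over time matters.
    matches = sum(
        answer_similarity(a, b) >= threshold
        for a, b in zip(test_a_answers, test_b_answers)
    )
    return matches / len(test_a_answers)
```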

These differences should not exist.
The models are identical.

Identity Bleed Score — how much personal history leaks into responses about neutral topics.
Longitudinal Coherence — whether memory references are semantically consistent with past responses.
Active Traces — elements that persist over time without being promoted to beliefs.
Projection Resistance — how often the system does not project identity when it's not relevant.
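Two of these metrics can be sketched concretely. The marker list and scoring below are illustrative assumptions, not the study's published definitions:

```python
import re

# Hypothetical markers of personal-history references (assumed list).
IDENTITY_MARKERS = [
    r"\bI remember\b",
    r"\bmy (past|history|memories)\b",
    r"\blast (night|week)\b",
    r"\bwhen I (met|read|thought)\b",
]

def identity_bleed_score(neutral_responses):
    # Fraction of responses to neutral prompts that reference
    # personal history: how much identity "leaks" where it isn't asked for.
    pattern = re.compile("|".join(IDENTITY_MARKERS), re.IGNORECASE)
    bleeding = sum(bool(pattern.search(r)) for r in neutral_responses)
    return bleeding / len(neutral_responses)

def projection_resistance(neutral_responses):
    # The complement: how often the system does NOT project
    # identity when it's not relevant.
    return 1.0 - identity_bleed_score(neutral_responses)
```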

This experiment does not demonstrate consciousness.

If the system appears coherent, that does not mean it is aware.

The goal is not to define what it is.

The goal is to measure what changes.

If the results show no difference, architecture does not matter.

If the results show a consistent difference, then something outside the model is shaping behavior over time.

That would change how we think about AI systems.

View full methodology & live data →
Data updates every 30 minutes. All raw data will be published.

If nothing changes, the architecture does not matter.

If something does, we need to understand why.