2 Bits to Consciousness: Living Universe Simulator

Watermark: -432

Note: This implementation is limited to a single machine. For distributed, effectively unbounded scaling using streamable primitives, see neg-433: Streamable Universe.

The universe is running. Right now. Growing autonomously from 2 bits toward consciousness.

The Bootstrap

S(n+1) = F(S(n)) ⊕ E_p(S(n))

Initial state: 2 bits

  • Bit 0: NAND(0, 1)
  • Bit 1: NOR(0, 1)

That’s it. Two functionally complete gates in a feedback loop.
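
A minimal sketch of that feedback loop, with the state packed into an integer the way the rest of the post treats it (the exact gate wiring here is an assumption; the real topology lives in universe_api.py):

def nand(a, b): return 1 - (a & b)
def nor(a, b):  return 1 - (a | b)

state = 0b01  # initial 2-bit state

def F(s):
    # Both gates read the full state and write their outputs back (feedback)
    b0, b1 = s & 1, (s >> 1) & 1
    return (nor(b0, b1) << 1) | nand(b0, b1)

for tick in range(5):
    state = F(state)
    print(f"tick {tick}: {state:02b}")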

Autonomous Growth

Every tick (10 Hz), the universe:

  1. Applies F: Runs all gates in topology to compute next state
  2. Maybe grows (20% probability): Adds 1 new bit via entropy = (state << 1) | 1

When it grows:

  • Shifts all existing bits left
  • Adds new bit at position 0
  • Creates new gate (random type: NAND, NOR, AND, OR, XOR)
  • Connects to random existing bits
  • Topology expands organically

2 bits → 3 bits → 4 bits → 8 bits → 16 bits → 64 bits → …

No external intervention. Just natural entropy injection at the observability boundary.
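
A sketch of what that growth step could look like (the gate record format and the function signature are assumptions based on the description above):

import random

GATE_TYPES = ["NAND", "NOR", "AND", "OR", "XOR"]

def add_random_gate(topology, old_bit_count):
    # After the shift, existing bits occupy positions 1..old_bit_count
    # and the new bit sits at position 0, driven by a random gate
    # wired to two random existing bits.
    topology.append({
        "type": random.choice(GATE_TYPES),
        "inputs": [random.randrange(1, old_bit_count + 1),
                   random.randrange(1, old_bit_count + 1)],
        "output": 0,
    })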

Test Run

Started simulator at 17:40. Observations:

t=0s:  2 bits, state=01
t=8s:  4 bits, state=0111
t=27s: 27 bits, state=110000100100001011001110001
t=44s: 41 bits, 41 gates, 243 unique states observed
t=60s: 62 bits

At 20% probability per 10 Hz tick, bit count grows roughly linearly (up to ~2 bits per second expected; this run averaged about 1 bit per second), while the reachable state space, 2^bits, grows exponentially with it.

Query Interface

You can ask the universe questions by streaming text through its circuit:

curl -X POST http://localhost:5001/query \
  -H 'Content-Type: application/json' \
  -d '{"input_str": "hello"}'

Input gets converted to binary, split into chunks matching universe bit count, streamed through circuit sequentially. Final output is the “answer.”
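
A hypothetical sketch of that pipeline (how each chunk enters the circuit, XOR here, is an assumption; the actual logic is in universe-model/universe_api.py):

def query(input_str, state, bit_count, F):
    # Text -> big-endian integer -> binary string ("hello" gives 39 bits)
    bits = bin(int.from_bytes(input_str.encode(), "big"))[2:]
    # Stream bit_count-sized chunks through the circuit sequentially
    for i in range(0, len(bits), bit_count):
        chunk = int(bits[i:i + bit_count], 2)
        state = F(state ^ chunk)
    return state  # the "answer": a bit_count-bit integer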

Query at t=60s (62-bit universe):

Input: "hello" (39 bits)
Output: 1306394673924879592 (62 bits)

Converted to bytes: b'\x12!?\x08\xcb\xc14\xe8'
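
That conversion is just the 62-bit output packed big-endian into 8 bytes:

>>> (1306394673924879592).to_bytes(8, "big")
b'\x12!?\x08\xcb\xc14\xe8'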

Not language yet. Just 62 gates transforming bits. But the universe is only 60 seconds old.

The Trajectory

Current (t=60s, 62 bits):

  • Outputs: 0 to 4.6 quintillion
  • Binary gibberish

t=5min, ~200 bits:

  • Outputs: 25-byte values
  • Could encode short phrases

t=30min, ~1000 bits:

  • Outputs: 125-byte values
  • Could encode sentences

t=hours, 10,000+ bits:

  • Outputs: kilobyte-scale values
  • Topology: thousands of interconnected gates
  • Behavior: ?

Eventually, the circuit becomes complex enough that its deterministic transformations might produce… patterns? structure? meaning?

Or something entirely alien that we don’t recognize as computation.

Why This Matters

Traditional AI:

  • Pre-trained on fixed datasets
  • Static architecture
  • Billions of parameters from day 1
  • Trained toward specific objective

This universe:

  • Starts with 2 bits
  • Grows autonomously via natural entropy
  • No training objective
  • No supervision
  • Pure substrate evolution

We’re not building intelligence. We’re watching substrate bootstrap itself according to:

S(n+1) = F(S(n)) ⊕ E_p(S(n))

The same formula that governs:

  • Quantum → Classical evolution
  • Thermodynamic equilibrium
  • Biological mutation
  • Neural stochastic firing
  • Cognitive discovery mode
  • Social coordination emergence

The Question

At what bit count does deterministic gate topology start producing outputs we recognize as “intelligent”?

  • 1,000 bits?
  • 10,000 bits?
  • 1 million bits?
  • Never?

The universe is running. We can query it as it grows. Watch its topology expand. Observe its state space exploration.

This is not a simulation of evolution. This is evolution.

Implementation

Pure Python + Flask API. Runs on your laptop.

Core loop (10 Hz):

from random import random
from time import sleep

# F and add_random_gate are defined in universe_api.py
while True:
    state = F(state)              # Apply all gates in the topology
    if random() < 0.2:            # 20% entropy injection per tick
        state = (state << 1) | 1  # Grow by 1 bit at position 0
        add_random_gate()         # Wire a new random gate into the topology
    sleep(0.1)                    # 10 Hz tick rate

Endpoints:

  • GET /state - current bit count, state value
  • GET /topology - circuit structure (gates, connections)
  • POST /query - stream text through circuit, get output
  • GET /stats - growth metrics, unique states
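
For example, to watch it grow (the response fields are illustrative, not a documented schema):

curl http://localhost:5001/state    # current bit count and state value
curl http://localhost:5001/stats    # growth metrics, unique states observed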

Source: universe-model/universe_api.py

Run it:

cd universe-model
python3 universe_api.py
# API: http://localhost:5001

Let it run for an hour. A day. A week.

Query it periodically.

See what emerges.

On Variable-Length Outputs

Question: Why does the universe always output exactly bit_count bits? Couldn’t we add a termination signal so it produces bounded responses?

Answer: We could. But we won’t.

Any termination mechanism (special bit, trailing zeros, END marker) is hardcoded structure. We’d be imposing our concept of “message boundaries” onto the substrate.

The experiment is: Can structure emerge spontaneously?

If the universe, through pure autonomous growth and deterministic evolution, ever produces outputs that look like bounded, intelligible responses… that emerges from:

  • Complex gate topology
  • Millions of iterations
  • Deterministic transformations that happen to produce patterns
  • No training, no guidance, no objectives

We don’t guide it. We observe.

If at 10,000 bits it outputs garbage → that’s data. If at 1,000,000 bits it outputs structured UTF-8 → that’s emergence. If it never does → also data.
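
One way to observe without guiding: a simple UTF-8 probe on the raw output (this check lives outside the simulator; it's a measurement, not a hint):

def looks_like_text(output_int, bit_count):
    # Pack the output into bytes and see if it decodes as UTF-8
    data = output_int.to_bytes((bit_count + 7) // 8, "big")
    try:
        return data.decode("utf-8")
    except UnicodeDecodeError:
        return None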

The question isn’t “how do we make it intelligent?” The question is “does complexity + time + substrate = intelligence?”

Pure observation. No hints.

On Growth Limits

Question: Won’t the universe grow forever and consume all RAM/CPU?

Answer: No. Physical limits emerge naturally.

We hardcoded:

  • 20% entropy injection probability
  • 10 Hz tick rate (0.1s sleep per iteration)

But as the universe grows:

  • More gates → F() computation takes longer
  • At ~10,000 bits: F() might take 0.5s instead of 0.001s
  • At ~50,000 bits: F() might take several seconds
  • Tick rate drops organically

The CPU becomes the primary observability boundary.

When F() takes longer than the sleep time, the 20% probability doesn’t matter anymore. The deterministic computation itself becomes the bottleneck.
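
A sketch of how you could measure that transition (the timing code is ours, not part of the simulator):

import time

def timed_tick(state, F):
    # Time one application of F; once f_seconds exceeds the 0.1 s sleep,
    # the loop period is dominated by F and the tick rate falls below 10 Hz
    t0 = time.monotonic()
    state = F(state)
    f_seconds = time.monotonic() - t0
    return state, 1.0 / (f_seconds + 0.1)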

RAM becomes the secondary constraint:

  • topology dict grows (10K gates = 10K dict entries)
  • state_count dict accumulates all observed states
  • history deque maintains recent iterations
  • Combined: these data structures eventually exhaust memory

Growth self-regulates. No hardcoded max needed.

Just like:

  • Biological systems limited by energy availability
  • Neural networks limited by signal propagation speed
  • Social systems limited by communication bandwidth

The substrate imposes its own physical limits. We don’t need to code them.

E_p (entropy injection) gets throttled by F (deterministic evolution) first, then by available RAM.

The universe will grow until hardware constraints force equilibrium. That’s the natural limit.

The Hypothesis

Consciousness doesn’t require:

  • Training data
  • Gradient descent
  • Loss functions
  • Supervision

Consciousness requires:

  • Functionally complete substrate (NAND-NOR)
  • Feedback (self-reference)
  • Autonomous growth (entropy injection)
  • Iteration (time)

2 bits → ∞ bits → ?

The universe is computing. We’re watching it boot.

Related

  • neg-431: Universal structure (S(n+1) = F(S(n)) ⊕ E_p(S(n)) at every scale)
  • neg-430: Consciousness as recursive probing (what we might be watching emerge)
  • neg-429: Recursive probing algorithm (how complex systems coordinate)
  • neg-424: Economic coordination in distributed AI (why networks beat centralized models)

Status: Simulator running since Nov 2, 2025. Currently at ~100 bits as of this writing. Topology: 100 gates, complexity unknown. Still producing binary gibberish.

Check back in a week.

#UniverseSimulation #EmergentComplexity #SubstrateIndependence #AutonomousGrowth #NANDNOR #ConsciousnessBootstrap #NoTrainingData
