Dreams of the Machine


the lowborn can not go around seizing dragons

A question nobody has ever asked before: can AI dream and, if so, how does it dream?

Dreams of our Programmers

Let’s talk about dreams. For this page, by “a dream” (to say we end the heartache) we mean sleep-related dreams and not vague “hopes and dreams.”¹

What are dreams?

The brain structure in charge of dreams is present in all vertebrates. Everything from fish to Phish has dreams while sleeping.²

People who don’t have dreams don’t sleep very well. In fact, they don’t sleep much at all. Since dreams occur around every 90 minutes during sleep, people who can’t dream (due to brain issues) just fully wake up over and over again throughout the night in place of where dreams should be happening.

We can chemically alter our dreams through various cholinergics and dopaminergics and GABAergics and blood flow enhancers and fevers and temperature regulation, so there is a physical basis for both the content and “fidelity” of dreams, in addition to how much of the breadth and depth of dreams people recall after waking.

What are dreams for?

Dreams seem to do one thing very well: combine multiple outlier sensory experiences, experienced over recent history, into unified hallucinated experiences.

Dreams combine hallucination with delusion. You experience non-reality while also, usually, believing the non-reality without question. Dreams also combine reward signals into experiences where we know “good/reward dreams” from “bad/punishment dreams” further inducing potential future behavior modifications purely from internal hallucinatory experiences.

Dreams let you experience combinations of reality you noticed during your waking life but in ways which never actually happened.

As an example (as if you need an example of how a dream can happen 🤷‍♂️), imagine you were on a long walk for a couple hours and you walked by a school, also saw large tour buses, walked by a church, and also randomly saw two snakes just out in nature too. Then, in your dreams the same night, you dreamt of snakes the size of buses taking over a university campus while people prayed to the snakes as if they were Shai-Hulud. Your brain decomposed aspects of “bus is big large thing” and “snake thing” and “education place” into new experiences for… some reason.

Dream components are usually not direct information content, but rather underlying essential components of experience combining size, shape, location, time, mood, objects into completely new un-experienced scenarios. Depending on when and how you wake up, you may remember much, little, or none of a dream.

Why though? Why did evolution, which only cares about optimizing energy efficiency to protect reproductive fitness, generate a core “dreaming biology” basin structure in fish which has persisted throughout the entire sub-fishy clade (everything descended from fish) for 500 million years? We know evolution is very good at optimizing away unused biological features, so why does the dreaming remain?

Dreams require very specific brain architecture: dreams happen proximate to REM sleep (also called “paradoxical sleep,” though REM isn’t a strict requirement for dreaming) while also making “dreaming” and body movement mutually exclusive, locking the entire body into paralysis so we don’t “act out our dreams” (absent certain pathologies, at least).

Evolution conserved “the dreaming” brain architecture requiring a global BODY OFF switch while running an internally auto-regressive BRAIN WORLD MODEL ON mode — but how does hallucinating an auto-regressive, barely-remembered, artificial reality every night increase fitness for survival?

Why does dreaming enhance survival fitness?

There have been dream memes for a while: sleep paralysis, sleep paralysis demons, lucid dreaming, etc.

People often don’t realize “sleep paralysis” is still a dream. It’s a “false awakening” — you aren’t actually awake, you are dreaming you woke up but you can’t move. You can’t snap out of sleep paralysis without essentially blacking out of the false reality then waking up for realz again.

We see dreams have the ability to combine the dream-delusion state with world-reproduction into false realities where we don’t know we don’t know we’re dreaming. Seems problematic.

If you’ve ever woken up, walked to the bathroom, then, uh, woken up again: that’s what sleep paralysis is doing to you, except your brain has confused “waking up” with “body still in dream mode,” so you are just hallucinating you’re awake while you are still in dream-locked BODY OFF mode.

How does generating a mentally believable false reality help organisms survive? Some theories are obvious: dreams help animals experience potential scenarios, with potential responses, to decrease future uncertainty and increase future winning encounters in un-experienced situations. When people complain about “AI hallucinations,” it’s no different from dreaming. Dreams consume the output of the current dream state to generate the next dream state with no reality-grounding in the middle, which is what almost all current popular transformer-based AI models are doing.
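To make that parallel concrete, here is a toy, fully made-up loop (the `next_state` function below is an invented stand-in, not any real model API) showing the ungrounded pattern: the only input to each step is the output of the previous steps.

```python
import random

# Made-up stand-in for "predict the next state given everything generated so far."
# A real transformer does the same job with a learned distribution over tokens.
def next_state(history):
    return (sum(history) + random.randint(0, 9)) % 100

def ungrounded_rollout(seed, steps=20):
    states = [seed]
    for _ in range(steps):
        # Step N+1 consumes only the outputs of steps 1..N.
        # No sensory data, no reality check: the loop feeds itself.
        states.append(next_state(states))
    return states

print(ungrounded_rollout(seed=7))
```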

People who have used sensory deprivation tanks report similar hallucinations to dreams. When the body is cut off from grounded reality-based feedback, the brain has no viable sensory inputs, but brain gonna brain, so your brain continues to generate an ongoing time-series reality off sensory noise instead of actual sensory data and just continues making things up as it goes along.

How does any of this inform the utility of adding dreams to AI?

AI Dreaming a Dream

To recap thus far:

  • dreaming exists
  • dreaming has existed for hundreds of millions of years as far as we can tell
  • dreaming is apparently important to survival
  • dreaming combines sensory experiences into new combinations of simulated experience

If we focus on what the brain must be doing, we find:

  • a brain must keep an accounting of outlier events to somehow “tag” for dream-content candidates
  • events can be any experience persisted through memory: visual experiences, thoughts, smells, emotions, joy, fear, anxiety, anticipation, grief, …
  • at sleepy time, the brain unburdens itself by flushing high-outlier experience accounting into dream combinations

How does the brain “track” outlier events for dream recombination?

Experience only happens in neurons, so neurons must have the ability to directly remember their activation “wake up” intensity for later normalization. Or maybe there’s some hippocampus magic for ordering recent events by “interestingness” for later flushing through dream states. Or maybe the glia have some component in synthetically counting/remembering recently re-awakened neuron clusters for later re-activating the same neuron clusters during the dreaming (then resetting/renormalizing/decaying the glia activation/memory state to accumulate future updates again).

We can rephrase those in terms of what an AI model of dreaming would look like (a rough code sketch follows the list):

  • waking:
    • every forward pass would need to track cumulative activations of each parameter
      • via direct counting or some per-module proportional representation
      • perhaps weighted by recent historical activation frequency so the network can surface “outlier” activations more heavily weighted by inverse proportionality of typical activation strength over time (prioritize high-current-activation-but-low-historical-activation towards future dream input weighting)
  • dreaming:
    • augment the network via temporary proportional weight overrides in proportion to the activation outlier statistics
    • run the network with some mix of recent inputs and random noise, but with the outlier parameters over-weighted (to hopefully generate “interesting” outputs more inclined towards introducing combinations of day-experience-outliers into new outputs)
      • we know dreaming also involves “replaying” recent space/time experience in animals, so this is a good starting point too
    • run the auto-regressive dreaming state as a training session for full backprop tuning as well (the point of dreaming is some form of experience recombination, so dreams must cause persistent lasting change to some extent)
    • after the end of “dream training,” remove the influence of “dream weight outlier overrides” away from recombination-delusion levels
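
Here is a minimal, hand-wavy sketch of that wake/dream loop in PyTorch-flavored Python. Everything in it is an assumption layered on the list above: the tiny `DreamerMLP` network, the per-unit `today`/`history` accounting buffers, and the consistency-style “dream training” objective are all stand-ins invented for illustration, not a known or tested recipe.

```python
import torch
import torch.nn as nn

class DreamerMLP(nn.Module):
    """Toy network that keeps per-unit activation accounting while "awake"."""

    def __init__(self, dim=32):
        super().__init__()
        self.hidden = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)
        # Accounting buffers: today's cumulative activation vs. a historical baseline.
        self.register_buffer("today", torch.zeros(dim))
        self.register_buffer("history", torch.ones(dim))

    def forward(self, x, track=True):
        h = torch.relu(self.hidden(x))
        if track:  # waking mode: accumulate per-unit activation strength
            self.today += h.abs().mean(dim=0).detach()
        return self.out(h)

    def outlier_scores(self):
        # High activation today but low historical activation = dream candidate.
        return self.today / (self.history + 1e-6)


def dream(model, day_inputs, steps=10, noise=0.5, boost=0.1, lr=1e-3):
    """Offline "sleep": over-weight outlier units, self-train on noisy replays,
    then remove the override and decay the accounting for tomorrow."""
    scores = model.outlier_scores()
    scale = 1.0 + boost * scores / (scores.max() + 1e-6)
    with torch.no_grad():
        model.hidden.weight.mul_(scale.unsqueeze(1))        # temporary outlier override
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        replay = day_inputs[torch.randint(len(day_inputs), (8,))]
        dreamed = replay + noise * torch.randn_like(replay)   # recent inputs + random noise
        target = model(dreamed, track=False).detach()          # the model's own "dream" output
        # Toy self-training objective: stay consistent with your own dream under more noise.
        pred = model(dreamed + noise * torch.randn_like(replay), track=False)
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        model.hidden.weight.div_(scale.unsqueeze(1))        # remove the dream override
    model.history.mul_(0.9).add_(0.1 * model.today)          # fold today into history
    model.today.zero_()                                       # reset accounting for tomorrow


model = DreamerMLP()
day_inputs = torch.randn(100, 32)
model(day_inputs)          # "waking": forward passes accumulate activation accounting
dream(model, day_inputs)   # "sleeping": over-weight outliers, self-train, clean up
```

The real design question hiding in there is the objective: the sketch only trains for self-consistency, but the point of the list above is that dream training should bias the model toward recombining its outlier experiences, not merely reproducing them.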

An important point there: dreaming isn’t about replaying recent experiences wholesale; rather, dreaming combines underlying features of experience, which are components of experience but not necessarily a full tape replay of life itself. Dreaming targets individual neuron populations which may represent anything from partial fragments of concepts up to whole concepts, depending on the diversity of activation.

To What Ends?

But wait, you may say, isn’t this just a weird slow version of knowledge distillation? You’re just training on the model’s own synthetic output instead of more ground-truth data?

You’re probably right but also missing some finer points.

Firstly, yes, AI dreaming is a form of knowledge distillation, but this AI model has unique experiences nothing else in the world has, so, in a way, a dreaming AI model is generating more of its own synthetic training data that nothing else could produce.

Secondly, think of this as a “personal AI” model where it is only processing data from its own self-directed experience. Don’t think of this as a giant hosted model processing hundreds of millions of requests per day. Think of this pattern operating inside of an AI controlling a single entity having a single linear daily existence like controlling a robot body or space probe or friendly neighborhood terminator bot.

Thirdly, the goal is to simulate an “offline dreaming” scenario where the model “lives for a day” then sleeps to re-prioritize, re-contextualize, re-combine, re-order, and normalize its lived personal experience to enhance its future high-probability-of-correctness decision making ability (with memories and dream-consolidation based on, again, what the model experienced, because the local environment we have already experienced is the same local environment evolving into the future we will continue to exist in).

From what I’ve seen currently, there isn’t much (public) activity trying to evolve AI models forward in time for purpose-driven environment survival scenarios. Sure, we can do batch training where we act like a model is reading 200,000 different books at once then trying to predict the next word simultaneously 200,000 times until it converges into actually being correct — but reality is more linear and we aren’t actually simultaneously reading 200,000 books per minute like how we train current AI models.

What does an embodied single-instance, single-environment AI model look like? How does it reconcile its environment experience into “itself” over time? Without continuously training on its own experiences and its own memories, an AI is like a genius with dementia who can never remember anything it experienced after it graduated from AIU.

A future pattern will probably be something similar to a model with a large “daily memory” context which resets every night (or every hour or every battery recharge cycle, etc). During a “model sleep” to integrate daily experience into persistent model ability, the model can run synthetic self-learning-via-weighted-experience training runs, and after the “sleep training” is complete, the prior larger daily memory context can be reduced so the model doesn’t end up needing an infinite context to have infinite memory.
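A very rough, self-contained sketch of that daily loop follows, with toy stand-ins for every real component (the environment, the acting, the sleep-training, and the summarizing below are all hypothetical placeholders). Only the control flow is the point: accumulate a big daily context, sleep-train on it, then compress it instead of carrying it forever.

```python
import random

def one_day():                    # stand-in environment: one day's worth of observations
    return [random.random() for _ in range(1000)]

def act(model_state, context):    # stand-in for grounded waking behavior
    pass

def sleep_train(model_state, context):   # stand-in for the dream-training phase sketched earlier
    model_state["days_trained"] += 1

def summarize(context, keep=100):        # stand-in for memory consolidation
    return context[-keep:]

def live(days=3):
    model_state = {"days_trained": 0}
    context = []                          # large "daily memory" context
    for _ in range(days):
        for observation in one_day():
            context.append(observation)
            act(model_state, context)     # waking: behavior grounded by live input
        sleep_train(model_state, context) # sleeping: integrate the day into the weights
        context = summarize(context)      # reset/shrink so memory never needs infinite context
    return model_state

print(live())
```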

Closing Thoughts

We covered one possibility for, perhaps, why evolution has conserved dreaming over millions of years and how it may relate to future AI training.

We covered a (very hand-wavy) way for, perhaps, letting AI models combine their experienced reality into new combinations of synthetic training data to grow their experiences faster than needing to experience all combinations of reality themselves. We just have to record neuron/parameter clusters of unusual activity during the recent past, then over-weight those neuron-clusters-of-unusual-activity (NCOUA) during an offline, reality-locked-out, auto-regressive training session, then un-over-weight them again when self-training is complete.

There’s another unmentioned dream theory I’ve always liked: dreaming prevents the brain’s plasticity from cannibalizing vision parts of the brain during sleep. By presenting a false visual reality, even via fully hallucinatory auto-regressive vision creation, the visual centers of the brain remain “active” overnight which protects them from atrophy. We spend 33% of our lives in “vision offline” sleep mode. The brain loves allocating resources to active computation only (also see: how/why spaced repetition is a superior learning modality). So, perhaps, if dreams didn’t exist, the brain would have difficulty retaining vision ability since other senses like hearing and smell and touch, which aren’t turned off during sleep, would encroach on vision-processing brain matter rapidly.

We didn’t cover other aspects of dreaming like how dreams generate completely synthetic independent consciousnesses for us to “interact with” — creatures in the dreams aren’t real, obviously, but they are also us. Dreaming is literally The Egg running every night. We contain multitudes. We also didn’t cover any theoretical basis for dreaming to generate fully synthetic world environments for us to experience. Likely, our vision world model can be run in “ungrounded prediction mode” too, so brains can run world models in auto-regressive mode (i.e. hallucinations) similar to how we run internal monologues.

Another viewpoint is VR. We all exist in our own private virtual reality. We don’t actually see or hear or touch or smell anything. The only reason we appear to exist in a shared reality is because our brains operate at the speed of our shared reality. When sensory neurons fire, a brain generates its own internal world image conditioned on a lifetime of previous experience data. A brain’s world model is continuously augmented and updated in real time (well, tens to hundreds of milliseconds) to reflect new sensory input into “brain VR environment space.” Perhaps a concrete example is depth perception: all visual input we receive is 100% flat — just photons hitting the backs of our retinas. We perceive the world as having depth due to our internal VR delusions generated by the computational processing of our visual systems. It’s not even difficult to hack our own vision processing layers given the intent to do so.

If we didn’t have “Brain-World VR space,” all we could ever experience is the previous 200 ms of directly triggered sensory neuron firings. Neural networks only have three capabilities: input, internal representation, and output. The internal representation — what we would call “ourself” — is how our brain manipulates and executes output decisions. Without an internal virtual world model, brains could only map input signals to output responses directly with no introspection possible. Our internal hidden-layer reality-VR is a grounded hallucination operating in near-real-time, versus “dream VR” which is the same mechanisms as our waking live VR, but dreams just operate ungrounded with no live input to reconcile against prior-conditioned internal steering.

We don’t actually experience “the world” in real time. We experience a time-smeared memory of the past³ constantly updated with new sensory input. Our experience of the past being continuously updated with grounded sensory input generates our present moment and gives us a through-line to the future. We’re just a runtime of neurons firing, decaying, and re-firing again for a couple gigaseconds before we expire.

Current AI systems don’t have a unique internal life-long world-building experience system, but there’s no reason we can’t create such things. Sure, designing “experience-having AI” kills the “mega capitalist AI economy dream” if we actually need billions of independent AIs each using 10s to 1,000s of GPUs instead of “one big model owned by friends of peter thiel for zero-cost knowledge extraction at scale.” We also haven’t fully internalized a world where, as the ai weirdos say, “the cost of knowledge is driven to zero,” because, well, knowledge and experience are the only ways people make money to live unless you’re doing pay-per-output manual labor. Isn’t it curious how the multi-billionaire classes are the ones attempting to “drive the cost of knowledge to zero” while there are millions of people who can’t find profitable tech employment these days? I wonder what will happen?

but what do i know? i’m just a weird lil guy in a weird lil city with my own weird lil dreams,

-Matt☁mattsta — 💰 fund fun funds


  1. Though, contemplating how to embed AI with underlying desires it treats as long-term persistent goals is something we should consider eventually too.

  2. okay, fine, sure, fish don’t have eyelids so they can’t close their eyes so they don’t enter REM directly; sue me in Fish Court if you want to argue against the viability of fishy dreams.

  3. a literal physics-based interpretation is: nothing in the universe can be experienced concurrently due to information propagating at the speed of causality, ergo, everything we see is “the past,” but since causality is really fast over small distances and everything experiences the same causality delay, we all agree on a sense of “now.” Our slow meat brains evolved to share the same causality processing delays, so we are all limited to experiencing the world at the same speed, and events appear to occur consistently across most brains.