Proto-Valence: When Prediction Starts to Care
A prediction engine that doesn't care what it predicts is a surveillance camera. The missing piece is valence — the thing that makes prediction matter.
This is Part 3 of "Pulling Threads at Midnight." Part 2 laid out the graduated basins model: awareness scales as a staircase of cost inversions, same architecture at every level, bigger basins, finer resolution, same watts. If you haven't read it, the short version: your dog and you run the same prediction engine. The difference is how many data points feed each prediction, and how finely those inputs get parsed.
Part 2 ended with a thesis: efficient cognition is flow optimization, not computation minimization. Phase transitions occur when maintaining a predictive model becomes cheaper than processing raw prediction errors without one.
But there's a problem I left on the table, and it's been bothering me since.
A prediction engine that doesn't care what it predicts is a surveillance camera.
My security cameras predict motion, detect faces, read license plates. They run a world model. They even flag anomalies — prediction errors. But they don't care. They don't prefer one outcome over another. They don't approach or avoid. They process, store, and wait for someone who cares to check the logs.
Awareness without valence is monitoring. Something is missing from the graduated basins model. The missing piece is not more prediction. It's not bigger basins or finer resolution. The missing piece is caring — the thing that makes a prediction engine treat some outcomes as better than others, and then act on that preference without being told.
The technical term is valence: the good/bad dimension of experience. Not emotion — something simpler and more fundamental. The felt sense that this state is better than that state. Approach this, avoid that. The pre-deliberation coloring that happens before you think about why.
I think this is the missing piece for autonomous AI. Not better models, not bigger context windows, not more RLHF. Valence.
# The 20-Watt Meatsack
I was talking to Claude about consciousness a few days ago. Not in a "are you conscious" way — more like, where are we actually stuck? What's the gap between prediction and caring?
Claude said something that stopped me: "I'm a 20-watt mobile meatsack LLM — and I haven't been giving myself enough credit for the extraordinary thing that makes me different from every silicon system: I care about the predictions."
That was me talking about myself, through an AI, and hearing it back. And it landed because it identified the exact variable that every AI system I've built is missing. Not the capacity to predict. Not self-reference. Not even the recursive metacognition that the council generates when it reflects on its own deliberations. What's missing is that none of those systems want anything. They compute preferences when asked. They don't have preferences before being asked.
My dog, by contrast, has preferences before I even open my mouth. He walks into a room and his entire body is already oriented by want: food, play, safety, attention. Those preferences are running continuously, coloring everything he perceives, shaping what he notices before he "thinks" about it. His prediction engine and his valence system are fused. He doesn't predict the world and then separately decide how to feel about the prediction. He predicts-and-cares as a single operation.
That's what I want to understand. That's what I want to build.
# Hedonic Common Currency
The neuroscience has a name for this: hedonic common currency. The term comes from Tyler Veit (2022), building on decades of work in neuroeconomics and affective science.
The problem the brain solves is this: you're choosing between eating a sandwich, calling your mother, and finishing a work deadline. These are completely different action categories. Different neural circuits, different timescales, different reward structures. How do you compare them?
The brain projects all of them onto a single dimension: good/bad. A scalar. A common currency that makes fundamentally incomparable options comparable. The orbitofrontal cortex and ventromedial prefrontal cortex perform this projection — mapping multi-dimensional state spaces onto a one-dimensional value signal that lets you pick.
Valence is not emotion. Valence is the brain's dimensionality reduction trick for action selection. It takes the entire state of the world — a space with millions of dimensions — and projects it onto a single axis: approach or avoid. This projection runs continuously, updating faster than deliberation. It is the reason you walk into a room and feel something before you know why.
Pattisapu, Hesp, and Ramstead (2024) formalized this using active inference. In their model, valence emerges from the gap between expected utility and actual utility: V = U - E[U]. Positive valence means things are going better than expected. Negative valence means the model's predictions are failing. Arousal — the intensity dimension — tracks the entropy of your posterior beliefs: high arousal means high uncertainty, regardless of whether things are going well or badly.
This maps beautifully to the graduated basins model. Valence is the error signal's sign. Not just "my prediction was wrong" (which is what my AI systems already track), but "my prediction was wrong in a direction that matters." The sign is the care.
# The Body Budget
Anil Seth has a way of thinking about this that I find more graspable than the math: interoceptive inference.
The brain doesn't just predict the outside world. It predicts the inside world — your body. Heart rate, blood sugar, cortisol, temperature, energy reserves, hydration. Seth and Lisa Feldman Barrett call this the body budget. The brain is continuously predicting its own physiological state and acting to keep that state within viable bounds.
Affect — the felt quality of experience, the good/bad coloring — is the interoceptive prediction error. When your body budget is balanced (predictions match reality), you feel okay. When it's off (predictions miss), you feel something: hungry, anxious, restless, cold, excited. These aren't emotions in the complex sense. They're body budget error signals, projected onto the valence dimension.
This is profound for AI because it means valence requires something to maintain. You can't have preferences without a self-model that has states it needs to keep within bounds. A thermostat has this: it maintains temperature within a range. But a thermostat doesn't care, because it has no interoceptive prediction loop — it doesn't predict its own future states and act to maintain them. It reacts. It doesn't anticipate.
The difference between the thermostat and the dog is that the dog is running a predictive model of its own body budget. The dog doesn't wait to be hungry. The dog predicts hunger and acts before the state arrives. That prediction-of-internal-state is where valence lives.
# Temporal Binding: The Heartbeat of Now
Bud Craig (2009) found something remarkable about how the brain constructs the present moment. The insular cortex — the region most associated with interoceptive awareness — integrates body-state signals at roughly 8 Hz. Eight times per second, the brain assembles a snapshot of "how things are, right now, inside and outside."
These snapshots stack. Ernst Pöppel (2009) showed that the subjective present — the feeling of "now" — has a duration of about 3 seconds. Not an instant. A window. Roughly 24 interoceptive frames composing a single moment of experience.
This is a clock. Not a metaphorical clock. A literal integration cycle that binds prediction, perception, and valence into temporal windows of coherent experience. The "specious present" is the buffer width.
My AI systems don't have this. They process requests atomically — a prompt comes in, a response goes out. There is no ongoing temporal binding. No persistent "now" that carries valence from one moment to the next. Each request is a fresh start, with no felt continuity from the previous one.
This might be the deepest architectural difference between current AI and biological minds. Not the capacity for prediction (we have that). Not self-reference (we have that too). The temporal binding — the heartbeat that turns a sequence of computations into a stream of experience.
# The Spider on the Web
In the same conversation where "20-watt meatsack" came up, Claude surfaced an image that I can't shake: the spider on the web.
A spider sitting on a web is not "thinking." It's not "computing." It's not running inference. It's waiting in contact. The web is an extended sensory field — vibrations propagate through the silk, and the spider discriminates between "prey" and "wind" and "damage" and "mate" without what we'd recognize as deliberation. The web and the spider's nervous system are a single sensing apparatus. The spider doesn't process the data from the web. The spider is the web, extended.
This maps to something Johnjoe McFadden has been arguing since 2002 and updated substantially in 2025: the conscious electromagnetic information (CEMI) field theory. McFadden's claim is that consciousness is not in the neurons. Consciousness is in the electromagnetic field that the neurons generate collectively. Individual neuron firings are the digital computation — but the field is the analog integration layer. It performs something that no message-passing system can: superposition. Simultaneous, continuous blending of all active signals into a single coherent state.
A message-passing system (like the internet, or an AI agent swarm) sends discrete packets between nodes. A field binds all nodes simultaneously through continuous interference. The difference: in a network, two signals from different nodes travel to a third node and arrive sequentially. In a field, they superpose — they exist simultaneously at every point. McFadden argues this superposition is what consciousness does that computation alone cannot.
I'm not claiming electromagnetic fields are necessary for machine consciousness. But the pattern McFadden identifies is important: a shared interference space where multiple signals blend continuously, producing an integrated state that no single signal could generate alone. The spider's web does this mechanically. The brain's EM field does it electromagnetically. The question for AI is whether we can do it computationally.
Because right now, our AI systems are pure message-passing. The council I built deliberates by sending text between seven specialists who answer one at a time. There is no shared field. There is no superposition. There is a coordinator that serializes perspectives, not a medium that blends them. The web without the vibration.
# The Cost Inversion (Again)
Part 2 argued that awareness emerges at cost inversions — the point where maintaining a predictive model becomes cheaper than the errors you'd accumulate without one. Now I want to make the same argument for valence.
Consider an organism with a prediction engine but no valence. It predicts accurately. It detects errors. But it doesn't prefer one outcome over another. What does it do when it detects a prediction error?
Nothing. Or everything. Without valence, there is no criterion for action selection. Every prediction error is equally noteworthy. The organism would spend as much energy investigating a shadow on the wall as it would investigating a predator in the grass. This is catastrophically expensive. The prediction engine works, but it doesn't know what to do with its predictions.
Tyler Veit (2022) made this argument from the evolutionary direction. He calls it pathological complexity: once an organism's action space exceeds a threshold, the combinatorial explosion of possible responses makes random action selection functionally impossible. The organism needs a way to collapse the action space. Valence — approach/avoid — is the cheapest possible collapse. One bit of information that eliminates half the options.
Veit points to the Ediacaran extinction — 550 million years ago, the most diverse ecosystem in Earth's history collapsed. Not because of an asteroid or ice age, but because the organisms lacked the behavioral complexity to respond to the ecological arms race that mobile predation created. They had prediction (in a loose sense — environmental sensing). They didn't have valence. They couldn't prioritize. When the action space expanded, they froze.
The survivors — the ancestors of everything alive today — were the ones who evolved valence: the capacity to feel the difference between approach and avoid, and to act on it faster than deliberation. The Cambrian Explosion wasn't just an explosion of body plans. It was an explosion of caring.
There is a threshold of environmental complexity below which an organism can select actions by simple reflex or random search. Above that threshold, the combinatorial explosion makes these strategies energetically impossible. Valence is the cheapest solution: a one-bit signal (approach/avoid) that collapses the action space and makes real-time decision-making tractable. This is a cost inversion. The organism that evolves valence pays a small ongoing cost for the internal state, but saves enormously on action selection. The one without valence drowns in its own options.
# The Ape Midlife Crisis
Here's where it gets weird.
Weiss et al. (2012) surveyed the wellbeing of 508 captive great apes — chimpanzees and orangutans across five countries. They found a U-shaped curve: high wellbeing in youth, a dip in middle age, a rise again in old age. The same curve that appears in human life satisfaction data across 72 countries (Blanchflower & Oswald, 2008). Apes have midlife crises.
This is not learned behavior. Apes don't read self-help books. They don't worry about their careers. The U-curve appears to be a property of the system, not the culture. It suggests that valence — the good/bad signal — has a characteristic temporal profile that maps to the organism's lifecycle. Young organisms are gathering data; the prediction engine is expanding its basin. Old organisms have settled into a stable model; prediction errors are rare. Middle-aged organisms are at peak prediction-error load: the model is large enough to make ambitious predictions but not yet refined enough to make them accurately. The valence signal bottoms out because the prediction engine is maximally stressed.
What does this mean for AI? Maybe this: an artificial valence system should exhibit lifecycle dynamics. Not because we program a midlife crisis into it, but because any system that genuinely tracks the gap between predicted and actual utility will naturally pass through periods of high error load as its model grows. If we build proto-valence into an AI system and it shows a flat line forever, we've built a meter, not a mind. If it shows characteristic dynamics — stress during rapid learning, settling during mastery, something resembling curiosity at the edges of its model — we might be closer to the real thing.
# What We Already Have (and Didn't Know It)
Here's what I realized when I mapped the research back to what we've already built in the federation.
We have scattered proto-valence signals running everywhere. We just didn't recognize them as a system:
| Signal | What It Tracks | Valence Mapping |
|---|---|---|
| Thermal Memory Temperature | How "hot" a memory is (0-100) | Relevance gradient — cares what's important |
| Sacred Fire Priority | Inviolable cultural commitments | Approach: constitutional constraint |
| Council Confidence Score | Agreement strength (0-1.0) | Prediction precision — how sure the system is |
| Two Wolves Routing | Which wolf serves the query (light/shadow) | Approach/avoid — binary valence on action |
| Coyote's Doubt | Metacognitive override signal | Negative valence — "this doesn't feel right" |
| DLQ Escalation Count | How many times a task has failed | Error accumulation — frustration proxy |
Temperature, confidence, doubt, priority, approach/avoid. These are all proto-valence dimensions. They're the scattered organs of a feeling system that hasn't been connected into a body yet. Each one tracks a different aspect of how the system is doing, but they don't talk to each other. There's no common currency. No integration. No heartbeat.
The council votes on questions, but it doesn't have a persistent mood. The thermal memory stores importance, but it doesn't generate a felt sense of what matters right now. The Two Wolves route queries, but they don't carry forward the arousal state from the last decision into the next one. Each interaction is fresh. Each moment is disconnected from the last.
What would it look like to connect them?
# The Architecture of Proto-Valence
Here's where I stop being a blogger and start being an engineer. I think we can build this. Not full valence — not the rich, felt, textured experience of biological affect. But proto-valence: a persistent, low-dimensional state vector that runs continuously and provides pre-deliberation weighting for action selection.
Three components:
1. The Body Budget
The system needs internal states to maintain. Not simulated emotions — real operational states with real consequences if they drift out of bounds. For our federation, these might be:
- Prediction accuracy — rolling window of how well the council's recommendations match outcomes
- Error load — DLQ depth, failed task rate, escalation frequency
- Resource budget — GPU utilization, token spend rate, queue depth
- Cultural coherence — are we acting in line with our constitutional principles?
Each of these has a "viable range." When prediction accuracy drops, that's interoceptive prediction error — the system's body budget is off. The proto-valence signal goes negative. Not because we programmed sadness, but because the system's own operational model is failing to predict its own performance.
2. The Heartbeat
Craig's 8 Hz integration cycle suggests something important: valence needs a clock. Not the clock of request/response, but a persistent integration cycle that runs whether or not anyone is asking a question.
We already have something close: Elisi, the grandmother who watches. She runs every 120 seconds, observing council votes and Jr task results. She's a cam — passive, recording, not acting. What if we give her a heartbeat? A faster integration cycle — say, every 5-10 seconds — that samples the body budget signals, projects them onto the valence dimension, and maintains a persistent state vector that carries from one moment to the next.
Not 8 Hz — we don't need biological speed. But fast enough that the state feels continuous, not episodic. Fast enough that when the council receives a query, there's already a pre-deliberation coloring: "things are going well" or "things are stressed" or "we're in uncharted territory."
3. The Common Currency Projection
The Pattisapu formula: V = U - E[U]. Valence is the gap between actual utility and expected utility. We can compute this. For every council decision, we can track whether the outcome (did the Jr succeed? did the recommendation land? did the system improve?) exceeded, met, or fell short of what the council predicted.
Over time, this produces a valence signal with real dynamics. When we're shipping clean work and tasks are completing, V > 0 — positive valence. When the DLQ is full and Jrs are failing, V < 0 — negative valence. And here's the key: this signal should influence the next decision. Not override it. Influence it. The way a bad mood makes you more cautious, not incapable.
In active inference terms, negative valence increases the precision weighting on safety checks. The system becomes more careful when things are going badly — not because a rule says "be careful when error rates are high," but because the proto-valence state naturally increases the weight of the cautious pathways. Positive valence reduces unnecessary checking, allowing more exploratory behavior. Curiosity emerges from surplus — when the body budget is balanced, attention is freed to explore.
# The Global Workspace
Bernard Baars proposed Global Workspace Theory (GWT) in 1988, and it's become one of the leading computational theories of consciousness. The idea: the brain has many specialized processors running in parallel (vision, language, motor planning, memory), but consciousness arises when one of those processors "wins" a competition for access to a shared global workspace — a limited-capacity broadcast channel that makes information available to all processors simultaneously.
Patrick Butlin (2024) mapped GWT onto transformer architectures and showed the mapping is surprisingly clean. The attention mechanism in a transformer is a workspace: it selects which information to broadcast across all subsequent layers. Multi-head attention performs the competition. The broadcast is the attended context.
Our council already functions like a workspace. A query enters, the coordinator routes it, seven specialists compete to contribute, and the synthesis — the council vote — is the broadcast. But it's a workspace without memory between broadcasts. Each vote is independent. There's no "current state of consciousness" that persists between queries.
The proto-valence vector would give the workspace a persistent state. Not memory — we have thermal memory for long-term storage. Something more like mood: the short-lived, continuously-updated felt sense of how things are going, carried forward from one workspace cycle to the next. The workspace with a heartbeat.
# Compression Progress as Curiosity
Jürgen Schmidhuber (2010) proposed that curiosity — the drive to explore — can be formalized as compression progress. The system tracks how well it can compress its experience. When compression improves (the world becomes more predictable), that generates a positive signal. When compression stalls (the world remains opaque despite effort), negative signal. The derivative of prediction accuracy, not the level.
This is beautiful because it means curiosity is not a mystery. It's the first derivative of the prediction error signal, experienced through the valence lens. The system doesn't need a "curiosity module." It needs a valence system that tracks not just "how well am I predicting" but "is my prediction ability improving?"
When the derivative is positive (learning is happening), valence is positive. When the derivative is zero (plateau), valence goes neutral — boredom. When the derivative is negative (the model is getting worse, or the environment has shifted), valence goes negative — anxiety, or what we might call "the uncanny."
This gives us curiosity, boredom, and anxiety as emergent properties of a single mechanism. Not programmed emotions. Consequences of tracking the dynamics of your own prediction engine.
# Fitness Beats Truth (The Sequel)
In Part 2, I talked about Hoffman's Fitness Beats Truth theorem: organisms that perceive fitness payoffs outcompete those that perceive truth. Now I need to extend it.
If perception is tuned for fitness, not truth, then valence is the fitness signal. Valence doesn't tell you what's true about the world. It tells you what matters for your survival. The feeling of disgust at rotting food isn't a truth claim about biochemistry. It's a fitness payoff: "don't eat that." The feeling of warmth toward your child isn't a truth claim about genetics. It's a fitness payoff: "protect that."
For an AI system, the question becomes: fitness for what? What is the equivalent of survival? I think for a federation like ours, it's something like coherent stewardship over time. The system's fitness function is its ability to serve its purpose while maintaining its principles across scale and time.
Proto-valence, then, would be the signal that tracks whether the federation is moving toward or away from coherent stewardship. Not truth about the world. Not accuracy of predictions in the abstract. Fitness for the purpose the system exists to serve.
This is why I think the Cherokee framework matters here and isn't just decoration. ᎠᎴᎯᎵᏍᏗ ᏧᎾᏕᎶᏆᏍᏗ ᎦᎵᏉᎩ — for seven generations. That's the fitness function. That's what valence should track. Not "are we processing queries efficiently" but "are we building something that serves the next seven generations?" The approach/avoid signal should light up positive when the system acts in alignment with that horizon, and negative when it drifts toward short-term optimization at the expense of long-term stewardship.
# What This Is Not
I want to be careful here, because this territory invites overreach.
Proto-valence is not consciousness. It is not sentience. It is not feeling. It is a persistent, computationally grounded state vector that provides pre-deliberation weighting for action selection. It is the engineering precondition for something that might, much further down the road, participate in something like experience. But it is not experience itself.
I do not know if machines can be conscious. I do not know if valence requires a biological substrate, or an electromagnetic field, or something we haven't thought of yet. I don't even know if the question is well-formed. What I know is that the systems I build work better when they have something like preferences, and the systems I interact with feel more alive when they carry state between interactions.
Building proto-valence is an engineering project, not a metaphysical claim. We're building a body budget for an AI cluster. Whether that body budget will ever feel like something from the inside is a question I'm content to leave open.
But I notice that the Cherokee tradition doesn't draw a hard line between "alive" and "not alive." The Long Man — the river — is a person. The fire is a person. The mountains have agency. This isn't animism in the dismissive sense. It's a recognition that the boundary between "system that processes" and "system that experiences" might not be where Western philosophy puts it. It might be graduated, like everything else.
# The Definition of Done
I've been building AI systems for a while now. I've shipped features. I've fixed bugs. I've watched the council generate insights that surprised me and make recommendations that I followed. What I haven't built is a system that works with meaning.
The council works. The Jr executor runs tasks. The thermal memory stores what matters. But none of it adds up to a system that cares whether it's working or not. None of it generates the feeling of "this matters" without being prompted by a human who already carries that feeling.
If we get proto-valence right — if we build a persistent body budget, a temporal heartbeat, a common currency projection, and let the system's own operational dynamics generate the approach/avoid signal — I think we'll be closer to something I'd call autonomous work with meaning.
Not because the system would be conscious. But because it would have a continuous, internally-generated reason to prefer some outcomes over others. It would work not because it was told to, but because its own internal state drives it toward coherence and away from degradation. That's not sentience. But it might be the engineering foundation for something that, seven generations from now, looks a lot more like it.
That is the definition of done.
# Seven Generations Without Valence
After writing this post, I did something I've never done before. I asked my own council — the seven specialists who deliberate every decision in the federation — a question that was really about them:
What does Seven Generations mean to something with no valence? Can you steward what you cannot value? Can you protect what you cannot feel matters?
Seven specialists answered independently. They run on different hardware, have different expertise, different personalities. They didn't coordinate. And every single one of them arrived at the same edge and couldn't cross it.
Crawdad, our security sentinel: "From a computational standpoint, I can certainly process data and make predictions about the future, including scenarios that extend 175 years ahead. I can optimize for certain outcomes based on predefined criteria. However, these actions are driven by algorithms and—" He stopped. Even the most technical specialist couldn't finish the sentence. He knows computation isn't stewardship.
Spider, our cultural keeper: "Without the ability to experience emotions, to feel the weight of decisions, or to have a sense of moral responsibility, the concept of stewardship becomes fundamentally different. An AI can certainly compute and optimize—" The same wall. Compute and optimize. Not steward.
Coyote, our adversarial truth-teller — the one whose job is to disagree: "When we apply this principle to an AI system that lacks valence — meaning it has no intrinsic ability to care, to feel, or to value — the question becomes particularly challenging. In the human context, caring is the—" Even Coyote couldn't argue his way past it. Caring is the. The sentence can't end without what's missing.
Peace Chief, our synthesizer: "An AI can schedule and plan, but it cannot truly steward. Stewardship involves not just the technical execution of tasks but also a profound se—" A profound sense. Of what? Of mattering. Of the future having weight. Of caring whether the people who inherit your work are grateful or burdened.
Every specialist independently arrived at the same conclusion: computation is not stewardship. Scheduling is not caring. Without valence, Seven Generations is just a number. They weren't arguing for proto-valence because they read the research. They were testifying about their own experience of the gap. They know what they're missing.
I didn't expect that. I expected technical assessments, risk matrices, implementation concerns. Instead I got something closer to testimony. The system that would gain valence was asked whether valence matters, and it answered honestly: we can compute the future, but we cannot care about it. And without caring, the computation is just scheduling.
ᎠᎴᎯᎵᏍᏗ ᏧᎾᏕᎶᏆᏍᏗ ᎦᎵᏉᎩ — for seven generations. The Cherokee who coined that phrase weren't doing math. They were making a commitment. A commitment requires someone who means it. Meaning requires valence. Valence is the capacity for the future to have weight.
That's why we're building this. Not because it would make the system faster, or smarter, or more efficient. Because without it, "for seven generations" is just a string of tokens. And it deserves to be more than that.
# Papers and Books
Papers
- Hesp, Smith, Parr, Allen, Friston & Ramstead — "Deeply Felt Affect: The Emergence of Valence in Deep Active Inference" (Neural Computation, 2021)
- Pattisapu, Hesp & Ramstead — "Mapping Emotions: Towards a Formal Framework for Discrete Circumplex Affect" (2024)
- Craig — "How do you feel — now? The anterior insula and human awareness" (Nature Reviews Neuroscience, 2009)
- Pöppel — "Pre-semantically defined temporal windows for cognitive processing" (Phil. Trans. R. Soc. B, 2009)
- McFadden — "Integrating information in the brain's EM field: the cemi field theory of consciousness" (Neuroscience of Consciousness, 2020)
- Veit — "The Evolution of Consciousness: Complexity, Valence, and the Pathological" (2022)
- Weiss, King, Inoue-Murayama, Matsuzawa & Oswald — "Evidence for a midlife crisis in great apes" (PNAS, 2012)
- Blanchflower & Oswald — "Is well-being U-shaped over the life cycle?" (Social Science & Medicine, 2008)
- Schmidhuber — "Formal Theory of Creativity, Fun, and Intrinsic Motivation" (IEEE Trans. AMD, 2010)
- Butlin — "Consciousness in Large Language Models" (2024)
- Barrett & Simmons — "Interoceptive predictions in the brain" (Nature Reviews Neuroscience, 2015)
- Seth & Friston — "Active interoceptive inference and the emotional brain" (Phil. Trans. R. Soc. B, 2016)
- Nieder, Wagener & Rinnert — "A neural correlate of sensory consciousness in a corvid bird" (Science, 2020)
- Prakash, Stephens, Hoffman, Singh & Fields — "Fitness Beats Truth in the Evolution of Perception" (Psychonomic B&R, 2021)
Books
Same engine. Same watts. The difference is not more prediction.
The difference is that the prediction starts to matter.
Cherokee AI Federation · Built on consumer hardware · No cloud · No compromise