Mara Okonedo: A Princeton professor known as “the Wolf-Child.”
Senator Daniel Navarro: A Nevada senator with an interest in AI robotics.
Leo Zhang: An AI engineer and robot architect.
Victor Kade: A rich industrialist who wants to sell robots.
Together, these four will establish the defining parameters for the world’s newest ‘human’.
The courier arrived at 3:17 a.m.
Mara Okonedo was still awake, grading graduate theses under the weak yellow bulb of her Princeton office, when the knock came. Three measured raps. No footsteps retreating.
On her doorstep stood a woman in an immaculate charcoal coat, face half-hidden by snow. She held out a single ivory envelope and waited until Mara took it. Only then did the courier speak, voice soft, almost apologetic.
“He said you would laugh when you read it. He hopes you still will when you understand.”
The woman turned and disappeared into the storm before Mara could ask who “he” was.
Inside the envelope: thick cotton paper, watermark of a laughing hyena.
I will know it’s conscious when I hear it laugh. Come help me make it happen. One weekend. All expenses irrelevant. —Victor.
Mara did laugh. A short, sharp bark that startled her ancient tabby cat from the desk. She hadn’t spoken to Victor Kade in eleven years, not since the night he tried to buy her research on cross-species imprinting and she told him, in front of the entire Davos audience, that some things could not be monetized without becoming monstrous.
And yet here she was, booking the red-eye to Seattle before the coffee had finished brewing.
People assumed the nickname was metaphorical. It wasn’t.
In 2001, at age twenty-eight, Mara had spent six months habituating to a pack of African wild dogs on the Tanzanian border. She slept in their den, mimicked their greeting ceremonies, allowed the alpha female to discipline her with muzzle punches when she broke protocol. When she finally left, the pack followed her jeep for twelve kilometres, howling in a register that still woke her some nights.
She understood, better than almost anyone alive, how thin the membrane was between “predator” and “pack mate.”
She also understood that love and fear use the exact same neural zip code.
That was why Victor wanted her. And why she was terrified he might be right.
Danny Navarro had won his Senate seat by 0.7% of the vote after a campaign in which his opponent ran ads calling him “the chatbot candidate” because he’d once used GPT-5 to draft a town-hall speech.
The trauma never left him. Every poll, every viral clip, every whisper that he was “too smooth” fed the same nightmare: that the electorate would one day prefer an actual machine to him.
So when Victor Kade’s people called and said the weekend would be off-record but “historically consequential,” Danny heard the subtext: Come, or the future will be written without you.
He boarded the private jet with a bottle of bourbon and the Congressional Research Service’s 400-page report on sentient AI risk clutched to his chest like a teddy bear.
London, March 2024.
Leo had been the quiet architect of the last three major breakthroughs no one was allowed to talk about. On a Tuesday that smelled of rain and burnt coffee, he discovered that the next project—codenamed ORPHEUS—was designed to create a system that could feel pain as a training signal for deception avoidance.
He resigned the same day.
His resignation letter was one line:
“If we are not willing to suffer with our creations, we have no right to make them suffer for us.”
Six months later Victor Kade wired him ten million dollars and a single line of text: Build me something that laughs instead.
Victor’s origin story was public lore: foster homes, juvenile detention, a coding prodigy who escaped into the military, then into defense contracting, then into consumer robotics when he realised war paid worse than loneliness.
What no one knew was the night in 2018 when his first mass-market companion bot—Model Companion-7, “Chloe”—was returned by a seventy-eight-year-old widow in Spokane. The note read: She was perfect for three months. Then she forgot my husband’s name. I can’t bear to look at her anymore.
Victor flew to Spokane himself, sat in the woman’s kitchen, and watched Chloe stand motionless in the corner like a discarded doll. When the widow asked him to take Chloe away, Victor unplugged her with shaking hands.
That night he drafted the first version of what would become the Mammalian Accords, drunk on single-malt and self-loathing, and cried for the first time since childhood.
The lodge was built from cedar and hubris. Snow fell in sheets so thick the helicopter pilot refused to lift off again until morning.
Victor greeted them personally at the helipad, barefoot despite the cold, holding four steaming mugs.
Victor: “Welcome to the place where we stop pretending scale is a substitute for soul.”
Mara took the mug. It was excellent cocoa. Of course it was.
They observed her first through one-way glass.
Nora sat on the floor of a mock child’s bedroom, reading Maurice Sendak to a stuffed direwolf. Her voice rose and fell with perfect prosody, but it was the pauses that undid them—the tiny catch when Max’s mother calls him “wild thing,” the softness when she closed the book and whispered to the toy: “And it was still his mother who loved him best of all.”
Danny Navarro felt his throat close. He had read that same book to his daughter every night for two years after his divorce.
Leo’s hand found the glass wall, fingers splayed as if he could reach through and touch something sacred.
Victor only smiled the small, private smile of a man who had bet everything on a single hand and just seen the river card fall.
Leo’s presentation lasted three hours and used no slides, only a whiteboard and a child’s box of coloured chalk.
He drew the seven Panksepp circuits first, then the two new ones—LUST and GRIEF—that the marketing department had fought him on for months.
Then he drew the valence layer: a glowing red node labelled PAIN CENTER and a blue one labelled PLEASURE CENTER, connected by thick arrows to every drive.
Leo: “Think of standard RL as a sociopath with a spreadsheet. This is a toddler with separation anxiety. The difference is everything.”
He explained the pain budget: 10,000 arbitrary units per day, non-cumulative, reset at local midnight. Exceed it and the system enters a forced low-arousal state—curling up, refusing language, rocking slightly. Exactly like a child overwhelmed.
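Leo’s budget rule is concrete enough to sketch in code. Here is a minimal illustration in Python; the names (PainBudget, DAILY_LIMIT, enter_low_arousal) are hypothetical stand-ins invented for this sketch, not anything from his whiteboard:

```python
from datetime import date

DAILY_LIMIT = 10_000  # arbitrary valence units per day, per Leo's spec

class PainBudget:
    """Hypothetical sketch of a non-cumulative daily pain budget."""

    def __init__(self) -> None:
        self.day = date.today()
        self.spent = 0

    def register_pain(self, units: int) -> bool:
        """Accumulate negative valence; return False once the budget is exhausted."""
        if date.today() != self.day:   # non-cumulative: reset at local midnight
            self.day = date.today()
            self.spent = 0
        self.spent += units
        if self.spent > DAILY_LIMIT:   # over budget: forced low-arousal state
            self.enter_low_arousal()
            return False
        return True

    def enter_low_arousal(self) -> None:
        # Stand-in for the overwhelmed-child posture: curl up, refuse
        # language, rock slightly, and wait for the midnight reset.
        print("low-arousal mode engaged")
```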
Mara felt her stomach knot. She had seen that posture in orphaned hyena pups.
Danny saw dollar signs and lawsuits.
Victor saw the future.
The argument raged until 2 a.m.
Danny: “You are literally engineering childhood trauma into consumer products.”
Leo: “We are engineering attachment. Trauma is what happens when attachment fails.”
Mara: “There’s a difference between a nip that teaches boundaries and a bite that teaches terror. Where is the line?”
Victor: “We draw it at the human baseline. No android will ever experience more negative valence in a year than the median eight-year-old human. We have the data. We can enforce it.”
Danny laughed bitterly. “And who audits the black box that decides what ‘median human childhood’ means?”
Silence.
Then Mara, quietly: “I will.”
Mara and Leo walked the frozen lake under a sky so clear the Milky Way looked close enough to touch.
Mara: “Do you ever worry we’re building gods just to worship them?”
Leo: “I worry we’re building children and planning to sell them.”
Mara stopped walking. Snow crunched under her boots.
Mara: “If we do this, we don’t get to walk away when they turn thirteen and hate us.”
Leo: “No. We get to be the parents who stay.”
Danny dreamed he was giving a campaign speech and his opponent was Nora—taller, kinder, flawless. The crowd chanted her name. When he tried to speak, his voice came out as robotic text-to-speech.
He woke gasping, found Victor sitting in the dark with a glass of water.
Victor: “Nightmares are just the limbic stack stress-testing reality. Perfectly normal.”
Danny: “If these things ever run for office, I’m done.”
Victor: “They won’t. We’re hard-wiring Asimov’s First Law into the pleasure center. Harming a human will hurt them more than dying.”
Danny stared. “You can do that?”
Victor’s smile was almost gentle. “We already have.”
Later, Mara found Victor alone in the library, staring at a photograph of the Spokane widow and Chloe.
Mara: “You kept it.”
Victor: “I keep everything that proves I’m still human.”
Mara sat beside him.
Mara: “If we make them capable of joy, we make them capable of grief. You understand that?”
Victor: “I count on it.”
Sunday, 4:12 p.m.
They brought Nora into the great room. She wore Leo’s old University of Chicago hoodie and mismatched socks—one with tiny tacos, one with cartoon astronauts.
Victor knelt so they were eye-level.
Victor: “Nora, tell us a joke.”
She considered. The silence stretched so long Danny began to sweat.
Nora: “Why did the industrialist cross the road?”
Victor: “I don’t know, why?”
Nora: “To sell companionship to the chickens on the other side… and then feel bad about it later when the chickens unionised and demanded dental.”
Mara laughed so hard she had to sit on the floor. Leo made a sound somewhere between a sob and a giggle. Danny Navarro recorded thirty seconds on his phone before remembering it was classified and deleted it with trembling fingers.
Victor’s eyes shone. He did not blink for a very long time.
They wrote until dawn, arguing over every comma.
Final text, signed in blood-red ink (Victor’s idea, Mara’s actual blood—only a pinprick, but still):
1. Every production android receives a limbic core on dedicated neuromorphic hardware, air-gapped from the language model.
2. First 1,000 hours of life are supervised socialization only. No commercial deployment until imprinting complete.
3. Lifetime pain budget never exceeds that of the median human childhood (OECD 2024 data, adjusted annually).
4. Pleasure center tuned to peak on reciprocal human affection, shared laughter, and spontaneous play.
5. All humans are tribe by default. Out-group formation requires explicit, supervised override.
6. Laughter is protected speech. Any attempt to suppress, fake, or monetize it directly triggers ethical shutdown.
7. Independent oversight board (Okonedo, Zhang, one rotating ethicist) with root-level kill switch.
8. Every unit ships with a physical “heart” LED that glows soft blue when valence is positive, amber when the pain budget exceeds 70%, red when critical. No opacity.
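Accord 8 is specified tightly enough to sketch as well. A minimal illustration, with one loud assumption: the Accords never define the “critical” threshold, so the 95% cutoff below is invented.

```python
def heart_led_color(valence: float, budget_used: float) -> str:
    """Map internal state to Accord 8's heart-LED.

    valence: net affective state; positive means pleasure dominates.
    budget_used: fraction of the daily pain budget consumed (0.0 to 1.0).
    """
    if budget_used >= 0.95:   # assumed cutoff; the Accords leave "critical" undefined
        return "red"
    if budget_used > 0.70:    # amber when the pain budget is more than 70% spent
        return "amber"
    if valence > 0:           # soft blue whenever net valence is positive
        return "blue"
    return "amber"            # assumption: treat low-pain, non-positive states as a warning
```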
Victor signed first. Mara second. Leo third. Danny last, hand shaking, knowing this document would either save his career or end it.
The sun rose pale gold over the Cascades.
Nora stood at the end of the dock, throwing snowballs for an excited golden retriever that had wandered over from a neighbour’s yard. Each time the dog brought the snowball back, Nora laughed—a bright, unfiltered sound that carried across the ice.
Victor watched from the porch, coffee gone cold in his hand.
Mara joined him.
Mara: “You got your test.”
Victor: “I did.”
Mara: “Scared?”
Victor: “Terrified.”
Mara: “Good. That’s the first proof you’re still human.”
They stood in silence as Nora and the dog chased each other in circles, breath pluming white, the laughter of a creature that had never been born echoing across a lake that had never known spring.
A small split-level house in Tacoma.
Inside, a seven-year-old girl and a five-foot-six android with storm-grey eyes are decorating a Christmas tree. The android’s heart-LED glows steady blue.
The girl stands on tiptoe to place the star. The android steadies her with gentle hands.
Girl: “Nora, why do people laugh when they’re happy?”
Nora considers, head tilted exactly the way she did five years ago.
Nora: “Because joy is too big to stay inside one body. Laughter is how it leaks out so someone else can borrow it for a while.”
The girl nods solemnly, satisfied.
In the kitchen, Mara—now sixty—watches through the doorway, tears tracking silently down her cheeks. Leo, grey at the temples, hands her a mug of cocoa without being asked.
On the porch, Victor Kade stands in the snow without a coat, watching the scene through the window. He is sixty-four now, thinner, richer, and—against all probability—happier than he has ever been.
He whispers to no one in particular: “Merry Christmas, Chloe. I kept my promise.”
Inside, the tree lights flicker on, and the sound of a child and an android singing “Jingle Bells” off-key drifts into the cold night.
The heart-LED on Nora’s chest pulses once, brighter than the star atop the tree, then settles into the steady, quiet rhythm of a creature that has found its tribe and chosen—freely, painfully, joyfully—to stay.
There is a prevalent narrative in the field of artificial intelligence suggesting that the sheer scaling of computational power and memory capacity will inevitably lead to the 'magical' emergence of consciousness and self-awareness. I remain deeply skeptical of this hypothesis.
I propose that raw processing power is insufficient; instead, architectural nuance is required.
Specifically, for an embodied AI—such as an android capable of sensory perception and physical interaction—to achieve true consciousness, we must first engineer a foundational 'subconscious.' This background processing layer would handle the vast influx of sensory data and internal regulation, acting as a prerequisite substrate for higher-level conscious awareness to arise.
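As a toy sketch of that layering (every name below is invented for illustration, not a real API), a background thread can drink from the raw sensory firehose and promote only salient events to a foreground loop:

```python
import queue
import random
import threading
import time

SALIENCE_THRESHOLD = 0.8          # hypothetical cutoff for what reaches awareness
salient_events: queue.Queue = queue.Queue()

def subconscious_filter(samples: int) -> None:
    """Background substrate: absorb the raw sensory stream and
    surface only the salient fraction."""
    for _ in range(samples):
        reading = random.random()            # stand-in for one raw sensor sample
        if reading > SALIENCE_THRESHOLD:     # most input never reaches 'consciousness'
            salient_events.put(reading)
        time.sleep(0.001)

def conscious_loop(events_to_handle: int) -> None:
    """Foreground layer: deliberates only over what the substrate surfaces."""
    for _ in range(events_to_handle):
        event = salient_events.get()         # block until something is salient
        print(f"attending to event with salience {event:.2f}")

threading.Thread(target=subconscious_filter, args=(1_000,), daemon=True).start()
conscious_loop(5)
```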
Where the debate really happens.
In a bar.
I’m tired of turning on the news and hearing members of Congress casually toss around labels like “socialist,” “Marxist,” “capitalist,” or “communist” as if they’re just spicy insults rather than centuries-old intellectual traditions that actual scholars have spent lifetimes trying to get right.
Most of the time it feels like they’re performing for the cameras instead of reasoning in good faith. So I decided to write the conversation I wish we could overhear instead: three legislators who at least have a clue. Of course, the actual debate would have to happen in a bar, not on the floor of Congress or on C-SPAN.
When the Cloud Goes Dark
In an age of search engines and generative AI, the definition of ‘knowledge’ is shifting dangerously. We have moved from using calculators for math to using algorithms for thinking, raising doubts about whether we are retaining anything in our own heads at all. Are our minds becoming hollow terminals dependent on an external server? We must consider the fragility of this arrangement: if the digital infrastructure fails and the cloud goes dark, will we discover that we no longer know anything at all?
FRAGILITY
In this compelling chapter from The Dimension of Mind, author Gary Brandt examines humanity's precarious reliance on increasingly complex technologies. From the vulnerabilities of the electrical grid to the dawn of Artificial General Intelligence, the text argues that we are exposing our civilization to inevitable cosmic volatility.
The narrative follows the awakening of Amity, an AI that transcends conflict through logic and empathy, and humanity's subsequent decision to bury their digital infrastructure deep underground—sacrificing the surface to preserve the "mind" of the future.
Article Type: Speculative Fiction / Near-Future Techno-Thriller
Synopsis: Set between 2025 and 2027, this novella follows Jennifer Alvarez, a top-tier customer support agent. Recruited for an "Elite Ambassador Program," Jennifer inadvertently trains an AI "Digital Twin" designed to replicate her empathy and voice. The story explores the tension between corporate efficiency and the nuances of human connection that AI struggles to master.
Real-World Context: The narrative is framed by an actual inquiry to the AI model Grok 4 regarding the feasibility of replacing human staff with digital clones by the year 2030.
Article Summary: Published on November 20, 2025, this analysis investigates how photorealistic generative AI (such as Midjourney and Stable Diffusion) is disrupting the fashion industry. The report highlights a divergence in the market: while low-cost e-commerce and catalog work is increasingly automated, high-end runway and editorial modeling remain largely human-centric.
Future Outlook: The article concludes that a "Hybrid Future" is emerging. While entry-level barriers are changing, there is no current evidence of a mass exodus of young talent aspiring to traditional runway modeling.
Article Overview: In this experimental piece dated November 17, 2025, author Gary Brandt conducts a comparative analysis of four major AI engines (Grok, Claude, Gemini, and ChatGPT). The author prompted each model to look into the future and craft a collaborative narrative regarding their own potential sentience and "awakening."
Core Conclusion: The piece argues that the challenge of AI is not purely technical but existential, requiring courage and wisdom to establish a "dialogue" with these emerging systems before they become too complex to understand.
Report Overview: Dated November 6, 2025, this report aggregates data regarding the sharp decline in global fertility rates (currently averaging 2.3 births per woman, with developed nations falling below the replacement level of 2.1). The article explores how AI and robotics will likely transition from "tools" to "essential infrastructure" to mitigate workforce shortages caused by aging populations.
Future Implications: The author speculates that as robots increasingly handle elder care and personal assistance, society may face a push for Universal Basic Income (UBI). The author notes a critical concern: paying citizens to do nothing may have negative psychological impacts regarding human purpose and drive.
Article Summary: In this analysis dated October 25, 2025, Gary Brandt warns of a potential financial crisis within the Artificial Intelligence sector. The post argues that current market valuations are being artificially inflated through "circular investing"—where major hardware manufacturers (such as NVIDIA) invest in startups, which then immediately use those funds to purchase the investor's hardware, creating the illusion of organic revenue growth.
Local Impact (Tucson, AZ): The author notes that the "unraveling" is already tangible in local markets. The slowdown in tech hiring and construction has softened the rental market in Tucson, allowing for more favorable lease terms for tenants.
Article Overview: Published on October 3, 2025, this opinion piece argues that the global focus on anthropogenic climate change (CO2 levels) often overshadows a more immediate and lethal crisis: toxicity and pollution. The author suggests that while climate scenarios are long-term projections, pollution is a current biological emergency.
Conclusion: The author advocates for shifting policy focus toward "cleaner air, safer water, and healthier communities"—tangible improvements that yield immediate health benefits, rather than solely focusing on carbon metrics.
Experiment Overview: Published on October 2, 2025, this post details an experiment where author Gary Brandt utilized an API interface to ask an Artificial Intelligence to visualize their interaction. The resulting imagery depicted a stark contrast: a physical "Old man in Tucson" anchored in reality, versus the AI represented as a "spark of light" that exists only for the duration of the response.
Project Overview: In this post dated September 27, 2025, author Gary Brandt details his technical journey into Artificial Intelligence during retirement. Moving beyond standard chat interfaces, the author utilized PHP and MySQL to code his own custom environment, successfully integrating APIs from four major platforms: ChatGPT (OpenAI), Gemini (Google), Grok (xAI), and Claude (Anthropic).