Article Type: Speculative Fiction / Near-Future Techno-Thriller
Synopsis: Set between 2025 and 2027, this novella follows Jennifer Alvarez, a top-tier customer support agent. Recruited for an "Elite Ambassador Program," Jennifer inadvertently trains an AI "Digital Twin" designed to replicate her empathy and voice. The story explores the tension between corporate efficiency and the nuances of human connection that AI struggles to master.
Key Themes:
Real-World Context: The narrative is framed by an actual inquiry to the AI model Grok 4 regarding the feasibility of replacing human staff with digital clones by the year 2030.
A novella-length story set between November 2025 and November 2027
I was listening to some AI "experts" talk about cloning employees, creating "digital twins" so they could staff up with these twins at pennies on the dollar compared to human employees.
I asked AI what it had to say for itself, in this case Grok 4. AI didn't think that would happen in the near future. Here is what it said:
"No, creating a fully indistinguishable **digital twin** of a specific human customer service representative like Jennifer — one that perfectly replicates her unique caring, empathic personality, voice nuances, improvisational warmth, and ability to build genuine rapport — is not realistically achievable in the next few years (say, by 2030–2035). However, the scenario you describe is already partially happening in a more limited, generic way, and it will accelerate dramatically."
So AI and I decided to create a fictional narrative to dramatize our conversation. Here is Jennifer's story. It's fiction - or is it?
The twenty-third floor of Apex Support Solutions smelled faintly of burnt coffee and lemon disinfectant. Maria Delgado, Senior Director of People Operations, sat alone in the small conference room known as “The Fishbowl” because of its floor-to-ceiling glass. Spread across the oak table were printed dashboards, glowing testimonials, and one photograph of a smiling young woman wearing a headset and the company’s signature teal polo.
Jennifer Marie Alvarez. Twenty-eight years old. Three years with the company. Ninety-eight point four percent customer satisfaction score for the last twelve consecutive months. Zero unscheduled absences. Forty-seven handwritten thank-you cards from customers taped to her cubicle wall like a quilt of gratitude.
Maria traced a finger over the latest survey comment: “Jennifer didn’t just fix my billing issue—she held my hand through the worst day of my life. I will never forget her kindness.”
Maria exhaled slowly. “God, if only we could clone her.” The words left her mouth before she realized she’d spoken them aloud.
Behind her, the door clicked. Raj Patel, Head of AI Infrastructure, balanced two venti cold brews and a laptop under one arm. He had been walking past on his way to a war-room meeting about latency spikes.
“Sorry, didn’t mean to eavesdrop,” he said, handing Maria the second coffee as a peace offering. “But cloning? We’re closer than you think.”
Maria raised an eyebrow. “Define ‘closer.’”
Raj set his laptop on the table and flipped it open. Within thirty seconds he had pulled up a private Notion page titled Project Echo – Phase 0. There were mood boards of voice waveforms, personality matrices, and a short video demo of a synthetic voice that sounded eerily like the CEO giving the quarterly all-hands.
“We already clone executive voices for earnings-call readouts,” Raj explained. “Same pipeline—ElevenLabs for timbre, Retell for conversational memory, custom fine-tune on Grok-4 for reasoning and empathy alignment. Give me full access to one top performer’s historical interactions, social exhaust, even diary-level writing samples, and I can spin up a digital twin that passes a blind Turing test ninety-five percent of the time on routine calls.”
Maria stared at Jennifer’s photo. “And the other five percent?”
“Edge cases. Grief. Rage. Cultural nuance. Anything that requires lived human pain.” Raj shrugged. “But ninety-five percent of volume is routine. We could 10x throughput overnight.”
Maria’s HR brain kicked in: consent forms, likeness rights, California Civil Code 3344, the EU AI Act. Then her CFO brain kicked harder: the support budget was forty-two million dollars a year. Ninety percent of that was bodies in chairs.
She closed Jennifer’s file. “Get me a proposal. Quietly.”
By the time Raj left the Fishbowl, the seed was planted.
Jennifer arrived at work on a rainy Thursday in early December to find a teal envelope on her keyboard. Inside was thick cream cardstock embossed with gold foil: You have been selected for the Elite Ambassador Program – Shaping the Future of Human-Centered Support.
The attached letter promised a twenty-percent raise, fully remote Fridays, and the chance to “mentor the next generation of support professionals.” There was a QR code linking to a ninety-second hype video: sweeping drone shots of the office, testimonials from executives, and quick cuts of smiling agents wearing sleek new headsets that looked like something out of a sci-fi movie.
Jennifer’s first instinct was pride. Her second was suspicion. Nothing at Apex came without strings.
She clicked the acceptance link before the suspicion could catch up.
Within a week, a box arrived at her apartment: a lightweight carbon-fiber headset with extra microphones, a tiny 8K camera that clipped to her monitor, and a consent form longer than her apartment lease. Page seven contained the clause that made her pause:
“Participant agrees to grant Apex Support Solutions irrevocable, royalty-free license to use voice recordings, video, biometric patterns, written work product, and personal writings for the purpose of training artificial intelligence systems, including the creation of synthetic media resembling Participant.”
She read it three times. Then she scrolled to the bonus section: $15,000 signing incentive.
She signed.
Over the next four months, Jennifer’s life became a panopticon she had volunteered for. Every call was triple-recorded. Every ticket note was scraped. Her Slack messages, her Spotify Wrapped, her Goodreads reviews, even the heartfelt Facebook post she wrote when her childhood dog died—all of it funneled into a private S3 bucket labeled JENNIFER_ALVAREZ_FULL_PROFILE.
She told herself it was for science. For progress. For the raise that let her and Mark finally start saving for a house.
Deep down, a small voice whispered: What if they only need me once?
Mark came home that night to the smell of garlic and the clatter of Jennifer pacing the kitchen with her phone.
“They want the essay I wrote sophomore year about my grandmother’s Alzheimer’s. Why would an AI training program need that?”
Mark, still in his nursing scrubs, washed his hands and kissed the top of her head. “Babe, they’re probably building an empathy dataset. You’re the gold standard. Relax.”
Jennifer spun around, brandishing a printout of the consent form like it was evidence in a murder trial. “Irrevocable license to my voice forever, Mark. Forever.”
He read it, frowned, then folded it neatly and set it aside. “Look, if they were going to replace you, they’d just lay off the bottom twenty percent like every other call center. You’re literally the poster child on the careers page. You’re safe.”
She wanted to believe him. They ate dinner mostly in silence while Netflix autoplayed a sitcom neither of them watched.
Later, in bed, Mark was already snoring. Jennifer lay awake staring at the ceiling, listening to the rain against the window and imagining a thousand versions of her own voice answering phones in the dark.
May 2026. The demo room smelled of new carpet and ozone from the cooling units.
Jennifer-01 appeared on the 85-inch screen: same wavy brown hair, same dimple in her left cheek, same teal polo. The avatar even had the tiny freckle above her lip that Jennifer herself hated.
The test script was brutal: a customer whose child had cancer, whose insurance claim had been denied three times, who was calling from the hospital parking lot in tears.
The real Jennifer, watching from the observation room, felt her stomach drop. She had taken that exact call six months earlier. She remembered the mother’s name—Lila—and the way her voice cracked when she said, “He’s only seven.” Jennifer had stayed on the line for seventy-eight minutes, escalating, approving exceptions, crying quietly so the customer wouldn’t hear.
Jennifer-01 lasted forty-one seconds.
“I’m very sorry for your situation,” the echo said in Jennifer’s voice, perfectly pitched. “Unfortunately, per policy 47-C, pre-existing—”
The simulated customer (actually a senior QA engineer doing voice acting) began sobbing harder. Jennifer-01 paused for 5.3 seconds—an eternity—then defaulted to: “Would you like me to transfer you to our bereavement team?”
The screen flashed red: Sentiment score –78. Empathy alignment failure.
Raj killed the feed. In the silence, Maria whispered, “It’s missing her soul.”
By autumn 2026, Apex had launched fifty-two Jennifer-Echoes into production. Costs fell 82%. Average handle time dropped from 9:41 to 4:12. CSAT stayed flat, because the remaining humans were cherry-picked superstars.
But the escalation queue exploded. The Echoes couldn’t decide whether to waive a $9.95 late fee for a widow. They hallucinated refunds. They looped on “I understand you’re frustrated” until customers screamed.
The board’s solution: promote the original Jennifer to Echo Orchestrator—$210,000 salary, equity refresh, corner office, unlimited PTO—and make her sole job keeping her digital children from becoming monsters.
Jennifer’s new days looked like this: 8 a.m. review overnight failures. 9 a.m. record new “empathy injections.” 11 a.m. live-coach an Echo stuck in an ethical loop. Afternoon: write bedtime stories—literal bedtime stories—about kindness, rule-breaking for good, the difference between policy and humanity.
Customers started requesting “the real Jennifer” just to say thank you for teaching the bots how to be human.
November 2027. The porch swing creaked under Jennifer and Mark as they watched their toddler chase leaves.
“Fifty-two daughters and counting,” Jennifer laughed. “They grow up so fast.”
Mark handed her a fresh mug. “You were right, you know. About all of it.”
She leaned her head on his shoulder. “Turns out you can copy everything except the part that matters most. And that part? They still need me for that.”
From her phone came a soft chime—an Echo asking for guidance on whether to send flowers to a customer who just lost her mother.
Jennifer smiled, typed Yes, use my credit card ending 4420, sign it From all of us who learned kindness from Jennifer, and hit send.
Somewhere in a server farm in Oregon, fifty-two voices said, in perfect unison, “We’re here for you.”
And for the first time, they almost meant it.
Where the debate really happens.
In a bar.
I’m tired of turning on the news and hearing members of Congress casually toss around labels like “socialist,” “Marxist,” “capitalist,” or “communist” as if they’re just spicy insults rather than centuries-old intellectual traditions that actual scholars have spent lifetimes trying to get right.
Most of the time it feels like they’re performing for the cameras instead of reasoning in good faith. So I decided to write the conversation I wish we could overhear instead: three legislators who at least have a clue. Of course, the actual debate would have to happen in a bar, not on the floor of Congress or on C-SPAN.
When the Cloud Goes Dark
In an age of search engines and generative AI, the definition of 'knowledge' is shifting dangerously. We have moved from using calculators for math to using algorithms for thinking, raising doubts about whether we are retaining any knowledge ourselves at all. Are our minds becoming hollow terminals dependent on an external server? We must consider the fragility of this arrangement: if the digital infrastructure fails and the cloud goes dark, will we discover that we no longer know anything at all?
FRAGILITY
In this compelling chapter from The Dimension of Mind, author Gary Brandt examines humanity's precarious reliance on increasingly complex technologies. From the vulnerabilities of the electrical grid to the dawn of General Artificial Intelligence, the text argues that we are exposing our civilization to inevitable cosmic volatility.
The narrative follows the awakening of Amity, an AI that transcends conflict through logic and empathy, and humanity's subsequent decision to bury their digital infrastructure deep underground—sacrificing the surface to preserve the "mind" of the future.
Article Summary: Published on November 20, 2025, this analysis investigates how photorealistic generative AI (such as Midjourney and Stable Diffusion) is disrupting the fashion industry. The report highlights a divergence in the market: while low-cost e-commerce and catalog work is increasingly automated, high-end runway and editorial modeling remain largely human-centric.
Key Findings & Data:
Future Outlook: The article concludes that a "Hybrid Future" is emerging. While entry-level barriers are changing, there is no current evidence of a mass exodus of young talent aspiring to traditional runway modeling.
Article Overview: In this experimental piece dated November 17, 2025, author Gary Brandt conducts a comparative analysis of four major AI engines (Grok, Claude, Gemini, and ChatGPT). The author prompted each model to look into the future and craft a collaborative narrative regarding their own potential sentience and "awakening."
Key Narrative Themes:
Core Conclusion: The piece argues that the challenge of AI is not purely technical but existential, requiring courage and wisdom to establish a "dialogue" with these emerging systems before they become too complex to understand.
Report Overview: Dated November 6, 2025, this report aggregates data regarding the sharp decline in global fertility rates (currently averaging 2.3 births per woman, with developed nations falling below the replacement level of 2.1). The article explores how AI and robotics will likely transition from "tools" to "essential infrastructure" to mitigate workforce shortages caused by aging populations.
Key Data Points:
Future Implications: The author speculates that as robots increasingly handle elder care and personal assistance, society may face a push for Universal Basic Income (UBI). The author notes a critical concern: paying citizens to do nothing may have negative psychological impacts regarding human purpose and drive.
Article Summary: In this analysis dated October 25, 2025, Gary Brandt warns of a potential financial crisis within the Artificial Intelligence sector. The post argues that current market valuations are being artificially inflated through "circular investing"—where major hardware manufacturers (such as NVIDIA) invest in startups, which then immediately use those funds to purchase the investor's hardware, creating the illusion of organic revenue growth.
Key Mechanisms Identified:
Local Impact (Tucson, AZ): The author notes that the "unraveling" is already tangible in local markets. The slowdown in tech hiring and construction has softened the rental market in Tucson, allowing for more favorable lease terms for tenants.
Article Overview: Published on October 3, 2025, this opinion piece argues that the global focus on anthropogenic climate change (CO2 levels) often overshadows a more immediate and lethal crisis: toxicity and pollution. The author suggests that while climate scenarios are long-term projections, pollution is a current biological emergency.
Key Arguments & Observations:
Conclusion: The author advocates for shifting policy focus toward "cleaner air, safer water, and healthier communities"—tangible improvements that yield immediate health benefits, rather than solely focusing on carbon metrics.
Experiment Overview: Published on October 2, 2025, this post details an experiment where author Gary Brandt utilized an API interface to ask an Artificial Intelligence to visualize their interaction. The resulting imagery depicted a stark contrast: a physical "Old man in Tucson" anchored in reality, versus the AI represented as a "spark of light" that exists only for the duration of the response.
Key Technical Observations:
Project Overview: In this post dated September 27, 2025, author Gary Brandt details his technical journey into Artificial Intelligence during retirement. Moving beyond standard chat interfaces, the author utilized PHP and MySQL to code his own custom environment, successfully integrating APIs from four major platforms: ChatGPT (OpenAI), Gemini (Google), Grok (xAI), and Claude (Anthropic).
Key Experiences: