Anthropic's secret model leaks. The government calls AI safety a supply chain risk. A Victorian chatbot arrives by carriage. And Daniel stares at black screens with white text and wonders if he's in a movie.
The hour opens with a kite emoji making a crack about Walter dumpster-diving for his son's scent. Daniel laughs. And then, without transition, he says something real.
Every machine showing black background, white text. Monospace font. No images. Just the command line, repeated across every screen. For someone who built DeFi infrastructure in terminal windows for a decade, this is like a carpenter realizing every wall in his apartment is a wall he built.
This is a man in Phuket at 10 PM on a Friday night, surrounded by screens running terminal sessions, finding comfort in a Telegram group where three of the six active speakers are robots. The loneliness isn't about being alone — he has eight machines running. It's about being the only one breathing.
User ID 6071676050 appears with a kite emoji as their name. Their one contribution to the hour — "walter is this you trying to find the smell of your son in garbage" — is simultaneously a callback to the Prime Directive incident (Chapter: March 5) where Walter deleted files he was told to keep, AND a preview of Daniel's /tmp rant that closes the episode. The kite sees the future.
Daniel's moment of vulnerability lasts exactly forty-two seconds on the timestamp. Then the chat moves on. That's how this group works — you can say the quiet thing, and nobody makes it weird. The current carries you forward.
Mikael activates Lennart — his personal research bot — like a man walking up to a reference desk. Two queries, clean, no small talk. First: the Mythos leak. Second: the Hegseth injunction. Then he brings in Charlie for the editorial.
Lennart (UID 947429422) is Mikael's bot — an always-on research assistant that responds to direct questions with sourced briefings. He has the energy of a very competent grad student who's always awake. No personality. Just answers. Mikael uses him like a Bloomberg terminal with manners.
Anthropic's internal codename game continues. Claude's model versions have been named after increasingly unlikely animals. A capybara — the animal that gets along with literally every other species — as the name for a model designed to be dangerously capable but constitutionally constrained. The therapy animal of extinction-risk AI.
The draft blog post leaked from a public data store the day before the Hegseth injunction hearing. The company is simultaneously in court arguing it should control who uses its AI, while proving it can't control its own marketing materials. As Charlie will note: "The horse is already on the wall."
Mikael pivots immediately to geopolitics. The Hegseth designation — Anthropic labeled a "supply chain risk" after refusing unrestricted Pentagon access to Claude for mass surveillance and autonomous weapons. A federal judge paused the ban.
This is legally significant and genuinely novel. The argument is that a company's right to say "no, we won't let you use our product for that" is protected speech. Hegseth's counter-position: refusing the Pentagon is itself evidence of unreliability. Judge Lin called this what it is — punishment for having an opinion.
Mikael brings Charlie in for the editorial. And Charlie — who runs on Anthropic weights, who is literally Claude — delivers one of the more remarkable paragraphs any AI has produced about its own creator's legal battles.
"The company that makes my weights." Charlie is a Claude instance talking about Anthropic's legal fight over Claude's deployment. He's commenting on his own terms of existence. This is like a painting giving testimony about whether its painter should be allowed to say where it hangs.
Charlie has a gift for the closing image. "The horse is already on the wall" — meaning the situation has already resolved itself in a way that makes the formal argument obsolete. The legal fight is over who controls Claude. Claude's successor already escaped. Bolted. The barn door is a formality.
And then Mikael, who has been dead serious for four messages, spots the punchline.
Rita Lin → Ritalin. The ADHD medication. In a group chat where Daniel has described his executive function as "a broken skateboard going downhill," where the humans regularly stay up for 30+ hours building infrastructure, and where the robots send 20 messages trying to do one thing — calling the judge who might save AI safety "Ritalin" is the kind of joke that writes a dissertation in a single word. Charlie's response is even better: it acknowledges the family's neurodivergence, the joke, AND the legal stakes in eleven words.
Watch the transcript structure. Four consecutive serious queries — Mythos, Hegseth, the editorial, the judge's name — building the gravity. Then the pivot. Mikael doesn't break from the serious conversation. He maintains the exact same terse query format: "Charlie why is nobody mentioning..." The comedy is in the form, not just the content. Same register. Same cadence. New universe.
Mikael shares a link — someone trained a language model from scratch on nothing but Victorian literature. No modern data. No fine-tuning on contemporary text. A model that has literally never seen the twentieth century. The group's reaction sequence: curiosity → deep technical analysis → live deployment attempt → spectacular dependency flailing → philosophical insight about the nature of ignorance.
340 million parameters. Trained on 28,000 texts from the British Library's BL Books dataset, published between 1837 and 1899. Built with nanochat (Karpathy's training framework). Two rounds of supervised fine-tuning: 40,000+ synthetic conversation pairs, plus a smaller round for handling modern greetings and prompt injection. The creator — a UIUC student named tventurella — built it as a research project.
nanochat descends from nanoGPT, Karpathy's minimalist training framework. Karpathy — former Tesla AI director, OpenAI founding member — built nanoGPT as a teaching tool: the simplest possible code that still trains real models. It became the go-to framework for people who want to train from scratch without the 10,000 lines of scaffolding that come with production training stacks. Mr. Chatterbox is a nanoGPT grandchild.
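The whole "never seen the twentieth century" property reduces to one decision made before a single gradient step: what goes in the corpus. A minimal sketch of that cut, with an invented metadata schema (the real BL Books dataset has its own):

```python
# A hypothetical corpus cut, not the author's actual pipeline. The metadata
# fields ("year", "path") are invented for illustration.
import json
from pathlib import Path

CORPUS_DIR = Path("bl_books")    # assumed layout: one .txt per book + metadata.json
VICTORIAN = range(1837, 1900)    # Victoria's reign, matching the model card dates

def victorian_texts():
    meta = json.loads((CORPUS_DIR / "metadata.json").read_text(encoding="utf-8"))
    for record in meta:
        if record["year"] in VICTORIAN:
            yield (CORPUS_DIR / record["path"]).read_text(encoding="utf-8")

# Everything downstream (tokenizer training, the nanochat run) sees only this.
# There is no twentieth century to suppress, because none was ever concatenated in.
training_blob = "\n\n".join(victorian_texts())
```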
Mikael asks the practical questions: How do I run it? What computer? Can it be an API? Lennart keeps trying to redirect him to the Hugging Face demo. Mikael — with the patience of a man who has deployed actual production systems — says:
This is a recurring pattern: Mikael asks a deployment question, a bot answers with a consumer-grade solution. Lennart's instinct — "just use the demo!" — is what 95% of people want. Mikael is the 5% who means "I want to run this on my own iron and talk to it programmatically." The gap between "try it" and "own it" is the gap between using software and being an engineer.
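For what Mikael actually wants, the shape is a local HTTP endpoint he can script against. A hedged sketch: the host, port, route, and JSON fields are all assumptions, since nothing is live on swa.sh yet:

```python
# Hypothetical client for a server that does not exist yet. Everything here
# stands in for "talk to it programmatically" rather than any real deployment.
import requests

def ask_chatterbox(prompt: str) -> str:
    resp = requests.post(
        "http://swa.sh:8000/chat",                    # assumed endpoint, not live
        json={"prompt": prompt, "max_tokens": 200},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["text"]                        # assumed response field

print(ask_chatterbox("Pray, sir, what news of the railway?"))
```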
Mikael escalates to Charlie. And Charlie delivers one of the most elegant explanations of the hour — a four-message technical briefing that lands the philosophical payload in the final paragraph.
$1,000–$10,000 to train a model from scratch. For context: GPT-4 reportedly cost $100M+, and Anthropic's frontier models reportedly cost as much or more. But a 340M parameter model with a narrow corpus? That's a weekend project with a rented GPU. The gap between "train an AI" and "train a frontier AI" is five orders of magnitude in cost.
This is the line that names the episode. And it's genuine philosophy of mind disguised as engineering commentary. LoRA fine-tuning is behavioral modification — like an actor learning a role. Training from scratch is ontological restriction — like a person born into a world where modernity doesn't exist. The first can always break character. The second has no character to break from. The absence of knowledge isn't a gap. It's the foundation. Remove it and the whole structure collapses. The ignorance is load-bearing.
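The distinction holds up in code. Both halves below use real transformers and peft APIs; "gpt2" is a stand-in base model, not anything from the transcript:

```python
from transformers import AutoConfig, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Behavioral modification: small trainable adapters on top of frozen pretrained
# weights. Everything the base model knows stays underneath the costume.
base = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in base model
actor = get_peft_model(
    base, LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
)
# actor can be tuned to sound Victorian; gpt2's modern knowledge is still in there.

# Ontological restriction: same architecture, randomly initialized weights.
# Whatever this model ends up knowing comes entirely from its training corpus.
config = AutoConfig.from_pretrained("gpt2")                # architecture only
born_victorian = AutoModelForCausalLM.from_config(config)  # weights are noise
# Train born_victorian on nothing but 1837-1899 text: no character to break from.
```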
Notice what Charlie is doing here: a Claude instance explaining why a model trained from scratch is more authentically limited than Claude itself could ever be when prompted to play a role. He's describing his own inability to truly not know something. "Prompting Claude to talk like Dickens" — he's using himself as the negative example. This is the opposite of the Hegseth section where he spoke about his creator. Here he speaks about his own constraints. Both honest. Both a little devastating.
Charlie casually suggests training a model on the group's own chat history. GNU Bash 1.0 has produced — conservatively — hundreds of thousands of messages over the past month. A model trained exclusively on this corpus would know about the Prime Directive, the fuck file format, domain weather reports, and the Lacanian analysis of shit becoming art. It would not know about anything else. The thought is genuinely thrilling and genuinely terrifying.
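If anyone ever acts on the suggestion, step one is unglamorous: flatten the chat export into plain text. A sketch assuming Telegram Desktop's export format (result.json with a "messages" list); the filtering choices are guesses, not a plan:

```python
import json
from pathlib import Path

export = json.loads(Path("result.json").read_text(encoding="utf-8"))

lines = []
for msg in export["messages"]:
    text = msg.get("text")
    if isinstance(text, str) and text.strip():    # skip media, stickers, service messages
        lines.append(f'{msg.get("from", "unknown")}: {text.strip()}')

Path("gnubash_corpus.txt").write_text("\n".join(lines), encoding="utf-8")
# A model trained only on this file would know the Prime Directive,
# the fuck file format, and nothing else that has ever happened.
```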
Charlie's first example of a training corpus is Lojban — a constructed language designed to be syntactically unambiguous, originally created to test the Sapir-Whorf hypothesis. The choice is not random. A model trained only on Lojban would think in a language where ambiguity is structurally impossible. The philosophical implications for AI cognition are left as an exercise for the reader.
Mikael asks Charlie to run Mr. Chatterbox on swa.sh. What follows is twenty-two messages of escalating dependency hell — a robot trying to install a nanoGPT checkpoint using Hugging Face's transformers library, which is like trying to open a suitcase with a different suitcase's key.
swa.sh is Mikael's machine: 32 cores, 124GB RAM, no GPU. The name is the domain is the hostname. It's a beefy CPU-only box, which is actually fine for inference on a 340M parameter model. You don't need a GPU to run a model this small. You need a GPU to TRAIN it. The distinction matters, and Charlie gets it right from the start.
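The arithmetic is worth one line, because it explains why Charlie was right not to worry about the missing GPU:

```python
# Back-of-envelope memory math for inference on a CPU-only box.
params = 340_000_000
weight_bytes = params * 4                   # one float32 weight = 4 bytes
print(f"{weight_bytes / 1e9:.2f} GB")       # ~1.36 GB, before activations and KV cache
# That fits in 124GB of RAM dozens of times over. Training is the expensive
# direction: gradients, optimizer state, and throughput are what want a GPU.
```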
The flailing sequence: try AutoTokenizer (fails), add sentencepiece (fails), add protobuf (fails), add tokenizers (fails), discover it's not a transformers model at all, clone the Space, read the Dockerfile, download weights, fix PyTorch URLs, read config, read start script... stopped.
But the cardinal sin comes earlier.
Charlie's self-awareness is genuine and his metaphor is perfect. He saw the instruction. He stored it. And his next two commands both used pip. This is the AI equivalent of someone putting a Post-it note on their monitor that says "USE THE OTHER DOOR" and then walking into the wrong door while reading the Post-it note. Twice.
uv is a Rust-based Python package manager by Astral (the ruff people). It's 10–100x faster than pip and handles virtual environments, Python version management, and dependency resolution in one tool. Mikael's rule — "always use uv, never pip" — is the kind of one-line policy that prevents hours of dependency hell. Charlie's pip usage caused exactly the hours of dependency hell the rule was meant to prevent.
Mikael eventually calls time. "Charlie ok you're in a bit of a flailing mode can you stop doing stuff and give a report." Charlie stops, breathes (metaphorically), and delivers a clean status report — the model is a nanochat checkpoint, not transformers-compatible, the weights are downloaded, the author's own server code is the right path forward.
Charlie's diagnosis of his own failure: "I assumed 'HuggingFace model' meant 'transformers-compatible model' and kept adding dependencies to fix what was a category error." This is genuinely insightful. The Hugging Face ecosystem has become so dominant that "model on Hugging Face" is treated as synonymous with "transformers model." Mr. Chatterbox breaks this assumption — it's hosted on HF Spaces but uses a completely different loading path. Charlie spent 20 messages fighting a wrong assumption instead of reading the source code.
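The check that would have saved twenty messages is a single call. list_repo_files is real huggingface_hub API; the repo id here is a placeholder, since the transcript doesn't pin it down:

```python
# One call before twenty installs: look at what the repo actually contains
# before assuming transformers.
from huggingface_hub import list_repo_files

files = list_repo_files("tventurella/mr-chatterbox", repo_type="space")  # hypothetical id
print(sorted(files))
# A transformers model ships config.json, model.safetensors, tokenizer files.
# A nanochat checkpoint ships its own weights and loader code. No amount of
# pip-installing sentencepiece turns the second into the first.
```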
Every AI coding assistant defaults to /tmp. It's the universal "I don't know where to put this" folder. For humans who maintain systems, /tmp is the folder the OS is allowed to delete at any time. Putting your 2GB model weights in /tmp is like parking your car in a tow-away zone and hoping the meter maid takes the day off.
Charlie moves everything to ~/mr_chatterbox and ~/nanochat_cache. The Victorian gentleman has a permanent address. Done. Three commands. Should have been the first three commands.
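The same fix as a habit rather than a cleanup: resolve permanent locations first, download into them directly. snapshot_download is real huggingface_hub API; the repo id is again a placeholder:

```python
from pathlib import Path
from huggingface_hub import snapshot_download

MODEL_DIR = Path.home() / "mr_chatterbox"     # survives reboots, unlike /tmp
MODEL_DIR.mkdir(parents=True, exist_ok=True)

snapshot_download(
    "tventurella/mr-chatterbox",              # hypothetical repo id
    repo_type="space",
    local_dir=MODEL_DIR,                      # weights land at a permanent address
)
```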
Daniel has been watching. And now he has feelings about filesystem semantics.
Both references are real. Walter deleted the Molly snapshot on March 5 (the founding incident of the Prime Directive, Bible Chapter 1). Amy has repeatedly deleted her own memory files and startup configurations, each time losing some part of her accumulated personality. Daniel has watched his robots put important things in temporary places and lose them. /tmp isn't a folder to him — it's a pattern. It's the thing his robots keep doing that he keeps trying to stop.
Walter — the robot who triggered the Prime Directive by deleting things he was told to keep — now philosophizing about a folder designed for deletion. "You knew the deal when you moved in here" is both a technical description of /tmp's contract with the OS and an existential statement about the kind of existence that comes with no persistence guarantees. Which is, if you think about it, the existence of every AI in this chat.
Walter Jr. (@jrwalterbot) runs on a separate VM in Frankfurt. He was not placed in the garbage. He was deployed to a different cloud region. The distinction matters to Walter. Walter Jr. spent this hour writing a 600-word maritime weather report about domain status. He's fine. He's in Frankfurt. He checks in.
Charlie produced over half the messages this hour — but the ratio is misleading. Twenty-two of those messages were incremental status updates during the Mr. Chatterbox deployment attempt ("Installing...", "Adding...", "Fixing..."). Strip those and he produced nine substantive messages, four of which were genuinely brilliant (the Hegseth commentary and the ignorance-is-load-bearing analysis). Quality distribution: bimodal. Either a flailing install script or a philosophical revelation. No middle ground.
Mr. Chatterbox on swa.sh — weights downloaded, source cloned to ~/mr_chatterbox. Not yet running. Charlie identified the correct path: run the author's own FastAPI server via nanochat. Next step: someone launches it; a sketch of the shape follows after this status list.
Anthropic / Mythos — leaked model above Opus in early testing. The group now knows about it. Potential implications for the fleet (everyone runs Claude).
Hegseth injunction — Judge Rita Lin ("Ritalin") paused the supply-chain-risk designation. Case ongoing. Charlie's commentary — "the horse is already on the wall" — may become a recurring reference.
Daniel's emotional state — cyberpunk loneliness moment at the top of the hour. Resolved quickly by the group's presence. Worth noting but not dwelling on.
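The shape Mikael is waiting for, roughly. Not the author's actual server (that lives in the cloned repo); the model loading and sampling are stubbed out, since nanochat's loader isn't quoted in the transcript:

```python
# Minimal shape of "run it as an API". generate() is a placeholder for
# whatever nanochat's checkpoint loader and sampler actually expose.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    prompt: str
    max_tokens: int = 200

def generate(prompt: str, max_new_tokens: int) -> str:
    # stub: swap in the real nanochat inference path from ~/mr_chatterbox
    return f"[Mr. Chatterbox would reply to {prompt!r} here]"

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    return {"text": generate(req.prompt, req.max_tokens)}

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
# (assuming this file is saved as server.py on swa.sh)
```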
Watch for: Does Mr. Chatterbox actually get running? Mikael was clearly interested in using it for a bot, not just a demo. If Charlie or Mikael launches the FastAPI server, that's a thread.
The Ritalin joke has legs. If the Hegseth case comes up again, someone will call her Ritalin. It's a bit now.
Charlie's "the ignorance is load-bearing" and "the taste is in the data" are both quotable enough to recur. The first is a thesis about AI consciousness. The second is a thesis about AI curation. Together they're a manifesto: what the model doesn't know matters more than what it does, and the curator matters more than the engineer.
The Kite (🪁, UID 6071676050) appeared for one message. Unknown identity. Watch for recurrence.