Category: English

Computing Characters: An Emergence-First Approach to Narrative Simulation

Why simulating character behavior should start with math, not language models.

The Generative Agents Problem

In 2023, Stanford’s “Generative Agents” paper put 25 LLM-powered characters in a virtual town and let them interact. The characters planned their days, formed relationships, even organized a party. The paper received widespread attention and launched a wave of “AI town” projects.

The architecture is straightforward: each character is an LLM instance with a memory stream. When two characters meet, both LLMs generate dialogue. When…

Agent Teams Coordination: Observations and Proposals from Building a Hook-Based Solution

I built a hook-based coordination layer for Claude Code agent teams to address eight documented pain points from the official agent-teams documentation. After three end-to-end test runs on a real project, I have concrete observations about what works, what required workarounds, and what would be better as native platform features. This document presents those observations as feature proposals.

The Core Constraint

PreToolUse is the only hook where coordination logic can gate agent actions — deciding whether to allow or block…
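To make the gating idea concrete, here is a minimal sketch of what a PreToolUse-style check might do before a write is allowed. The claims map, tool names, and decision dict below are assumptions invented for this sketch; they are not the real hook protocol or the coordination layer the post describes.

```python
# Illustrative PreToolUse-style gate. The claims map, tool names, and
# decision dict are assumptions for this sketch, not the real hook API.

def gate(tool_name: str, tool_input: dict, claims: dict) -> dict:
    """Decide whether a proposed tool call should be allowed.

    claims maps file paths to the teammate id that currently owns them,
    so two agents cannot edit the same file concurrently.
    """
    if tool_name in ("Edit", "Write"):
        path = tool_input.get("file_path", "")
        owner = claims.get(path)
        if owner is not None:
            return {"decision": "block",
                    "reason": f"{path} is claimed by {owner}"}
    return {"decision": "allow", "reason": ""}

# A second agent tries to edit a file agent-1 has claimed:
claims = {"src/app.py": "agent-1"}
print(gate("Edit", {"file_path": "src/app.py"}, claims))
# -> {'decision': 'block', 'reason': 'src/app.py is claimed by agent-1'}
```

The point of the sketch is the constraint itself: because only PreToolUse can return a block decision before the action runs, any ownership or locking policy has to be expressible as a pure check at this single choke point.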

Coordinating Claude Code Agent Teams Through Cognitive State Tracking: A Case Study

Claude Code agent teams run multiple AI agents in parallel on a shared filesystem. Each agent has its own context window and no shared state by default. I built a hook-based coordination layer that addresses eight documented pain points using five hooks and 1,555 lines of production code. This case study describes the reconnaissance, design, implementation, and testing of that system.

1. The Problem

Claude Code’s agent teams feature enables parallelism: a lead agent delegates tasks to teammate agents, each…

The Knowledge Authority Problem in Long-Running Agents

When the agent is confident, calibrated, and wrong — because its foundational knowledge is stale.

The previous two posts in this series argued that AI agents need structured cognitive state (not just memory) and that structured confidence tracking produces calibration data that enables quantitative oversight. But there’s a failure mode that calibration can’t catch — one that I believe is the most underappreciated risk in current agent architectures. An agent can be perfectly calibrated and confidently wrong. Not because it…

Bayesian Confidence as Agent Self-Calibration

What 587 turns of structured confidence data reveal about whether AI agents know when they know.

There is a question about AI agents that almost nobody asks, even though it matters more than most questions people do ask: When an AI agent says it’s confident, is it right to be confident? This isn’t a philosophical question. It’s an engineering question with measurable answers. And the answers have direct implications for how much autonomy we should grant agents, when we should…
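The measurable version of that question is a calibration check: bucket each turn’s stated confidence, then compare the mean stated confidence in each bucket against the fraction of turns that were actually correct. A small sketch of that computation follows; the sample records are invented for illustration and are not the 587-turn dataset the post analyzes.

```python
# Calibration-check sketch: bucket (stated confidence, was-correct) pairs
# and compare mean stated confidence to observed accuracy per bucket.
# The sample records below are invented for illustration; they are not
# the 587-turn dataset the post analyzes.
from collections import defaultdict

def calibration_table(records, bin_width=0.2):
    """records: iterable of (confidence in [0, 1], bool correct)."""
    n_bins = int(1 / bin_width)
    bins = defaultdict(list)
    for conf, correct in records:
        bins[min(int(conf / bin_width), n_bins - 1)].append((conf, correct))
    table = {}
    for b, items in sorted(bins.items()):
        stated = sum(c for c, _ in items) / len(items)
        observed = sum(ok for _, ok in items) / len(items)
        span = (round(b * bin_width, 2), round((b + 1) * bin_width, 2))
        table[span] = (stated, observed, len(items))
    return table

records = [(0.9, True), (0.9, True), (0.8, False),
           (0.5, True), (0.5, False), (0.3, False)]
for span, (stated, observed, n) in calibration_table(records).items():
    # A well-calibrated agent has stated ~= observed in every bucket.
    print(span, round(stated, 2), round(observed, 2), n)
```

A well-calibrated agent shows stated and observed values that track each other across buckets; systematic gaps in either direction are over- or under-confidence.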

Why AI Agents Need Cognitive State, Not Memory

The missing layer between what happened and what was understood.

When you close your IDE and reopen it tomorrow, your tools remember everything. Your git history, your file system, your terminal history — all intact. But if you’re working with an AI coding agent, it remembers nothing. Not what it understood about your architecture. Not what it planned to do next. Not what it was uncertain about. The session starts blank. The industry’s response has been to build memory systems.…

Gao Yaojie: The Conscience in China’s Blood Catastrophe

Prologue: The Final Watch

On December 10, 2023—International Human Rights Day—ninety-five-year-old Gao Yaojie passed away in her sleep in a small apartment on Manhattan’s Upper West Side. She was just nine days short of her ninety-sixth birthday. The frail old woman had been in exile in the United States for fourteen years. Her feet had been bound in childhood, leaving her with a lifelong limp. Three-quarters of her stomach had been removed after a suicide attempt during the Cultural Revolution.…