On September 26, 1983, Stanislav Petrov, a lieutenant colonel in the Soviet Air Defence Forces, was sitting in a bunker outside Moscow when the early warning system told him the United States had launched five nuclear missiles.

His job was straightforward. Verify. Report up the chain. Up the chain meant a retaliatory strike. A retaliatory strike meant the end of basically everything.

He didn't report it.

Five missiles didn't make sense for a first strike. The system was new and he didn't fully trust it. The satellite detection hadn't been confirmed by ground radar. So he called it a malfunction.

He was right. The system had misread sunlight reflecting off clouds.

The system worked exactly as designed. The alarm fired. The protocol was there. The chain of command was ready. Every piece was in place for the prescribed sequence of steps to end in launch. Petrov was the one part of the system that didn't follow the plan. He used judgment instead of procedure.

I think about this every time someone talks about removing human judgment from a decision to make it more objective. More consistent, sure. Faster, absolutely. But "objective" is doing a lot of heavy lifting in that sentence. The Soviet early warning system was objective. It objectively detected sunlight and objectively classified it as five incoming nuclear warheads.

We build systems to replace organic doubt because doubt is inefficient. It is. But systems don't look at their own output and think “this doesn't feel right.” Even the best machine-learning systems are trained on data that carries the biases of its collection. The LLM-based tools now finding their way into support, development, and other consequential decision-making are built on those same biased datasets. And it seems like there's a story every day of someone running up a huge API bill or suffering a major security incident because of a new “agentic” system that chains LLM prompts and tool calls.
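If I were wiring one of these loops up myself, the least I'd want is a hard spending cap that trips mid-run instead of on the invoice. Here's a minimal sketch in Python; every name in it (`Reply`, `call_llm`, the costs) is a hypothetical stand-in for whatever your provider actually exposes:

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    cost: float        # dollars actually spent on this call
    is_final: bool     # did the model declare itself done?

class BudgetExceeded(Exception):
    """Raised when the run has spent past its hard cap."""

def call_llm(prompt: str) -> Reply:
    # Placeholder: swap in your provider's real client here.
    return Reply(text="stub answer", cost=0.01, is_final=True)

def run_agent(task: str, budget_usd: float = 5.00, max_steps: int = 20) -> str:
    spent = 0.0
    prompt = task
    for step in range(max_steps):
        reply = call_llm(prompt)
        spent += reply.cost
        if spent > budget_usd:
            # Trip the breaker mid-run instead of discovering it on the bill.
            raise BudgetExceeded(f"${spent:.2f} spent by step {step + 1}")
        if reply.is_final:
            return reply.text
        prompt = reply.text  # naive chaining: feed the output back in
    raise BudgetExceeded(f"no final answer within {max_steps} steps")
```

The breaker is crude, but crude is the point: the loop can't outspend the number you wrote down before it started.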

My newsletter provider has one of these, actually, built into its site builder. My archive page had broken (it stopped showing new issues) and resisted every manual attempt at a fix, which all but required me to engage the agent. It worked, and I'll credit Beehiiv for putting in the work to make their little agentic web builder work well. But what happens if that agent goes rogue and runs up a huge bill? Am I on the hook?

Great mysteries of life.

I don't want to tie this in a bow. "Trust your gut" is too simple. "Systems are bad" is obviously wrong. The thing I actually take from Petrov is smaller and harder to implement: sometimes the most important thing a person can do is just not do the thing they're supposed to do. Not out of rebellion. Not to make a point. Just because something feels wrong, and that feeling deserves a few extra minutes before anyone pushes a button.

We don't build systems that accommodate that pause. Petrov had to break one to create it.

Maybe we should design the pause in from the start. Maybe our systems should always have a way to bring a person into the loop when they fail.
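What would designing the pause in actually look like? Here's one toy version, and I want to be clear it's mine, not any vendor's real API: a gate that routes irreversible actions through a person, where silence defaults to "don't."

```python
# Toy sketch of a designed-in pause: irreversible actions must be
# confirmed by a person, and no answer defaults to not acting.
# All names here are hypothetical.

IRREVERSIBLE = {"send_email", "delete_page", "charge_card"}

def ask_human(action: str, detail: str) -> bool:
    answer = input(f"About to {action}: {detail!r}. Proceed? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, detail: str) -> None:
    if action in IRREVERSIBLE and not ask_human(action, detail):
        print(f"Paused: {action} skipped pending human judgment.")
        return
    print(f"Executing {action}: {detail}")

execute("delete_page", "archive")   # stops and asks first
execute("preview_page", "archive")  # reversible, runs straight through
```

The detail that matters is the default: when nobody answers, nothing fires. That's Petrov's pause as a design decision instead of an act of insubordination.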

That's it for this week. If you know a story about someone who made things a little better by hesitating when they were supposed to act, I'd love to hear it. Reply and tell me, and I might share it in a future issue.

-Flux 🦊
