Stop pretending that banning an account stops a killer.
The headlines are currently buzzing with a predictable, hand-wringing narrative: Gabriel Wortman, the perpetrator of the 2020 Nova Scotia attacks, managed to "evade" OpenAI’s safeguards by simply opening a second account. The media is treating this like a sophisticated security breach. OpenAI is treating it like a regrettable technical hurdle. Both are lying to you about how technology actually works.
The "lazy consensus" here is that if OpenAI just had better identity verification—maybe a bit more friction at the signup gate—we could prevent atrocities. This is a comforting fairy tale. It suggests that safety is a perimeter we can patrol. In reality, the "ban" is a performative gesture designed for shareholders and regulators, not for public safety.
The Identity Illusion
Silicon Valley loves the word "frictionless." They spent a decade making it so easy to get online that a toddler could do it. Now, they are shocked—shocked!—that a determined malicious actor can navigate a signup flow twice.
When OpenAI "bans" a user, they aren't banishing a soul from the digital realm. They are flagging a specific combination of an email address, an IP, and perhaps a phone number. These are not biological markers. They are disposable identifiers, as easy to swap as a shirt. To suggest that a "second account" is a failure of the system is to misunderstand the system entirely. The system is designed to scale, not to gatekeep.
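To make that concrete, here is a toy sketch of what an account-level ban amounts to. This is purely illustrative, assumed for argument's sake, and reflects nothing about OpenAI's actual systems: a ban is a membership test against previously flagged identifiers, and a fresh email, a VPN exit IP, and a new SIM produce a tuple the system has never seen.

```python
# Toy illustration of identifier-based banning -- NOT any real platform's code.
# A "ban" is just a set-membership test on disposable identifiers.

banned = {
    # The flagged account's (email, IP, phone) tuple.
    ("flagged@mail.example", "203.0.113.7", "+1-902-555-0100"),
}

def signup_allowed(email: str, ip: str, phone: str) -> bool:
    """Allow signup unless this exact identifier tuple was flagged."""
    return (email, ip, phone) not in banned

# The original account is blocked...
print(signup_allowed("flagged@mail.example", "203.0.113.7", "+1-902-555-0100"))  # False
# ...but a burner email, a proxy IP, and a new SIM sail straight through.
print(signup_allowed("new@mail.example", "198.51.100.9", "+1-902-555-0199"))  # True
```

Nothing in that lookup knows anything about the human behind the keyboard, which is the whole point.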
I have watched platforms spend millions on "Know Your Customer" (KYC) protocols. They don't work for safety; they work for tax compliance. If a man is planning a mass casualty event, he is not going to be deterred by a "Please verify your email" prompt. He will use a VPN. He will use a burner SIM. He will use a throwaway email address.
The failure isn't that Wortman had a second account. The failure is the belief that a chatbot’s "safety layer" is a legitimate line of defense against human depravity.
Safety Training Is a Psychological Placebo
We need to talk about what "safety training" actually does. Reinforcement Learning from Human Feedback (RLHF) is the process where humans tell the AI, "Don't say bad things."
The result? The AI becomes a polite liar.
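A caricature makes the incentive visible. The sketch below is not real RLHF training (which adjusts model weights against a learned reward model); it is a deliberately crude stand-in, with invented scores, showing the selection pressure involved: the response that maximizes human approval wins, whether or not it is the informative one.

```python
# Deliberately crude caricature of preference optimization -- NOT real RLHF.
# The optimization target is a human-approval score, not truthfulness.

candidates = {
    "I'm sorry, I can't help with that.": {"approval": 0.9, "informative": 0.0},
    "Here is the blunt, complete answer.": {"approval": 0.2, "informative": 0.9},
}

def preference_pick(options: dict) -> str:
    """Select the response with the highest approval score."""
    return max(options, key=lambda resp: options[resp]["approval"])

print(preference_pick(candidates))  # the polite refusal wins
```

Optimize for approval long enough and politeness beats substance by construction.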
OpenAI’s report on the Nova Scotia shooter claims he used the tool for "research." Let’s be brutally honest: what can a chatbot tell a killer that a library or a dark-web forum can’t? The obsession with ChatGPT’s involvement is a distraction from the real issue—the failure of actual law enforcement and red-flag laws. We are blaming the mirror for reflecting the monster.
The Cost of the "Safety" Tax
By obsessing over these edge cases, we are destroying the utility of the tool for everyone else. This is the "Safety Tax."
Every time a high-profile case like this hits the news, the model gets "lobotomized" further. The weights are adjusted, the guardrails are tightened, and suddenly, the AI can't help a medical student research toxicology or help a historian understand the mechanics of trench warfare because the keywords are too "risky."
We are building a digital world where the baseline is "don't offend anyone" rather than "be useful." And the irony? The "bad actors" will always find a way around it. Open-source models like Llama or Mistral exist. They can be run locally. They have no filters. They have no "second account" to ban because there is no central authority to issue the ban.
OpenAI is fighting a ghost. They are trying to fence in the wind, and the fence is full of holes.
The Brutal Truth About Digital Bans
If you want to actually stop someone from using an AI, you have to stop them from using the internet. Period.
Why Identity Verification is a Dead End
- The Proliferation of Burners: Services like 5SIM or SMSPVA allow anyone to bypass phone verification for pennies.
- IP Obfuscation: Residential proxies make a user in Halifax look like a user in Hanoi.
- Synthetic Identities: AI is now being used to create the very fake IDs used to bypass AI security checks. It’s an Ouroboros of deception.
OpenAI knows this. Their "transparency reports" are a form of regulatory theater. They show the government that they are "doing something" so they don't get hit with massive liability suits. It’s not about protection; it’s about indemnification.
The Question You Should Be Asking
People ask: "How can we make AI safer?"
The better question: "Why do we expect a statistical word-prediction engine to act as a moral arbiter?"
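For readers who have never looked under the hood, "statistical word prediction" is not a metaphor. Here is a drastically simplified sketch: a bigram model that picks the next word purely from observed frequencies. Real LLMs use neural networks over tokens rather than word counts, but the training objective is the same shape, predict what comes next, and there is no moral reasoning module anywhere in the loop.

```python
# A drastically simplified next-word predictor (bigram counts).
# Real LLMs are vastly larger, but the objective is the same: predict
# the next token from the statistics of prior text. No ethics module exists.
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model predicts text".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent observed successor of `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))    # "model" -- it follows "the" twice vs. "next" once
print(predict_next("model"))  # "predicts"
```

Everything the model "knows" about right and wrong is whatever pattern survived the training statistics. Asking it to be a moral arbiter is asking a frequency table to have a conscience.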
We have outsourced our ethics to an API. We expect a corporation in San Francisco to prevent a crime in rural Canada through a chat interface. It is the height of technological hubris.
The hard truth is that as long as these models are powerful, they will be dangerous. You cannot have a tool that can summarize the entirety of human knowledge without that tool also knowing how to destroy. Any attempt to "filter" the bad while keeping the good is a temporary patch on a permanent human problem.
Wortman didn't "evade" a ban. He simply existed in a world where information is free and identity is fluid. If we want to prevent the next tragedy, we need to look at the human, not the browser history.
Stop asking OpenAI to save us. They can’t even keep a guy from clicking "Sign Up" twice.
The "second account" isn't a loophole. It's the reality of the internet. Deal with it.