Unexpected AI Danger
Last week, I asked people what scares them most about AI. I got all the usual answers: job displacement, model bias, hallucinations, alignment problems, and yeah, even some Terminator robot fears. Those are all legitimate concerns, but my biggest fear is something almost nobody is talking about.
I think the biggest (short-term) threat from AI is too MUCH transparency. I know that sounds backwards, especially coming from someone who's been a huge fan of and contributor to open source software for years. Let me explain...
The Nuclear Power Analogy
Why don't we have small terrorist groups running around with homemade nuclear weapons? It's not because they lack motivation.
The real reason? Producing nuclear weapons-grade uranium is incredibly difficult and expensive. We're talking about massive industrial facilities, specialized equipment, deep expertise, and sky-high costs. This creates a natural barrier that only well-resourced nations can overcome, and they have strong incentives to keep that technology secure.
AI has had a similar safety valve. Training a large language model (LLM) costs tens or hundreds of millions of dollars. It requires massive computing resources, specialized expertise, and infrastructure that only a handful of companies and governments can afford.
The Problem with Good Intentions
Companies like Meta and DeepSeek are releasing their trained models to the world. They believe democratizing AI access will benefit everyone. And honestly, I get the appeal. Personally and professionally, I would love to run a powerful LLM on my own computer (and servers).
But this is where the nuclear analogy gets scary. Imagine if someone figured out how to refine weapons-grade uranium in their garage, and started selling it online "so everyone can have access." That's essentially what's happening when we open-source these massive AI models.
Once a model is released, it's out there forever. Anyone can download it, modify it, remove safety guardrails, and use it for whatever purpose they want. And unlike the original training process, running and modifying these models is easy and inexpensive.
Why This Scares Me More Than Job Loss
Look, I'm not trying to be alarmist here, but think about what a small group of bad actors could do with unrestricted access to powerful AI models. We're talking about the potential for sophisticated disinformation campaigns, automated hacking attempts, personalized social engineering attacks, or worse.
The same technology that can help a student with homework or assist a developer with code can be weaponized in ways we're probably not even imagining yet. And once it's out there, there's no putting that genie back in the bottle.
A Reluctant Position
As someone who's contributed to open source projects and believes deeply in the power of open technology, arguing for restricting access feels wrong. But I'm not the only one who feels this way.
Geoffrey Hinton (the "Godfather of AI") holds a similar view on this issue, and he has the credentials to back it up: he is a computer scientist, cognitive scientist, cognitive psychologist, Nobel laureate in physics, and Turing Award winner. After ten years working on AI at Google, he resigned so he could "freely speak out about the risks of A.I."
What We Should Do
We need an immediate, worldwide ban on releasing large AI models to the public. Yeah, that's a big ask, and yeah, it feels weird coming from an open source advocate. But the potential for misuse by bad actors is just too high.
The interesting thing is, this kind of regulation would likely be welcomed by the very companies it restricts. They get to keep their competitive advantages and profits, and in return, they would be required to invest in safety and security measures that protect us all.
The Hard Truth
We're at a crossroads with AI technology. We can either be thoughtful about how we distribute this power, or we can barrel ahead and hope for the best. But hope isn't a strategy when the stakes are this high.
We've got one shot to get this right.