[I wrote this in reply to the posting of the "Insanely Destructive Devices" article to Dave Farber's list. But it apparently didn't make the cut]
> Joy worried that key technologies of the future - in particular,
> genetic engineering, nanotech, and robotics (or GNR) because they are
> self-replicating and increasingly easier to craft - would be radically more
> dangerous than technologies of the past. It is impossibly hard to build an
> atomic bomb; when you build one, you've built just one.
When the A-bomb was first built, physicists were making bets on its destructive power. The Nobel laureate Enrico Fermi proposed a bet on whether it would cause a chain reaction that would ignite the atmosphere and destroy all life on Earth. (The reporting of this doesn't make clear that he was obviously joking by exaggeration, since if that outcome were the winner, nobody would be around to collect!) [ http://www.ninfinger.org/~sven/trinity/trin_brochure.html ]
From that auspicious beginning, there is definitely enough bomb-power in existence now to destroy civilization as we know it. That's just a fact. Maybe not all life on earth. But considering the worldwide disruption caused by a few hijacked airplanes (basically, well-targeted conventional guided missiles), hijacking a few H-bombs would be utterly devastating. You don't have to build it yourself. Just steal it. Or even buy it.
This is far less speculative than "gray goo" nanotech berserkers or gene-engineered super-viruses. Because it already exists. It's been "debugged". The engineering is there. We don't talk about it much these days, perhaps from issue-fatigue and familiarity. But that doesn't change the reality of it.
And if one wants to worry about diseases, antibiotic-resistant tuberculosis is a good one, and one that is spreading now because of poor public health care.
I'm not disagreeing with the basic ideas put forth. But I think the argument would be more solid if it remained grounded in existing threats rather than speculative ones. Precisely because a speculative threat, supposedly unlike any we've seen before, could be argued to be so dangerous that it requires reactions unlike any we've taken before. I understand the whole point is to rebut this. I'm saying that bringing in the unknown is self-defeating in that regard, since by its very nature, "never before seen" can apply to the response as readily as to the threat.
By Seth Finkelstein | posted in security | on April 14, 2004 11:58 PM (Infothought permalink) | Followups