March 26 2008 / by Alvis Brigis / In association with Future Blogger.net
When considering the future, is it more important to focus on the extinction risk posed by advancing technology, or on the massive potential for social advancement those same technologies enable?
Michael Anissimov writes, “In less than a decade, humanity will likely develop weapons even more deadly than nukes – synthetic life, and eventually, nanorobots and self-improving AI. Even if we consider the likelihood of human extinction in the next century to be small, say 1%, it still merits attention due to the incredibly high stakes involved.”
In a recent Nanotechnology Now column he explains, “[S]ome technologies may enable individuals or small groups to carry out attacks, on infrastructure or people, at a scale that would have required the resources of an army in decades past. This is not an outlandish concern by any means; many proponents of the “super-empowered angry individual” (SEAI) concept cite the September 11 attacks as a crude example of how vulnerable modern society can be to these kinds of threats. It’s not hard to imagine what a similar band of terrorists, or groups like Aum Shinrikyo, might try to do with access to molecular manufacturing or advanced bioengineering tools.”
But then futurist Jamais Cascio turns things around a bit, pointing out that “angry people aren’t the only ones who could be empowered by these technologies.”