When ballistic missiles can see

When I was a kid, I read a lot of sci-fi books. One of the most common themes was “man vs. machine,” which often took the form of robots becoming self-aware and threatening humanity. This theme has also become a staple of Hollywood movies like The Terminator and The Matrix. Despite the prevalence of this theme, I don’t lose any sleep worrying about this scenario.

But I do think we should spend more time thinking about the implications—positive and negative—of recent progress in artificial intelligence, machine learning, and machine vision. For example, militaries have begun to develop drones, ships, subs, tanks, munitions, and robotic troops with increasing levels of intelligence and autonomy. While this use of A.I. holds great promise for reducing civilian casualties and keeping more troops out of harm’s way, it also presents the possibility of unintended consequences if we’re not careful.

Earlier this year, U.N. Secretary General António Guterres called global attention to these threats: “The weaponization of artificial intelligence is a growing concern. The prospect of weapons that can select and attack a target on their own raises multiple alarms…. The prospect of machines with the discretion and power to take human life is morally repugnant.”

Unfortunately, my first attempt to educate myself on autonomous weapons was a bust. I read a book that was dry and felt really outdated. Then a few months ago I picked up Army of None: Autonomous Weapons and the Future of War, by Paul Scharre. It’s the book I had been waiting for. I can’t recommend it highly enough.
Read more on gatesnotes.com.