A few days ago we highlighted the open letter presented at an AI conference by a thousand scientists calling for a ban on the development of Lethal Autonomous Weapons Systems (LAWS). I suggested there would be blowback arguing that a ban could not succeed. While anyone might have expected the military lobby to protest that a ban was practicable neither as an aim nor as a strategy, a thoroughly reasoned and reasonable reply like this one was something of a surprise. Posted on the Kurzweil Accelerating Intelligence blog, the detailed essay, crafted by a former US Army officer, makes fascinating and sobering reading.
The gist is that analogies drawn with nuclear, chemical and biological weapons are unrealistic, since those systems are expensive and hard to replicate. AI weapons systems, by contrast, could rapidly become widely available – no more distant than an easily weaponised drone or a hackable device or implant – and their creators are unlikely to sign up to any collective agreement. Nor could any self-directed superintelligence be counted on to sign up either. As war demands ever quicker decision and response times, we can imagine an inferior weapon directed by a superior intelligence obliterating a superior weapon operated by a human.
In considering humanity’s alternatives, the blogger rejects the possibility of a “world totalitarian state” and opts instead for full-on “military capabilities to fight unforeseen threats”. He dismisses what he terms the “kumbaya mentality” and clearly assumes that our species will remain defined by a transcendent intelligence yet encumbered by the limbic promptings of psychopathic apes. Left unexamined are the potential of soft power and the implications of keeping our friends close but our enemies closer.