Watch out, Arnold Schwarzenegger. Activists at Human Rights Watch (HRW) are coming after you. At least, they probably would if you really were a humanoid killer robot. As it is, though, a growing number of research projects are developing autonomous drones that can be programmed to kill and need no physical human presence to operate. This increasingly powerful, and worrisome, war technology will be banned across the world if HRW has its say.

The group has released a 50-page report on the potential consequences, entitled "Losing Humanity: The Case Against Killer Robots." One of the issues it raises: if a robot kills someone, who would answer to any legal challenges? Soldiers killed in the course of combat are one thing, but if a fully autonomous weapon took out a civilian, a whole host of issues would arise.

Beyond the new legal precedents that would need to be set, there is a far more obvious question of morality. Is it okay for a robot to decide who lives and who dies? Today's advanced weaponry is a far cry from the days of bayonets and waiting to see the whites of the enemy's eyes, but any explosion, attack, or death can still be traced back to a human decision. And if countries could some day declare war simply by sending out a battalion of robots, would that lead to more wars? After all, robots can always be replaced.

So yes, this report really is talking about fully functioning robots, or "weapons," that can essentially think for themselves. The report concedes that this kind of technology probably won't be achievable for at least 20 years, but given existing advances, it argues the time for international treaties is now.

The term "killer robots" in the report's title may be partly meant for shock value and publicity for HRW, but it is a subject genuinely worth calling attention to.

More at HRW, via The Guardian