Military Robots

DARPA's Humanoid Robot

Written by Daniella Nicole
Posted July 24, 2013 at 2:41PM

Evoking images of the 1984 hit movie The Terminator, the Atlas robot from the U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA) is a 6-foot-2-inch-tall, 330-pound humanoid machine designed by Boston Dynamics. Atlas comes complete with cameras, an onboard computer, hydraulic joints, and four hands.

Source: DARPA

Though not yet complete with all of its brains and innards, the design calls for Atlas to be autonomous – not controlled by remote operators, but capable of directing itself via artificial intelligence.

According to Mashable, Atlas is being designed for use in rescue operations, going where it is difficult or unsafe for humans to go, such as in rough terrain or through walls. Atlas will even be capable of driving vehicles.

This Defense Department machinery may sound amazing to some, but others have concerns, especially in light of other Defense Department machinery and its applications.

“Man” vs Machine

How does the man-like Atlas robot compare with the flying machines known as drones? Drones – those used by the Defense Department and other entities – have been tasked with objectives such as observation, data collection, and the use of munitions. In this respect, an earthbound machine is no different from one capable of flight. Both can be used for the same tasks, just by differing methods.

Whereas drones can fly at altitudes high enough to escape the naked eye, Atlas could rely on a subtler form of stealth: presenting less of a “threat” to the public through its human-like appearance and abilities.

While drones are controlled remotely by an operator, Atlas will control itself, presumably within certain pre-programmed parameters, reminding some of the 1987 movie RoboCop.

There has been talk of shooting down drones, including legislation from the Colorado town of Deer Trail, which has considered offering permits to do just that. But shooting down a lone drone in the sky is very different from taking aim at a humanoid machine that will likely be located in the midst of human beings, putting them all at risk of becoming collateral damage.

Public Safety

Though some have cited the use of machines such as drones and Atlas as being for public safety, the machines can actually raise concerns about public safety. Citing a Department of Defense report, NBC wrote, “drones are 30 to 300 times more likely to crash than small civilian aircrafts.” Movies such as The Terminator, RoboCop, and I, Robot each gave lessons in how artificial intelligence can go seriously wrong.

Though the movies were works of fiction, the concerns over artificial intelligence are very real. How does a human being with his or her own flaws in logic, ethics, and morals bestow upon a machine the capability to make life and death decisions that will not also be flawed and potentially dangerous?

There are no absolutes in life, so how can one teach a machine to make safe and sound decisions within such indefinable parameters? Even technology that is simple by comparison is subject to viruses, hacking, and “glitches”. How safe is it to create a type of “super soldier” that inherently will be subject to the same and possibly worse?

Why Atlas?

The benefits of having a machine such as Atlas include protecting human life. Used in rescue operations, Atlas would allow rescue efforts to be conducted in places unsafe for humans, thus protecting rescuers and saving the lives of those in danger. Putting Atlas in the line of fire rather than a human police officer or soldier is also a worthy endeavor that could save and protect human life.

The cost, aside from what Bloomberg reports is a $2 million price tag, is the risk involved in weaponizing or arming artificial intelligence. It is a risk to put human life in the hands of artificial intelligence.

If a human police officer errs in responding to a situation or fails to understand it, another human can reason with him or her. What happens when Atlas misunderstands? Is there any option for reason or discussion, or will the machine simply be programmed to behave in a certain way in response to what it perceives to be threats, danger, and violations of the law?

As humans, we know things are not always as they appear. If Atlas does not, we are all at risk.
