Tuesday, March 18, 2008

A Big Dog of a Robot

While people are running around in a panic over the economy, innovation marches on. It's beginning to look like robots are the new "Next Big Thing". Of course, robots have been used for years in automobile assembly, but what's new are fully independent robots that can coexist with us as part of our daily environment.

Most familiar are iRobot's rug- and floor-cleaning robots. But iRobot also makes a number of robots for use in dangerous environments. If you watch CSI or similar shows, you may have seen these 'bots.

So check out this video from a start-up company called Boston Dynamics, which has developed an autonomous robot called Big Dog:



I'm not sure what causes the loud noise. But since Big Dog doesn't seem to make the noise when tethered to cables, perhaps it comes from some sort of power generator.

Speaking of iRobot, the people I know who use the rug-cleaning 'bots all have super-neat houses and probably don't really need the robot. So as an iRobot stockholder, I keep trying to get the company to beta test its housecleaning robots under real battle conditions, namely my house.

What are they afraid of? Don't they want to see if they can pass the cleaning 'bot Turing test?

Needless to say, these robots raise a number of ethical issues. As robots become more and more autonomous, do we want them making life-and-death decisions on the battlefield? I would argue probably not. But decision-making speed is important, so there will be pressure on developers such as iRobot to work toward robots that can make independent decisions to kill, at least in certain situations.

Can the use of these robot weapons be justified under the rules of war? Ethicists have begun wrestling with those sorts of issues. Here, for instance, is a paper from the Georgia Institute of Technology: Governing Lethal Behavior. (It is a large file.) The paper notes that we already have semi-robotic systems in place that do make decisions about whether or not to fire.

This paper quotes a government study which says:

"Armed UMS [Unmanned Systems] are beginning to be fielded in the current battlespace, and will be extremely common in the Future Force Battlespace… This will lead directly to the need for the systems to be able to operate autonomously for extended periods, and also to be able to collaboratively engage hostile targets within specified rules of engagement… with final decision on target engagement being left to the human operator….

Fully autonomous engagement without human intervention should also be considered, under user-defined conditions, as should both lethal and non-lethal engagement and effects delivery means."

Note the last sentence. Warfare is often constrained by various sorts of rules of engagement and conventions, but what's going to happen when these systems fall into the hands of a determined foe who isn't constrained by the same sorts of rules? And what about our own government finding some justification for a previously taboo use of these systems? I am not being partisan here. Linguistic shenanigans are not a monopoly of Republicans or Democrats, shocking as that may be.

Suppose humans are left in the decision-making loop. Will the result be any better? As warfare becomes more and more like a video game, will the detachment of humans from the actual battlefront lead to greater abuse of these systems?

Geesh! Whatever happened to those good old ethical issues? Embryonic stem cells, for instance. We haven't yet adjusted to the issues raised by those technologies, and here is a whole new set. And I just want a 'bot that can clean my floors!

Cross-posted from Dangerous Ideas.