Sunday, June 7, 2015

Lethal Autonomous Vehicles, Morality, and Closing an Important Loophole That Allows Opportunity for the Poor


I was very impressed that Dr. Stuart Russell of the University of California, Berkeley, called for scientists to boycott work on lethal autonomous weapons systems, such as autonomous vehicles that kill people. But it also seems to me that it is mighty late in the day to raise this concern. Why?

Because the Artificial Intelligence community was substantially, if not entirely, financed by the Department of Defense for the first 50 or so years of its life. Yes, there has been some private financing, and NSF financing, probably more today than ever before. But if you look at the history of the field, it is the DOD, through DARPA and similar agencies, that found the money to support the idea and stuck by it through decades of early work, long before it had practical applications.

Now, it does not take a lot of imagination, or even a PhD, to realize that the DOD's interest in AI would include completely autonomous, lethal weapon systems. There would be many obstacles along the way, of course, but ultimately that would be one of the goals of financing this very early-stage technology. There were, and will continue to be, questions about what sorts of controls such systems need: when they may be used merely to assist the humans operating them, and when they may be allowed to act "on their own" through rules and systems programmed into them. How such systems are validated, and what it means on the battlefield when some of the players are not so conscientious about validation, is a major concern. And now is a good time to be concerned, because while full autonomy may or may not be imminent, it is certainly much more imminent than it was 20 years ago.

Of course it needs discussion.

I find it intriguing that even unmanned drones are so controversial; they are far from autonomous, yet they seem to raise strong opinions among the public. I would not have particularly guessed that, given that each of these drones has a human or two managing its progress at all times. But it is a concern, and no doubt truly autonomous drones and vehicles will be as well.

Remember also that pretty much anything that moves can be lethal, whether or not that is its primary purpose. Even the most docile and friendly autonomous vehicle could hurt someone by running into them at full speed, or by dropping on them, even if it is only being affectionate and happy to see you.

But getting back to AI and its funding: is it really fair to rely for decades on a source of funding, knowing full well why they were funding you, and then to balk when you start to see results?

Of course there is nothing unique to the field of AI in this situation. Many technologies started out DOD-financed in their early stages, only to move beyond that into other areas of financing and application. Some scientists find being funded by the DOD morally objectionable and choose to avoid such financing, and that is certainly their right, even though some of us can be a little cynical about whether the NSF is really all that different from the DOD. They are both financed, after all, by the same Congress, the same government, the same national will. Nevertheless, if they prefer their filthy lucre laundered through the NSF, that is OK with me.

AI is exceptional only in that it has required more years of development than many other advanced technologies to reach the zone of practical application. It has required more nurturing and more faith on the part of the organizations that finance research. And for decades that faith came, to a large degree, from the DOD alone.

At this point, I would need to review the history of the funding of AI and related technologies to make sure I am on firm ground. What I am describing here is an impression from late last century, and it is almost certain to be at least partially out of date. AI has moved from blue-sky research to practical applications in many areas.

But there is a good reason to oppose this work, this inhuman autonomy, although I am not sure that any AI researchers are aware of it.

The reason is that throughout history, one of the very few avenues of advancement allowed to poor people in most countries has been through the military. Certain civilizations were famous for this, including the Romans and our fair country. Although officers were drawn almost exclusively from the upper classes, a capable young man without pedigree could often join the military and, at the risk of his life, through hardship, daring, and luck, find a way to advance himself and his family out of the grinding poverty to which they were condemned by the circumstances of their birth. In the case of the Romans, various cities around the Mediterranean descend directly from settlements founded when such soldiers were given land at the end of their years of service.

I am not advocating anything about the military in this essay, for or against, but merely pointing out that historically the military has been a way for the poor and disenfranchised to advance themselves and have a better life in an otherwise corrupt and wealth-privileged society. As part of that, I think it is fair to ask whether the use of autonomous vehicles, and autonomous robots of other kinds, will reduce this "demand for labor," one of the few channels of advancement available to the poor. Of course it will. In fact, that is probably one of the reasons for pursuing this development, people being so expensive to maintain.

As for the morality of computer scientists who choose to work on autonomous lethal weapons, I have mixed feelings. Just because so much of the technology and computer industry was financed by the Department of Defense does not mean that everyone should choose to stay on that path. Of course not. Perhaps it is sufficient simply to acknowledge the past, thank those who had faith, and move on. There will presumably be enough people to develop the technologies that the DOD, and the Congress that funds all these things, desire, even while the universities spit on their benefactors and the academics within hold themselves so preciously aloof.

You may read an article about this call here:

