Should Robots Have Rights?

By 2056, robots may be given the same rights as humans, a government-funded report claimed in 2006.

The report was commissioned by the British Government’s chief scientist, Sir David King, and was written in conjunction with Outsights, a management consultancy group, and Ipsos MORI, an opinion research organization.

If the report is correct, then in less than half a century from now, robots may even be able to vote, pay taxes and be called upon for compulsory military service.

An article in the Mail about the report quoted Henrik Christensen, director of the Center for Robotics and Intelligent Machines at the Georgia Institute of Technology, who said: “If we make conscious robots they would want to have rights and they probably should.”

The report continues:

Robots and machines are now classed as inanimate objects without rights or duties but if artificial intelligence becomes ubiquitous, the report argues there may be calls for human rights to be extended to them.
It is also logical that such rights are meted out with citizens’ duties, including voting, paying tax and compulsory military service.
Mr Christensen said: “Would it be acceptable to kick a robotic dog even though we shouldn’t kick a normal one? There will be people who can’t distinguish that so we need to have ethical rules to make sure we as humans interact with robots in an ethical manner.”

I am pleased to be able to say that there were some dissenting voices. Writing in the Daily Mail, A.N. Wilson asked, “If robots were given the vote, would they be tempted to vote for other robots to enter Parliament?” He continued:

The Government paper is no joke. They are seriously considering the possibility of the rights of machines…. How can it be that such an absolutely insane set of propositions could have escaped the pages of science fiction, and been given serious consideration by the Government’s Chief Scientific Adviser?…

As for robots or other machines, it is foolish to suppose that they can ‘think’ in the way that human beings think. They can no more think in the human sense than a clock knows how to tell the time.

The clock helps us to tell the time. Just as a robot or machine, however complicated or capable of developing apparently independent mental processes, will only ever be the sum of its mechanical parts.

That debate happened back in 2006. Thankfully, I have not heard that it has been taken up since then. But it did raise an interesting question: if, theoretically, robots could be developed to the point where they had consciousness and could be programmed with all the properties of humans, how could we justify not giving them rights? According to some of the “nothing but” approaches to describing human nature that Denyse O’Leary wrote about in Salvo 1, the answer is simple: even now there is not a whole lot of difference in principle between a human and a machine, or between a human and an animal. The difference is merely one of complexity. Indeed, what A.N. Wilson said about the machine, namely that it “will only ever be the sum of its mechanical parts,” is unfortunately what many people now think about humans.

Further Reading