Can Google Employees impose their will on the Company?

When Google agreed to help the Pentagon with AI for defense purposes, the employees at Google threw a fit. They didn’t want their company to be remotely connected with war.

4000 employees signed a petition asking Google not to go ahead with Project Maven, a Pentagon program focused on the targeting systems of the military's armed drones. Google was to contribute artificial intelligence technology to the program; essentially, Google was required to supply machine-learning algorithms to help military drones.

Many employees have also resigned. In the petition, the employees said, "We cannot outsource the moral responsibility of our technologies to third parties." The employees felt that this put Google's reputation at risk and was in direct opposition to the core values of the company. A resigning Google employee told Gizmodo, "At some point, I could not in good faith recommend joining Google, knowing what I knew. I realized if I can't recommend people join here, then why am I still here?"

This suggests that the employees have absorbed the company's mission statement thoroughly and completely. While mission statements are supposed to be the guiding lights for companies, companies most often forget about their mission and values in the throes of business and the pressure of meeting short-term goals. So it is quite amazing that the employees of Google are proving to be the conscience of the company, reminding it of its stated values and mission. They also said that Google should not be in the business of war.

In part, the greatest worry seemed to be whether artificial intelligence could actually differentiate between military and civilian targets. In the past, civilian targets have often been hit by drones, thanks to human error. But does error disappear when the tasks done by humans are handed over to AI? What comes to mind is the Uber self-driving car which killed a human being last year. It means that while the virtues of delegating human tasks are many, no one is quite sure what mistakes are likely to happen when we employ AI to do them.

The more difficult question is that of responsibility. If a human driving the Uber car had killed a human being, that driver could be held responsible and the law could take its course by delivering the appropriate punishment. But when AI makes a mistake, it is likely that there is no one to punish. This means that mistakes made by AI, as in the Uber case, would need to be forgiven by society, since there is no legal recourse for AI errors.

In many ways, the issues raised by Google employees are a first. It is the first time employees are telling their company what is ethical and what is not, rather than the other way around. So it is being seen as a unique case of ethical activism from employees. And the general public seems to be supporting the employees, who have also received backing from the Electronic Frontier Foundation (EFF) and the International Committee for Robot Arms Control (ICRAC).

It is a case where employees are trying to impose their will on the will of the company. But how will it turn out? A long time ago, one of my bosses told me that the corporate ego is bigger than personal egos. If that is true, Google will win this battle. There is also a nationalistic element to this project. The counter-argument is: should a company deny its country the support it requires?

Project Maven at Google represents a new form of employee activism, and it will be interesting to watch how this pans out for the company.

Connect with me on Twitter

Prabhakar Mundkur has spent 40 years in advertising and has worked in India, Africa and Asia. He is currently Chief Mentor with HGS Interactive, a part of HGS in the Hinduja Group. He is on the advisory board of Sol's Arc (solsarc.org), an NGO dedicated to special education for intellectually challenged children. He is also a member of Whiteboard (whiteboardindia.org), which supports senior management of NGOs in financial management, PR, communication and HR through pro bono expertise.

1 Comment

  1. Very good points about corporate ethics and accountability here.

    One point to bear in mind (particularly since you spoke of the inability to prosecute an AI that might be at fault) is that the US was the pioneer, about 100 years ago, in devising the concept of a “corporate person”.

    By a series of laws, the government gave corporations many of the protections given to natural persons/citizens of the US. This idea has spread around the world, driven by capitalists who want these protections for their businesses.

    The problem with the concept is still, however, precisely what you say: a lack of accountability.

    Corporations can be fined, yes, but they can never be put in gaol or, as natural persons in the US can be, sentenced to capital punishment.

    There are, of course, criminal liabilities that directors of corporations face, but the very fact of their having the corporation as a buffer between themselves and the actual law means that their liability is far more limited than if they had been sole proprietors or traders.

    In short, in many ways, corporations (the very name comes, of course, from the Latin for body – hence corpus, corpse, corps and so on – and represents a “body” put together by a group of people) have the very best of both worlds: most of the rights of individual citizens, but far fewer responsibilities.

    Of course, in the States, the culture of litigation makes class actions and so on possible, but those can be laborious and do not always bring the guilty to task.

    So yes, with regard to AI we are going to have to watch with interest, and no small degree of horror, to find out how legislation and public opinion go on the matter of accountability and ethics.

    As a postscript, it should be obvious by now that modern AI systems (learning mechanisms that get more confident and “accurate” the more experiences they have) are nothing like Asimov’s positronic robots, and there is no practicable way to incorporate his famous Three Laws of Robotics into them. Too many people seem to rely on the idea that a threshold will be crossed and then the Three Laws will become standard. These are usually sci-fi fanboys and it is time they realised their ideas are not realistic.
