When Google agreed to help the Pentagon with AI for defense purposes, its employees revolted. They did not want their company to be even remotely connected with war.
Around 4,000 employees signed a petition objecting to Google's participation in Project Maven, a Pentagon program focused on the targeting systems of the military's armed drones. Google was to contribute artificial intelligence technology to the program: essentially, machine-learning algorithms to assist military drones by quickly analyzing images captured on the battlefield, such as aerial drone footage.
Many employees have also resigned. In the petition, the employees said, "We cannot outsource the moral responsibility of our technologies to third parties." They felt the contract put Google's reputation at risk and stood in direct opposition to the company's core values. One resigning Google employee told Gizmodo, "At some point, I could not in good faith recommend joining Google, knowing what I knew. I realized if I can't recommend people join here, then why am I still here?"
Letter to Sundar Pichai
We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.
Google is implementing Project Maven, a customized AI surveillance engine that uses “Wide Area Motion Imagery” data captured by US Government drones to detect vehicles and other objects, track their motions, and provide results to the Department of Defense.
Recently, Googlers voiced concerns about Maven internally. Diane Greene responded, assuring them that the technology will not “operate or fly drones” and “will not be used to launch weapons.” While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it’s delivered it could easily be used to assist in these tasks.
This plan will irreparably damage Google’s brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon, and General Dynamics. The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto “Don’t Be Evil”, and its direct reach into the lives of billions of users set it apart.
We cannot outsource the moral responsibility of our technologies to third parties. Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable.
Recognizing Google’s moral and ethical responsibility, and the threat to Google’s reputation, we request that you:
- Cancel this project immediately
- Draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology
Google Mission Statement
The protest shows that Google's employees have absorbed the company's mission statement thoroughly and completely. While mission statements are supposed to be the guiding lights of companies, companies most often forget their mission and values in the throes of business and the pressure of meeting short-term goals. So it is quite remarkable that Google's employees are proving to be the conscience of the company, reminding it of its stated values and mission. They also said that Google should not be in the business of war. In large part, their greatest worry seemed to be whether artificial intelligence can actually differentiate between military and civilian targets. In the past, civilian targets have often been hit by drones because of human error. Will AI introduce errors of its own when it takes over tasks done by humans?
The more difficult question is that of responsibility. If a human makes a mistake, that human can be held responsible, and the law can take its course by delivering the appropriate punishment. But when AI makes a mistake, there is no one to punish.
In many ways, the stand taken by Google's employees is a first. It is the first time employees are telling their company what is ethical and what is not, rather than the other way around. So it is being seen as a unique case of ethical activism by employees, and the general public seems to be supporting the employees' cause.
Those who think that ethical approaches to business are a "nice-to-have" may be surprised by the resignations at #Google over #ProjectMaven. Smart companies will use #ethics as a competitive advantage, to win the best talent. But t…https://t.co/E7IgTi7fy2 https://t.co/OJfFqYbjwb
— Tom Upchurch (@t_upchurch) May 15, 2018
A dozen Google employees resigned in protest over its involvement in military contracts using machine intelligence to control drones. This is some sterling ethical activism and should be getting more press: https://t.co/7UMohMzDtg
— Laurie Voss (@seldo) May 14, 2018
It has just been announced that Google will not renew its Project Maven contract when it ends next year. The protest has also forced Google to articulate a policy on military projects for the future, so that the company and its employees share the same vision.
Project Maven has sparked a new form of employee activism at Google, and it seems to have worked. But it also takes a company of Google's stature to walk away from business that offends its employees' sentiments.
Connect with me on Twitter