Three years ago, we blogged about the successful campaign by Google tech workers to persuade the company to withdraw from Project Maven—a United States Department of Defense program aimed at using artificial intelligence (A.I.) to rapidly analyze and identify drone targets. More than 4,000 Google employees signed a letter to Google’s CEO, Sundar Pichai, demanding that Google “not be in the business of war.” The internal revolt worked: Google declined to renew its contract for Project Maven and published a set of A.I. principles—including a vow that no future A.I. endeavor would be used for “weapons…whose principal purpose or implementation is to cause or directly facilitate injury to people.”
In our previous blog about this, we (and many others) noted the non-binding nature of Google’s A.I. principles. Without conditioning the use of its technology on the protection of human rights, Google’s pledge to abstain from the business of war was unenforceable. We wrote that it was only a matter of time—and will always be only a matter of time—before Google, or any other company beholden only to the promise of profit, broke its commitment to place social wellbeing above shareholder wellbeing. Well, it was a matter of time. Time ran out last week.
Google is “aggressively pursuing” a new contract to work with the Defense Department on the Joint Warfighting Cloud Capability project. This project, like Project Maven, will use A.I. to support military operations—including, the Defense Department admitted, combat situations.
This ethical retreat, predictable as it was, was not inevitable. If Google had embedded its A.I. principles in its intellectual property (IP) licenses—and, better yet, created a private right of action for those who suffer from misuse of Google’s A.I. technology—we might finally have seen a corporate promise kept.
This blog post briefly discusses the opportunities and dangers of A.I., and then imagines a world in which human rights conditions are built into intellectual property and tech workers have control over the impact of their inventions on the world. It uses the Hippocratic License 3.0 as an example of what is already possible, and urges companies to legally bind their profit to the promises they’ve made.
A.I. Is Already Here, But We Can Control What It’s Used For
Autonomous weaponry—the “third revolution in warfare, following gunpowder and nuclear arms”—already exists and is used on battlefields around the world. Israel’s “Fire and Forget” drone can fly itself to a designated area, hunt for a particular target, and destroy that target with a powerful warhead. A similar drone was used in Libya in March 2020. In May 2021, the U.S. Air Force’s autonomous “Skyborg” drone completed its first flight. Advocates of militarized A.I. argue that autonomous weapons lower the cost of killing—a terrifying idea (as if war were a market and we wanted fewer barriers to entry).
But A.I. does have benefits, just not in war. A.I. has the capacity to revolutionize transportation, urban infrastructure, emergency response systems, and healthcare solutions. We’ve written before about the dual personality of scientific innovation—how nuclear technology gave the world the atom bomb and space travel. When Google employees protested Project Maven in 2018, they were asking for control over how their labor—which produces Google’s intellectual property—would impact the world.
Ethical IP licensing makes this control possible.
Imagining Google’s A.I. Principles as Legally Binding IP Conditions
Critics of Google’s A.I. Principles point to the “fuzzy” language they use (e.g., “Be socially beneficial” and “Be accountable to people”). The list of restricted uses is more specific, but still leaves room for interpretation. The Joint Warfighting Cloud Capability project is a good example of why this fuzziness matters: despite “warfighting” being in the name, Google will probably claim its cloud computing technology is sufficiently removed from the battlefield. Because the principles are non-specific and non-binding, and because the Pentagon prevents external groups from knowing the full extent of its activities, it may never be clear whether the project violates Google’s principles.
Google should have made a concrete list of restrictions on the use of its A.I. technology and embedded that list in its IP licensing. CAL’s recently launched Hippocratic License 3.0 is an open-source copyright license for software developers, but it offers a great example of the kind of specific use restrictions that ethical IP licensing can impose:
[Excerpt from HL 3.0] The Licensee shall not, whether directly or indirectly, through agents or assigns:
3.1.19. (Module – Mass Surveillance) Be a government agency or multinational corporation, or a representative, agent, affiliate, successor, attorney, or assign of a government or multinational corporation, which participates in mass surveillance programs;
3.1.20. (Module – Military Activities) Be an entity or a representative, agent, affiliate, successor, attorney, or assign of an entity which conducts military activities;
The above excerpt reads a lot like the list of A.I. applications Google committed to avoid three years ago—but one is a legally binding obligation and the other is a hollow corporate statement sold as a high-minded ethical treatise. By embedding values and intended impact into IP licensing—in essence, binding human and environmental rights to profit—a corporation can make enforceable the imprint it wishes to leave on the world.
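To make this concrete, here is a minimal, hypothetical sketch of how a developer might ship such an ethical-use notice alongside their code so that the restrictions travel with the licensed work. The file name, notice wording, and helper function below are our own illustrative assumptions, not official Hippocratic License 3.0 guidance.

```python
# ethical_license_notice.py
#
# Hypothetical sketch: attaching an ethical-use license notice to a project
# so the use restrictions travel with the code. The wording below is an
# illustrative paraphrase, not the official Hippocratic License 3.0 text;
# a real project would ship the full generated license as LICENSE.md.

LICENSE_NOTICE = """\
Copyright (c) <YEAR> <COPYRIGHT HOLDER>

Licensed under the Hippocratic License, Version 3.0 (the "License").
You may not use this software except in compliance with the License.
Among other conditions, the License prohibits use by entities that
participate in mass surveillance programs or conduct military
activities (see the excerpted Sections 3.1.19 and 3.1.20 above).

The full License text is distributed with this project in LICENSE.md.
"""


def print_license_notice() -> None:
    """Print the ethical-use notice, e.g. behind a command-line --license flag."""
    print(LICENSE_NOTICE)


if __name__ == "__main__":
    print_license_notice()
```

The particular wording matters less than the mechanism: the use restriction is part of the artifact the licensee receives and agrees to, which is what turns an aspirational principle into an enforceable condition.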
In the Hippocratic License 3.0, we included a private right of action for victims of human rights abuses caused by violations of the license’s conditions. The licensee, by agreeing to these terms, voluntarily accepts a duty of care not to violate the conditions of the license. When those conditions are broken, the duty of care is breached. A right of action could also be created for tech workers—in fact, we believe that incorporating human rights norms into IP licensing is a powerful opportunity for tech worker unions and organizing.
To critics who say this is unrealistic, too costly, or a betrayal of shareholders: we—you and I, and future generations—stand to lose far more if we forfeit our greatest ideals for humanity to corporate greed.
Putting It All in Context
If we accept the premise that artificially intelligent weapons and military operations will lead to, well, a dystopic hellscape, then the importance of companies legally binding themselves to their own ethical obligations is clear. But even without this premise, a company’s promise to its employees is reason enough for enforcement.
Google’s internal motto is “Don’t be evil.” Employees included the motto in their letter to Sundar Pichai in 2018, when they asked the company not to “outsource the moral responsibility” of the technology they had helped to create. Don’t be evil? We’re hard-pressed to think of a lower standard. Let’s be kind, wise, loyal, and visionary. Let’s make real promises and do our damn best to keep them.
A.I. technology is here to stay—but the genius behind it belongs to the workers who create it. When they ask for clarity on how their monumental innovations will be used to shape the world, we believe they should get it. If Google agrees, show us.
In the words of CAL’s founder, “The time for watered down, voluntary corporate social responsibility has passed. We are over it. Companies, if you mean what you say, make it legally enforceable.”
Reynolds Taylor is a Legal Fellow at Corporate Accountability Lab.