The Israeli military has reportedly implemented a facial recognition dragnet across the Gaza Strip, scanning ordinary Palestinians as they move throughout the ravaged territory, attempting to flee the ongoing bombardment and seeking sustenance for their families.
The program relies on two different facial recognition tools, according to the New York Times: one made by the Israeli contractor Corsight, and the other built into the popular consumer image organization platform offered through Google Photos. An anonymous Israeli official told the Times that Google Photos worked better than any of the alternative facial recognition tech, helping the Israelis make a “hit list” of alleged Hamas fighters who participated in the October 7 attack.
The mass surveillance of Palestinian faces resulting from Israel’s efforts to identify Hamas members has swept up thousands of Gaza residents since the October 7 attack. Many of those arrested or imprisoned, often with little or no evidence, later said they had been brutally interrogated or tortured. In its facial recognition story, the Times pointed to Palestinian poet Mosab Abu Toha, whose arrest and beating at the hands of the Israeli military began with its use of facial recognition. Abu Toha, later released without being charged with any crime, told the paper that Israeli soldiers said his facial recognition-enabled arrest had been a “mistake.”
Putting aside questions of accuracy — facial recognition systems are notoriously less accurate on nonwhite faces — the use of Google Photos’s machine learning-powered analysis features to place civilians under military scrutiny, or worse, is at odds with the company’s clearly stated rules. Under the header “Dangerous and Illegal Activities,” Google warns that Google Photos cannot be used “to promote activities, goods, services, or information that cause serious and immediate harm to people.”
Asked how a prohibition against using Google Photos to harm people was compatible with the Israeli military’s use of Google Photos to create a “hit list,” company spokesperson Joshua Cruz declined to answer, stating only that “Google Photos is a free product which is widely available to the public that helps you organize photos by grouping similar faces, so you can label people to easily find old photos. It does not provide identities for unknown people in photographs.” (Cruz did not respond to repeated subsequent attempts to clarify Google’s position.)
It’s unclear how such prohibitions — or the company’s long-standing public commitments to human rights — are being applied to Israel’s military.
“It depends how Google interprets ‘serious and immediate harm’ and ‘illegal activity’, but facial recognition surveillance of this type undermines rights enshrined in international human rights law — privacy, non-discrimination, expression, assembly rights, and more,” said Anna Bacciarelli, the associate tech director at Human Rights Watch. “Given the context in which this technology is being used by Israeli forces, amid widespread, ongoing, and systematic denial of the human rights of people in Gaza, I would hope that Google would take appropriate action.”
Doing Good or Doing Google?
In addition to its terms of service ban against using Google Photos to cause harm to people, the company has for many years claimed to embrace various global human rights standards.
“Since Google’s founding, we’ve believed in harnessing the power of technology to advance human rights,” wrote Alexandria Walden, the company’s global head of human rights, in a 2022 blog post. “That’s why our products, business operations, and decision-making around emerging technologies are all informed by our Human Rights Program and deep commitment to increase access to information and create new opportunities for people around the world.”
This deep commitment includes, according to the company, upholding the Universal Declaration of Human Rights — which forbids torture — and the U.N. Guiding Principles on Business and Human Rights, which notes that conflicts over territory produce some of the worst rights abuses.
The Israeli military’s use of a free, publicly available Google product like Photos raises questions about these corporate human rights commitments, and the extent to which the company is willing to actually act upon them. Google says that it endorses and subscribes to the U.N. Guiding Principles on Business and Human Rights, a framework that calls on corporations “to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.”
Among the document’s recommendations is that companies like Google consider whether their products and services are being used for government surveillance that violates international human rights law, producing immediate privacy and bodily security impacts such as locating, arresting, and imprisoning individuals. Neither JustPeace Labs nor Business for Social Responsibility, co-authors of the due-diligence framework, responded to a request for comment.
“It is the responsibility of Google and Corsight to ensure that their products and services do not contribute to human rights abuses,” said Bacciarelli. “I would expect Google to take immediate action to stop the use of Google Photos in this system, based on this information.”
Employees of Google participating in the No Tech for Apartheid campaign, a worker-led protest against Project Nimbus, urged their employer to prevent the Israeli military from using Google Photos’ facial recognition technology in the war in Gaza.
“The fact that the Israeli military is using consumer technology like Google Photos, including facial recognition, to identify Palestinians for surveillance purposes, shows that they will use any available technology unless Google takes steps to ensure their products do not support ethnic cleansing, occupation, and genocide,” the group stated. “As Google employees, we demand that the company immediately end Project Nimbus and stop all activities that aid the genocidal agenda of the Israeli government and military in Gaza.”
Since 2021, Google has provided advanced cloud computing and machine learning tools to the Israeli military through its controversial Project Nimbus contract, a deal that has long raised concerns that Google’s business practices conflict with its stated human rights principles.
While Google Photos is a free consumer product, Project Nimbus is a customized software project for the Israeli state, utilizing Google’s machine-learning resources for facial recognition capabilities.
The sale of these tools to a government accused of human rights abuses and war crimes contradicts Google’s AI Principles, which prohibit AI applications likely to cause harm or violate international law and human rights principles.
Google has indicated that its principles apply mainly to custom AI work and not the general use of its products by third parties, potentially allowing broad military use of its technology.
Ariel Koren, a former Google employee who protested Project Nimbus and believes she was forced out of her job, sees Google’s silence on the Photos issue as part of a pattern of avoiding responsibility for how its technology is used.
Despite public assurances, Google’s actions suggest that its AI ethics principles do not influence Google Cloud’s business decisions, even in cases of complicity in genocide, reflecting the company’s pursuit of profit above all else.