Google has made one of the most substantial changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges it had made that it would not "design or deploy" AI for use in weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document.
Instead, there is now a section titled "responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."
That is a much broader commitment than the specific ones the company made as recently as the end of last month, when the previous version of its AI principles was still live on its website. For instance, with regard to weapons, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop technology that violates "internationally accepted norms."
When asked for comment, a Google spokesperson pointed to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, Google's senior vice president of research, labs, technology and society, say the emergence of AI as a "general-purpose technology" necessitated a policy change.
"We believe that democracies should lead to the development of AI, guided by fundamental values ​​such as freedom, equality and respect for human rights. And we believe that companies, governments and organizations sharing these values ​​should work together to create an AI that protects people, promotes global growth and supports national security," The two wrote. "… Guided by our IA principles, we will continue to focus on research and applications that align with our mission, our scientific objective and our areas of expertise, and remain in accordance with the largely accepted principles of international law and Human rights – always evaluating specific work by carefully assessing if the advantages prevail considerably over potential risks."
When Google first published its AI principles in 2018, it did so in the aftermath of Project Maven. That was a controversial government contract which, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense to analyze drone footage. Dozens of Google employees quit in protest of the contract, with thousands more signing a petition in opposition. When Google eventually published its new guidelines, CEO Sundar Pichai reportedly told staff he hoped they would stand "the test of time."
By 2021, however, Google had begun pursuing military contracts again, with what was reportedly an "aggressive" bid for the Pentagon's Joint Warfighting Cloud Capability contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Ministry of Defense to expand the government's use of AI tools.