Google bars use of its artificial intelligence tech in weapons



SAN FRANCISCO (Reuters) – Google will not allow its artificial intelligence software to be used in weapons or unreasonable surveillance efforts, the Alphabet Inc (GOOGL.O) unit said Thursday in standards for its business decisions in the nascent field.


The new restrictions could help Google management defuse months of protest by thousands of employees against the company’s work with the U.S. military to identify objects in drone video.

Google will pursue other government contracts including around cybersecurity, military recruitment and search and rescue, Chief Executive Sundar Pichai said in a blog post Thursday.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” he said.


Breakthroughs in the cost and performance of advanced computers have begun to carry AI from research labs into industries such as defense and health. Google and its big technology rivals have become leading sellers of AI tools, which enable computers to review large datasets to make predictions and identify patterns and anomalies faster than humans could.

But the potential of AI systems to pinpoint drone strikes better than military specialists or identify dissidents through mass collection of online communications has sparked concerns among academic ethicists and Google employees.

“Taking a clear and consistent stand against the weaponization of its technologies” would help Google demonstrate “its commitment to safeguarding the trust of its international base of customers and users,” Lucy Suchman, a sociology professor at Lancaster University in England, told Reuters ahead of Thursday’s announcement.

Google said it would not pursue AI applications intended to cause physical injury, that tie into surveillance “violating internationally accepted norms of human rights,” or that present greater “material risk of harm” than countervailing benefits.

Its principles also call for employees as well as customers “to avoid unjust impacts on people,” particularly around race, gender, sexual orientation and political or religious belief.

Pichai said Google reserved the right to block applications that violated its principles.

A Google official described the principles and recommendations as a template that anyone in the AI community could put into immediate use in their own software. Though Microsoft Corp (MSFT.O) and other firms released AI guidelines earlier, the industry has followed Google’s efforts closely because of the internal pushback against the drone imagery deal.

Reporting by Paresh Dave; additional reporting by Kristina Cooke and Heather Somerville; Editing by Cynthia Osterman
