Google CEO weighs in on AI ethicist's controversial departure

Google and Alphabet CEO Sundar Pichai sent an internal memo on Wednesday pledging to further investigate a recent controversy involving the departure of one of the company’s top artificial intelligence ethicists, Timnit Gebru.  

In early December, Gebru – who had previously collaborated with MIT Media Lab researchers on bias in facial recognition software – posted on Twitter that she had been fired from Google. The company says it accepted her resignation after she set conditions for her continued employment that it could not meet.  

“I’ve heard the reaction to Dr. Gebru’s departure loud and clear: It seeded doubts and led some in our community to question their place at Google,” wrote Pichai in the memo, which was obtained by Axios. “I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust.  

“We need to accept responsibility for the fact that a prominent Black, female leader with immense talent left Google unhappily,” he added. “This loss has had a ripple effect through some of our least represented communities, who saw themselves and some of their experiences reflected in Dr. Gebru’s. It was also keenly felt because Dr. Gebru is an expert in an important area of AI Ethics that we must continue to make progress on – progress that depends on our ability to ask ourselves challenging questions.”

Google and Gebru did not respond to requests for comment.  

WHY IT MATTERS  

Gebru’s exit centers on a paper she co-authored, which asserts that large-scale AI models can reproduce and exacerbate bias. The paper also highlights environmental concerns, noting that the large-scale computations behind such models can consume huge amounts of electricity, among other risks.  

The MIT Technology Review, which obtained a copy of the paper, noted that its six collaborators drew on a “wide breadth” of scholarship. The main risks of large language models outlined in the paper involved environmental and financial costs; large, potentially biased training datasets; research opportunity costs; and illusions of meaning.  

According to reporting from Wired, Gebru said a senior manager asked her to remove her name from the paper. Google AI lead Jeff Dean, in an email to Google Research that he has since posted online, said the paper “didn’t meet our bar for publication.” 

Gebru’s former co-workers said Google’s publication review policy has been applied “unevenly and discriminatorily,” and other former Google employees disputed the process as the company described it. More than 2,000 Google team members have signed a petition calling for transparency around Google’s response to Gebru’s research.  

Pichai said Google leadership would review the circumstances of Gebru’s departure and consider implementing de-escalation strategies and new processes.  

Gebru’s exit provoked dismay among some prominent tech leaders, with former Reddit CEO Ellen K. Pao calling the situation “ridiculous” last week.  

“This is one of the many times when I think there is just no hope for the tech industry,” Pao tweeted.  

THE LARGER TREND  

AI ethics has become an increasingly prominent focus in recent years, particularly where healthcare is concerned.  

Last year, health tech experts at HIMSS19 noted the huge opportunities AI can offer, while calling for rigorous testing and oversight of the tools.  

“Even small defects in the training samples can cause unpredictable failures,” said Microsoft Corporate Vice President Peter Lee.  

And at the HIMSS Machine Learning and AI for Healthcare Digital Summit last week, experts said that AI and ML could be a “force for evil,” even as they could also illuminate disparities.  

“We can use data to further the actions and intentions that lead to equity, and I think there’s also reason for hope when thinking about how we can analyze data, identify where there might be biases, and say, ‘Well, how can these data reveal new information about disparities in the healthcare system that we may not be fully cognizant of?’” said Kadija Ferryman, industry assistant professor of ethics and engineering at the NYU Tandon School of Engineering.  

The pandemic has highlighted how algorithmic bias could worsen COVID-19 health disparities for people of color, due to “under-developed” data models.  

The U.S. Food and Drug Administration has recognized the need to address this bias in machine learning tools, and to better ensure diversity in the data used to train algorithms.  

ON THE RECORD

“One of the best aspects of Google’s engineering culture is our sincere desire to understand where things go wrong and how we can improve,” wrote Pichai in the memo. “The events of the last week are a painful but important reminder of the progress we still need to make.”

 

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.