ImpactAlpha, November 7 – Google, known for its “don’t be evil” motto, recently launched an AI for Social Good initiative, accompanied by seven principles for the company’s development and use of artificial intelligence. They include “be socially beneficial,” “avoid creating or reinforcing unfair bias,” and “be accountable to people.”
The company’s new AI Impact Challenge is part of that initiative and of its commitment to build ethics into artificial intelligence development.
The challenge will support nonprofits, academics, and social enterprises in developing new AI applications for social, humanitarian, and environmental issues. Selected applicants will receive grant funding from a $25 million pool allocated by Google’s nonprofit arm, Google.org, and be enrolled in an accelerator program called Launchpad Accelerator.
To anyone still wondering just how ubiquitous applications for artificial intelligence have become: AI is now, per a Deloitte paper on artistic applications of the technology, “so mainstream that even an actress from Twilight has published an academic paper on it.”
“These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” Google’s CEO Sundar Pichai wrote in a blog post.
“How AI is developed and used will have a significant impact on society for many years to come,” he continued. “As a leader in AI, we feel a deep responsibility to get this right.”
The impact challenge and broader AI for Social Good initiative follow Google’s decision not to renew its AI drone program contract with the U.S. Department of Defense, prompted by a petition from more than 4,000 Google employees. The company has since committed to not supporting weapons-related applications of AI.
Applications for the AI Impact Challenge are open. Finalists will be selected in early 2019.