March 21, 2017

Underserved communities: Leveraging AI to improve health and social services in low-resource areas

The team at ImpactAlpha
AI see, AI do.

Among the risks of using data-driven AI in low-resource or at-risk communities is that the algorithms will magnify systemic biases.

“Care must be taken to prevent AI systems from reproducing discriminatory behavior, such as machine learning that identifies people through illegal racial indicators,” write researchers from the Stanford One Hundred Year Study on Artificial Intelligence.
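The worry is concrete: even when race is stripped from a model's inputs, correlated features such as zip code can reconstruct it. One simple audit, sketched below as an illustration rather than anything prescribed by the Stanford report (the data file and column names are hypothetical), is to test whether a deployed model's decisions can be predicted from the protected attribute it supposedly never saw.

```python
# Minimal sketch of a proxy-discrimination audit, not a production tool.
# "service_decisions.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("service_decisions.csv")
decisions = df["model_decision"]     # 0/1 output of the deployed model
protected = df["race_category"]      # protected attribute, never fed to the model

X_train, X_test, y_train, y_test = train_test_split(
    pd.get_dummies(protected), decisions, test_size=0.3, random_state=0)

# A "probe" model that tries to predict the decisions from race alone.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])

# An AUC well above 0.5 means decisions are predictable from race alone --
# evidence that some other input is acting as a racial proxy.
print(f"Decision-from-race probe AUC: {auc:.2f}")
```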

This week, ImpactAlpha is extracting nuggets from Stanford’s century-long effort to understand AI’s long-term possibilities and dangers. There’s already an update to yesterday’s #2030 segment on self-driving cars: Ford recently announced a $1 billion investment in software for autonomous fleets.

The researchers found AI could be a money-saving lifeline for budget-strapped local and state governments.

Illinois’ Department of Human Services, for example, is using predictive data modeling to improve prenatal care for high-risk pregnant women.

Cincinnati is using AI to identify and inspect properties that aren’t up to code.
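Both examples follow the same pattern: train a model on historical outcomes, score the open caseload, and spend scarce outreach or inspection capacity on the highest-risk items first. Here is a minimal sketch of that prioritization loop, with hypothetical files, features, and labels; it is not the Illinois or Cincinnati system.

```python
# Minimal sketch of predictive prioritization for a budget-strapped agency.
# The CSV files, feature names, and outcome label are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

history = pd.read_csv("closed_cases.csv")            # past cases with known outcomes
features = ["prior_violations", "missed_visits", "building_age", "complaint_count"]
X, y = history[features], history["bad_outcome"]     # 1 = violation found / adverse outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))

# Score the open caseload and send caseworkers or inspectors to the riskiest items first.
open_cases = pd.read_csv("open_cases.csv")
open_cases["risk_score"] = model.predict_proba(open_cases[features])[:, 1]
priority = open_cases.sort_values("risk_score", ascending=False).head(50)
```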

AI also has potential for developing low-cost community health campaigns, which are otherwise difficult to target and expensive to implement.

This post originally appeared in ImpactAlpha’s daily newsletter. Get The Brief.

Photo credit: Scienceofsingularity.com