
Algorithmic Consumer Protection

To manage the risks and benefits of AI, we need to look beyond the fairness and accuracy of AI decisions.

This March, Facebook announced a remarkable initiative that detects people who are most at risk of suicide and directs support to them from friends and professionals. As we entrust our safety and well-being to AI systems like this one, how can we ensure that the outcomes are beneficial?

I recently spent a weekend at the University of Michigan to discuss this question with a gathering of scholars, journalists, and civil society advocates. As we talked, I noticed something I’ve seen elsewhere: discussions tend to focus on algorithmic fairness and discrimination. Thanks to pioneering work over the last five years, problems of algorithmic discrimination are starting to be understood more widely in online advertising, image recognition, logistics systems, and judicial sentencing, to name a few.

"Research on fairness checks that AIs treat people fairly. I want to be sure they’re actually saving lives."

Throughout these conversations, I often feel like I’m asking completely different questions. Only recently have I found the language for what’s different. Think about Facebook’s suicide prevention initiative: research on fairness checks that AIs treat people fairly, but I want to be sure they’re actually saving lives. A system that benefits some people more than others is a problem, yet we should also worry about AI systems that harm everyone equally.

(The ideas here were discussed with several people from the workshop, including Christo Wilson, Solon Barocas, Paul Resnick, and Alondra Nelson. All errors and mistakes are my own. Since the event was under Chatham House Rule, I’ll acknowledge people as they are willing to be named.)

 
