How Algorithms Amplify Our Unconscious Biases
Ever wondered why your Facebook feed feels like an echo chamber of your own opinions? Algorithms might be amplifying your biases, not reflecting your true preferences.
In the eye-opening NBER working paper “Automating Automaticity: How the Context of Human Choice Affects the Extent of Algorithmic Bias,” Amanda Y. Agan (Rutgers University), Diag Davenport (Princeton University), Jens Ludwig (University of Chicago), and Sendhil Mullainathan (University of Chicago) offer a profound insight into how the choices we make in haste or without deep thought, and the cognitive biases behind them, can inadvertently shape the algorithms that are becoming integral to our daily lives.
Learning from Behavioral Economics
Rooted in behavioral economics, the paper challenges a fundamental assumption of algorithmic design: that the choices users make accurately reflect their true preferences. It posits a significant gap between what people choose automatically and what they genuinely want. This discrepancy is critical because algorithms, particularly those curating our social media feeds or recommending friends, are trained on precisely these quick, unreflective choices.
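To make that assumption concrete, here is a minimal sketch (in Python, with invented data and a deliberately simple model, not any platform’s real pipeline) of how a feed ranker is typically trained: the only label available is whether the user clicked, so every impulsive click gets treated as a statement of genuine preference.

```python
# Minimal sketch: a feed ranker trained on clicks (hypothetical data).
# Each row is one article; the label is simply "did the user click?".
# Note what is missing: any signal of what the user reflectively values.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical article features: [sensationalism, agrees_with_user, depth]
X = rng.random((500, 3))

# Assumed System 1 behavior: clicks are driven by sensationalism and
# agreement, not by depth.
click_prob = 0.2 + 0.4 * X[:, 0] + 0.4 * X[:, 1]
y = rng.random(500) < click_prob

ranker = LogisticRegression().fit(X, y)
print(ranker.coef_)
```

In this toy setup the fitted weight on “depth” stays near zero: the model learns what grabbed attention, because that is all the training data can tell it.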
How System 1 Thinking Shapes Social Media Algorithms
Our brains operate in two distinct modes: the fast, impulsive System 1 and the slower, more deliberate System 2. Social media algorithms, however, can be misled by our System 1 thinking, interpreting quick clicks and automatic choices as genuine preferences.
Picture this: you casually scroll through your Facebook feed, clicking on articles that momentarily catch your attention. These actions, reflective of System 1 thinking, tell the algorithm what you are interested in. The result is a feed increasingly tailored to these impulsive choices, potentially amplifying your biases and narrowing your exposure to diverse perspectives.
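To see how that narrowing can compound, here is a toy feedback-loop simulation (assumed dynamics, not Facebook’s actual system): a ranker repeatedly updates its click estimates from the clicks it just generated, and the average “agreement” of the feed drifts upward round after round.

```python
# Toy feedback-loop simulation (entirely assumed dynamics).
import numpy as np

rng = np.random.default_rng(1)
n_items = 1000
agreement = rng.random(n_items)          # 1.0 = fully matches user's views
est_click_rate = np.full(n_items, 0.5)   # ranker's initial estimates

for round_ in range(5):
    shown = np.argsort(-est_click_rate)[:100]    # top-100 feed
    clicks = rng.random(100) < agreement[shown]  # System 1: click what agrees
    # Naive update: nudge estimates toward the clicks just observed.
    est_click_rate[shown] = 0.5 * est_click_rate[shown] + 0.5 * clicks
    print(f"round {round_}: mean agreement of feed = "
          f"{agreement[shown].mean():.2f}")
# The feed's average agreement tends to climb round over round:
# the echo chamber assembles itself from nothing but clicks.
```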
Real-World Impacts: Echo Chambers and Polarization
This issue extends beyond just news feeds. Consider a social media algorithm suggesting new friends based on past interactions. If you habitually “like” content from people with similar views, the algorithm may create an echo chamber, limiting exposure to varied opinions and contributing to polarization.
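Mechanically, the effect needs nothing more exotic than a similarity score. The hypothetical recommender below (with invented like-vectors) suggests whichever user’s “like” history is closest to yours, so the people surfaced are, by construction, the people who already agree with you.

```python
# Hypothetical friend recommender: suggest the user whose "like" history
# is most similar to yours. Similar likes in, similar people out.
import numpy as np

# Rows = users, columns = topics they have liked (1) or not (0).
likes = np.array([
    [1, 1, 1, 0, 0],   # you
    [1, 1, 0, 0, 0],   # mostly agrees with you
    [0, 0, 0, 1, 1],   # entirely different views
    [1, 0, 1, 0, 1],   # mixed
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

you = likes[0]
scores = [cosine(you, other) for other in likes[1:]]
best = int(np.argmax(scores)) + 1
print(f"suggested friend: user {best}")
# The like-alike user wins; the dissimilar user (user 2) is never surfaced.
```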
From Theory to Reality: Addressing Algorithmic Bias with Evidence-Based Strategies
Agan et al.’s study isn’t just theoretical; it’s backed by compelling evidence from both lab experiments and real-world audits of Facebook’s algorithms. They show that algorithms trained on data from rushed, impulsive decisions exhibit more out-group bias, favoring content from users similar to the person making the choices and potentially disadvantaging those who are different.
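The sketch below is a stylized mock-up of that comparison, not the authors’ actual data or pipeline: it fabricates snap judgments that penalize out-group content and deliberate judgments that track quality alone, then shows how a model trained on each inherits, or avoids, the bias.

```python
# Stylized mock-up (invented data, not the authors' code): train one model
# on snap judgments and one on deliberate judgments, then compare how each
# treats out-group content.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
quality = rng.random(n)                # true quality of each post
out_group = rng.integers(0, 2, n)      # 1 = author from a different group

# Assumed behavior: snap judgments penalize out-group content;
# deliberate judgments track quality alone.
fast_label = rng.random(n) < np.clip(quality - 0.3 * out_group, 0, 1)
slow_label = rng.random(n) < quality

X = np.column_stack([quality, out_group])
fast_model = LogisticRegression().fit(X, fast_label)
slow_model = LogisticRegression().fit(X, slow_label)

print("out-group coefficient, fast-trained:", fast_model.coef_[0][1])
print("out-group coefficient, slow-trained:", slow_model.coef_[0][1])
# The fast-trained model inherits a negative weight on out-group content;
# the slow-trained model's weight sits near zero.
```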
A Critical Question and a Call to Action
This raises a critical question: what can we do to ensure that the algorithms we rely on are fair and unbiased?
Agan et al.’s study is a wake-up call. It reminds us that technology is not inherently neutral but rather a reflection of the choices we make and the biases we hold. By understanding how our own biases can shape the algorithms we use, we can begin to build a fairer, more equitable, and truly human-centric future of technology.
Upcoming Insight: From Lab to Life – Unraveling Bias in AI
Stay tuned for my next post, “From Lab to Life – Unraveling Bias in AI,” where I’ll delve deeper into the lab experiments from this study and shed more light on how cognitive biases can inadvertently influence AI systems. I’ll also discuss what we can learn from these experiments and how to apply that knowledge to improve our own services, using a streaming service as a practical example of how insights from the lab translate into real-world applications.