Theory to Practice

After interacting with AI, our behavior toward people changes


We often think that every interaction with others changes us and, more often than not, enriches us. Today, however, we increasingly interact with artificial intelligence (AI), and these interactions have effects of their own, raising critical questions about AI's social impact.


Together with Raina Zexuan Zhang, Ellie J. Kyung, Luca Cian, and Kellen Mrkva, we observed that when people experience what they perceive as unfair treatment by AI, they tend to behave less prosocially toward other humans. An interaction with AI that is perceived as unjust triggers a reaction we call “AI-induced indifference”: a reduced willingness to subsequently punish unfair behavior by other humans.

The context

AI is used in many sectors, for tasks such as approving loans, assigning shifts, or screening the resumes submitted to companies. Automated decisions, however, are not always free from errors or biases. A notable example is AI systems that, by relying on past data, end up amplifying pre-existing inequalities. These biases, rooted in the training data, can lead to decisions that those on the receiving end perceive as unfair.


Many studies have focused on how people perceive AI compared to humans. On one hand, people tend to trust AI decisions less, fearing that AI lacks empathy or moral judgment. On the other hand, some research shows that people assign less blame to an AI system than to a human when it makes mistakes or produces unfair decisions. AI is seen as less intentional and, therefore, less responsible for its actions than a human is, which dampens the negative emotional reaction usually associated with perceived injustice.


Our research poses a simple yet crucial question: if a person experiences an injustice from AI, how does that experience affect their behavior in a subsequent interaction with another human?

The research

For this research, we conducted four separate experiments involving a total of 2,425 participants. The methodology was a “two-phase game,” an experimental paradigm that simulates resource-allocation decisions to study prosocial behavior. In the first phase, participants played an allocation game in which they received an unfair decision from an allocator identified, depending on the condition, as either an AI or another human. The decision involved an unequal distribution of money or time, in which the participant received significantly less than a third party. In the second phase, participants had to decide whether to punish another human who had treated someone else unfairly in an unrelated context.


Our experiments show that when people receive an unfair decision from an AI, they are significantly less likely to punish another person for unfair behavior than when the injustice comes from a human. We label this difference “AI-induced indifference.” People seem to become desensitized to injustice after interacting with AI, likely because they blame AI less for unfair decisions, seeing it as less responsible. The effect persists regardless of context, whether the allocation involves money or time.

Conclusions and takeaways

Our research has implications for the design and implementation of AI systems. AI developers should be aware that an interaction with an AI system that delivers an unfair decision can influence how users behave in subsequent, unrelated interactions with other people. They should therefore work systematically to address biases in AI training data.


Policymakers should act quickly to ensure transparency, requiring companies to disclose the areas in which AI may make unfair decisions, so that users understand the limitations of the systems they rely on and can appeal decisions they perceive as unjust.


Additionally, greater awareness of AI-induced indifference and its side effects can help people remain sensitive to injustice. The moral outrage that arises from unfair treatment is what enables people to recognize wrongdoing in others and uphold social norms, drawing attention, for the good of society, to those who commit such wrongs.


By understanding AI-induced indifference and its underlying mechanisms, policymakers and managers can ensure that the integration of AI into human systems supports, rather than undermines, the social and ethical standards essential for a just society.


(This article was produced by the SDA Bocconi Insight editorial team)
