Can AI Make Us More Open-Minded?
Open-mindedness is a critical ingredient of human progress. Solving complex problems requires the capacity to thoughtfully engage with perspectives that challenge our existing beliefs. Cultures characterized by openness foster innovation, encourage learning, and enable people with different beliefs and life experiences to work together effectively. As Johan Norberg documents in his book Peak Human (which I highly recommend), such openness has been a defining feature of humanity’s most successful eras. However, people are often reluctant to engage with opposing views and are skeptical of those who hold them. And this tendency appears to have intensified in recent years, as surveys reveal high levels of political polarization and social distrust. Could artificial intelligence help us address this challenge?
In a series of studies recently published in Scientific Reports, researchers Louise Lu, Zakary Tormala, and Adam Duhachek examined whether people respond differently to counterattitudinal messages depending on whether they believe those messages come from AI or human sources. Across four experiments involving over 2,000 participants, the researchers tested this question on topics including universal healthcare, vaccinations, and gun control.
In these studies, participants received counterattitudinal messages on issues about which they already held an opinion. In the studies on universal healthcare, the researchers recruited participants who supported universal healthcare and presented them with a message outlining its disadvantages. In subsequent studies on vaccinations and gun control, they recruited people on both sides of each issue and gave each person a counterattitudinal message based on their position. This allowed the researchers to test whether their findings would hold across different issues and regardless of which side of an issue people started from.
For each topic, everyone on the same side of an issue received the exact same counterattitudinal message. But critically, the source varied. Some participants were told the message came from an AI source like ChatGPT, while others were told it came from human sources such as other study participants, experts, social media influencers, or advocacy groups.
The results were consistent across studies. When people believed a counterattitudinal message came from AI rather than a human source, they perceived it as less biased, more informative, and less intent on persuading them. These perceptions mattered. Participants who received counterattitudinal messages from AI sources reported feeling more receptive to the opposing viewpoint. They were more willing to seek out additional information on the topic and more willing to share the message with others.
The gun control study included an additional measure examining attitudes toward people on the opposing side. Participants who received a counterattitudinal message from an AI source reported more positive feelings toward people on the other side of the issue compared to those who received the same message from a human advocacy group.
In short, we appear to be more receptive to information that challenges our beliefs when that information comes from AI because we see AI as less biased, more informative, and less intent on changing our minds than human sources. And these same perceptions that increase our receptiveness to opposing arguments also seem to make us feel more positively toward the people who hold those views.
These findings offer an intriguing possibility for addressing social distrust and polarization, but the deeper challenge is becoming more open to perspectives offered by the actual people who hold different views. We need to cultivate the capacity to listen to others even when we disagree with them and be willing to consider that their views might be informed by experiences or knowledge we don’t have. If engaging with counterattitudinal information through AI sources helps us experience what it feels like to consider opposing views without reflexive defensiveness, that practice might make us more receptive when those views come from humans. The goal should not be to replace human dialogue with AI-mediated information, but to use AI as one tool among many to help us become more open-minded people.
These findings are encouraging, but they depend on people continuing to perceive AI as neutral and informative. While the studies suggest people currently view AI as more neutral than human sources, survey research from our team at the Archbridge Institute’s Human Flourishing Lab reveals skepticism about AI’s long-term impact. We found that 51 percent of Americans believe AI will weaken our information ecosystem by creating echo chambers and increasing censorship. How we design, deploy, and engage with AI technologies will determine whether they help us become more receptive to different perspectives or further entrench us in our existing views.
Much of the current conversation about AI is dominated by concerns and fears. This is not surprising, as technological disruption is often met with anxiety and resistance. When it comes to AI, people worry about job displacement, the spread of misinformation, threats to privacy, and the erosion of human connection. These concerns deserve serious attention and thoughtful responses. However, progress requires an optimistic and agentic attitude in which we do not allow fear to dictate our actions or prevent us from exploring the potential benefits of new technologies. Research like this can help us see possibilities we might otherwise overlook. Perhaps by opening our minds to what AI might offer, we can also open our minds to new ways of engaging with each other.
Have a great weekend!
Clay
