The Fundamental Attribution Error and Artificial Intelligence


How humans and machines interpret behavior differently 

What is the fundamental attribution error? 

The fundamental attribution error (FAE) is a cognitive bias that affects how people explain the causes of their own and others’ behavior. According to the FAE, people tend to overestimate the influence of personality traits and underestimate the influence of situational factors when they observe someone’s actions. For example, if someone cuts you off in traffic, you might assume that they are rude and selfish, rather than considering that they might be in a hurry or distracted. 

How does the FAE affect human interactions? 

The FAE can have negative consequences for human interactions, especially in situations involving conflict or misunderstanding. It can lead to unfair judgments, stereotypes, prejudice, and blame. For instance, if a student fails an exam, a teacher might attribute the failure to the student’s laziness or lack of intelligence, rather than considering the difficulty of the exam or the student’s circumstances. A related bias, the self-serving bias, can also prevent people from learning from their own mistakes: they may attribute their failures to external factors rather than internal ones. 

How does artificial intelligence relate to the FAE? 

Artificial intelligence (AI) is the field of computer science that aims to create machines or systems that can perform tasks that normally require human intelligence, such as reasoning, learning, decision-making, and perception. AI systems can be affected by the FAE in two ways: as agents and as targets. 

  • As agents, AI systems can exhibit the FAE when they interpret human behavior or interact with humans. For example, an AI system that analyzes social media posts might infer personality traits or emotions from the content or tone of the messages, without considering the context or the intention of the users. An AI system that interacts with humans, such as a chatbot or a virtual assistant, might also make assumptions or judgments about the users based on their inputs, without considering the situational factors that might influence them. 
  • As targets, AI systems can be subject to the FAE by humans who observe or interact with them. For example, a human might attribute human-like qualities or intentions to an AI system, such as intelligence, creativity, or malice, without acknowledging the limitations or the design of the system. A human might also blame or praise an AI system for its outcomes, without considering the input data, the algorithms, or the external factors that might affect it. 
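To make the agent-side risk concrete, here is a minimal, hypothetical sketch (not any real system) of a keyword-based classifier that maps message text straight to a dispositional label, with no representation of the sender’s situation — a machine analogue of the FAE:

```python
# Hypothetical sketch: a context-blind "personality" classifier.
# It maps words directly to dispositional labels and has no way to
# represent situational factors (deadlines, sarcasm, apology, distress).

NEGATIVE_WORDS = {"late", "useless", "hate", "stupid", "slow"}

def label_sender(message: str) -> str:
    """Attribute a disposition to the sender from words alone."""
    words = set(message.lower().split())
    if words & NEGATIVE_WORDS:
        return "hostile"  # dispositional attribution, context ignored
    return "friendly"

# The same trigger word yields the same label regardless of situation:
print(label_sender("This is useless, I hate waiting"))        # hostile
print(label_sender("I hate that you had to wait, sorry!"))    # hostile
```

Both messages contain the word "hate", so both senders are labeled "hostile", even though the second message is an apology. A less FAE-prone design would need features that capture context, not just surface wording.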

How can the FAE be reduced or avoided? 

The FAE can be reduced or avoided by adopting a more critical and balanced perspective on behavior, both human and artificial. Some possible strategies are: 

  • Being aware of the FAE and its effects on perception and judgment. 
  • Seeking more information and evidence before making attributions or conclusions. 
  • Considering multiple possible causes and explanations for behavior, both internal and external. 
  • Empathizing with the perspective and the situation of the other party, whether human or machine. 
  • Revising or updating attributions or conclusions based on new information or feedback.
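As a toy illustration of the last two strategies, attribution can be treated as weighing competing hypotheses and updating them as evidence arrives. The sketch below uses a simple Bayes-rule update with made-up numbers for the traffic example:

```python
# Toy Bayesian update: is the driver who cut you off "rude" (disposition)
# or responding to an emergency (situation)? All probabilities are
# illustrative, not empirical.

def update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Posterior probability of a hypothesis after one piece of evidence."""
    num = likelihood_if_true * prior
    den = num + likelihood_if_false * (1 - prior)
    return num / den

# Start with an FAE-style prior: 80% confident "they are just rude".
p_rude = 0.8

# New evidence: their hazard lights are on — assumed rare for ordinary
# rudeness (5%) but common in an emergency (90%).
p_rude = update(p_rude, 0.05, 0.9)
print(round(p_rude, 2))  # the dispositional explanation drops sharply
```

The point is not the specific numbers but the habit: treat a dispositional judgment as a revisable hypothesis, and let situational evidence move it.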