The Anchoring Effect and Artificial Intelligence


A brief overview of the cognitive bias and its relation to artificial intelligence 

What is the anchoring effect? 

The anchoring effect is a cognitive bias that occurs when people rely too much on the first piece of information they receive (the anchor) when making decisions or judgments. The anchor influences how people interpret subsequent information and adjust their estimates or expectations.  

An example of the anchoring effect is asking people to estimate the number of countries in Africa after giving them a high or low number as a hint. People told the figure might be around 15 tend to guess lower than people told it might be around 55. The hint serves as an anchor that pulls their estimate toward it, even when the suggested figure is arbitrary (the actual number is 54).
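
To make the mechanism concrete, here is a minimal simulation sketch. It assumes, purely for illustration, that each person's reported estimate is a weighted blend of a private guess and the anchor they were shown; the weight, prior guess, and noise values are arbitrary choices, not empirical results.

```python
import random

random.seed(0)

def anchored_estimate(anchor, prior_guess=45, anchor_weight=0.4, noise=5):
    """Toy model: the reported estimate blends a private guess with the anchor."""
    blended = (1 - anchor_weight) * prior_guess + anchor_weight * anchor
    return blended + random.uniform(-noise, noise)

low_anchor = [anchored_estimate(anchor=15) for _ in range(1_000)]
high_anchor = [anchored_estimate(anchor=55) for _ in range(1_000)]

print(f"mean estimate after low anchor (15):  {sum(low_anchor) / len(low_anchor):.1f}")
print(f"mean estimate after high anchor (55): {sum(high_anchor) / len(high_anchor):.1f}")
```

Even in this toy setup, the two groups land on noticeably different averages solely because of the number they saw first.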

How can AI influence the anchoring effect? 

Artificial intelligence (AI) can interact with the anchoring effect in several ways, depending on how it is used and perceived by humans. AI can supply anchors through its outputs, such as recommendations, predictions, or evaluations: if people trust or rely on those outputs, they may adjust their judgments or decisions toward them, even when the outputs are inaccurate or biased. Conversely, AI can itself be shaped by the anchoring effect if it is trained or designed with human-generated data or feedback that contains anchors. For example, if an AI system learns from human ratings or reviews that were skewed by the anchoring effect, it may reproduce or amplify the bias in its own outputs.
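
As a sketch of how that amplification can happen, consider two products of identical true quality whose first displayed rating differs. Assuming, again only for illustration, that each new rating drifts toward the first rating a user sees, a recommender that ranks items by their average rating will learn the anchored gap and keep serving it. The weighting and noise values are hypothetical.

```python
import random
import statistics

random.seed(1)

def simulate_ratings(true_quality, first_displayed_rating, n=500, anchor_weight=0.3):
    """Toy model: each new rating drifts toward the first rating the user saw (the anchor)."""
    ratings = []
    for _ in range(n):
        honest = random.gauss(true_quality, 0.5)
        anchored = (1 - anchor_weight) * honest + anchor_weight * first_displayed_rating
        ratings.append(min(5.0, max(1.0, anchored)))
    return ratings

# Two products of identical true quality, seeded with different first ratings.
product_a = simulate_ratings(true_quality=3.5, first_displayed_rating=5.0)
product_b = simulate_ratings(true_quality=3.5, first_displayed_rating=2.0)

# A recommender that ranks by mean rating inherits the anchored gap and keeps serving it.
print(f"product A learned score: {statistics.mean(product_a):.2f}")
print(f"product B learned score: {statistics.mean(product_b):.2f}")
```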

What are some possible implications and solutions? 

The anchoring effect and AI together can have significant implications across domains such as business, education, health, and social interaction. They can affect how people negotiate prices, evaluate products or services, assess risks or opportunities, and form opinions or beliefs. There are also ethical and moral implications, such as influencing judgments of fairness, justice, or responsibility, or affecting people's autonomy, privacy, or dignity. It is therefore important to be aware of how the anchoring effect and AI interact, and to seek ways to mitigate or prevent the bias. Some possible solutions include:

  • Providing multiple sources of information or perspectives and encouraging critical thinking and comparison. 
  • Increasing the transparency and explainability of the AI’s outputs and allowing users to question or challenge them. 
  • Ensuring the quality and diversity of the data or feedback that the AI uses or receives, and avoiding or correcting any anchors or biases (see the sketch after this list). 
  • Educating and empowering users to understand the anchoring effect and AI, and to make informed and autonomous decisions. 
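
As one illustration of the third point, the sketch below assumes that the anchor shown to each user (for example, the first rating displayed) was randomized and logged when the feedback was collected. Under that assumption, the anchor's pull can be estimated with a simple least-squares slope and removed before the data is used for training. The model and numbers are illustrative only, not a recommended production method.

```python
import random
import statistics

random.seed(2)
TRUE_QUALITY = 3.5
ANCHOR_WEIGHT = 0.3  # used only to simulate the data; "unknown" to the correction step

# Ratings collected under randomized, logged anchors (e.g. the first rating shown).
anchors = [random.choice([1.0, 2.0, 3.0, 4.0, 5.0]) for _ in range(2_000)]
ratings = [
    (1 - ANCHOR_WEIGHT) * random.gauss(TRUE_QUALITY, 0.5) + ANCHOR_WEIGHT * anchor
    for anchor in anchors
]

# Estimate the anchor's pull as the least-squares slope of rating on anchor,
# then remove it to recover a debiased quality estimate for training.
mean_a = statistics.mean(anchors)
mean_r = statistics.mean(ratings)
pull = (
    sum((a - mean_a) * (r - mean_r) for a, r in zip(anchors, ratings))
    / sum((a - mean_a) ** 2 for a in anchors)
)
debiased = [(r - pull * a) / (1 - pull) for a, r in zip(anchors, ratings)]

print(f"raw mean rating:       {mean_r:.2f}")
print(f"estimated anchor pull: {pull:.2f}")
print(f"debiased mean rating:  {statistics.mean(debiased):.2f}")
```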

For more on biases, please visit our other articles on Biases and Psychology.