AI Generated image of multiple colors with different colored blocks

Organizations can measure and track bias in their AI systems by implementing a combination of strategies: 

  • AI Governance: Establishing AI governance frameworks to guide the responsible development and use of AI technologies, including policies and practices to identify and address bias [1] [2]. 
  • Bias Detection Tools: Utilizing tools like IBM’s AI Fairness 360 toolkit, which provides a library of algorithms to detect and mitigate bias in machine learning models [1]. 
  • Fairness Metrics: Applying fairness metrics that measure disparities in model performance across different groups to uncover hidden biases [3]. 
  • Exploratory Data Analysis: Conducting exploratory data analysis to reveal any underlying biases in the training data used for AI models [3]. 
  • Interdisciplinary Collaboration: Promoting collaborations between AI researchers and domain experts to gain insights into potential biases and their implications in specific fields [4]. 
  • Diverse Teams: Involving diverse teams in the development process to bring a variety of perspectives and reduce the risk of biased outcomes [5]. 

These measures help organizations actively monitor and mitigate bias, ensuring their AI systems are fair and equitable. 
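As a concrete illustration of the fairness-metrics bullet above, here is a minimal sketch of one widely used metric, statistical parity difference. It is plain Python with made-up predictions, not the actual AI Fairness 360 API:

```python
# Sketch of a group-fairness check: compare positive-outcome rates
# across two demographic groups (hypothetical data).

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(group_a, group_b):
    """Difference in selection rates between two groups.

    A value near 0 suggests parity; large magnitudes flag disparity.
    """
    return selection_rate(group_a) - selection_rate(group_b)

# Hypothetical model predictions, split by demographic group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 = 0.75 selected
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 2/8 = 0.25 selected

spd = statistical_parity_difference(group_a, group_b)
print(f"Statistical parity difference: {spd:.2f}")  # 0.50
```

Toolkits such as AI Fairness 360 bundle many metrics like this, together with mitigation algorithms; the sketch only shows the core idea of comparing outcome rates across groups.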


1. IBM Policy Lab: Mitigating Bias in Artificial Intelligence 

2. What Is AI Bias? | IBM 

3. Testing AI Models — Part 4: Detect and Mitigate Bias – Medium 

4. Mitigating Bias In AI and Ensuring Responsible AI 

5. Addressing bias and privacy challenges when using AI in HR 


The Dunning-Kruger effect is a cognitive bias where people with limited competence in a particular domain overestimate their abilities [2]. 

This effect was first described by psychologists David Dunning and Justin Kruger in 1999 [2]. They found that those who performed poorly on tests of logic, grammar, and sense of humor often rated their skills far above average [1]. For example, those in the 12th percentile self-rated their expertise to be, on average, in the 62nd percentile [1]. 

The researchers attributed this trend to a problem of metacognition—the ability to analyze one’s own thoughts or performance [1]. “Those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it,” they wrote [1]. 

The Dunning-Kruger effect has been found in domains ranging from logical reasoning to emotional intelligence, financial knowledge, and firearm safety [1]. It also applies to people with a solid knowledge base: Individuals rating as high as the 80th percentile for a skill have still been found to overestimate their ability to some degree [1]. 

Inaccurate self-assessment can lead people to make bad decisions, such as choosing a career for which they are unfit or engaging in dangerous behavior [2]. It may also inhibit people from addressing their shortcomings and improving themselves [2]. 

How AI can influence the Dunning-Kruger effect 

One feasible way that AI could influence the Dunning-Kruger effect is by providing feedback and guidance to people who overestimate or underestimate their abilities. For example, an AI system could analyze a person’s performance on a task and compare it with objective criteria or peer benchmarks. Then, the AI system could give the person a realistic assessment of their strengths and weaknesses and suggest ways to improve or use their skills effectively. This could help people overcome their biases and become more aware of their competence levels. 
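The kind of feedback described above can be sketched in a few lines. Everything here is hypothetical — the peer scores, the task, and the simple percentile comparison — but it shows how a system might contrast a self-rating with an objective benchmark:

```python
from bisect import bisect_left

def percentile_rank(score, peer_scores):
    """Percentile standing (0-100) of `score` among peer scores."""
    ranked = sorted(peer_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

def calibration_gap(self_rated_percentile, score, peer_scores):
    """Self-assessment minus actual percentile: positive means overestimation."""
    return self_rated_percentile - percentile_rank(score, peer_scores)

# Hypothetical quiz scores for ten peers; the user scored 55 but
# rated themselves at the 62nd percentile.
peers = [40, 45, 50, 52, 55, 60, 65, 70, 80, 90]
gap = calibration_gap(self_rated_percentile=62, score=55, peer_scores=peers)
print(f"Calibration gap: {gap:+.0f} percentile points")  # positive = overconfident
```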

Another conceivable way that AI could influence the Dunning-Kruger effect is by creating new domains of knowledge and skill that challenge existing human expertise. For example, an AI system could generate novel problems or scenarios that require complex reasoning or creativity. These problems could expose the limitations of human cognition and force people to acknowledge their knowledge gaps or errors. This could also motivate people to learn new things and expand their horizons. Alternatively, an AI system could also demonstrate superior performance or solutions in some domains and inspire people to emulate or collaborate with it. This could foster a growth mindset and a willingness to learn from others. 

These are just some hypothetical examples of how AI could influence the Dunning-Kruger effect. However, the actual impact of AI on human metacognition may depend on factors such as the design, purpose, and context of the AI system, as well as the personality, motivation, and goals of the human user. Therefore, more research and experimentation are needed to explore the potential benefits and risks of AI for human self-awareness and improvement. 


  1. Dunning–Kruger effect – Wikipedia 
  2. Dunning-Kruger Effect | Psychology Today 
  3. The Dunning-Kruger Effect: What It Is & Why It Matters – Healthline 
  4. The Dunning-Kruger Effect: An Overestimation of Capability – Verywell Mind 

For more on biases, please visit our other articles on Biases and Psychology.


A brief overview of the cognitive bias and its relation to artificial intelligence 

What is the anchoring effect? 

The anchoring effect is a cognitive bias that occurs when people rely too much on the first piece of information they receive (the anchor) when making decisions or judgments. The anchor influences how people interpret subsequent information and adjust their estimates or expectations.  

An example of the anchoring effect is when people are asked to estimate the number of countries in Africa, and they are given a high or low number as a hint. For instance, if they are told that there are 15 countries in Africa, they may guess a lower number than if they are told that there are 55 countries in Africa. The hint serves as an anchor that influences their estimation, even though it has no relation to the actual number of countries in Africa (which is 54). 
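The pull of an anchor can be illustrated with a toy model in which a person's final estimate blends the anchor with what they would have guessed on their own. The weighting below is an assumption for illustration, not a measured psychological constant:

```python
def anchored_estimate(anchor, prior_guess, anchor_weight=0.4):
    """Toy model: final estimate is a weighted blend of the anchor
    and what the person would have guessed without it."""
    return anchor_weight * anchor + (1 - anchor_weight) * prior_guess

unanchored_guess = 50      # hypothetical guess with no hint given
low = anchored_estimate(anchor=15, prior_guess=unanchored_guess)
high = anchored_estimate(anchor=55, prior_guess=unanchored_guess)
print(f"Low-anchor estimate:  {low:.0f}")   # pulled below the unanchored guess
print(f"High-anchor estimate: {high:.0f}")  # pulled toward the high anchor
```

Even though both anchors are arbitrary, the low anchor drags the estimate down and the high anchor pushes it up, mirroring the countries-in-Africa example.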

How can AI influence the anchoring effect? 

Artificial intelligence (AI) can influence the anchoring effect in various ways, depending on how it is used and perceived by humans. For instance, AI can provide anchors to humans through its outputs, such as recommendations, predictions, or evaluations. If humans trust or rely on the AI’s outputs, they may adjust their judgments or decisions based on the anchors, even if they are inaccurate or biased. Alternatively, AI can also be influenced by the anchoring effect, if it is trained or designed with human-generated data or feedback that contains anchors. For example, if an AI system learns from human ratings or reviews that are skewed by the anchoring effect, it may reproduce or amplify the bias in its outputs. 

What are some possible implications and solutions? 

The anchoring effect, as amplified or mediated by AI, can have significant implications across domains such as business, education, health, and social interaction: it can affect how people negotiate prices, evaluate products or services, assess risks and opportunities, and form opinions or beliefs. It also raises ethical and moral concerns, such as influencing judgments of fairness, justice, or responsibility, or affecting people’s autonomy, privacy, or dignity. It is therefore important to be aware of the interplay between the anchoring effect and AI, and to seek ways to mitigate or prevent it. Some possible solutions include: 

  • Providing multiple sources of information or perspectives and encouraging critical thinking and comparison. 
  • Increasing the transparency and explainability of the AI’s outputs and allowing users to question or challenge them. 
  • Ensuring the quality and diversity of the data or feedback that the AI uses or receives and avoiding or correcting any anchors or biases. 
  • Educating and empowering users to understand the anchoring effect and AI, and to make informed and autonomous decisions. 

For more on biases, please visit our other articles on Biases and Psychology.


A brief overview of the potential effects of artificial intelligence on human cognition 


Artificial intelligence (AI) is the field of computer science that aims to create machines and systems that can perform tasks normally requiring human intelligence, such as perception, reasoning, learning, decision-making, and natural language processing. AI has become increasingly prevalent and influential in many domains of human activity, such as education, health, entertainment, commerce, and social media. However, AI also poses challenges and risks for human cognition, especially in relation to confirmation bias. 

What is confirmation bias? 

Confirmation bias is the tendency to seek, interpret, and remember information that confirms one’s preexisting beliefs or hypotheses while ignoring or discounting information that contradicts them. Confirmation bias can affect various aspects of human cognition, such as memory, perception, reasoning, and decision-making. Confirmation bias can lead to errors in judgment, distorted views of reality, and resistance to change. Confirmation bias can also influence how people interact with others who have different opinions or perspectives, resulting in polarization, conflict, and echo chambers. 

How can AI influence confirmation bias? 

AI can influence confirmation bias in several ways, depending on how it is designed, used, and regulated. Some of the possible effects of AI on confirmation bias are: 

  • AI can amplify confirmation bias by providing personalized and tailored information that matches the user’s preferences, interests, and beliefs while filtering out or minimizing information that challenges or contradicts them. For example, AI algorithms can recommend news, products, videos, or social media posts that align with the user’s views, creating a feedback loop that reinforces and strengthens the user’s confirmation bias. 
  • AI can mitigate confirmation bias by providing diverse and balanced information that exposes the user to different perspectives, opinions, and evidence while highlighting the uncertainty, ambiguity, and complexity of the information. For example, AI systems can suggest alternative sources, viewpoints, or explanations that challenge the user’s assumptions, or prompt the user to reflect on their own biases and motivations. 
  • AI can exploit confirmation bias by manipulating the user’s emotions, beliefs, and behaviors while concealing or disguising the AI’s intentions, goals, and methods. For example, AI agents can use persuasive techniques, such as framing, anchoring, or priming, to influence the user’s decisions, actions, or opinions, or to elicit the user’s trust, loyalty, or compliance. 
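The amplification loop in the first bullet can be sketched with a toy recommender over hypothetical topics and click data: topics the user already engages with get recommended and clicked again, while everything else gradually disappears from view:

```python
from collections import Counter

def recommend(preferences, k=3):
    """Toy recommender: surface the k topics the user engaged with most."""
    return [topic for topic, _ in preferences.most_common(k)]

def simulate_feedback_loop(clicks, rounds=5):
    """Each round, recommend the top topics and assume the user clicks
    them all, reinforcing the same topics (a filter-bubble sketch)."""
    prefs = Counter(clicks)
    for _ in range(rounds):
        for topic in recommend(prefs):
            prefs[topic] += 1
    return prefs

# Hypothetical initial click history: "cooking" is clicked once but
# never makes the top-3, so it is never reinforced again.
prefs = simulate_feedback_loop(["politics", "politics", "sports", "tech", "cooking"])
print(prefs.most_common())
```

After a few rounds the already-popular topics dominate and the long tail is starved of exposure, which is the structural core of the feedback loop described above.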


AI can have both positive and negative effects on human cognition, depending on how it is designed, used, and regulated. AI can either amplify, mitigate, or exploit confirmation bias, which is a common and pervasive cognitive bias that affects how people seek, interpret, and remember information. Therefore, it is important to be aware of the potential impacts of AI on confirmation bias, and to adopt critical thinking skills, ethical principles, and social norms that can help prevent or reduce the harmful consequences of confirmation bias. 

For more on biases, please visit our other articles on Biases and Psychology.


Here’s a list of common cognitive biases: 

  1. Apophenia: Perceiving false connections [1]. 
  2. Availability heuristic: Biased by memory accessibility [1]. 
  3. Cognitive dissonance: Discomfort from holding contradictory beliefs [1]. 
  4. Confirmation bias: Seeking evidence for one’s own beliefs [1]. 
  5. Egocentric bias: Overestimating one’s own perspective [1]. 
  6. Framing effect: Influenced by how information is presented [1]. 
  7. Hindsight bias: Seeing past events as predictable [1]. 
  8. Illusory superiority: Overestimating one’s own qualities [1]. 
  9. Loss aversion: Preferring to avoid losses [1]. 
  10. Negativity bias: Focusing on negative information [1]. 
  11. Omission bias: Judging harmful actions as worse than equally harmful omissions [1]. 
  12. Optimism bias: Expecting positive outcomes [1]. 
  13. Self-serving bias: Claiming credit for successes [1]. 
  14. Anchoring bias: Over-reliance on the first information received [1]. 
  15. Memory bias: Distortion of memory recall [1]. 
  16. Recency effect: Remembering the last items better [1]. 

These biases can influence our beliefs and actions daily. They can affect how we think, how we feel, and how we behave [3]. It’s important to be aware of these biases as they can distort our thinking and decision-making processes [2] [3]. 

For more on biases, please visit our other articles on Biases and Psychology.


  1. Examples of cognitive biases 
  2. Cognitive Bias List: 13 Common Types of Bias – Verywell Mind 
  3. 12 Common Biases That Affect How We Make Everyday Decisions 
  4. List of Cognitive Biases and Heuristics – The Decision Lab 
  5. Cognitive Bias 101: What It Is and How To Overcome It 
An AI image of furniture being assembled

The IKEA Effect is a cognitive bias where consumers place a disproportionately high value on products they have partially created or assembled [2]. This effect is named after the Swedish furniture company IKEA, which sells many items of furniture that require assembly [2]. 

The IKEA Effect suggests that when people invest their own time and effort into creating or assembling something, they tend to value it more highly, even if the result is not perfect [2]. This is because the act of creation or assembly gives people a sense of accomplishment and ownership, which in turn increases their appreciation of the product [2]. 

For example, a person might value a piece of IKEA furniture that they assembled themselves more highly than a similar piece of furniture that was pre-assembled, even if the self-assembled furniture has minor flaws or imperfections [2]. 

This effect has been leveraged by various businesses and marketers, who involve consumers in the creation or customization process to enhance their attachment to the products and the value they perceive in them [4]. However, it is important to note that this effect can also lead to irrational decision-making, as people might overvalue their own creations and undervalue others’ [2]. 

The IKEA Effect illustrates how our perceptions of value can be influenced by our own involvement in the creation process [2]. It is a fascinating aspect of consumer psychology that has significant implications for product design, marketing, and consumer behavior [2]. 

For more on biases, please visit our other articles on Biases and Psychology.


  1. IKEA effect – Wikipedia 
  2. What is the IKEA Effect? — updated 2024 | IxDF 
  4. The “IKEA Effect”: When Labor Leads to Love – Harvard Business School 
An AI image of multiple colors in geometric shapes.

The Illusory Truth Effect plays a significant role in the spread of misinformation. This cognitive bias makes people more likely to believe something is true if they hear it repeatedly, and it can shape how people process and evaluate information, especially when they are uncertain or lack knowledge. It contributes to misinformation in several ways: 

  1. Repetition: Misinformation often spreads when false statements are repeated frequently. This repetition can make the information seem more familiar, and therefore more believable, even if it is not true. 
  2. Social media: On platforms like Facebook and Twitter, false information can be shared and reshared, reaching a large audience quickly. Each time a user sees the same false information, it may seem truer due to the Illusory Truth Effect. 
  3. Confirmation bias: People are more likely to believe information that confirms their existing beliefs, even if it is false. When this information is repeated, it reinforces these beliefs, making it harder to correct the misinformation. 
  4. Fake news: Fake news articles often contain false information that is repeated to make it seem true. The Illusory Truth Effect can make readers more likely to believe these false statements. 
  5. Propaganda: The Illusory Truth Effect is often used in propaganda. By repeating certain messages, propagandists can make their audience believe certain ideas, even if they are not based on truth. 
  6. Misinterpretation: Sometimes, a piece of information starts as true, but gets twisted or misinterpreted as it is shared and reshared. Repeated exposure to the misinformation can make people believe the false version. 
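The repetition mechanism above can be sketched as a toy model in which perceived truth rises with exposure count at a diminishing rate. The formula and numbers are purely illustrative, not an empirical result:

```python
import math

def perceived_truth(base_credibility, exposures):
    """Toy model: each repeated exposure nudges perceived truth upward
    with diminishing returns, capped at 1.0 (illustrative only)."""
    boost = 0.1 * math.log1p(exposures)
    return min(1.0, base_credibility + boost)

claim = 0.3  # hypothetical prior credibility of a false claim
for n in (0, 1, 5, 20):
    print(f"{n:2d} exposures -> perceived truth {perceived_truth(claim, n):.2f}")
```

The point of the sketch is only the shape of the curve: a claim that starts out doubtful creeps toward "believable" purely through repetition, with no new evidence involved.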

To combat the Illusory Truth Effect and the spread of misinformation, it is important to fact-check information, consider the source, and be aware of our own biases. It is also helpful to promote media literacy and critical thinking skills. 

For more on biases, please visit our other articles on Biases and Psychology.

An AI generated image of several colors in geometric shapes

A brief overview of cognitive biases and their effects 

What are cognitive biases? 

Cognitive biases are systematic patterns of deviation from rational judgment. A closely related phenomenon is cognitive dissonance: the mental unease or strain that arises when a person holds two or more conflicting or incompatible beliefs, values, or behaviors at the same time. For example, a person who is concerned about the environment but drives a fuel-consuming car may feel cognitive dissonance. 

Cognitive dissonance is often tied to heuristics: simple rules of thumb or mental shortcuts that people use to make fast, instinctive judgments or decisions, often based on experience or common sense. For example, a person who wants to buy a product may use the heuristic of choosing the most popular or most expensive option, assuming it is the best quality or value. Heuristics can be helpful and effective when there is not enough time or information for a more careful analysis or evaluation of the situation. 

To reduce cognitive dissonance, people may change their beliefs, attitudes, or actions to make them more coherent, or rationalize and justify their behavior by downplaying its negative effects or highlighting its positive aspects. Alternatively, they may avoid information or situations that question or contradict their existing views and create dissonance. Cognitive dissonance can influence decision-making, motivation, and self-esteem. It can also lead to confirmation bias, as people look for evidence that confirms their preferred choices or beliefs and disregard or devalue evidence that opposes them. 

Cognitive biases can skew a person’s thoughts in several ways 

  • Distorting the perception of reality and the evaluation of evidence. 
  • Impairing the ability to reason logically and objectively. 
  • Reducing the willingness to consider alternative perspectives or update one’s beliefs. 
  • Influencing the formation of stereotypes and prejudices. 
  • Affecting the quality of decision-making and problem-solving. 
  • Increasing the likelihood of errors and mistakes. 

Common examples of cognitive biases 

  • Confirmation bias: the tendency to seek, interpret, and remember information that confirms one’s preexisting beliefs or hypotheses. 
  • Availability heuristic: the tendency to judge the frequency or probability of an event based on how easily examples come to mind. 
  • Anchoring effect: the tendency to rely too much on the first piece of information that is given when making decisions or estimates. 
  • Hindsight bias: the tendency to overestimate one’s ability to have predicted an outcome after it has occurred. 
  • Frequency bias: also known as the Baader–Meinhof phenomenon or the frequency illusion, the tendency to perceive something as more frequent based on how recently or vividly it was encountered, rather than on objective data. For example, a person might think that shark attacks are quite common after watching a movie or hearing a news report about them, even though they are statistically rare. This bias can affect how people assess risks, make decisions, or form opinions based on availability rather than accuracy. 
  • Survivorship bias: the tendency to focus on the successful cases or outcomes while ignoring the failures or non-survivors, thus creating a distorted view of reality. For example, a person might think that entrepreneurship is easy and profitable after reading stories of successful founders while neglecting the fact that most startups fail. This bias can affect how people evaluate their chances of success, learn from the past, or make decisions based on incomplete information. 
  • Fundamental attribution error: the tendency to attribute other people’s behavior to their personality or disposition, while ignoring the situational factors that may have influenced them. 

For more on biases, please visit our other articles on Biases and Psychology.