An AI-generated image of geometric shapes in several colors

A brief overview of cognitive biases and their effects 

What are cognitive biases? 

Cognitive dissonance: the mental discomfort or strain that arises when a person holds two or more conflicting or incompatible beliefs, values, or behaviors at the same time. For example, a person who is concerned about the environment but drives a fuel-hungry car may feel cognitive dissonance.

Cognitive dissonance is often caused by heuristics: simple rules of thumb or mental shortcuts that people use to make fast, instinctive judgments or decisions, often based on experience or common sense. For example, a person who wants to buy a product may use the heuristic of choosing the most popular or most expensive option, assuming it is the best quality or value. Heuristics can be helpful and efficient when there is not enough time or information for a more careful analysis or evaluation of the situation. 

To reduce cognitive dissonance, people may change their beliefs, attitudes, or actions to make them more consistent, or rationalize their behavior by downplaying its negative effects and highlighting its positive aspects. Alternatively, they may avoid information or situations that contradict their existing views and create dissonance. Cognitive dissonance can influence decision-making, motivation, and self-esteem. It can also feed confirmation bias, as people seek out evidence that supports their preferred choices or beliefs and disregard or devalue evidence that opposes them. 

Cognitive biases can skew a person’s thoughts in several ways 

  • Distorting the perception of reality and the evaluation of evidence. 
  • Impairing the ability to reason logically and objectively. 
  • Reducing the willingness to consider alternative perspectives or update one’s beliefs. 
  • Influencing the formation of stereotypes and prejudices. 
  • Affecting the quality of decision-making and problem-solving. 
  • Increasing the likelihood of errors and mistakes. 

Common examples of cognitive biases 

  • Confirmation bias: the tendency to seek, interpret, and remember information that confirms one’s preexisting beliefs or hypotheses. 
  • Availability heuristic: the tendency to judge the frequency or probability of an event based on how easily examples come to mind. 
  • Anchoring effect: the tendency to rely too much on the first piece of information that is given when making decisions or estimates. 
  • Hindsight bias: the tendency to overestimate one’s ability to predict an outcome after it has occurred. 
  • Frequency bias: also known as the Baader–Meinhof phenomenon or the frequency illusion, the tendency to notice something far more often after first becoming aware of it, creating the false impression that it has become more common. For example, a person who learns a new word may suddenly seem to encounter it everywhere, even though its actual frequency has not changed. This bias can affect how people assess risks, make decisions, or form opinions, because felt frequency substitutes for objective data. 
  • Survivorship bias: the tendency to focus on the successful cases or outcomes, while ignoring the failures or non-survivors, thus creating a distorted view of reality. For example, a person might think that entrepreneurship is easy and profitable after reading stories of successful founders while neglecting the fact that most startups fail. This bias can affect how people evaluate their chances of success, learn from the past, or make decisions based on incomplete information. 
  • Fundamental attribution error: the tendency to attribute other people’s behavior to their personality or disposition, while ignoring the situational factors that may have influenced them. 

For more on biases, please visit our other articles on Biases and Psychology.

AI generated image of a concert at a stadium

How a rap song inspired a phenomenon of obsessive fandom and online activism 

What is Stan Culture? 

Stan culture is a term that describes the behavior and attitude of fans who are extremely devoted to a certain celebrity, artist, or media franchise. The word “stan” is a blend of “stalker” and “fan”, and it was popularized by Eminem’s 2000 song “Stan”, which tells the story of a fan who becomes obsessed with the rapper and ends up killing himself and his pregnant girlfriend. The song was a critical and commercial success, and it introduced the concept of a “stan” to the mainstream audience. 

How Stan Culture Evolved 

Since the release of Eminem’s song, the term “stan” has been adopted by various fan communities, especially on social media platforms like Twitter, Instagram, and TikTok. Stan culture is characterized by the intense loyalty and admiration that fans have for their idols and the tendency to defend them from criticism or perceived attack. Stan culture also involves creating and consuming fan-made content, such as memes, videos, fan art, and fan fiction, that celebrate and promote the idol’s work and personality. Some fans even adopt the idol’s name, style, or catchphrases as part of their online identity. 

Who are the Stans? 

Stan culture is not limited to any specific genre, industry, or demographic. There are stans for musicians, actors, athletes, politicians, influencers, and even fictional characters. Some of the most prominent examples are the fans of Taylor Swift, who call themselves “Swifties”; the fans of Beyoncé, known as the “Beyhive”; and the fans of BTS, who call themselves “ARMY”. These fan groups are known for their massive online presence, their ability to mobilize in support of their idols, and their fierce rivalries with other fan groups. Stan culture can also be seen in the political sphere, where supporters of certain candidates or parties display a similar level of devotion and activism: supporters of Bernie Sanders have been dubbed “Bernie Bros”, supporters of Donald Trump rally under “MAGA”, and fans of Alexandria Ocasio-Cortez are sometimes called the “AOC Squad”. 

What are the Pros and Cons of Stan Culture? 

Stan culture can have both positive and negative effects on fans, idols, and society. On the positive side, it can provide a sense of belonging, identity, and community for fans, who can connect with like-minded people and share their passion and enthusiasm. It can also inspire creativity, activism, and generosity, as fans create and consume fan-made content, participate in social movements and campaigns, and donate to charities and causes their idols endorse. Stan culture can also benefit the idols themselves, who gain exposure, recognition, and support from a loyal fan base. 

On the negative side, stan culture can lead to unhealthy, toxic, and obsessive behavior, such as cyberbullying, harassment, stalking, and doxxing. Some fans cross the line between admiration and obsession and invade the privacy, safety, and personal lives of their idols and their idols’ rivals. Others develop unrealistic expectations, idealizations, and parasocial relationships with their idols and lose touch with reality and their own identity. Stan culture can also create division, hostility, and intolerance among fan groups, who may trade insults and threats in prolonged online wars. Finally, it can harm the idols themselves, who face pressure, stress, and backlash from a demanding and critical fan base. 

Conclusion 

Stan culture is a phenomenon that has emerged and evolved in the digital age, where fans can access and interact with their idols and fellow fans more easily and frequently than ever before. It can be seen as a form of expression, appreciation, and empowerment, but also as a form of obsession, fanaticism, and extremism. Its impact on fans, idols, and society, positive or negative, depends on how it is practiced and perceived. Stan culture is a complex and dynamic phenomenon that reflects the changing nature of fandom and celebrity in the 21st century. 

For more on fan culture and related topics, please visit our other articles on Psychology.

An AI-generated image of a laptop with job listings at a coffee shop

Here are some skills you should consider developing: 

  1. Cloud Computing: Skills in cloud computing are in great demand due to the increasing number of companies moving business functions to the cloud [1]. 
  2. Artificial Intelligence: Experts in the fields of artificial intelligence (AI) and machine learning are in high demand [1]. 
  3. Sales Leadership: Sales leadership experience opens opportunities in many different industries [1]. 
  4. Analysis: Companies look for employees who can investigate a problem and find the ideal solution efficiently and on time [1]. 
  5. Growth Mindset: Embrace a growth mindset and be open to learning new things [2]. 
  6. Continuous Learning: Keep up with new trends, technologies, and techniques in your field [2]. 
  7. Transferable Skills: Develop transferable skills that are useful in multiple jobs and industries, such as communication, leadership, problem-solving, and time management [2]. 
  8. Online Presence: Build a strong online presence that showcases your skills, achievements, and expertise [2]. 
  9. Networking: Networking is crucial in a changing job market [2]. 
  10. Additional Skill Sets: Analytical thinking, creative thinking, leadership and social influence, AI and big data, and curiosity and lifelong learning will also help you stay relevant in the changing landscape [3]. 

Remember, the key to staying relevant is being proactive in your learning, developing a growth mindset, building a strong online presence, developing transferable skills, staying informed about industry trends, and being flexible and adaptable [2]. The job market is evolving rapidly, so continuous learning and adaptation are crucial for success. 

Sources:  

  1. 20 In-Demand Skills for Today’s Work Environment | Indeed.com 
  2. How to Stay Relevant in a Changing Job Market | EVONA 
  3. Gen AI is here to stay — here are 5 skills to help you stay relevant in … 
  4. How to Stay Relevant in a Rapidly Changing Job Market 
A computer-generated image of job listings on a laptop at a coffee shop 

Artificial Intelligence (AI) is expected to have a significant impact on the job market 

Here are some insights: 

  1. Bank Tellers: One of the most at-risk jobs is bank tellers [1]. 
  2. Clerical or Secretarial Roles: Many clerical or secretarial roles are seen as likely to decline quickly because of AI [1]. 
  3. Customer Service: AI chatbots could soon be more intelligent than humans, potentially impacting customer service roles [2]. 
  4. Manufacturing, Construction, Professional, Scientific and Technical Services, and Information and Communications: These sectors are most at risk, with AI potentially replacing a sizable number of jobs [3]. 
  5. Low and Middle-skilled Jobs: The OECD report suggests that low and middle-skilled jobs are most at risk [5]. 

However, it is important to note that AI is also expected to create new jobs. Roles for AI and machine learning specialists, data analysts and scientists, and digital transformation specialists are expected to grow rapidly [1]. Furthermore, AI can help reduce tedious and dangerous tasks, leading to greater worker engagement and physical safety [5]. 

This is a rapidly evolving field and the impact of AI on jobs can change as technology advances. It is also worth noting that many believe the key to navigating these changes is reskilling and upskilling the workforce to work effectively with AI-infused processes [1]. 

Sources:  

  1. The jobs most likely to be lost and created because of AI | World … 
  2. AI: Which jobs are most at risk from the technology? – BBC News 
  3. AI Takeover: Jobs at High Risk of Disappearing in 10 Years 
  4. OECD finds nearly a third of jobs are under threat from AI 
  5. 85 Million Jobs Under Threat – Navigating the Impact of AI on Workforce 
An AI-generated image of a robot holding a lock 

The Consumer Protections for Artificial Intelligence Act in Colorado, also known as SB24-205, is a pioneering piece of legislation aimed at regulating high-risk AI systems [4]. The law was signed by Colorado Governor Jared Polis on May 17, 2024 [4] and is set to take effect on February 1, 2026 [4]. This law makes Colorado the first state in the nation to enact broad restrictions on private companies using AI [2]. 

The Act imposes a duty of “reasonable care” on both developers and deployers of high-risk AI systems to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination [6]. Algorithmic discrimination refers to unlawful differential treatment or impact that disfavors a person based on actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or another classification protected under Colorado or federal law [6]. 

For developers, the Act requires them to: 

  • Use reasonable care to avoid algorithmic discrimination in the high-risk system [1]. 
  • Make available to a deployer of the high-risk system a statement disclosing specified information about the high-risk system [1]. 
  • Disclose to the attorney general and known deployers of the high-risk system any known or reasonably foreseeable risk of algorithmic discrimination, within 90 days after the discovery or receipt of a credible report from the deployer [1]. 

For deployers, the Act requires them to: 

  • Use reasonable care to avoid algorithmic discrimination in the high-risk system [1]. 
  • Implement a risk management policy and program for the high-risk system [1]. 
  • Annually review the deployment of each high-risk system to ensure that it is not causing algorithmic discrimination [1]. 
  • Provide a consumer with an opportunity to correct any incorrect personal data that a high-risk artificial intelligence system processed in making a consequential decision [1]. 
  • Provide a consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision arising from the deployment of a high-risk artificial intelligence system [1]. 

This Act is a significant step toward ensuring fairness and transparency in the use of AI systems, particularly those that make consequential decisions affecting consumers [1]. 

Sources: 

  1. Colorado Enacts Groundbreaking AI Consumer Protection Legislation … 
  2. Colorado Passes Consumer Protections in Interactions with AI 
  3. Colorado Adopts Comprehensive AI Act Imposing Broad Disclosure Requirements 
  4. Consumer Protections for Artificial Intelligence | Colorado General … 
  5. Colorado Adopts Comprehensive AI Act Imposing Broad Disclosure Requirements 
  6. Colorado’s Artificial Intelligence Act: What Employers Need to Know 
An AI-generated image of a gavel and law books 

Promotion of Open Science and Innovation 

  • The laws could encourage open science by setting standards for transparency and accountability [2]. 
  • They could also stimulate innovation by creating a clear regulatory framework within which AI developers can operate [2]. 

Impact on High-Risk AI Applications 

  • Companies developing AI applications considered to pose a “high risk” to fundamental rights, such as those meant to be used in sectors like education, health care, and policing, will have to meet new EU standards [3]. 

Exemption for Research and Development 

  • The European Parliament added a clause to the draft act that would exempt AI models developed purely for research, development, or prototyping [2]. This could ensure that the act does not negatively affect research [2]. 

Challenges for General-Purpose AI Models 

  • The EU AI Act strictly regulates general-purpose models, which have broad and unpredictable uses [2]. This could pose challenges for the development and deployment of such models [2]. 

Potential Stifling of Innovation 

  • Some researchers worry that the laws could stifle innovation by imposing stringent regulations on AI development [2]. 

Addressing Concentration of Power 

  • The laws address the concentration of power among private corporations in AI development and the associated risks to individual rights and the rule of law [4]. 

These impacts could shape the future of AI research and development, influencing how AI technologies are developed, used, and regulated. However, the exact impact of these laws will depend on their final form and how they are implemented in practice [1] [2] [3] [4]. This is a fast-moving area, so check the latest sources for the most current information. 

Sources:  

  1. What the EU’s tough AI law means for research and ChatGPT – Nature 
  2. What’s next for AI regulation in 2024? | MIT Technology Review 
  3. Law and the Governance of Artificial Intelligence | SpringerLink 
  4. Artificial Intelligence: Overview, Recent Advances, and Considerations … 
  5. https://crsreports.congress.gov 
An AI-generated image of Parliament in London 

Here are some key points: 

  • The UK government has published a white paper detailing plans for implementing a pro-innovation approach to AI regulation [1]. This paper sets out the government’s proposals for a proportionate, future-proof, and pro-innovation framework for regulating AI [1]. 
  • The UK Science and Technology Framework identifies AI as one of five critical technologies and notes that regulation plays a key role in creating an environment for AI to flourish [1]. 
  • The government aims to help the UK harness the opportunities and benefits that AI technologies present. This will drive growth and prosperity by boosting innovation and investment and building public trust in AI [1]. 
  • The UK’s approach is based on six core principles that regulators must apply, with flexibility to implement these in ways that best meet the use of AI in their sectors [2]. 
  • The UK is taking a less centralized approach than the EU, focusing on supporting growth and avoiding unnecessary barriers for businesses [2] [3]. 
  • Currently, a range of legislation and regulation applies to AI in the UK – such as data protection, consumer protection, product safety and equality law, and financial services and medical devices regulation – but there is no overarching framework that governs its use [4]. 

Please note that this is a rapidly evolving field and the information provided is based on the latest available data. For more detailed information, you may want to refer to the respective government or legislative websites. 

Sources:  

  1. AI regulation: a pro-innovation approach – GOV.UK 
  2. UK sets out proposals for new AI rulebook to unleash innovation and … 
  3. The Ecosystem: UK companies welcome pro-innovation pitch for new AI … 
  4. AI laws inevitable but not right for today, says UK government 
  5. What is the AI Act — and what does it mean for the UK? 
An AI-generated image of a gavel and law books 

United States 

  • The US has seen a surge in legislative activity related to potential AI risks and harms [5]. States are adopting regulations responding to concerns around many types of AI systems and contexts [5]. 
  • The Biden-Harris Administration issued an Executive Order to strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world [6]. 
  • There have been some notable legislative efforts to regulate the use of AI, such as the American Data Privacy and Protection Act (ADPPA) and the Algorithmic Accountability Act of 2022 (AAA) [7]. 

Canada 

  • In June 2022, the Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022 [1]. The AIDA aims to ensure that AI systems deployed in Canada are safe and non-discriminatory [2]. 
  • The AIDA requires businesses to implement new governance mechanisms and policies that consider and address the risks of their AI systems and give users enough information to make informed decisions [2]. 

European Union 

  • The EU has approved a ground-breaking law, the Artificial Intelligence Act, aiming to harmonize rules on artificial intelligence [10]. This legislation follows a ‘risk-based’ approach: the higher the risk of harm to society, the stricter the rules [10]. 
  • The Act categorizes different types of artificial intelligence according to risk. AI systems presenting only limited risk would be subject to very light transparency obligations, while high-risk AI systems would be authorized but subject to a set of requirements and obligations to gain access to the EU market [10]. 

Please note that this is a rapidly evolving field and the information provided is based on the latest available data. For more detailed information, you may want to refer to the respective government or legislative websites. 

Sources:  
An AI-generated image of a squirrel on a bench on a college campus 

A brief overview of some websites and initiatives that assist companies and educational institutions in developing AI use and education policies: 

  1. AI for Education [1]: AI for Education provides guidance and support for crafting practical AI policies. As educators and students increasingly adopt AI, it’s crucial to develop policies and guidelines that ensure ethical use. Whether you need targeted strategic advice or tailored end-to-end support, AI for Education can help you formulate governance frameworks specific to your school or district’s needs. 
  2. TeachAI [2]: TeachAI is an initiative led by Code.org, ETS, the International Society for Technology in Education, Khan Academy, and the World Economic Forum. It unites education and technology leaders to assist governments and education authorities in teaching with and about AI. While not a website specifically for policy development, TeachAI offers valuable resources and insights related to AI in education. 
  3. Office of Educational Technology (OET) [3]: The OET focuses on developing policies and supports for the effective, safe, and fair use of AI-enabled educational technology. Although their primary focus is broader than just policy development, they contribute to the conversation around AI in education. 
  4. Anthology’s AI Policy Framework [4]: Anthology, an edtech firm, released a six-page AI policy framework designed to support higher education institutions interested in developing their own policies around the ethical use of AI. The framework aligns with the AI Risk Management Framework from the National Institute of Standards and Technology. 

Remember to explore these resources further to tailor your policies to your organization’s unique context and requirements. 

Sources:  

  1. AI Policy Development — AI for Education 
  2. Foundational Policy Ideas for AI in Education 
  3. Artificial Intelligence – Office of Educational Technology 
  4. Trying to create a university AI policy? There’s a framework for that … 
An AI-generated image of a person standing in a doorway with a keyhole shadow 

A guide for companies that want to leverage AI without compromising data security 

Introduction 

Artificial intelligence (AI) tools can help companies improve efficiency, accuracy, innovation, and customer satisfaction. However, using AI also comes with challenges and risks, especially when it involves sensitive or personal data, also referred to as personally identifiable information (PII). Data breaches, cyberattacks, privacy violations, and ethical lapses are some of the potential threats that companies should be aware of when they adopt AI tools. 

This document aims to provide guidance and best practices for companies that want to protect their data when they use AI tools. It covers the following topics: 

  • Why data protection is important for AI 
  • What are the main data protection challenges and risks for AI 
  • What are the key data protection principles and standards for AI 
  • What are some practical data protection strategies and solutions for AI 

Why Data Protection is Important for AI 

Data is the fuel for AI: without data, AI tools cannot learn, train, or perform their tasks. Data is also the output of AI, since AI tools generate, analyze, and process data to provide insights, recommendations, or decisions. Data protection is therefore crucial for AI, on both the input and the output side. 

Data protection is important for AI for several reasons: 

  • It ensures the quality and reliability of the data and the AI tools. Data protection can prevent data corruption, manipulation, or loss, which would undermine the accuracy, validity, or performance of the AI tools, and it helps preserve the integrity, consistency, and completeness of the data. 
  • It safeguards the rights and interests of data subjects and data owners. Data protection can prevent unauthorized access, use, or disclosure of the data, which would violate the privacy, confidentiality, or consent of data subjects and owners, and it protects the intellectual property, trade secrets, and competitive advantage of data owners. 
  • It meets the legal and ethical obligations and expectations of data users and regulators. Data protection helps ensure that the data and the AI tools follow the relevant laws, regulations, standards, and guidelines governing data collection, processing, storage, and sharing, and that they align with the ethical principles, values, and norms that guide data governance, accountability, and transparency. 

What are the Main Data Protection Challenges and Risks for AI? 

Because of its features, capabilities, and uses, AI poses some distinct and complicated data protection challenges and risks. Some of the main ones are: 

  • Data volume and variety. AI tools often require large and diverse datasets to learn, train, or perform their tasks. This increases the complexity and difficulty of data protection, as the data may come from various sources, formats, or domains and may contain distinct types of information, such as personal, sensitive, or confidential data. 
  • Data processing and sharing. AI tools often involve complex and dynamic data processing and sharing activities, such as data extraction, transformation, integration, analysis, or dissemination. This increases the exposure and vulnerability of the data, as it may be transferred, stored, or accessed by different parties, platforms, or systems, each subject to different policies, protocols, or standards. 
  • Data interpretation and application. AI tools generate, analyze, or process data to produce insights, recommendations, or decisions that may have significant consequences for data subjects, owners, or users. This raises the stakes for responsibility and accountability, since the outputs may affect the rights, interests, or obligations of those parties and may raise ethical, legal, or social issues. 

What are the Key Data Protection Principles and Standards for AI? 

Data protection for AI should follow some key principles and standards, which provide a framework and a benchmark for data protection practices and policies. Some of the key ones are: 

  • Data security. Implement measures to protect the data from unauthorized or unlawful access, use, disclosure, alteration, or destruction. Monitor and report any data breaches or incidents, and take remedial action as soon as possible. 
  • Data minimization. Collect, process, store, or share only the minimum amount and type of data that is necessary, relevant, and adequate for the purpose and scope of the AI tools, and delete or anonymize data that is no longer needed, outdated, or inaccurate (see the sketch after this list). 
  • Data privacy. Respect and uphold the privacy rights and preferences of data subjects, and obtain their informed and explicit consent before collecting, processing, storing, or sharing their data. Give data subjects the option to access, correct, or delete their data, or to withdraw their consent, at any time. 
  • Data transparency. Disclose and explain the sources, methods, purposes, and outcomes of the data and the AI tools, and provide clear and accurate information to data subjects, owners, and users. Enable and facilitate oversight, audits, and reviews, and address any questions, concerns, or complaints. 
  • Data accountability. Assign and accept the roles, responsibilities, and liabilities for the data and the AI tools, and ensure that both comply with the relevant laws, regulations, standards, and guidelines. Evaluate and assess the impacts and risks, and implement measures to prevent, mitigate, or remedy any harm. 
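To make the minimization principle concrete, here is a minimal Python sketch of filtering a record down to an approved set of fields before it is handed to an AI tool. The field names, the allow-list, and the record are hypothetical examples, not a prescribed schema.

```python
# Minimal data-minimization sketch: keep only the fields an AI tool actually
# needs before a record leaves your system. ALLOWED_FIELDS and the record
# below are hypothetical examples.

ALLOWED_FIELDS = {"age_range", "region", "purchase_category"}

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only approved fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

customer = {
    "name": "Jane Doe",           # identifying -> dropped
    "email": "jane@example.com",  # identifying -> dropped
    "age_range": "30-39",
    "region": "EU",
    "purchase_category": "books",
}

print(minimize(customer))
# -> {'age_range': '30-39', 'region': 'EU', 'purchase_category': 'books'}
```

An allow-list is deliberately chosen over a block-list here: fields added to the record later are excluded by default rather than leaked by default.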

What are some Practical Data Protection Strategies and Solutions for AI? 

Data protection for AI requires practical strategies and solutions that implement and operationalize the principles and standards above. Some of them are listed here; short illustrative sketches for the first four follow the list: 

  • Data encryption. Data encryption converts data into a code that can only be read by parties holding the right key. Encryption enhances data security and privacy and helps prevent unauthorized or unlawful access, use, disclosure, alteration, or destruction of the data (first sketch below). 
  • Data anonymization. Data anonymization removes or modifies data that can identify, or be linked to, a specific data subject, such as names, addresses, or phone numbers, so that stored or shared data cannot easily be traced back to an individual (second sketch below). 
  • Data federation. Data federation leaves the data in its original location and format and only provides a virtual view of, or on-demand access to, the data when the AI tools need it. Federation supports data minimization and transparency, since only the necessary, relevant, and adequate data is touched for the purpose and scope of the AI tools (third sketch below). 
  • Data auditing. Data auditing records and tracks activity on the data and the AI tools, such as data collection, processing, storage, sharing, generation, and analysis. Auditing supports accountability and oversight by providing evidence and documentation of compliance, impacts, and risks (fourth sketch below). 
  • Data ethics. Data ethics applies ethical principles, values, and norms, such as fairness, justice, and respect, to the data and the AI tools. It addresses the ethical, legal, and social issues that may arise from how data is interpreted and applied, and helps ensure that the rights and interests of data subjects, owners, and users are respected.
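First, a minimal sketch of encrypting data before it is stored or shared with an AI pipeline. It uses the symmetric Fernet scheme from the third-party cryptography package (pip install cryptography); the key handling is deliberately simplified and would belong in a secrets manager in practice.

```python
# Minimal encryption sketch using the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # symmetric key; store securely, never in code
cipher = Fernet(key)

plaintext = b"customer_id=1234, notes=..."  # hypothetical sensitive payload
token = cipher.encrypt(plaintext)           # ciphertext, safe to store or transmit

# Only holders of the key can recover the original data.
assert cipher.decrypt(token) == plaintext
```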
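Second, a sketch of stripping direct identifiers by replacing them with keyed hashes. Strictly speaking this is pseudonymization rather than full anonymization: records remain linkable to each other, and anyone holding the key could re-identify them, so treat it as one layer of protection. The field names and the secret are hypothetical.

```python
# Minimal pseudonymization sketch: replace direct identifiers with keyed hashes.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-vault"  # hypothetical managed secret
IDENTIFIER_FIELDS = {"name", "email", "phone"}

def pseudonymize(record: dict) -> dict:
    out = dict(record)
    for field in IDENTIFIER_FIELDS & record.keys():
        digest = hmac.new(SECRET_KEY, str(record[field]).encode(), hashlib.sha256)
        out[field] = digest.hexdigest()[:16]     # stable token replacing the raw value
    return out

print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "age": 34}))
```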
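Third, a sketch of the federation idea: instead of bulk-copying datasets into a central store, the AI tool is handed a narrow view that fetches single records from their home systems at query time. The connectors here are stand-in lambdas; real ones would wrap databases or APIs.

```python
# Minimal data-federation sketch: data stays at its source; access is on demand.
from typing import Callable, Dict

class FederatedView:
    def __init__(self, sources: Dict[str, Callable[[str], dict]]):
        self.sources = sources        # source name -> fetch function

    def fetch(self, source: str, record_id: str) -> dict:
        """Fetch one record from its home system at query time (no bulk copy)."""
        return self.sources[source](record_id)

# Hypothetical connectors standing in for real database or API clients.
view = FederatedView({
    "crm":     lambda rid: {"id": rid, "region": "EU"},
    "billing": lambda rid: {"id": rid, "plan": "pro"},
})

print(view.fetch("crm", "42"))
```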
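Finally, a sketch of data auditing: a decorator that records who accessed what and when, so compliance reviews have evidence to work from. The logging setup is intentionally simple; a production system would write to append-only or tamper-evident storage.

```python
# Minimal data-auditing sketch: log every access to a sensitive operation.
import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(action: str):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, user: str, **kwargs):
            audit_log.info("%s | user=%s | action=%s | args=%s",
                           datetime.now(timezone.utc).isoformat(), user, action, args)
            return func(*args, user=user, **kwargs)
        return wrapper
    return decorator

@audited("read_customer_record")
def read_record(record_id: str, *, user: str) -> dict:
    return {"id": record_id}          # stand-in for a real lookup

read_record("42", user="analyst_7")   # emits one audit line before the read
```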