Posts

Robots in a news room
  1. OpenAI Breach: OpenAI’s systems were recently breached. However, the breach was superficial and did not compromise any secret ChatGPT conversations [1]. 
  2. Quantum Rise Funding: Quantum Rise, a Chicago-based startup that provides AI-driven automation for companies, raised a $15 million seed round [1]. 
  3. AI Regulation: The U.S. Supreme Court struck down “Chevron deference,” a 40-year-old ruling on federal agencies’ power [1]. 
  4. AI Deepfakes: YouTube has made changes to make it easier to report and take down AI deepfakes [1]. 
  5. Cloudflare’s AI Bot Tool: Cloudflare launched a tool to combat AI bots [1]. 
  6. Altrove’s New Materials: Altrove, a French startup, is using AI models and lab automation to create new materials [1]. 
  7. AI’s Energy Cost: Google’s environmental report avoided addressing the actual energy cost of AI [1]. 
  8. Generative AI in Metaverse Games: Meta plans to bring more generative AI tech into games, specifically VR (virtual reality), AR (augmented reality), and mixed reality games [1].
  9. Plagiarism Accusations: News outlets are accusing Perplexity of plagiarism and unethical web scraping [1]. 
  10. Hebbia’s Funding: AI startup Hebbia raised $130 million in funding. The company aims to help firms efficiently parse and interpret complex data [5]. 

For more details, refer to the sources below.

                    Sources:  

                    1. AI News & Artificial Intelligence | TechCrunch 
                    2. AI News Today – July 9th, 2024 – The Dales Report 
                    3. The most important AI trends in 2024 – IBM Blog 
                    4. AI News June 2024: In-Depth and Concise 
                    5. June 2024: Top five AI stories of the month – FinTech Futures: Fintech news 

                            Microsoft Copilot

                            About Microsoft Copilot


Microsoft Copilot is a generative artificial intelligence chatbot developed by Microsoft. It launched in February 2023 as Microsoft's primary replacement for the discontinued Cortana. The service was initially called Bing Chat and was built into Microsoft's Bing search engine and Edge web browser.

Microsoft promotes Copilot as a way to boost "productivity, unlock creativity, and help you understand information better with a simple chat experience." It coordinates large language models (LLMs), content in Microsoft Graph, and Microsoft 365 productivity apps such as Word, Excel, PowerPoint, Outlook, and Teams.

                            There are different versions of Copilot:

• Copilot Free: The basic version, which lets you create original content and get answers to questions.
• Copilot Pro: A more robust version for creativity and productivity; it costs $20/month.
• Copilot for Microsoft 365: This version is optimized for your organization's Microsoft 365 Business Standard or Business Premium subscription.

                            Using Microsoft Copilot to generate an image.

Click on Copilot and you will see a field for a prompt. You can experiment with the creativity settings.

Enter your prompt to generate an image.

                            Your image will render

                            Up to four images will render

                            Copilot will make some suggestions to change your image

                            The final output



                            Image of a record player with records

                            Introduction 

                            Artificial intelligence (AI) has been used to create background music, enhance existing songs, or compose original melodies. However, some AI music platforms have been accused of violating the copyrights of major record labels, who claim that the AI-generated music infringes on their original works. 

The plaintiffs, Universal Music Group, Sony Music Entertainment, and Warner Music Group, contend that the AI systems used by the defendants are not capable of generating truly original and creative music and that they rely on musical data and inputs provided by the plaintiffs and other sources. They also assert that their songs have distinctive and recognizable features that are copied or reproduced by the AI systems without authorization.

                            The lawsuits have raised complex and novel legal issues regarding the nature and scope of copyright protection for AI-generated music, and the criteria and standards for determining the originality, creativity, and ownership of AI-generated music. The outcomes of the lawsuits could have significant implications for the future of the AI music industry and the rights and interests of the musicians, composers, producers, and consumers involved. 

                            Examples of AI Music Platforms Sued by Major Record Labels 

                            • Amper Music: Amper Music is an AI music platform that allows users to create custom music for their videos, podcasts, games, or other projects. In 2020, Amper Music was sued by Universal Music Group, Sony Music Entertainment, and Warner Music Group, who alleged that Amper Music’s AI system copied the melodies, rhythms, harmonies, and lyrics of their songs without authorization. 
                            • Mubert: Mubert is an AI music platform that generates adaptive music streams for various scenarios, such as meditation, fitness, gaming, or studying. In 2021, Mubert was sued by Sony Music Entertainment, who claimed that Mubert’s AI system used the samples, loops, and stems of their songs without permission. 
                            • Boomy: Boomy is an AI music platform that enables users to create and sell their own songs, which are generated by an AI system based on the user’s preferences and inputs. In 2021, Boomy was sued by Warner Music Group, who alleged that Boomy’s AI system reproduced the melodies, structures, and styles of their songs without consent. 

                            Conclusion 

                            AI music platforms have been facing legal challenges from major record labels, who argue that the AI-generated music infringes on their copyrights. The lawsuits raise questions about the originality, creativity, and ownership of AI-generated music, and how the existing laws and regulations can address these issues. 

                            AI Generated image of multiple colors with different colored blocks

                            Organizations can measure and track bias in their AI systems by implementing a combination of strategies: 

                            • AI Governance: Establishing AI governance frameworks to guide the responsible development and use of AI technologies, including policies and practices to identify and address bias [1] [2]. 
                            • Bias Detection Tools: Utilizing tools like IBM’s AI Fairness 360 toolkit, which provides a library of algorithms to detect and mitigate bias in machine learning models [1]. 
• Fairness Metrics: Applying fairness metrics that measure disparities in model performance across different groups to uncover hidden biases [3] (a minimal sketch of two such metrics appears below).
                            • Exploratory Data Analysis: Conducting exploratory data analysis to reveal any underlying biases in the training data used for AI models [3]. 
                            • Interdisciplinary Collaboration: Promoting collaborations between AI researchers and domain experts to gain insights into potential biases and their implications in specific fields [4]. 
                            • Diverse Teams: Involving diverse teams in the development process to bring a variety of perspectives and reduce the risk of biased outcomes [5]. 

                            These measures help organizations to actively monitor and mitigate bias, ensuring their AI systems are fair and equitable. 
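
To make the fairness-metrics bullet concrete, here is a minimal sketch, with invented predictions and group labels, that computes two common group-fairness measures by hand:

```python
import numpy as np

# Hypothetical binary predictions (1 = favorable outcome) and a
# protected attribute (1 = privileged group, 0 = unprivileged group).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Rate of favorable outcomes within each group.
rate_priv   = y_pred[group == 1].mean()    # 3/5 = 0.6
rate_unpriv = y_pred[group == 0].mean()    # 2/5 = 0.4

# Statistical parity difference: 0 is ideal; negative values mean the
# unprivileged group receives the favorable outcome less often.
spd = rate_unpriv - rate_priv

# Disparate impact ratio: 1 is ideal; values below ~0.8 are a common
# warning sign (the "four-fifths rule").
di = rate_unpriv / rate_priv

print(f"statistical parity difference: {spd:+.2f}")   # -0.20
print(f"disparate impact ratio: {di:.2f}")            # 0.67
```

Toolkits such as IBM's AI Fairness 360 package these and many other metrics, along with mitigation algorithms, behind a common interface.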

                            Sources: 

                            1. IBM Policy Lab: Mitigating Bias in Artificial Intelligence 

                            2. What Is AI Bias? | IBM 

                            3. Testing AI Models — Part 4: Detect and Mitigate Bias – Medium 

                            4. Mitigating Bias In AI and Ensuring Responsible AI 

                            5. Addressing bias and privacy challenges when using AI in HR 

                            An AI image of a bunny dressed like a Beefeater

                            How technology can create and combat synthetic media 

                            What are Deep Fakes? 

                            Deep Fakes are a type of synthetic media that uses artificial intelligence (AI) to manipulate or generate audio, video, or images. They can create realistic-looking content that appears to show people doing or saying things that they never did or said. For example, a Deep Fake video could show a politician making a controversial statement, a celebrity endorsing a product, or a person’s face swapped with another person’s face. 

Below is an example: an Arnold Schwarzenegger Deep Fake starring in James Cameron's Titanic.

                             

                            How do Deep Fakes work? 

                            Deep Fakes are created by using deep learning, a branch of AI that involves training neural networks on large amounts of data. Neural networks are mathematical models that can learn patterns and features from the data and apply them to new inputs. There are different methods to create Deep Fakes, but one of the most common ones is called generative adversarial networks (GANs). 

                            GANs consist of two neural networks: a generator and a discriminator. The generator tries to create fake content that looks like real content, while the discriminator tries to distinguish between the real and the fake content. The two networks compete, improving their skills over time. The result is fake content that can fool both humans and machines. 
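
To make the generator/discriminator competition concrete, here is a minimal, hypothetical GAN sketch in PyTorch. It learns to imitate a simple one-dimensional Gaussian rather than faces, but the adversarial training loop follows the same pattern deep-fake models use:

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a Gaussian the generator must imitate.
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator: sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from fakes.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()          # freeze G for this step
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the (fixed) discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))       # G wants D to answer "real"
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())       # should drift toward 4.0
```

Scaled up to convolutional networks trained on face datasets, this same loop is what yields photorealistic fakes.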

                            What are the threats of Deep Fakes? 

                            Deep Fakes pose several threats to individuals, organizations, and society. Some of the potential harms of Deep Fakes are: 

                            • Disinformation and propaganda: Deep Fakes can be used to spread false or misleading information, influence public opinion, undermine trust in institutions, and incite violence or conflict. 
                            • Identity theft and fraud: Deep Fakes can be used to impersonate someone’s voice, face, or biometric data, and gain access to their personal or financial information, accounts, or devices. 
                            • Blackmail and extortion: Deep Fakes can be used to create compromising or embarrassing content that can be used to coerce or threaten someone. 
                            • Privacy and consent violation: Deep Fakes can be used to create non-consensual or invasive content that can harm someone’s reputation, dignity, or mental health. 

                             An example of how close a Deep Fake can be to the original

The people below were generated by the website www.thispersondoesnotexist.com; such images can be used in fake social media accounts.

                            How are companies dealing with Deep Fakes? 

                            While Deep Fakes pose a serious challenge, they also offer an opportunity for innovation and collaboration. Many companies are developing tools and solutions to detect, prevent, and mitigate the impact of Deep Fakes. Some of the examples are: 

• Adobe: Adobe leads the Content Authenticity Initiative (CAI), which aims to provide a secure and verifiable way to attribute the origin and history of digital content. CAI uses cryptography and blockchain to create a tamper-proof record of who created, edited, or shared the content, allowing users to verify its authenticity and integrity.
                            • Meta: Meta, formerly known as Facebook, has launched a program called Deep Fake Detection Challenge (DFDC) that aims to accelerate the development of Deep Fake detection technologies. DFDC is a global competition that invites researchers and developers to create and test algorithms that can detect Deep Fakes in videos. DFDC also provides a large and diverse dataset of real and fake videos for training and testing purposes. 
                            • Microsoft: Microsoft has developed a tool called Video Authenticator that can analyze videos and images and provide a confidence score of how likely they are to be manipulated. Video Authenticator uses a machine learning model that is trained on a large dataset of real and fake videos, and can detect subtle cues such as fading, blurring, or inconsistent lighting that indicate manipulation. Microsoft also provides a browser extension that can apply the same technology to online content. 
                            • X (formerly known as Twitter): X/Twitter has implemented a policy that requires users to label synthetic or manipulated media that are shared on its platform. The policy also states that X/Twitter may remove or flag such media if they are likely to cause harm or confusion. Twitter uses a combination of human review and automated systems to enforce the policy and provide context and warnings to users. 
                            • Deeptrace: Deeptrace is a startup that specializes in detecting and analyzing Deep Fakes and other forms of synthetic media. Deeptrace offers a range of products and services, such as Deeptrace API, Deeptrace Dashboard, and Deeptrace Intelligence, that can help clients identify, monitor, and respond to malicious or harmful uses of Deep Fakes. Deeptrace also publishes reports and insights on the trends and developments of synthetic media. 

                            These are just some of the examples of how companies are tackling the problem of Deep Fakes. There are also other initiatives and collaborations from academia, government, civil society, and media that are working to raise awareness, educate users, and promote ethical and responsible use of synthetic media. 
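
None of the commercial detectors above publish their internals, but the basic idea behind tools like Video Authenticator can be sketched generically: train a classifier on labeled real and manipulated examples, then report a manipulation-confidence score for new content. The features and numbers below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-video features (e.g., blur score, lighting
# inconsistency, face-boundary artifacts); 1 = manipulated, 0 = authentic.
rng = np.random.default_rng(0)
X_real = rng.normal(0.2, 0.10, size=(200, 3))
X_fake = rng.normal(0.6, 0.15, size=(200, 3))
X = np.vstack([X_real, X_fake])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# "Confidence score" for a new clip, in the spirit of the tools above.
new_clip = np.array([[0.55, 0.50, 0.45]])
print(f"manipulation confidence: {clf.predict_proba(new_clip)[0, 1]:.2f}")
```

Real detectors work on far richer signals (optical flow, facial landmarks, frequency-domain artifacts), but the classify-and-score structure is the same.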

                            Challenges in implementing ethical Artificial Intelligence 

                            Artificial intelligence (AI) is a powerful technology that can enhance human capabilities, improve social welfare, and solve complex problems. However, AI also poses significant ethical challenges that need to be addressed to ensure the responsible use of AI technologies.  

                            One of the main ethical challenges of AI is bias and fairness. Bias refers to the systematic deviation of an AI system from the truth or the desired outcome, while fairness refers to the ethical principle that similar cases should be treated similarly by an AI system. Bias and fairness are intertwined, as biased AI systems can lead to unfair or discriminatory outcomes for certain groups or individuals [1]. 

Bias and fairness issues can arise at various stages of an AI system's life cycle, such as data collection, algorithm design, and decision making. For example, an AI system that relies on data that is not representative of the target population or that reflects existing social biases can produce skewed or inaccurate results. Similarly, an AI system that uses algorithms that are not transparent, interpretable, or explainable can make decisions that are not justified or understandable to humans. Moreover, an AI system that does not consider the ethical implications or the social context of its decisions can cause harm or injustice to the affected parties [1].

                            To address bias and fairness issues, several strategies can be employed, such as: 

• Data auditing: Checking the quality, diversity, and representativeness of the data used by an AI system and identifying and correcting any potential sources of bias (a brief sketch follows this list).
                            • Algorithm auditing: Testing and evaluating the performance, accuracy, and robustness of the algorithms used by an AI system, and ensuring they are transparent, interpretable, and explainable. 
                            • Impact assessment: Assessing the potential impacts and risks of an AI system’s decisions on various stakeholders, and ensuring they are aligned with ethical principles and societal values. 
                            • Human oversight: Providing mechanisms for human intervention, review, or feedback in the AI system’s decision-making process, and ensuring accountability and redress for any adverse outcomes [1]. 
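
As a concrete illustration of the data-auditing bullet, here is a small, hypothetical sketch that checks group representation and label balance in a training set; the column names and thresholds are invented:

```python
import pandas as pd

# Hypothetical training data; "group" is a protected attribute and
# "label" the outcome the model will learn to predict.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "A", "B", "A", "A"],
    "label": [1, 0, 1, 1, 0, 0, 1, 1, 0, 1],
})

# 1) Representation: how much of the data does each group contribute?
representation = df["group"].value_counts(normalize=True)
print("share of rows per group:\n", representation)

# 2) Label balance: does the positive rate differ sharply by group?
positive_rate = df.groupby("group")["label"].mean()
print("positive-label rate per group:\n", positive_rate)

# 3) Simple red flags an auditor might act on before training.
if representation.min() < 0.2:
    print("warning: at least one group is badly under-represented")
if positive_rate.max() - positive_rate.min() > 0.25:
    print("warning: label rates diverge strongly across groups")
```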
Another ethical challenge of AI is privacy. Privacy refers to the right of individuals to control their personal information and how it is collected, used, and shared by others. Privacy is a fundamental human right that is essential for human dignity, autonomy, and freedom [3].

                            Privacy Issues 

Privacy issues can arise when AI systems process vast amounts of personal data, such as biometric, behavioral, or location data, that can reveal sensitive or intimate details about individuals. For example, an AI system that uses facial recognition or voice analysis to identify or profile individuals can infringe on their privacy rights. Similarly, an AI system that collects or shares personal data without the consent or knowledge of the individuals can violate their privacy rights. Moreover, an AI system that does not protect the security or confidentiality of the personal data it handles can expose individuals to the risk of data breaches or misuse [3].

                            To address privacy issues, several strategies can be employed, such as: 

• Privacy by design: Incorporating privacy principles and safeguards into the design and development of an AI system and minimizing the collection and use of personal data.
• Privacy by default: Providing individuals with the default option to opt in or opt out of data collection and use by an AI system and respecting their preferences and choices.
• Privacy by law: Complying with the relevant laws and regulations that govern the privacy rights and obligations of the AI system and its users and ensuring transparency and accountability for any data practices.
• Privacy by education: Raising awareness and educating the users of an AI system about the privacy risks and benefits of the system and providing them with the tools and skills to protect their privacy [3].
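
As one concrete privacy-by-design tactic, the hypothetical sketch below pseudonymizes a direct identifier with a salted hash and drops fields the model does not need, so the downstream pipeline never handles raw personal data:

```python
import hashlib

SALT = b"replace-with-a-secret-salt"  # hypothetical; store outside the code

def pseudonymize(user_id: str) -> str:
    """One-way, salted hash so records can be linked without exposing identity."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

record = {
    "user_id": "alice@example.com",
    "age": 34,
    "home_address": "12 Hypothetical Lane",  # not needed by the model
    "clicks": 17,
}

# Data minimization: keep only what the model actually needs,
# and replace the identifier with its pseudonym.
minimized = {
    "user_ref": pseudonymize(record["user_id"]),
    "age": record["age"],
    "clicks": record["clicks"],
}
print(minimized)
```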

The Accountability Challenge

                            A third ethical challenge of AI is accountability. Accountability refers to the obligation of an AI system and its users to take responsibility for the decisions and actions of the AI system, and to provide explanations or justifications for them. Accountability is a key principle that ensures trust, legitimacy, and quality of an AI system [2]. 

                            Accountability issues can arise when an AI system makes decisions or actions that have significant impacts or consequences for humans or society, especially when they lead to unintended or harmful outcomes. For example, an AI system that makes medical diagnoses or legal judgments can affect the health or rights of individuals. Similarly, an AI system that operates autonomously or independently can cause damage or injury to humans or property. Moreover, an AI system that involves multiple actors or intermediaries can create ambiguity or confusion about who is responsible or liable for the AI system’s decisions or actions [2]. 

                            To address accountability issues, several strategies can be employed, such as: 

                            • Governance: Establishing clear and consistent rules, standards, and procedures for the development, deployment, and use of an AI system, and ensuring compliance and enforcement of them. 
• Traceability: Maintaining records and logs of the data, algorithms, and processes involved in the AI system's decision making, and enabling verification and validation of them (see the sketch after this list).
                            • Explainability: Providing meaningful and understandable explanations or justifications for the AI system’s decisions or actions and enabling feedback and correction of them. 
                            • Liability: Assigning and apportioning the legal or moral responsibility or liability for the AI system’s decisions or actions and ensuring compensation or remedy for any harm or damage caused by them [2]. 
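
To illustrate the traceability bullet, here is a minimal, hypothetical decision-log wrapper: every automated decision is appended to an audit file with its inputs, model version, and timestamp so it can later be verified or contested. The file name and decision rule are invented:

```python
import json
import time

LOG_PATH = "decision_log.jsonl"  # hypothetical audit trail, one JSON record per line

def logged_decision(model_version, features, decide):
    """Run a decision function and append an auditable record of it."""
    outcome = decide(features)
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "outcome": outcome,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return outcome

# Hypothetical rule standing in for a real model.
approve_loan = lambda feats: feats["income"] > 3 * feats["debt"]

print(logged_decision("v1.2.0", {"income": 90_000, "debt": 20_000}, approve_loan))
```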

A fourth ethical challenge of AI is safety and security. Safety refers to the ability of an AI system to avoid causing harm or damage to humans or the environment, while security refers to the ability of an AI system to resist or prevent malicious attacks or misuse by unauthorized parties. Safety and security are essential for ensuring the reliability, robustness, and resilience of an AI system [1].

                            Safety and security issues can arise when an AI system is exposed to errors, failures, uncertainties, or adversities that can compromise its functionality or performance. For example, an AI system that has bugs or glitches can malfunction or behave unpredictably. Similarly, an AI system that faces novel or complex situations can make mistakes or errors. Moreover, an AI system that is targeted by hackers or adversaries can be manipulated or corrupted [1]. 

                            To address safety and security issues, several strategies can be employed, such as: 

• Testing: Conducting rigorous and extensive testing and evaluation of the AI system before, during, and after its deployment, and ensuring its quality and correctness.
• Monitoring: Observing and supervising the AI system's operation and behavior and detecting and reporting any anomalies or problems (a small sketch follows this list).
                            • Updating: Maintaining and improving the AI system’s functionality and performance and fixing and resolving any issues or defects. 
                            • Defense: Protecting and securing the AI system from malicious attacks or misuse and mitigating and recovering from any damage or harm caused by them [1]. 
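
As a small illustration of the monitoring bullet, this hypothetical sketch flags inputs that drift far outside the training distribution so they can be routed to human review instead of the model; the statistics and threshold are invented:

```python
import numpy as np

# Summary statistics of the (hypothetical) training data, captured at training time.
TRAIN_MEAN = np.array([5.0, 120.0])
TRAIN_STD  = np.array([1.5, 30.0])

def check_input(x, z_threshold=4.0):
    """Flag feature values implausibly far from what the model saw in training."""
    z = np.abs((np.asarray(x, dtype=float) - TRAIN_MEAN) / TRAIN_STD)
    if (z > z_threshold).any():
        return f"anomalous input, max |z| = {z.max():.1f}: route to human review"
    return "input within expected range"

print(check_input([5.2, 130.0]))   # normal
print(check_input([5.2, 900.0]))   # out of distribution -> flagged
```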

                            In conclusion, AI is a powerful technology that can bring many benefits to humans and society, but it also poses significant ethical challenges that need to be addressed to ensure the responsible use of AI technologies. By applying various strategies and methods, such as data auditing, algorithm auditing, impact assessment, human oversight, privacy by design, privacy by default, privacy by law, privacy by education, governance, traceability, explainability, liability, testing, monitoring, updating, and defense, we can mitigate the ethical challenges of AI and foster trust, confidence, and acceptance of AI systems. 

                            Implementing ethical AI presents several challenges that need to be addressed to ensure the responsible use of AI technologies. Here are some of the key challenges: 

                            • Bias and Fairness: Ensuring AI systems are free from biases and make fair decisions is a significant challenge. This includes addressing biases in data, algorithms, and decision-making processes [1]. 
                            • Transparency: AI systems often operate as “black boxes,” with opaque decision-making processes. Making these systems transparent and understandable to users and stakeholders is a complex task [2]. 
                            • Privacy: Protecting the privacy of individuals when AI systems process vast amounts of personal data is a critical concern. Balancing data utility with privacy rights is a delicate and challenging issue [3]. 
                            • Accountability: Determining who is responsible for the decisions made by AI systems, especially when they lead to unintended or harmful outcomes, is a challenge. Establishing clear lines of accountability is essential [2]. 
                            • Safety and Security: Ensuring AI systems are safe and secure from malicious use or hacking is a challenge, especially as they become more integrated into critical infrastructure [1]. 
                            • Ethical Knowledge: There is a lack of ethical knowledge among AI developers and stakeholders, which can lead to ethical principles being misunderstood or not applied correctly [2]. 
                            • Regulatory Compliance: Developing and enforcing regulations that keep pace with the rapid advancements in AI technology is a challenge for policymakers and organizations [4]. 
                            • Social Impact: AI technologies can have profound impacts on society, including job displacement and changes in social dynamics. Understanding and mitigating these impacts is a complex challenge [5]. 

                            These challenges highlight the need for ongoing research, dialogue, and collaboration among technologists, ethicists, policymakers, and the public to ensure ethical AI implementation. 

                            Sources: 

                            1. CHAPTER 5: Ethical Challenges of AI Applications – Stanford University 

                            2. Ethics of AI: A systematic literature review of principles and challenges 

                            3. Ethical challenges of using artificial intelligence in healthcare … 

                            4. A Practical Guide to Building Ethical AI – Harvard Business Review 

                            5. 6 Ethical Considerations of Artificial Intelligence | Upwork 

                            An AI generated image of a robot using a laptop

                            Ethical uses of AI are crucial for ensuring that the technology benefits society while minimizing harm. Here are some key points regarding the ethical use of AI: 

• Global Standards: UNESCO has established the first-ever global standard on AI ethics with the ‘Recommendation on the Ethics of Artificial Intelligence’, adopted by all 193 Member States [1]. This framework emphasizes the protection of human rights and dignity, advocating for transparency, fairness, and human oversight of AI systems [1].
• Algorithmic Fairness: AI should be developed and used in a way that avoids bias and discrimination. This includes ensuring that algorithms do not replicate stereotypical representations or prejudices [2].
• Transparency and Accountability: AI systems should be transparent in their decision-making processes, and there should be accountability for the outcomes they produce [3].
• Privacy and Surveillance: Ethical AI must respect privacy rights and avoid contributing to invasive surveillance practices [4].
• Human Judgment: The role of human judgment is paramount; AI should not replace it but rather augment it, ensuring that human values and ethics guide decision-making [4].
• Environmental Considerations: AI development should also consider its environmental impact and strive for sustainability [1].
• Guiding Principles: Stakeholders, from engineers to government officials, use AI ethics as a set of guiding principles to ensure responsible development and use of AI technology [5].
• Social Implications: The ethical and social implications of AI use include establishing ethical guidelines, enhancing transparency, and enforcing accountability to harness AI’s power for collective benefit while mitigating potential harm [6].

                            These points reflect a growing consensus on the importance of ethical considerations in AI development and deployment, aiming to maximize benefits while addressing potential risks and ensuring that AI serves the common good. 

                            Sources: 

                            1. Ethics of Artificial Intelligence | UNESCO 

                            2. Artificial Intelligence: examples of ethical dilemmas | UNESCO 

                            3. Ethics of artificial intelligence – Wikipedia 

                            4. Ethical concerns mount as AI takes bigger decision-making role 

                            5. AI Ethics: What It Is and Why It Matters | Coursera 

                            6. Ethical and Social Implications of AI Use | The Princeton Review 

                            A dangerous looking dark alley

                            Artificial Intelligence (AI) can bring about significant advancements, but it also comes with many risks and dangers. Here are some of the key dangers associated with AI: 

1. Rapid Self-Improvement: AI algorithms are reaching a point of rapid self-improvement that threatens our ability to control them and poses a potentially existential risk to humanity [1]. This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention [1].
                            2. Automation-spurred Job Loss: As AI systems become more capable, they could take over jobs currently performed by humans, leading to significant job loss [2]. 
                            3. Deepfakes: AI can be used to create convincing fake images and videos, known as deepfakes, which can be used to spread misinformation [2] [3]. 
                            4. Privacy Violations: AI systems often require copious amounts of data for training, which can lead to privacy concerns if the data includes sensitive information [2] [3]. 
                            5. Algorithmic Bias: If the data used to train an AI system is biased, the system itself can also become biased, leading to unfair outcomes [2]. 
                            6. Socioeconomic Inequality: The benefits of AI are not distributed evenly, which could exacerbate socioeconomic inequality [2]. 
                            7. Market Volatility: AI systems are increasingly being used in financial markets, which could lead to increased volatility [2] [3]. 
8. Weapons Automation: AI can be used to automate weapons systems, which raises ethical and safety concerns [2].
                            9. Uncontrollable Self-aware AI: There is a risk that AI could become self-aware and act in ways that are not controllable by humans [2]. 

                                            These dangers underscore the need for careful regulation and oversight of AI development. It is important to ensure that AI is developed and used in a way that is safe, ethical, and beneficial for all of humanity. 

                                            Sources:  

                                            1. Here’s Why AI May Be Extremely Dangerous–Whether It’s Conscious or Not 
                                            2. 12 Dangers of Artificial Intelligence (AI) | Built In 
                                            3. What Are the Dangers of AI? – Decrypt 
                                            4. What are the risks of artificial intelligence (AI)? – Tableau 

                                            For more on biases, please visit our other articles on Biases and Psychology.

                                            An AI Generated image of geometric shapes and colors

                                            How humans and machines interpret behavior differently 

                                            What is the fundamental attribution error? 

                                            The fundamental attribution error (FAE) is a cognitive bias that affects how people explain the causes of their own and others’ behavior. According to the FAE, people tend to overestimate the influence of personality traits and underestimate the influence of situational factors when they observe someone’s actions. For example, if someone cuts you off in traffic, you might assume that they are rude and selfish, rather than considering that they might be in a hurry or distracted. 

                                            How does the FAE affect human interactions? 

                                            The FAE can have negative consequences for human interactions, especially in situations where there is a conflict or a misunderstanding. The FAE can lead to unfair judgments, stereotypes, prejudices, and blame. For instance, if a student fails an exam, a teacher might attribute it to the student’s laziness or lack of intelligence, rather than considering the difficulty of the exam or the student’s circumstances. The FAE can also prevent people from learning from their own mistakes, as they might attribute their failures to external factors rather than internal ones. 

                                            How does artificial intelligence relate to the FAE? 

                                            Artificial intelligence (AI) is the field of computer science that aims to create machines or systems that can perform tasks that normally require human intelligence, such as reasoning, learning, decision-making, and perception. AI systems can be affected by the FAE in two ways: as agents and as targets. 

                                            • As agents, AI systems can exhibit the FAE when they interpret human behavior or interact with humans. For example, an AI system that analyzes social media posts might infer personality traits or emotions from the content or tone of the messages, without considering the context or the intention of the users. An AI system that interacts with humans, such as a chatbot or a virtual assistant, might also make assumptions or judgments about the users based on their inputs, without considering the situational factors that might influence them. 
                                            • As targets, AI systems can be subject to the FAE by humans who observe or interact with them. For example, a human might attribute human-like qualities or intentions to an AI system, such as intelligence, creativity, or malice, without acknowledging the limitations or the design of the system. A human might also blame or praise an AI system for its outcomes, without considering the input data, the algorithms, or the external factors that might affect it. 

                                            How can the FAE be reduced or avoided? 

                                            The FAE can be reduced or avoided by adopting a more critical and balanced perspective on behavior, both human and artificial. Some possible strategies are: 

                                            • Being aware of the FAE and its effects on perception and judgment. 
                                            • Seeking more information and evidence before making attributions or conclusions. 
                                            • Considering multiple possible causes and explanations for behavior, both internal and external. 
                                            • Empathizing with the perspective and the situation of the other party, whether human or machine. 
                                            • Revising or updating attributions or conclusions based on new information or feedback. 
                                            AI generated image of a concert at a stadium

                                            How a rap song inspired a phenomenon of obsessive fandom and online activism 

                                            What is Stan Culture? 

                                            Stan culture is a term that describes the behavior and attitude of fans who are extremely devoted to a certain celebrity, artist, or media franchise. The word “stan” is a blend of “stalker” and “fan”, and it was popularized by Eminem’s 2000 song “Stan”, which tells the story of a fan who becomes obsessed with the rapper and ends up killing himself and his pregnant girlfriend. The song was a critical and commercial success, and it introduced the concept of a “stan” to the mainstream audience. 

                                            How Stan Culture Evolved 

                                            Since the release of Eminem’s song, the term “stan” has been adopted by various fan communities, especially on social media platforms like Twitter, Instagram, and TikTok. Stan culture is characterized by the intense loyalty and admiration that fans have for their idols and the tendency to defend them from criticism or perceived attack. Stan culture also involves creating and consuming fan-made content, such as memes, videos, fan art, and fan fiction, that celebrate and promote the idol’s work and personality. Some fans even adopt the idol’s name, style, or catchphrases as part of their online identity. 

                                            Who are the Stans? 

Stan culture is not limited to any specific genre, industry, or demographic. There are stans for musicians, actors, athletes, politicians, influencers, and even fictional characters. Some of the most prominent examples of stan culture are the fans of Taylor Swift, who call themselves “Swifties”, the fans of Beyoncé, who call themselves “Beyhive”, and the fans of BTS, who call themselves “ARMY”. These fan groups are known for their massive online presence, their ability to mobilize and support their idols, and their fierce rivalry with other fan groups. Stan culture can also be seen in the political sphere, where supporters of certain candidates or parties display a similar level of devotion and activism: for instance, the fans of Bernie Sanders (“Bernie Bros”), the fans of Donald Trump (“MAGA”), and the fans of Alexandria Ocasio-Cortez (“AOC Squad”).

                                            What are the Pros and Cons of Stan Culture? 

Stan culture can have both positive and negative effects on fans, idols, and society. On the positive side, stan culture can provide a sense of belonging, identity, and community for fans, who can connect with other like-minded people and share their passion and enthusiasm. Stan culture can also inspire creativity, activism, and generosity, as fans create and consume fan-made content, participate in social movements and campaigns, and donate to charities and causes that their idols endorse. Stan culture can also benefit the idols, who can gain more exposure, recognition, and support from their loyal fan base.

                                            On the negative side, stan culture can also lead to unhealthy, toxic, and obsessive behavior, such as cyberbullying, harassment, stalking, and doxxing. Some fans may cross the line between admiration and obsession, and invade the privacy, safety, and personal lives of their idols and their rivals. Some fans may also develop unrealistic expectations, idealizations, and parasocial relationships with their idols, and lose touch with reality and their own identity. Stan culture can also create division, hostility, and intolerance among different fan groups, who may engage in online wars, insults, and threats. Stan culture can also harm the idols, who may face pressure, stress, and backlash from their demanding and critical fan base. 

                                            Conclusion 

Stan culture is a phenomenon that has emerged and evolved in the digital age, where fans can access and interact with their idols and fellow fans more easily and frequently. Stan culture can be seen as a form of expression, appreciation, and empowerment, but also as a form of obsession, fanaticism, and extremism. It can have both positive and negative impacts on fans, idols, and society, depending on how it is practiced and perceived. Stan culture is a complex and dynamic phenomenon that reflects the changing nature of fandom and celebrity in the 21st century.

                                            For more on biases, please visit our other articles on Biases and Psychology.