An AI generated image of the US Capitol

What is the APRA? 

The American Privacy Rights Act (APRA) is proposed federal legislation that aims to regulate the collection, use, and sharing of personal data by online platforms and service providers. The bipartisan bill was introduced in April 2024 by Representative Cathy McMorris Rodgers (R-WA) and Senator Maria Cantwell (D-WA) and is currently under review by the Senate Commerce Committee. 

Under the proposed American Privacy Rights Act (APRA), there are several ways you can opt out of data sales: 

  1. Opting out of data transfer and targeted advertising: For most covered data, covered entities would need to give individuals an opportunity to opt out of the transfer of their covered data or the use of their data for targeted advertising [5]. 
  2. Express consent for sensitive data: For sensitive covered data, covered entities would be required to obtain an individual’s affirmative, express consent before transferring that data [5]. 
  3. Data brokers: Data brokers would be required to register with the FTC, which would establish a central data broker registry with a “Do Not Collect” mechanism allowing individuals to opt out of data brokers’ collection of their covered data [6]. 
  4. Website for opt-out requests: Under the APRA, data brokers would need to maintain a website that identifies them as data brokers, provides a tool for subject rights and opt-out requests, and links to the FTC’s data broker registry [4]. 

Opposition 

The Electronic Frontier Foundation (EFF) has expressed opposition to the American Privacy Rights Act (APRA) for several reasons [8] [9]: 

• Rolling back state privacy protections: The EFF believes that federal privacy laws should not roll back state privacy protections [8] [9]. It argues that there is no reason to trade strong state laws for weaker national privacy protection [9]. 
• Overriding stronger state laws: The EFF opposes the APRA because it overrides stronger state laws and prevents states from passing stronger ones, which the EFF believes hurts everyone [8]. 
• Concerns about the latest draft: The EFF, along with other advocacy groups, has raised concerns about the latest draft of the APRA, claiming that the latest revision has diluted the privacy rules [7]. For example, the new draft allegedly strips out anti-discrimination protections, AI impact assessment requirements, and the ability to opt out of AI decision-making for major economic opportunities like housing and credit [7]. 
• Loopholes in personal data collection: The EFF is concerned that the latest APRA revision fails to cover personal data collected and used on-device [7]. It argues that tech companies would be able to do almost anything they want with data that stays on a personal device: no data minimization rules, no protections for kids, no advertising limits, no transparency requirements, no civil rights safeguards, and no right to sue for injured consumers [7]. 

Please note that the APRA is still a proposed bill and has not yet become law. The final Act, if approved, may have different provisions [1] [2]. It is always a good idea to consult a legal professional for advice tailored to your specific situation. 

An AI generated image of a robot holding a lock

The Consumer Protections for Artificial Intelligence Act in Colorado, also known as SB24-205, is a pioneering piece of legislation aimed at regulating high-risk AI systems [4]. The law was signed by Colorado Governor Jared Polis on May 17, 2024 [4] and is set to take effect on February 1, 2026 [4]. It makes Colorado the first state in the nation to enact broad restrictions on private companies’ use of AI [2]. 

The Act imposes a duty of “reasonable care” on both developers and deployers of high-risk AI systems to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination [6]. Algorithmic discrimination means unlawful differential treatment based on actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or another classification protected under Colorado or federal law [6]. 

For developers, the Act requires them to: 

• Use reasonable care to avoid algorithmic discrimination in the high-risk system [1]. 
• Make available to a deployer of the high-risk system a statement disclosing specified information about the system [1]. 
• Disclose to the attorney general and known deployers of the high-risk system any known or reasonably foreseeable risk of algorithmic discrimination within 90 days of discovering the risk or receiving a credible report of it from a deployer [1]. 

For deployers, the Act requires them to: 

• Use reasonable care to avoid algorithmic discrimination in the high-risk system [1]. 
• Implement a risk management policy and program for the high-risk system [1]. 
• Annually review the deployment of each high-risk system to ensure that it is not causing algorithmic discrimination [1]. 
• Provide a consumer with an opportunity to correct any incorrect personal data that a high-risk artificial intelligence system processed in making a consequential decision [1]. 
• Provide a consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system [1]. 

This Act is a significant step towards ensuring fairness and transparency in the use of AI systems, particularly those that make consequential decisions affecting consumers [1]. 

Sources: 

1. Colorado Enacts Groundbreaking AI Consumer Protection Legislation … 
2. Colorado Passes Consumer Protections in Interactions with AI 
3. Colorado Adopts Comprehensive AI Act Imposing Broad Disclosure Requirements 
4. Consumer Protections for Artificial Intelligence | Colorado General … 
5. Colorado Adopts Comprehensive AI Act Imposing Broad Disclosure Requirements 
6. Colorado’s Artificial Intelligence Act: What Employers Need to Know 
An AI generated image of a gavel and law books

Promotion of Open Science and Innovation

• The laws could encourage open science by setting standards for transparency and accountability [2]. 
• They could also stimulate innovation by creating a clear regulatory framework within which AI developers can operate [2]. 

Impact on High-Risk AI Applications

• Companies developing AI applications considered to pose a “high risk” to fundamental rights, such as those meant to be used in sectors like education, health care, and policing, will have to meet new EU standards [3]. 

Exemption for Research and Development

• The European Parliament added a clause to the draft act that would exempt AI models developed purely for research, development, or prototyping [2]. This could ensure that the act does not negatively affect research [2]. 

Challenges for General-Purpose AI Models

• The EU AI Act strictly regulates general-purpose models, which have broad and unpredictable uses [2]. This could pose challenges for the development and deployment of such models [2]. 

Potential Stifling of Innovation

• Some researchers worry that the laws could stifle innovation by imposing stringent regulations on AI development [2]. 

Addressing Concentration of Power

• The laws address the concentration of power among private corporations in AI development and the associated risks to individual rights and the rule of law [4]. 

These impacts could shape the future of AI research and development, influencing how AI technologies are developed, used, and regulated. However, the exact impact of these laws will depend on their final form and how they are implemented in practice [1] [2] [3] [4]. This is a rapidly evolving area, so check the latest sources for the most current information. 

Sources: 

1. What the EU’s tough AI law means for research and ChatGPT – Nature 
2. What’s next for AI regulation in 2024? | MIT Technology Review 
3. Law and the Governance of Artificial Intelligence | SpringerLink 
4. Artificial Intelligence: Overview, Recent Advances, and Considerations … 
5. https://crsreports.congress.gov 
An AI generated image of Parliament in London

Here are some key points: 

• The UK government has published a white paper detailing plans for implementing a pro-innovation approach to AI regulation [1]. This paper sets out the government’s proposals for a proportionate, future-proof, and pro-innovation framework for regulating AI [1]. 
• The UK Science and Technology Framework identifies AI as one of five critical technologies and notes that regulation plays a key role in creating an environment for AI to flourish [1]. 
• The government aims to help the UK harness the opportunities and benefits that AI technologies present. This will drive growth and prosperity by boosting innovation and investment and building public trust in AI [1]. 
• The UK’s approach is based on six core principles that regulators must apply, with flexibility to implement these in ways that best suit the use of AI in their sectors [2]. 
• The UK is taking a less centralized approach than the EU, focusing on supporting growth and avoiding unnecessary barriers being placed on businesses [2] [3]. 
• Currently, a range of legislation and regulation applies to AI in the UK – such as data protection, consumer protection, product safety and equality law, and financial services and medical devices regulation – but there is no overarching framework that governs its use [4]. 

Please note that this is a rapidly evolving field and the information provided is based on the latest available data. For more detailed information, you may want to refer to the respective government or legislative websites. 

Sources: 

1. AI regulation: a pro-innovation approach – GOV.UK 
2. UK sets out proposals for new AI rulebook to unleash innovation and … 
3. The Ecosystem: UK companies welcome pro-innovation pitch for new AI … 
4. AI laws inevitable but not right for today, says UK government 
5. What is the AI Act — and what does it mean for the UK? 
An AI generated image of a gavel and law books

United States

• The US has seen a surge in legislative activity related to potential AI risks and harms [5]. States are adopting regulations responding to concerns around many types of AI systems and contexts [5]. 
• The Biden-Harris Administration issued an Executive Order to strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world [6]. 
• There have been some notable legislative efforts to regulate the use of AI, such as the American Data Privacy and Protection Act (ADPPA) and the Algorithmic Accountability Act of 2022 (AAA) [7]. 

Canada

• In June 2022, the Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022 [1]. The AIDA aims to ensure that AI systems deployed in Canada are safe and non-discriminatory [2]. 
• The AIDA requires businesses to implement new governance mechanisms and policies that will consider and address the risks of their AI systems and give users enough information to make informed decisions [2]. 

European Union

• The EU has approved a ground-breaking law, the Artificial Intelligence Act, aiming to harmonize rules on artificial intelligence [10]. This legislation follows a ‘risk-based’ approach, which means the higher the risk of harm to society, the stricter the rules [10]. 
• The Act categorizes different types of artificial intelligence according to risk. AI systems presenting only limited risk would be subject to very light transparency obligations, while high-risk AI systems would be authorized, but subject to a set of requirements and obligations to gain access to the EU market [10]. 

Sources: