AI or Not

    Detecting Differences: Biden's AI Executive Order vs EU's AI Act

    What are the differences and similarities between President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the European Union's Artificial Intelligence Act, and how do they affect businesses?


    Artificial Intelligence (AI) has become part of our lives at work and at play, from the personalized recommendations on our favorite streaming platforms to the advanced systems driving autonomous vehicles. Recognizing its potential, governments around the world are taking steps to regulate and harness the power of AI. With President Biden's recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the European Union's Artificial Intelligence Act, navigating AI risk and compliance has become more important than ever.

    The US order requires technical adaptations for watermarking and robust risk management strategies. The EU Act brings a comprehensive approach to AI risk and compliance. But the challenges pale in comparison to the potential benefits: a future where AI operates with the utmost transparency and accountability and with its risks mitigated.

    Similarities between the USA Executive Order and the EU AI Act

    While their shores may be separated by miles and legal frameworks, a closer look reveals a surprising convergence: a shared vision for a future where AI operates with the utmost integrity and accountability without hampering the pace of innovation.

    For risk and compliance professionals, this convergence is not just a distant dream; it's a call to action. Both frameworks, though distinct in approach, share a common DNA: a deep concern for the ethical implications of AI and a resolute commitment to mitigating its risks.

    Here is a summary of the similarities between the US and EU approaches to regulating AI:

    1. Focus on Risk Management and Ethical AI Use:
    • Both emphasize the importance of responsible AI development, with a strong focus on mitigating risks and ethical considerations.
    2. Regulatory Compliance Requirements:
    • Both frameworks mandate compliance with specific regulations, necessitating businesses to adapt their AI systems to meet these standards.
    3. Emphasis on Transparency and Accountability:
    • The documents highlight the need for transparency in AI operations and hold businesses accountable for their AI systems' outcomes.
    4. Protection Against Harmful AI Outputs:
    • Both the USA and the EU frameworks require safeguards against AI outputs that could be discriminatory, misleading, or harmful.

    These similarities are a step forward in the global quest for trustworthy AI. While differences in approach remain, the shared vision of risk management, compliance, transparency, and responsible AI development offers a framework businesses on both sides of the Atlantic can use in their development and use of artificial intelligence.

    Differences between the USA Executive Order and the EU AI Act

    While the overall premise is similar, two distinct approaches are emerging, each vying to shape the future of this powerful technology in its respective jurisdiction. The EU's AI Act is a dense legal framework meticulously outlining the rules of the game for AI development and deployment. The US Executive Order on Safe, Secure, and Trustworthy AI, on the other hand, takes a more agile stance, offering a set of guiding principles for responsible AI development.

    The EU, a veteran in data protection, takes a prescriptive approach, seeking to build a controlled, transparent AI ecosystem. The US, with its focus on innovation, opts for a more flexible framework, trusting businesses to act responsibly within established principles.

    Here is a summary of the differences between the US and EU approaches to regulating AI: 

    1. Scope and Depth of Regulation:
    • The EU AI Act is more comprehensive and detailed, establishing specific rules for a wide range of AI systems, especially high-risk AI.
    • The USA Executive Order is more general, focusing on broader guidelines for ethical AI development and use.
    2. Watermarking and Technical Requirements:
    • The USA Executive Order specifically mentions the requirement for watermarking generative AI outputs.
    • The EU AI Act, while comprehensive, does not specifically mention watermarking but focuses on conformity assessments and comprehensive testing for high-risk AI systems.
    3. Enforcement Mechanisms:
    • The EU AI Act has detailed provisions for enforcement, including conformity assessments and sanctions for non-compliance.
    • The USA Executive Order, while mentioning compliance, is less specific about enforcement mechanisms.
    4. Geographical Scope and Applicability:
    • The EU AI Act has specific provisions for AI systems used within the EU, including requirements for non-EU providers.
    • The USA Executive Order is focused on the development and use of AI within the United States, with an emphasis on federal agencies and national policies.
    5. Compliance with International Standards:
    • The EU AI Act encourages compliance with harmonized international standards and allows for common technical specifications in areas lacking such standards.
    • The USA Executive Order does not explicitly address international standards but focuses on domestic guidelines and best practices.

    The Different Approaches to AI Detection between the US and EU

    Both aim to shed light on the opaque world of AI, but their approaches to AI detection diverge, leaving you wondering: which path leads to a future free from misinformation and manipulation?

    1. USA Executive Order:
    • It specifically requires reasonable steps to watermark or label outputs from generative AI, indicating a prescriptive approach to AI detection through output marking.
    2. EU AI Act:
    • The Act focuses on the broader compliance of AI systems with EU standards and regulations and puts more of the onus on companies to identify generative AI content.
    • The emphasis is more on the conformity of AI systems to established standards rather than the specific marking or labeling of AI outputs.

    The US Executive Order takes a more direct and prescriptive approach to AI detection, focusing on marking the outputs of generative AI with watermarks. While seemingly straightforward, this method raises concerns and, arguably, creates new issues. First, open-source AI models that apply no watermark are already in the wild, and where watermarks do exist they can often be removed with simple code edits, making this a hurdle easily cleared by bad actors. Second, malicious actors can simply apply watermarks to real content, creating a dangerous landscape of "misinformation doubt." Think of it as a clever forgery, where the watermark, once a symbol of authenticity, becomes a tool for deception.
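    To illustrate both concerns, here is a minimal sketch in Python, assuming a hypothetical watermarking scheme that tags generated text with invisible zero-width characters. Real schemes (such as statistical watermarks baked into a model's token choices) are more sophisticated, but they face the same stripping and forgery problems sketched below.

    ```python
    import re

    # Hypothetical zero-width character sequence used as an invisible watermark.
    ZW_MARK = "\u200b\u200c\u200b"

    def watermark(text: str) -> str:
        """Tag text with the invisible watermark."""
        return text + ZW_MARK

    def is_watermarked(text: str) -> bool:
        """Detect the watermark at the end of a text."""
        return text.endswith(ZW_MARK)

    def strip_watermark(text: str) -> str:
        """A one-line edit deletes every zero-width character -- and the mark."""
        return re.sub(r"[\u200b-\u200d\ufeff]", "", text)

    generated = watermark("This paragraph was produced by a language model.")
    print(is_watermarked(generated))                    # True

    # A bad actor strips the mark with a trivial code edit...
    print(is_watermarked(strip_watermark(generated)))   # False

    # ...or forges it onto authentic, human-written content.
    print(is_watermarked(watermark("A genuine eyewitness report.")))  # True
    ```

    The point is not the specific scheme: any detector keyed to a marker that sits in the output itself can be defeated by removing the marker or by adding it to authentic content.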

    So, is watermarking the silver bullet for AI detection? The answer is a resounding "no." While it has merit in certain contexts, it's a blunt tool for the intricate task of identifying harmful AI outputs. The EU's focus on conformity assessments, while less flashy, offers a more robust and holistic approach, ensuring AI systems are built on a foundation of ethical and regulatory compliance.

    What Do the USA AI Executive Order and the EU AI Act Mean for Businesses

    For risk and trust professionals, the question isn't "which one wins?" but "how do we adapt?"

    The EU AI Act, a GDPR-esque approach to AI regulation, looms large, demanding compliance and comprehensive documentation, especially for high-risk AI systems in the EU. Remember the scramble to comply with GDPR? Companies in the US became GDPR-compliant quickly, and the same scenario could play out with the passage of the EU AI Act. Furthermore, recall that the California Consumer Privacy Act (CCPA) passed two years after GDPR and drew inspiration from the EU framework for California's data privacy rules.

    Whether you're a US-based tech giant or a global company with a presence in the EU, compliance with both frameworks is no longer a choice, it's a necessity. Risk management strategies need a global upgrade, encompassing both the EU's stringent assessments and the US's emphasis on ethical development. Companies all over the world need a strategy for AI detection so that they know whether content they take in, or their users post or submit, is AI or Not.