The Evolving Legal Landscape for AI: Navigating Innovation and Regulation

Published on August 21, 2024
Watch the presentation on this topic by legal and AI expert Ben Maling of EIP.

The rapid evolution of artificial intelligence (AI) has ushered in an era of unprecedented innovation, raising significant questions about ethics, safety, and legality. The current legal frameworks for managing AI, however, remain largely untested and are struggling to keep pace with these advancements. As the landscape develops, businesses and individuals involved with AI systems must be prepared to navigate a complex web of regulations that differ across regions and sectors.

The Legal Challenges of AI Innovation

AI technology has grown at a staggering pace, bringing with it numerous legal challenges that span various sectors. One of the earliest and most notable cases involved Clearview AI, which faced significant legal issues for scraping images from social media platforms like Facebook to create a facial recognition database. This case highlighted the potential for AI to infringe on privacy rights and the need for clear legal boundaries.

Similarly, the rise of generative AI has sparked a series of lawsuits. For instance, Getty Images sued Stability AI in both the US and the UK for using its images without permission to train image-generation models. These cases illustrate the ongoing tension between AI innovation and intellectual property rights. As AI systems become more sophisticated, questions about the legality of training models on copyrighted materials have come to the forefront.

Misinformation, Crime, and Data Privacy

AI's ability to generate content, whether through deepfakes or other means, has led to an increase in misinformation and disinformation. The latter, being intentional, poses severe risks, particularly in the context of political events such as elections. The use of AI-generated audio to impersonate public figures like US President Joe Biden during election campaigns is just one example of how AI can be misused to undermine democratic processes.

Moreover, AI has been implicated in enabling crime, with one notable incident involving a company losing $25 million after being duped by a deepfake impersonating its CFO. These examples underscore the critical need for robust legal frameworks to address the potential for AI to facilitate fraud and other criminal activities.

Data privacy is another area of concern. AI systems often require vast amounts of data for training, raising questions about the legality of using personal data without explicit consent. This issue has been at the centre of various lawsuits against major tech companies like Google and Microsoft, which have been accused of scraping private data to train their AI models. The challenge lies in balancing the need for data to fuel AI advancements with the protection of individual privacy rights.

Existing Legal Frameworks and the Need for AI-Specific Regulation

Given the range of issues that AI presents, one might wonder whether existing laws are sufficient to address these challenges or if there is a need for new, AI-specific regulations. The UK government, for example, has adopted a pro-innovation stance, opting to regulate AI within existing sectoral frameworks rather than implementing overarching AI-specific laws. This approach allows for more targeted and proportionate regulation, ensuring that AI is governed in a manner that reflects its varied impact across different industries.

Under this model, existing regulators, such as Ofcom for broadcasting and communications in the UK, are tasked with ensuring that AI systems adhere to trustworthy AI principles within their respective domains. This sector-specific approach is designed to foster innovation while mitigating risks. However, it also leaves room for interpretation and may lead to inconsistencies in how AI is regulated across different sectors.

The EU's AI Act: A Comprehensive Approach

While the UK has taken a sectoral approach, the European Union has opted for a more comprehensive regulatory framework with its AI Act. This regulation, now in force with its obligations phasing in over the coming years, applies directly in all EU member states and will impact any provider of AI systems that are marketed or put into service in the EU, regardless of where the provider is based.

The AI Act is structured around risk-based tiers, categorising AI systems into unacceptable practices, high-risk systems, and those with limited or no risk. Unacceptable practices, such as social scoring and predictive policing, are outright prohibited as they violate the EU's core values of freedom, equality, democracy, and human dignity.

High-risk AI systems, which include products like medical devices and autonomous vehicles, are subject to stringent regulations. Providers of these systems must implement robust risk and quality management systems, ensure data governance, and maintain transparency. The Act also imposes significant penalties for non-compliance, with the highest fines, reserved for prohibited practices, reaching up to €35 million or 7% of global annual turnover, whichever is greater.

The Impact on Intellectual Property and Trade Secrets

The AI Act introduces responsibilities along the entire value chain, requiring parties involved in AI systems to share information that ensures compliance with the regulations. This requirement has significant implications for intellectual property (IP) and trade secrets, as businesses may be compelled to disclose more information than they would typically prefer.

This aspect of the AI Act could lead to a shift in how companies approach their IP strategies, particularly in relation to AI innovation. Businesses must carefully consider how to protect their proprietary information while remaining compliant with the new regulations.

The Global Patchwork of AI Regulations

One of the most challenging aspects of AI regulation is the lack of global harmonisation. Different regions have adopted varying approaches to AI governance, creating a patchwork of laws that can be difficult for companies to navigate. For example, Japan has explicitly allowed the training of AI models on copyrighted works, while the UK and US are still grappling with the legalities of such practices.

The EU's AI Act adds another layer of complexity by extending its reach beyond its borders. Any general-purpose AI model that is put into service in the EU must comply with EU copyright laws, even if it was trained outside the EU. This extraterritorial aspect of the AI Act could have far-reaching consequences for AI development and deployment globally.

Preparing for the Future: Compliance and Best Practices

As the legal landscape for AI continues to evolve, businesses must proactively prepare for the changes ahead. Although most provisions of the AI Act will not apply for another two to three years, companies need to begin assessing their obligations now. This includes determining whether they are providing high-risk AI systems and understanding the specific requirements that apply to them.

The cost of compliance with the AI Act could be substantial, with some estimates suggesting that it could add up to 20% to the development costs of high-risk AI systems. Therefore, businesses must integrate regulatory compliance into their broader strategic planning, considering how it will impact their profitability and innovation.

To navigate this complex environment, companies should adhere to existing laws and regulations, adopt industry best practices, and seek legal advice when necessary. By doing so, they can position themselves to succeed in a market that is increasingly governed by sophisticated legal frameworks.

Final Thoughts

The development of AI regulation is still in its early stages, but it is clear that the legal landscape will continue to evolve in response to the rapid advancements in AI technology. As governments and regulatory bodies around the world grapple with the challenges posed by AI, businesses must stay informed and be prepared to adapt to new rules.

The UK’s pro-innovation approach and the EU’s comprehensive AI Act represent two distinct strategies for managing AI’s impact. While the path forward may be uncertain, one thing is clear: compliance with AI regulations will be a critical factor in the success of AI-driven businesses in the coming years. By understanding and adhering to these evolving legal requirements, companies can harness the power of AI while minimising legal risks and safeguarding their future.
