Esports Legal News Essay Competition Winner #1
We’re thrilled to kick off the publication of the winning essays from the Esports Legal News Essay Competition!
The competition drew a wide range of insightful, well-researched submissions, showcasing just how much passion and expertise is emerging in the field of esports law. After a careful review by our panel, three essays were selected as winners for their originality, clarity, and forward-looking perspectives.
Over the next three weeks, we’ll be publishing these essays here on our blog – one each week – so you can dive into the brilliant thinking shaping the future of esports and law.
We begin this week with ‘Use of AI in video game development and impact on gamers regarding privacy law: how privacy law and algorithmic transparency challenge game developers and studios with regulatory compliance’ by Ashrit Goyal. Enjoy!
Use of AI in video game development and impact on gamers regarding privacy law: how privacy law and algorithmic transparency challenge game developers and studios with regulatory compliance
- INTRODUCTION:
The gaming industry faces a critical juncture as AI integration into video games advances at an unprecedented pace: artificial intelligence has become both a creative catalyst and a legal minefield. The use of AI to develop key assets such as artwork in games like Call of Duty: Black Ops 6 is drawing increasing ire from gamers[1],[2]. Activision also holds a patent covering its Call of Duty (‘COD’) lineup of video games, described as a system and method that encourages players to make in-game purchases of various types of content. For instance, the system may match an expert or ‘marquee’ player with a junior player so that the junior player, wishing to emulate the marquee player, is encouraged to purchase the weapons or other items the marquee player uses. Patents like these thrive on the sophisticated data collection and processing capabilities of gaming companies.[3]
Valve’s Steam platform stands out in the gaming world not just for its vast library or global reach, but for how well it uses player data to adjust its offerings and enhance user experience. Steam gathers a wide range of data, from playtime and in-game behavior to system specs, purchase patterns, mod usage, and social interactions. [4] This information helps create personalized game recommendations through machine learning, as well as refine pricing models, boost server performance, and identify and address cheating and fraud. For example, Valve’s well-known Steam Hardware & Software Survey gives developers valuable insights into the current hardware landscape, allowing for better optimization.
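To make the mechanics concrete, here is a minimal sketch of how playtime and tag data could feed a content-based recommender. All games, tags, hours, and function names below are invented for illustration; Valve’s actual pipeline is a proprietary, far richer machine-learning system.

```python
# Minimal, hypothetical sketch of playtime-driven game recommendation.
# All games, tags, and hours are invented; a real recommender would use
# far richer signals (social graph, purchases, hardware, session context).
from collections import defaultdict

playtime = {"Dota 2": 340.0, "Portal 2": 12.5, "Hades": 48.0}  # hours per owned game

tags = {
    "Dota 2": ["moba", "multiplayer", "competitive"],
    "Portal 2": ["puzzle", "co-op"],
    "Hades": ["roguelike", "action"],
}

candidates = {  # store titles the player does not own yet
    "League-like X": ["moba", "competitive"],
    "Puzzle Y": ["puzzle"],
    "Action Z": ["action", "roguelike"],
}

def tag_profile(playtime, tags):
    """Weight each tag by the hours spent in games carrying it."""
    profile = defaultdict(float)
    for game, hours in playtime.items():
        for tag in tags.get(game, []):
            profile[tag] += hours
    return profile

def score(candidate_tags, profile):
    """Score a candidate by summing the player's affinity for its tags."""
    return sum(profile.get(t, 0.0) for t in candidate_tags)

profile = tag_profile(playtime, tags)
ranked = sorted(candidates, key=lambda c: score(candidates[c], profile), reverse=True)
print(ranked)  # ['League-like X', 'Action Z', 'Puzzle Y']
```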
Topics like these are only the beginning of a bigger issue encompassing serious legal challenges around privacy law, algorithmic transparency, and intellectual property protection. Game developers now have to navigate a complicated set of regulations as AI-driven development introduces new legal risks while traditional trade secret protections conflict with new transparency requirements.
This analysis examines three critical dimensions of this evolving legal challenge: the privacy implications of AI-driven player data collection, the intellectual property complexities surrounding AI-generated game content, and the fundamental friction between algorithmic transparency requirements and trade secret protection. These issues collectively showcase one of the most significant legal developments facing the gaming industry in the 21st century.
- AI-DRIVEN PLAYER DATA COLLECTION: NAVIGATING GDPR AND OTHER EMERGING AI REGULATIONS
Modern gaming platforms employ sophisticated AI systems that collect vast amounts of player data, extending far beyond traditional gameplay metrics. Depending on the nature of the content used to train a model, privacy issues may arise. One research team analyzed the movements of 50,000 players across more than 2.5 million VR recordings of the popular VR game Beat Saber; its AI tool was quickly able to identify individual players with 94 percent accuracy. The team also discovered that motion data analysis, beyond identification, allowed them to infer a player’s dominant hand, height, and, in some instances, gender.[5]
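Why motion telemetry is so identifying can be shown with a toy nearest-neighbour matcher over per-player motion “signatures”. The features, numbers, and names below are invented assumptions; the cited study used far more sophisticated models over millions of recordings.

```python
# Toy sketch of motion-based identification: match an unlabeled sample to
# the enrolled player whose stored motion-feature vector is closest.
# Features (head height, swing speed, arm-span proxy) are invented.
import math

enrolled = {
    "player_a": (1.62, 4.1, 0.58),
    "player_b": (1.81, 5.3, 0.66),
    "player_c": (1.74, 3.2, 0.61),
}

def identify(sample, enrolled):
    """Return the enrolled player with minimal Euclidean distance to the sample."""
    return min(enrolled, key=lambda p: math.dist(enrolled[p], sample))

print(identify((1.80, 5.1, 0.65), enrolled))  # player_b
```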
The legal basis for this type of collection varies widely between jurisdictions. The European Union’s General Data Protection Regulation (GDPR) sets the toughest rules: the gaming industry must establish valid legal grounds for processing data, most frequently legitimate interest and contract performance. However, the automated nature of AI-driven decisions complicates this analysis, particularly when such systems affect player access to services or result in account sanctions.
GDPR Compliance Challenges in Gaming AI
The GDPR’s Article 22 prohibition on automated decision-making presents particular challenges for gaming AI systems. Anti-cheat systems exemplify this complexity, as they often operate through “fully automated bans without human intervention”[6], which are generally not permitted under the GDPR unless specific exceptions apply. However, it is tricky to justify anti-cheat bans on those grounds: players rarely give explicit consent, and there is ongoing legal debate about whether automated bans are truly “necessary” for delivering the core gaming experience.
A core tenet of the GDPR is that organizations must collect data for specific, explicit, and legitimate purposes; AI systems should not use personal data for unrelated tasks without further consent or justification.[7] The European Court of Justice (ECJ), in its SCHUFA decision (C-634/21, 7 December 2023), took a strict stance, ruling that automated profiling with serious consequences is lawful only where there is explicit consent or a clear contractual need. This reasoning should apply equally to gaming: even if developers cite a “legitimate interest” under Article 6(1)(f), Article 22’s specific ban on impactful automated decisions may override that justification.
Gaming companies have responded by implementing human oversight mechanisms, with many establishing ticket and appeal systems in which banned players can lodge an appeal, whereupon employees review the decision. This approach transforms purely automated decisions into human-reviewed determinations, but the practical implementation often falls short of meaningful human involvement.
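A hypothetical sketch of such a human-in-the-loop flow is below: the automated detector may only flag an account, while the final, logged decision rests with a named human reviewer. The class, fields, and workflow states are assumptions for illustration, not any vendor’s actual system.

```python
# Hypothetical Article 22-style human-in-the-loop ban flow: the automated
# detector can only *flag* an account; a ban is issued solely by a human
# reviewer, and the review trail is preserved for appeals.
from dataclasses import dataclass, field

@dataclass
class BanCase:
    account_id: str
    detector_score: float          # output of the automated anti-cheat model
    evidence: list = field(default_factory=list)
    status: str = "flagged"        # flagged -> banned / cleared

def flag_account(account_id, detector_score, evidence):
    """Automated step: creates a case but never bans on its own."""
    return BanCase(account_id, detector_score, list(evidence))

def human_review(case, reviewer, uphold, rationale):
    """Manual step: a named reviewer makes the final, logged decision."""
    case.status = "banned" if uphold else "cleared"
    case.evidence.append({"reviewer": reviewer, "rationale": rationale})
    return case

case = flag_account("acct-123", 0.97, ["impossible reaction times"])
case = human_review(case, reviewer="trust_safety_07", uphold=True,
                    rationale="Telemetry consistent with aim assistance")
print(case.status)  # banned
```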
The transparency obligations under GDPR Article 15(1)(h)[8] require companies to provide “meaningful information about the logic behind the automated decision”. This requirement directly conflicts with gaming companies’ desire to protect proprietary algorithms, creating tension between regulatory compliance and the preservation of competitive advantage.
Children’s Privacy Protection Under COPPA and Beyond
Violations of the Children’s Online Privacy Protection Act (COPPA) have resulted in record-breaking penalties for gaming companies such as Epic Games, whose $520 million settlement in 2022 established new enforcement precedents: the Federal Trade Commission imposed a $275 million penalty for violating children’s privacy law and required a further $245 million in refunds to consumers who made unintended purchases.[9]
Subsequent enforcement actions have maintained this trajectory: in 2025, HoYoverse agreed to pay $20 million to settle allegations that its game Genshin Impact violated COPPA.[10] Instances like these show that regulators are scrutinizing AI-powered systems targeting younger demographics in particular, especially where behavioural analysis or personalized content delivery is involved.
The emerging pattern of enforcement reveals that regulators are particularly concerned with AI systems that exploit vulnerabilities in children’s cognitive development. Under the EU AI Act, AI systems that exploit the vulnerabilities of specific groups, such as those based on age, disability, or socio-economic status, are explicitly prohibited as posing an “unacceptable risk” under Article 5(1)(b), a category that includes exploiting the cognitive vulnerabilities of children. For gaming companies operating in European markets, this prohibition adds a further compliance layer.
The EU AI Act’s Impact on Gaming
The EU AI Act, which entered into force on 1 August 2024, introduces a risk-based regulatory framework that significantly impacts games utilizing AI. Gaming companies may be regulated either as “providers of AI systems” when developing proprietary AI or as “deployers of AI systems” when integrating third-party AI solutions.
High-risk AI classifications under the Act can encompass gaming systems that affect the safety, livelihoods, and rights of natural persons, potentially including matchmaking algorithms, behavioural analysis systems, and automated moderation tools. The Act’s prohibition on manipulative AI systems directly impacts game design, banning AI that distorts behaviour through subliminal, deceptive, or manipulative techniques that impair autonomy and cause, or are likely to cause, harm.[11]
The practical implementation creates significant compliance burdens. High-risk AI systems must undergo “conformity assessments”, implement “extensive documentation and human oversight”, and maintain ongoing monitoring systems[12]. For gaming companies, this translates into substantial operational overhead and potential limitations on AI-driven innovation.
- THE LEGAL GREY AREA AROUND AI-GENERATED GAME CONTENT: OWNERSHIP, COPYRIGHT, AND LEGAL RESPONSIBILITY
Steam’s AI Disclosure Requirements and the Lacklustre Industry Response
Valve’s introduction of AI disclosure requirements on Steam marks a key step in shaping how the gaming industry navigates transparency and copyright risk. Developers must now identify AI-generated content in two ways: content created before release (“pre-generated”) and content created dynamically during gameplay (“live-generated”). This move not only informs players but also shields Valve from potential copyright disputes, as games can be rejected if developers can’t prove rights to the training data behind their AI.
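Valve’s actual process is a submission questionnaire, but the two categories lend themselves to a machine-readable form. The sketch below is hypothetical; every field name is invented for illustration.

```python
# Hypothetical, machine-readable rendering of Steam's two AI disclosure
# categories. Field names are invented; Valve's real flow is a questionnaire.
ai_disclosure = {
    "pre_generated": {
        "used": True,
        "description": "Some 2D artwork and ambient music were produced "
                       "with generative tools and reviewed by artists.",
        "rights_to_training_data": True,  # games can be rejected otherwise
    },
    "live_generated": {
        "used": True,
        "description": "NPC dialogue lines are generated at runtime.",
        "guardrails": "Output is filtered against unlawful or infringing content.",
    },
}
```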
While the policy has prompted disclosures in nearly 8,000 titles, around 7% of Steam’s catalogue[13], its enforcement has been inconsistent. A clear example is again Call of Duty: Black Ops 6, whose use of AI was disclosed several months after release, and only after players had caught the studio using AI-generated artwork[14], raising questions about how uniformly the rules are being applied and whether major studios are being held to the same standards as indie developers.
Intellectual Property Protection Challenges in AI-Generated Game Assets and Gaming AI
The use of generative AI to create game characters, textures, and background music raises acute copyright dilemmas. Many countries, including the U.S., do not recognize copyright in fully AI-generated content that lacks significant human input. Developers also face liability risks if AI systems unknowingly replicate identifiable portions of existing copyrighted content, whether in original or modified form. The use of copyrighted material without the rights holder’s permission raises further infringement concerns, especially in the context of AI-generated background content for games, and AI systems have repeatedly come under fire for using artists’ copyrighted works as training data without authorization.[15]
The training data used to create AI-generated content presents additional infringement risks. Visual artists and content creators have filed numerous lawsuits alleging that AI systems have been trained using copyrighted material without permission. The ongoing Andersen v. Stability AI case[16], which involves visual artists suing AI image generators for copyright infringement, could establish precedents directly affecting gaming companies’ use of AI-generated artwork.
The fundamental copyright challenge stems from the uncertain legal status of AI-generated works. Most jurisdictions, including the US, grant copyright protection only where human creativity plays a key role; without it, gaming companies are left without traditional intellectual property protection for AI-created assets. This creates a paradoxical situation in which companies invest resources in AI-generated content that may lack copyright protection. While legal disputes over the copying of AI-generated game assets have not yet made headlines, the absence of clear protection introduces uncertainty into content ownership and long-term commercial strategy.
Article 2 of the AI Act delimits the scope of the regulation, specifying who may be subject to the AI Act. Video game developers might specifically fall under two of these categories:
- Providers of AI systems, who are developers of AI systems who place them on the EU market or put the AI system into service under their own name or trademark, whether for payment or free of charge (Article 3(3) AI Act).
- Deployers of AI systems, who are users of AI systems in the course of a professional activity, provided they are established in the EU or have users of the AI system based in the EU (Article 3(4) AI Act). [17]
Trade secret protection has emerged as the primary intellectual property strategy for gaming AI systems. Trade secrets ordinarily provide protection “as long as the information remains confidential” and require companies to take active steps to preserve that confidentiality. However, this approach keeps algorithms opaque and conflicts with the transparency rules emerging under new AI regulations.
The gaming industry’s reliance on trade secret protection exposes strategic weaknesses that are becoming clearer in the era of AI-driven development. Unlike patents or copyrights, trade secrets do not stop others from independently creating similar technologies or reverse-engineering protected systems, as long as no illegal methods are used to obtain the information. This leaves companies exposed if competitors or malicious actors can legally replicate or approximate key systems. The recent AimJunkies v. Bungie case[18] illustrates both the potential and the limitations of current IP enforcement. In that matter, a U.S. court held a cheat software developer liable for copyright infringement after finding that it had unlawfully reverse-engineered the game Destiny 2. While the ruling confirmed that reverse engineering can infringe copyright, for example where game code is copied without authorization, it also highlighted the narrow scope of such protection: had the cheat developers independently recreated the functionality without copying code, traditional copyright law may not have offered the same recourse. This underscores the need for a more robust legal framework around algorithmic transparency and the protection of proprietary systems in gaming AI.
- ALGORITHMIC TRANSPARENCY IN GAMING AI: BALANCING TRADE SECRETS WITH REGULATORY DEMANDS
The Friction Between Transparency and Competitive Advantage
Gaming companies face a difficult balancing act between AI transparency and protecting trade secrets. The EU AI Act mandates that high-risk systems come with “clear and adequate information”, including the logic, significance, and expected consequences of automated decisions, as outlined in Article 13 and Recital 27. For game developers, this could mean explaining how systems like matchmaking algorithms or behavioural analysis tools operate, elements they have long kept confidential to maintain a competitive edge.
Legal scholars warn that the latitude for keeping AI opaque is narrower than many assume: opacity is justifiable only in limited scenarios, and market players may find it hard to justify broad secrecy without inviting regulatory scrutiny[19]. In gaming, revealing too much about anti-cheat systems or detailing how player behaviour influences ranking could allow malicious users to sidestep safeguards and undermine fairness. This tension between transparency and IP protection is exactly what the EU AI Act aims to address, but its implementation may reshape competitive behaviour among those using AI in gaming.
Regulatory Convergence and Future Compliance Requirements
The regulatory landscape for gaming AI is rapidly evolving, with multiple frameworks converging to create comprehensive compliance requirements. The EU AI Act, GDPR, and emerging national regulations create overlapping obligations that gaming companies must navigate simultaneously.
Mandatory algorithmic impact assessments are quickly becoming a key part of AI regulation, and this has important consequences for gaming companies. These assessments typically require detailed documentation explaining how an AI system works, what risks it might pose, and how those risks will be managed. For gaming companies, especially those that rely on proprietary algorithms such as matchmaking engines or player-behaviour prediction models as a competitive advantage, providing this transparency to players can be very difficult: disclosing too much may expose trade secrets. As a result, many companies are being pushed to rethink how they protect their intellectual property, exploring strategies that comply with transparency rules without undermining the value of their technology.
The emergence of regulatory sandboxes[20] in some jurisdictions provides potential relief for gaming companies seeking to test AI innovations while maintaining compliance. However, the gaming industry has been slower to adopt these mechanisms compared to financial services and healthcare sectors, potentially limiting their effectiveness for addressing gaming-specific challenges.
- EMERGING LEGAL STRATEGIES AND COMPLIANCE FRAMEWORKS
Risk-Based Compliance Approaches
To address growing regulatory scrutiny, many gaming companies are adopting risk-based compliance models that align internal governance with emerging AI regulations. These frameworks classify AI systems based on the potential harm they pose to users, applying stricter controls to high-risk systems. For example, under the EU AI Act, systems that interact with minors or significantly affect users’ rights, such as those tied to account suspension or content moderation, are categorized as high-risk and subject to requirements like human oversight, technical documentation, and post-market monitoring (Articles 6 and 17 EU AI Act).
Lower-risk systems, such as AI used for cosmetic in-game recommendations or minor UI personalization, may not trigger the same regulatory burdens. This tiered approach allows companies to focus compliance resources where they are most needed. However, the success of such strategies depends on accurate risk classification, which is complicated by the fluid and evolving nature of global AI laws. Legal advisors and compliance officers must often make judgment calls based on anticipated enforcement trends, particularly in regions like the EU, where the AI Act introduces broad and novel obligations.[21]
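A simplified sketch of such triage logic appears below. The inputs and tier names are loose assumptions inspired by the Act’s structure, not its actual legal test, and real classification calls for legal review.

```python
# Illustrative risk triage for in-game AI systems, loosely inspired by the
# EU AI Act's tiers. The rules below are simplified assumptions, not the
# Act's actual legal test.
def classify_ai_system(interacts_with_minors: bool,
                       affects_account_access: bool,
                       manipulative_design: bool,
                       cosmetic_only: bool) -> str:
    if manipulative_design:
        return "prohibited"       # cf. Article 5 manipulative practices
    if interacts_with_minors or affects_account_access:
        return "high_risk"        # oversight, documentation, monitoring
    if cosmetic_only:
        return "minimal_risk"     # e.g. cosmetic recommendations, UI tweaks
    return "limited_risk"         # transparency duties may still apply

print(classify_ai_system(False, True, False, False))  # high_risk
print(classify_ai_system(False, False, False, True))  # minimal_risk
```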
Industry Cooperation and Standard Development
The gaming industry is also engaging in collective standard-setting to guide AI governance. Organizations such as the Entertainment Software Association (ESA) have created internal working groups to address data privacy, ethical AI use and alignment with existing regulations. These efforts are consistent with broader industry-wide movements, such as the Partnership on AI and IEEE’s global AI ethics initiatives, which encourage responsible deployment of AI technologies.
Despite these efforts, industry-developed standards remain largely voluntary and lack binding force, which can limit their practical utility in jurisdictions with statutory requirements like the EU or Canada. Moreover, as the global regulatory environment becomes more fragmented, with different rules emerging in the EU, the United States, and Asia, multinational game publishers must reconcile divergent standards, increasing compliance costs and operational complexity.
- CONCLUSION
Thirty years ago, the magic behind Doom’s relentless demons, StarCraft’s Zerg rushes, and GoldenEye’s alarm-triggering guards was little more than razor-sharp scripting and clever pathfinding. That era of smart programming set a high bar for player immersion without leaning on what we now call artificial intelligence. Fast-forward to 2025 and games like Horizon Forbidden West, Red Dead Redemption 2, and even the latest Call of Duty entry are running on learning systems that animate faces on the fly, balance multiplayer lobbies in real time, and compose music that reacts to every heartbeat of the match. The leap from tight if-then logic to self-optimising code is breathtaking, but it has also opened a Pandora’s box of privacy, transparency, and fairness questions that the industry can no longer treat as side quests.
The direction is already clear. Europe’s AI Act, Valve’s new disclosure policy on Steam, and recent enforcement actions under GDPR and COPPA show that transparency is quickly becoming the price of admission to global markets. Studios that treat privacy-by-design as a core production value, limit data collection, document model pipelines, and provide meaningful opt-outs will spend less time in court and more time shipping games.
Trade-secret protection need not disappear in the process. Developers can still protect proprietary code while offering external audits, impact assessments, and plain-language explanations of how matchmaking, personalization, or dynamic asset generation work. The security industry has long balanced open vulnerability reporting with closed-source software; gaming can strike the same balance for AI.
Equally important is culture. Players who once marvelled at a clever boss fight now question the skill-based matchmaking in Call of Duty and its careless microtransactions, and raise valid questions about a developer’s vision when updates to today’s live-service games do not fit the overarching structure of the game.
Consider how Riot Games already publishes detailed explanations of their League of Legends matchmaking system, breaking down MMR (matchmaking rating) calculations and queue algorithms in developer blogs that millions read. Bungie has similarly opened up about Destiny 2’s weapon randomization through their “This Week at Bungie” posts, explaining how perks are weighted and why certain combinations appear more frequently. Building on this precedent, future titles could integrate these explanations directly into the user interface, like hovering over your Crucible match results to see exactly which skill metrics influenced team composition, or checking why specific Eververse items rotated into your storefront based on your play patterns and preferences.
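Riot’s production MMR is proprietary, but the Elo-style update that such developer blogs typically build on is public and compact. A minimal sketch, assuming a standard K-factor of 32:

```python
# Minimal Elo-style rating update, the textbook building block behind many
# MMR systems. Riot's real system is proprietary and more elaborate; this
# only shows the kind of logic a UI explanation could surface.
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating: float, expected: float, actual: float, k: float = 32.0) -> float:
    """Move the rating toward the observed result, scaled by the K-factor."""
    return rating + k * (actual - expected)

a, b = 1500.0, 1650.0
e = expected_score(a, b)           # ~0.30: A is the underdog
a_new = update(a, e, actual=1.0)   # A wins anyway, so A gains rating
print(round(e, 2), round(a_new))   # 0.3 1523
```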
EA Sports has already taken steps toward transparency with their FIFA Ultimate Team pack odds, as required by various gaming commissions. The logical next step would be expanding this to include algorithmic fairness reports similar to how major tech companies now publish diversity statistics and content moderation transparency reports. Picture EA partnering with established auditing firms like PwC or Deloitte, who already verify gaming revenue and player counts, to annually review whether their matchmaking treats players equitably across skill levels, spending habits, and geographic regions. These moves sound radical until you remember that patch notes, dev blogs, and backstage GDC talks have built fan loyalty for decades.
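Disclosed odds are straightforward to make verifiable by construction: the published probability table can be the very table the draw samples from. A minimal sketch with invented rarities and percentages (not EA’s published figures):

```python
# Illustrative loot draw whose disclosed odds are, by construction, the same
# table the draw actually uses. Rarities and percentages are invented.
import random

DISCLOSED_ODDS = {        # what the storefront would display to players
    "common": 0.70,
    "rare": 0.25,
    "ultra_rare": 0.05,
}

def open_pack() -> str:
    """Draw one item rarity according to the published table."""
    rarities = list(DISCLOSED_ODDS)
    weights = [DISCLOSED_ODDS[r] for r in rarities]
    return random.choices(rarities, weights=weights, k=1)[0]

results = [open_pack() for _ in range(10_000)]
for rarity in DISCLOSED_ODDS:
    print(rarity, round(results.count(rarity) / len(results), 3))  # close to disclosed odds
```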
If the past decade was about proving that AI can make games bigger, the next will be about showing that AI can make games better: more inclusive, more responsible, and ultimately more fun. Studios that pair technological daring with legal foresight will not only navigate the coming regulations but will write the playbook for an industry that can stay adventurous and thrive in an era of unprecedented legal development.
Meet the Author
Ashrit Goyal is an Associate at Fox Mandal & Associates in New Delhi, India, where he specialises in intellectual property law with well-rounded experience spanning prosecution, enforcement, oppositions, and IP transactions. He is adept at managing Indian as well as cross-border trademark matters, backed by strong legal drafting and research skills. Known as a clear and persuasive communicator, Ashrit brings a collaborative work ethic to every project while staying adaptable and proactive under demanding deadlines.
Beyond his legal practice, Ashrit has always sought to align his affinity for the law with his longstanding passion for technology and gaming. This pursuit has shaped his growing interest in esports and sports law, areas where rapidly evolving industries meet equally dynamic regulatory challenges. His writing, including recent work exploring the intersection of AI, privacy law, and video game development, reflects this commitment to finding synergy between innovation and legal frameworks.
Outside the office, Ashrit channels the same focus and strategic mindset into esports. For over four years, he has been both a professional coach and an active player, competing in the South Asia/Southeast Asia circuit of Rainbow Six Siege X esports with his team, Hasib Warriors. This dual pursuit reflects not only his competitive spirit but also his broader vision: to build a career at the intersection of intellectual property, tech, and gaming law, where his professional expertise and personal passions converge.
[1] Times of India, ‘Activision Faces Backlash for “AI Mistakes” in Call of Duty Black Ops 6 – Times of India’ (11 December 2024) https://timesofindia.indiatimes.com/technology/gaming/activision-faces-backlash-for-ai-mistakes-in-call-of-duty-black-ops-6/articleshow/116218986.cms accessed 31 July 2025.
[2] E Gach, ‘Call of Duty Discloses AI Slop after Months of Players Complaining’ (25 February 2025) Kotaku https://kotaku.com/black-ops-6-ai-calling-card-loading-screen-blops-6-cod-1851766313 accessed 31 July 2025.
[3] VC Mathews and A Goyal, ‘IP Strategies for the Billion-Dollar Video Gaming Industry’ (18 October 2023) Lexology https://www.lexology.com/library/detail.aspx?g=87cd898b-cb7d-4b00-80b5-3c9b8fe69cb4 accessed 31 July 2025.
[4] L Yang et al, ‘Large-Scale Personalized Video Game Recommendation via Social-Aware Contextualized Graph Neural Network’ (11 February 2022) arXiv https://arxiv.org/abs/2202.03392 accessed 31 July 2025.
[5] B Yirka, ‘Study Shows VR Users in the Metaverse Can Be Identified Using Head and Hand Motion Data’ (22 February 2023) Tech Xplore – Technology and Engineering News https://techxplore.com/news/2023-02-vr-users-metaverse-motion.html accessed 31 July 2025.
[6] M Härtel, ‘Anti-Cheat Software vs Data Protection: Legal Risks and Design Options – RA Marian Härtel’ (14 May 2025) https://itmedialaw.com/en/anti-cheat-software-vs-data-protection-legal-risks-and-design-options/ accessed 31 July 2025.
[7] OV Ademokun, ‘AI and the GDPR: Understanding the Foundations of Compliance’ (4 June 2025) TechGDPR https://techgdpr.com/blog/ai-and-the-gdpr-understanding-the-foundations-of-compliance/ accessed 31 July 2025.
[8] ‘Art 15 GDPR – Right of Access by the Data Subject’ (28 March 2018) General Data Protection Regulation (GDPR) https://gdpr-info.eu/art-15-gdpr/ accessed 31 July 2025.
[9] ‘“Fortnite” Creator Agrees to Pay a Record Penalty for Violating Children’s Privacy Laws’ (13 February 2023) Polsinelli https://www.polsinelli.com/publications/fortnite-creator-agrees-to-pay-a-record-penalty-for-violating-childrens-privacy-laws accessed 31 July 2025.
[10] FTC, ‘Genshin Impact Game Developer Will Be Banned from Selling Lootboxes to Teens under 16 without Parental Consent, Pay a $20 Million Fine to Settle FTC Charges’ (17 January 2025) Federal Trade Commission https://www.ftc.gov/news-events/news/press-releases/2025/01/genshin-impact-game-developer-will-be-banned-selling-lootboxes-teens-under-16-without-parental accessed 31 July 2025.
[11] ‘Article 5: Prohibited AI Practices’ (2 February 2025) EU Artificial Intelligence Act https://artificialintelligenceact.eu/article/5/ accessed 31 July 2025.
[12] S Snodgrass, ‘AI Trends to Watch in 2024 in Game Development and Beyond’ (26 February 2024) modl.ai | AI Engine for Game Development https://modl.ai/ai-trends-2024-game-development/ accessed 31 July 2025.
[13] A Robinson, ‘Steam Games Disclosing Generative AI Use “Are Up 800%” This Year’ (17 July 2025) Video Games Chronicle https://www.videogameschronicle.com/news/steam-games-disclosing-generative-ai-use-are-up-800-this-year/ accessed 31 July 2025.
[14] The Hindu, ‘“Call of Duty: Black Ops 6” Game Maker Activision Discloses Use of Generative AI to Help Develop Some In-Game Assets’ (26 February 2025) https://www.thehindu.com/sci-tech/technology/activision-discloses-use-of-generative-ai-in-call-of-duty-black-ops-6-game/article69265159.ece accessed 31 July 2025.
[15] Asia IP, ‘How Generative AI is Reshaping Gaming’ (31 March 2025) Asia IP https://asiaiplaw.com/article/how-generative-ai-is-reshaping-gaming accessed 31 July 2025.
[16] Z Schor, ‘Andersen v Stability AI: The Landmark Case Unpacking the Copyright Risks of AI Image Generators’ (2 December 2024) NYU Journal of Intellectual Property & Entertainment Law https://jipel.law.nyu.edu/andersen-v-stability-ai-the-landmark-case-unpacking-the-copyright-risks-of-ai-image-generators/ accessed 31 July 2025.
[17] CJ Oliver Heinisch, ‘Some Implications of the EU AI Act on Video Game Developers’ (28 February 2025) AI Law and Policy https://www.ailawandpolicy.com/2025/02/some-implications-of-the-eu-ai-act-on-video-game-developers/ accessed 31 July 2025.
[18] Emily Price, ‘Bungie Wins Lawsuit Against Cheat Maker Aimjunkies’ (27 May 2024) PCMAG https://www.pcmag.com/news/bungie-wins-lawsuit-against-cheat-maker-aimjunkies/ accessed 31 July 2025.
[19] M Busuioc, D Curtin, and M Almada, ‘Reclaiming Transparency: Contesting the Logics of Secrecy within the AI Act’ (23 December 2022) European Law Open, Cambridge https://www.cambridge.org/core/journals/european-law-open/article/reclaiming-transparency-contesting-the-logics-of-secrecy-within-the-ai-act/01B90DB4D042204EED7C4EEF6EEBE7EA accessed 31 July 2025.
[20] ‘Regulatory Sandboxes – Testing Environments for Innovation and Regulation’ (n.d.) BMWE https://www.bundeswirtschaftsministerium.de/Redaktion/EN/Dossier/regulatory-sandboxes.html accessed 31 July 2025.
[21] M Veale and FZ Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) Computer Law Review International https://arxiv.org/abs/2107.03721 accessed 31 July 2025.