5 Takeaways from LaPaglia v. Valve: Gamer Challenges Arbitration Award Over Alleged ChatGPT Use
A gamer has petitioned a U.S. federal court to vacate an arbitration award in favor of Valve Corporation, alleging the arbitrator “outsourced his adjudicative role to artificial intelligence.”
John LaPaglia – a Connecticut-based gamer – filed the petition last month in the U.S. District Court for the Southern District of California, seeking to set aside a 29-page American Arbitration Association (AAA) award issued on 7 January 2025. LaPaglia alleges that the award, which ruled in favor of Valve (the company behind the Steam platform), was partly ghostwritten by ChatGPT or a similar AI tool, and that the arbitrator’s heavy reliance on AI-produced text breached his right to a human-rendered decision and violated fundamental fairness.
Key Players and Background of LaPaglia v. Valve
The dispute stems from an AAA consumer arbitration that LaPaglia initiated against Valve in 2022. He alleged that Valve’s Steam store monopolized PC game distribution (raising prices) and also sought a remedy for a defective game he had purchased. The AAA appointed Michael Saydah – a California-based arbitrator and former in-house counsel – as the sole arbitrator for the case. Saydah would later become the focus of controversy over his purported use of AI to prepare the award, something LaPaglia had no inkling of at the time.
The arbitration hearing took place over 10 days in December 2024. Notably, Arbitrator Saydah had indicated during the hearing that he was eager to issue a decision quickly because he had an upcoming trip to the Galápagos. Post-hearing briefs were submitted on 23 December 2024, and sure enough, Saydah rendered the final award just two weeks later on 7 January 2025 – the day he was set to depart on vacation. The award rejected all of LaPaglia’s claims (antitrust and warranty) and ruled in Valve’s favor, bringing a swift end to the AAA case. But that swift resolution soon gave rise to skepticism.
“Hallmarks of AI”: LaPaglia’s Allegations Against the Award
In April 2025, LaPaglia filed a motion to vacate the award under the Federal Arbitration Act (FAA) §10, on the basis that the arbitrator “exceeded his powers” by effectively delegating the decision to an AI. The petition paints a startling picture of how and why LaPaglia believes ChatGPT wrote significant parts of the arbitrator’s decision:
- Arbitrator’s Own Admissions: During a break in the proceedings, Saydah allegedly boasted that he uses AI tools to help write articles. He even “told a story about having been assigned to write a short article for an aviation club and using ChatGPT to draft it to save time”. This raised LaPaglia’s suspicions that Saydah might similarly lean on AI for writing the legal award.
- Rushed Timeline: Saydah’s comments about his imminent vacation, coupled with the expedited timeline (a 29-page award issued only 15 days after final briefs, spanning the Christmas and New Year holidays), suggested to LaPaglia that the arbitrator may have sought a shortcut. Drafting a complex award that quickly, the petition argues, is a red flag.
- Tell-tale Textual Signs: According to LaPaglia, the final award “contains telltale signs of AI generation” – including oddly phrased sentences, redundant passages, and even references to facts “both untrue and not presented at trial or present in the record,” appearing without citations. These anomalies imply that content might have been hallucinated by an AI (produced out of thin air), since a human arbitrator normally would not invent facts or omit supporting references.
- The ChatGPT Test: In a quasi-Turing test of the award, a law clerk for LaPaglia’s counsel copied a portion of the arbitral decision and asked ChatGPT whether it was written by a human or by AI (a hypothetical sketch of such a query appears after this list). ChatGPT’s analysis was damning: it responded that the paragraph’s “awkward phrasing, redundancy, incoherence, and overgeneralization” suggested it was generated by AI rather than written by a human. This result, presented in an affidavit, bolsters LaPaglia’s claim that the award “bore the hallmarks of AI drafting.”
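The petition describes a manual query through the ChatGPT interface. Purely for illustration, a comparable check could be scripted against the OpenAI API – the sketch below is hypothetical (the model name, prompt, and helper function are this article’s assumptions, not anything from the case record), and, as Valve’s response below underscores, a model’s impression of a passage’s “style” is weak evidence of authorship.

```python
# Hypothetical sketch only: NOT the method used in the case, and an LLM's
# impression of a passage's "style" is weak evidence of authorship.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def authorship_impression(passage: str, model: str = "gpt-4o") -> str:
    """Ask the model for a (non-authoritative) view on whether a passage
    reads as human-written or AI-generated."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": ("State whether the following text reads as "
                         "human-written or AI-generated, and briefly say why.")},
            {"role": "user", "content": passage},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    excerpt = "..."  # a paragraph copied from the award (not reproduced here)
    print(authorship_impression(excerpt))
```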
LaPaglia’s legal argument is that by using AI in this manner, the arbitrator breached the parties’ arbitration agreement, which implicitly requires a neutral human decision-maker and a reasoned, human-authored award. The FAA allows vacatur when an arbitrator exceeds his powers or behaves in a manner not contemplated by the parties’ contract.
In LaPaglia’s view, an arbitrator who hands off writing (or reasoning) to an AI has stepped outside the scope of authority, comparable, he argues, to cases where awards were voided because an unqualified person effectively decided the case instead of the duly appointed arbitrator. Just as courts have vacated awards if an arbitrator was later revealed to be an “impostor” or if the decision-making was delegated to someone else, LaPaglia contends a court “must vacate an award when decision-making is outsourced to AI.”
In addition to the AI issue, LaPaglia’s petition criticizes procedural moves by Saydah – notably that he consolidated LaPaglia’s claims with those of 22 other consumer claimants without consent, and allegedly barred LaPaglia from presenting certain evidence (such as an expert report on Valve’s market share). These points reinforce the narrative that the arbitrator cut corners and did not fully consider LaPaglia’s individual case. However, the headline grievance remains the use of AI, a novel basis for challenging an arbitration award.
Valve’s Response and the Question of Proof
Valve Corporation vehemently opposes the bid to overturn the award. In its court filings and public statements, Valve argues that LaPaglia has no concrete evidence that the arbitrator actually relied on AI – only conjecture and an AI’s own opinion of the writing style. Valve’s counsel points out that the petition’s technical analysis hinges largely on what ChatGPT itself said about the award, which Valve suggests is an unreliable measure of authorship. The arbitrator’s quick drafting could have other explanations (such as using a standard template or working diligently over the holidays), and stylistic quirks in the text, Valve contends, do not prove non-human authorship.
Moreover, Valve has highlighted an ironic twist: LaPaglia’s own side encountered trouble with AI-generated text in a related matter. According to Valve, William Bucher, who represented LaPaglia in the arbitration, was rebuked in a separate Valve-related arbitration for submitting a letter brief that contained numerous fake case citations generated by an AI tool.
In that incident, an arbitrator reportedly sanctioned Bucher for filing a brief rife with nonexistent or “hallucinated” legal references. (Bucher, when contacted by the press, denied intentionally using AI to draft the brief, explaining that one of his associates had done so without his knowledge, and that he promptly corrected the record once the inaccuracies came to light.) Valve’s point is to cast doubt on LaPaglia’s AI-detection methods and to suggest that the claimant’s camp is not immune to AI mishaps either.
It’s important to note that Arbitrator Saydah has not publicly commented on the allegations (he “was contacted for comment” by one outlet, but no response was reported). The court has yet to hold a hearing on LaPaglia’s petition, and for now, the award remains in effect. The case has quickly become a talking point in both legal and esports circles due to its implications: Can an arbitration award be invalidated because a machine, rather than a mind, wrote it?
AI and the New York Convention: Human Tribunal or Not?
LaPaglia’s challenge not only tests U.S. domestic arbitration law but also raises questions for international enforcement of arbitral awards. The award in question is domestic (Valve and LaPaglia are in the U.S.), but the scenario of an AI-influenced award could create enforcement hurdles abroad under the New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards (1958). This global treaty requires courts of 170+ countries to enforce valid arbitration awards, but it provides exceptions, two of which may be directly relevant if an award is rendered with AI involvement:
- Improper Tribunal or Procedure (Article V(1)(d)): Enforcement may be refused if “the composition of the arbitral authority or the arbitral procedure was not in accordance with the agreement of the parties”. Parties expect arbitrators (human ones) to hear and decide their dispute. If a court concludes that critical aspects of the decision were effectively made by an AI (which was never agreed to), it might treat the award as not coming from a duly constituted tribunal. In other words, an argument could be made that the de facto “arbitral authority” included an unauthorized AI agent, violating the parties’ agreement.
- Public Policy (Article V(2)(b)): A court can also refuse enforcement if it finds the award or its decision-making process “contrary to the public policy” of that country. Different countries define public policy differently, but fundamental fairness and the human administration of justice are often core principles. In many jurisdictions, the idea of a non-human arbitrator may be inimical to public policy. For example, Indonesian arbitration law explicitly requires arbitrators to be natural persons of legal capacity. If an award were seen as essentially rendered by a machine, a foreign court might view it as violating the forum’s basic notions of justice or the legally required definition of an “arbitrator.” As one analysis noted, “foreign awards by AI arbitrators may face enforcement challenges under the New York Convention if deemed contrary to public policy”.
Legal scholars have begun grappling with these questions in recent years. Some point out that arbitration laws generally don’t explicitly mandate that arbitrators be human, but this was simply assumed when the laws were written.
Former ICC Court President Alexis Mourre addressed this issue bluntly in a 2025 speech, arguing that if an AI makes the decisions, it isn’t true arbitration at all. Mourre warned that “AI decisions aren’t arbitration,” emphatically stating “the algorithm should not replace arbitrators.” Human judgment, he noted, is a bedrock of arbitral justice, and he called on arbitral institutions to develop binding rules to prevent any ambiguity on this point. His comments reflect a growing consensus that an award issued by an AI-driven process could be inherently unenforceable in many jurisdictions, absent express consent by the parties.
A Call for an Esports Arbitration Forum with Tech-Savvy Rules
The LaPaglia v. Valve case also underscores why some industry experts advocate for a specialized dispute resolution forum for esports and gaming. The esports industry, with its unique blend of technology, intellectual property, and global participation, often faces disputes that traditional arbitration forums may not be fully equipped to handle efficiently. Could a dedicated esports arbitration institution preempt issues like an arbitrator secretly using AI?
Interestingly, such an institution is already in the works. In January 2025, the Esports Integrity Commission (ESIC) and the World Intellectual Property Organization (WIPO) jointly launched the International Games and Esports Tribunal (IGET) – a groundbreaking arbitral body explicitly tailored to gaming and esports disputes. IGET is designed with specialized expertise and modernized procedures in mind:
- Tech-Savvy Arbitrators: IGET boasts a panel of arbitrators and mediators who have deep knowledge of both esports/gaming and the law. These neutrals are expected to understand industry-specific context, from game mechanics to data analytics, reducing the learning curve that an ordinary arbitrator might have. This specialization can help in evaluating technical evidence (for example, anti-cheat data or gameplay recordings) and in appreciating the rapid pace at which esports operates.
- Comprehensive Scope & Rules: The tribunal’s mandate covers a wide array of gaming disputes, from player contract issues and tournament rules to cheating allegations and intellectual property fights. Crucially, because it is a fresh start, IGET has the opportunity to craft “tech-forward” procedural rules from day one. These could include clear provisions on whether and how arbitrators may use AI tools in managing cases or drafting decisions, requirements to ensure data integrity for digital evidence, and a rule requiring party consent or disclosure before any unconventional techniques are employed in the process.
- Ensuring Integrity and Consent: Given esports’ reliance on digital platforms, an esports tribunal can implement protocols to verify the integrity of data submitted (to prevent tampering with replays, chat logs, or statistical data). It can also mandate that if arbitrators wish to utilize any AI assistance – even something as benign as an AI transcription service or legal research tool – they must disclose this to the parties in advance and perhaps obtain consent, much as the new AI guidelines suggest (discussed below). By building such protocols into its rules, a forum like IGET could avoid the situation that arose in LaPaglia v. Valve, where the parties had no idea an AI might be involved until after the fact.
- Global Accessibility: Esports disputes often involve parties from different continents. IGET is structured to be globally accessible, administering cases online and under neutral rules, which may make it easier to enforce decisions internationally. With WIPO’s backing, IGET awards would carry a certain pedigree, and if its rules ensure human oversight of AI, its awards should be enforceable under the New York Convention without the shadow of an AI challenge.
The idea of a specialized esports arbitral body is to provide a forum that “understands the unique dynamics of gaming and esports” and bridges a gap that traditional legal forums struggle to close. The LaPaglia case might well become a rallying point for why such specialization is needed: it dramatically shows what can go wrong when an arbitrator unfamiliar with either the technology or the ethical expectations around it allegedly uses a tool like ChatGPT in a vacuum.
A body like IGET, or any specialized tribunal, would ideally have clear guidelines prohibiting the delegation of decision-making to AI, as well as training for arbitrators on the appropriate use of technology. In the esports context, where tech is everywhere, having “referees” who know the rules of the game (both literally and metaphorically) is invaluable.
Best Practices: AI Guidelines and Human Oversight in Arbitration
As the legal world grapples with AI’s increasing presence, arbitral institutions and professional bodies have started to issue guidance on best practices for AI in arbitration. The situation in LaPaglia v. Valve highlights exactly the risks these guidelines seek to mitigate. Some key points from recent guidance:
1. Non-Delegation of Decision-Making: In April 2024, the Silicon Valley Arbitration & Mediation Center (SVAMC) published its “Guidelines on the Use of Artificial Intelligence in Arbitration,” the first framework of its kind. A core principle in the SVAMC Guidelines is that arbitrators must not delegate their fundamental decision-making responsibilities to AI. The arbitrator may use AI for support (like organizing evidence or even drafting language), but the analysis of facts, evaluation of evidence, and legal reasoning must remain the arbitrator’s own. This ensures the human arbitrator is truly the author of the award’s outcome. In LaPaglia’s case, if it were shown that Saydah let ChatGPT actually determine factual or legal conclusions, that would violate this non-delegation principle.
2. Due Process and Disclosure: Both the SVAMC Guidelines and the new 2025 Chartered Institute of Arbitrators (CIArb) Guideline on AI echo the importance of transparency. An arbitrator who intends to use AI tools is advised (or in some cases required) to disclose this to the parties in advance and, where practical, seek the parties’ agreement. At minimum, parties should be given a chance to object or comment on the proposed AI usage (for example, if an arbitrator wanted to use an AI translation service or have an AI summarize witness testimony). In the LaPaglia arbitration, Saydah apparently made no disclosure that he might use AI in drafting. Had SVAMC/CIArb best practices been followed, LaPaglia would have been aware and able to voice concerns before the award was issued.
3. Independent Verification – No “Blind Trust” in AI: Another best practice is that if arbitrators do use AI for any aspect of their work, they must independently verify the accuracy of AI-generated outputs before relying on them. This addresses the well-known problem of AI “hallucinations” – generating text that sounds plausible but is false. For instance, if an arbitrator used an AI tool to draft a portion of the award or to do legal research, the arbitrator should double-check every reference and fact for accuracy (a toy illustration of this verification step appears after this list). Both SVAMC Guideline 7 and CIArb’s rules remind arbitrators that ultimate responsibility for the award’s content rests with the human. If Saydah had adhered to this, any fake facts or case citations that an AI inserted would ideally have been caught and removed. (The allegation that the award cited “untrue” facts not in the record suggests a lapse in this duty of verification.)
4. Maintain Control and Judgment: The CIArb 2025 Guideline explicitly allows AI as a supportive tool but prohibits using AI in a way that influences substantive decision-making or procedural rulings. It also emphasizes that the arbitrator “remains fully responsible for the award, regardless of AI assistance”. This mirrors SVAMC’s instruction that an arbitrator’s independent analysis must always underpin the outcome. In practice, this means an arbitrator might use, say, an AI to proofread or to organize evidence, but not to decide who wins or to draft the core reasoning without careful oversight.
5. Institutional Oversight: Several arbitral institutions are now considering rules to address AI. While the major rules (ICC, AAA/ICDR, etc.) don’t yet explicitly mention AI, organizations like CIArb and SVAMC provide guidance that could be adopted into institutional protocols. There’s also discussion of arbitration-specific AI ethics codes. The consensus forming is that transparency, party consent, and human oversight are key pillars. As one commentator put it, guidelines from SVAMC and CIArb make clear that arbitrators bear ultimate responsibility for the accuracy, integrity, and human authorship of their awards.
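As a concrete (and purely hypothetical) illustration of the verification step in point 3, an arbitrator or clerk could mechanically pull every citation-shaped string out of a draft for manual checking. The toy pattern below is this article’s assumption – it catches common reporter-style citations and nothing more, and actually verifying each extracted cite against an authoritative reporter or database remains a human task.

```python
import re

# Toy pattern for common reporter-style citations, e.g.
# "Smith v. Jones, 123 F.3d 456". Purely illustrative: it will miss many
# citation formats and proves nothing by itself; each extracted cite still
# has to be checked by a human against an authoritative source.
CITATION_RE = re.compile(
    r"[A-Z][A-Za-z.'&-]*(?: [A-Z][A-Za-z.'&-]*)* v\. "   # first party, "v."
    r"[A-Z][A-Za-z.'&-]*(?: [A-Z][A-Za-z.'&-]*)*, "      # second party
    r"\d+ [A-Za-z0-9. ]+ \d+"                            # volume, reporter, page
)

def citations_to_verify(draft_text: str) -> list[str]:
    """Return every citation-shaped string in a draft, for manual verification."""
    return [m.group(0) for m in CITATION_RE.finditer(draft_text)]

if __name__ == "__main__":
    draft = "As held in Smith v. Jones, 123 F.3d 456 (9th Cir. 1997), ..."
    for cite in citations_to_verify(draft):
        print("Check against the reporter/database:", cite)
```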
Alexis Mourre’s admonition is a fitting note to conclude on: “the algorithm should not replace arbitrators.” No matter how advanced AI becomes, the role of an arbitrator – especially in an esports dispute where nuance, fairness, and industry context are critical – requires human intellect and discernment. The LaPaglia v. Valve case is testing the boundaries of that principle. Its outcome could either reinforce the longstanding norm that only humans can render enforceable arbitration awards or, conversely, open the door (even slightly) to AI-assisted justice.
For now, the safe bet for arbitrators (and counsel) is to heed the best practices: if you use AI, use it carefully, sparingly, and transparently. The esports industry, on the cutting edge of technology, will no doubt continue to watch this space closely. After all, competitive gaming is all about skill, and in the arena of dispute resolution, it appears the skill of the human arbitrator remains irreplaceable.
LaPaglia v Valve Corporation (3:25-cv-00833)
In the US District Court for the Southern District of California
Counsel to John LaPaglia
• Morrow Ni
Xinlin Li Morrow in San Diego
Counsel to Valve Corporation
• Skadden Arps Slate Meagher & Flom
Partner Virginia Milstead in Los Angeles
In the AAA proceedings
Sole arbitrator
• Michael Saydah
Counsel to John LaPaglia
• William Bucher
Counsel to Valve Corporation
• Skadden Arps Slate Meagher & Flom
Partner Virginia Milstead in Los Angeles
Via: Global Arbitration Review (paywall)