The Writers Guild of America (WGA) has announced Hollywood’s first writers’ strike in 15 years, following weeks of failed negotiations with the Alliance of Motion Picture and Television Producers (AMPTP), the trade group that bargains on behalf of major entertainment companies such as Netflix, Amazon, Apple, and Disney. The strike came after the writers demanded, for the first time, regulation of the studios’ use of generative AI tools like ChatGPT.
According to the guild, the studios’ responses to its proposals were “wholly insufficient given the existential crisis writers are facing,” prompting members to walk off the job and join picket lines in Los Angeles and New York. The WGA also published a list of its proposals alongside the AMPTP’s responses, including a proposal to regulate the use of artificial intelligence on union projects: AI cannot write or rewrite literary material, AI-generated material cannot be used as source material, and writers’ covered work cannot be used to train AI. The AMPTP rejected this proposal, instead offering annual meetings to discuss “advancements in technology.”
Since the strike began, writers have been vocal about the AI proposal, saying they do not want generative AI tools to become an industry standard at writers’ expense. Many believe the studios undervalue writers and their work and want to treat them as gig workers. The fear is that AI-generated material could be used to replace human-written scripts outright, or to justify paying writers less to rewrite machine-generated drafts.
The WGA’s two AI stipulations are that “literary material” cannot be generated by AI, and that AI-generated output cannot count as “source material.”
Studios frequently hire writers to adapt existing source material into new work, so it is easy to imagine a studio using AI to generate ideas or drafts, declaring the output “source material,” and then hiring a writer to polish it at a lower rate.
The AMPTP’s position reflects an inflated perception of AI’s capabilities, one visible in a string of corporate media shake-ups in which executives have prioritized AI content over human-created work. Yet these systems remain prone to misinformation and bias, as Microsoft researchers recently acknowledged in a paper. They found that GPT-4, the large language model that powers the latest version of ChatGPT, has trouble distinguishing between established facts and guesses, makes far-fetched conceptual leaps, and invents facts that are not in its training data.
Generative AI systems already face copyright challenges from writers and artists who say the models were trained on their copyrighted work without permission. The strike makes plain that writers fear their work will be replaced or devalued by generative AI tools, a fear that helped drive negotiations between the WGA and the AMPTP to a breakdown.