AI in Film: Copyright, Labour & Moral Rights Guide
Yiheng Lu
Guest Contributor
PhD Researcher in Intellectual Property Law
Queen Mary University of London
Supervised by Professor Johanna Gibson
When does AI help filmmakers, and when does it threaten artistic integrity? A PhD researcher's analysis of copyright, labour disputes, and disclosure in cinema.
Introduction
Artificial intelligence is now part of everyday discussion about cinema. It can help with scheduling, script organisation, technical polishing, visual effects, and other time-consuming tasks. Yet one principle should remain clear: filmmaking is still an art. At its centre are storytelling, human judgment, and human performance. AI may reshape production, but it does not automatically replace the creative role of human beings.
This article explores a simple but increasingly important question: when does AI support filmmaking, and when does it begin to undermine artistic integrity? The answer matters not only to directors, actors, and studios, but also to audiences, whose trust and cultural expectations help shape the future of cinema. As recent industry disputes, copyright debates, and film-award controversies show, AI is no longer a futuristic side issue. It is already part of the legal and ethical reality of filmmaking.
1. AI in film is not entirely new, but the scale is changing
One useful way to understand the current debate is to separate assistance from replacement. Film has always used technology. Digital editing, CGI, sound correction, motion capture, and post-production tools are not new. What has changed is that AI can now imitate voices, styles, facial expressions, and even patterns of storytelling at a much more advanced level than before.
Two public-facing examples help explain why attention has grown. One was Furious 7, where digital techniques were used after Paul Walker’s death to complete the film, with his brothers serving as stand-ins and visual effects used to finish his remaining scenes.[1] Another was Sunspring, the 2016 short film written by an AI system, which became a widely discussed experiment in machine-generated storytelling.[2] These two examples showed very different possibilities: AI as a way to complete or support an existing film, and AI as an apparent generator of new creative material.
2. Early AI often created the illusion of creativity
A critical theme is that early AI-generated works often looked human without truly thinking or understanding like humans. They could imitate the surface of expression, but not always its depth. In that sense, AI often created an illusion of creativity: outputs that resembled writing, art, or emotion, while still lacking coherence, intention, or genuine insight.
That helps explain why early AI-generated film scripts could feel uncanny rather than convincing. They might produce dialogue that sounded poetic or vaguely human, but the emotional logic underneath was often unstable. The point is not that AI produces “nothing”, but that imitation is not the same as authorship. This distinction remains crucial when discussing both copyright and artistic value.
3. AI can reduce drudgery, but it can also squeeze creative labour
Used carefully, AI can help with repetitive or labour-intensive tasks and free human creators to focus on more creative work. In film production, that may include proofreading, scheduling, minor editing assistance, technical clean-up, and other support functions. From this perspective, AI can be useful. It can improve efficiency without necessarily displacing human artistic judgment.
At the same time, the fear of job displacement is real, and it is already shaping industry conflict. The 2023 Hollywood strikes brought AI directly into collective bargaining, with both SAG-AFTRA and the Writers Guild securing protections relating to digital replicas, consent, and the role of generative AI in film and television production. Those disputes showed that AI is not merely a technological question. It is also a labour question.
This is where the public debate often becomes sharper. If AI is used to remove drudgery, many people may accept it. But if it is used to reduce the bargaining power of writers, actors, editors, or artists, the technology starts to look less like a creative assistant and more like a tool of substitution.
4. Performance enhancement raises a different fairness problem
An important point about acting should also be raised: sometimes AI is not used to replace a performance entirely, but to polish or refine it. A recent example is the controversy around The Brutalist, where AI-assisted voice work was reportedly used to refine Hungarian dialogue. The Academy later confirmed that generative AI would neither automatically help nor harm a film’s Oscar chances, though voters would consider the extent of human creative authorship.[3]
For ordinary cinema-going audiences, some technical enhancement may seem acceptable. If a digital tool helps make dialogue more natural or prevents immersion from being broken by an obviously inaccurate accent, some viewers may regard that as a legitimate form of post-production. But the question changes in competitive contexts, such as film awards and festivals. At that point, the issue is not simply whether the film “works”, but whether one performer has gained an advantage through technological assistance that others did not use.
That concern does not disappear merely because the technology is widely available. Access alone does not settle fairness. Competitions still require rules. Otherwise, what is meant to be a human-centred award may gradually become a contest between software systems rather than between performers.
5. Audiences often notice AI indirectly, not technically
A particularly interesting aspect concerns audience perception. Most ordinary viewers do not know exactly when AI has been used in a film. But they may still notice when something feels “off”. A digitally younger face paired with an older body, or a technically polished voice that no longer quite matches the physical performance, can create a subtle but perceptible mismatch. That is often how audiences detect AI—not through technical knowledge, but through disrupted realism.
This helps explain why the central question may not be whether audiences will one day “prefer AI films”. It may instead be whether filmmakers know how to use AI in the right way. If it supports the storytelling and does not damage the viewing experience, audiences may barely object. If it creates robotic faces, emotional flatness, or obvious visual dissonance, audiences are more likely to reject it.
6. AI is also changing how people judge artistic value
Another concern is that AI is changing not only production, but also public judgment. As AI-generated images, voices, and videos become cleaner and more convincing, people may find it harder to know whether a work was made by a human being at all. That uncertainty can affect how audiences evaluate labour, originality, and artistic worth.
In other words, AI does not merely generate content. It changes the conditions under which people view content. A highly polished human work may now be doubted because it looks “too perfect”. At the same time, many consumers are drawn to whatever is quick, free, and easily accessible. That dynamic may gradually reduce the perceived market value of labour-intensive human art, even where human work remains more meaningful in creative terms.
7. The cultural problem: preserving diversity or flattening it?
One of the strongest themes is culture. AI can, in some contexts, be presented as a tool for preservation. For example, it may help record, model, or circulate minority cultural material that might otherwise receive less public attention. Used with care and with proper community consent, that may have genuine cultural value.
But the opposite danger is also real. AI can flatten diversity by mass-producing already popular styles at very low cost. The recent surge of “Studio Ghibli-style” AI images is a useful example. Reuters reported in April 2025 that the viral use of ChatGPT’s image tool for Ghibli-style images drove record user activity, which shows both the cultural reach and commercial force of style-based AI imitation.[4] What is legally difficult here is that copyright law usually protects specific expressions, not artistic styles as such. Yet the ethical and economic problem remains: when a distinctive style built through years of artistic labour becomes instantly reproducible, its scarcity and value may be weakened.
This does not mean culture can or should be monopolised. But it does mean that AI can create a form of market pressure in which the original artist or studio bears the cost of creating a recognisable aesthetic, while others use software to imitate that aesthetic cheaply and at scale.
8. Who owns AI-generated work?
The copyright question remains unsettled and complex. In the United States, the Copyright Office has maintained that copyright protects human authorship, and that purely AI-generated outputs are not protected simply because they were generated by a machine.[5] In the UK, however, section 9(3) of the Copyright, Designs and Patents Act 1988 contains a distinctive rule for computer-generated works, stating that the author is “the person by whom the arrangements necessary for the creation of the work are undertaken”.[6]
That UK provision is famous, but it does not solve everything. It still leaves a very practical question unanswered: in the age of generative AI, who exactly is the person making the “necessary arrangements”? The user? The developer? The producer? The platform? The answer is rarely obvious. That is why the legal problem is not just whether AI outputs can be protected, but how authorship is attributed in a way that remains meaningful.
9. The key distinction: AI-assisted work and AI-generated work
A particularly useful idea is the distinction between AI-assisted work and AI-generated work. This is a practical, public-facing way of thinking about authorship. If AI is used as a tool to support an already human-led creative process, the result may still be meaningfully described as human work with technological assistance. But if AI takes over the main expressive decisions and the human merely waits for the output, the human claim becomes weaker.
A memorable metaphor captures the point: not all parts of a work matter equally. In legal analysis, qualitative importance often matters more than raw quantity. The question is not only who made “most” of the work in a mechanical sense, but who made the most valuable expressive choices—in other words, who baked the cake, and who added the cherries?
10. Disclosure may be the most practical short-term solution
In the absence of fully settled legislation, one practical safeguard stands out: disclosure. If AI plays a meaningful role in a film’s performance, visuals, or post-production, viewers and competitors should know that. Disclosure does not answer every copyright question, but it promotes transparency, audience autonomy, and fairer competition.
That argument has particular force in relation to film awards and festivals. If AI is used to shape a performance and the audience, judges, or competing artists are not informed, the integrity of the comparison is weakened. Even where no specific legislation yet compels disclosure, there is a strong ethical case for it. At the very least, viewers should be able to make informed choices based on accurate information about what they are watching.
11. What can audiences actually do?
In practice, the most immediate power still lies with studios and platforms. They decide how to use AI and how far to push it. Audiences cannot control those decisions directly. But they are not powerless. They still shape the market through attention, criticism, and ticket sales. If audiences reward films that use AI carefully and reject films that use it badly, studios will notice.
This means that the future of AI in film is not only a matter for lawmakers and technology companies. It is also a matter of public taste and public judgment. Viewers do not need to solve the legal theory of AI authorship to influence the direction of cinema. They can still respond to what feels respectful, convincing, and artistically honest.
Conclusion
AI is now part of filmmaking, and it is unlikely to disappear. The real question is not whether cinema should use technology, but how it should use it. When AI removes drudgery, supports craft, and strengthens storytelling without replacing human creativity, it can be a valuable tool. When it replaces labour, dilutes distinctive styles, obscures authorship, or creates unfair advantages in artistic competition, it becomes much harder to defend.
The most balanced approach is therefore neither blanket enthusiasm nor blanket rejection. It is to insist on a human-centred standard: AI should support artistic vision, not override it. Transparency, fair competition, and respect for creative labour are not obstacles to innovation. They are the conditions under which innovation remains legitimate.
Endnotes
[1] Carolyn Giardina, ‘Paul Walker “Furious 7” VFX’ Variety (15 October 2015).
[2] End Cue, ‘Sunspring’ (9 June 2016).
[3] Academy of Motion Picture Arts and Sciences, ‘Awards Rules and Campaign Promotional Regulations Approved for 98th Oscars®’ (21 April 2025).
[4] Reuters, ‘Ghibli effect: ChatGPT usage hits record after rollout of viral feature’ (1 April 2025).
[5] US Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (17 January 2025); US Copyright Office, ‘Copyright and Artificial Intelligence’ (updated 2025).
[6] Copyright, Designs and Patents Act 1988, s 9(3).
This article is published by China Legal Hub (www.chinalegalhub.com) for informational purposes only and does not constitute legal advice.