
Schooling for the Era of AI Video Actors

By Catherine Maguire

AI video actors are moving from demo to real production and education
They cut costs for routine content but raise rights, ethics, and disclosure duties
Schools must teach rights-aware workflows, when to use AI actors, and where only humans should lead

One number shows how quickly things are changing: OpenAI’s new Sora app reached one million downloads in less than five days, the fastest launch for a consumer AI video tool so far. It puts professional-quality generative video in the hands of anyone with a phone, and a cameo feature lets users “cast” themselves into scenes and share the results online. The message is clear. As creation becomes as easy as a tap, production models change, and the demand for some types of on-screen work shifts. AI video actors are not science fiction or a niche market. They are a reality with real distribution, open legal questions, growing market adoption, and immediate implications for classrooms. Education systems that still view this as a future issue will struggle to prepare students for the jobs—and the challenges—emerging around synthetic performance.

AI video actors are now a production reality

Studios and streaming services have begun testing generative video in real productions. Netflix revealed its first use of generative AI effects in El Eternauta this summer, presenting it as a way to support creators, not just save money. At the same time, the public debut of a completely synthetic “actress,” Tilly Norwood, prompted backlash from the industry and a response from unions. The common theme is evident: AI video actors are becoming part of the workflow and are already a topic of public debate. A major talent agency has noted that tools like Sora present new risks to creator rights. Schools that train performers, editors, and producers must treat these facts as a starting point.

The rules are changing, too. The 2023 SAG-AFTRA TV/Theatrical agreement recognizes two categories essential for classrooms and on-set work: “digital replicas” of real performers and “synthetic performers” that look human but are not based on any specific person. The contract requires notice, consent, and payment when creating and reusing digital replicas, and it sets minimums for the number of human background actors who must be hired before synthetic substitutes can be used. Educators should teach these definitions and rights alongside acting technique and VFX. This knowledge is essential for employability.

Figure 1: Adoption is concentrated: information & communication firms use AI at ~49%, professional services at ~31%, while many service sectors sit near or below 15%. This uneven base explains where AI video actors will scale first.

Another baseline is capability. OpenAI’s latest model focuses on more physical realism, synchronized dialogue, and tighter control. These features, combined with a popular social app, reduce the time and effort previously needed for decent composites and crowd scenes. The initial impact will be felt in low-budget advertising, industrial training, and background performance—exactly where new graduates often start. AI video actors make these processes smoother, so curricula must shift toward teaching skills that match this trend: directing generative models, navigating the prompt-to-performance workflow, understanding rights-aware postproduction, and making editorial decisions about what should not be automated.

What AI video actors mean for learning and work

The evidence shows that the education landscape has moved from hype to tangible signals. A recent study in Scientific Reports found that an AI tutor led to greater learning gains than a typical in-class active-learning session while using less time. Similarly, research comparing human-made and AI-generated teaching videos demonstrates that synthetic instruction can deliver similar or even better learning outcomes under specific design conditions. Another peer-reviewed study indicated higher retention with AI-generated instructional videos, although the ability to transfer knowledge remained similar to human-recorded material. The key takeaway is not that AI will replace teachers. It is that AI video actors can deliver instruction effectively when the teaching methods and transparency are solid.

This shift matters for training in creative industries. If synthetic presenters can handle standard modules—like safety briefings, software onboarding, and compliance training—then human talent can focus on high-touch coaching, feedback, and narrative development. This reflects the current challenges faced by film and media programs. They need to prepare students to lead hybrid teams where AI video actors perform repetitive takes, while humans handle scenes requiring timing, improvisation, and empathy. The learning aim for students should be about judgment, not just skills: knowing when to cast a synthetic stand-in, how to direct it, and when a human presence is necessary because the audience will notice and care.

The overall school system also needs a proactive approach. UNESCO’s guidelines advocate for a human-centered approach and call for strong policies before tools become widespread, while OECD reports indicate that deepfakes are already negatively affecting students and teachers. If AI video actors become commonplace in entertainment and education, then media and AI literacy must be integrated into the curriculum. Students should practice detection skills while also learning about ethics: consent, context, and the differences between parody, instruction, and deception. These topics are not just theoretical discussions; they are practical decisions in classrooms, campus studios, and corporate training settings.

Building responsible pipelines for AI video actors

Education and policy need a common framework: transparency, consent, compensation, and control. The union contract establishes this framework for productions, and schools can adopt similar practices. Require performers to sign clear likeness licenses during student shoots. Log prompts and assets for every synthetic clip. Teach revenue sharing when likeness or voice models add value to a project. This is not merely an ethical matter; it is a standard practice that transitions smoothly into the industry. It also addresses the concerns raised by major Hollywood agencies about creator rights in the age of Sora.
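As a concrete illustration of the “log prompts and assets” practice, here is a minimal sketch of a per-clip provenance record a campus studio could keep. The field names (clip_id, performer_consent_id, disclosed_to_audience, and so on) are hypothetical placeholders, not an industry or SAG-AFTRA schema.

```python
# A minimal sketch of a per-clip provenance record for student shoots.
# Field names and defaults are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json


@dataclass
class SyntheticClipRecord:
    clip_id: str                         # unique ID for the generated clip
    project: str                         # course or production name
    performer_consent_id: Optional[str]  # reference to a signed likeness license, if a real person is replicated
    prompt_text: str                     # the generation prompt used
    model_name: str                      # the generative video model and version
    asset_paths: list[str] = field(default_factory=list)  # source assets fed to the model
    disclosed_to_audience: bool = False                    # was synthetic use disclosed?
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def log_clip(record: SyntheticClipRecord, logfile: str = "synthetic_clips.jsonl") -> None:
    """Append one clip record as a JSON line so the whole shoot stays auditable."""
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example entry from a hypothetical student shoot
log_clip(SyntheticClipRecord(
    clip_id="scene04_take02",
    project="Directing Synthetic Performance Lab",
    performer_consent_id="license-2025-017",
    prompt_text="Wide shot, crowd of commuters, dusk lighting",
    model_name="generic-video-model-v1",
    asset_paths=["plates/station_plate.mp4"],
    disclosed_to_audience=True,
))
```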

Figure 2: Even with synthetic performers, productions must meet higher minimums for human background actors—25 for TV and 85 for features—so training and budgeting can’t ignore live hiring.

Institutions should also create “do-not-clone” registries based on app platform norms. OpenAI has introduced features allowing rights holders to restrict use and manage how their likeness appears on the platform. Schools can adopt similar measures with campus-level registries and honor codes. Simultaneously, educators should teach technical controls for safer synthesis: watermarking, content credentials, and verification workflows that track media from creation to final product. In the short term, these steps will be more effective in reducing harm than waiting for slow-moving regulations, while providing students with practical experience in compliance.
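To make the “track media from creation to final product” idea tangible, the sketch below records and re-checks a plain SHA-256 fingerprint of a file at each workflow step. It is not the C2PA / Content Credentials specification or any vendor API, only an assumed, simplified illustration of the verification habit students should practice.

```python
# A minimal sketch of a creation-to-delivery verification workflow using plain
# file hashes and a JSON manifest. Illustrative only; not a content-credentials standard.
import hashlib
import json
from pathlib import Path


def fingerprint(path: str) -> str:
    """Return a SHA-256 hash of the file contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def record_step(manifest_path: str, media_path: str, step: str) -> None:
    """Append the current hash of a media file to a simple provenance manifest."""
    manifest = json.loads(Path(manifest_path).read_text()) if Path(manifest_path).exists() else []
    manifest.append({"step": step, "file": media_path, "sha256": fingerprint(media_path)})
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))


def verify_unchanged(manifest_path: str, media_path: str) -> bool:
    """Check whether the file still matches the most recently recorded hash."""
    manifest = json.loads(Path(manifest_path).read_text())
    last = [m for m in manifest if m["file"] == media_path][-1]
    return last["sha256"] == fingerprint(media_path)
```

In practice a class would call record_step after generation, after editing, and before delivery, then run verify_unchanged during review to confirm nothing was swapped along the way.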

K-12 schools and universities should integrate this with media literacy goals. Use synthetic clips to conduct live evaluations: What giveaways hint at AI video actors? What disclosures seem adequate? At what point does it become unethical? UNESCO and OECD frameworks can support such initiatives. The goal is not to turn everyone into filmmakers but to equip the next generation of citizens and professionals to discern intent and consent when any face on a screen might be a model, a remix, or a real person.

Where humans still lead—and how schools should teach it

There is no substantial evidence that AI video actors can fully replace human performers in complex drama. In fact, learners often find human feedback more beneficial than AI feedback when it comes to nuances and relationships. Several reviews emphasize that good design, not novelty, is what drives impact. This should inform the curriculum. Programs should restrict the use of synthetic tools to areas where they help with repetition and scale, while focusing on live direction, teamwork, and the ethics of representation in high-stakes storytelling. We need to maintain the emphasis on human strengths: meaning, memory, and trust.

Film and media schools can implement these changes within a year. First, introduce a “Synthetic Performance” series that combines acting classes with generative video labs. Students will learn to co-direct AI video actors, set prompts for pacing and eye lines, and blend scenes with human actors. Second, require a rights and safety practicum covering likeness licensing, storage, watermarking, and on-set transparency. Third, update capstone projects to include one that demonstrates restraint: a scenario where the team opts not to use AI because a human moment carries greater weight. Finally, invite unions, studios, and AI companies to participate in critique days so that graduates can enter the market familiar with its changing norms. This approach keeps programs focused on artistry while ensuring industry relevance.

The million-download week was not a fluke; it marked the beginning of a significant shift. AI video actors will not eliminate the craft of acting or the role of teachers. However, they will streamline many types of production and learning, potentially changing jobs, finances, and practices unless schools take proactive steps. The correct response is not to resist or yield. Instead, it's to innovate. Educators can create courses that give students the control tools they need—rights-aware workflows, prompt-to-performance direction, and clear standards for disclosure. Administrators can develop policies that reflect union protections and platform regulations. Policymakers can encourage trustworthy media rather than only punishing misuse. If we do this, we can teach the next generation to collaborate with AI video actors where appropriate while recognizing when a human touch is essential. The tools are here. We should respond with thoughtful judgment.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Authors Guild. (2024). SAG-AFTRA agreement establishes important AI safeguards.
Deadline. (2023). Full SAG-AFTRA deal summary released: Read it here.
Guardian. (2025, Jul. 18). Netflix uses generative AI in one of its shows for first time.
Guardian. (2025, Oct. 1). Tilly Norwood: how scared should we be of the viral AI ‘actor’?
Hollywood Reporter. (2024, Apr. 10). How SAG-AFTRA’s AI road map works in practice.
Hollywood Reporter. (2025, Sep. 29). Creator of AI actress Tilly Norwood responds to backlash.
Loeb & Loeb. (2023). Artificial intelligence terms of SAG-AFTRA TV/Theatrical contract (summary).
Nature (Scientific Reports). (2025). AI tutoring outperforms in-class active learning.
OpenAI. (2024, Dec. 9). Sora is here.
OpenAI. (2025, Sep. 30). Sora 2 is here.
OpenAI Help Center. (2025, Oct. 2). Getting started with the Sora app.
Perkins Coie. (2024). Generative AI in movies and TV: How the 2023 SAG-AFTRA and WGA contracts address generative AI.
Reuters. (2025, Oct. 1). OpenAI launches new AI video app spun from copyrighted content.
Reuters. (2025, Oct. 9). CAA says OpenAI’s Sora poses risk to creators’ rights.
Reuters. (2025, Oct. 2). Hollywood performers union condemns AI-generated “actress”.
SAG-AFTRA. (2023). TV/Theatrical 2023 summary agreement (AI and digital replicas).
Scientific American. (2025, Oct.). OpenAI’s new Sora app lets users generate AI videos—and star in them.
UNESCO. (2023, updated 2025). Guidance for generative AI in education and research.
UNESCO. (2025, Oct. 1). Deepfakes and the crisis of knowing.
Verge, The. (2025, Oct.). Sora hits one million downloads in less than five days.
Wiley (BJET). (2024). Assessing student perceptions and use of instructor versus AI feedback.
MDPI Education. (2025). From recorded to AI-generated instructional videos: Effects on retention and transfer.
Computers & Education. (2025). Comparing human-made and AI-generated teaching videos.

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

Sector-Specific AI: Why Finance Isn’t ChatGPT—and Why That Matters

By Ethan McGowan

AI works best when built for each sector’s data and goals
Finance needs domain-grounded models and risk-based metrics, not generic chatbots
Teach, buy, and regulate using sector-specific measures

The most crucial figure in today’s AI discussion isn’t a high parameter count. It is a divide. In 2024, only 13.5% of EU firms reported using AI at all. Yet nearly half of information and communications firms had adopted it, while most other sectors were below one in six. The pattern is revealing: adoption occurs where data and tasks match the tools, and stalls where they do not. This divide challenges the idea that "AI equals ChatGPT." Sector-specific AI relies on the domain’s data, error structure, incentives, and key metrics. Image and text tasks benefit from pattern recognition on clean, stable datasets. In contrast, financial time series are challenging because of noise, regime shifts, and the higher cost of mistakes. If we continue to think of AI as a single entity, we risk teaching students the wrong skills, buying the wrong systems, and mismanaging risk. The answer isn’t more hype; it’s sector-specific AI built around the needs of each field.

Sector-specific AI is a practice, not a product

Sector-specific AI isn’t just a chatbot dressed up; it’s a group of probabilistic systems tailored to specific contexts. In some areas, these systems already outperform previous top standards. Take weather forecasting. In peer-reviewed studies, DeepMind’s GraphCast beat the leading European physics model on 90% of 1,380 verification targets for forecasts ranging from three to ten days, delivering results in under a minute. This achievement stemmed from training on decades of structured reanalysis data, which had well-understood physical constraints and stable measurement processes. In short, the domain provided the model with valuable signals. This lesson applies broadly: when data are plentiful, consistent, and connected to governing dynamics, the benefits of learning are significant and rapid. That’s the essence of sector-specific AI.

The adoption rates illustrate the same story. Across the European Union, information and communications firms led AI use at 48.7% in 2024, followed by professional, scientific, and technical services at 30.5%. Adoption rates were lower in other sectors, highlighting differences in how well tasks match available data. UK business surveys present a similar picture: about 15% of firms reported using AI in late 2024, with adoption increasing alongside firm size and digital readiness. Global executive surveys show another layer: many organizations claim to be "using AI" somewhere, but the actual use is primarily in IT and sales/marketing, not in the complex, data-scarce areas of each sector. Sector-specific AI doesn’t spread like a single product; it expands where data, incentives, and measurement are aligned.

Why financial time series breaks the hype

Finance is where the idea that “AI = ChatGPT” causes the most damage. Equity and foreign exchange returns exhibit heavy tails, clustered volatility, and frequent regime changes—factors that render simple pattern-finding unreliable. The signal-to-noise ratio is low, labels change because of market perceptions (prices move because others believe they will), and feedback loops can reverse correlations. Recent studies clarify this issue: careful analyses find that language-model features generally do not beat strong, purpose-built time-series baselines. In many cases, simpler attention models or traditional structures perform just as well or better at a much lower cost. This isn’t a critique of AI; it’s a reminder that sector-specific AI for markets must begin with the data-generating process, not the latest general-purpose model.
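A simple way to see what “beating a strong baseline” means in practice is to score any candidate model against a seasonal-naive forecast on held-out data. The sketch below uses synthetic data and a stand-in for the fancy model’s predictions; both are assumptions for illustration, not results from any cited study.

```python
# A toy baseline check: compare a candidate model against a seasonal-naive
# forecast on held-out data. The series and "model" errors are placeholders.
import numpy as np


def seasonal_naive_forecast(y: np.ndarray, season: int) -> np.ndarray:
    """Predict each point with the value observed one season earlier."""
    return y[:-season]  # aligned against y[season:]


def mae(actual: np.ndarray, predicted: np.ndarray) -> float:
    return float(np.mean(np.abs(actual - predicted)))


rng = np.random.default_rng(0)
y = np.sin(np.arange(300) * 2 * np.pi / 12) + rng.normal(0, 0.5, 300)  # noisy seasonal series

season = 12
actual = y[season:]
baseline_pred = seasonal_naive_forecast(y, season)
model_pred = actual + rng.normal(0, 0.55, actual.size)  # stand-in for a complex model's errors

# Skill relative to the baseline: values below 1.0 mean the model actually adds value.
relative_mae = mae(actual, model_pred) / mae(actual, baseline_pred)
print(f"MAE ratio vs seasonal naive: {relative_mae:.2f}")
```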

What works in weather or image recognition doesn’t translate directly to forecasting returns. Weather models benefit from physics-consistent data with dense, continuous coverage. Corporate earnings, yields, and prices fluctuate due to policy changes, accounting adjustments, liquidity pressures, and narratives that affect risk pricing. Advanced models in finance can assist by nowcasting macroeconomic series from mixed data, classifying the tone of news, and identifying structural breaks, but the objectives differ. A trading desk cares less about root mean square error on tomorrow’s price and more about drawdowns, turnover, and tail risk during stress periods. This requires sector-specific AI that combines causal structure, reliable features, and stringent controls for backtesting and live deployment. If the benchmark is whether it surpasses a well-tuned seasonal naive or a hedged factor model after costs, many flashy systems fall short. That isn’t a failure of AI; it reflects the nature of finance.
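For concreteness, the sketch below computes the desk-level quantities named here (maximum drawdown, turnover, and a Sharpe ratio net of costs) from placeholder return and position series. The 10-basis-point cost assumption and the simulated data are illustrative only, not calibrated to any real market.

```python
# A minimal sketch of desk-level evaluation metrics computed from a daily
# strategy return series and position weights. All inputs are placeholders.
import numpy as np


def max_drawdown(returns: np.ndarray) -> float:
    """Worst peak-to-trough loss of the cumulative return path."""
    wealth = np.cumprod(1 + returns)
    peaks = np.maximum.accumulate(wealth)
    return float(np.min(wealth / peaks - 1))


def annualized_sharpe(returns: np.ndarray, periods: int = 252) -> float:
    return float(np.mean(returns) / np.std(returns) * np.sqrt(periods))


def turnover(weights: np.ndarray) -> float:
    """Average daily fraction of the book traded."""
    return float(np.mean(np.sum(np.abs(np.diff(weights, axis=0)), axis=1)))


rng = np.random.default_rng(1)
gross = rng.normal(0.0004, 0.01, 750)       # placeholder daily strategy returns
weights = rng.uniform(-1, 1, (750, 5))      # placeholder positions in 5 assets
cost_per_turnover = 0.001                   # assumed 10 bps cost per unit of turnover
daily_turnover = np.r_[0, np.sum(np.abs(np.diff(weights, axis=0)), axis=1)]
net = gross - cost_per_turnover * daily_turnover

print(f"Max drawdown:       {max_drawdown(net):.2%}")
print(f"Sharpe after costs: {annualized_sharpe(net):.2f}")
print(f"Avg daily turnover: {turnover(weights):.2f}")
```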

Figure 1: AI user adoption (ChatGPT as proxy) has been exponentially faster than that of earlier digital platforms, underscoring that general-purpose tools spread faster than sector-specific ones but also face sharper saturation and adaptation limits.

Sector-specific AI needs local metrics, not generic demos

The quickest benefits from sector-specific AI come when evaluations align with the job to be done. In meteorology, the goal is timely, location-specific forecasts on variables that impact operations; benchmarks reward physical consistency and accuracy across different times. That’s why the GraphCast work—and hybrid systems that combine it with ensemble methods—are significant to grid planners and disaster response teams, not just to machine learning experts. The technique, data, and impact metric all align. In health imaging, sensitivity and specificity, depending on the condition and scanner type, matter more than polished fluency. In manufacturing, defects per million and scrap rates determine success, not flashy presentations. Sectors that establish these local metrics see real compounding benefits from AI.
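As a small illustration of what a “local metric” looks like in code, the sketch below computes per-condition sensitivity and specificity from confusion-matrix counts; the counts themselves are made-up placeholders.

```python
# Local metrics for a health-imaging classifier: report sensitivity and
# specificity per condition rather than a single generic benchmark score.
def sensitivity(tp: int, fn: int) -> float:
    """Share of true cases the model catches (recall on positives)."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """Share of healthy cases the model correctly clears."""
    return tn / (tn + fp)


# Placeholder counts for one condition on one scanner type
tp, fn, tn, fp = 92, 8, 870, 30
print(f"Sensitivity: {sensitivity(tp, fn):.2%}")   # 92.00%
print(f"Specificity: {specificity(tn, fp):.2%}")   # 96.67%
```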

Figure 2: The structure of AI research reveals distinct architectures—computer vision, NLP, robotics, and time-series analysis—each producing different error behaviors and performance ceilings. Policy must recognize these domain boundaries to design realistic AI adoption strategies.

Financial markets require an even stricter approach. A credible sector-specific AI roadmap starts with governance: controlled data pipelines, baseline econometrics, and pre-registered backtesting protocols that penalize overfitting. It then focuses evaluations on finance-specific outcomes: probability of failure under stress, realized Sharpe ratio after fees and slippage, turnover-adjusted alpha, and worst-case liquidity scenarios. This viewpoint explains the gap between firms stating "we use AI somewhere" and those saying "we trust an AI signal in real time." Business surveys show widespread experimentation—78% of firms reported AI use in at least one function in 2025—but not widespread transformation. In finance, trust will increase when developers acknowledge the noise and start with the right benchmarks. Sector-specific AI works when it optimizes for the metrics that the sector already uses.
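A hedged sketch of such a protocol appears below: the evaluation windows are fixed in advance, each window’s model sees only earlier data, and the summary score is shrunk by a crude penalty that grows with the number of configurations tried. The penalty rule and the simulated series are illustrative assumptions, not a standard formula or a real backtest.

```python
# A minimal walk-forward protocol sketch: pre-registered windows, no look-ahead,
# and a crude overfitting penalty. All numbers and rules here are illustrative.
import numpy as np


def walk_forward_windows(n: int, train: int, test: int):
    """Yield (train_slice, test_slice) index pairs fixed before any model is run."""
    start = 0
    while start + train + test <= n:
        yield slice(start, start + train), slice(start + train, start + train + test)
        start += test


def penalized_score(out_of_sample_sharpes: list[float], n_trials: int) -> float:
    """Shrink the mean out-of-sample Sharpe by a term that grows with trials tried."""
    mean_sharpe = float(np.mean(out_of_sample_sharpes))
    penalty = float(np.sqrt(np.log(max(n_trials, 1)) / max(len(out_of_sample_sharpes), 1)))
    return mean_sharpe - penalty


# Usage: evaluate one pre-registered configuration on every window, never peeking ahead.
returns = np.random.default_rng(2).normal(0.0003, 0.01, 2000)   # placeholder daily series
sharpes = []
for train_idx, test_idx in walk_forward_windows(len(returns), train=500, test=125):
    # fit_model(returns[train_idx]) would go here; the raw series stands in for its output
    oos = returns[test_idx]
    sharpes.append(float(np.mean(oos) / np.std(oos) * np.sqrt(252)))

print(f"Penalized score over {len(sharpes)} windows: {penalized_score(sharpes, n_trials=20):.2f}")
```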

What educators, administrators, and policymakers should do next

Educators should teach sector-specific AI as a skill grounded in domain data. This begins with projects that use real sector datasets and metrics. A finance capstone should connect students with anonymized tick or macro datasets, run competitions against seasonal naïve, ARIMA-GARCH, and transformer models, and assess based on out-of-sample risk and costs, rather than classroom-friendly accuracy. A public-sector project should simulate casework prioritization or fraud detection, focusing on fairness, false-positive costs, and auditability from the start. An energy systems project should optimize fleet dispatch based on weather forecasts and price volatility. The core message is clear: the sector defines the loss function, and the model adapts accordingly. OECD analysis indicates that sectoral AI dynamics differ significantly; curricula should reflect that reality, rather than compress it into a single "AI course."

Administrators should also invest in sector metrics. In universities and research hospitals, purchases should require direct validation on outcomes important to the unit—such as sensitivity on critical findings and workload effects in radiology—rather than broad natural language processing benchmarks. In business schools and engineering programs, computing resources should be tied to reproducible methods and solid baselines, rather than just parameter counts. In finance labs, live-trading environments should be separated from model development and subject to strict change controls. For many institutions, the first step is straightforward: improve data management. UK and EU business surveys show that AI adoption increases with digital maturity; companies with good data management and security gain more tangible benefits from AI than those that skip the basics. Sector-specific AI relies on clean data.

Policymakers should create regulations based on sector needs, not a one-size-fits-all approach. Generic rules regarding "AI systems" overlook the actual risk landscape. Financial markets require model-risk management, record-keeping, and post-trade auditing. Health care needs documentation of training groups, site-to-site testing, and commitments to real-world performance. Critical infrastructure needs stress tests for rare events. International organizations are starting to recognize these differences as they assess sectoral AI use in manufacturing, finance, and government; the most effective frameworks relate controls to impacts and error costs rather than vague capability labels. In short, regulate the interface between model errors and human welfare where it occurs.

The counterargument is familiar: if models improve continuously, won’t ChatGPT-like systems soon handle everything? Progress is real, and overlaps will occur. However, the immediate path to impact relies on aligning with sector needs. Even in areas with significant breakthroughs, improvements stem from tailoring architecture and data to the task. Weather AI’s success wasn’t about chatbots; it was about domain knowledge. Finance will eventually reach its own breakthroughs, but they will involve effective risk management and improved hedging, rather than a general model extracting alpha from random noise. Since incentives vary by sector, the adoption curve will also differ. The right approach is not to wait for a single, universal solution; it’s to build according to the unique needs of each domain now.

The 13.5% overall adoption compared to nearly half in information and communications was never just a number. It illustrates how sector-specific AI spreads. When data are organized, outcomes are clear, and metrics are relevant, AI evolves quickly. In contrast, when data are chaotic, outcomes are uneven, and metrics are flawed, it stalls or even fails. Finance serves as a warning: time series are complex, rich in feedback, and resistant to pattern-finding that lacks a foundation in risk economics. The takeaway isn’t to lower expectations. It’s to teach, invest, build, and regulate as if AI encompasses many things—because it does. If educators prepare builders who start with the data-generating process, administrators invest against sector metrics, and policymakers regulate where model errors meet people and capital, we will get the benefits we want alongside the safeguards we need. This is the route out of the "AI = ChatGPT" trap and toward effective sector-specific AI based on the needs of each field.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

DeepMind (2023). GraphCast: AI model for faster and more accurate global weather forecasting. Retrieved November 14, 2023, from DeepMind blog; see also Lam, R., et al. “Learning skillful medium-range global weather forecasting,” Science, 382(6673), eadi2336.
Eurostat (2025). Use of artificial intelligence in enterprises—Statistics Explained (data extracted January 2025). European Commission.
McKinsey & Company (2024). The state of AI in early 2024: Gen AI adoption. McKinsey Global Survey.
McKinsey & Company (2025). The State of AI: Global survey. (March 12, 2025).
OECD (2025). “How do different sectors engage with AI?” OECD Blog, February 13, 2025.
OECD (2025). Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions (Media advisory and report). September 18, 2025.
Office for National Statistics—UK (2024). Business insights and impact on the UK economy, October 3 2024 (PDF).
Office for National Statistics—UK (2025). Management practices and the adoption of technology and artificial intelligence in UK firms, 2023, March 24, 2025.
Tan, M., Merrill, M. A., Gupta, V., Althoff, T., & Hartvigsen, T. (2024). “Are Language Models Actually Useful for Time Series Forecasting?” NeurIPS 2024 (conference paper and arXiv).
Xu, D. J., & Kim, D. J. (2025). “Modeling Stylized Facts in FX Markets with FINGAN-BiLSTM.” Entropy, 27(6), 635 (discussion of volatility clustering).
Yan, Z., et al. (2025). “Evaluation of precipitation forecasting based on GraphCast.” Scientific Reports (Nature).

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.