
Mind Captioning in Education: Pattern Recognition Needs Policy, Not Hype


By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.


Mind captioning maps visual brain patterns to text, but it’s pattern recognition—not mind reading
In schools, keep it assistive only, with calibration, human-in-the-loop checks, and clear error budgets
Adopt strict governance first: informed consent, mental-privacy protections, and audited model cards

A machine that can guess your memories should make every educator think twice. In November 2025, researchers revealed that a non-invasive "mind captioning" system could identify which of 100 silent video clips a person was recalling nearly 40% of the time for some individuals, even when language areas were inactive. That is far above chance, which for 100 options is 1%. The model learned to connect patterns of visual brain activity to the meaning of scenes, then generated sentences describing the memories. It isn’t telepathy, and it isn’t perfect. But it shows clearly that semantic structure in the brain can be captured and translated into text. If this ability finds its way into school tools—ostensibly to help non-speaking students, assess comprehension, or personalize learning—it will come with power, risk, and the temptation to exaggerate its capabilities. Our systems excel at pattern recognition; our policies lag behind. Closing that gap will take engineers and neuroscientists building mind captioning together, with educators setting its limits.

Mind Captioning: Pattern Recognition, Not Mind Reading

The core idea of mind captioning is straightforward and transformative: the system maps visual representations in the brain to meaning, and meaning to sentences. In lab settings, fMRI signals from a few participants watching or recalling short clips were turned into text that retained who did what to whom, rather than a mere bag of words. This matters for classrooms because a tool that can retrieve relational meaning could, in theory, help students who cannot speak or who struggle to express what they saw. But the key term is "pattern recognition," not "mind reading." Models infer likely descriptions from brain activity linked to semantics learned from video captions. They do not pull private thoughts at will; they need consent, calibration, and context. Even the teams behind this research stress those limits and the ethical stakes for mental privacy. We should adopt the same caution and draft our rules accordingly.

Performance deserves the same scrutiny, and related advances show both the progress and its limits. A 2023 study reconstructed the meaning of continuous language from non-invasive fMRI, showing that semantic content—rather than exact words—can be decoded while participants listen to stories. Decoding what someone produces is harder still. In 2025, a non-invasive “Brain2Qwerty” system translated brain activity while people typed memorized sentences, achieving an average character-level accuracy of around 68% with MEG and up to 81% for the best participants; EEG was far less effective.

Figure 1: Above-chance decoding: ~50% when viewing clips; up to ~40% top-1 when recalling from 100 options (proficient subjects).

In contrast, invasive implants for inner speech have reported word or sentence accuracy in the 70% range in early trials, highlighting gaps in portability and safety that still need to be addressed. These figures do not support grading or monitoring. They advocate for narrow, assistive use, open error budgets, and interdisciplinary oversight wherever mind captioning intersects with learning.
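To keep those numbers in perspective, it helps to quantify "above chance." Here is a minimal sketch in Python, assuming a hypothetical 100-trial recall session; the study's actual trial counts differ:

```python
# Minimal sketch: how far above chance is ~40% top-1 identification
# among 100 candidate clips? Trial counts here are illustrative, not
# taken from the study.
import math

def binom_tail(k: int, n: int, p: float) -> float:
    """Exact P(X >= k) for X ~ Binomial(n, p), standard library only."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

n_candidates = 100             # candidate clips per recall trial
chance = 1 / n_candidates      # 1% top-1 accuracy by pure guessing
n_trials, n_correct = 100, 40  # hypothetical session: 40% observed

print(f"chance = {chance:.0%}, observed = {n_correct / n_trials:.0%}")
print(f"P(>= {n_correct} correct by luck) = {binom_tail(n_correct, n_trials, chance):.1e}")
```

Against a 1% chance rate, 40 correct out of 100 is overwhelming evidence of signal; yet the same numbers mean the decoder is wrong most of the time, which is exactly why it cannot grade or monitor anyone.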

Mind Captioning in Education: Accuracy, Error Budgets, and Use Cases

If mind captioning enters schools, its first appropriate role is assistive communication. Imagine a student with aphasia watching a short clip and having a system suggest a simple sentence, which the student can then confirm or edit. That process puts the student, not the model, in control. It also aligns with the existing science: mind captioning works best with seen or remembered visual content that has a clear structure, is presented under controlled timing, and is calibrated to the individual. The equipment involved (fMRI today, and MEG that is promising but not yet wearable) keeps the technology in clinics and research labs for now. Claims that it can evaluate reading comprehension on the spot are premature. Even fast EEG setups that decode rapid visual events have limited accuracy, a reminder that “instant” in headlines often means seconds to minutes of processing with many training trials behind the scenes. Tools should never be deployed beyond the error margins within which their educational use is demonstrably safe.
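What that confirm-or-edit loop means can be made concrete. The sketch below is hypothetical (nothing in it comes from a shipped product), but it captures the workflow just described: the decoder only suggests, the student decides, and low-confidence output is suppressed rather than guessed:

```python
# Hypothetical sketch of the confirm-or-edit loop described above.
# Names and thresholds are illustrative; nothing here comes from a
# shipped product. The decoder only suggests; the student decides.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str          # decoder's proposed sentence
    confidence: float  # decoder's own probability estimate, 0..1

def assistive_loop(suggestion: Suggestion, floor: float = 0.6) -> str | None:
    # Below the confidence floor, stay silent rather than guess.
    if suggestion.confidence < floor:
        return None
    choice = input(f'Suggested: "{suggestion.text}"  [a]ccept/[e]dit/[r]eject: ')
    if choice == "a":
        return suggestion.text
    if choice == "e":
        return input("Your wording: ")
    return None  # rejected: nothing is stored or shown to others
```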

Figure 2: MEG non-invasive decoding averages ~68% character accuracy; best participants ~81%; EEG averages ~33%—keep uses assistive.

Administrators should demand a clear error budget tied to each application. If the goal is to generate a one-sentence summary of a video the class just watched, what rate of false descriptions is acceptable before trust is broken—5%? 10%? For crucial decisions the answer is zero, which means no high-stakes applications at all. Developers must disclose how long calibration takes per student, how long sessions last, and whether performance holds across days without retraining. Results from fMRI may not transfer to EEG in a school environment, and semantic decoders trained on cinematic clips may not work as well with hand-drawn diagrams. A reasonable standard is “human-in-the-loop or not at all”: the model suggests language, the learner approves it, and the teacher oversees. Where timing or circumstances make that loop infeasible, deployment should wait. Mind captioning should be treated as a tool for expression, not a silent judge of thought. That stance safeguards students and keeps the technology focused on real needs.
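One way a district could operationalize an error budget is to gate approval on the upper confidence bound of a pilot's observed error rate, not the point estimate. A minimal sketch, with illustrative numbers and a standard Wilson score interval:

```python
# Sketch of an error-budget gate: approve a use case only if the
# *upper* confidence bound on the pilot's false-description rate
# stays inside the stated budget. All numbers are illustrative.
import math

def wilson_upper(errors: int, n: int, z: float = 1.96) -> float:
    """Upper bound of the 95% Wilson score interval for an error rate."""
    p = errors / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center + margin

def within_budget(errors: int, n: int, budget: float) -> bool:
    return wilson_upper(errors, n) <= budget

# 6 false descriptions in 200 supervised trials, against a 5% budget:
# observed rate is 3%, but the upper bound is ~6.4%, so the gate fails.
print(within_budget(6, 200, budget=0.05))  # False
```

The asymmetry is the point: a pilot that looks like 3% error can still fail a 5% budget when the sample is too small to be confident.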

Governance for Mind Captioning and Neural Data in Schools

Governance must keep pace with the technology. In 2024, Colorado passed the first U.S. law to protect consumer “neural data,” classifying it as sensitive information and requiring consent—an essential precedent for any district procurement. In 2025, the U.S. Senate introduced the MIND Act, directing the Federal Trade Commission to study neural data governance and propose a national framework. UNESCO adopted global standards for neurotechnology in November 2025, emphasizing mental privacy and freedom of thought. The OECD’s 2025 Neurotechnology Toolkit pushes for flexible regulation aligned with human rights. Together, these create a solid foundation: schools should treat neural signals like biometric health data, not as ordinary data trails. Contracts must prohibit secondary use, require deletion by default, and draw a clear line between assistive communication and any form of behavior scoring or monitoring. Without that distinction, social acceptance of mind captioning in education will vanish.

Districts can put this into practice. Require on-device or on-premise processing wherever possible; if cloud processing is necessary, mandate encryption in storage and in transit, strict purpose limitation, and independent audits. Insist on plain-language consent forms for students and guardians that teachers can explain. Request model cards disclosing training data types, subject counts, calibration needs, demographic performance differences, and failure modes, along with data sheets for each hardware sensor. Write kill-switches and sunset clauses into contracts, with mandatory timelines for data deletion. Specify that any output used in learning must be verifiable by the student. Finally, run red-team testing before any pilot: adversarial scenarios that probe whether the system infers sensitive traits, leaks private associations, or produces confidently wrong sentences. Policy cannot remain words on a page. Procurement is policy in action; use it to establish a safe foundation for mind captioning.
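Procurement teams could even encode the disclosure list as a machine-checkable schema, so that a missing field blocks a pilot automatically. The field names below are illustrative, not an established standard:

```python
# Hypothetical procurement check: block a vendor pilot unless the
# model card discloses every field the contract requires. The field
# names are illustrative, not an established standard.
from dataclasses import dataclass, fields

@dataclass
class ModelCard:
    training_data_types: str | None = None     # e.g. "fMRI + captioned video"
    n_training_subjects: int | None = None
    calibration_minutes: int | None = None      # per-student calibration time
    demographic_gaps: str | None = None         # performance differences
    failure_modes: str | None = None
    deletion_deadline_days: int | None = None   # contractual deletion timeline

def missing_disclosures(card: ModelCard) -> list[str]:
    """Any None field is an undisclosed contractual requirement."""
    return [f.name for f in fields(card) if getattr(card, f.name) is None]

card = ModelCard(training_data_types="fMRI + captioned video",
                 n_training_subjects=6)
gaps = missing_disclosures(card)
print("block pilot, missing:" if gaps else "disclosures complete:", gaps)
```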

Bridging Engineers and Neuroscience: A Curriculum for Evidence

The quickest way for mind captioning to fail in schools is for engineers to ship a product without collaborating closely with neuroscientists, or for neuroscientists to overlook classroom realities. We can address this by making co-production standard practice. Teacher-education programs should include a short, practical module on neural measurement: what fMRI, MEG, and EEG actually capture; why signal-to-noise ratios and subject variability matter; and how calibration affects performance. Recent guidance on brain-computer interface design is straightforward: neuroscience principles must inform engineering decisions and user protocols. That means setting clear hypotheses, pre-registering where possible, and attending to how task design shapes the decodable signal. The research frontier is moving—visual representation decoders report high top-5 accuracy in lab tasks, and semantic decoders now extract meaning from stories—but classroom tasks are messier. A shared curriculum across education, neuroscience, and human-computer interaction can keep expectations realistic and experiments humane.

Critiques deserve responses. One critique suggests the vision is exaggerated: Meta-funded non-invasive decoders achieve only "alphabet-level" accuracy. That misinterprets the trend. The latest non-invasive text-production study using MEG achieves about 68% character accuracy on average and 81% for the top subjects; invasive systems reach similar accuracy for inner speech but at the cost of surgery. Another critique states that privacy risks overshadow benefits. Governance can help here: Colorado’s law, the proposed MIND Act, and UNESCO’s standards allow schools to establish clear boundaries. A third critique claims that decoding structure from the visual system does not equate to understanding. Agreed. That is why we restrict educational use to assistive tasks that require human confirmation and why we measure gains in agency rather than relying on mind-reading theatrics. Meanwhile, the evidence base is expanding: the mind captioning study shows solid mapping from visual meaning to text and, significantly, generalizes to memory without depending on language regions. Use that progress. Do not oversell it.

That "40% from 100" figure is our guiding principle and our cautionary tale. It shows that mind captioning can recover structured meaning from non-invasive brain data at rates well above chance. It also illustrates that the system is imperfect and probabilistic, not a peek into private thoughts. Schools should implement technology only where it is valuable and limited: for assistive purposes with consent, for tasks aligned with the science, and with a human involved. The remaining focus is governance. We should treat neural data as sensitive by default, enforce deletion and purpose limits, and expect public model cards and red-team results before any pilot enters a classroom. We should also support training that allows teachers, engineers, and neuroscientists to communicate effectively. Mind captioning will tempt us to rush the process. We must resist that. Pattern recognition is impressive. Educational policy must be even more careful, protective, and focused on dignity.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Congress of the United States. (2025). Management of Individuals’ Neural Data Act of 2025 (MIND Act). 119th Congress, S.2925. Retrieved from Congress.gov.
Ferrante, M., et al. (2024). Decoding visual brain representations from biomedical signals. Computer Methods and Programs in Biomedicine.
The Guardian. (2025, November 6). UNESCO adopts global standards on the ‘wild west’ field of neurotechnology.
Horikawa, T. (2025). Mind captioning: Evolving descriptive text of mental content from human brain activity. Science Advances.
Live Science. (2025). New brain implant can decode a person’s inner monologue.
Neuroscience News. (2025, November 7). Brain decoder translates visual thoughts into text.
OECD. (2025, July). Neurotechnology Toolkit. Paris: OECD.
Rehman, M., et al. (2024). Decoding brain signals from rapid-event EEG for visual stimuli. Frontiers in Neuroscience.
Reuters. (2024, April 18). First law protecting consumers’ brainwaves signed by Colorado governor.
Scientific American / Nature. (2025, November 6). AI Decodes Visual Brain Activity—and Writes Captions for It.
Tang, J., et al. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience.
Yang, H., et al. (2025). Guiding principles and considerations for designing a well-grounded brain-computer interface. Frontiers in Neuroscience.
Zhang, A., et al. (2025). Brain-to-Text Decoding: A Non-invasive Approach via Typing (Brain2Qwerty).


AI Finance Talent Has Ended the Old Apprenticeship. Education Must Build the New One


By David O'Neill

David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.


Internal AI now performs junior work, collapsing the old apprenticeship
Education must build AI finance talent—aim, audit, and explain models
Policy should fund governance sandboxes to grow trusted hybrid roles

The most meaningful metric in finance this year isn’t a trading multiple. It’s that internal, company-wide AI tools now handle most of the entry-level work once done by interns and junior analysts. Goldman Sachs has rolled out a generative-AI assistant across the firm, with around 10,000 staff already using it for document synthesis and analysis. Morgan Stanley reports near-universal adoption of an AI assistant among advisor teams. Citigroup says its internal AI saves about 100,000 developer hours each week. In the UK, three-quarters of financial firms use AI, with another tenth planning to follow within three years. In short, the foundation of financial work is now automated in-house. This change does not simply mean fewer jobs; it creates a new ladder. It also compels schools, regulators, and employers to rethink how people learn and advance. At the center of that redesign is AI finance talent: workers who can quickly aim, audit, and explain AI.

AI Finance Talent and the End of the Old Apprenticeship

The traditional apprenticeship in banking was straightforward: new hires built models, cleaned data, drafted pitchbooks, and summarized filings until they could be trusted with judgment. Today, internal platforms handle much of that initial work. FactSet offers an AI Pitch Creator that saves junior bankers hours each week. UBS uses AI analyst avatars to produce research videos at scale, freeing analysts to focus on insight. UniCredit’s DealSync uses AI to surface hundreds of M&A leads without increasing headcount. Even when firms do not cut roles, the work shifts up the value chain. Entry-level tasks are shrinking; expectations are growing. The new entry-level requirement is not “clean the data” but “frame the decision.” That is why the market premium for AI skills is real, not just talk: across sectors, jobs requiring AI skills offered a salary premium of about 28% in 2025, and over half of such postings now sit outside core IT. These roles are hybrid—finance-first, AI-savvy.

This does not point to a decline in finance jobs. U.S. labor projections still anticipate growth for personal financial advisors and for financial and investment analysts over the next decade. That squares with broader evidence: where AI supports complex decision-making, employment and sales can rise together. What we face is not a “no-jobs” future but a “new-jobs” mix with higher skill demands. Method note: these projections are decade-long baselines, not short-term forecasts, and they assume ongoing adoption and regulated use. For educators, the message is clear: curricula must abandon the notion that students learn by doing low-stakes grunt work. The grunt work is disappearing. We need to teach judgment from day one.

Figure 1: Most UK finance firms already use AI and another slice is close behind—shrinking manual entry work and raising the bar for judgment.

What AI Finance Talent Changes in the Labor Market

Three facts anchor the labor story. First, adoption has become mainstream. In the Bank of England’s latest survey, 75% of financial firms already use AI, and an additional 10% plan to do so within 3 years. Second, major banks have scaled internal platforms rather than outsourcing essential workflows to public tools. Goldman Sachs rolled out its assistant across the firm; Morgan Stanley’s wealth unit experienced 98% adoption; Citi has transitioned from pilots to company-wide improvements. Third, the capability frontier is specific to the industry and widening: Bloomberg trained a 50-billion-parameter model on financial data to enhance narrow tasks like sentiment and entity tagging, the same functions junior staff once performed. The result is a workplace where judgment matters more than keystrokes, and where AI finance talent earns a premium for guiding, auditing, and explaining machine outputs.

Skeptics may argue that displacement at the bottom still represents displacement. They are right to be concerned about early career opportunities. Some datasets show fewer listings for entry-level roles that generative AI most impacts. Yet when we look at the bigger picture, firms that invest in AI at scale often expand higher-value work and reassign staff. JPMorgan’s leadership has stated that AI could enhance nearly every job at the bank, and external reports have documented hundreds of use cases. Internalizing tools—rather than relying on vendors—maintains tighter compliance and speeds up improvements. The real risk isn’t mass unemployment in finance; it’s a lack of workers who can blend industry knowledge with model oversight and narrative skills. That’s what AI finance talent represents in practice.

Building the AI Finance Talent Pipeline in Education

Programs that prepare students for finance must reshape the learning sequence. If automation handles extraction and drafting, students must focus on selection and defense. Change “learn Excel shortcuts” to “run an AI-assisted model, then stress-test and explain its limits.” Change “compile comps” to “audit the source of the comps and adjust for model bias.” Change “write a market update” to “write a decision brief that links model outputs to policy or fiduciary responsibility.” This is not theoretical; it reflects how leading firms already operate: assistants summarize policies, cluster filings, and draft code, while humans decide what holds up with clients or regulators. Schools should develop AI finance talent by teaching students to align model outputs with real constraints—risk, regulation, and time.

To enable that shift, universities can borrow from medical training. Create supervised “AI rounds” in which students rotate through desk-like simulations. In one week, they use a domain model to analyze an earnings call and produce a one-page risk memo that cites source documents. In another, they refine a draft from a pitch creator and write a three-paragraph rationale a compliance officer could approve. In a third, they assess how a prompt change shifts a valuation range and justify it with a clear audit trail. Method note: assessments can grade three aspects—evidence, calibration, and clarity—rather than the final artifact, which AI can generate; a minimal scoring sketch follows. The goal is fluency: can a student explain what the model did, why it might be wrong, and how to steer it? That is the essence of AI finance talent.
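As a sketch of how such process-focused grading might be scored (weights and the scale are illustrative, not a validated instrument):

```python
# Sketch of the evidence/calibration/clarity rubric: grade the
# process, not the AI-generated artifact. Weights and the 0-4 scale
# are illustrative, not a validated instrument.
RUBRIC_WEIGHTS = {"evidence": 0.4, "calibration": 0.3, "clarity": 0.3}

def rubric_score(scores: dict[str, float]) -> float:
    """Weighted mean of 0-4 scores across the three dimensions."""
    assert set(scores) == set(RUBRIC_WEIGHTS), "score every dimension"
    return sum(RUBRIC_WEIGHTS[d] * scores[d] for d in RUBRIC_WEIGHTS)

# Strong sourcing, weak uncertainty handling, clear writing:
print(rubric_score({"evidence": 4.0, "calibration": 2.0, "clarity": 3.5}))  # 3.25
```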

Policy for an AI Finance Talent Economy

Policymakers should treat the redesign of apprenticeships as essential infrastructure. The aim is not to fund another coding bootcamp; it’s to support shared “explainable finance” environments where schools, supervisors, and middle-market firms can train on safe data and publish repeatable exercises. Central banks and regulators have already cautioned that explainability, bias detection, and human oversight are necessary. Public environments can standardize these practices earlier in careers, rather than as remedial training. Sector-wide adoption data highlight why timing is crucial. With 75% of firms already using AI—and major players continually pulling ahead—the window for inclusive talent pipelines is now, not later.

Regulators can also promote an “internal-first” approach, where appropriate, backing firms that develop or closely manage in-house tools rather than untested external systems. This is both a precautionary measure and an educational one. Keeping model behavior and data flow visible to reviewers is better for supervision and training graduates in governance. Evidence from large institutions indicates that internal platforms enhance measurable productivity and adoption at scale. This supports wage premiums for hybrid roles and creates a more straightforward pathway from student projects to professional roles. A close connection between classroom environments and firm workflows will cultivate AI finance talent more quickly and equitably than a hands-off approach.

Educators, Administrators, and the New Ladder

Educators should eliminate outdated modules that assume entry work is manual. Replace them with three new habits that align with current practices. First, always combine generation with verification. Students must check AI-generated numbers against original documents and keep a record of the check. Second, teach translation. The best junior today turns a model’s complex outputs into clear, client-ready paragraphs with decisions and caveats. Third, integrate narrative skills with quantitative skills. Finance now involves both analytics and storytelling—quick synthesis, concise writing, and a human tone. Bloomberg’s finance-focused model and UBS’s video avatars signal this shift. The content aspect of finance is becoming more like journalism, but with fiduciary responsibilities. If we train students to report and not just repeat, their value increases in any role as they move up the ladder.

Figure 2: AI skills now earn a 28% pay premium, and over half of AI-skill demand sits outside IT—evidence that hybrid finance roles are ascendant.

Administrators should assess success by placement into hybrid roles rather than by software certifications. Evidence from Lightcast showing an AI skills premium should inform program design: build capstones that have students practice governance, not just prompting. Track outcomes in risk, research, and client advisory, where adoption is strongest and the combination with human skills is most effective. Meanwhile, universities should enter into data-sharing agreements with employers that allow students to work with synthetic yet realistic datasets—such as policy libraries, anonymized filings, and redacted client notes—so graduates enter the workforce with a governance portfolio rather than just a degree.

Internal AI now manages much of the work that once taught newcomers how finance operates. This can serve as a warning or as a call to action. If grunt work is disappearing, we must replace it with structured judgment. The evidence is compelling: adoption is widespread; specialized models are emerging; wage premiums reward hybrid skills; and projections still indicate growth in advisory and analysis. The labor market will favor individuals who can refine and validate a model and communicate the results clearly. That defines AI finance talent. Our mission is to construct a ladder fit for this era: classrooms that link to real governance; apprenticeships that simulate rather than impose drudgery; curricula that emphasize judgment and culminate in impact. Finance has upgraded its tools. Now, education needs to upgrade its training. The quicker we act, the fairer the new ladder will be.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bank of England. (2024, November 21). Artificial intelligence in UK financial services – 2024.
Bank for International Settlements. (2024, June 25). Artificial intelligence and the economy: implications for central banks (BIS Annual Economic Report, Chapter III).
Bloomberg. (2023, March 30). Introducing BloombergGPT, Bloomberg’s 50-billion parameter large language model, purpose-built for finance.
Brookings Institution. (2025, July 10). Hybrid jobs: How AI is rewriting work in finance.
Brookings Institution. (2025). The effects of AI on firms and workers.
FactSet. (2025, January 15). FactSet launches AI-powered Pitch Creator.
Financial News London. (2024, April 8). Jamie Dimon: AI could ‘augment virtually every job’ at JPMorgan.
Lightcast. (2025, July 24). New Lightcast report: AI skills command 28% salary premium as demand shifts beyond tech.
Morgan Stanley. (2024, June 26). AI @ Morgan Stanley Debrief – launch debrief.
Reuters. (2025, June 23). Goldman Sachs launches AI assistant firmwide, memo shows.
Reuters. (2025, May 22). Citigroup launches AI tools for Hong Kong employees.
Reuters / Breakingviews. (2024, June 27). Banks grab AI-generated tiger by the tail.
UBS. (2025, June). UBS deploys AI analyst avatars [News report]. Financial Times.
U.S. Bureau of Labor Statistics. (2025, March 11). Incorporating AI impacts in BLS employment projections.
World Economic Forum. (2023). The Future of Jobs Report 2023.
