When “Intrinsic Value” Meets a Compute Bidding War: Urgent Lessons for Education Policy in the AI Era
By Keith Lee
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.
AI prices reflect scarce compute and network effects, not just hype
Educators must teach market dynamics and govern AI use
Turn volatility into lasting learning gains
In a field historically dominated by cash-flow models and neat multiples, one number stands out: by 2030, global data-center electricity use is expected to more than double to around 945 terawatt-hours, roughly Japan's entire annual electricity consumption, with AI a major driver of the increase. Whatever our views on price-to-earnings ratios or "justified" valuations, the physical build-out is real. It involves steel, concrete, grid interconnects, substations, and heat rejection, all being financed rapidly by firms racing to meet demand. Today's AI equity markets reflect more than forecasts of future cash flow; they are a live auction for scarce compute, power, and attention. If education leaders continue to treat the valuation debate as a purely financial exercise, we risk overlooking a critical point: the market is sending a clear signal about capabilities, bottlenecks, and network effects. Our task is to prepare students and our institutions to read the hype intelligently, use it where it creates real options, and resist it where it deprives classrooms of the resources needed for effective learning.
From “Intrinsic Value” to a Market for Expectations
Traditional valuation holds that price equals the discounted value of expected cash flows, using a rate that reflects their risk. This approach works well in stable markets but struggles when a technological shock changes constraints faster than accounting can keep up. The current AI cycle is less about today's profits and more about securing three interconnected scarcities: compute, energy, and high-quality data. It is also about capturing user attention, which transforms these inputs into platforms. In this environment, supply-and-demand dynamics take over: when capacity is limited and switching costs rise through ecosystems, prices surpass "intrinsic value" because the option to participate tomorrow becomes valuable today. A now-well-known analysis found that companies with workforces more exposed to generative AI saw significant excess returns in the two weeks following ChatGPT's release. This pattern is consistent with investors paying upfront for a quasi-option on future productivity gains, even before those gains appear on income statements.
Figure 1: Firms with higher generative-AI exposure earned rapid, market-factor-adjusted excess returns after ChatGPT’s release, illustrating how expectations can lead measured profits.
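To state the departure from the textbook model explicitly, the identity described above can be annotated with the term the AI cycle is effectively pricing. The decomposition below is our illustration, not a standard accounting identity: the first term is the classic discounted-cash-flow value, and ROV labels the real-option value of being positioned when constraints ease.

```latex
% Price = discounted expected cash flows + real-option value of participation.
% The ROV term is our annotation of the argument above, not standard accounting.
P_0 \;=\; \sum_{t=1}^{T} \frac{\mathbb{E}[CF_t]}{(1+r)^t} \;+\; \mathrm{ROV}
```

When constraints bind and ecosystems lock in, the second term can dominate the first, which is exactly the regime the current cycle appears to be in.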
The term “hype” is not incorrect, but it is incomplete. Network economics shows that who is connected and how intensely they interact can create value only loosely associated with near-term cash generation. Metcalfe-style relationships—where network value grows non-linearly with users and interactions—have concrete relevance in crypto assets and platform contexts, even if they don’t directly translate to revenue. Applying that concept to AI, parameter counts and data pipelines matter less than the active use of those resources: the number of students, educators, and developers engaging with AI services. In valuation terms, this usage can act as an amplifier for future cash flows or as a real option that broadens the potential for profitable product lines. An education policy that views all this as irrational mispricing will misallocate scarce attention within institutions. The market is telling us which bottlenecks and complements truly matter.
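A class could test a Metcalfe-style claim with a few lines of code, assuming access to usage data. The sketch below is illustrative: the user counts and network values are hypothetical placeholders, not measurements from any real network.

```python
# A minimal sketch: fit a Metcalfe-style law V = a * n^b to usage data and
# check whether the exponent b is closer to 2 (Metcalfe) than to 1 (linear).
# The figures below are hypothetical placeholders for illustration only.
import numpy as np

n = np.array([1e5, 5e5, 1e6, 5e6, 1e7])   # assumed active users
V = np.array([0.8, 22, 95, 2400, 9800])   # assumed network value, $ millions

# Linear regression in log space: log V = log a + b * log n
b, log_a = np.polyfit(np.log(n), np.log(V), 1)
print(f"estimated exponent b = {b:.2f} (Metcalfe predicts ~2, linear ~1)")
```

The exercise matters because the exponent, not the headline user count, tells students whether usage plausibly amplifies future cash flows.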
What the 2023–2025 Numbers Actually Say
The numbers paint a mixed but legible picture. On the optimistic side, NVIDIA's price-to-earnings ratio recently hovered around 50, high by historical standards and an easy target for bubble talk. On the investment front, Alphabet raised its 2025 capital expenditure guidance from $75 billion to about $85 billion, and Meta now expects $66–72 billion in capex for 2025, directly linking this to AI data-center construction. Meanwhile, U.S. data-center construction reached a record annualized pace of $40 billion in June 2025. Both the IEA and the U.S. EIA predict strong electricity-demand growth in the mid-2020s, with AI-related computing playing a significant role. These figures represent concrete plans, not mere speculation. However, rigorous field data complicate the productivity narrative. An RCT by METR found that experienced developers using modern AI coding tools were 19% slower on real tasks, suggesting that current tools can reduce productivity in high-skill settings. And a report from the MIT Media Lab's Project NANDA argues that 95% of enterprise Gen-AI projects have shown no measurable profit impact so far. This gap between adoption and transformation should prompt both investors and decision-makers to exercise caution.
If this seems contradictory, it arises from expecting one data point—stock multiples or a standout case study—to answer broader questions. A more honest interpretation is that we are witnessing the initial dynamics of platform formation. This includes substantial capital expenditures to ease compute and power constraints, widespread adoption of generative AI tools in firms and universities, and uneven, often delayed, translations into actual productivity and profits. McKinsey estimates that generative AI could eventually add $2.6–4.4 trillion annually across various use cases. Their 2025 survey shows that 78% of organizations use AI in at least one function, while generative AI usage exceeds 70%—high adoption figures alongside disappointing short-term return on investment metrics. In higher education, 57% of leaders now see AI as a “strategic priority,” yet fewer than 40% of institutions report having mature acceptable-use policies. This points to a sector that is moving faster to deploy tools than to govern their use. The correct takeaway for educators is not “this is a bubble, stand down,” but instead “we are in a market for expectations where the main limit is our capacity to turn tools into outcomes.”
Figure 2: Generative-AI exposure is highest in analytical and routine cognitive tasks, mapping where education and workforce training will feel the earliest shocks.
What Education Systems Should Do Now
The first step is to teach the real economics we are living through. Business, public policy, and data science curricula should move beyond the mechanics of discounted cash flow to cover network economics, market microstructure under scarcity, and real options on intangible assets such as data. Students should learn how an increase in GPU supply or a new substation interconnect can alter market power and pricing, even far from the classroom. They should also understand how platform lock-in and ecosystems affect not only company strategies but also public goods. Methodologically, programs should incorporate brief "method notes" into coursework and capstone projects, forcing students to make straightforward, rough estimates, such as how per-user costs change for a campus model serving 10,000 users under latency constraints. This literacy shifts the discussion from a static "bubble versus fundamentals" argument to a dynamic one about relieving bottlenecks and valuing options under uncertainty.
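A "method note" of the kind described might look like the following sketch. Every parameter is an assumed, illustrative figure rather than a vendor quote; the point is the habit of estimation, not the specific numbers.

```python
# Back-of-envelope "method note": per-user cost of a campus LLM service.
# Every input below is an assumption chosen for illustration.
USERS            = 10_000    # campus accounts
QUERIES_PER_DAY  = 8         # average queries per user per day
TOKENS_PER_QUERY = 1_500     # prompt + completion tokens
COST_PER_1K_TOK  = 0.002     # assumed blended $ per 1K tokens
PEAK_FRACTION    = 0.15      # share of daily queries in the busiest hour

daily_tokens   = USERS * QUERIES_PER_DAY * TOKENS_PER_QUERY
monthly_cost   = daily_tokens / 1_000 * COST_PER_1K_TOK * 30
per_user_month = monthly_cost / USERS

# Latency constraint: load the service must absorb in the peak hour.
peak_qps = USERS * QUERIES_PER_DAY * PEAK_FRACTION / 3_600

print(f"monthly cost ~ ${monthly_cost:,.0f} (${per_user_month:.2f}/user)")
print(f"peak load ~ {peak_qps:.1f} queries/sec to size for latency")
```

With these assumptions the service costs about $0.72 per user per month and must be sized for roughly 3 queries per second at peak; students can then stress the assumptions and watch which one dominates.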
Second, administrators should view AI spending as a portfolio of real options rather than a single all-or-nothing bet. Centralized, vendor-restricted tool purchases may promise speed but can also leave stranded costs if teaching methods do not adjust. In contrast, smaller, domain-specific trials may carry higher unit costs but yield valuable insight into where AI enhances human expertise and where it replaces it. The MIT NANDA finding of minimal profit impact is not a reason to stop; it is a reason to phase in deliberately: begin where workflows are established, evaluation is precise, and equity risks are manageable. Focus on academic advising, formative feedback, scheduling, and back-office automation before high-stakes assessments or admissions. Build dashboards that track business and learning outcomes, not just model tokens or prompt counts, and subject them to the same auditing standards we apply to research compliance. The clear takeaway: adoption is straightforward; integration is tough. Governance is what separates a cost center from a durable capability.
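The option logic can be made concrete with one-period arithmetic. All figures below are assumptions chosen to illustrate why staging preserves value; they are not drawn from any cited study.

```python
# Why a small pilot can beat an upfront campus-wide commitment:
# one-period real-option arithmetic with assumed, illustrative numbers.
PILOT_COST  = 50_000     # run a contained domain-specific trial
SCALE_COST  = 500_000    # full rollout if the pilot succeeds
P_SUCCESS   = 0.4        # assumed chance the pilot shows real learning gains
PAYOFF_GOOD = 2_000_000  # value of a rollout that actually works
PAYOFF_BAD  = 0          # failed rollout recovers nothing

# Commit everything upfront: you pay for scale in both states of the world.
upfront = P_SUCCESS * PAYOFF_GOOD + (1 - P_SUCCESS) * PAYOFF_BAD - SCALE_COST

# Pilot first: you only scale after observing success (option to abandon).
staged = P_SUCCESS * (PAYOFF_GOOD - SCALE_COST) - PILOT_COST

print(f"expected value, all-in upfront:   ${upfront:,.0f}")
print(f"expected value, pilot-then-scale: ${staged:,.0f}")
```

Under these assumptions the staged path is worth $550,000 against $300,000 for the all-in path; the gap is precisely the value of the option to abandon, which is what clear stopping criteria protect.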
Third, treat computing and energy as essential infrastructure for learning, with sustainability built in from the start. The IEA's forecast of data-center electricity demand doubling by 2030 means that campuses entering AI at scale must plan for power, cooling, and clean-energy procurement, or risk unstable dependencies and public backlash. Collaborative models can help share fixed costs: regional university alliances can negotiate access to cloud credits, co-locate small inference clusters with local energy plants, or enter power-purchase agreements that prioritize green energy. Where possible, choose open standards and interoperability to minimize switching costs and strengthen negotiating power with vendors. Connect every infrastructure dollar to outcomes for students, whether through course redesigns that demonstrate gains in persistence, AI support integrated into writing curricula, or accessible tutoring aimed at narrowing equity gaps. Infrastructure without a pedagogical purpose is just asset acquisition.
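A campus can estimate the energy bill it is signing up for before committing. In the sizing sketch below, the cluster size, board power, load factor, PUE, and electricity price are all illustrative assumptions, not measurements.

```python
# Sizing sketch: annual electricity for a small campus inference cluster.
# All parameters are assumptions for illustration.
GPUS          = 64      # small shared inference cluster
WATTS_PER_GPU = 700     # assumed board power at load
UTILIZATION   = 0.5     # assumed average load factor
PUE           = 1.3     # assumed power usage effectiveness of the facility
PRICE_KWH     = 0.12    # assumed $ per kWh

kwh_year = GPUS * WATTS_PER_GPU * UTILIZATION * PUE * 8_760 / 1_000
print(f"~ {kwh_year:,.0f} kWh/year, ~ ${kwh_year * PRICE_KWH:,.0f} in electricity")
```

At these assumptions the cluster draws roughly 255,000 kWh a year, around $31,000 in electricity alone, before cooling retrofits or grid upgrades; a consortium splitting that fixed cost changes the calculus quickly.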
Fourth, adjust governance to align with how capabilities actually spread. EDUCAUSE’s 2025 study shows that enthusiasm is outpacing policy depth. This can lead to inconsistent practices and reputational risks. Institutions should publish clear, up-to-date use policies that outline permitted tasks, data-handling rules, attribution norms, and escalation procedures for issues. Pair these with revised assessments—more oral presentations, in-class synthesis, and comprehensive portfolios—to decouple grading from text generation and clarify AI’s role. A parallel track for faculty development should focus on low-risk enhancements, including the use of AI for large-scale feedback, formative analytics, and expanding access to research-grade tools. The aim is not to automate teachers but to increase human interaction in areas where judgment and compassion enhance learning.
Fifth, resist the comfort of simplistic narratives. Some financial analyses argue that technology valuations remain reasonable once growth is accounted for, while others warn of painful corrections. Both can hold some truth. For universities, the critical question is exposure management: which investments generate option value in either scenario? Promoting valuation literacy across disciplines, funding course redesigns centered on AI-driven problem-solving, establishing computational "commons" to lower the cost of experimentation, and strengthening institutional research capacity to measure learning effects each hedge against both the optimistic and the pessimistic market path. In market terms, this is a barbell strategy: low-risk, high-utility additions at scale, alongside a select few high-risk pilots with clear stopping criteria.
A final note on "irrationality." The most straightforward critique of AI valuations is that many projects do not succeed and productivity gains vary. Both assertions are true in the short term. However, markets can rationally price path dependence: once a few platforms gather developers, data, and distribution, the steepness of the adoption curve and the cost of dislodging incumbents both change. This observation does not certify every valuation; it explains why price can run ahead of cash flows while scarce inputs and platform positions are still being contested. Our sector's mistake would be to dismiss this signal on principle or, worse, to imitate it uncritically with high-profile but low-impact spending. The right posture is not blanket skepticism but disciplined opportunism: the practice of turning fluctuating expectations into lasting learning capabilities.
Returning to the key fact: electricity demand for data centers is set to double this decade, with AI as the driving force. We can discuss whether current equity prices exceed “intrinsic value.” What we cannot ignore is that the investments they fund create the capacity—and dependencies—that will influence our students’ careers. Education policy must stop regarding valuation as a moral debate and start interpreting it as market data. We should educate students on how supply constraints, network effects, and option value affect prices, govern institutional adoption to ensure pilots evolve into workflows, and invest in sustainable compute so that pedagogy—not publicity—shapes our AI impact. By doing so, we will transform a noisy, hype-filled cycle into a quieter form of progress: more time focused on essential tasks for teachers and students, greater access to quality feedback, and graduates who can distinguish between future narratives and a system that actively builds them. This is a call to action that fits the moment and is the best safeguard against whatever the market decides next.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
The Atlantic. (2025). Just How Bad Would an AI Bubble Be? Retrieved September 2025.
Eisfeldt, A. L., Schubert, G., & Zhang, M. B. (2023). Generative AI and firm valuation. CEPR VoxEU column.
EDUCAUSE. (2025). 2025 EDUCAUSE AI Landscape Study: Introduction and Key Findings.
EdScoop. (2025). Higher education is warming up to AI, new survey shows. (Reporting 57% of leaders view AI as a strategic priority.)
International Energy Agency (IEA). (2025). AI is set to drive surging electricity demand from data centres (news release and Energy & AI report).
Macrotrends. (2025). NVIDIA PE Ratio (NVDA).
McKinsey & Company. (2023). The economic potential of generative AI: The next productivity frontier.
McKinsey & Company (QuantumBlack). (2025). The State of AI: How organizations are rewiring to capture value (survey report).
METR (Model Evaluation & Threat Research). Becker, J., et al. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity (arXiv preprint and summary).
MIT Media Lab, Project NANDA. (2025). The GenAI Divide: State of AI in Business 2025 (preliminary report) and Fortune coverage.
Reuters. (2025). Alphabet raises 2025 capital spending target to about $85 billion; U.S. data centre build hits record as AI demand surges.
U.S. Department of Energy / EPRI. (2024–2025). Data centre energy use outlooks and U.S. demand trends (DOE LBNL report; DOE/EPRI brief).
U.S. Energy Information Administration (EIA). (2025). Short-Term Energy Outlook; Today in Energy: Electricity consumption to reach record highs.
UBS Global Wealth Management. (2025). Are we facing an AI bubble? (market note).
Meta Platforms, Investor Relations. (2025). Q1 and Q2 2025 results; 2025 capex guidance $66–72B.
Alphabet (Google) Investor Relations. (2025). Q1 and Q2 2025 earnings calls noting capex levels.
Bakhtiar, T. (2023). Network effects and store-of-value features in the cryptocurrency market (empirical evidence on Metcalfe-type relationships).
How Much Is Too Much? A Proportion-Based Standard for Genuine Work in the Age of AI
By Keith Lee
Judge AI use by proportion, not yes/no
Require disclosure and provenance to prove human lead
Apply thresholds (≤20%, 20–50%, >50%) to grade and govern
Sixty-two percent of people say they would like their favorite artwork less if they learned it was created entirely by artificial intelligence, with no human involvement. Eighty-one percent believe the emotional value of human art differs from that of AI output. Yet in the same survey, a plurality considered people who use AI to create art to be artists, provided they give "significant guidance" to the tool. The public, in other words, is not rejecting AI outright; it is drawing distinctions based on the degree of human involvement. How much tool support we will accept before labeling a work "not genuine" has become a key question for education, the creative industries, and democratic culture. We should stop debating whether AI belongs in classrooms and studios and start discussing the level of AI involvement compatible with authorship and trust. The good news is that we can measure, disclose, and manage that level, opening new possibilities for creativity and collaboration.
The Missing Metric: Proportion of AI Involvement
The primary problem with current classroom policies is binary thinking. Assignments are labeled AI or not-AI, as if one prompt means plagiarism and one revised paragraph implies purity. This framing is already failing on two fronts. First, AI-text detectors are demonstrably unreliable and biased, with peer-reviewed studies showing they disproportionately flag non-native English speakers; some universities have paused or limited their use for this reason. Second, students and instructors are increasingly using generative tools: surveys in 2024 found that about three in five students used AI regularly, compared with about a third of instructors. Policies need to catch up with actual practice by adopting a metric that recognizes varying degrees of assistance, rather than one that flips between innocence and guilt.
A workable standard should focus on the process, not just the product: What percentage of the work’s substance, structure, and surface was created by a model instead of the author? Since we cannot look inside someone’s mind, we should gather evidence around it. Part of that evidence could include provenance—verifiable metadata that shows whether and how AI was used. The open C2PA/Content Credentials system is now being implemented across major creative platforms. Even social networks are starting to read and display these labels automatically. Regulators are following suit: the EU’s AI Act requires providers to label synthetic media and inform users when they interact with AI systems. Meanwhile, the U.S. Copyright Office has established a requirement for human authorship, and courts have confirmed that purely AI-generated works lack copyright protection. Together, these developments make transparency essential for trust.
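For intuition, the signal that provenance metadata provides can be sketched as below. The field names are hypothetical simplifications for illustration; they are not the actual C2PA manifest schema, which real tools expose through their own APIs.

```python
# Sketch of a provenance check over a simplified dict. This mirrors the
# *idea* of C2PA Content Credentials; the field names below are
# hypothetical, not the real C2PA manifest schema.
manifest = {
    "generator": "image-model-x",        # tool that asserted the record
    "actions": ["created", "edited"],    # lifecycle recorded in metadata
    "ai_generated": True,                # flag asserted by the generator
}

def provenance_flag(m: dict) -> str:
    """Classify an asset from its (simplified) provenance record."""
    if m.get("ai_generated"):
        return "model-created" if "created" in m.get("actions", []) else "model-edited"
    return "no AI assertion in metadata"

print(provenance_flag(manifest))  # -> "model-created"
```

The point is not the specific fields but the workflow: reviewers read an attached, verifiable record instead of guessing from the finished artifact.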
Figure 1. Audiences penalize fully automated art: 62% would like it less; only 5% would like it more.
“Proportion” must be measurable, not mystical. Education providers can combine three clear signals. First, version history and edit differences: in documents, slides, and code, we can see how many tokens or characters were pasted or changed, and how the text evolved. Second, prompt and output logs: most systems can produce a time-stamped record of inputs and outputs for review. Third, content credentials: when available, embedded metadata shows whether an asset was created by a model, edited, or just imported. None of this is about perfect detection; it’s about making a plausible, defensible estimate. As a method note, if an assignment is 1,800 words long and the logs show three pasted AI passages totaling about 450 words, plus smaller AI-edited fragments of another 150 words, a reasonable starting estimate of AI involvement would be 33% to 35%. Instructors can adjust this estimate based on the complexity of those sections, as structure and argument often count more than sentence refinement. The goal is consistency and due process, not courtroom certainty.
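The worked estimate in this paragraph reduces to a few lines of arithmetic. Here is a minimal sketch, with the edit weight left as an explicit policy choice rather than a measured quantity.

```python
# The worked example from the text as a reusable estimate. The edit weight
# is a policy choice, not a measurement; 1.0 counts pasted and AI-edited
# words the same, matching the 1,800-word example above.
def ai_proportion(total_words: int, pasted_ai_words: int,
                  ai_edited_words: int, edit_weight: float = 1.0) -> float:
    """Estimate the share of a text's substance attributable to a model."""
    ai_words = pasted_ai_words + edit_weight * ai_edited_words
    return ai_words / total_words

share = ai_proportion(total_words=1_800, pasted_ai_words=450, ai_edited_words=150)
print(f"estimated AI involvement: {share:.0%}")  # -> 33%
```

An instructor who judges the AI-edited fragments as light polish could set the edit weight below 1.0 and arrive at a lower figure; the formula makes that judgment explicit and reviewable rather than hidden.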
This process approach also respects the real capabilities of today’s generative systems. Modern models are impressive at quickly combining known patterns. They interpolate within the range of their training data; they do not intentionally seek novelty beyond it. However, current research warns that training loops filled with synthetic outputs can lead to “model collapse,” a situation where models forget rare events and drift toward uniform and sometimes nonsensical output. This is a significant reason to preserve and value human originality in the data ecosystem. Proportion rules in schools and studios thus protect not just assessment integrity, but also the future of the models by keeping human-made, context-rich work involved.
What Counts as Genuine? Evidence and a Policy Threshold
Public attitudes reveal a consistent principle: people are more accepting of AI when a human clearly takes charge. In the 2025 survey mentioned earlier, most respondents disliked fully automated art; the largest group accepted AI users as artists only when they provided "significant guidance," such as choosing the color palette, deciding on the composition, or making the final artistic decisions. Similar trends appear in news: audiences prefer "behind-the-scenes" AI use to AI-written stories and want clear labeling when automation plays a visible role. Additionally, more than half of Americans believe generative systems should credit their sources, another sign that provenance and accountability, rather than the absence of tools, underpin legitimacy. These findings do not pinpoint the exact line, but they show how to draw it: traceable human intent combined with transparent tool use builds trust.
Figure 2: Authorship hinges on visible human control: 42% accept AI users as artists only with significant guidance. Source: Béchard & Kreiman/Kreiman Lab via Scientific American.
A proportion-based policy can put that principle into action with categories that reflect meaningful shifts in authorship. One proposal institutions can adopt today: works with 20% or less AI involvement may be submitted with a brief disclosure stating "assisted drafting and grammar" and are treated as human-led. Works with more than 20% and up to 50% require an authorship statement outlining the decisions the student or artist made that the system could not, such as choosing the research frame, designing figures, directing a narrative, or staging a shot, so the contribution reads clearly as co-creation. Any work with more than 50% AI origin should carry a visible synthetic-first label and, in graded contexts, be assessed mainly on editorial judgment rather than original expression. These thresholds are guidelines, not laws, and can be adjusted by discipline. They give educators a middle ground between "ban" and "anything goes," in line with how audiences already judge authenticity, and they keep assessment of AI's role consistent and defensible.
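The thresholds translate directly into a small policy table. Below is a sketch using the article's suggested defaults, which institutions would tune per discipline.

```python
# The proposed disclosure tiers as code. Cutoffs are the article's
# suggested defaults and are meant to be adjusted per discipline.
def disclosure_tier(ai_share: float) -> str:
    if ai_share <= 0.20:
        return "human-led: brief disclosure (assisted drafting and grammar)"
    if ai_share <= 0.50:
        return "co-creation: authorship statement of human decisions required"
    return "synthetic-first: visible label; grade mainly editorial judgment"

for share in (0.10, 0.33, 0.72):
    print(f"{share:.0%} -> {disclosure_tier(share)}")
```

Encoding the rule this way also makes appeals tractable: a disputed grade becomes a dispute about the estimated share, not about an unstated standard.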
This approach also aligns with the views of artists and scholars on the irreplaceable nature of human work. Research from the Oxford Internet Institute suggests that machine learning will not replace artists; it will reshape their workflows while keeping core creative judgment in human hands. Creators express both anxiety and pragmatism: AI lowers barriers to entry and speeds up iteration, but it also risks homogenization and diminishes the value placed on craftsmanship unless gatekeepers reward evidence of the creative process and provenance. Education can establish a reward structure early, allowing graduates to develop habits that the labor market recognizes.
From Classroom to Creative Labor Markets: Building the New Trust Stack
If proportion is the key metric, process portfolios are the way forward. Instead of a single deliverable, students should present a brief dossier: the final work, a log of drafts, prompts, and edits, a one-page authorship statement, and, where applicable, embedded Content Credentials across images, audio, and video. This dossier should become routine and not punitive: students learn to explain their decisions, instructors evaluate their thinking, and reviewers see how, where, and why AI fits in. For high-stakes assessments, panels can review a subset of dossiers for audit. The message is straightforward: disclose, reflect, and show control. This is much fairer than relying on detectors known to misfire, especially against multilingual learners.
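One possible shape for such a dossier, sketched as a data structure; the field names are our illustration of the checklist above, not an existing standard.

```python
# A hypothetical dossier schema (Python 3.9+). Field names illustrate the
# checklist in the text; they are not an established interchange format.
from dataclasses import dataclass, field

@dataclass
class ProcessDossier:
    final_work: str                       # path or URL to the deliverable
    draft_log: list[str]                  # ordered draft / edit snapshots
    prompt_log: list[str]                 # time-stamped prompts and outputs
    authorship_statement: str             # one page: decisions the human made
    content_credentials: list[str] = field(default_factory=list)  # C2PA assets
    ai_share_estimate: float = 0.0        # proportion derived from the logs

dossier = ProcessDossier(
    final_work="essay_final.docx",
    draft_log=["draft_v1.docx", "draft_v2.docx"],
    prompt_log=["2025-03-02T10:14 prompt/response"],
    authorship_statement="Chose the research frame; wrote the argument; used AI for grammar.",
    ai_share_estimate=0.18,
)
```

A panel auditing a sample of dossiers then checks internal consistency, whether the logs support the stated share and the statement, rather than running a detector over the finished text.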
Administrators can transform proportion-based policy into governance with three strategies. First, standardize disclosures by implementing a campus-wide authorship statement template that accompanies assignments and theses. Second, require provenance where possible: enable C2PA in creative labs and recommend platforms that maintain metadata. Notably, mainstream networks have begun to automatically label AI-generated content uploaded from other platforms, signaling that provenance will soon be expected beyond the campus. Third, align with laws: the transparency rules in the EU AI Act and U.S. copyright guidance already indicate the need to mark synthetic content and uphold human authorship for protection. Compliance will naturally follow from good teaching practices.
Policymakers should assist in standardizing this “trust stack.” Fund open-source provenance tools and pilot programs; encourage collaborations between disciplines like arts schools, journalism programs, and design departments to agree on discipline-specific thresholds; and synchronize labels so audiences receive the same signals across sectors. Public trust is the ultimate benefit: when labels are consistent and authorship statements are routine, consumers can reward the kind of human leadership they value. The same logic applies to labor markets. Projections indicate AI will both create and eliminate jobs; a 2025 employer survey predicts job reductions for tasks that can be automated, but growth in AI-related roles. Meanwhile, recent data from one large U.S. area shows that AI adoption has not yet led to broad job losses; businesses are focusing on retraining rather than replacing workers. For graduates, the message is clear: the job market will split, favoring human-led creativity combined with tool skills over generic production. Training based on proportions becomes their advantage.
Lastly, proportion standards help guard against subtle systemic risks. If classrooms inundate the world with uncredited synthetic content, models will learn from their own outputs and deteriorate. Recent studies in Nature and industry suggest that heavy reliance on synthetic data can lead to models that “forget” rare occurrences and fall into uniformity. Other research indicates that careful mixing with human data can alleviate this risk. Education should push the boundaries of ideas, not limit them. Teaching students to disclose and manage their AI usage in ways that highlight their own originality protects both academic values and the quality of human data required for future systems.
The public has already indicated where legitimacy begins: most people will accept AI in art or scholarship when a human demonstrates clear leadership. The earlier statistic—62 percent—should be interpreted not as a rejection of tools, but as a demand for clear, human-centered authorship. Education can meet this demand through a proportion-based standard: measure the level of AI involvement, require disclosure and reflection, and evaluate the human decisions that provide meaning. Institutions that build this trust framework—process portfolios, default provenance, and sensible thresholds—will produce students who can confidently say, “this is mine, and here’s how I used the machine.” This approach will resonate across creative industries that increasingly seek visible human intent, preserving a rich cultural and scientific record filled with the kind of human originality that models cannot produce on their own. The question is no longer whether to use AI. It is how much and how openly we can use it while remaining genuine. The answer starts with proportion and should be incorporated into policy now.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Adobe. (2024, Jan. 26). Seizing the moment: Content Credentials in 2024.
Adobe. (2024, Mar. 26). Expanding access for Content Credentials.
Béchard, D. E., & Kreiman, G. (2025, Sept. 7). People want AI to help artists, not be the artist. Scientific American.
European Union. (2024). AI Act, Article 50: Transparency obligations.
Financial Times. (2024, Jul. 24). AI models fed AI-generated data quickly spew nonsense (Nature coverage).
Kollar, D. (2025, Feb. 14). Will AI threaten original artistic creation? Substack.
Shumailov, I., et al. (2024). AI models collapse when trained on recursively generated data. Nature.
Oxford Internet Institute. (2022, Mar. 3). Art for our sake: Artists cannot be replaced by machines (study). University of Oxford.
Pew Research Center. (2024, Mar. 26). Many Americans think generative AI programs should credit their sources.
Reuters. (2025, Mar. 18). U.S. appeals court rejects copyrights for purely AI-generated art without human creator.
Reuters Institute. (2024, Jun. 17). Public attitudes towards the use of AI in journalism. In Digital News Report 2024.
Stanford HAI. (2023, May 15). AI detectors are biased against non-native English writers.
TikTok Newsroom. (2024, May 9). Partnering with our industry to advance AI transparency and literacy.
Tyton Partners. (2024, Jun. 20). Time for Class 2024 (Lumina Foundation PDF).
U.S. Copyright Office. (2023, Mar. 16). Works containing material generated by artificial intelligence (policy statement).
Vanderbilt University. (2023, Aug. 16). Guidance on AI detection and why we're disabling Turnitin's AI detector.
Federal Reserve Bank of New York (via Reuters). (2025, Sept. 4). AI not affecting job market much so far.
World Economic Forum. (2025, Jan. 7). Future of Jobs Report 2025.