Maximizing Agentic AI Productivity: Why German Firms Must Move Beyond Adoption and Teach Machines to Act
Ethan McGowan
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence
Ethan McGowan is a Professor of AI/Finance and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
German firms adopted generative AI fast, but productivity gains are flattening
The next phase is converting adoption into durable agentic AI productivity
Education and policy must shift from tools to systems, governance, and measurement
Generative models found their way into businesses quickly. But the speed of that acceptance conceals something: the early gains are no longer as strong as they were. When a company says it uses AI, that can mean anything from a simple chatbot pilot to a complete system that plans, executes, and learns across tasks. What matters is that the share of companies that have moved past the pilot stage into routine AI use has risen sharply since 2023, yet the additional output per euro spent is lower than it was for the early adopters. The question is no longer whether to use AI but how to turn use into value: translating broad adoption into specific gains and making sure experiments lead to lasting improvements. If schools, universities, and policymakers treat all companies alike, they will miss the fact that firms already using AI at scale need a different kind of support. They do not need more introductions; they need help redesigning systems, measuring results, and governing deployment so that AI delivers consistent benefits. We call this agentic AI productivity: AI systems that not only create things but act reliably inside complex organizations, raising output over time.
How to Think About Agentic AI Productivity
We need to stop counting adopters and start tracking benefits and what firms learn. The share of companies using generative tools matters as a measure of diffusion, but it says little about whether those companies capture real value. The evidence shows that many German firms now use the technology widely, yet the performance gain from each additional euro spent is shrinking for firms that already use it. That is the signature of saturation: early users harvest the easy wins, and further gains require deeper change.
This matters for education policy because the skills that supported rapid pilots differ from those needed to scale AI systems. Workers still need to write prompts and keep data clean, but they also need to collaborate with machines, measure AI performance, and maintain explicit guidelines and feedback loops. Teaching people to use tools at a basic level will not move productivity. The harder task is teaching people to change how they work, so that AI becomes a partner that reduces waste, prevents errors, and expands what each job can accomplish.
Figure 1: GenAI use in German firms has expanded rapidly across working hours and adopter cohorts, indicating broad diffusion and early signs of saturation rather than under-adoption.
Early tests focused on drafts, summaries, and prototypes. But routine use requires measures of process value: shorter cycle times across teams, fewer errors, and better decisions under uncertainty. These are harder to measure, but they are the only way to know whether further spending is worth it. Companies that track only basic output metrics may report early gains that later fade. In short, using AI is now table stakes; measuring and managing AI productivity is what will separate successful firms.
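To make that concrete, here is a minimal sketch, in Python, of the kind of process-level measurement described above. The baseline and current figures are hypothetical placeholders; a real deployment would pull them from workflow logs or ticketing systems.

```python
# A minimal sketch of process-level metrics. All numbers are hypothetical.
# "baseline" is the quarter before AI-assisted workflows; "current" is after.

def pct_change(before: float, after: float) -> float:
    """Relative change; negative means improvement for time/error metrics."""
    return (after - before) / before * 100.0

baseline = {"cycle_time_days": 6.0, "error_rate": 0.040}   # hypothetical
current  = {"cycle_time_days": 4.5, "error_rate": 0.031}   # hypothetical

for metric in baseline:
    delta = pct_change(baseline[metric], current[metric])
    print(f"{metric}: {delta:+.1f}%")
# cycle_time_days: -25.0%
# error_rate: -22.5%
```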
Evidence of Saturation: German Firms and Flattening Returns
Between 2023 and 2025, German firms moved quickly from experimentation to routine use. National surveys record a jump from low adoption in 2023 to much higher levels the following year. But diffusion is running at two speeds: many firms report some use, while only a minority use AI deeply and regularly.
Figure 2: Higher GenAI spending is associated with diminishing increases in use intensity, suggesting flattening marginal returns as firms move beyond early adoption.
Growth in the number of adopting firms matters. But independent reviews and firm surveys show that returns are fading for early adopters: additional spending no longer yields the gains it once did. This suggests that as firms deploy AI across many areas, the simple wins, such as automating text, retrieving data, or generating reports, get used up. Further improvement requires redesigning jobs, integrating models into work systems, and changing management practice.
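One way to see what flattening marginal returns means in practice is a small sketch that fits a concave spend-return curve. The firm-level observations below are synthetic, invented only to illustrate the shape; they are not drawn from the surveys cited here.

```python
import math

# Hypothetical observations: (GenAI spend in k-euros, use-intensity index).
# Synthetic numbers chosen only to illustrate a concave spend-return curve.
data = [(10, 20), (20, 28), (40, 35), (80, 41), (160, 46)]

# Least-squares fit of intensity = a + b * ln(spend).
xs = [math.log(s) for s, _ in data]
ys = [y for _, y in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs
)
a = ybar - b * xbar
print(f"fit: intensity = {a:.1f} + {b:.1f} * ln(spend)")

# Marginal return d(intensity)/d(spend) = b / spend: it shrinks as spend grows.
for spend in (10, 40, 160):
    print(f"spend {spend:>3}k -> marginal gain per extra k: {b / spend:.3f}")
```

Under these assumptions the marginal gain at 160k is roughly a sixteenth of the gain at 10k, which is the saturation pattern the surveys describe.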
The policy implication is clear. If German firms have reached the point where broader adoption alone adds little, then incentives that merely drive adoption are outdated. What matters now is support for process redesign, public resources for measuring AI outcomes, and education in strategic thinking and management. These are large changes. They take time, cross-departmental leadership, and new courses in schools and universities on setting goals, defining AI actions, and evaluating results. Without them, wider AI use will spread resources thinly and raise costs without raising real output.
Italy's Households and Slow Adoption: A Comparison
Contrast the saturation among German firms with households in Italy. Italian surveys find broad awareness of AI systems but much slower regular use, and slower still for daily tasks: roughly 30% of adults tried generative tools in the past year, but only a small share use them monthly or in ways that change how they work or study.
Households face different obstacles: limited digital skills, restricted access to devices, and concerns about risk and privacy, all of which make them cautious. The result is a double contrast: firms adopt quickly and hit a ceiling, while households adopt slowly and may never reach productive use without support.
For policymakers and educators, the Italian case shows that access alone does not produce productive use. For households, learning should start with basic skills and progress to working with AI. Schools and adult-education programs need to teach people how to manage AI tools, verify outputs, and make ethical judgments about what to delegate. Otherwise the social gap widens: firms that use AI gain a competitive edge while households and the small businesses that depend on them fall behind, and inequality deepens.
From Tests to Systems: What Companies and Schools Must Teach
Turning AI into real gains requires five changes inside firms, and each has implications for how educators train workers. First, move from tool literacy to co-design: build workflows in which AI and people share goals and feedback. Second, build measurement into operations: track process outcomes, not just output counts. Third, budget time for governance and testing so that agents behave reliably in new situations. Fourth, invest in data quality and secure integrations so models act on accurate information. Fifth, create roles for people who translate strategy into explicit AI instructions; a minimal sketch of such an action policy appears below. These changes are as much pedagogical as technical: they call for curricula that combine technology with business design, people skills, and ethics.
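As a rough illustration of the fifth change, the following sketch shows one possible shape for an explicit agent action policy. The action names, the spending threshold, and the `ActionPolicy` type are all hypothetical, not a real product's API.

```python
# A minimal sketch of an explicit action policy for an internal AI agent.
# Action names and the threshold are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ActionPolicy:
    allowed_actions: set[str]   # actions the agent may take on its own
    approval_over_eur: float    # spend above this needs a human sign-off

POLICY = ActionPolicy(
    allowed_actions={"draft_email", "update_ticket", "create_report"},
    approval_over_eur=500.0,
)

def check(action: str, cost_eur: float = 0.0) -> str:
    """Decide whether the agent may act, must escalate, or is blocked."""
    if action not in POLICY.allowed_actions:
        return "blocked: action not whitelisted"
    if cost_eur > POLICY.approval_over_eur:
        return "escalate: human approval required"
    return "allowed"

print(check("update_ticket", cost_eur=120))  # allowed
print(check("wire_transfer"))                # blocked: action not whitelisted
```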
Schools and universities need to move quickly. They should offer courses on writing goals, measuring the effects of AI actions, and setting limits on what agents may do. Teaching must be hands-on: learners should design and run experiments that measure the gains from small workflow changes, as in the sketch below. Short programs should teach managers to recognize when additional spending stops paying off. Government policy can help by funding partnerships that let small businesses measure outcomes and by supporting benchmarks that allow firms to compare AI results without disclosing private data.
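Such an experiment might look like the following sketch: a permutation test comparing task-completion times before and after one small workflow change. The data are entirely hypothetical and serve only to show the method.

```python
import random
import statistics

random.seed(0)

# Hypothetical task-completion times (hours), without and with an AI step.
control   = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 6.2, 5.0]
treatment = [4.2, 4.6, 4.0, 4.9, 4.4, 4.1, 4.8, 4.3]

observed = statistics.mean(control) - statistics.mean(treatment)

# Permutation test: shuffle the pooled values and count how often a gap
# at least this large appears by chance.
pooled = control + treatment
count, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:len(control)]) - statistics.mean(
        pooled[len(control):]
    )
    if diff >= observed:
        count += 1

print(f"observed gain: {observed:.2f} h, p = {count / trials:.4f}")
```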
Anticipating Criticisms and Rebuttals
Two criticisms will recur. First, that it is too early to speak of saturation because many firms still do not use AI. True in aggregate; many small firms lag. But policy must differentiate across firms, and a blanket push to increase use misses the firms that need transformation. Second, that autonomous AI is unsafe and should be restrained. That worry is valid, and it supports our point: if AI is to act autonomously, governance, testing, and measurement are essential. Halting AI is not the answer. Better policy supports safe use by funding research, requiring incident reporting, and promoting sandbox environments in which agent behavior can be tested before wide deployment. Experience shows that regulators and firms can jointly set standards that enable safe use while protecting people.
A further objection concerns measurement itself: surveys cannot capture long-run outcomes, so claims about benefits are conjecture. That is fair. Our position is that the available evidence points to benefits tapering after a fast start, and we support that claim with multiple data sources, stated with caution. Where direct measurement is missing, policy should concentrate on improving measurement rather than dispensing large, untargeted support. Firms and governments need agreed metrics for AI outcomes so that investments can be judged.
We are at a turning point. The first wave of generative tools changed what companies tried; the next must change how they work. The key question is not whether a firm uses AI but whether it can keep extracting benefits as investment grows. That is what we mean by agentic AI productivity: systems that act, learn, and improve in ways that raise output for each euro spent. German firms show signs of saturation; Italian households show slow adoption. Both facts point to the same conclusion: stop treating use as the goal. Support measurement, governance, business redesign, and teaching about how humans and machines work together. Those are what will turn experiments into permanent value. Time is short and the choice is clear: support organizations and teach the skills that let AI improve work rather than fragment it. Doing nothing will deliver more tools, more noise, and smaller gains for the same cost.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Bank for International Settlements (2025). Exploring household adoption and usage of generative AI: New evidence from Italy. BIS Working Papers, No. 1298.
Centre for Economic Policy Research (2025). Generative AI in German firms: Diffusion, costs and expected economic effects. VoxEU Column.
Eurostat (2025). Digital economy and society statistics: Use of artificial intelligence by households and enterprises. European Commission Statistical Database.
Gambacorta, L., Jappelli, T. and Oliviero, T. (2025). Generative AI, firm behaviour, diffusion costs and expected economic effects. SSRN Working Paper.
Organisation for Economic Co-operation and Development (2024). Artificial Intelligence Review of Germany. OECD Publishing.
David O'Neill
Professor of AI/Policy, Gordon School of Business, Swiss Institute of Artificial Intelligence
David O’Neill is a Professor of AI/Policy at the Gordon School of Business, SIAI, based in Switzerland. His work explores the intersection of AI, quantitative finance, and policy-oriented educational design, with particular attention to executive-level and institutional learning frameworks.
In addition to his academic role, he oversees the operational and financial administration of SIAI’s education programs in Europe, contributing to governance, compliance, and the integration of AI methodologies into policy and investment-oriented curricula.
The AI Tax is turning memory scarcity into a hidden cost on education
Rising DRAM prices push computing access out of reach for many schools and families
Without action, personal computers risk becoming a privilege again
The price of memory is the new tax on learning. In late 2025, global memory prices jumped roughly 50% in a single quarter and analysts now warn of another doubling as data centers gobble AI-grade DRAM and HBM. This is not a distant supplier glitch. It is a structural pivot: wafer capacity, packaging lines and advanced-process allocation have been steered toward servers built to train and serve large models. The result is an AI Tax—an implicit added cost that falls on every device, classroom, and campus that relies on current computing. That tax is already reshaping procurement plans, delaying upgrades, and forcing schools to choose between connectivity, compute and curriculum. The urgent question for teachers and decision-makers is simple: how do we prevent a technological rollback—where modern personal computing becomes a luxury again—while the AI sector consumes the lion’s share of scarce memory resources?
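A back-of-envelope sketch shows why a memory price spike propagates into device budgets. The device cost and the bill-of-materials share assumed below are hypothetical round numbers, not figures from the cited sources.

```python
# Back-of-envelope arithmetic for the "AI Tax". All numbers hypothetical.
# Assume memory is 15% of a budget laptop's build cost before the spike.
base_device_cost = 400.0   # euros, hypothetical
memory_share = 0.15        # hypothetical bill-of-materials share

for label, multiplier in [("after ~50% jump", 1.5),
                          ("if prices double again", 3.0)]:
    new_cost = (base_device_cost * (1 - memory_share)
                + base_device_cost * memory_share * multiplier)
    rise = (new_cost / base_device_cost - 1) * 100
    print(f"{label}: device cost {new_cost:.0f} euros ({rise:.0f}% higher)")
# after ~50% jump: device cost 430 euros (8% higher)
# if prices double again: device cost 520 euros (30% higher)
```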
AI Tax and the new memory scarcity
We reframe the debate: this is not a vague "supply network problem" but a policy problem that is redistributive by design. The shift of foundry and packaging priorities toward high-bandwidth, high-margin memory for AI servers is not a temporary hiccup; it is a market decision with distributional consequences. Major memory makers are consciously steering capacity and product roadmaps toward demand from cloud and AI vendors. That choice raises the effective price of the commodity DRAM and DDR5 modules that schools, households, and small labs buy. In practice, the market is imposing a new cost on education: higher per-device procurement costs, slower refresh cycles, and fewer options for low-cost computing labs. This is why "AI Tax" is a useful name: it captures the predictable transfer of scarce hardware value from broad public use into a narrow industrial application.
Figure 1: Memory price inflation is being driven by server demand: DRAM prices for AI-oriented server hardware rise faster and higher than PC memory, shifting costs downstream to consumer and educational devices.
The evidence is clear on scale. Market trackers and industry briefs documented a sharp and sustained rise in memory pricing through late 2025, and market-research firms revised their forecasts upward in early 2026. The upshot is that commodity memory, which once kept mid-range machines affordable, now commands prices that push entire system budgets higher. Schools that planned three-year refresh cycles are now delaying or cancelling orders; districts with thin capital budgets face longer device lifespans, which lead to slower software adoption and weaker learning outcomes. These are not abstract supply-chain effects: they are concentrated harms to communities that depend on scheduled, predictable hardware replacement to keep students online and empowered.
Because capacity cannot be expanded overnight, the structural response from memory suppliers will be gradual. Foundry and packaging steps that support HBM and server-grade DDR require capital and lead time measured in quarters and years, not weeks. Meanwhile, cloud and hyperscalers are bidding heavily and locking supply through multi-quarter contracts. The bidding and allocation dynamics will keep upward pressure on consumer-facing prices long enough to shape school procurement cycles for at least two years. The practical consequence is that every procurement decision now embeds a macroeconomic bet: pay up for current availability, or wait, knowing that waiting may not bring lower prices if server demand stays high. That choice reshapes access: short-term purchases will favor well-funded districts and departments, creating a widening gap in practical computing access that compounds other digital divides.
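That procurement bet can be written down in expected-cost terms. The sketch below uses hypothetical prices and scenario weights; the point is the structure of the decision, not the numbers.

```python
# The procurement bet in expected-cost terms. All numbers hypothetical.
price_now = 430.0  # euros per device today

# Scenarios for the price in two quarters if the district waits:
# (probability, price).
scenarios = [
    (0.5, 520.0),   # server demand stays high: prices keep climbing
    (0.3, 430.0),   # prices plateau
    (0.2, 380.0),   # supply catches up: prices ease
]

expected_wait = sum(p * price for p, price in scenarios)
print(f"buy now: {price_now:.0f} | expected cost if waiting: {expected_wait:.0f}")
# With these weights, waiting costs more in expectation (465 > 430).
```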
What the AI Tax means for education and access
First, the immediate effect is an upgrade freeze. District and university buyers report postponing orders and trimming spec targets. These freezes cause two harms: fewer devices per student, and older machines that cannot run up-to-date educational software or tools. For computer literacy, computational thinking, and hands-on STEM work, older machines are not simply slower; they constrain what teachers can assign and what skills students can practice. The erosion is invisible until a cohort arrives at a lab and finds the software will not run. The AI Tax thus functions as a stealth cut to the curriculum.
Second, the cost shift reaches software and cloud strategies. Schools that expected to offload heavy workloads to the cloud now face higher pass-through costs as providers price in their own higher capital costs. Higher server-side memory costs raise the marginal cost of hosted simulations, data-science labs, and adaptive training platforms. For schools already balancing licensing fees against hardware investments, the math changes: pay more for cloud services that keep older local machines viable, or invest in better endpoints and shrink recurring service budgets? Neither choice is attractive for budget-limited districts. The tactical responses already visible vary: extended device lifespans, conservative software rollouts, and a move toward lightweight web apps that run in low-memory environments. Each workaround is a second-best solution.
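One way to frame that choice is a simple three-year cost comparison. Every figure in the sketch below is hypothetical; under these particular assumptions the cloud path looks cheaper in year one but costs more over three years.

```python
# Three-year cost sketch: keep old machines plus cloud labs, versus buying
# better endpoints. Every figure is hypothetical; the structure is the point.
students = 100
years = 3

cloud_per_student_year = 160.0  # hosted lab seat; rises with server memory costs
old_device_upkeep = 40.0        # per device per year, repairs on an aging fleet

new_device = 520.0              # one-time purchase at post-spike prices
new_device_upkeep = 15.0        # per device per year

cloud_path = students * years * (cloud_per_student_year + old_device_upkeep)
endpoint_path = students * (new_device + years * new_device_upkeep)

print(f"cloud path: {cloud_path:,.0f} | endpoint path: {endpoint_path:,.0f}")
# cloud path: 60,000 | endpoint path: 56,500
# Year one alone: 20,000 (cloud) vs 53,500 (endpoints), so cash-strapped
# districts are pushed toward the path that costs more over time.
```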
Figure 2: As DRAM prices surge, the cost of a capable personal computer rises sharply, reversing years of declining affordability and narrowing access to modern computing.
Third, inequity amplifies. Well-funded universities and private schools can hedge against the AI Tax by buying early, stockpiling inventory, or contracting directly with suppliers. Public systems and community colleges face procurement rules, budget cycles and constrained cash flow. The result is geographic and socioeconomic stratification of computing capability that resembles earlier eras—when home computers were rare and specialized labs were the gateway to coding and computational careers. If left unchecked, the AI Tax will recreate that old pattern: those with money get the machines that teach the future; those without become passive consumers. This is not inevitable. It is a policy outcome we can prevent through aligning procurement, subsidies and industrial policy with educational priorities. The next section lays out concrete policy steps.
Policy prescriptions to blunt the AI Tax
First, intervene where market allocation skews public interest. Governments and consortia can prioritize memory capacity for educational and public-interest computing using targeted purchase agreements, strategic stockpiles and capital grants. A practical step is to set aside periodic allocations of commodity DRAM for public institutions via negotiated allotments with major suppliers. Such allotments need not be large to be effective; even a fraction of a supplier’s quarterly consumer-module production can stabilize school procurement pipelines and lower volatility for public buyers. This is a classic market-shaping intervention: use collective buying power to reduce transaction costs and prevent thin-pocketed buyers from being priced out. It is not charity; it is public infrastructure policy.
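A quick scale check suggests why a small allotment can go a long way. All quantities below are hypothetical placeholders, since actual supplier volumes and public-sector demand vary widely.

```python
# Scale check for a public-interest allotment. All quantities hypothetical.
quarterly_modules = 50_000_000     # supplier's consumer DRAM modules/quarter
allotment_share = 0.02             # 2% reserved for public education buyers

school_devices_per_quarter = 600_000  # public-sector device purchases/quarter
modules_per_device = 1

reserved = quarterly_modules * allotment_share
coverage = reserved / (school_devices_per_quarter * modules_per_device)
print(f"a {allotment_share:.0%} allotment covers {coverage:.1f}x "
      f"quarterly school demand")
# a 2% allotment covers 1.7x quarterly school demand
```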
Second, subsidize endpoint parity rather than cloud bills. Many proposals would expand cloud access to compensate for weak endpoints. That can work in part, but cloud services will transmit the AI Tax downstream as server and memory costs rise. A more durable approach is targeted device subsidies and rotating upgrade funds for disadvantaged districts, conditional on device-standard parity: guarantees that students have devices capable of running up-to-date educational software for the intended curriculum. A rotation model that replaces a fixed share of devices each year smooths the shock of a single large purchase and removes the incentive to postpone upgrades indefinitely.
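The arithmetic of a rotation model is easy to sketch: replacing a fixed share of devices each year pins down the steady-state average age of the fleet. The fleet size and replacement rates below are hypothetical policy choices.

```python
# Rotation-fund sketch: replace a fixed share of devices each year and watch
# the average fleet age settle. The replacement rate is a policy choice.
def average_age(replace_share: float, years: int = 20) -> float:
    ages = [0.0] * 1000               # hypothetical fleet, starts new
    for _ in range(years):
        ages = [a + 1 for a in ages]  # everything gets a year older
        k = int(len(ages) * replace_share)
        ages.sort(reverse=True)       # retire the oldest devices first
        ages[:k] = [0.0] * k
    return sum(ages) / len(ages)

for share in (0.10, 0.25, 0.33):
    print(f"replace {share:.0%}/year -> "
          f"steady-state average age {average_age(share):.1f} years")
# replace 10%/year -> average age 4.5 years
# replace 25%/year -> average age 1.5 years
# replace 33%/year -> average age about 1.0 year
```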
Third, promote memory-efficient pedagogy and software standards as transitional measures. Encourage edtech vendors to adopt low-memory modes and progressive-enhancement designs so that core learning tasks do not depend on the latest hardware. Procurement standards and grant conditions can spur this. In parallel, fund transitional hybrid solutions: modest local compute appliances that use efficient accelerators, or pooled labs that share modern hardware across districts. These interim measures buy time while longer-term industrial responses come online.
Fourth, rethink industrial incentives. Public policy should encourage a diversified memory supply chain and buffer capacity for consumer- and education-grade DRAM. That can include targeted incentives for fabs to maintain a share of commodity memory lines, support for packaging lines serving consumer modules, or R&D tax credits that reduce the cost of producing mainstream memory. Policymakers should also insist on transparent allocation practices when suppliers prioritize one customer segment; transparency enables public institutions to plan and respond.
Four critiques deserve answers. Some will argue that markets should decide and that intervention distorts efficiency. But education is not an ordinary consumer good; it is a public good with long-term social returns, and pure market allocation risks underproviding a service whose value compounds across decades. Others will say memory supply will expand and prices will settle. Perhaps eventually, but fab and packaging lead times are long, and in the meantime cohorts of students will have lost critical learning opportunities. A third critique is fiscal: funds are scarce. True, but the alternatives, diminished skill pipelines and higher future social costs, carry larger long-term fiscal burdens. Finally, some will worry about gaming the system. Allotments, rotation funds, and procurement standards can be designed with clear audit and sunset clauses to limit rent-seeking. In short, the legitimate concerns can be addressed; paralysis would be far worse.
A practical call
The AI Tax is a simple reality: memory is scarce, prices are spiking, and the burden falls on those least able to pay. We can treat this as a private market outcome or as a public policy problem. If we do nothing, the next few years will widen the gap between students who learn to build and those who merely consume. If we act, three modest moves will change the trajectory: coordinate buying power for public institutions, subsidize device parity rather than just cloud access, and impose disclosure and targeted incentives across the memory supply chain. These steps are practical, politically viable and, crucially, time-sensitive. The clock is not running on an abstract market cycle; it is running on school boards, budget cycles, and students who may miss the computational foundations of future careers. The AI Tax can be paid deliberately, through careful, equitable policy that spreads the cost and preserves access, or involuntarily, through lost opportunity and entrenched inequality. We should choose deliberately.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
AOL (2026). AI data centers are causing a global memory shortage. AOL Technology.
CNBC (2026, January 10). Micron warns of AI-driven memory shortages as demand for HBM and DRAM surges. CNBC Technology.
Counterpoint Research (2026). Memory prices surge up to 90% from Q4 2025. Counterpoint Research Insights.
Popular Mechanics (2026). Why AI is making GPUs and memory more expensive. Popular Mechanics Technology Explainer.
Reuters (2026, February 2). TrendForce sees memory chip prices rising as much as 90–95% amid AI demand. Reuters Technology.
Reuters (2026, January 22). Surging memory chip prices dim outlook for consumer electronics makers. Reuters World & Technology.
Scientific American (2026). The AI data center boom could cause a Nintendo Switch 2 memory shortage. Scientific American.
Tom's Hardware (2025). AI demand is driving DRAM and LPDDR prices sharply higher. Tom's Hardware.
TrendForce (2026). DRAM and NAND flash price forecast amid AI server expansion. TrendForce Market Intelligence.