
The Vanishing Middle of Software Work and What Schools Must Do About It

By Catherine Maguire

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summers in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

AI is collapsing routine “middle” software work as adoption soars
Schools must teach systems thinking, safe AI use, and verification-first delivery
Employers will favor small, senior-led teams; therefore, curricula must reflect this reality

The key number in computing education right now is 84. That’s the percentage of developers in 2025 who use or plan to use AI tools at work, with most professionals using them daily. Adoption is still rising, the habits are sticking, and the effects are real. A randomized trial showed that AI-assisted pair programming can cut the time for a specific coding task by roughly 56%, and broader studies of knowledge workers show significant improvements in output and quality on well-defined tasks. If routine programming can be done in half the time, often by people who are still learning, the job market will not balance out; it will split. Junior and mid-level tasks are being automated or absorbed into smaller teams led by senior professionals. Employers are signaling the same shift: tech job postings remain well below pre-pandemic levels, experience requirements are rising, and corporate leaders openly state that AI lets them accomplish more with fewer employees. The middle is disappearing, and education policy must decide whether to keep training for a shrinking middle or to redesign how students enter the field.

We must be clear about what is disappearing and what is not. Government and industry data indicate that the most noticeable job losses are in “programmer” roles, which involve converting specifications into code, rather than in broader “software developer” roles that encompass design, integration, product judgment, and collaboration with stakeholders. This distinction matters for schools because it relates to the skills that AI struggles to replace: scoping, decomposition, security, systems thinking, and the social aspects of software delivery. It also aligns with regional statistics that may seem contradictory at first but are consistent underneath: the EU still reports millions of ICT specialists employed and ongoing hiring gaps, but firms also report challenges in finding the right mix of senior skills, rather than more entry-level workers. In short, demand is moving up the ladder.

The Middle Is Collapsing, Not the Profession

The near-term market signals are stark. Tech job postings on Indeed are weak, down about a third from early 2020 levels after a significant pullback in 2023-2024. Where postings do exist, the bar has risen: roles seeking 2-4 years of experience dropped from 46% to 40% between mid-2022 and mid-2025, while postings requiring more than 5 years of experience rose from 37% to 42%. This is not a general “no more developers” situation; it is a “fewer average developers” reality. It reflects a production model in which a small senior team, equipped with reliable AI tools, accomplishes what used to take multiple junior staff members. The same trend is visible at the top of major software companies, where finance leaders openly discuss AI as a way to run leaner organizations.

We also have evidence from learning curves that explains why entry-level positions are the most exposed. In field tests with generative AI, less-experienced workers often see the largest productivity boost on well-defined tasks, an effect that can temporarily flatten parts of the experience gradient. If a novice with an AI tool can complete the same narrow task as a mid-level employee, the firm's logical choice is to cut mid-level roles or move them offshore. At the same time, these studies show uneven gains on more complex work, such as writing specifications, resolving ambiguous requirements, and handling problems without precedent. These tasks still require significant human involvement and higher skill levels. This is why routine programming roles are declining while higher-leverage developer work holds steady or even grows over the medium term. Projections from the U.S. Bureau of Labor Statistics still indicate double-digit growth for developers through 2034, even as “programmer” employment declines. The market is not disappearing; it is changing.

Figure 1: AI is now default, but trust lags—wide adoption coexists with active distrust, reinforcing “use with verification” in curricula.

The team structure is shifting, too. The modern technology stack allows a lead engineer to manage agents, code generators, and testing frameworks throughout the development process. GitHub’s Copilot RCT recorded a 55.8% time reduction on a specific JavaScript task; data from Octoverse reveals a rise in AI-related repositories and tools. Insights from leaders suggest that AI has become an integral part of daily workflows, rather than a novelty. As a result, “ticket factories” staffed by layers of average workers are being replaced by small, senior-led teams that oversee automation, manage architecture, and address risks like security, privacy, and governance.

A New Skills Bar for Schools and Employers

If the middle is disappearing, entry into the field must not rely on the tasks that are vanishing. The old model—introduction to computer science, followed by two years of basic ticket work—no longer fits the market. Stack Overflow’s 2025 data show nearly all developers have experience with AI tools; even Gen Z’s early career paths are being redefined around tools rather than just spending time on repetitive tasks. However, the same surveys reveal a trust gap: almost half of developers do not trust the accuracy of AI outputs, and many lose time fixing generated code. This combination—high usage with low trust—highlights the need for AI literacy, verification workflows, and secure integration habits. Education must shift from “using the tool” to “designing a process that guides the tool.”

Curricula should adapt in three practical ways. First, shift evaluation from isolated coding to complete delivery under constraints: students should define requirements with stakeholders, prompt responsibly, verify results through tests, and deliver minimal increments. Second, introduce systems thinking earlier, making architecture, interfaces, observability, and performance trade-offs understandable to first- and second-year students, not just in capstone projects. Third, formalize human-in-the-loop methods: code reviews, testing for model errors, and reproducible logs of prompts. These are not extras; they are essential tasks. Since trust is a primary issue, we should teach “explain-and-verify” as a skill, focusing on executable specifications, property-based tests, and static analysis alongside code generation. The aim is to produce fewer “average coders” and more junior systems engineers who can manage automation responsibly.
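To make “explain-and-verify” concrete, here is a minimal sketch of the kind of exercise such a course could grade, assuming Python with the hypothesis library; merge_sorted is a hypothetical function a student has accepted from an AI assistant, not a reference implementation:

```python
# "Explain-and-verify" in miniature: the specification is executable and the
# AI-generated code must satisfy it. Assumes the hypothesis package is
# installed; merge_sorted is a hypothetical function accepted from an AI
# assistant and submitted for review.
from hypothesis import given, strategies as st


def merge_sorted(a: list[int], b: list[int]) -> list[int]:
    """AI-suggested code under review."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]


@given(st.lists(st.integers()), st.lists(st.integers()))
def test_merge_matches_spec(a, b):
    # Property: merging two sorted lists equals sorting their concatenation.
    assert merge_sorted(sorted(a), sorted(b)) == sorted(a + b)
```

The grade attaches to the executable specification, the prompt log, and the passing property, not to who typed the loop.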

Employers must also rethink their design for early-career roles. Successful corporate experiments demonstrate that AI performs best when combined with guidelines and collaborative learning, including clear task-fit criteria, libraries of approved prompts, and structured peer coaching. Rotations should focus on integration, security reviews, and on-call simulations, rather than just completing tasks. Apprenticeships can evolve into “automation residencies” where new hires learn to connect tools, review outputs, and address unique challenges. If we do this, AI can help early-career talent advance faster in areas of the job that truly matter—ownership, judgment, and communication—rather than getting stuck on tasks that automation can easily take over.

Figure 2: A randomized trial shows Copilot users finished a standard coding task in 44.2% of the time—evidence for assessment that emphasizes verification, not raw keystrokes.

What Education Must Do Now

The policy direction should not be to resist the use of AI in the classroom. It should focus on enforcing real-world use with clear outcomes. We should require that accredited computing programs teach and assess responsible AI use from the start: students must disclose any assistance, maintain logs of prompts and tests, and explain how they verified results. Programs should place more weight on design quality, test coverage, and adaptability to change, rather than just counting lines of code. This aligns with the job market's signal that developers, not programmers, will create the most value. It also prepares students for the tools they will encounter immediately after graduation.

Funding should support cross-disciplinary build studios where education, health, climate issues, and public-sector partners present real problems. Students should work in small teams mentored by senior professionals, using AI freely but justifying each decision with automated tests and risk notes. The studios would publish open rubrics, datasets, and evaluations to boost quality across educational institutions. Since the EU reports both a large ICT workforce and continuous skill shortages, these studios should be regionally focused, addressing public needs, and open to apprentices and those seeking to upskill—not just degree-seekers. This approach will widen opportunities while avoiding training for tasks that are quickly disappearing.

Teacher development is the critical missing link. We need fast, practice-focused certification for “AI-integrated software teaching,” along with grants for redesigning test-heavy courses. The technology stack is essential: IDE plugins that log prompts and changes, CI pipelines that conduct static analysis and property-based tests, and dashboards that highlight insecure dependencies or data leaks. This is not about monitoring students; it is about professionalism. This is how we can transform the trust gap from a concern into a teachable skill: “never trust, always verify.”
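As an illustration of what such logging could capture, here is a sketch with invented field names, not any particular plugin's schema:

```python
# Sketch of a per-prompt verification record a course toolchain might keep.
# Field names are illustrative, not a real IDE plugin's schema.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class PromptLogEntry:
    prompt: str               # what the student asked the assistant
    files_changed: list[str]  # files touched by the accepted suggestion
    tests_run: list[str]      # test IDs executed after accepting the change
    tests_passed: bool        # did the verification step succeed?
    static_findings: int      # new linter/security warnings introduced
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


entry = PromptLogEntry(
    prompt="Refactor the CSV parser to stream rows instead of loading the file",
    files_changed=["parser.py"],
    tests_run=["test_parser.py::test_streaming"],
    tests_passed=True,
    static_findings=0,
)
print(json.dumps(asdict(entry), indent=2))  # appended to the course audit log
```

A CI job could then refuse to merge changes whose log entries show failing tests or new static-analysis findings.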

Critics may argue that this is just a passing trend and that hiring for junior positions will rebound. There is some truth to that; economic cycles do matter, and developer jobs are expected to keep growing over the next decade. However, the shift in job composition is genuine. U.S. data shows that programming roles have lost more than a quarter of their jobs in two years, even as developer positions remain steady. Job postings remain low, and experience requirements are increasing.
Meanwhile, leaders from companies like Microsoft and SAP describe AI as already woven into everyday operations and cost management. Relying on a return to pre-AI job structures is not wise; it is evasion. The best move is to focus on the tasks that are hardest to automate and most crucial to oversee.

A final practical step is to align assessment with the job markets where students will actually work. Use open-source contributions as graded projects: the Octoverse data indicate that AI-related projects are growing rapidly, and students should leave with a tangible record of genuine collaboration. Promote internships that resemble “integration sprints,” not just bug-fixing marathons. Evaluate what truly matters for employability in an AI-driven world: the ability to break down problems, understand risk, prioritize testing, manage tools, and communicate trade-offs with non-engineers. These are the habits that let small teams accomplish what larger groups once did, without overwhelming the system or the students.

A smaller team, a higher standard, a stronger school

We began with 84, a clear indicator that AI is now a standard part of software work. We should end with another number that keeps us grounded: 56, the percentage of time saved on a real coding task in a randomized trial of AI pair programming. When productivity gains of this magnitude reach companies, the market will not accommodate “average” workers completing routine tasks; it will favor those who can lead projects, integrate systems, and verify results. If education continues to prepare students for roles that are fading away, we will let them down twice: once by misreading the job market and again by failing to equip them with the habits that make AI effective and secure. The alternative is achievable. Teach students to take ownership of problems, to use AI openly and transparently, and to plan for both failure and success. The teams of the future will be smaller. Let our schools make them stronger.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

BCG / Harvard Business School (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper 24-013.
Brainhub (2025). Is There a Future for Software Engineers? The Impact of AI on Software Development. Brainhub Library.
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at Work. NBER Working Paper 31161.
Fortune (2025, Sept. 2). Microsoft CEO Satya Nadella reveals 5 AI prompts that can supercharge your everyday workflow.
GitHub (2022). Research: Quantifying GitHub Copilot’s Impact on Developer Productivity and Happiness. GitHub Blog.
GitHub (2024). Octoverse 2024: The State of Generative AI. GitHub Blog.
Indeed Hiring Lab (2025, July 30). The U.S. Tech Hiring Freeze Continues.
Indeed Hiring Lab (2025, July 30). Experience Requirements Have Tightened Amid the Tech Hiring Freeze.
MIT News (2023, July 14). A study finds that ChatGPT boosts worker productivity for specific writing tasks.
Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The Impact of AI on Developer Productivity: Evidence from GitHub Copilot. arXiv:2302.06590.
Stack Overflow (2024). Developer Survey—AI.
Stack Overflow (2025). Developer Survey—AI.
Stack Overflow Blog (2025, Sept. 10). AI vs Gen Z: How AI has changed the career pathway for junior developers.
U.S. Bureau of Labor Statistics (2025, Aug. 28). Employment Projections—2024–2034; Software Developers, QA Analysts, and Testers. Occupational Outlook Handbook.
Washington Post (2025, Mar. 14). More than a quarter of computer-programming jobs just vanished. What happened?
Yahoo Finance (2025). CFO of $320 billion software firm: AI will help us “afford to…” (SAP workforce comments).
Eurostat (2024–2025). ICT specialists in employment: Towards Digital Decade targets for Europe.
ITPro & TechRadar (2025). Developers adopt AI while trust declines; 46% don’t trust AI outputs.


"No Reason, No Model": Making Networked Credit Decisions Explainable

"No Reason, No Model": Making Networked Credit Decisions Explainable

By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

Network credit models aren’t “inexplicable”—they can and must give faithful reasons
Adopt “no reason, no model”: require per-decision reason packets and auditable graph explanations
Regulators and institutions should enforce this operational XAI so that denials are accountable and contestable

Consider a striking statistic: in 2023, the denial rate for Black applicants seeking conventional, owner-occupied home-purchase mortgages was 16.6%, nearly three times the 5.8% rate for non-Hispanic White applicants. These figures, reported under HMDA across millions of applications, underscore a significant disparity. As lenders shift from traditional scorecards to deep-learning models that reason over relationships, denials increasingly stem from network patterns—such as co-purchases, shared devices, and merchant clusters—rather than from a few familiar underwriting factors. Some institutions may label these systems 'inexplicable.' However, the law does not accept complexity as an excuse. U.S. creditors are still required to provide specific reasons for denials, and Europe's AI Act now categorizes credit scoring as a 'high-risk' activity, introducing phased requirements for logging and transparency. The central policy question is not accuracy versus explanations. It is whether we allow 'the network made me do it' to shield lenders from explanation, responsibility, and learning—especially in education-related credit markets that determine who completes degrees, who starts programs, and whether community colleges keep operating.

We should change how we discuss this issue. Graph neural networks and other network models are not magical; they consist of linear transformations and simple nonlinear functions that communicate across connections. Near any given decision, these systems behave like nested, piecewise-linear regressions, which means the critical variables and subgraphs can be identified, even if it takes more work than stating "debt-to-income ratio too high." The real issue is operational: will lenders build systems that translate these technical explanations into clear, understandable reasons at the time of decision, or will we accept opacity because it is easier to claim the networks cannot be made clear? The urgency of that choice is hard to overstate.
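To see the locally piecewise-linear point in practice, consider a minimal sketch of a local surrogate fitted around a single decision; score_fn and the feature names are placeholders, not any lender's model:

```python
# Generic LIME-style sketch: fit a weighted linear surrogate around one
# decision and read off the locally influential features. score_fn and the
# feature names are placeholders, not a real lender's model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
feature_names = ["debt_to_income", "txn_concentration", "graph_risk_degree"]


def score_fn(X):
    # Stand-in for the deployed model's risk score (higher = riskier).
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 - 0.5 * X[:, 2])))


x0 = np.array([0.8, 0.6, 0.1])                        # the denied applicant
samples = x0 + rng.normal(0.0, 0.05, size=(500, 3))   # perturb locally
weights = np.exp(-np.sum((samples - x0) ** 2, axis=1) / 0.01)

surrogate = Ridge(alpha=1e-3).fit(samples, score_fn(samples), sample_weight=weights)
for name, coef in sorted(zip(feature_names, surrogate.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: local weight {coef:+.3f}")
```

The locally weighted coefficients are exactly the raw material that an adverse-action notice would translate into consumer language.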

From Rule-Based Reasons to Graph-Based Reasons

For many years, adverse-action notices resembled a short checklist. When a loan was denied, institutions typically cited three reasons: insufficient income, a short credit history, and low collateral value. This worked because older underwriting models were additive and largely separable by feature. Network models change the process, not the obligation. When a GNN flags a loan, it might be because the applicant is two hops away from a cluster of high-risk merchants, or because the transaction patterns mimic those that failed at a specific bank-merchant-device combination. Those are still reasons. To say "the model is inexplicable" confuses workflow choices with mathematical impossibility. Regulation B under ECOA requires creditors to provide "specific reasons" for denials, and the CFPB has repeatedly warned that vague or generic reasons do not meet the law's standard—even for AI. A regulator should hear "GNN" and say, "Fine—show me the subgraph and features that influenced the score, and translate them into a consumer-friendly reason."

Technically, we know how to do this. Methods like LIME and SHAP fit local surrogate models that approximate the complex model in the vicinity of a single decision. Integrated Gradients attributes a prediction to input features along a path from a baseline. For graphs, GNNExplainer identifies a compact subgraph and feature mask that most influence the prediction, while SubgraphX uses Shapley values to find an explanatory subgraph with measured fidelity. Recent benchmarks in Scientific Data show that subgraph-based explainers produce more accurate and less misleading explanations than gradient-only methods, and newer frameworks such as GraphFramEx aim for standardized evaluation. None of this is free—searching for subgraphs can take several times longer than a single forward pass—but cost and complexity are not impossibility. Compliance duties do not shrink because GPUs are expensive.
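For graph models specifically, the mechanics might look like the following hedged sketch, assuming a recent PyTorch Geometric release (2.3 or later) that ships the Explainer and GNNExplainer interface; model, x, edge_index, and node_id are placeholders for a lender's own GNN and graph data, and exact argument names may differ across versions:

```python
# Hedged sketch only: extracting node-feature and edge masks for a single
# applicant node with PyTorch Geometric's explainability wrapper.
# model, x, edge_index, and node_id are placeholders, not real objects.
from torch_geometric.explain import Explainer, GNNExplainer

explainer = Explainer(
    model=model,                          # trained GNN risk model (placeholder)
    algorithm=GNNExplainer(epochs=200),
    explanation_type="model",             # explain the model's own prediction
    node_mask_type="attributes",          # which node features mattered
    edge_mask_type="object",              # which edges mattered
    model_config=dict(mode="binary_classification",
                      task_level="node",
                      return_type="probs"),
)
explanation = explainer(x, edge_index, index=node_id)
top_edges = explanation.edge_mask.topk(10)  # candidate explanatory subgraph
```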

Figure 1: Even before networked models, denial rates were uneven; explainability duties exist to make reasons specific and contestable at the individual level, not to average away disparities.

Making Networks Clear: Auditable XAI for Credit

The primary policy objective should be operational clarity: reasons that are faithful to the model and comprehensible to people, generated at the time of decision, and retained for audits. A practical framework has three parts. First, every high-stakes model should generate a structured 'reason packet' that pairs feature attributions with an explanatory subgraph, identifying the key nodes, edges, and motifs that influenced the decision. Second, enforce 'reason passing' throughout the credit stack: if a fraud or identity-risk model feeds the underwriting process, it must pass its reason packet along, so the lender cannot simply attribute the decision to unnamed upstream risks. Third, compile reason packets into compliant notices: map features to standard reason codes where possible and add brief, plain-language network explanations where necessary (for instance, 'Recent transactions heavily concentrated in a merchant-device group linked to high default rates'). Vendors already provide XAI toolchains to support this; regulators should mandate such systems before lenders deploy network models at scale.
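A reason packet can be as simple as a structured record; the following sketch uses illustrative field names, not a regulatory standard:

```python
# Illustrative "reason packet" schema: pair feature attributions with the
# explanatory subgraph so downstream systems can pass reasons along with
# scores. Field names and values are a sketch, not a standard.
import json
from dataclasses import dataclass, asdict


@dataclass
class ReasonPacket:
    decision_id: str
    model_id: str
    score: float
    feature_attributions: dict[str, float]  # e.g. SHAP/IG values per feature
    subgraph_edges: list[tuple[str, str]]   # anonymized node IDs, top-k edges
    fidelity: float                         # score drop when subgraph removed
    reason_codes: list[str]                 # mapped consumer-facing codes


packet = ReasonPacket(
    decision_id="APP-2025-0042",
    model_id="gnn-credit-v3",
    score=0.81,
    feature_attributions={"txn_concentration": 0.34, "debt_to_income": 0.22},
    subgraph_edges=[("applicant", "merchant_group_17"),
                    ("merchant_group_17", "device_cluster_4")],
    fidelity=0.29,
    reason_codes=["R23: transaction pattern concentration"],
)
print(json.dumps(asdict(packet), indent=2))  # logged for audit, then mapped to a notice
```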

Methodological details matter. How reliable are these reasons? Subgraph-based explainers can be validated by removing the identified subgraph and checking how much the model's risk score declines. Auditors should sample decisions, run these validations, and confirm that reasons are not merely plausible but hold up under counterfactuals. How fast can this operate? SubgraphX has been reported to take several times the base inference time; in practice, lenders can use quicker explainers for every decision and reserve heavier audits for a statistically valid sample, within the strict timelines that adverse-action notices already impose. How do we protect privacy? Reason packets should be generalized for consumer notices, using terms like 'merchant category group' rather than specific store names, while full detail is preserved for regulators. The EU AI Act already mandates substantial logging and documentation for high-risk systems. Accuracy remains fundamental, but accuracy without reliable, testable reasons is unsubstantiated—and that is a governance failure.
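The subgraph-removal audit described above fits in a few lines; model_score is a placeholder for the deployed model, and the edge lists stand in for a graph and a cited explanation:

```python
# The audit in miniature: remove the cited subgraph and check that the risk
# score actually drops. model_score is a placeholder for the deployed model;
# edges are (source, target) pairs such as those in a reason packet.
def explanation_fidelity(model_score, features, edges, cited_edges, min_drop=0.10):
    """Return the score drop after ablating the cited edges, and whether the
    explanation clears the audit threshold."""
    full = model_score(features, edges)
    ablated = model_score(features, [e for e in edges if e not in set(cited_edges)])
    drop = full - ablated
    return drop, drop >= min_drop

# Hypothetical usage with a packet that cites two edges:
# drop, ok = explanation_fidelity(model_score, applicant_features, all_edges,
#                                 [("applicant", "merchant_group_17"),
#                                  ("merchant_group_17", "device_cluster_4")])
# if not ok: flag the packet -- the cited subgraph did not drive the score.
```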

Figure 2: Operational XAI is not “impossible”—subgraph-level explainers cost more compute than local gradients, but the overhead is measurable and manageable in production.

The regulatory framework is already in place. In the U.S., the CFPB has clarified that companies using complex algorithms must provide specific, accurate reasons for adverse actions; "black box" complexity is not an excuse. SR 11-7's guidance on model risk still applies: banks must validate models, understand their limits, and monitor performance changes—responsibilities that extend naturally to explainability. Europe's AI Act entered into force in August 2024, with its obligations phased in over the following years; credit scoring is categorized as high-risk, triggering requirements for risk management, data governance, logging, transparency, and human oversight, with critical milestones set for 2025-2027. NIST's AI Risk Management Framework gives organizations a structured way to fold XAI controls into existing policies. The direction is clear: if you cannot explain it, you should not use it for consequential decisions, such as credit for students, teachers, and school staff.

Education stakeholders have a unique role because credit influences access, retention, and school finances. Rising delinquencies and stricter underwriting—seen in the first ongoing drops in average FICO scores since the financial crisis—push for quicker risk decisions in student lending, tuition payment plans, and teacher relocation loans. Faculty and administrators can promote model understanding in their curricula, develop credit-risk projects where students implement and audit GNNs, and collaborate with local lenders to test reason-passing systems. Schools that manage emergency aid or income-share agreements should request decision-specific reason packets from vendors rather than just receiving PDFs of ROC curves. Regulators and accreditors can encourage a shared set of reason codes that covers patterns from graphs without becoming vague. If we teach students to ask "why," our institutions should demand the same when an algorithm says "no."

A Practical Mandate: No Reason, No Model

The guiding principle for policy and procurement should be this: no model should influence a credit decision unless it can provide reasons that people can understand, contest, and learn from. "No reason, no model" is more than a phrase; it is a compliance standard that can be incorporated into contracts and regulatory examinations. Lenders would ensure that every model in the decision chain generates a reason packet, that these packets are stored and auditable, and that consumer-facing notices clearly communicate those packets in terms aligned with the model's logic. Regulators would verify a sample of denials by independently running explainers and counterfactuals to confirm the accuracy of the bank's system. If removing the identified subgraph does not alter the score, then the "reason" is not a valid reason. This method respects proprietary models while rejecting obscurity as a business practice.
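An examiner's sampling check could be scripted along these lines; recompute_fidelity is a hypothetical helper wrapping the regulator's own explainer-and-counterfactual run, and the thresholds are chosen only for illustration:

```python
# "No reason, no model" as an examination gate (illustrative thresholds).
# recompute_fidelity is a hypothetical helper that re-runs the explainer and
# counterfactual for one denial record and returns the observed score drop.
import random


def audit_model(denials, recompute_fidelity, sample_size=200,
                min_drop=0.10, min_pass_rate=0.95):
    sample = random.sample(denials, min(sample_size, len(denials)))
    passed = sum(1 for d in sample if recompute_fidelity(d) >= min_drop)
    pass_rate = passed / len(sample)
    return {"pass_rate": pass_rate,
            "approved_for_decisioning": pass_rate >= min_pass_rate}
```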

Predictably, there will be objections. One is cost: subgraph explanations take longer to compute, and building the necessary systems is complex. Yet compliance costs are not optional, and the runtime of advanced explainers is manageable—seconds, not days—especially if lenders tier their strategy: quick local attributions for every decision and deeper subgraph audits for a statistically valid sample. Another concern is intellectual property, the argument that revealing subgraphs discloses trade secrets. This is a distraction: consumer notices do not need to show raw graphs, they need to deliver clear and faithful statements, and regulators can access the full packet under confidentiality agreements. The final concern is performance: some argue that enforcing explainability might lower accuracy. But benchmarks indicate that reliable explanations lead to better learning, and nothing in ECOA or the AI Act permits trading people's right to reasons for a slight improvement in AUC. The burden of justification falls on those who resist giving reasons, not on those who demand them.

What should we do now? Regulators should issue guidance that puts "no reason, no model" into practice for credit and adjacent decisions, with clear testing procedures and sampling plans, and should coordinate with NIST's AI RMF to specify requirements for documentation, resilience checks, and reporting of explanation failures, not just prediction errors. Lenders should publish model cards that include explanation-fidelity metrics and subgraph validation tests, and commit to reason-passing in vendor agreements. Universities and professional schools should establish "XAI for finance" programs that audit real models under NDAs and build open-source reason-packet frameworks. Civil society can help consumers challenge denials using the logic in those packets, translating complex graphs into practical steps ("spread transactions across categories," "avoid merchant groups known to be associated with charge-offs"). This is governance that educates as it decides.

A 16.6% denial rate in a significant market segment is more than a number; it reflects where power sits. Network models can channel that power through connections too subtle for human observation, which is why we must insist on explanations that reduce complexity to accountable language. U.S. law already mandates this. European law is phasing it in. The science of explainability—local surrogate models, attributions, and subgraph identification—makes it achievable. When lenders claim "the network is inexplicable," they are not describing an immutable fact; they are choosing convenience over rights. We can choose differently. We can demand reason packets, reason passing, and accountable notices, and we can train the next generation of data scientists and regulators to build and evaluate them. If an organization insists that its model cannot provide explanations, the conclusion is straightforward: that model should not be in the market. No reason, no model.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Agarwal, C., et al. (2023). Evaluating explainability for graph neural networks. Scientific Data. https://doi.org/10.1038/s41597-023-01974-x.
Consumer Financial Protection Bureau. (2022, May 26). The CFPB acts to protect the public from black-box credit models that use complex algorithms.
Consumer Financial Protection Bureau. (2023, Sept. 19). Guidance on credit denials by lenders using artificial intelligence.
Consumer Financial Protection Bureau. (2024, July 11). Summary of 2023 data on mortgage lending (HMDA).
European Commission. (2024–2026). AI Act regulatory framework and timeline.
FICO. (2024, Oct. 9). Average U.S. FICO® Score stays at 717.
Lundberg, S., & Lee, S.-I. (2017). A unified approach to interpreting model predictions (SHAP).
Lumenova AI. (2025, May 8). Why explainable AI in banking and finance is critical for compliance.
National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0).
Skadden, Arps. (2024, Jan. 24). CFPB applies adverse action notification requirement to lenders using AI.
U.S. Federal Reserve. (2011). SR 11-7: Supervisory Guidance on model risk management.
Ying, R., et al. (2019). GNNExplainer: Generating explanations for graph neural networks.
Yuan, H., et al. (2021). On explainability of graph neural networks via subgraph explorations (SubgraphX).
Turner Lee, N. (2025, Sept. 23). Recommendations for responsible use of AI in financial services. Brookings.


When “looking through” looks away: inflation’s surprise, welfare loss, and what schools needed from monetary policy

Forecast errors turned a supply shock into larger welfare losses; “look-through” amplified them
Make look-through state-contingent with public shock decompositions and automatic triggers
Shield schools via indexed budgets, pooled energy hedging, and efficiency investments that cut volatile costs