What We Owe the Future

William MacAskill

Published: March 5, 2026

Author: Shashank Heda, MD







What We Owe the Future

William MacAskill — A Micro Reading Analysis

Genre: Civilizational & Long-Horizon Futures

Author: Shashank Heda, MD

Location: Dallas, Texas


Who Should Read This?

  • Policy architects and governance leaders
  • AI researchers and technology stewards
  • Philosophers and ethics practitioners
  • Citizens invested in civilizational futures

Why Should They Read This?

  • Reframes moral obligation across time
  • Exposes catastrophic risks we tolerate
  • Challenges short-termism in governance
  • Provides an actionable ethical scaffold

1. The Primary Hypothesis

MacAskill’s foundational proposition is deceptively simple, almost disarmingly so: positively influencing the long-term future is the defining moral priority of our time. Three premises anchor this claim. Future people count morally — their suffering and joy carry no less weight simply because they do not yet breathe. There could be unfathomably many of them — if humanity survives even a fraction of its potential lifespan, present-day humans represent a vanishingly small percentage of all who will ever live. And we can, right now, shape their world for better or worse.

This is not speculative abstraction. It is an ethical architecture built on a structural absence that most moral philosophy simply ignores: we have no governance layer for the unborn. MacAskill’s longtermism is, at its root, the construction of that missing layer. The analogy he deploys stays with you — if you were hiking and dropped a glass bottle on a trail, would it matter whether a child cuts herself on the shards tomorrow or in a century? Harm is harm. The temporal distance changes nothing about the wound.

2. Ten Things to Know and Why They Matter

  • Temporal Impartiality: Distance in time is morally equivalent to distance in space. If geography doesn’t diminish moral worth, neither should chronology.
  • The Sheer Scale of the Future: If humanity lasts even one million years — a typical mammalian species’ run — 99.5% of all human life lies ahead of us. We are civilizational infants.
  • Value Lock-In: Values entrenched through technology, especially AI, could persist permanently. History shows moral values are contingent, not inevitable. Slavery’s abolition was not predetermined; it was fought for by remarkably few people.
  • Artificial General Intelligence as Inflection Point: AGI could either amplify human flourishing or freeze authoritarian values into permanent dominion. The governance vacuum around AI is the most dangerous structural absence we face.
  • Engineered Pandemics: Bioweapons designed through synthetic biology represent a catastrophic extinction pathway, yet its prevention remains woefully underfunded in global security priorities.
  • Civilizational Resilience: MacAskill argues civilization is more resilient than we assume — collapse is recoverable. However, permanent stagnation or extinction is not. The distinction is critical.
  • Technological Stagnation: Centuries of stalled progress make extinction more probable and could yield a society governed by worse values than those we hold today. Stagnation is as corrosive as collapse.
  • Moral Contingency: The moral norms we consider settled are accidents of history. If the tape were replayed, different values might dominate. That fragility demands active stewardship, not complacency.
  • Early Plasticity, Later Rigidity: Civilizational values are moldable now. They may not be tomorrow. We are living in the window — perhaps the only window — when trajectory changes are still possible.
  • Career and Donation as Moral Instruments: Your most consequential moral decision is not what you consume — it is what you choose to do with your working life and where you direct resources.

3. What It Teaches Us for Current Challenges

We are building the most powerful technologies in human history — generative AI, synthetic biology, autonomous weapons — with governance architectures designed for the last century. MacAskill’s framework exposes this as a structural failure, not merely a policy lag. The absence of intergenerational governance is not an oversight. Entire civilizations could fall through it.

Consider AI regulation today. The debate fixates on near-term harms: deepfakes, job displacement, algorithmic bias. These matter. But MacAskill forces a harder question — what happens when the values embedded in these systems become permanent? When they calcify into the operating logic of institutions, economies, militaries? The Maginot Line analogy applies here with unsettling precision: we are fortifying against yesterday’s threat while the real danger leapfrogs every barricade we build.

The book also recalibrates how we think about climate change. Not as the terminal crisis — MacAskill argues humanity would likely survive even extreme warming — but as an existential risk factor that compounds every other vulnerability. A world strained by ecological degradation is a world less capable of navigating the truly irreversible threats.

4. Implications and Impact If We Ignore

The sequelae of indifference are not hypothetical — they are structural. Ignore AI value lock-in, and we risk a future governed permanently by whatever ideology happens to control AGI first. Ignore engineered pandemic preparedness, and a single act of bioterrorism could collapse global systems beyond recovery. Ignore stagnation, and we consign future generations to a slow erosion of capacity — not dramatic extinction but the quiet death of possibility.

The most pernicious implication? We normalize the abandonment of future people as morally acceptable. We already do this with climate policy, with national debt, with resource extraction. MacAskill’s argument strips away the rationalizations: temporal distance is not moral permission to harm. If we would not poison a child across the ocean, we cannot poison one across centuries and call it policy.

5. Advantages of Resolving the Issues

Adopting a longtermist governance orientation does not require abandoning present-day priorities. That is the facile objection. What it demands is calibration — the kind of evaluative discipline that asks not merely “what works now?” but “what survives contact with the next century?”

The advantages are architectural. Robust AI governance frameworks created now could prevent catastrophic value lock-in. Investments in biosecurity today are orders of magnitude cheaper than civilizational recovery after a pandemic engineered to resist every countermeasure. Institutional designs that incorporate intergenerational accountability would force a structural discipline on resource allocation that short-term electoral cycles cannot produce.

And at the individual level — perhaps the most resonant point — choosing a career oriented toward long-term impact, directing resources toward neglected existential risks, and strengthening institutions that safeguard moral progress are not acts of sacrifice. They are acts of stewardship. The return is not personal. It is civilizational.

6. What Should Be Our Civilizational Collective Memory?

We remember those who built cathedrals they would never see completed, who drafted constitutions for descendants their authors would never meet. Sanatan Dharma itself carries this temporal architecture — kartavya, the duty that extends beyond a single lifetime, the recognition that dharmic action seeds outcomes across generations.

MacAskill is constructing the secular equivalent of this ancient insight. The civilizational memory we must encode is this: we are ancestors now. Every governance architecture we build, every technology we unleash, every value we entrench — these are not our possessions. They are inheritances being written in real time. The question is not whether future generations will judge us. It is whether they will exist to judge at all.

The Closing Thought

Humanity’s written history spans five thousand years. The unwritten future could stretch for millions — or terminate tomorrow. MacAskill’s longtermism is not prophecy; it is diagnostic architecture for a civilization that has never built a governance layer for its own continuity. The foundational question is whether we are laying it with the discipline of architects — or the indifference of sleepwalkers.



Organization: Raanan Group | Micro Reading Book Club