Why Someone Should Read Funnell and Rogers

Author: Shashank Heda, MD
Location: Dallas, Texas
Published: March 5, 2026



Who This Article Is For

  • Anyone designing interventions — nonprofit leaders, public health practitioners, social entrepreneurs, development professionals — who sense that good intentions alone don’t translate to durable outcomes, and who are looking for frameworks that actually work in practice
  • People who measure outcomes but struggle to understand why programs succeed or fail, or who’ve inherited logic models that feel more like bureaucratic ritual than actual thinking tools that illuminate causal pathways
  • Evaluators exhausted by backward-looking assessments that generate mountains of data without producing actionable insight, and funders trying to distinguish genuine impact from well-crafted narratives that look impressive on paper but lack structural coherence
  • Thinkers uncomfortable with treating social programs like controlled physics experiments — recognizing that context, variability, feedback loops, and emergence matter more than linear causality — and seeking architecture that accommodates complexity

Why You Should Read It

  • The book addresses a foundational problem most organizations won’t admit: they act without explicit theories of causation, then compensate through measurement. This produces fragile programs, misleading conclusions, and evaluation theater that satisfies funders without generating learning.
  • It provides frameworks that actually connect intention to impact — not as abstract philosophy but as practical diagnostic architecture for navigating complexity, anticipating downstream effects, identifying failure modes before they manifest, and building shared mental models across chronically misaligned stakeholders.
  • You’ll learn to distinguish authentic altruism from its superficial imitation, replace box-and-arrow ritual with disciplined causal reasoning, and transform well-meaning initiatives into adaptive systems genuinely capable of learning from both success and failure — which is rarer than it should be.

Altruism and the Architecture of Impact

Altruism moves through human action like an underground current. It shapes decisions across scales, from individual gestures to institutional commitments spanning decades. The impulse operates identically whether channeled through profit-seeking ventures or nonprofit missions. Labels change. The underlying purpose doesn’t.

We separate commercial enterprise from social good, imagining they follow different logics. That distinction collapses under examination. Beneath surface differences lies shared terrain: translating intention into verifiable outcome. What unites them isn’t sentiment — it’s structure. The architecture connecting what we mean to accomplish with what actually happens.

Funnell and Rogers confront this directly in Purposeful Program Theory. Their work targets the thread linking action to impact, which operates through mechanisms that remain invisible until deliberately examined.

The Core Problem

Initiatives exist without clear understanding of how or why they’re supposed to work. Organizations launch interventions, mobilize stakeholders, allocate resources — all while operating on implicit assumptions about causation that remain unexamined, inconsistent across participants, or grounded in tradition rather than evidence.

The symptoms manifest predictably: programs lacking articulated theory, actions undertaken without causal clarity, measurement deployed to mask uncertainty rather than illuminate mechanism. When questioned, defenders point to activity — meetings held, materials distributed, training sessions conducted. But activity isn’t explanation. Busyness obscures the absence of structural logic.

Most initiatives begin with genuine concern. The breakdown occurs at causal reasoning. How exactly will this intervention produce that result? Through which mechanisms? Under what conditions? These questions get deferred, assumed away, or answered with platitudes that sound sophisticated but specify nothing.

Evaluation Done Backwards

The typical sequence operates backwards: design the program, implement it, collect data on outputs, assess outcomes at endline. Missing from this sequence is any interrogation of the program theory before deployment.

The question is not whether the program is well-intentioned. It is whether, given what we know about how systems work, the causal chain is defensible. What conditions must hold for the intervention to succeed? Where might it fail despite competent execution?

Answering these questions requires discipline — acknowledging uncertainty, identifying assumptions, consulting evidence. It’s intellectually demanding work that produces no immediate deliverables, so it gets skipped. Organizations move straight to implementation, deferring epistemic clarity until evaluation, by which time resources are spent and incentives aligned around defending the program regardless of what evidence emerges.

The consequence? Data accumulates without generating insight. Evaluations report statistically significant effects without explaining the mechanisms that produced them. When programs fail, nobody learns why, because the causal architecture was never specified. When they succeed, replication becomes guesswork.

Logic Models as Ritual

Most organizations have logic models — boxes connected by arrows, inputs flowing to outputs flowing to outcomes. The diagram satisfies funders, appears in proposals, gets filed with compliance documentation. Then implementation begins and the model is never consulted again.

The model was created as bureaucratic artifact, not thinking tool. Boxes represent aspirations rather than testable hypotheses. Arrows signify sequence but don’t specify causation. The structure implies linearity where reality operates through feedback, context-dependence, and emergent effects.

This creates false confidence. Organizations believe they’ve done the intellectual work when they’ve produced a diagram. The real questions remain unasked: How does X actually cause Y? For whom? Under what conditions? What would constitute evidence against this theory?
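To make the contrast concrete, here is a minimal sketch in Python. Everything in it is invented for illustration and is not drawn from the book: the same link, first as bare sequence, then carrying the content that would make it testable.

    # Illustrative only: a logic-model link as ritual, then as reasoning.

    # Ritual form: boxes and arrows assert order, nothing else.
    ritual_model = ["inputs", "activities", "outputs", "outcomes"]

    # Reasoning form: each link answers the questions the diagram hides.
    reasoned_link = {
        "cause": "bed-net distribution",
        "effect": "malaria decline",
        "mechanism": "nets are hung and slept under, interrupting transmission",
        "for_whom": "households in high-transmission areas",
        "conditions": "nets arrive intact and are used nightly",
        "counter_evidence": "nets diverted to other uses, or nightly use drops",
    }

A dictionary is not a methodology, but the exercise of filling in those fields is exactly the work the ritual diagram lets organizations skip.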

The Linearity Trap

The most seductive design error: treating social programs as if they operated through linear causality. If we deliver X, Y will follow. Train teachers, students learn more. Distribute bed nets, malaria declines.

Sometimes this holds. Often it doesn’t: the assumed mechanism fails to operate as expected, operates only under specific conditions, or produces the intended effect while triggering compensatory responses that neutralize it.

Social programs are dynamic, context-sensitive systems operating through multiple pathways simultaneously. Effects emerge from interaction between intervention and environment. Outcomes depend on who receives the intervention, what else is happening concurrently, how implementation adapts. Treating this complexity as physics generates programs that look rigorous on paper but prove fragile in practice.

From Ritual to Reasoning

Funnell and Rogers offer a discipline of thought — approaching program design with epistemic clarity rather than performance of rigor. The shift moves from pretending certainty through measurement to acknowledging uncertainty through explicit reasoning.

This means grounding action in testable theories of change. Not aspirational statements, but falsifiable claims about causal mechanisms. If X, then Y — because of Z, through W pathway, contingent on V conditions. Each element specified clearly enough that evidence could prove it wrong.
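One way to hold a claim to that standard is to write it down as a structured object that cannot be instantiated without every element. The Python sketch below is illustrative, not from the book; every field name and example value is an assumption made up for this article.

    from dataclasses import dataclass

    @dataclass
    class CausalClaim:
        """One falsifiable link in a program theory (illustrative sketch)."""
        intervention: str      # X: what we do
        outcome: str           # Y: what we expect to change
        mechanism: str         # Z: why X should produce Y
        pathway: str           # W: the route the effect travels
        conditions: list[str]  # V: what must hold for the link to operate
        disconfirmer: str      # the observation that would refute the claim

    claim = CausalClaim(
        intervention="in-service coaching for primary teachers",
        outcome="improved student literacy scores",
        mechanism="coaching changes daily instructional practice",
        pathway="teacher behavior -> classroom reading time -> student skill",
        conditions=[
            "coached teachers remain in post through the year",
            "class sizes allow the new practice",
            "assessments are sensitive to the targeted skills",
        ],
        disconfirmer="practice unchanged, or practice changes but scores do not",
    )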

It means treating program theory as living architecture. The initial model is hypothesis. Implementation tests it. Evidence refines it. Evaluation interrogates whether effects occurred through the theorized mechanism. Learning accumulates as progressive refinement of causal understanding.

And it means building shared mental models before implementation — getting funders, implementers, evaluators, and policymakers aligned on what we’re claiming will happen, how, for whom, and under what conditions.

Why This Matters

The accountability movement has intensified demand for evidence: impact evaluations, randomized trials, data dashboards, all aimed at demonstrating that programs work. This is progress, but it carries risk.

When measurement outpaces understanding of mechanism, we get sophisticated techniques applied to poorly theorized interventions. Statistical significance without causal clarity. Effect sizes that don’t reveal which components mattered. Replication attempts that fail because contextual conditions weren’t identified.

The corrective: rigorous thinking about causation must precede rigorous measurement. Without explicit program theory, evaluation becomes backward — measuring endpoints without understanding the system that produced them.

The Signature Insight

Organizations that act without epistemic clarity about causation and then compensate through measurement are building on sand. The foundation is reasoning, not data collection.

This reverses conventional practice, which treats design as easy and evaluation as sophisticated. Design — when done properly — is the intellectually demanding work. It requires specifying mechanisms, identifying assumptions, consulting evidence, acknowledging uncertainty, building falsifiable models.

Evaluation becomes clearer once this foundation exists. You’re testing whether effects occurred through the theorized mechanism, for the right population, under specified conditions. When they don’t, you learn which part of theory was wrong. When they do, you understand what conditions enabled success.
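As a toy illustration of that difference (the chain and all names are invented, not taken from the book), an evaluation grounded in an explicit theory can locate a failure rather than merely record it:

    def diagnose(conditions_held: bool, mechanism_fired: bool,
                 outcome_improved: bool) -> str:
        """Locate which part of a single-link theory the evidence implicates."""
        if not conditions_held:
            return "Enabling conditions absent: the theory was never truly tested."
        if not mechanism_fired:
            return "Mechanism failed: the intervention did not change behavior."
        if not outcome_improved:
            return "Pathway broken: behavior changed but the outcome did not."
        return "Evidence consistent with the theorized mechanism."

    # Training ran under the right conditions and practice changed,
    # yet scores stayed flat: interrogate the pathway, not the mechanism.
    print(diagnose(conditions_held=True, mechanism_fired=True,
                   outcome_improved=False))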

That’s what Funnell and Rogers provide — a way of thinking connecting intention to impact through explicit causal reasoning. Structure that serves purpose. Discipline that generates insight.

For anyone committed to creating change that lasts: clarity before action, reasoning before measurement, understanding before scale.

