
Decision Tree

Branching outcomes and probabilities

Best for: evaluating branching choices
Time: 30–60 min
Difficulty: Intermediate
Example

Build, buy, or partner for the analytics module?

Branches considered
  1. Build in-house — 12 wk · 2 eng, full control, drains roadmap focus
  2. Buy off-the-shelf — $24k/yr, live in 2 weeks, vendor lock-in risk
  3. Partner with a specialist — 12% rev-share, co-branded launch, roadmap misalignment risk
  4. Decision: buy now, revisit build at 100k MAU when leverage and data shift

What it is

A decision tree is a diagram that maps a decision and its consequences as a branching structure. Each decision node (usually drawn as a square) represents a choice you control; each chance node (a circle) represents an uncertain outcome with probabilities attached; each terminal node records the final value or payoff of that path. By multiplying probabilities and payoffs along each path and summing them, you compute an expected value for each initial choice — which gives you a rational basis for picking one.
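As a minimal illustration of the "multiply probabilities by payoffs and sum" step, here is one chance node worked out in Python. The probabilities and dollar figures are invented for the sketch, not taken from the example above:

```python
# Sketch: expected value at a single chance node.
# Probabilities and payoffs are illustrative assumptions.

# Chance node after choosing "buy": does the vendor's product fit?
branches = [
    (0.7, 120_000),   # 70%: good fit, payoff over the horizon
    (0.3, -40_000),   # 30%: poor fit, migration cost dominates
]

# Expected value = sum of (probability x payoff) across branches
ev_buy = sum(p * payoff for p, payoff in branches)
# 0.7 * 120000 + 0.3 * (-40000) = 84000 - 12000 = 72000
print(ev_buy)
```

The same arithmetic repeats at every chance node; the only addition for a full tree is taking the maximum at decision nodes.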

Decision trees grew out of operations research and decision analysis in the 1950s and '60s (Howard Raiffa's work at Harvard was central), and they remain the standard tool for explicit, quantitative decisions under uncertainty.

When to use it

A decision tree is worth the overhead when the decision meets three tests. Reach for it when:

  1. The decision is consequential enough to justify modeling.
  2. The outcomes are genuinely uncertain.
  3. You can estimate probabilities and payoffs at least roughly.

Skip it for fast, reversible, or low-stakes decisions; the modeling overhead won't pay off.

How to run it

  1. State the decision precisely and the time horizon you're modeling. "Should we license the technology or build it in-house, judged over 24 months?"
  2. Draw the first decision node and branch out the options you're actually considering — usually 2–4. Drop dominated options early.
  3. For each branch, add chance nodes for the major uncertainties. Limit yourself to the 2–3 uncertainties that actually move the answer.
  4. Assign probabilities to each chance branch. They must sum to 1 at each node. Use a 10th/50th/90th-percentile range if a single number feels false.
  5. Assign payoffs at the terminal nodes — in dollars, utility, or whatever common unit fits.
  6. Roll back: multiply each terminal payoff by the product of probabilities on its path, sum across branches at each chance node, and take the maximum at each decision node.
  7. Run sensitivity analysis. Vary the most uncertain probability ±20% — if the recommended choice flips, you have a real decision to think harder about. If it doesn't, you can move on.
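The roll-back in steps 6–7 fits in a few lines of Python. The tree below is a hypothetical stand-in for the licensing question in step 1; every probability and payoff is an assumption made up for the sketch:

```python
# Sketch of steps 6-7: roll back a small tree and run sensitivity.
# All numbers here are illustrative assumptions, not from the text.

def rollback(node):
    """Expected value of a node, computed bottom-up."""
    kind = node["type"]
    if kind == "terminal":
        return node["payoff"]
    if kind == "chance":
        probs = [p for p, _ in node["branches"]]
        assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
        return sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":
        # Take the maximum over the options you control.
        return max(rollback(child) for _, child in node["options"])
    raise ValueError(f"unknown node type: {kind}")

def tree(p_build_success):
    """License vs. build, with one chance node per option."""
    return {"type": "decision", "options": [
        ("license", {"type": "chance", "branches": [
            (0.8, {"type": "terminal", "payoff": 90_000}),
            (0.2, {"type": "terminal", "payoff": -10_000}),
        ]}),
        ("build", {"type": "chance", "branches": [
            (p_build_success, {"type": "terminal", "payoff": 200_000}),
            (1 - p_build_success, {"type": "terminal", "payoff": -120_000}),
        ]}),
    ]}

# Step 6: roll back with a base-case probability of the build succeeding.
base = rollback(tree(0.6))

# Step 7: sensitivity. Vary the shakiest probability +/-20% and watch
# whether the recommended branch flips.
for p in (0.48, 0.60, 0.72):
    t = tree(p)
    values = {name: rollback(child) for name, child in t["options"]}
    print(p, values, max(values, key=values.get))
```

With these particular numbers the recommendation flips between "license" and "build" across the ±20% range, which is exactly the signal that the decision deserves harder thought.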

Common pitfalls

The biggest failure mode is combinatorial explosion. Every additional uncertainty doubles or triples the size of the tree, and after three or four layers the model becomes unreadable. Prune aggressively: a useful tree usually has fewer than 20 terminal nodes. If you find yourself adding a fifth uncertainty, ask whether you're modeling the decision or modeling the world.

Second pitfall: false precision in probabilities. Quoting a 37% probability when you really mean "somewhere between a quarter and a half" lets the math overstate confidence. Anchor probabilities to base rates or reference classes where you can, and run sensitivity analysis where you can't.

Third: ignoring risk preference. Expected value treats a coin flip between $0 and $200 as equivalent to a guaranteed $100. For most real people and companies, it isn't. If the downside would genuinely hurt — bankruptcy, career-ending — switch from expected value to expected utility, or simply note the downside in plain language and let the decision-maker weight it.
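The expected-value-versus-expected-utility distinction can be made concrete with a concave utility function. Square root of final wealth is a common textbook choice; the bankroll and stakes below are invented for the sketch:

```python
import math

# Sketch: equal expected values, unequal expected utilities.
# Wealth and payoffs are illustrative assumptions.

wealth = 10_000  # current bankroll

def expected_value(branches):
    return sum(p * x for p, x in branches)

def expected_utility(branches, u=math.sqrt):
    # Utility is applied to final wealth, not to the raw payoff,
    # so large losses hurt more than equal gains help.
    return sum(p * u(wealth + x) for p, x in branches)

gamble = [(0.5, 0), (0.5, 200_000)]  # coin flip
sure   = [(1.0, 100_000)]            # guaranteed amount

# Same expected value...
assert expected_value(gamble) == expected_value(sure) == 100_000
# ...but a risk-averse agent prefers the sure thing.
print(expected_utility(gamble), expected_utility(sure))
```

Swapping the utility function (or just annotating the worst-case path in plain language) is usually enough; a full expected-utility treatment is rarely needed outside genuinely ruinous downsides.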

Variations

A close cousin is the influence diagram, which collapses the tree into a graph of decisions, uncertainties, and values connected by arrows. Influence diagrams are easier to read when the same uncertainty affects multiple branches, but lose the explicit path-by-path clarity. Monte Carlo simulation picks up where decision trees become unwieldy: instead of enumerating branches, it samples thousands of random paths through the uncertainty distributions. Reach for Monte Carlo when uncertainties are continuous (cost, demand, time) rather than discrete; reach for a decision tree when the choices and outcomes are naturally categorical.
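To show what the Monte Carlo variation looks like in practice, here is a sketch that samples continuous uncertainties instead of enumerating branches. The distributions and parameters are assumptions chosen for illustration:

```python
import random

# Sketch of the Monte Carlo variation: sample continuous
# uncertainties rather than drawing discrete chance branches.
# All distributions and parameters below are assumptions.

random.seed(0)  # reproducible sampling

def one_path():
    # Continuous uncertainties: build cost and monthly demand.
    cost = random.triangular(80_000, 200_000, 120_000)  # low, high, mode
    demand = random.lognormvariate(7.0, 0.5)            # users per month
    revenue_per_user = 6.0
    months = 24
    return months * demand * revenue_per_user - cost

samples = [one_path() for _ in range(100_000)]
ev = sum(samples) / len(samples)
downside = sorted(samples)[len(samples) // 10]  # 10th percentile

print(round(ev), round(downside))
```

Note what the simulation buys you over a tree: not just an expected value but a whole distribution, so you can read off downside percentiles directly instead of bolting on a separate risk analysis.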


Want to fill in your own Decision Tree?
Get FrameworkList for iOS