What it is
A decision tree is a diagram that maps a decision and its consequences as a branching structure. Each decision node (usually drawn as a square) represents a choice you control; each chance node (a circle) represents an uncertain outcome with probabilities attached; each terminal node records the final value or payoff of that path. By multiplying probabilities and payoffs along each path and summing them, you compute an expected value for each initial choice — which gives you a rational basis for picking one.
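The expected-value arithmetic can be sketched in a few lines of Python. This is a hypothetical two-option example; every name, probability, and payoff below is invented for illustration:

```python
# Hypothetical choice with one chance node per option.
# Branches are (probability, payoff) pairs; probabilities at a node sum to 1.
options = {
    "license": [(0.7, 500_000), (0.3, -100_000)],  # tech works / tech fails
    "build":   [(0.5, 900_000), (0.5, -300_000)],  # ships / slips badly
}

# Expected value of an option = sum of probability * payoff over its branches.
expected = {name: sum(p * v for p, v in branches)
            for name, branches in options.items()}

best = max(expected, key=expected.get)  # "license": ~320,000 vs. 300,000
```

At this size a spreadsheet works just as well; the point is only that each option reduces to one number you can compare.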
Decision trees grew out of operations research and decision analysis in the 1950s and 1960s (Howard Raiffa's work at Harvard was central), and they remain the standard tool for explicit, quantitative decisions under uncertainty.
When to use it
A decision tree is worth the overhead when (a) the decision is consequential, (b) outcomes are genuinely uncertain, and (c) you can estimate probabilities and payoffs at least roughly. Reach for it when:
- Choosing whether to run a clinical trial, a major investment, or a product bet with measurable downside
- Deciding between settling a lawsuit and going to trial
- Sequencing experiments where each result changes what's worth running next
- Evaluating a "build vs. buy vs. partner" choice with known cost ranges
- Pricing a real option — when to invest more, when to walk away
Skip it for fast, reversible, or low-stakes decisions; the modeling overhead won't pay off.
How to run it
- State the decision precisely and the time horizon you're modeling. "Should we license the technology or build it in-house, judged over 24 months?"
- Draw the first decision node and branch out the options you're actually considering — usually 2–4. Drop dominated options early.
- For each branch, add chance nodes for the major uncertainties. Limit yourself to the 2–3 uncertainties that actually move the answer.
- Assign probabilities to each chance branch. They must sum to 1 at each node. Use ranges (10th/50th/90th percentile estimates) if a single number feels false.
- Assign payoffs at the terminal nodes — in dollars, utility, or whatever common unit fits.
- Roll back from the terminal nodes: at each chance node, take the probability-weighted average of its branches; at each decision node, take the branch with the highest expected value. The value that reaches the root is the expected value of the best strategy.
- Run sensitivity analysis. Vary the most uncertain probability ±20% — if the recommended choice flips, you have a real decision to think harder about. If it doesn't, you can move on.
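The rollback and sensitivity steps above can be sketched as a short recursion. The tree encoding and all figures here are hypothetical; the procedure itself (average at chance nodes, maximize at decision nodes) is the one described in the steps:

```python
# Minimal rollback sketch. A tree is nested tuples:
#   ("decision", {option: subtree, ...})   -- nodes you control
#   ("chance", [(prob, subtree), ...])     -- uncertain outcomes
#   a bare number                          -- terminal payoff
def rollback(node):
    if isinstance(node, (int, float)):      # terminal node: its payoff
        return float(node)
    kind, body = node
    if kind == "chance":                    # probability-weighted average
        return sum(p * rollback(sub) for p, sub in body)
    # decision node: take the best available option
    return max(rollback(sub) for sub in body.values())

# Hypothetical license-vs-build tree (all figures invented).
tree = ("decision", {
    "license": ("chance", [(0.7, 500_000), (0.3, -100_000)]),
    "build": ("chance", [
        (0.5, ("decision", {
            "ship":  400_000,
            "pivot": ("chance", [(0.6, 900_000), (0.4, -200_000)]),
        })),
        (0.5, -300_000),
    ]),
})

print(round(rollback(tree)))  # 320000: licensing wins on expected value

# Crude sensitivity check: vary the license success probability +/-20%.
# 80_000 is build's rolled-back expected value from the tree above.
for p in (0.56, 0.70, 0.84):
    ev = p * 500_000 + (1 - p) * -100_000
    print(p, "license" if ev > 80_000 else "build")  # license wins at all three
```

Because the recommendation survives the ±20% swing, this particular choice is robust and needs no further modeling.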
Common pitfalls
The biggest failure mode is combinatorial explosion. Every additional uncertainty doubles or triples the size of the tree, and after three or four layers the model becomes unreadable. Prune aggressively: a useful tree usually has fewer than 20 terminal nodes. If you find yourself adding a fifth uncertainty, ask whether you're modeling the decision or modeling the world.
Second pitfall: false precision in probabilities. Quoting a 37% probability when you really mean "somewhere between a quarter and a half" lets the math overstate confidence. Anchor probabilities to base rates or reference classes where you can, and run sensitivity analysis where you can't.
Third: ignoring risk preference. Expected value treats a coin flip between $0 and $200 as equivalent to a guaranteed $100. For most real people and companies, it isn't. If the downside would genuinely hurt — bankruptcy, career-ending — switch from expected value to expected utility, or simply note the downside in plain language and let the decision-maker weight it.
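The risk-preference point can be made concrete with the coin flip from the paragraph above. This is a minimal sketch; the log utility function and the $100 starting wealth are illustrative assumptions, not part of the method:

```python
import math

# 50/50 between $0 and $200, versus a guaranteed $100.
# Assumed: log utility over total wealth, from a hypothetical $100 base.
wealth = 100

def utility(x):
    return math.log(wealth + x)

flip = [(0.5, 0), (0.5, 200)]
ev_flip = sum(p * x for p, x in flip)            # 100.0: same EV as the sure thing
eu_flip = sum(p * utility(x) for p, x in flip)
eu_sure = utility(100)

print(ev_flip)            # 100.0 -- expected value can't tell them apart
print(eu_sure > eu_flip)  # True  -- a risk-averse agent prefers the sure $100
```

Any concave utility function produces the same ranking; the steeper the concavity, the more the model penalizes the gamble.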
Variations
A close cousin is the influence diagram, which collapses the tree into a graph of decisions, uncertainties, and values connected by arrows. Influence diagrams are easier to read when the same uncertainty affects multiple branches, but lose the explicit path-by-path clarity. Monte Carlo simulation picks up where decision trees become unwieldy: instead of enumerating branches, it samples thousands of random paths through the uncertainty distributions. Reach for Monte Carlo when uncertainties are continuous (cost, demand, time) rather than discrete; reach for a decision tree when the choices and outcomes are naturally categorical.
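For contrast, here is what the Monte Carlo approach looks like for the same kind of question, using only Python's standard library. The distributions and every figure are hypothetical:

```python
import random

# Continuous uncertainties sampled instead of discrete branches.
random.seed(0)
N = 100_000

def simulate_build():
    cost = random.triangular(200_000, 600_000, 350_000)  # low, high, mode
    demand = random.lognormvariate(10.5, 0.5)            # units sold
    return demand * 12 - cost                            # assumed $12 margin/unit

profits = [simulate_build() for _ in range(N)]
mean = sum(profits) / N
downside = sorted(profits)[N // 20]                      # ~5th percentile
print(f"mean profit = {mean:,.0f}, 5th percentile = {downside:,.0f}")
```

Unlike a tree, the output is a whole distribution, so you can read off tail risk directly instead of collapsing everything to one expected value.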