
Project Monte Carlo Risk

Use Monte Carlo simulation to understand the range of possible outcomes for your project timeline and budget. Get probability-based estimates using three-point estimation and discrete risk events.

For educational purposes only — not formal project management or financial advice

Project Setup

Tasks (Three-Point Estimates)

Enter optimistic, most likely, and pessimistic estimates for each task.


Risk Events (Optional)

Add discrete risks with probability and impact on duration/cost if they occur.


Simulation Settings

Simulate Project Risk

Use Monte Carlo simulation to understand the range of possible outcomes for your project timeline and budget. Get probability-based estimates instead of single-point guesses.

Quick Start:

  1. Add your project tasks with three-point estimates
  2. Optionally add risk events with probabilities
  3. Set target deadline and budget (optional)
  4. Run the simulation to see probability distributions

What are three-point estimates?

For each task, estimate the optimistic (best case), most likely, and pessimistic (worst case) duration and cost. The simulation samples from these ranges.


The Risk Summary You Can Paste Into a Status Report

Your steering committee does not want a probability distribution — they want a sentence. Project Monte Carlo risk analysis gives you that sentence: “There is a 75% chance the project finishes within 14 weeks and a 90% chance it finishes within 17 weeks.” Behind that sentence sit thousands of simulated schedules, each sampling task durations from three-point estimates (optimistic, most likely, pessimistic) and respecting the dependency chain. The common mistake in project planning is quoting the most-likely total as the deadline — that number has roughly a 50% chance of being exceeded because delays compound along the critical path.

The gap between P50 and P90 is your risk buffer in concrete units (days, weeks, dollars). If P50 is 14 weeks and P90 is 17 weeks, a three-week buffer covers most downside scenarios. If P90 is 24 weeks, the project has a structural uncertainty problem that no amount of buffer can paper over — you need to re-scope.
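The buffer arithmetic is just percentile subtraction over the simulated totals. A minimal sketch in Python, using a stand-in triangular distribution for the simulated project totals (in a real run these would come from your own simulation output):

```python
import random

random.seed(42)

# Stand-in for simulated project totals, in weeks (illustrative only).
totals = [random.triangular(10, 24, 14) for _ in range(10_000)]

totals.sort()
p50 = totals[int(0.50 * len(totals))]
p90 = totals[int(0.90 * len(totals))]
buffer_weeks = p90 - p50  # the risk buffer in concrete units

print(f"P50={p50:.1f}w  P90={p90:.1f}w  buffer={buffer_weeks:.1f}w")
```

The buffer is whatever separates the percentile you quote internally from the percentile you commit to externally.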

Best, Base, and Worst: Framing the Three-Point Estimate

Every task in the project gets three duration estimates. The optimistic value is the fastest you could finish if nothing goes wrong — not a fantasy, but a realistic best case. The most likely is the duration you would commit to in a normal sprint. The pessimistic is the duration if a major blocker appears — an external dependency slips, a key person is unavailable, or the spec changes mid-task.

The simulation draws a random duration for each task from a triangular (or PERT-Beta) distribution defined by these three points. The shape of the triangle matters: if the pessimistic tail is much longer than the optimistic one (say, 3–5–15 days), the distribution is right-skewed and the mean is pulled above the mode. This is why summing most-likely values underestimates the total — the right tails compound across tasks.
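The skew effect is easy to verify numerically. A short sketch using Python's standard-library triangular sampler and the 3-5-15 example above:

```python
import random

random.seed(0)

O, M, P = 3, 5, 15  # right-skewed three-point estimate (days)
samples = [random.triangular(O, P, M) for _ in range(100_000)]

mean = sum(samples) / len(samples)
# The triangular mean is (O + M + P) / 3; for 3-5-15 that is about 7.67
# days, well above the mode of 5 -- the right tail pulls the average up.
print(f"mode={M}, sample mean={mean:.2f}")
```

Summing modes across many such tasks underestimates the total for exactly this reason: each right tail adds mean above mode, and the gaps accumulate.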

A useful calibration: ask the estimator “would you bet your bonus that the task finishes within your pessimistic estimate?” If they hesitate, the pessimistic number is too tight.

When Your Assumptions Break Down

Tasks assumed independent but actually correlated. If the same developer works on tasks B and D, a delay on B cascades to D even if they are not on the same dependency chain. Standard project Monte Carlo models assume independent sampling unless you explicitly add resource constraints. Ignoring this understates timeline risk.

Estimates anchored to deadlines instead of work. If the team knows the deadline is week 12, their “most likely” estimates will cluster around values that sum to 12 weeks. The simulation then confirms the deadline with high probability — a circular validation. To break the loop, estimate each task in isolation before revealing the aggregate.

Optimistic estimates that are never optimistic enough. People tend to set their optimistic case at their most-likely case minus a day. A good optimistic estimate reflects genuine best-case conditions — the code works on the first try, the review is instant, no rework. If optimistic and most-likely are nearly identical, the left side of the distribution is artificially compressed.

What Changes the Answer: Sensitivity in Project Timelines

After running the simulation, a tornado chart ranks each task by its contribution to total timeline variance. Tasks on the critical path with wide estimate ranges dominate the chart. A task estimated at 3–5–7 days contributes little variance; a task estimated at 2–5–20 days dominates it. The tornado chart tells you where to invest in de-risking: prototype the high-variance task first, add a spike, or split it into smaller pieces with tighter bounds.

Off-critical-path tasks occasionally matter too. If a near-critical path has a high-variance task, it can become critical in some simulation runs, inflating the P90 without appearing dominant on the deterministic critical path. The simulation captures this because it recomputes the critical path in every iteration. Check the “criticality index” — the percentage of iterations in which a task falls on the critical path. A task with a 40% criticality index is a hidden risk even if it is not on the deterministic critical path.

Sanity Checks Before You Trust the Output

Does the P50 match your gut? If the simulation says P50 is 22 weeks but your experienced PM says “feels like 14,” either the estimates are padded or the PM is ignoring dependencies. Investigate the gap before presenting results.

Is the P90/P50 ratio reasonable? For a well-estimated project, P90 is typically 1.3–1.6× the P50. Ratios above 2.0 suggest one or more tasks have extremely wide ranges — tighten the estimates or split the tasks. Ratios below 1.1 suggest the team is not acknowledging real uncertainty.

Do the dependency chains make sense? A simulation that allows tasks to start before their predecessors finish will produce optimistic results. Verify the dependency graph before interpreting outputs. Similarly, check that parallel tasks are genuinely parallel — if the same person does both, they are effectively sequential.

Project Risk Simulation Equations

Three-point estimation and critical-path aggregation use these formulas:

PERT-weighted mean:
t_E = (O + 4M + P) / 6, where O = optimistic, M = most likely, P = pessimistic

PERT standard deviation:
σ = (P − O) / 6; variance of a path = Σ σ_i² over the tasks on the critical path

Critical-path duration (per iteration):
T_CP = max over all paths { Σ sampled durations along the path }; repeated N times to build the distribution of T_CP

Criticality index:
CI_task = (iterations where the task is on the critical path) / N
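The PERT formulas translate directly into code. A minimal sketch of the weighted mean and standard deviation (the sample values are Task B's 4/7/14 estimate from the walkthrough below):

```python
def pert_mean(o, m, p):
    """PERT-weighted expected duration: t_E = (O + 4M + P) / 6."""
    return (o + 4 * m + p) / 6

def pert_sd(o, m, p):
    """PERT standard deviation: sigma = (P - O) / 6."""
    return (p - o) / 6

# Task B from the walkthrough: 4/7/14 days.
print(pert_mean(4, 7, 14))  # ~7.67 days, above the most-likely 7
print(pert_sd(4, 7, 14))    # ~1.67 days
```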

Five-Task Software Sprint: Full Risk Walkthrough

Scenario: A sprint has five tasks. Task A (API design: 2/3/5 days) feeds Task B (backend build: 4/7/14 days). Task C (UI design: 1/2/3 days) feeds Task D (frontend build: 3/5/8 days). Task E (integration test: 2/3/6 days) depends on both B and D. Two paths exist: A→B→E and C→D→E.

Deterministic estimate: Path 1 most-likely total = 3+7+3 = 13 days. Path 2 = 2+5+3 = 10 days. Critical path is A→B→E at 13 days.

Simulation (10,000 runs): P50 = 15 days, P80 = 18 days, P90 = 21 days. Task B has a criticality index of 82% — it dominates the timeline in most runs. Task D has a criticality index of 31% because its pessimistic case (8 days) can push path 2 past path 1 in some iterations.

Action: The deterministic plan says 13 days; the P90 says 21 days. Quote 18 days (P80) to the stakeholder with a note that 21 days is the downside. De-risk Task B first — can the backend build be split into two smaller tasks with tighter bounds?
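The whole walkthrough can be reproduced in a few lines. A sketch using triangular sampling (the exact percentiles and criticality index depend on the distribution choice and random seed, so the figures will differ somewhat from those quoted above):

```python
import random

random.seed(7)
N = 10_000

# Three-point estimates (O, M, P) in days, from the scenario above.
est = {"A": (2, 3, 5), "B": (4, 7, 14), "C": (1, 2, 3),
       "D": (3, 5, 8), "E": (2, 3, 6)}

totals, b_critical = [], 0
for _ in range(N):
    d = {t: random.triangular(o, p, m) for t, (o, m, p) in est.items()}
    path1 = d["A"] + d["B"] + d["E"]   # A -> B -> E
    path2 = d["C"] + d["D"] + d["E"]   # C -> D -> E
    totals.append(max(path1, path2))   # project finishes when both paths do
    if path1 >= path2:                 # B is critical when path 1 wins
        b_critical += 1

totals.sort()
p50, p80, p90 = (totals[int(q * N)] for q in (0.50, 0.80, 0.90))
ci_b = b_critical / N
print(f"P50={p50:.1f}  P80={p80:.1f}  P90={p90:.1f}  CI(B)={ci_b:.0%}")
```

Note how the per-iteration `max` is what recomputes the critical path: path 2 wins in the iterations where Task D's pessimistic tail fires.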

Sources

PMI — Schedule Risk Analysis With Monte Carlo Simulation: PERT-based schedule risk quantification methodology for project managers.

NASA — Cost Estimating Handbook (Schedule Risk): Three-point estimation and simulation-based schedule risk for engineering projects.

AACE International — Recommended Practices for Risk Analysis: Industry standard for probabilistic project scheduling and risk quantification.

Harvard Business Review — Delusions of Success: Why project estimates are systematically optimistic and how simulation corrects the bias.

Frequently Asked Questions

How many simulations should I run?

For most purposes, 5,000 simulations provide a good balance of accuracy and speed. The percentile estimates (P50, P80, P90) stabilize around 1,000-2,000 simulations. Use 10,000-20,000 simulations if you need very precise tail estimates or are presenting to stakeholders who expect high precision. For quick explorations while adjusting inputs, 1,000 simulations is fine.
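You can see the stabilization directly by rerunning the same percentile estimate at different simulation counts. A sketch using a stand-in triangular distribution (the spread of the P90 estimate across reruns shrinks roughly with the square root of the simulation count):

```python
import random

def p90(n, seed):
    """P90 of n triangular draws (stand-in for a full simulation run)."""
    rng = random.Random(seed)
    xs = sorted(rng.triangular(10, 24, 14) for _ in range(n))
    return xs[int(0.9 * n)]

# Rerun 10 times per simulation count and measure the estimate's spread.
spread = {n: max(p90(n, s) for s in range(10)) -
             min(p90(n, s) for s in range(10))
          for n in (500, 5_000, 50_000)}
print(spread)  # spread narrows as the simulation count grows
```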

What if my tasks run in parallel, not sequence?

This simplified model assumes all tasks are on the critical path and run sequentially. If you have parallel work streams, you have a few options: include only the tasks on your critical path, model parallel streams as separate simulations, or use more advanced tools such as @RISK or Crystal Ball for network scheduling. For most planning purposes the critical-path approach gives useful insight, but it may overestimate duration if you have significant parallel work.

How do I estimate task durations if I have no historical data?

When lacking historical data, use structured estimation techniques: analogy (compare to similar tasks you've done before), decomposition (break tasks into smaller pieces you can estimate), expert judgment (consult team members with relevant experience), or wideband Delphi (multiple experts estimate independently, then discuss). For the pessimistic estimate, consider what could go wrong (technical issues, learning curve, dependencies, reviews); doubling or tripling your most-likely estimate is a reasonable starting point.

Should I use P50, P80, or P90 for planning?

The appropriate percentile depends on your risk tolerance and context: P50 for internal targets, stretch goals, or when you can absorb overruns; P80 for standard planning, providing a reasonable buffer without excessive padding; P90 for external commitments, contracts, or when overruns have serious consequences. Many organizations use P80 for scheduling and P90 for budgeting. The key is to be consistent and communicate which percentile you're using.

How do I incorporate risks that affect specific tasks?

The discrete risk events in this model apply to the overall project. If a risk affects only one task, you have two approaches: widen that task's pessimistic estimate to include the risk impact, or add it as a separate risk event with appropriate impact values. The first approach is simpler but may overestimate uncertainty if the risk is low-probability. The second is more explicit but requires modeling more risks.

Why do duration and cost have a positive correlation?

Strong positive correlation (r > 0.7) typically occurs because labor costs scale with duration (more days means more wages), the same uncertainty drivers affect both (scope changes, technical issues), and risks that add duration usually also add cost. Weak or no correlation can occur when cost drivers are independent of schedule (e.g., fixed-price materials, or equipment costs that don't scale with time).

What probability of meeting targets is acceptable?

There's no universal answer; it depends on your organization's risk appetite. Above 80% is generally considered acceptable for important commitments; 50-80% is risky, so consider adding contingency or adjusting targets; below 50% is high risk, and targets may need significant revision. For critical projects, aim for 80-90% probability on both deadline and budget. The joint probability ("hit both targets") is often lower than either individual probability, so pay attention to that metric for contractual commitments.

How often should I rerun the simulation?

Rerun the simulation when tasks are completed (remove them or mark as complete), estimates are refined based on new information, scope changes add or remove tasks, risks materialize or are mitigated, or at major milestones and phase gates. For active projects, weekly updates during execution help track whether you're on the good or bad side of the distribution and whether corrective action is needed.

Is the triangular distribution the best choice?

The triangular distribution is popular because it's intuitive and requires only three parameters. However, other distributions may be more appropriate: Beta-PERT (smoother than triangular, commonly used in project management), log-normal (good for tasks that can't be negative and have long right tails), or normal (when you have a mean and standard deviation from historical data). For most planning purposes the triangular distribution is adequate; the choice of distribution matters less than the quality of your three-point estimates.
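The practical difference between triangular and Beta-PERT is mostly in the mean. A sketch comparing the two for a right-skewed 3/5/15 estimate (the shape parameters below are the commonly used PERT choice, alpha = 1 + 4(M-O)/(P-O) and beta = 1 + 4(P-M)/(P-O)):

```python
import random

random.seed(3)
O, M, P = 3, 5, 15
N = 100_000

tri = [random.triangular(O, P, M) for _ in range(N)]

# Beta-PERT: a Beta(alpha, beta) variate rescaled to [O, P].
a = 1 + 4 * (M - O) / (P - O)
b = 1 + 4 * (P - M) / (P - O)
pert = [O + (P - O) * random.betavariate(a, b) for _ in range(N)]

tri_mean = sum(tri) / N     # triangular mean = (O + M + P) / 3, ~7.67
pert_mean = sum(pert) / N   # PERT mean = (O + 4M + P) / 6, ~6.33
print(f"triangular mean={tri_mean:.2f}, PERT mean={pert_mean:.2f}")
```

Beta-PERT weights the most-likely value more heavily, so it produces a lower mean and tighter tails than the triangular for the same three points.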

Can I export the simulation results?

Currently, this tool doesn't export raw simulation data. However, you can copy the key statistics shown in the results section, take screenshots of the charts for presentations, or use the AI assistant to generate a summary of the results.
