Understanding the Goel–Okumoto Model & Musa’s Basic Execution Time Model
When it comes to software reliability growth modeling, two classic approaches stand out: the Goel–Okumoto (GO) model and Musa’s basic execution time model. Both emerged in the 1970s and have since become foundational in predicting and improving software quality. Let’s break them down.
The Goel–Okumoto (GO) Model
Developed by Goel and Okumoto in 1979, this model is built on a set of realistic assumptions about how software failures occur and get fixed during testing.
Key Assumptions:
- Failure distribution – The number of failures by time t follows a Poisson distribution with a mean value function μ(t), starting at μ(0) = 0 and approaching a finite value N as t → ∞.
- Failure rate – The expected number of failures in the interval (t, t+Δt] is proportional to the number of remaining undetected faults (N − μ(t)), with constant of proportionality φ (the per-fault hazard rate).
- Independence – Failures in disjoint time intervals are independent.
- Perfect repair – Each fault is removed immediately after it causes a failure, without introducing new ones.
- Finite faults – The initial number of software faults equals the total failures expected over infinite testing time (N).
From Assumption 2, the failure intensity at time t is λ(t) = dμ(t)/dt = φ(N − μ(t)). Solving this with μ(0) = 0 gives the mean value function μ(t) = N(1 − e^(−φt)), and hence λ(t) = Nφe^(−φt).
This means the failure rate depends on the constant hazard rate of each fault and the expected number of faults remaining — similar to the Jelinski–Moranda model, but with N treated as an expected value rather than a fixed number.
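These relationships are easy to check numerically. A minimal Python sketch, where N, φ, and the time points are illustrative values rather than anything from the models' literature:

```python
import math

def mu(t, N, phi):
    """Expected cumulative failures by time t: mu(t) = N * (1 - e^(-phi * t))."""
    return N * (1.0 - math.exp(-phi * t))

def intensity(t, N, phi):
    """Failure intensity lambda(t) = phi * (N - mu(t)), i.e. N * phi * e^(-phi * t)."""
    return phi * (N - mu(t, N, phi))

N, phi = 100.0, 0.05  # assumed example values: 100 expected faults, hazard rate 0.05
for t in (0.0, 10.0, 50.0, 100.0):
    print(t, round(mu(t, N, phi), 2), round(intensity(t, N, phi), 4))
```

At t = 0 the intensity is at its maximum (Nφ) because all faults remain; as t grows, μ(t) approaches N and the intensity decays toward zero, which is exactly the "reliability growth" the model captures.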
Musa’s Basic Execution Time Model
Introduced by John Musa in 1975, this model focuses on execution time — the actual CPU time spent running the software — rather than calendar time.
Core Idea:
- Failure intensity decreases over time as faults are found and removed during testing.
- Each detected failure reduces the failure rate by the same amount in the basic model.
- In Musa’s logarithmic model, later failures yield progressively smaller reductions, so intensity declines exponentially with the number of failures experienced rather than linearly.
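The contrast between the two decrement patterns can be sketched with the common parameterizations λ(μ) = λ₀(1 − μ/ν₀) for the basic model and λ(μ) = λ₀e^(−θμ) for the logarithmic model; λ₀ (initial intensity), ν₀ (total expected failures), and θ are assumed example values:

```python
import math

def basic_intensity(mu_exp, lam0, nu0):
    """Basic model: every failure removes the same slice lam0/nu0 of intensity."""
    return lam0 * (1.0 - mu_exp / nu0)

def log_intensity(mu_exp, lam0, theta):
    """Logarithmic model: intensity falls exponentially in failures experienced."""
    return lam0 * math.exp(-theta * mu_exp)

lam0, nu0, theta = 10.0, 100.0, 0.05  # assumed illustrative parameters
for mu_exp in (0.0, 20.0, 40.0, 60.0):
    print(mu_exp, basic_intensity(mu_exp, lam0, nu0),
          round(log_intensity(mu_exp, lam0, theta), 3))
```

In the printed table, consecutive rows of the basic model drop by a constant amount, while the logarithmic model's drops shrink with each batch of failures.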
Model Characteristics:
- A finite number of total possible failures.
- Failures follow a Poisson distribution over time.
- Failure intensity decreases exponentially with execution time.
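The exponential decrease in execution time is often written λ(τ) = λ₀e^(−(λ₀/ν₀)τ), which has the same shape as the GO intensity with N = ν₀ and φ = λ₀/ν₀. A hedged sketch with assumed numbers:

```python
import math

def musa_intensity(tau, lam0, nu0):
    """Failure intensity after tau units of execution time:
    lam0 * e^(-(lam0/nu0) * tau). Same form as GO with N = nu0, phi = lam0/nu0."""
    return lam0 * math.exp(-(lam0 / nu0) * tau)

lam0, nu0 = 10.0, 100.0  # assumed: initial intensity and total expected failures
ratio = musa_intensity(10.0, lam0, nu0) / musa_intensity(0.0, lam0, nu0)
print(ratio)  # with these numbers, intensity shrinks by e^(-1) ≈ 0.368 per 10 time units
```

Because the decay is exponential, the ratio between intensities any fixed execution-time interval apart is constant, which is what makes the model convenient for projecting how much more testing is needed to reach a target intensity.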
Musa’s model is mathematically equivalent to the GO model, but differs in interpretation. In Musa’s version, the hazard rate φ is split into two constants:
- f = Linear execution frequency (the object-instruction execution rate divided by the program size — roughly, how many times per unit time the program could be executed end to end).
- K = Fault exposure ratio (average number of failures per remaining fault during one full program execution).
Formula for f: f = r / (l_s · Q_x), where:
- r – Execution rate of object instructions
- l_s – Number of source code instructions
- Q_x – Average object instructions per source code instruction
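Putting the pieces together, here is a numeric sketch of f = r / (l_s · Q_x) and of recovering the per-fault hazard rate as φ = f · K; every input value below is hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical example values — not from any real measurement.
r = 200_000_000   # object instructions executed per second
l_s = 50_000      # source code instructions
Q_x = 4.0         # average object instructions per source instruction
K = 4.2e-7        # fault exposure ratio (assumed)

f = r / (l_s * Q_x)  # linear execution frequency: program executions per second
phi = f * K          # per-fault hazard rate, since phi = f * K in Musa's model
print(f, phi)
```

With these numbers the program could execute about 1 000 times per second, and each remaining fault contributes roughly 4.2 × 10⁻⁴ failures per second to the overall intensity.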
Key Takeaway
Both the GO and Musa models help software teams predict failure behavior and plan testing efforts. While GO is typically expressed in calendar or testing time, Musa’s approach ties reliability directly to execution (CPU) time, making it especially valuable for performance-sensitive applications.
In practice, these models can guide test scheduling, resource allocation, and release readiness — helping ensure that when your software ships, it’s as reliable as possible.