What is a "gap analysis" in RGA?
Most of the reliability growth models used for estimating and tracking reliability growth from test data assume that the data set represents all actual system failure times (complete data), recorded under a uniform definition of failure. In practice, however, training issues, oversight, bias, misreporting, human error, technical difficulties, loss of data and the like can render a portion of the data erroneous or missing altogether. Analyzing such data sets may distort the estimates of both the growth rate and the actual system reliability. Gap analysis addresses this by, essentially, not using the data from the problematic interval. The interval still contributes to the total test time, but no assumptions are made about the actual number of failures that occurred during it.
Gap analysis in RGA 7 can be performed on Failure Times data sets analyzed with the Crow-AMSAA (NHPP) model using the Use Defined Gap check box on the Analysis page of the standard folio control panel.
For example, consider a data set that, when analyzed with the Crow-AMSAA (NHPP) model, yields the following Cumulative Number of Failures plot:
It appears that something is wrong with the failure data from approximately time 500 to 625. Looking at the control panel, we see that the Cramér-von Mises statistical test indicates that the Crow-AMSAA (NHPP) model is not a good fit for the data set:
If the analysts have reason to believe that this behavior is due to mistakes in recording the data, they may wish to perform a gap analysis. The Gap Interval settings on the Analysis page of the control panel are shown next.
With this setting, the failure data between 500 and 625 will be ignored in the analysis, resulting in the following Cumulative Number of Failures plot, which shows a considerably better fit:
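Under the Crow-AMSAA (NHPP) model, the expected cumulative number of failures by time t is N(t) = λt^β. A gap analysis fits the same model but drops the failures recorded inside the gap and removes the gap's contribution from the likelihood, while keeping it in the total test time. The following sketch (illustrative only, not ReliaSoft's implementation, and with hypothetical failure times) shows one way to compute the maximum likelihood estimates with a user-defined gap:

```python
import math

def crow_amsaa_gap_mle(times, T, gap=None):
    """MLE of the Crow-AMSAA (NHPP) parameters (lambda, beta), where the
    expected cumulative number of failures by time t is lambda * t**beta.

    times : recorded failure times in (0, T]
    T     : total test time
    gap   : optional (a, b) interval to exclude; failures inside it are
            ignored and no assumption is made about how many occurred,
            but the interval still counts toward the total test time.
    """
    a, b = gap if gap is not None else (0.0, 0.0)
    obs = [t for t in times if not (a < t <= b)]
    n = len(obs)
    sum_ln_t = sum(math.log(t) for t in obs)

    def powsum(beta):
        # time term of the integrated intensity over [0, T] minus the gap
        return T ** beta - b ** beta + a ** beta

    def dpowsum(beta):
        # derivative of powsum with respect to beta
        out = T ** beta * math.log(T)
        if b > 0:
            out -= b ** beta * math.log(b)
        if a > 0:
            out += a ** beta * math.log(a)
        return out

    def score(beta):
        # d(log-likelihood)/d(beta), with lambda profiled out
        lam = n / powsum(beta)
        return n / beta + sum_ln_t - lam * dpowsum(beta)

    lo, hi = 1e-3, 10.0          # bisection: score is positive at lo
    for _ in range(200):         # and negative at hi for realistic data
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    return n / powsum(beta), beta
```

With no gap, the estimates reduce to the familiar closed forms β̂ = n / Σ ln(T/tᵢ) and λ̂ = n / T^β̂; passing, say, gap=(500, 625) reproduces the kind of analysis described above.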
For further information on gap analysis, please refer to
How can I use simulation in Weibull++ to design a reliability test?
There are multiple ways to use simulation in designing reliability life tests. This tool tip describes one way of performing simulation-based test design in Weibull++. With the SimuMatic utility, you can simulate data obtained from a planned test design in order to determine whether it will demonstrate a target reliability metric. If the results suggest that the test design will be inadequate, you can modify it by adjusting factors such as the test duration (for a time-terminated test), number of failures (for a failure-terminated test) and sample size. The modified design can then be assessed and, if necessary, modified further.
To simulate the data from a reliability test, open the SimuMatic Setup window and:
On the Main tab, specify the product's life distribution.
On the Analysis tab, specify how the data will be analyzed (rank regression or MLE).
On the Censoring tab, specify how the data will be censored. For example, if the planned test will continue until all the units have failed, then select No Censoring. If the test will end before all the units have failed, select Right censoring after a specific time or Right censoring after specific number of failures, then enter the time or number of failures that will determine when the test ends.
On the Reliabilities & Times tab, specify which metric(s) you wish to calculate for each data set. For example, if you are planning a test that is intended to demonstrate the reliability at 100 hours, then you would enter 100 in the Times table so the software will calculate R(100) for each data set.
Finally, in the bottom-right area of the window, specify how many data sets will be generated and how many data points will be contained in each set. For more accurate results, you will need to generate a larger number of data sets (e.g., 1,000). The number of data points should correspond to the number of units that will be tested.
Then click the Generate button to create a SimuMatic folio that contains the parameter/reliability calculations for each data set.
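As a rough illustration of what such a simulation does under the hood (a sketch only, with hypothetical parameter values, not the SimuMatic implementation): draw many complete data sets from an assumed Weibull life distribution, fit each one by rank regression on median ranks, and record the fitted R(100) for each set.

```python
import math, random

def fit_weibull_rr(times):
    """Fit a 2-parameter Weibull by rank regression: least squares of
    ln(-ln(1 - MR)) on ln(t), using median ranks MR = (i - 0.3)/(n + 0.4)."""
    ts = sorted(times)
    n = len(ts)
    xs = [math.log(t) for t in ts]
    ys = [math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))) for i in range(n)]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    beta = sxy / sxx                    # slope = shape parameter
    eta = math.exp(xbar - ybar / beta)  # from y = beta * (x - ln eta)
    return beta, eta

def simulate_r_at_time(beta_true, eta_true, n_units, t_demo, n_sets, seed=1):
    """Draw n_sets complete (all units failed) data sets from the assumed
    Weibull, fit each one, and return the fitted R(t_demo) values."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_sets):
        sample = [rng.weibullvariate(eta_true, beta_true)
                  for _ in range(n_units)]
        b, e = fit_weibull_rr(sample)
        out.append(math.exp(-(t_demo / e) ** b))
    return out
```

The list returned by simulate_r_at_time plays the role of the R(100) column in the generated folio: one fitted value per simulated data set.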
After the simulation is performed, the folio contains two data sheets: Simulation and Sorted. On the Sorted sheet, the data sets are ordered according to their reliability. This allows you to see the confidence bounds on the reliability metric you chose to calculate for each set. For example, if you chose to calculate R(100), and you want to see the 95% lower one-sided confidence bound on that value, then look at the R(100) value on the row for the data set in the 5th percentile (5.00%).
In this case, the simulation predicts that the test results will demonstrate, with 95% confidence, a reliability of at least 92.11% at 100 hours. If this value is too low, you can adjust the test design by increasing the sample size or changing the censoring settings to see if a modified test plan could give you acceptable results.
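The percentile read-off can be sketched in a few lines. This is again illustrative and hypothetical: here each data set's R(100) estimate is simply the observed fraction of 20 simulated Weibull units surviving past 100 hours, a simplified stand-in for the fitted parametric value the software would report.

```python
import math, random

def lower_bound_from_simulation(r_values, confidence=0.95):
    """One-sided lower confidence bound read off sorted simulation
    results: the value at the (1 - confidence) percentile."""
    ranked = sorted(r_values)
    idx = int(math.floor((1.0 - confidence) * len(ranked)))
    return ranked[idx]

# Hypothetical stand-in for the per-data-set R(100) column: each set's
# estimate is the fraction of 20 simulated units (Weibull, beta = 1.5,
# eta = 500) still running at 100 hours.
rng = random.Random(7)
r100 = []
for _ in range(1000):
    sample = [rng.weibullvariate(500.0, 1.5) for _ in range(20)]
    r100.append(sum(t > 100.0 for t in sample) / 20.0)

lb = lower_bound_from_simulation(r100, 0.95)
```

Reading the value at the 5th-percentile row of the sorted results gives the 95% lower one-sided bound; if it falls short of the target, enlarge the sample or change the censoring scheme and rerun.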