New Features in RGA 7 for Reliability Growth Analysis and Fielded Repairable System Analysis

RGA 7 has just been released, and the software is packed with new features to support more powerful applications of reliability growth models in developmental testing and in the analysis of fielded repairable systems. In this article, we present a brief overview of some of the most exciting new features in RGA 7.
Complete Reliability Growth Planning and Analysis Across Multiple Test Phases

Traditional reliability growth analysis models consider the data from a single phase of developmental testing. However, a reliability growth program is often conducted across multiple phases. RGA 7 now offers an array of new analysis and management tools based on the Crow Extended and Crow Extended - Continuous Evaluation models, which provide the appropriate calculations for reliability growth program planning and multi-phase data analysis.
- Crow Extended Model for Reliability Growth Planning
The first step in planning and executing an overall reliability growth program is to set an idealized reliability growth curve and a planned MTBF goal for each phase of the program. With this approach, test data can be tracked against the goals so that early warning signs can be identified in time to make significant changes in order to meet the final MTBF goal for the product. RGA 7 utilizes the Crow Extended model for reliability growth planning. Unlike traditional planning models, such as MIL-HDBK-189, the Crow Extended model for reliability growth planning provides additional inputs that allow the model to account for a specific management strategy and for delayed fixes with specified effectiveness factors. The model also provides a second curve that accounts for the average fix delay, which is the amount of test time from when a failure mode is discovered until the fix is implemented in the units under test. A growth planning plot in RGA 7 is shown next.

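To give a feel for what an idealized growth curve looks like, here is a minimal Python sketch of the classic idealized curve (in the style of MIL-HDBK-189/Crow). It is not the full Crow Extended planning model described above, which additionally accounts for the management strategy, effectiveness factors and average fix delay. The parameter values and phase end times are hypothetical.

```python
import numpy as np

def idealized_mtbf(t, m_i, t_i, alpha):
    """Classic idealized growth curve (MIL-HDBK-189 / Crow style).

    m_i   : initial MTBF over the initial test segment
    t_i   : length of the initial test segment (hours)
    alpha : growth rate (0 < alpha < 1)
    """
    t = np.asarray(t, dtype=float)
    return np.where(
        t <= t_i,
        m_i,                                         # flat portion up to t_i
        (m_i / (1.0 - alpha)) * (t / t_i) ** alpha,  # growth portion
    )

# Hypothetical plan: 100 hr initial segment with a 50 hr initial MTBF,
# growth rate alpha = 0.3, and phase end times at 500, 1200 and 2000 hours.
for t_end in (500, 1200, 2000):
    goal = float(idealized_mtbf(t_end, m_i=50, t_i=100, alpha=0.3))
    print(f"Planned MTBF at {t_end} hr: {goal:.1f} hr")
```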
- Reliability Growth Analysis Across Multiple Test Phases Using the Crow Extended - Continuous Evaluation Model
The Crow Extended -
Continuous Evaluation model is designed for
analyzing data across multiple test phases, while
considering the data for all phases as one data set.
It provides the flexibility to model the practical testing situation in which corrective actions may be applied immediately at the time of failure, later during the same test phase, between test phases, during a subsequent test phase, or not at all. The Crow Extended - Continuous
Evaluation model is not constrained by the
assumption that testing will be stopped when fixes
are applied during a test phase or that all BD modes
will be corrected at the end of the test. Based on
this flexibility, the end time of testing is not
predefined and the model can be continuously updated
with new test data. This is the reason behind the
name "continuous evaluation."
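The full Crow Extended - Continuous Evaluation calculations also use the failure mode classifications and effectiveness factors, which go beyond a short snippet. As a rough illustration of the underlying idea of treating all phases as one data set, the sketch below pools hypothetical failure times from several phases and fits the basic Crow-AMSAA (NHPP) model, from which a demonstrated MTBF can be obtained; the data values are made up.

```python
import math

def crow_amsaa_mle(failure_times, t_end):
    """MLE of the Crow-AMSAA (NHPP) parameters for time-terminated data.

    failure_times : cumulative failure times, pooled across all test phases
    t_end         : total accumulated test time
    Returns (beta, lambda, instantaneous MTBF at t_end).
    """
    n = len(failure_times)
    beta = n / sum(math.log(t_end / x) for x in failure_times)
    lam = n / t_end ** beta
    intensity = lam * beta * t_end ** (beta - 1)   # instantaneous failure intensity
    return beta, lam, 1.0 / intensity

# Hypothetical failure times (hours) pooled across three test phases that
# ended at 500, 1200 and 2000 cumulative hours
times = [60, 150, 310, 470, 640, 900, 1350, 1750]
beta, lam, mtbf = crow_amsaa_mle(times, t_end=2000)
print(f"beta = {beta:.3f}, lambda = {lam:.4f}, demonstrated MTBF = {mtbf:.1f} hr")
```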
- Multi-Phase Plotting
The most powerful application of the Crow Extended - Continuous Evaluation model is in tracking reliability performance as the test progresses. RGA 7 allows analysis points at specified times to be plotted against the goals that were set using the reliability growth planning utility. An example of a Multi-Phase plot in RGA 7 is shown next. It includes the nominal and actual idealized growth curves and the planned growth at each phase (calculated with the Crow Extended model in the Growth Planning Folio), together with the demonstrated, projected and growth potential MTBF values at each analysis point and phase (calculated using the Crow Extended - Continuous Evaluation model in a Multi-Phase data sheet).

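The Multi-Phase plot itself is generated by RGA 7. Purely as an illustration of what such a tracking view combines, the following matplotlib sketch overlays a hypothetical planned growth curve, phase boundaries, and hypothetical demonstrated and projected MTBF values at a few analysis points; all of the numbers are invented for the sake of the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical planned (idealized) growth curve and phase boundaries
t = np.linspace(100, 2000, 200)
planned = (50 / 0.7) * (t / 100) ** 0.3        # planned MTBF vs. test time
phase_ends = [500, 1200, 2000]

# Hypothetical results at three analysis points
analysis_t = [500, 1200, 2000]
demonstrated = [90, 140, 190]
projected = [110, 165, 215]

plt.plot(t, planned, label="Planned growth (idealized curve)")
plt.plot(analysis_t, demonstrated, "o-", label="Demonstrated MTBF")
plt.plot(analysis_t, projected, "s--", label="Projected MTBF")
for pe in phase_ends:
    plt.axvline(pe, color="gray", linestyle=":")   # phase boundaries
plt.xlabel("Cumulative test time (hr)")
plt.ylabel("MTBF (hr)")
plt.legend()
plt.show()
```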
Operational Mission Profiles

During a development program, it is common practice for systems to be subjected to operational testing in order to evaluate the performance of the system under conditions that represent actual use. When the system must be tested for a variety of different mission profiles, it can be a challenge to make sure that the testing is applied in a balanced manner that will yield data suitable for reliability growth analysis. RGA 7's Mission Profile Folios are used to:
- Create and manage an operational test plan that effectively
balances all of the mission profiles that need to be tested.
- Track the
expected vs. actual usage during testing for all mission
profiles and validate that the testing has been conducted in a
manner that will yield data sets that are appropriate for
reliability growth analysis.
In addition, in order for the growth model to be
applied appropriately, RGA 7 can automatically
group the data at specified "convergence points," which are
pre-defined points at which the actual usage for each profile is
managed so that it meets the expected usage.
The next figure shows
the Mission Profile plot for the reliability growth testing of
a Multi-Function Printer (MFP). The profiles for
printing, copying and faxing are tracked against their expected
usage values throughout the developmental testing. Two
convergence points during the growth test and one at the end of
the test are used to make sure that at those points the test
data can be grouped and analyzed in a way that best simulates
actual field usage.

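As a simple illustration of the bookkeeping behind expected vs. actual usage tracking, the sketch below checks a hypothetical usage log for the printing, copying and faxing profiles against an assumed expected mix at a convergence point. The profile names follow the MFP example above; the usage numbers and tolerance are assumptions, and this is not how RGA implements the check.

```python
# Hypothetical expected usage mix for the MFP example (fractions of total use)
expected_mix = {"printing": 0.60, "copying": 0.30, "faxing": 0.10}

# Hypothetical accumulated usage (e.g., pages or operating hours) logged up to
# a convergence point during the growth test
actual_usage = {"printing": 520, "copying": 250, "faxing": 95}

def check_convergence(expected_mix, actual_usage, tolerance=0.05):
    """Compare the actual usage mix against the expected mission profile mix.

    Returns, for each profile, the expected fraction, the actual fraction and
    whether the two agree within the tolerance, so unbalanced testing can be
    flagged at the convergence point.
    """
    total = sum(actual_usage.values())
    report = {}
    for profile, expected in expected_mix.items():
        actual = actual_usage.get(profile, 0) / total
        report[profile] = (expected, round(actual, 3), abs(actual - expected) <= tolerance)
    return report

for profile, (exp, act, ok) in check_convergence(expected_mix, actual_usage).items():
    status = "OK" if ok else "REBALANCE"
    print(f"{profile:9s} expected {exp:.2f}  actual {act:.2f}  -> {status}")
```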
Design of Reliability Tests for Repairable Systems

Design of Reliability Tests (DRT) methods based on the parametric binomial, non-parametric binomial or exponential chi-squared approaches are suitable for non-repairable items. However, when you want to design a
reliability demonstration test for a repairable system
that may fail and be restored multiple times during
operation, another method is required. The failure
process in a repairable system is considered to be a
non-homogeneous Poisson process (NHPP) with a power law
failure intensity. RGA 7 now provides a DRT
utility that models the failure process in this way,
enabling you to determine the amount of test time (or
number of test units) that will be required to demonstrate a
specified reliability goal (defined in terms of MTBF or failure
intensity at a given time) for a repairable system.
The next figure shows an example calculation of the required test time per unit for a reliability demonstration test (assuming beta = 1) to demonstrate an MTBF of 1000 hours (the instantaneous and cumulative MTBF values are the same when beta = 1) with an 80% confidence level. The number of test units for the demonstration test is 6, and the total number of allowable failures for the test is 2. The result is that the required test time per unit is approximately 714 hours.

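For the special case shown in this example (beta = 1), the power law failure intensity is constant, so the required total test time reduces to the familiar chi-squared relationship for demonstration tests. Assuming that interpretation, the short sketch below reproduces the article's figure to within rounding (about 713 hours per unit); scipy is used only for the chi-squared percentile.

```python
from scipy.stats import chi2

def required_time_per_unit(mtbf_goal, conf_level, n_units, allowable_failures):
    """Required test time per unit to demonstrate an MTBF goal when beta = 1.

    With beta = 1 the power law failure intensity is constant, so the total
    accumulated test time follows the chi-squared relationship
        T_total = MTBF_goal * chi2(conf_level; 2f + 2) / 2
    where f is the number of allowable failures. The total time is then
    split evenly across the test units.
    """
    dof = 2 * allowable_failures + 2
    total_time = mtbf_goal * chi2.ppf(conf_level, dof) / 2.0
    return total_time / n_units

# The article's example: 1000 hr MTBF goal, 80% confidence, 6 units, 2 failures
print(f"{required_time_per_unit(1000, 0.80, 6, 2):.1f} hours per unit")

# A small sweep over possible test plans, similar in spirit to the option
# table described next
for units in (4, 6, 8):
    for failures in (0, 1, 2):
        t = required_time_per_unit(1000, 0.80, units, failures)
        print(f"{units} units, {failures} allowable failures: {t:7.1f} hr/unit")
```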
In addition, if you wish to consider a range of
possible options for the number of units and number of
allowable failures, you can use the utility to generate
a table like the one shown next.

Monte Carlo Data Generation and SimuMatic

When analyzing developmental systems for reliability growth or when conducting data analysis of fielded repairable systems, it is often useful to experiment with various "what if" scenarios or to put together hypothetical analyses before actual data are available. This can help you determine the best way to analyze the data sets once they become available. With that in mind, RGA 7 offers two utilities based on Monte Carlo simulation: the Monte Carlo Data Generation utility and SimuMatic.
Monte Carlo data generation is a computational algorithm in
which we randomly generate input variables that follow a
specified probability distribution. We are interested in
generating failure times for systems that we assume have
specific characteristics. In this case, the failures are expected to follow a non-homogeneous Poisson process with a Weibull (power law) failure intensity, as specified by the Crow-AMSAA (NHPP) model in the case of reliability growth analysis and by the power law model in the case of repairable system data analysis.
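As a sketch of how such failure times can be generated, the snippet below draws cumulative failure times from an NHPP with power law mean value function lambda * t^beta by generating a unit-rate Poisson process in the transformed time scale. The parameter values are hypothetical, and this is only one standard way to simulate the process, not necessarily RGA's implementation.

```python
import random

def simulate_power_law_nhpp(lam, beta, t_end, rng=random):
    """Draw cumulative failure times from an NHPP with power law intensity.

    The mean value function is E[N(t)] = lam * t**beta, so mapping event
    times through lam * t**beta yields a homogeneous Poisson process with
    rate 1. We generate unit-rate exponential gaps in that transformed time
    scale and invert the mapping to recover failure times.
    """
    times = []
    s = 0.0
    while True:
        s += rng.expovariate(1.0)           # gap in the transformed time scale
        t = (s / lam) ** (1.0 / beta)       # invert the mean value function
        if t > t_end:
            return times
        times.append(t)

# Hypothetical growth parameters and a 2000 hr time-terminated test
random.seed(1)
failures = simulate_power_law_nhpp(lam=0.4, beta=0.6, t_end=2000)
print(f"{len(failures)} simulated failures; first few: "
      f"{[round(t, 1) for t in failures[:5]]}")
```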
With SimuMatic, reliability growth analyses are performed a
large number of times on data sets that have been created using
Monte Carlo simulation. Essentially, RGA 7�s SimuMatic utility performs a user-defined number of Monte Carlo simulations based
on user defined required test time or failure termination
settings, and then recalculates the growth parameters for each
of the generated data sets.
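Here is a simplified sketch of the SimuMatic idea, under the assumptions above: repeatedly generate time-terminated data sets from assumed "true" growth parameters, refit the Crow-AMSAA parameters for each data set, and take percentiles of the resulting instantaneous MTBF values to form simulation-based bounds (the kind of output shown in the figure referenced below). The generator and the fit are repeated here so the snippet runs on its own; all parameter values are hypothetical, and this is not RGA's SimuMatic algorithm.

```python
import math
import random

def simulate_power_law_nhpp(lam, beta, t_end, rng):
    """Draw cumulative failure times from an NHPP with power law intensity."""
    times, s = [], 0.0
    while True:
        s += rng.expovariate(1.0)
        t = (s / lam) ** (1.0 / beta)
        if t > t_end:
            return times
        times.append(t)

def crow_amsaa_mle(times, t_end):
    """Time-terminated Crow-AMSAA MLE for (beta, lambda)."""
    n = len(times)
    beta = n / sum(math.log(t_end / x) for x in times)
    return beta, n / t_end ** beta

def simumatic_style_bounds(true_lam, true_beta, t_end, n_sims=2000, seed=0):
    """Simulation-based two-sided 90% bounds on instantaneous MTBF at t_end."""
    rng = random.Random(seed)
    mtbfs = []
    for _ in range(n_sims):
        times = simulate_power_law_nhpp(true_lam, true_beta, t_end, rng)
        if len(times) < 2:
            continue                              # need >= 2 failures to fit
        b, lam = crow_amsaa_mle(times, t_end)
        mtbfs.append(1.0 / (lam * b * t_end ** (b - 1)))
    mtbfs.sort()
    return mtbfs[int(0.05 * len(mtbfs))], mtbfs[int(0.95 * len(mtbfs))]

# Hypothetical "true" growth parameters and a 2000 hr time-terminated test
low, high = simumatic_style_bounds(0.4, 0.6, 2000)
print(f"Simulated 90% bounds on instantaneous MTBF at 2000 hr: {low:.0f}-{high:.0f} hr")
```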
Monte Carlo simulation and
SimuMatic can be used in order to:
- Better understand
reliability growth and repairable system concepts.
- Experiment with the impact of
sample size, test time and growth parameters on analysis
results.
- Construct simulation-based confidence intervals.
- Better understand concepts behind confidence intervals.
- Design reliability demonstration tests.
The next figure shows simulation-generated confidence bounds for the instantaneous MTBF vs. time,
created using the SimuMatic utility.
