Reliability HotWire: eMagazine for the Reliability Professional

Issue 85, March 2008

Hot Topics
Degradation Analysis in Destructive Testing

[Editor's Note: This article has been updated since its original publication to reflect a more recent version of the software interface.]

Degradation analysis is an important data analysis technique for projecting failure data from the degradation history of a quality or performance characteristic that is associated with the reliability of a product. This approach typically requires that the degradation be measured for multiple units over time. However, in some cases, destructive testing is necessary to obtain degradation measurements and taking multiple measurements over the life of the same unit is therefore not feasible. This article describes an approach, using Weibull++ and ALTA, to handle such a data analysis problem and predict a failure distribution model.

Degradation analysis involves the measurement and extrapolation of degradation or performance data that can be directly related to the presumed failure of the product in question. Many failure mechanisms can be traced directly to the degradation of part of the product, and degradation analysis allows the analyst to extrapolate an assumed failure time based on measurements of degradation or performance over time. This approach requires that multiple measurements be taken on the same units throughout their lives. This is required in order to establish how the degradation progresses over time and to fit a model that projects when the critical degradation level (failure) will be reached. In some cases, this requirement is not feasible because obtaining a degradation measurement requires a destructive test; the tested units cannot be put back into the test (or into operation) and revisited at a later stage to assess the additional degradation that occurs.
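As a sketch of this standard (non-destructive) approach, a simple degradation model can be fitted to repeated measurements on a single unit and then extrapolated to the critical level. The measurement values below are hypothetical and chosen only to illustrate the mechanics; the log-linear model form is one common choice, not the only one:

```python
import numpy as np

# Hypothetical repeated measurements on ONE unit (standard degradation analysis).
# Degradation here decreases over time; failure occurs at the critical level.
times = np.array([1.0, 2.0, 3.0, 4.0])            # years
measurements = np.array([740.0, 560.0, 430.0, 330.0])

# Fit a simple log-linear degradation model: ln(y) = c0 + c1 * t
c1, c0 = np.polyfit(times, np.log(measurements), 1)

# Extrapolate to the critical degradation level (150 in this article's example)
critical = 150.0
projected_failure_time = (np.log(critical) - c0) / c1
print(f"projected failure time: {projected_failure_time:.2f} years")
```

Repeating this fit for every unit on test yields one projected failure time per unit, and a failure distribution can then be fitted to those times. It is exactly this per-unit repetition that destructive testing makes impossible.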

Note: There are other situations where such a data analysis problem is encountered even though destructive testing is not involved. For example:

  • In the case where degradation measurements are taken at different stages in the field operation but not on the same sample of units because it is difficult or impossible to locate the same units again the next time measurements are to be taken
  • In the case where an analyst had only one opportunity to collect data. In this situation also, only one degradation/time value is available per unit.

One of the ways to handle such a data analysis problem is to use the average (or median) value of each set of random degradation measurements obtained at different stages and to fit one degradation line through these summarized values and project the expected failure time. With such an approach, only one projected failure time is obtained, which is not sufficient to derive a failure distribution.

In this article, we present a statistical approach that enhances the projection by allowing the randomness in the measured degradation to carry through to the failure projection, so that a failure distribution can be obtained. Instead of using the typical degradation approach, we model the percentage of units that have reached a certain degradation level by a certain time, and use that to derive a failure distribution model from which reliability metrics can be calculated.


A company collected the following degradation data over a period of 5 years, using a destructive test. The purpose of the investigation is to estimate the reliability at 5 years.


[Table: degradation measurements from the destructive tests, recorded at inspection times of 1, 2, 3 and 4 years (one measurement per unit).]

Note that in this data set, the lower the value, the greater the degradation. Failures are defined as units whose degradation measurement reached 150 or below. The data set shows that only 1 out of 44 units has failed. Degradation analysis would provide a more accurate assessment of the reliability compared to using standard life data analysis techniques with such a heavily censored data set.

In this example, we will use a life-stress distribution modeling approach, as commonly used in accelerated life testing analysis. We will treat time as the stress and the degradation as the random variable that is stress-dependent. In this approach, the whole data set is used simultaneously to fit one overall model. This relies on the assumption that the shape parameter used to model the degradation data (in this case, the log standard deviation parameter of the lognormal distribution) is the same across the different time measurements. Such an assumption might be verified using the likelihood ratio test or contour plots.
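As a sketch of how the common-shape assumption could be checked numerically outside of the software, a likelihood ratio test can compare a model with a separate log standard deviation per year against one with a single pooled value. The yearly samples below are simulated for illustration only (the article's raw data is not reproduced here); lognormal data is normal in log space, which keeps the likelihoods simple:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulated log-degradation samples for four inspection years (illustrative only),
# generated with a common spread so the test should not reject.
groups = [rng.normal(mu, 0.33, size=11) for mu in (6.63, 6.36, 6.08, 5.64)]

def normal_loglik(x, mu, sigma):
    """Log-likelihood of a normal sample (log-degradation under a lognormal model)."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2))

# Unrestricted model: each year has its own LogMean and LogStdDev (MLEs)
ll_full = sum(normal_loglik(g, g.mean(), g.std()) for g in groups)

# Restricted model: own LogMean per year, one common LogStdDev
# (the MLE pools squared residuals across all years)
n_total = sum(len(g) for g in groups)
pooled_var = sum(((g - g.mean()) ** 2).sum() for g in groups) / n_total
ll_common = sum(normal_loglik(g, g.mean(), np.sqrt(pooled_var)) for g in groups)

# LR statistic is asymptotically chi-square with (number of groups - 1) df
lr_stat = 2 * (ll_full - ll_common)
p_value = chi2.sf(lr_stat, df=len(groups) - 1)
print(f"LR statistic = {lr_stat:.3f}, p-value = {p_value:.3f}")
```

A large p-value indicates no significant evidence against the common log standard deviation, which is the same conclusion the contour plots are used to reach graphically.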


In Weibull++, to obtain the contour plots, we first need to fit a model to each year's degradation data. In this example, a lognormal distribution was fitted to the data for each inspection time.


Inspection Time    Fitted Distribution
1st Year           Lognormal (LogMean = 6.6332, LogStdDev = 0.3699)
2nd Year           Lognormal (LogMean = 6.3572, LogStdDev = 0.2883)
3rd Year           Lognormal (LogMean = 6.0818, LogStdDev = 0.3322)
4th Year           Lognormal (LogMean = 5.6381, LogStdDev = 0.3852)


The next figure shows the 90% confidence contour plots of the distribution models that represent the degradation data obtained at the different time stages.

The contour plot shows that there is no statistically significant evidence that the log standard deviation parameter of the lognormal distribution differs from year to year. This is verified by assessing whether a horizontal line can be drawn that intersects all of the contour plots.

We then use a life-stress model as the function that describes the relationship between degradation (life) and time (stress). We will also use a distribution model to describe the randomness of degradation reached at a certain time. The data set, as entered in ALTA, is shown next (due to the size of the data set, only a portion of the data is visible in the figure).

The lognormal is chosen as the distribution model for the degradation data in this example. The standard exponential model is used for the life-stress relationship. In ALTA, to select this life-stress model, we used the General Log-Linear model with a "None" transformation.

The life-stress model (degradation-time model, as we might call it in this context) is essentially described as follows:

ln[L(t)] = α0 + α1 · t

where L(t) is the median degradation at time t. Using the following reparameterization:

b = e^α0 and a = α1

the life-stress model becomes:

L(t) = b · e^(a · t)

This is the same as the exponential relationship, which is one of the commonly used models that describe degradation over time.

The calculated folio is shown next.

Note that the α1 parameter can be viewed by clicking the up arrow in the α parameter field. The shape parameter in ALTA is estimated assuming that it does not change with stress. The estimated model parameters are:

α0 = 6.9834
α1 = -0.3246
LogStdDev = 0.3214
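With these estimates, the log-mean of the degradation distribution at any time t follows directly from the log-linear model, and the reparameterized exponential form gives the same median degradation. A quick check at the 5-year point of interest:

```python
import math

# Estimated model parameters from the ALTA fit, as reported above
alpha0 = 6.9834
alpha1 = -0.3246
log_std_dev = 0.3214

# LogMean at time t under the log-linear (exponential) degradation-time model
t = 5.0
log_mean_5yr = alpha0 + alpha1 * t
print(f"LogMean at 5 years: {log_mean_5yr:.4f}")

# Equivalent exponential-relationship form: L(t) = b * exp(a * t)
b = math.exp(alpha0)
median_degradation_5yr = b * math.exp(alpha1 * t)   # equals exp(log_mean_5yr)
```

This reproduces (to rounding) the LogMean of 5.3605 used for the 5-year degradation distribution in the reliability calculation below.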

The next figure shows the linearized life vs. stress plot (or in our context: degradation vs. time) with the critical degradation line.

We can now calculate the reliability at 5 years, which is the probability that the degradation at 5 years remains above the critical level, i.e., Probability(Degradation5yrs > 150), using the Lognormal(LogMean = 5.3605, LogStdDev = 0.3214) distribution that describes the degradation at 5 years. This calculation can be done in Weibull++ by creating an empty folio, setting the distribution and parameter values and using the Quick Calculation Pad (QCP). Alternatively, this calculation can be done in a more straightforward way in ALTA using the following calculation (note that in our context, "Mission End Time" actually means the critical degradation).

The reliability at 5 years is 86.18%, as shown above.
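The same probability can be verified outside the software, since the reliability here is simply the lognormal survival probability at the critical degradation level. A sketch using SciPy, whose lognormal parameterization takes the shape s = LogStdDev and scale = exp(LogMean):

```python
from math import exp
from scipy.stats import lognorm

# Degradation distribution at 5 years, from the fitted model
log_mean = 5.3605
log_std_dev = 0.3214
critical_degradation = 150.0

# Reliability = P(degradation at 5 years stays above the critical level)
reliability = lognorm.sf(critical_degradation, s=log_std_dev, scale=exp(log_mean))
print(f"Reliability at 5 years: {reliability:.2%}")   # ~86.18%
```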

In this article, we presented an approach to handle degradation data obtained through destructive testing using a life-stress distribution model. A similar technique could be used to analyze degradation data from other situations in which it is not possible to measure degradation for the same unit at multiple points in time.

Copyright 2008-2014 ReliaSoft Corporation, ALL RIGHTS RESERVED