Degradation Analysis in Destructive Testing - Part II
[Editor's Note: This article has been updated since its original publication to reflect a more recent version of the software interface.]
Degradation analysis is an important data analysis technique for projecting failure data from the degradation history of a quality or performance characteristic that is associated with the reliability of a product. This approach typically requires that degradation be measured for multiple units over time. In some cases, however, destructive testing is necessary to obtain the degradation measurements, so taking multiple measurements over the life of the same unit is not feasible. The previous month's issue of the Reliability HotWire presented an approach for handling such a data analysis problem and predicting a failure distribution model using ALTA. In this article, an alternative method using Weibull++ is presented.
In the approach presented this month, we use the principles of life data analysis to fit distributions to the degradation data collected at each inspection time. These distributions are then used to obtain the degradation at different percentage points of each distribution. A degradation model is then fitted to each of the resulting degradation percentiles to obtain the time to failure for each percentage point. Finally, a distribution fitted to the obtained failure times gives the desired reliability metric.
A company collected the following degradation data over a period of 4 years, using a destructive test. The purpose of the investigation is to estimate the reliability at 5 years.
Note that in this data set, the lower the value, the greater the degradation. Failures are defined as units whose degradation measurement reached 150 or below. The data set shows that only 1 out of 44 units failed during the 4-year observation period. With such a heavily censored data set, degradation analysis provides a more accurate assessment of reliability than standard life data analysis techniques.
The first step in this approach is to model the variation of degradation obtained at each inspection time. In this example, a lognormal distribution is fitted to the data for each inspection time.
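As an illustration of this first step, the following Python sketch fits a two-parameter lognormal distribution to the measurements from each inspection time. Note that scipy is used here only for illustration (the article's workflow is carried out in Weibull++), and the measurement values are hypothetical stand-ins for the data set described above.

```python
from scipy import stats

# Hypothetical destructive-test measurements grouped by inspection
# time in years (the article's actual data set is not reproduced here)
measurements = {
    1: [410.0, 395.0, 430.0, 402.0, 418.0],
    2: [350.0, 362.0, 341.0, 355.0, 348.0],
    3: [300.0, 289.0, 310.0, 295.0, 305.0],
    4: [255.0, 248.0, 262.0, 240.0, 258.0],
}

fitted = {}
for t, values in measurements.items():
    # floc=0 fixes the location parameter at zero, giving the usual
    # two-parameter lognormal: shape = sigma, scale = exp(mu)
    shape, loc, scale = stats.lognorm.fit(values, floc=0)
    fitted[t] = (shape, scale)
```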
In the next step, various percentiles are calculated for the obtained distributions. They describe the degradation levels observed by certain percentages of units at different inspection times.
Note: To expedite the calculations involved in the above table, the General Spreadsheet and Function Wizard features in Weibull++ 7 were used.
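The percentile calculation itself can be sketched as follows, again using Python/scipy purely for illustration; the sigma and median values below are hypothetical stand-ins for the fitted lognormal parameters.

```python
from scipy import stats

# Hypothetical lognormal parameters per inspection time (years):
# (sigma, median), where median = exp(mu)
fitted = {1: (0.05, 410.0), 2: (0.05, 351.0),
          3: (0.05, 300.0), 4: (0.05, 253.0)}

# Because lower measurements mean greater degradation, the low
# percentiles describe the most-degraded fraction of the units
percent_points = [0.01, 0.05, 0.10, 0.50, 0.90, 0.99]
percentiles = {
    t: {p: stats.lognorm.ppf(p, s=sigma, scale=median)
        for p in percent_points}
    for t, (sigma, median) in fitted.items()
}
```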
The percentile data are then entered into a Weibull Degradation Folio, as shown next.
Next, we fit a degradation model to the degradation values for each individual percentage of the distribution. For this example, the exponential model (Y = b·e^(aX)) is selected based on prior knowledge of the degradation process.
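The exponential model can be fitted by linearizing it: taking logs gives ln(Y) = ln(b) + aX, which is an ordinary linear regression. Weibull++ performs this fit internally; the Python sketch below illustrates the idea using hypothetical percentile values for a single percentage point.

```python
import numpy as np

# Hypothetical degradation percentile values Y at inspection times X
# (years) for one percentage point of the fitted distributions
X = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([408.0, 350.0, 299.0, 256.0])

# Linearize Y = b * exp(a * X) as ln(Y) = ln(b) + a * X,
# then fit by ordinary least squares
a, ln_b = np.polyfit(X, np.log(Y), 1)
b = np.exp(ln_b)
```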
The degradation lines can now be used to obtain the projected failure times. These are displayed using the Show Extrapolated Values button on the Control Panel.
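Extrapolating a projected failure time amounts to inverting the fitted model at the failure threshold of 150: from Y = b·e^(aX), X = ln(Y/b)/a. A minimal sketch, using hypothetical model parameters for one percentile line:

```python
import math

# Hypothetical exponential-model parameters for one percentile line
a, b = -0.156, 477.0

# Failure is defined as degradation reaching 150 or below
threshold = 150.0

# Invert Y = b * exp(a * X) at Y = threshold
t_fail = math.log(threshold / b) / a
```

Since the threshold lies beyond the last observed degradation value, the computed time is an extrapolation past the 4-year observation period.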
The projected failure times for each of the percentage points are entered in a Weibull++ Standard Folio using a free-form (probit) data sheet. The software then fits a distribution to the times to failure, as shown next.
The free-form data set is modeled with a Weibull distribution. The next figure shows the fit of the model.
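For illustration, a Weibull fit to free-form (probit) style pairs of unreliability and time can be sketched via the Weibull probability plot, on which ln(-ln(1-F)) is linear in ln(t). The pairs below are hypothetical, so this sketch will not reproduce the article's 82.7% figure.

```python
import numpy as np

# Hypothetical (unreliability, projected failure time) pairs from
# the percentile extrapolation step
F = np.array([0.01, 0.05, 0.10, 0.50, 0.90, 0.99])
t = np.array([5.2, 6.0, 6.4, 7.5, 8.6, 9.5])

# Weibull probability plot: ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)
y = np.log(-np.log(1.0 - F))
x = np.log(t)
beta, intercept = np.polyfit(x, y, 1)
eta = np.exp(-intercept / beta)

# Reliability at 5 years: R(t) = exp(-(t/eta)**beta)
R5 = np.exp(-(5.0 / eta) ** beta)
```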
The reliability at 5 years is estimated to be 82.7%, as shown next.
Conclusion: In this article, we presented another approach for handling degradation data obtained through destructive testing, using degradation analysis and life data analysis techniques. This approach may occasionally lead to correlation issues when fitting a distribution to the free-form data. It can be useful, however, when the assumption of a common beta does not hold and the accelerated life testing analysis approach presented in the previous month's issue therefore cannot be used.
Copyright 2008 ReliaSoft Corporation, ALL RIGHTS RESERVED