A Blueprint for Implementing a Comprehensive Reliability Engineering Program
Section 4 of 7: Field Data
While reliability testing is vital to the implementation of a reliability program, it is not the sole source of product reliability performance data. Indeed, the information received from the field is the "true" measure of product performance, and it is directly linked to the financial aspects of a product. In fact, a significant proportion of field data may be more finance-related than reliability-related. However, given the importance of the link between reliability and income, it is important to ensure that adequate reliability information can be gleaned from field performance data. In many cases, it is not difficult to adapt field data collection programs to include information that is directly applicable to reliability reporting.
Some of the most prevalent types of field data are: sales and forecasting data, warranty data, field service data, customer support data, and returned parts/failure analysis data, as discussed below. These discussions will tend towards generalizations, as every organization has different methods of monitoring the performance of its products once they are in the field. However, the illustrations here give a good general overview of how different types of field data may be collected and put to use for a reliability program.
It should be noted at this point that there will usually be a "disconnect," or seeming lack of correlation, between the reliability performance of products in the field and the results of in-house reliability testing. A typical rule of thumb is to expect field reliability to be half of what was observed in the lab. Some of the specific causes of this disparity are discussed below, but in general the product will usually receive harsher treatment in the field than in the lab. Units tested in the lab are often hand-built or carefully set up and adjusted by engineers before the test begins, and the tests are performed by trained technicians who are adept at operating the product. Most end-use customers have neither a fine-tuned unit nor training and experience in its operation, leading to many more operator-induced failures than were experienced during in-house testing. Final production units are also subject to manufacturing variation and transportation damage that test units might not undergo, leading to yet more field failures that would not appear in the lab. Finally, the nature of the data that goes into the calculations will differ: in-house reliability data is usually a great deal more detailed than the catch-as-catch-can data that characterizes much field data. There are thus many sources of variation between field reliability data and in-house reliability test results. However, with careful monitoring and analysis of both sources of data, it should be possible to model the relationship between the two, allowing for more accurate prediction of field performance based on reliability testing results.
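One simple way to model the lab-to-field relationship described above is a single multiplicative adjustment factor estimated from past products for which both lab and field results are available. The sketch below uses hypothetical MTBF figures and assumes a single constant factor is adequate; real models may be considerably more elaborate.

```python
# Sketch (hypothetical numbers): estimate a lab-to-field adjustment factor
# from historical programs, then use it to predict field MTBF for a new product.

lab_mtbf = [5000, 8000, 6500]    # hours, from in-house testing (past products)
field_mtbf = [2400, 4100, 3300]  # hours, observed in the field (same products)

# Average ratio of field performance to lab performance across past programs.
k = sum(f / l for f, l in zip(field_mtbf, lab_mtbf)) / len(lab_mtbf)

new_lab_mtbf = 7000  # hours, from testing of a new product
predicted_field_mtbf = k * new_lab_mtbf

print(round(k, 2), round(predicted_field_mtbf))  # 0.5 3500
```

With these illustrative numbers the factor works out to about 0.5, which is consistent with the "half of lab" rule of thumb mentioned above; an organization's own historical data may of course yield a different factor.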
The sales and forecasting category of data is a general-use data type that serves as a basis for many other analyses of field data. Essentially, this information provides the analyst with a figure for the population of products in the field. Knowing how many units are in use during any given time period is absolutely vital to performing any sort of reliability-oriented calculation. An accurate count of failures in the field is of little use without a good figure for the total number of units operating in the field at that time.
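The bookkeeping involved is straightforward: the fielded population at the end of each period is the running total of shipments less any units known to have been removed from service. The sketch below uses hypothetical monthly figures; the field names and the availability of retirement counts are assumptions.

```python
# Sketch (hypothetical data): estimating the in-field population per period
# from shipment and retirement counts drawn from sales records.

monthly_shipments = [120, 150, 170, 160, 180, 200]  # units shipped each month
monthly_retirements = [0, 2, 5, 8, 10, 12]          # units removed from service

def field_population(shipments, retirements):
    """Cumulative units believed to be operating at the end of each period."""
    population = []
    total = 0
    for shipped, retired in zip(shipments, retirements):
        total += shipped - retired
        population.append(total)
    return population

print(field_population(monthly_shipments, monthly_retirements))
# [120, 268, 433, 585, 755, 943]
```

These per-period population figures are the denominators for the failure-rate calculations discussed in the warranty data section that follows.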
Warranty data is something of a catch-all category that may or may not include the other types of field data listed below, and may not contain adequate information to track reliability-related data. Since most warranty systems are designed to track finances rather than performance data, some types of warranty data may have very little use for reliability purposes. However, it may be possible to acquire adequate reliability information based on the inputs to the warranty data, if not the actual warranty data itself. For example, a warranty system may have ship dates and service call dates, but not actual time-to-failure data. In this case, we must assume that the failure time is approximately equal to the difference between the ship date and the service call date, even though the product may not have actually been in use for all of that time before it failed. This is, of course, a case of "garbage in, garbage out," and a poorly designed warranty tracking system will yield poor or misleading data regarding the reliability of the product. At the very least, there should be a degree of confidence regarding the raw number of failures or warranty hits during a particular time period. This, coupled with accurate shipping data, will allow a crude approximation of reliability based on the number of units that failed versus the number of units operating in the field in any given time period.
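The two approximations described above can be sketched as follows. The dates and counts are hypothetical, and the first function deliberately carries the caveat from the text: the ship-to-service-call interval overstates operating time if the unit sat unused before failing.

```python
# Sketch (hypothetical data): the two crude warranty-based estimates described
# above - an approximate time to failure from ship and service-call dates, and
# a per-period failure fraction from failure counts and the fielded population.

from datetime import date

def approx_time_to_failure(ship_date, service_call_date):
    """Approximate time to failure in days when only ship and service-call
    dates are recorded; overstates operating time if the unit sat unused."""
    return (service_call_date - ship_date).days

def period_failure_fraction(failures, units_in_field):
    """Fraction of the fielded population that failed during the period."""
    return failures / units_in_field

print(approx_time_to_failure(date(2015, 3, 1), date(2015, 9, 15)))  # 198
print(round(period_failure_fraction(12, 943), 4))  # 0.0127
```

Crude as they are, estimates like these at least allow failure trends to be tracked over time when the warranty system records nothing better.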
Field service data is connected with field service calls where a repair technician manually repairs a failed product during an on-site visit. This is a potentially powerful source of field reliability information, if a system is in place to gather the necessary data during the service call. However, the job of the service technician is to restore the customer's equipment to operating condition as quickly as possible, and not necessarily to perform a detailed failure analysis. This can lead to a number of problems. First, the service technician may not be recording information essential to reliability analysis, such as how much time the product accumulated before it failed. Second, the technician may take a "scattershot" approach to repair. That is, based on the failure symptom, the technician will replace all of the parts whose failure may result in the failure of that particular system. It may be that only one of the parts that were replaced had actually failed, so it is necessary to perform a failure analysis on all of the parts to determine which one was actually the cause of the product failure. Unfortunately, this is not always done, and if it is, the parts that have had no problem found with them will often be returned to field service circulation. This may lead to another potential source of error in field service data, in that used parts with unknown amounts of accumulated time and damage may be used as replacement parts on subsequent service calls. This makes tracking and characterizing field reliability very difficult. From a reliability perspective, it is always best to record necessary failure information, avoid using the "scattershot" approach to servicing failed equipment, and always use new units when making part replacements.
Customer support data comes from phone-in customer support services. In many cases, it may be directly related to the field service data in that the customer with a failed product will call to inform the organization. In some circumstances, it may be possible to solve the customer's problem over the phone, or to diagnose the cause of the problem well enough that a replacement part may be sent directly to the customer without requiring a service technician to make an on-site visit. Ideally, the customer support and field service data would reside in the same database, but this is not always the case. Regardless of where it resides, customer support data must always be screened with care, as the information does not always reflect actual problems with the product. Many customer support calls may concern usability issues or other instances of the customer not being able to properly use the product. In cases such as this, there will be a cost, or warranty hit, to the organization even though there is no real fault or failure in the product. For example, a product that is very reliable but has a poorly written user manual may generate a great many customer support calls because, even though the product is working perfectly, customers have difficulty operating it. This is a good example of one of the sources of the "disconnect" between in-house and field reliability data.
As was mentioned earlier, failed parts or systems are sometimes returned for more detailed failure analysis than can be provided by the field service technician. Data from this area are usually more detailed regarding the cause of failure, and are usually more useful to design or process engineers than to reliability engineers. However, it is still an important source of information regarding the reliability behavior of the product. This is especially true if the field service technicians are using the "scattershot" approach to servicing the failed product, replacing a number of parts which may or may not be defective. If this is the case, it is necessary for all of the returned parts to be analyzed to determine the true cause of the failure. The results of the failure analysis should be correlated with the field service records in order to provide a complete picture of the nature of the failure. Often, this correlation does not occur, or the returned parts are not analyzed in a timely fashion. Even if the analysis is performed correctly, there tends to be a significant proportion of returned parts with which no problem can be found. This is another example of a potential cause of the disparity between lab and field reliability data. However, even if the failure analysis group is unable to assign a cause to the failure, a failure has taken place, and the organization has taken a warranty hit. In the field, the performance the customer experiences is the final arbiter of the reliability of the product.
Copyright © 1992-2016 ReliaSoft Corporation, All Rights Reserved. Document updated January 2016.