Early warning systems allow manufacturers to detect emerging issues before they become major liabilities. In a roundtable discussion, experts suggest the best strategies for implementing these systems and explain how to use them to reduce warranty costs, boost product quality, and increase customer satisfaction.
Over the past eight months, Warranty Week and SAS Institute have sponsored a series of online Webinars entitled "Making Warranties Work for You," which can be viewed at the BetterManagement.com Web site. With the approach of the fourth and concluding chapter of the series scheduled for Thursday, October 13, we thought it worthwhile to look back at the third installment.
The July 12 Webinar, also available as an online video playback, featured a roundtable discussion with Dr. William Meeker of Iowa State University, Laura Madison of General Motors, and David Froning of SAS Institute, moderated by Eric Arnum, editor of Warranty Week. This was followed by a live question and answer session hosted by Dorothy Brown of BetterManagement.com. What follows is a transcript of that Webinar, lightly edited for a newsletter format:
Dorothy Brown: Hi everyone and welcome to Better Management Vision. I'm Dorothy Brown, your host for this continuing series of Webcasts featuring views from thought-leading executives and consultants on critical management issues. Early warning regulations and programs such as the TREAD Act are creating a wealth of new information on product usage and performance. Progressive companies are seeing the business value of integrating this information back into their product design, manufacturing processes and service operations to improve product performance and customer satisfaction, reduce production and maintenance costs and minimize risk exposure. Our panelists today will be discussing the various uses of early warning systems and programs and how your organization can achieve both financial and operational benefits from a comprehensive early warning program. Our moderator for today's discussion will be Eric Arnum. Eric is the editor of Warranty Week, an online publication for the warranty professional. Launched in 2002, Warranty Week focuses on the manufacturing industry's aftermarket with analyses of warranty costs, regulatory reporting, market value and warranty product and management trends. Eric joins us now on the other side of the studio along with the rest of our panel. Eric, welcome.
Eric Arnum: Thank you Dorothy.
DB: We also welcome our first panelist Dr. William Meeker, Professor of Statistics and Distinguished Professor of Liberal Arts and Sciences at Iowa State University. Among his many accomplishments Bill is a Fellow of the American Statistical Association, an elected member of the International Statistics Institute and has twice won the American Society for Quality Wilcoxon Prize. Bill, it's a pleasure to have you joining us.
Dr. William Meeker: Thank you Dorothy.
DB: And our next panelist is Laura Madison. As a quality warranty data manager for General Motors, Laura's mission is to acquire and analyze warranty data, to monitor performance, drive product and process improvements and inform resource allocation decisions for the global company. Laura's group is tasked with providing the data needed to drive warranty cost reduction and improved customer satisfaction. Laura, welcome to our panel.
Laura Madison: Thank you. It's nice to be here.
DB: Rounding out our panel today is David Froning, Product Manager for SAS Warranty Analysis. David works in partnership with manufacturers, suppliers and industry organizations to develop and implement solutions for manufacturing industry issues such as warranty, quality and early warning. Through participation in industry groups and independent research he works with manufacturing companies to develop best practices for technology applications. David, thanks for joining our panel today.
David Froning: Thank you very much.
DB: Now before I turn the reins over to Eric I'd like to remind you that the views expressed in this Webcast reflect our panelists' opinions and are not intended, nor should they be construed, as offering legal advice, and are not to be considered as authority on any legal proposition. These opinions are offered only as commentary intended to promote debate and reflection on the issues discussed. And we hope you'll take an active part in our Webcast by sending us your questions. You can submit questions at any time by clicking on the submit question button on your screen and typing in your question. Now I'd like to turn the discussion over to Eric Arnum and our panel. Eric.
Eric Arnum: Thanks Dorothy. The terminology of early warning came out of the desire for government auto safety regulators to catch safety issues sooner through the careful monitoring of warranty claims and other data. Nowadays, independent of the TREAD Act or any need for compliance, all kinds of manufacturers are beginning to look at early warning reporting as a way to not only reduce warranty costs but also to boost product quality and brand reputation. But early warning reporting means different things to different people, depending on the industry and other factors. Bill, how do you define early warning?
Dr. William Meeker: Well I've gotten involved in warranty problems over the years primarily because companies have recognized that there was an important issue. At that point in time, typically everybody in the company knew there was an issue, and the real question was, how much is it going to cost us? So this requires using various different statistical tools in order to understand the magnitude of the problem and how long it's going to last. A secondary question that almost always came up is, gee, what would the cost have been if we had detected this problem much earlier? So detecting the problem as early as possible, given the useful information in the warranty database, is how I would define early warning systems.
EA: Laura, how would you define it?
Laura Madison: Typically early warning is looked at as finding something very soon after the product's manufactured. But we look at it a little differently. We look very closely at early warning not only from the time that our vehicles are produced, but also over the life cycle of the vehicle, in terms of finding a problem sooner than we would normally find it through typical data sources. So that's how we would look at it.
EA: What would your definition be Dave?
David Froning: Well I think the definition is the same, but I would expand it a little bit and focus on both sides of it. One is the data: making sure that you've got all the data sources that are available within the company pulled together and integrated for analysis. So whether that's call centers or survey data, tech lines, or shop floor quality data, whatever information you've got that could be even a leading indicator of a warranty event is key to starting to predict warranty events. And then on the other side, once you know that you've got an issue, we've got to take that next step, because just knowing about the problem isn't sufficient. You have to define it well enough so that you know it's a specific combination of two parts when they're put together, or a specific supplier and plant combination, so that your problem solvers can take that next step and solve the problem.
EA: Laura, how can companies go beyond just the regulatory and reporting usage to get more business value from their early warning programs?
LM: I think in our case, obviously we're meeting the requirements for reporting the numbers to the government for TREAD in this case. But also it's the integration of that data, and getting that integrated information back into the development process to the people that need to know. So that's getting information back not only to our manufacturing process but back into suppliers, their manufacturing process, their design process, our design process and all the way through our development cycle. So looking at all of those sources of information, integrating it, and providing fast feedback are the ways we need to provide value with the data.
DF: And I think on top of that the regulation generally mandates that you pull the information together so that you can do some basic reporting. But going beyond that and building an early warning layer on top of this data that you've been forced to pull together is really the advantage that you can take from this thing that you've been forced to do.
WM: Yeah, again from a regulatory point of view companies are required to report numbers. What's really important though is to take those numbers, take those data, and somehow turn them into useful information for developing important strategies that can be used to keep costs down, in this case particularly warranty cost.
EA: Let's pick up on that. Dave, what do you think are the most important strategies to implement wider use of early warning information through a company?
DF: Well, I really see it as three key pillars. One is integration. Bring the data together from all of those different data sources so that you have them in one place and ready for analysis. The second is analytics. Putting a layer of analytics on top of the data to filter out the noise. There's an awful lot of noise in warranty data and these other data sources. Most of them weren't created for problem solving. They were created for other purposes such as paying the servicer or the technician. So being able to filter out the noise in the way the data are collected is key. What the analytics provide is that filter. And then third is automation. We see in a lot of companies that we work with that they have great people in place and some of them even have great analytics in place but it's an enormous manual process to detect those new issues. They have to look through hundreds of charts. They have to do a standard monthly job that takes almost all month to complete before they can identify the issues. So automating that process to let the problem solvers know where the new issues are so they can go out and do what their job is, solve a problem, not look for it.
WM: Speaking as a statistician, I would like to follow on David's comments on the analytics. It's a very important component of early warning systems. It's a complicated statistical problem. We've got all this data in a huge database or data warehouse, coming in regularly, being updated frequently, and what's needed is some method of finding the signals in this very noisy background. I liken it to the problem of detecting an airplane with radar. It's really very similar statistically, and there are two different strategies. One is just to find some sort of aberration in the data that's different, that stands out. If you know, however, what you're looking for -- if there's a signature for the problem, and often there is in early warning detection -- then you can design your filters to find that kind of a problem and really increase the sensitivity and keep down the false alarm rate. Beyond that, I'd like to say in terms of strategy, ten to fifteen years ago many companies were already beginning to think about this as kind of an early detection problem, and they were trying to go it alone and to do other warranty-type work alone. Today that's really not the way to go. There are a number of specialists out there -- solution providers -- who have developed very comprehensive systems to handle many different kinds of warranty problems. Companies should be talking to those people, in my opinion, to get the most value from the data in their warranty databases.
EA: Laura what do you think should be the best strategy?
LM: In our case, clearly we have gone from having a very small number of people looking at and using data to expanding that three and four times just in the past couple of years. And getting data that's useful right to the engineer that's responsible for that component, right to the supplier that provides us with the part, has been one of our key strategies. But in getting lots of data to lots of people, as valuable as that is, we wrapped that in a common process so that we don't have people just kind of in a free-for-all figuring this out on their own. Because there is a lot of noise in the data, and we do have to have the ability to filter that out with certain analytics so that we can provide what's really an issue to the right person. So people, common process and methods, and also being able to have a system that's flexible. That's been another key, because as the business environment changes we don't have the ability to go back and rewrite IT programs and systems every year. So having the flexibility in the systems has been another one of our key strategies.
EA: Well, I won't ask about GM specifics, but in general, what do you think are the most significant barriers to the expansion of early warning systems inside a company?
LM: At least in the manufacturing space, there are two things. One is integrity of the data. I mean at some point, somewhere, somebody's putting data into an information system. And so getting that valuable information into that system is key. Another barrier is just the complexity of the data. Looking at all of the elements -- in our case we've got, when was a vehicle produced? How many parts do we have on a vehicle? How long is the warranty period from which we are collecting data? Is that 90 days, like it might be on a printer, or in our case we've got some components of the vehicle warranted up to 10 years. A lot of complexity. A lot of opportunity in there. Those are some of the barriers, and trying to simplify that data and make it useful for more people again comes back to some of the strategies.
DF: Well, I guess the two biggest barriers that I see: one is on the simplification side. As you do try to get this information out to more people who can actually take action and solve a problem, it's key that you present it in a way that's understandable but also leads them to the right decision. So I think the balance between simplicity and analytics is key. Not just providing reports per se, but giving them charts and information that points them in a direction, to say that this plant is significantly worse than that one, or it's when you combine this part with that part that it's significantly worse than the other combinations; to understand that sometimes one failure is significant but sometimes one failure out of a thousand isn't. So it's very important to walk that line between simplicity and accuracy of the message. And the other big barrier that I see is just a mindset in general that reporting is sufficient, that analytics aren't necessary. Generally we don't see that much with the people who are actually in the trenches trying to solve problems and looking for issues. They understand the value of analytics. It's generally more of an IT issue, where they know they've got reporting tools in place and they don't understand why the business users can't just use the tools they've got today to solve the problems.
EA: Bill, same question.
WM: From my observation, on a much more simplistic level, talking with managers from companies who have not yet implemented an early warning strategy, the main barrier is cost. It's going to cost some money to get this whole thing set up. But once you demonstrate the potential savings of having such a system then I think that barrier comes down pretty quickly.
EA: Well, cost is one thing, but what about calculating the business value of an early warning system? Do companies include the value of providing the information to the divisions and pushing it out into the field? How do you begin to define the benefits of early warning?
WM: The benefits of early warning can be difficult to quantify. The reason is that we're trying to quantify something that we haven't seen yet -- a serious reliability problem that's beginning in the field -- and then we're trying to quantify the savings from detecting it earlier. Again, we're trying to quantify something that perhaps hasn't happened and we hope never will happen. On the other hand, many companies have experienced such things, and we can point to a number of serious problems that, had they been detected much earlier, would have saved money. So I think that it's not too terribly difficult to justify the cost of moving in this direction in terms of the savings that can be achieved.
EA: Well Laura, how do you calculate the value of things that didn't happen?
LM: That is a challenge. When you find something the first time you have a problem or an issue, and you don't ship any of those vehicles, it doesn't occur in the field -- how many would we have had? You would want to repair anything that you thought might be a problem or might be suspect. So that is an ongoing challenge for us. But we also have other opportunities around prevention. In other words, taking a known issue and making sure that we've worked it back, so that it can be prevented on other products if it's a similar component, or fed back into a design process again in terms of prevention. There's also just the obvious reduced warranty costs that you don't end up incurring by taking the early warning and shortening that detect-to-correct time. Again, I think Dr. Meeker pointed out the value of finding it earlier -- two months earlier, three months earlier in the field is certainly where we calculate value. And then obviously there's always the improved customer satisfaction, and we see that in some of our external information.
EA: How do your customers do it?
DF: Most of them do focus on the days saved. So how much earlier can I detect a problem? How much faster can I define it, so that I can get to the root cause and put a fix in place? And I absolutely agree, it's difficult to quantify that in an ongoing process when you're looking at real data as of the last time it was updated. What we've done with several of our customers is actually take a look back in time. We take some historical data and identify when the customer knew the problem existed. And then we go through the data and say, okay, what data was available last month? What data was available a month before that, and a month before that? And generally when we go through that process, what we find is that using the early warning techniques we're talking about, we see a three to four month improvement on what actually happened. That way we know we've got an apples-to-apples comparison. Because when the early warning system tells me tomorrow that I've got this new problem, there's no way to guess when I would have known about it otherwise. But by actually looking at historical data, you can put a figure on that. And that three to four months savings usually correlates to a 10% to 15% reduction in warranty costs. So it can be a huge savings, and that is definitely where our customers focus. But as Laura was bringing up, there are so many things that early warning does for you. It helps you with customer satisfaction. It helps you increase sales because the customer and press and everyone else are happier about your product. Those things are extremely hard to quantify, and that's why people don't generally focus on them. But others that are a little more quantifiable would be things like minimizing the size of a recall, or preventing a recall to begin with. By knowing about the problem earlier and defining it more accurately -- so you know it was only the parts that were put with this other option at this assembly plant, but not the other assembly plant -- and really focusing in on exactly what the problem is, you only have to recall 10,000 units instead of 100,000 units.
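The look-back exercise Froning describes can be sketched in a few lines of code: replay the historical claims data month by month, note the first month a simple alarm rule would have fired, and compare that to the month the problem was actually found. Everything below is illustrative -- the claim rates, the three-month baseline window, and the three-sigma alarm rule are assumptions made for the sketch, not SAS's actual algorithm.

```python
import statistics

# Hypothetical monthly claim rates (claims per 1,000 units in service) for a
# part whose problem was actually recognized in month index 8.
monthly_rate = [1.1, 1.0, 1.2, 1.1, 1.9, 2.6, 3.4, 4.1, 4.8]
actual_detection_month = 8

def first_alarm(rates, baseline_months=3, k=3.0):
    """Replay the series month by month; alarm the first time a rate exceeds
    the baseline mean by k baseline standard deviations."""
    base = rates[:baseline_months]
    mu, sd = statistics.mean(base), statistics.stdev(base)
    for m in range(baseline_months, len(rates)):
        if rates[m] > mu + k * sd:
            return m
    return None

alarm_month = first_alarm(monthly_rate)
months_saved = actual_detection_month - alarm_month
```

On this invented series the rule fires in month 4, four months before the actual detection date -- the kind of apples-to-apples figure Froning describes correlating with a 10% to 15% warranty cost reduction.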
EA: So it's a question of time, and of fixing problems earlier than before? How about when you discover a problem -- when you find out something's wrong and it's not simply that something was made wrong, but you have a design problem? Laura, do you have any examples of when something was found and actually taken back, and somebody had to change the design of a product?
LM: We do that every day. In our company, anytime we find an issue we have a closed loop process where we document it and drive it back to its source, whatever that happens to be, whether that's in the manufacturing process, a supplier, or in an engineering design process. And we do that on an ongoing basis. But we also have that closed loop process in the overall vehicle development process. So at the very earliest concept of a vehicle, we begin to determine what the components on that vehicle would be. What are potential issues with it? If it's a component that we think is not worth trying to fix, we would select a different component. So we do those comparisons and understand which battery performs the best across our lineup and select the ones that are the best. Further, if it's a component that's unique to that vehicle that we really want to use, but there may still be some reliability issues, we would drive that back into a more robust design in terms of our Design for Six Sigma process. So we have a short-term closed loop process to take an early warning find back to a fix. But then we also build that in long-term, in terms of lessons learned, making sure that we prevent that issue and don't have it occur in future designs.
EA: Dave, same question.
DF: The keys are communication so that the design engineer as well as everyone else in the process has the information that they need to know that the problem exists and if it's been resolved, how was it resolved? So it's passing that information on to those engineers so that they can take advantage of it. But also then maintaining a technical memory so that they've got the history of what's been solved in the past and they don't have to go out and reinvent the wheel each time a new issue comes out.
EA: Or pass it on to the next generation.
EA: Bill, same question.
WM: Well, a very important part of product design in general is a reliability model. When a product is being designed, management ultimately wants to know, gee, what sort of reliability will we have when we take this product to the field? And a very important input to the reliability model will be the reliability of individual components. The way that information is typically obtained is by looking at previous field history. Certain components are used over and over again in different product models. And when an engineer looks back and sees that a particular component had sort of marginal reliability, they will use that to upgrade that component to improve the reliability of future products. So warranty data is extremely important for product design, and engineers really need to have the ability to take the information from the warranty database, drill down, and find out the reliability of individual components of the future product when they can. Now, if there are new components that haven't been tested before, then they'll have to go to other sources of information to get the reliability.
EA: Okay. Panelists great discussion. Thank you very much for your insights. Now I'd like to turn things back over to Dorothy Brown as we take questions from our viewers. Dorothy.
Dorothy Brown: Thank you, Eric. Now remember you can submit your questions by clicking on the submit question button on your screen and typing in the question. Here's our first question. Laura, this question is for you. This viewer asks, does the information you get from your early warning program influence the way you view or treat your customers?
LM: I would say definitely. In our business, the sense of urgency is very strong around finding and fixing any issue that could be a problem for our customer. And we take it to a very personal level. Most people that we know in our business are driving vehicles, and you don't want to have a family member or friend or neighbor have an issue. So we do take that personally and we do drive that back to finding and fixing that problem, getting it documented, getting it communicated, preventing it, all of those are very key to us from a customer satisfaction standpoint.
DB: Excellent. In this next question a viewer asks, we don't have any automated way to get early warning data back to the production and design groups. Is it worth building a system to do this or is there some other solution? Dave, this sounds like a good question for you.
DF: As Bill mentioned earlier, there are a few solution providers out there today that can help with this issue, SAS being one of them. I absolutely think it makes sense to use an out-of-the-box solution that can be flexible. As Laura pointed out before, that's key. But starting with best practices that have been learned in other implementations is really a way to hit the ground running. And I think the other key thing is to start with a reasonable first phase. Don't try to pull in every data source within the company and every possible thing you could do with it in step one. Focus in on warranty claims and sales data, or whatever makes sense for your company, as phase one. But start with something achievable so you can start to get the return on your investment as you grow the system.
WM: I might add to that: in addition to early warning, the warranty systems that have been implemented can do lots of other things, as we've been discussing here, such as provide information back to design engineers on an ongoing basis.
DB: Excellent. Now this next question is for you, Eric. This viewer asks, what are some of the best practices you've seen in early warning programs?
EA: I would say overall, the most successful programs I've seen are the ones that have somewhat modest goals in that you're not looking to automate the entire company and give everybody a new workstation and train everybody. You're looking to do something very specific. And what we're talking about here today is finding very specific problems and finding them earlier. Not a day earlier or week earlier but months before and then to act upon that information. Because so many times companies will detect a problem, they'll report a problem, they may even report a problem to the government, but they're not taking the feedback loop all the way around and bringing it back into their own company and going to that engineer and saying, you know, we need to have a meeting. We have to sit down and do something about this. So it's bringing in the right people at the right time for very modest and specific goals to get something done.
DB: Excellent. All right Dr. Meeker this question is for you. This viewer asks, how do you create and or define the radar signature of a specific early failure?
WM: Well, that's a challenging question. Let me try to answer it, however. There are two general ways of doing this. One is empirically: go back, look at the history, and see how these things have shown up in the past. So two examples might be an occurrence rate that builds very slowly -- you see a few, then they grow through a trend, and that trend becomes the signature. Other types of failure modes may just suddenly occur and jump up more rapidly, because they tend not to be related to use, which is highly variable, but just to how long it's been since they were manufactured. So that could give two different signatures. One is a ramp and the other is sort of a step up. And statistically, we can design algorithms to look for those things. We can also try to focus the search for those signatures at different points in time. I think most of these things would be discoverable very early, because there are some high-end users where those things are going to show up more rapidly, versus looking out a year or two. There's also more opportunity for savings if you can detect those things earlier. So again, we can put the focus early in life or we can begin to spread it out. And these are all statistical decisions that have to be made when designing the detection algorithm.
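Meeker's two signatures -- a slow ramp versus a sudden step -- can be told apart with a crude matched-filter idea: fit both shapes to the observed rate series and keep whichever leaves the smaller residual. This is a toy least-squares sketch, not the published algorithms he alludes to, and a real detector would also control the false alarm rate.

```python
import statistics

def classify_signature(rates):
    """Compare how well a linear ramp versus a single flat step explains the
    series, via residual sum of squares; return the better-fitting shape."""
    n = len(rates)
    xs = range(n)
    # Ramp: ordinary least-squares line through the series.
    mx, my = statistics.mean(xs), statistics.mean(rates)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, rates))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    rss_ramp = sum((y - (intercept + slope * x)) ** 2
                   for x, y in zip(xs, rates))
    # Step: best single changepoint with a flat level on each side.
    rss_step = min(
        sum((y - statistics.mean(rates[:t])) ** 2 for y in rates[:t]) +
        sum((y - statistics.mean(rates[t:])) ** 2 for y in rates[t:])
        for t in range(1, n)
    )
    return "ramp" if rss_ramp < rss_step else "step"
```

A steadily growing series such as `[1, 2, 3, 4, 5, 6]` classifies as a ramp, while `[1, 1, 1, 5, 5, 5]` classifies as a step.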
DF: And I think the other important thing that you were getting at is that there's not one specific way to do it. You need to make sure that you're looking at all of the potential patterns. Production periods, so you know when things were built, because some things are related to supplier and assembly plant issues. Time in service and usage, because some things just wear out over time and that's the key variable. And the month of the warranty claim, because some things are seasonal. You might see that when hot weather comes, this type of claim starts to peak. So there are a lot of different ways to look at it, and it's important to have systems looking for all of the patterns all of the time.
DB: Laura, this next question is for you. This viewer asks, what are the checks and balances in the production line up to the finished product data, so that after-sales problems could be eliminated or avoided altogether?
LM: In our production process we do have this notion of a closed loop system. If an operator has a problem, they use what's called andon, where they send a signal that they need assistance. And if the problem can't be corrected right there and does go further downstream, there are certain thresholds in our process that define at what point we have to call in more assistance or drive more activity around correcting a problem. There are many of those throughout our standardized manufacturing process, or global manufacturing system, and it culminates at the end where we have a customer-type assessment of the vehicle. So we have those checks and balances in the process, feeding forward if a problem can't be solved in-station, and then feeding back if a problem is received.
DB: Dave this next question is for you. This viewer asks, have you used early warning to understand how to improve the diagnostic process to reduce warranty claims that are inaccurate or misdiagnosed?
DF: One very common issue that we see with warranty data is that there's a lot of noise in it. And a lot of that can be from the technician not choosing the right code. Sometimes it's just easier to pick a different code, or you might get paid a little more if you pick a different code. So there are biases built into the data. But one key way that we see to filter that out is by looking at the text. Generally, on a warranty claim -- whether you're making a washing machine or a car -- the technician puts some sort of text in there. And by using text mining techniques to analyze that text and start to group claims together based on the textual content, you can start to re-code some of the things that maybe had a different code originally, or even create a new coding structure. Several of our customers take their current labor coding structure or part number structure and then use the text associated with those same claims to refine it further. For example, one of our white goods customers had codes for icemaker failures. Well, that could be a problem because it wasn't getting water, it wasn't getting electricity, or there was a mechanical problem. They were able to refine those codes, and once those codes are refined they can be put into the early warning system, so I can now identify that I've got a problem with icemakers that are not getting water. I can be much more specific, and it also gets rid of a lot of the noise so the signal is that much more obvious, because it's not buried in all the noise from all the other types of failures.
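The re-coding idea can be sketched with something far simpler than full text mining: match each claim comment against keyword lexicons for the refined sub-codes. The claim texts, sub-code names, and keyword lists below are all invented for illustration; a production system would use genuine text mining (for example, clustering on term frequencies) rather than hand-built lexicons.

```python
# Hypothetical claim comments, all filed under one generic "icemaker" code.
claims = [
    "icemaker not getting water, inlet valve blocked",
    "no water to icemaker, supply line frozen",
    "icemaker dead, no power at harness connector",
    "icemaker motor seized, mechanical jam in auger",
]

# Invented keyword lexicons for the refined sub-codes.
subcodes = {
    "ICE-WATER": {"water", "inlet", "supply", "valve"},
    "ICE-ELEC":  {"power", "electrical", "harness", "connector"},
    "ICE-MECH":  {"motor", "seized", "mechanical", "jam", "auger"},
}

def recode(text):
    """Assign the sub-code whose lexicon overlaps the comment text the most."""
    tokens = set(text.replace(",", " ").lower().split())
    return max(subcodes, key=lambda code: len(subcodes[code] & tokens))

refined = [recode(c) for c in claims]
```

With the refined codes in place, "icemakers not getting water" becomes its own signal instead of being buried under every other icemaker failure.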
DB: Laura this next question is for you. This viewer says that most techniques deployed today for finding emerging warranty issues depend on a stable, reasonably non-seasonal usage pattern for the end product. This viewer asks, has there been any effort around highly seasonal product usage?
LM: In our business we do have some products that are used in certain seasons more than others. For example, convertibles are driven more in the summer than they are in the winter. And so we do look at that in our data. There's a lot of opportunity too, as we go into global analyses, to look at climate and seasonal factors in the analysis of the data. So when we talk about signal-to-noise, we can find that a particular issue might manifest itself during a snow season or a snowstorm, blizzard conditions, or an ice freeze condition that happens in different parts of the country. But we would take that kind of information that might be happening in one part of the world and apply that early warning back to another country that might not yet have seen that season or condition in the cycle of the year.
DF: Absolutely, bringing seasonality into the equation is critical. And it's really two pieces. One is the seasonality of the failures. Air conditioners fail when it's hot out. They don't fail in the wintertime very often. (chuckles) But there are also products that have seasonal sales fluctuations. Lawn mowers sell a lot in the springtime. So you're likely to see a spike in warranty, but that's just because there were more lawn mowers that could have failed. Computer sales spike right before the school year starts. So there are definite patterns in both sales and claims. And bringing that into your early warning process is critical. Otherwise you're looking through a bunch of false alarms.
EA: You know, I want to add there, you're talking about false alarms. I've been collecting data for Warranty Week for two, two-and-a-half years now. The first year that I was doing it, I found that towards the end of the year, the claims went up. And I thought, "that's interesting." You know, maybe it's seasonal because of the weather. Maybe it's because of the usage on motorcycles and lawn mowers and convertibles and things like that, or the school year or something like that. But when it happened again, what I realized was that there also is an accounting season. Either it's the end of the month or the end of the year, where there's some pressure to get everything you can in. So while you're talking about false positives and false alarms, if you have a sudden spike in claims, it may signal nothing other than the approaching end of the calendar year.
WM: From the point of view of algorithm development, I'd like to make a comment about seasonality. My students and colleagues at Iowa State have developed some statistical methods for detecting early warranty issues. And like any other research project, you start off with something simple and get your arms around it. And so the work that we've done to date and published has been on non-seasonal processes. But we've been thinking about, and indeed have written down on paper, how one can do this for seasonal processes as well. And the seasonality can be caused by any of the things that have been brought up here in discussion. So it is certainly possible to develop early warning detection schemes that do account for seasonality, particularly if that seasonality is understood well, and that again becomes part of looking for the signal in the noise. You understand a little bit how the signal is going to behave seasonally, and that just becomes part of the detection algorithm.
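[Editor's note: a minimal sketch of the seasonality correction the panelists describe: deseasonalize monthly claim rates before comparing them to a threshold, so an ordinary summer peak doesn't trigger a false alarm. The seasonal indices, counts, and threshold below are invented; a real scheme would estimate the indices from prior years' data and tune the threshold against a target false alarm rate.]

```python
# Editor's sketch: deseasonalize monthly claim counts before applying
# a detection threshold, so routine seasonal peaks don't raise false
# alarms. All numbers here are invented for illustration.

# Average seasonal index per calendar month (1.0 = typical month),
# which a real system would estimate from prior years' claims.
SEASONAL_INDEX = [0.7, 0.7, 0.8, 0.9, 1.1, 1.3,
                  1.5, 1.4, 1.1, 0.9, 0.8, 0.8]

def adjusted_rate(claims: int, units_in_service: int, month: int) -> float:
    """Claims per 1,000 units, corrected for the month's seasonal index."""
    raw = 1000.0 * claims / units_in_service
    return raw / SEASONAL_INDEX[month - 1]

def flag_months(series, threshold):
    """series: list of (month, claims, units_in_service) tuples.
    Return the months whose seasonally adjusted rate exceeds threshold."""
    return [m for m, c, u in series
            if adjusted_rate(c, u, m) > threshold]

# July's raw spike (month 7) is mostly seasonal; November's (month 11)
# is not, so only November is flagged.
history = [(6, 130, 100_000), (7, 160, 100_000), (11, 140, 100_000)]
print(flag_months(history, threshold=1.2))  # -> [11]
```

Note that the rate is also normalized by units in service, which handles Froning's second point: a claims spike that merely tracks a sales spike washes out.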
DB: Dr. Meeker, this next question is for you. This viewer asks, how much historical data do you need to be able to accurately calculate cost avoidance?
WM: Well, (chuckles) my engineering colleagues... I'm a statistician by training. My engineering colleagues say that I have an insatiable appetite for data, and the more the better. However, I guess you use what you have in order to do this. And even though I'm a statistician and I like to have data, I recognize we don't always have the data to answer the questions. And so then we have to go to other sources of information beyond just the data. Expert opinion, for example, could be very effective in trying to do this sort of quantification. An expert can sit down and say, look, if this component fails, this is how much it's going to cost, this is how many components we have out there or how many product units we have out there, and they can figure out what the cost is going to be pretty readily.
DF: And the other thing that we've seen with a lot of manufacturers, especially products with shorter life cycles like consumer electronics and that sort of thing, is that they worry they don't have enough data for early warning. But a lot of times as we dig into it, we see that they re-use components. The next generation of this computer has the same keyboard the old one did, or it has the same video card even though the processor is different. So even though the entire unit may not be the same and may not be comparable over time, the components are. And really the level of analysis for early warning should be the components anyway. So you can often track those things over time. The other thing you can do is, even though it may be a new part, it is very similar to a part that you do have historical data on, and you expect the failure rate to be roughly the same or a little lower. So you can base assumptions on the data that you do have, even if it's not on the exact same part.
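[Editor's note: Froning's point about reused components can be sketched simply: pool claims by part number across product models via each model's bill of materials, so a part shared by short-lived models still accumulates a usable claim history. The models, part numbers, and claims below are invented.]

```python
# Editor's sketch: aggregate warranty claims at the component level
# rather than the product-model level, so a reused part (the same
# keyboard across computer generations) pools history for early
# warning. Bills of materials and claims are invented.

from collections import defaultdict

# Each model's bill of materials maps a failing subsystem to a part number.
BOM = {
    "laptop-2004": {"keyboard": "KB-100", "video": "VID-7"},
    "laptop-2005": {"keyboard": "KB-100", "video": "VID-9"},
}

def claims_by_part(claims):
    """claims: list of (model, failed_subsystem). Return claim counts
    keyed by part number, pooled across all models that use the part."""
    counts = defaultdict(int)
    for model, subsystem in claims:
        counts[BOM[model][subsystem]] += 1
    return dict(counts)

claims = [
    ("laptop-2004", "keyboard"),
    ("laptop-2005", "keyboard"),   # same KB-100 part as the 2004 model
    ("laptop-2005", "video"),
]
print(claims_by_part(claims))
```

Here the keyboard part KB-100 accumulates claims from both model years, while each video card is tracked separately.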
DB: Okay. Very good. Now this next question is for Eric. This viewer asks, what are good resources on the Web or magazines for identifying warranty related issues and trends and solutions?
EA: Did somebody really ask that? (Laughter) Thanks, Mom. (Laughter) Well, I don't want to unduly promote Warranty Week, but one of the things that Warranty Week does is provide headlines from around the industry and link to those headlines, so that people can just go quickly to a single source and read 18, 20, 25 magazines, a sort of digest of warranty news, just by going down the right-hand column of the Web page. I find things all over. In the issue that I have going out tonight I'm pulling in from the Peachtree Corners Weekly in Atlanta and the Fond du Lac Reporter, as well as some standards like Business Week and Forbes. USA Today, surprisingly, is always on top of warranty issues, especially in the automotive industry. Some of the daily newspapers are very good sources. The Associated Press is very good. Also, from abroad: I find that from India in particular, many of the computer magazines are very much on top of warranty issues. For whatever reason, warranty seems to be a very important topic for consumer electronics and computers in India. So the simple answer is, everybody once in a while will cover warranty. The key is to know when they do it and to go and take a look at it.
DB: Excellent. This next question is for you, Dave. This viewer asks, does the Design for Six Sigma methodology really have any impact on reducing product warranty issues?
DF: Oh, absolutely. The earlier in the process that you can solve a problem the better. So if you can design out the problem at the start, that's absolutely the most effective way for that problem never to exist to begin with. However, the reality is some problems are still going to exist. Things are going to fail. They may fail at a lower rate than they did before or it might be something else that fails that you hadn't planned on. And some things just take time to wear out and there's no way to test those things in the real world until they are in the real world. So absolutely, a focus on the design process and design for Six Sigma is important. But you can't let your guard down. You've got to have that early warning process in place to let you know when things do happen out in the field.
LM: I'd just like to add to that too. We have really started to take advantage of the Design for Six Sigma process, and as well as it works, there still are new innovations and new technologies coming into our space all the time. As we go into different types of power sources, there are a lot of new and different types of components, and complexity of components, that from a reliability standpoint need to be factored in. And, as Dave said, there are always going to be issues. The idea is to try and find them and drive them out in your design and development process as early as possible. But you still have to have some way to detect them from a customer perspective in the field.
DB: Laura, this next question is for you. This viewer asks, what are the challenges that you see in including suppliers in the early warning process and how have they been addressed?
LM: We view suppliers as partners, and we do allow suppliers to look at warranty data. And although we don't have all of our data sources opened up to suppliers, we do share on an as-needed basis. So we see the suppliers as an extension of our engineering and data organization: looking at the information, determining what the failure mode is, and then how they need to make that fix in their process -- whether it's an integrated supplier and it's a design issue, or it's just a supplier manufacturing issue in their assembly process.
DB: Okay. Great. Dr. Meeker, this next question is for you. This viewer asks, are there references that you can recommend on developing algorithms or statistical analysis to identify when a warranty or complaint issue becomes a significant trend that warrants corrective action?
WM: I don't know of very much other than the work that I've done with my colleagues. I mentioned earlier that when I was working on a reliability problem and doing prediction of warranty costs, managers were also asking how we could detect these things earlier. So my colleagues and I did work on the development of such detection algorithms. And we published a paper several years ago that appeared in "Technometrics" -- I don't remember the exact year, but it was probably two or three years ago -- and that paper, as I said, is first of all highly technical. The implementation of the methodology does require the specification of certain tuning quantities, which sort of correspond to this signature: do you want to be looking early, or do you want to be looking over a larger span of time? All of those things affect the probability of detection relative to a false alarm rate. So the usual strategy is to develop rules that take the data and compute certain statistics; if those statistics exceed a certain threshold, then we would say there is a problem. Determining what those thresholds are, depending on what you're looking for in terms of the signature, is, as I said, described in this technical paper. That's really the only thing that I know about that's out there in the open literature. I imagine there are a lot of companies and other organizations who have been developing these kinds of methods, but I've not seen them in the open literature.
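[Editor's note: the "compute a statistic, compare it to a threshold" strategy Meeker outlines can be illustrated with a basic one-sided CUSUM chart. This is not the published Technometrics method; the baseline, slack, and threshold below are invented tuning values of exactly the kind he says must be specified, trading detection speed against false alarm rate.]

```python
# Editor's sketch: a one-sided CUSUM over weekly claim counts. The
# running statistic accumulates excess claims above an expected
# baseline; when it crosses the threshold, we signal a problem.
# Baseline, slack, and threshold are invented tuning values.

def cusum_alarm(counts, baseline, slack, threshold):
    """Return the index of the first period that raises an alarm,
    or None if no alarm. Larger `slack` tolerates more noise
    (fewer false alarms) at the cost of slower detection."""
    s = 0.0
    for i, c in enumerate(counts):
        # Accumulate only sustained excess over baseline + slack.
        s = max(0.0, s + (c - baseline - slack))
        if s > threshold:
            return i
    return None

# Weekly claim counts: stable around 10, then a sustained upward shift.
weekly = [9, 11, 10, 12, 10, 15, 16, 17, 18]
print(cusum_alarm(weekly, baseline=10, slack=1, threshold=8))  # -> 6
```

A single noisy week resets toward zero, but the sustained shift starting at week 5 accumulates and trips the alarm at week 6 (zero-indexed).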
DF: I agree. As we were going through the product development process coming out with SAS Warranty Analysis a few years ago, we did a lot of research on what had been written on this topic, and there really hadn't been much. We found Dr. Meeker's papers and a few other sources, but there's not a lot out there. But I think the other thing to keep in mind is that it's great to understand the analytics and do that research, but a lot of that has been packaged by software vendors such as SAS to help you hit the ground running, so you don't have to build those models from scratch. And the other thing to keep in mind as you look at software providers is the difference between the black box approach and flexibility. There are software providers that do have a black box. You give them data, they turn around and say, okay, here are your issues. But you're not able to do the tuning, looking at all the different factors: production, claims, usage, seasonality -- all those different things that can be different for your product than they are for somebody else's product. So having that flexibility is key to looking for emerging issues and finding the signatures and being able to tune the system to look for them.
DB: Excellent. Dave this next question is for you. This viewer asks, how do you balance data analysis A) as an early warning tool and B) to document improvements? Can one system or analysis presentation do both?
DF: Well, my bias is toward early warning in that equation. Early warning is the critical thing there, to stop the bleeding. There's something going on out in the field; I need to go out there and remediate it so that that problem doesn't exist anymore. So that's always step one. Now once I know what the problem is and I've fixed it, I can document that. And that's a good thing, so someone else doesn't have to do it again later. But early warning, I think, has always got to take precedence of the two.
DB: Okay. Laura this next question is for you. This viewer asks does the increase in global sourcing have an effect on early problem solving?
LM: Absolutely. We are starting to look at leveraging global performance in making our new supplier selection decisions. And as we are sourcing globally, the data and the reliability performance of our components and parts and systems, and how those are integrated into the vehicles, is becoming a much more important factor in those decisions and is taking on a stronger role. As we move toward more global and less separate regional operations, I see that having a stronger impact.
EA: Let me add that in the computer industry, I've seen that there is such a thing as, you know, maybe getting too global with sourcing on commodity components, for instance, that are going into your printed circuit boards. You realize that it's this capacitor or this coil, and the first question you ask is, who made these for us? And the answer is, well, three or four different companies. You not only don't know which company made it for you, or which factory, or what week it was made -- how do you correct a problem when you don't know where the problem came from? Who made the bad parts is a simple question, but it cannot be answered because you're just mixing them into the bucket. It's just-in-time and you're [purchasing the] lowest-priced component of the day.
LM: That is a challenge, and we are working toward ways of actually drilling back to which component out of which factory went on this vehicle, built on this day at this time.
DF: And I think some of the new technologies like RFID will make that easier in the future so that you can track specific parts that went into specific products or specific subassemblies that went into assemblies and follow that whole genealogy and then have that traceability tied to the warranty data, so that when you do know this specific unit failed in the field you can trace back to everything that went into it.
LM: (off camera) Right.
DB: Okay. This next question is for everyone. And this will be our final question that we'll be able to take today. Dave, why don't we start with you responding and then we'll just work right down the line of the panel. This viewer asks, what recommendations do each of you have for companies that aren't required or don't yet have early warning reporting programs and should they implement their own? Dave let's begin with you.
DF: My answer to the first part of that question is, do it anyway. There is enormous value, and hopefully you understood from the panel today that even if you're not forced to pull this information together and do basic reporting on it, there is still an enormous amount of value in knowing about problems sooner, solving them faster, avoiding recalls, having better customer satisfaction, and more sales. There are just so many benefits. And as I mentioned earlier, most of our customers are seeing a 10% to 15% reduction in warranty cost, and that's millions and millions of dollars. And although, as Bill mentioned before, the solutions aren't cheap, they're a lot cheaper than not doing it. So I think the answer to the first part of that question is absolutely, they should do it. And as far as implementing their own, my bias obviously as a solution provider is to talk to us and we can help. But I think that makes a lot of sense. Before SAS I actually worked for an automotive manufacturer, and hitting the ground running is key. Starting from scratch and making your own just wastes an awful lot of time -- time that you can make up quickly by hitting the ground running with a packaged solution. The other point is that as you move forward in time, you get the upgrades that come along with a packaged solution, versus relying on your own internal resources to think about a new analysis or a new way of looking at the data.
DB: Laura, do you agree?
LM: I think I agree with everything you say. (Laughter) I recommend that any company leverage any source of data that they have, certainly to help improve their product performance and customer satisfaction. And it could even be early warning from a service perspective; it wouldn't necessarily have to be just from a manufactured product. But I also want to support what you're saying about developing your own methodology. It's very resource-intensive, and you have to have a specific skill set to be able to do that. And I think companies that are trying to move into this space that don't already have something today would probably be better off trying to purchase a solution that has already taken advantage of input from some of the industry leaders and built on feedback from the different companies that I know you supply to. Take advantage of that. Hit the ground running and move forward very quickly in terms of product enhancements.
DB: Dr. Meeker, your perspective on this?
WM: Well, again speaking as a statistician, most companies who warranty a product are required to have the warranty database for financial reasons. But that database contains an enormous amount of information that could be used for other purposes, one of them being early warning. I think it makes sense to take advantage of that information in the warranty database. How you do that is a more complicated question. But as I said before, it would seem to me that going to a provider who has already thought out how to map a generic warranty database into actionable information would almost have to be a cost-effective solution for any company that's spending a fair amount of money on warranty cost.
DB: Okay. And Eric?
EA: The TREAD Act was passed in 2000 and came into effect a couple of years later. There was always that carrot and stick with the TREAD Act. The stick was: if we find out about a problem, we're going to cause a recall and we're going to cost you a lot of money. The carrot is that companies, in order to achieve compliance, will put the systems in, and because these systems are in, it will benefit them in that they'll find out about their own problems before the government does. There's no TREAD Act for washing machines. There's no TREAD Act for computers. There's no TREAD Act for a whole bunch of other fields: medical equipment, and even airplane engines, and of course consumer electronics. But each of those industries can make the same use of the information. You almost wish that there were a TREAD Act for refrigerators and things like that, because they would benefit in the long run from having to achieve compliance and putting these systems in. Laura, I met you a couple of months ago at a meeting of the Automotive Industry Action Group, which now has a committee looking at early warning. But they are doing it just for the automotive industry, which is great... great for you. Some of the people that I work with in the computer industry in particular are coming to me and saying, we need one of those. We need a Computer Industry Action Group early warning committee. We need to do what the automotive industry is doing. And they are almost jealous that you're ahead of them. You're a couple of years ahead of them. So there are people watching what happens, you know, and wishing that they could get those benefits as well.
DB: Excellent. Well unfortunately that's all the time we have for questions today. I'd like to thank our panelists Dr. William Meeker, Laura Madison and Dave Froning and our moderator Eric Arnum for joining us today. If you have any additional questions or comments about today's Webcast, you can send those to firstname.lastname@example.org. We hope you'll join us for the fourth and final Webcast in this Warranty Week series on October 13th. Details have been posted on BetterManagement.com. Again, that's Thursday, October 13th at 11:00 a.m. Eastern time. For our panelists and for BetterManagement.com, I'm Dorothy Brown. Thanks for watching and have a great day.