July 31, 2007
Editor's Note:
This column by Ed Staats of Tavant Technologies is
the latest in an ongoing series of contributed editorial columns. Readers who
are interested in authoring future contributed columns can click here to see
the Guidelines for Editorial Submissions page.



Warranty Definition:

Rather than counting what warranty costs a company internally, manufacturers should instead look at what it costs the customer and the brand image externally. Warranty is an opportunity for a company to listen to its customers, and improve the integrity of its brand image.

By Ed Staats

Go ahead and write down your definition of warranty. I guarantee you it will change by the end of this article.

There are two main ways I've seen people define warranty. My money says you fit into one of these two buckets.

A typical company view of warranty: "A guarantee given to the purchaser by a company stating that a product is reliable and free from known defects and that the seller will, without charge, repair or replace defective parts within a given time limit and under certain conditions."

I think this view is self-serving and does a company's customer base a disservice. Unfortunately, this is how most people define warranty. It's what the dictionary says, right?

Here's my view. It's a market research view of warranty: "Warranties are an insurance policy applied to all customers during a period of time after a product is sold. The aggregated warranty data constitutes a log file of all interactions where a customer had to interrupt their normal daily routine to seek someone to repair their product to make it functional again."

The Market Research View

Here's an example of where I am coming from and why I subscribe to the market research view of warranty:

Wal-Mart Stores Inc. sells a brand of flat screen televisions called iLo. I bought one. Little did I know that I and many others would have issues with many of the controls failing to work after a short period of time (30 days to nine months). Sounds like a quality issue, right?

So, let me continue. The warranty is 90 days. Does it matter whether I am in the warranty period when the unit fails? The answer is that it doesn't. I view the failure as catastrophic, as most people would.

What has Wal-Mart done? To my knowledge, Wal-Mart appears to have worked with Initial Technology, Inc., the maker of the iLo product line, to perform what amounts to a silent recall. Here's how it works: when a consumer contacts you, you offer them a great deal, explaining how you will go ahead and fix the product they have even if it is outside of warranty. You apologize profusely for the trouble it caused them, maybe charge at most a small fee to fix the product, and then you smile and walk away.

Here's how it has worked so far with Initial Technology. I talked to Wal-Mart. They said to call corporate. Corporate puts you in touch with a third-party extended warranty partner, who in turn puts you in touch with the manufacturer/importer, Initial Technology, Inc. (which is not labeled on the TV). Initial Technology says they'll do me the favor of sending the materials to pack the TV for transport, they'll insure it, and they'll even ship it back for free! All I have to do is pay $10 for the shipping materials. So, here's the point. Don't get me wrong -- I'm happy I'm getting the TV fixed. But telling me you're going to do all this great work for free -- fixing something that I think should never have gone wrong -- and then charging me a measly $10 seems off. It's like ordering dinner at a nice restaurant and having your food served lukewarm.

I'd feel somewhat okay with Initial's approach, even though I think it falls about two inches short of smart policy. My real problem is with Wal-Mart. The issue isn't necessarily the supplier of the TV; it's a failure by Wal-Mart to validate products, and on top of that, a failure to make sure that the necessary processes are in place to ensure the success of those products in the marketplace.

This situation is admittedly an exception, but it's a great illustration of a real life scenario where the loss from lack of understanding customer expectations and needs could cost Wal-Mart a great deal of future revenue, not to mention bad press.

So, let me give you a brief overview of my experience and how to avoid ending up in the real-life situation I just described.

About 10 years ago, I began my career thanks to two people, Bharath "Vijay" Vijayendra and Mark Sickau at RDA Group. I was exposed to different ways to mesh surveys and survey design with statistical techniques to glean insight out of consumer data. It was my first exposure to consumer data, and I've stayed in that general arena since. What I learned to respect is the time that each consumer took to express their concerns, desires, and complaints about a product or service. It's easy to forget that when you look at a lot of information. It's not the devil that's in the details, it's the consumer.

Even after all these years, I still maintain that RDA Group has some of the highest standards in the market research community when it comes to serving its clients. However, as with any technical field, it was worth pursuing a graduate degree. The school I chose was Iowa State University, due to the status and history of its Department of Statistics.

Welcome to the Sned

Snedecor Hall, or the Sned as we referred to it, was getting old. Pipes rattled, tiling was coming up, and in the cold Iowa winters, wind would find many ways to sneak in through the window panes. I loved the character of the place; perhaps it's also where I gained my fondness for sweaters. It could also be that I had to find something to like as we waded through 80-plus hours of rigorous statistical work. One wing of the Sned housed the two professors, both focused on manufacturing, who had the greatest impact on my career: Stephen Vardeman and William Meeker.

Vardeman came from a highly theoretical background and quickly agreed with many of the great mathematicians that theory without practice is just theory. He bridged the gap between theory and practice, creating coursework and textbooks that contain exceptional material on statistics in engineering and quality control.

While Steve Vardeman's office was immaculate and highly organized, Bill Meeker's was dominated by piles of books, papers and pending work on reliability statistics (Weibull, accelerated life testing, etc.). Bill has written books on reliability and works with the Ames Laboratory. Bill was my major professor for my master's paper. Both professors believed strongly in teaching with only real life examples.

Data Has a Context

One particular conversation sticks out in my mind from when Bill reviewed the work I'd done on my master's paper. I'd just finished running a three-day process to pull our data together for analysis. So, when the question came, I wasn't ready. It was about using production versus sales dates. I had used sales dates to detect quality issues.

"Why did you do that?" he asked me. Bill certainly appeared to believe that I knew better. I honestly didn't; I had just begun dating my wife, and was probably thinking less about school than I needed to.

I vehemently defended what I did: sales dates are good for detecting quality issues. That's true -- unless you can use production dates. Production dates are much better because engineering changes, supplier mis-builds, and manufacturing mis-builds usually start on a specific production date. It was a bad move on my part not to consider the details of how products are manufactured. I made a poor assumption which, in the long run, would have meant a longer time to detect a quality issue and would have made it harder for an engineer to diagnose how the issue arose.
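The production-date point can be made concrete with a minimal Python sketch. Everything here is invented toy data -- the dates, the defect window, and the staggered sale-lag formula are all hypothetical -- but it shows the mechanism: a defect that begins on a specific build date produces a crisp edge when claims are grouped by production date, and a smear when grouped by sales date.

```python
from collections import Counter
from datetime import date, timedelta

# Toy data: one unit built per day in June 2006; units built on or
# after June 10 carry a supplier mis-build and eventually fail.
defect_start = date(2006, 6, 10)
claims = []
for i in range(30):
    built = date(2006, 6, 1) + timedelta(days=i)
    sold = built + timedelta(days=20 + (i * 7) % 45)  # staggered dealer-lot lag
    if built >= defect_start:
        claims.append({"production_date": built, "sales_date": sold})

by_production = Counter(c["production_date"] for c in claims)
by_sales = Counter(c["sales_date"] for c in claims)

# Grouped by production date, every claim sits on or after one crisp
# date -- the exact build date an engineer should investigate.
print(min(by_production))  # 2006-06-10

# Grouped by sales date, the same claims are scattered weeks later,
# hiding the underlying change point.
print(min(by_sales), max(by_sales))
```

The engineer's question -- "what changed, and when?" -- has a one-line answer in the first grouping and no clean answer in the second.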

What I learned from that encounter is that data has a context. I had failed to familiarize myself with, let alone empathize with, the context of the data. If you don't do this, you force a solution that makes the wrong assumptions about the data you have. It's not picking a technology that makes the difference -- everything in analysis is about making the right assumptions about the data, turning the data into information, and then making sure that people can take the right action on that information.

I've seen a lot of broken processes, a lot of failed systems, and a lot of stressed people over my career. Usually it was because they were so close to the problem that the context of a solution was hard to grasp, or because they did not understand the analytic capabilities or technology that could be applied to turn their data into actionable information. Invariably, when a solution was created, people would disagree about the course of action, because they usually had conflicting pieces of information or an analysis that was roughly put together.

The better course of action is to find someone who has been intimate with data and processes before, and see if they want to get intimate with your data and your processes. See if the company can, in a rational way, augment your current processes rather than introduce a completely new one. Don't go by price. Don't go by customer base. Don't go by talk. Go by who wants to serve you in the long run. Get them to prove, before you buy, that they are the people you want to deal with. It's wiser to pay a bit of cash up front to develop a proof of concept that works for you than to sign up for a large investment right off the bat. After all, you drive a car before you buy it, right?

Engineering + Marketing = Improvement

In June 2000, I started with General Motors. In March 2005, I left GM for family reasons, not career reasons. I enjoyed GM. Ask me my choice for the most challenging industry and I'll tell you, hands down, automotive is it for me. It's tough, hard, requires planning, has the most demanding customer expectations, and requires the finest balance of finesse and force to get things done. I think that GM employs some of the best people in the automotive industry -- Bob Lutz, for example. But GM has been challenged by historical factors that take a long time to fix (a subject for a different time).

So, why is warranty especially important in automotive? Most people in the U.S. want quality and reliability. Those factors, along with design and, increasingly, fuel economy, are the primary drivers of purchase behavior. Consumer Reports exists to evaluate factors such as quality and reliability, and it has a huge impact upon sales. The value of warranty in automotive, therefore, is that it is a great indicator of quality and reliability, and it helps an automotive company understand Consumer Reports' assessments.

In my opinion, GM only nominally recognized the value of quality and reliability from the 1970s until the early 1990s. The legacy of that era did not respect the fact that a car is usually the first- or second-largest cash outlay a family makes. Unfortunately, I think the damage from that legacy still colors consumers' perception of GM today.

Here's a graph of GM's warranty, policy, goodwill, campaign, and recall expenditures over the past four years. The important thing to note is that this chart includes much more than just warranty costs. Notice how GM's costs have remained relatively flat, even though warranty terms have gone up during this time from one year to three years in many parts of the world; the company has ramped up sales in China (a brand-new market); and it has lengthened powertrain warranties to five years in the U.S. All these factors point to dramatic increases in expenditures, but that hasn't happened at all.


Figure 1
General Motors Corp.
Warranty Claims & Accruals, 2003 to 2007
(in US$ Millions & Percent)



I believe that GM began to embrace the value of quality and reliability roughly a decade ago. Warranty information was heralded as one of the most valuable insights into a product's quality and reliability. By 2000, it was ingrained in the company that warranty was useful for understanding quality and reliability perceptions. However, the warranty information wasn't very actionable. Engineering didn't have the view into problems in the field that it needed. If you subscribe to the view that warranty is simply a policy to ensure that a product gets fixed, then you don't see a problem, and you improve your product only pseudo-proactively (i.e., the Wal-Mart approach). If you subscribe to my market research view, then the problem is enormous (i.e., the consumer viewpoint).

I'll refer to a set of online courses from the Massachusetts Institute of Technology as I go through the rest of my experiences. These course notes and references help explain the rationale for why these issues are important within a company. I believe that it is both people and technology that change the way a company operates. A good presentation on this is Integrating Social and Technical Systems by Joel Cutcher-Gershenfeld and Thomas Kochan.

Warranty Metrics

Six months into my job at GM, I went from working on customer surveys to running the reporting of GM North America's warranty metrics. We tracked the performance of every vehicle line and every plant. We measured 50 of the most influential people in GM North America's operations, and everyone working on their teams.

In GM, metrics and targets rule. The quote "I do what my boss is measured on" is famous within GM. Therefore, this should have been the best way to improve quality and reliability in vehicles. However, by the time I entered the equation in late 2000, the organization was getting sick and tired of dealing with warranty data for two main reasons.

First, our warranty metric reporting process was very manual and error-prone, so people were questioning its validity. We had to stabilize the reporting process to eliminate problems caused by data quality issues. We managed to do so, but reporting still took two weeks, primarily because we were forced to extract data from an outdated mainframe system, which occasionally introduced data quality issues of its own. Eventually, we rolled off the mainframe and onto a centralized data warehouse, which stabilized quality and took our reporting process from two weeks down to a single morning with multiple coffee breaks. We ran data validation checks and diagnosed which product lines and which issues drove the changes between reports.

Second, our warranty data fluctuated in a consistent -- and misleading -- pattern. While our warranty metric was strategically and qualitatively correct, it violated the rule, proposed by Deborah Nightingale, that a metric needs to clearly articulate progress to target. For about three-fourths of each year, the metric showed green (good) to target. But by year end, the numbers were red. This left the organization feeling confident that it was hitting targets for most of the year, then feeling betrayed at year end when the targets were missed.

It took a year to develop a process that made the warranty data predictable against target. This was as important as data quality, because the key for any metric in a large organization is knowing exactly where you stand to target. We used advanced analytics to make the warranty data predictable against target, but as in any social organization, new information is questioned, and is accepted or rejected through the organization's accepted social behavior. The ability to assess standing to target ended up being controversial, which in GM meant a lot of proving answers to questions. But in the end, the validity and stability of the metrics proved keenly valuable to the organization.

As the stability of metrics versus targets straightened things out, people became interested in the metrics again, because status to target was now measurable. Now people wanted to know how to improve the metric. We had a great analysis system in place, but the user base was limited, it took three days of training to get up to speed, and it was easy for a user to make an analytical error.

So now is a good time to explain how GM is functionally organized from an engineering perspective, because this is critical. The organization is split both vertically (vehicle program teams), and horizontally (component areas such as chassis, electrical, exterior, etc.). The ability to know how a vehicle performed versus all others in the field was provided only on an ad hoc basis, usually within a vertical. There was no one easily accessible data repository that had all of the information for both vertical and horizontal performance in warranty. The only standard data that existed was in the verticals, and that data was limited to failures that happened shortly after the product was built (quality issues).

Steps were needed to create a standard that everyone could refer to, one that looked across verticals and horizontals for both quality and reliability issues. It had to change the way people worked, from magnitude to opportunity. Magnitude is the traditional problem-solving methodology; opportunity uses more of a statistical, or even actuarial, approach.

Simple Reporting Tools Needed

We created a reporting tool that anyone in the company could use; it took three minutes to explain. It used bar charts to show which component areas on a vehicle had the best and worst performance. Having standardized data in one place changed how the verticals and horizontals in the company approached their work. They could now focus on their top opportunities two or three years after products were built, and examine those opportunities against the rest of the GMNA fleet. As a result, engineering fixes on future programs were brought forward one generation in the design process: instead of problems being fixed once units were in production, or during the next redesign of the vehicle, they were being fixed in the current design process.

Now people knew where opportunity was and how to respond to it. Sounds great right? Not completely.

Demand on the server for the analytic system increased, and eventually the server slowed to a crawl. On top of that, people were getting conflicting results, because it was easy to apply different analysis techniques to the same question. So an analytic solution had to be developed, and there wasn't much time to get a system operational.

We managed to get a project going under the radar. We would be given hard drive space, but could not impact system performance. We also would receive no additional funding and had to make the project work with existing technology, which at the time was Excel and SAS. We instituted a phased approach to our solution, which was named SWIFT.

SWIFT was developed using iterative processes with the engineering, service, manufacturing, and purchasing organizations. It was operational in three months with a limited number of users, and was considered fully rolled out in under nine months, with two people working on it part time and over 250 users. Maintenance consisted of 20 hours a month. Today, SWIFT uses a Java interface with SAS on the back end for processing, and has over 1,000 users at all levels of the organization.

SWIFT uses an interface that makes analysis more role-driven and minimizes errors. It uses standardized data formats, and it labels all reports so that everyone knows what the data represents. On top of all this, it processes data 10 to 50 times faster than the power-user analysis system it replaced.

Has this yielded value for GM? Yes!

  • The value wasn't achieved because a system was put in place.
  • It wasn't achieved because people were reorganized.
  • It wasn't achieved because a great analysis system was implemented.
  • Value was achieved because people were empowered with the information that they needed to do their job in the format they wanted.

We assumed that the role of a reporting and analysis system is to help people make more relevant decisions in a faster, reproducible, standardized manner. It's not about telling people what to do -- that would be micro-management. The key to a business strategy is giving people the ability to do their jobs more effectively and efficiently.

What else has GM done? GM is in the process of replacing its current claims processing system to get better information from the field and improve relationships with dealerships. What made the new claims processing system financially viable was the analytic capability to use the data, not the fact that new data can be collected. My suggestion to any company looking to get a new claims system: first consider how to use it in a manner that gives people in your organization the ability to do their jobs more effectively and efficiently. Then you can prioritize what you want your new claims system to do.

Life after GM

In May 2005, I began working for SAS Institute Inc. on their Service Intelligence Center team. My role was to support the global pre-sales activities of our service intelligence center offerings. At the time I started, the only offering was the SAS Warranty Analysis solution. SAS has since grown the Service Intelligence Center to include a number of capabilities, of which fraud and revenue management were areas that I helped develop.

I traveled to a multitude of destinations in the U.S. and other countries. Before I go on, I'll digress briefly to talk about international markets.

There are many things to consider when working internationally. My television situation is a great example of how companies fail to understand the differences. In my mind, Wal-Mart had not focused on what customers in the U.S. expect in terms of quality. Their supplier, Initial Technology, Inc., did not understand the demands its product would face during use in the U.S., or did not know how to test reliability appropriately.

The common thread globally is that it is critical to maintain consumer focus and deliver the best possible service to your consumers. One key shift I see happening is that India, China, and other developing areas of the world are beginning to demand the higher quality of product and service that industrialized countries are already accustomed to. Still, even with that shift, one has to take regional context into account. For example, having parts on hand is critical in the U.S., where high up-time is demanded. Having parts on hand is also critical in India, but there the cost of repair is by far the more critical factor.

Also, customers use products differently. For example, drivers in India or China seem to use the horn on their vehicles more in one minute than people in the U.S. do in five years. Everything is different, yet processes are related if you have the context of your customers.

Being at SAS allowed me to see how many companies run their after-sales service processes. Few companies, in my opinion, have the technologies in place to manage warranty well. There are three confounding issues. First, service guys are generally good product guys. They know the products they support inside and out, how to fix them, and how to keep them running. Knowing software technology is not their area of expertise, nor should it be. Second, people are sometimes exposed only to their own piece of the warranty process. Third, many people view warranty as nothing more than a necessary cost. In other words, some companies take the stance that warranties are offered simply to placate the consumer rather than to differentiate the product. In my view, it's the difference between taking a painkiller to make a headache go away and treating the cause of the headache.

Most companies, and most IT companies as a whole, have made their money off of attacking point solutions. These point solutions don't provide the capability to improve warranty. Improving warranty requires more than that. It requires a vision of how and where to improve. It requires integration with the rest of an organization. It requires an understanding of the financial impacts. It requires processes around how to remediate quality issues. It requires an ability to grow and scale as the company's needs change. In short, it requires a market research point of view.

Here's a great point to illustrate what I mean by a market research view. Everyone focuses on a piece of technology for an issue, or a piece of the process, mostly because of their background and skill set. It's easy to miss the bigger customer perspective. For example, here's a great piece of info everyone should know about their company.

How many man-years do your customers spend getting your products fixed under warranty?

I'm not talking about your technicians. I'm just talking about your customers. Need a hand doing it?

Write down the number of repair orders you have under warranty. Now figure out the average time a customer spends getting a repair taken care of; I'd assume one hour is a minimum. For a conservative estimate, multiply the number of repair orders by that average time and divide by 2,080, the number of working hours in a year (40 hours a week for 52 weeks). That will give you an idea of how many man-years your customers spend getting products fixed under warranty.

Or, if you want a different statistic, take the number of customers who had repairs on your product and multiply it by 10. That's a rough estimate of how many people were told how bad your product is. In contrast, customers who did not have any repairs probably told only three people how good your product is.
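The two back-of-envelope estimates above can be sketched in a few lines of Python. The input figures here are purely hypothetical illustrations, not data from any real company:

```python
WORK_HOURS_PER_YEAR = 2080  # 40 hours/week x 52 weeks

def customer_man_years(repair_orders, hours_per_repair=1.0):
    """Man-years customers collectively spend getting warranty repairs."""
    return repair_orders * hours_per_repair / WORK_HOURS_PER_YEAR

def word_of_mouth(customers_with_repairs, customers_without_repairs):
    """Rough negative vs. positive word-of-mouth reach (10x vs. 3x)."""
    negative = customers_with_repairs * 10    # each unhappy customer tells ~10 people
    positive = customers_without_repairs * 3  # each happy customer tells ~3 people
    return negative, positive

# Hypothetical example: 100,000 warranty repair orders at 1 hour each,
# out of 1,000,000 customers.
print(round(customer_man_years(100_000), 1))  # 48.1 man-years
print(word_of_mouth(100_000, 900_000))        # (1000000, 2700000)
```

Even at these modest assumed numbers, customers collectively lose roughly 48 working years to repairs, and the negative word of mouth from the 10% who needed repairs rivals a third of the positive word of mouth from everyone else.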

Those numbers are pretty shocking right? It ought to change the way you think about warranty.

I've always thought it's odd that some manufacturers don't value warranty. Think about rework on an assembly line. Everyone wants to reduce rework, right? You have to because it impacts the functioning of the line. Warranty is rework in the customer's hands. The difference between warranty and manufacturing is that rework in the manufacturing process reports to the plant manager. In warranty, the plant manager reports to the customer. This means the risk and exposure to the company is much greater.

Life after SAS

At SAS, I saw the world and the world saw me. My wife and children did not. So I left the aftermarket service area and headed to H&R Block. The first thing you might think is that the two are worlds apart, but they aren't -- not when you view warranty the way I do. I simply moved to a different arena of market research. I got the chance to work on where offices should be located, how to optimize lease costs, how to maximize customer demand, and where market constraints affected operations. It was a very useful and refreshing time after leaving SAS.

Unlike many people in my line of work, I thrive on variety. I enjoy the challenge and learning that comes from working with a lot of different people and different processes. I've found that what makes me tick is solving a problem. Getting to the heart of a customer's problem and fixing it is not only challenging but extremely rewarding.

So I ended up at Tavant Technologies Inc. I'm responsible for growing our IQ platform, which allows people in any consumer-based business to understand key metrics, standardize the analysis of the data behind those metrics, and improve communication throughout the organization so the key metrics improve. The IQ platform also scales as your organization matures: we can grow not only with your data size but, more importantly, with your analytic capability needs.

Our capabilities and experience have allowed us to develop scalable and extendable solutions. We have two major solutions of note for a service organization. The first is the Tavant Warranty Management Solution, which handles warranty management and administration. The second is Service IQ, which allows a service organization to visualize how its key metrics are performing, drill into the detailed information behind each metric, and see, through business intelligence tools, where the opportunities to improve service metrics lie.

Back To You

All in all, I believe that everything in service is about changing a consumer's experience. If you do that, your consumers will make you profitable through loyalty, price point, and word-of-mouth advertising. Warranty, in my mind, is where consumers tell you what they want your product to be; it's where you get the chance to listen to them and fix their issues. The goal for you is to figure out how best to interact with your consumers and allow them to communicate with you. I'll leave you with two good quotes to ponder:

Samuel Johnson: "Distance has the same effect on the mind as on the eye."

Edwin Schlossberg: "True interactivity is not about clicking on icons or downloading files, it's about encouraging communication."

Let's end where we began: What is your definition of warranty?

   -- Ed Staats can be reached at ed.staats@tavant.com




