Minnesota Legislature

Office of the Legislative Auditor - Program Evaluation Division

State Agency Use of Customer Satisfaction Surveys

Executive Summary (PR95-23)

October 23, 1995


Increasingly, state agencies are using results from "customer satisfaction surveys" as one measure of their performance. Some agencies included data from customer satisfaction surveys in their 1994 performance reports to the Legislature. Because the Legislative Auditor's Office is required to determine whether data in performance reports are valid and reliable, we decided to gain a better understanding of the methods agencies have used to gather customer satisfaction data and to assess the quality of the resulting data. (See Minn. Laws (1993), Ch. 192, Secs. 35 and 39-41, amended by Minn. Laws (1994), Ch. 632, Art. 3, Sec. 18, and Minn. Laws (1995), Ch. 254, Art. 1, Sec. 43, and Minn. Stat. §3.971, subd. 3.) We also decided to offer suggestions for future use of customer satisfaction surveys in performance reports. Our research addressed the following questions:

  • What methods should state agencies use to measure the satisfaction of their customers with agency services?

  • How well have state agencies conducted surveys of customer satisfaction?

  • Do performance reports contain accurate, complete data on customers' level of satisfaction with agencies' products and services? Are the data properly analyzed and interpreted?

To answer these questions, we reviewed published literature and manuals explaining customer satisfaction surveys and talked with experts in the field. From these sources, we distilled a set of guidelines that served as the basis of our evaluation of the agencies' surveys and presentation of results. Next, we interviewed staff from the 10 agencies in our study and reviewed documents describing the customer satisfaction surveys that produced performance measures in the 1994 reports. To the extent possible, we recalculated survey results and checked for discrepancies with the reports.

Background

We looked at customer satisfaction surveys for three main reasons. First, many agencies have used them or are planning to do so as one way to account for their performance. Second, the methods and procedures for valid customer surveys, which are needed to produce credible performance data, may be hard for some agency staff to implement without guidance. And, finally, we thought that future performance reports could be improved by our effort to explain and apply recommended principles for survey research.

The general purpose of including customer satisfaction in performance reports is to demonstrate how well state agencies are progressing toward the goal of service improvement. By regularly asking representative groups of customers about their level of satisfaction, agencies can produce careful, quantitative ratings of their performance at various points in time. For example, agencies might pose questions about the courtesy or timeliness of selected agency services.

Customer satisfaction surveys are a form of "feedback" from those who have received services. But feedback may assume many forms, and the conclusions one can draw from feedback depend on the strength and type of controls that have been placed over the collection of information. For example, casual comments from customers can offer insights that improve services, but a scientific, rigorous survey of all or a sample of a customer population is needed to yield results that can be generalized with reasonable certainty to customers as a whole.

For performance reports, a certain rigor is necessary since they are designed to help improve important public programs, provide accountability to the public, and inform policy makers who must decide how to allocate scarce resources. Also, only rigorous methods can provide the quality of information that agencies need to support their claims of good performance. Even then, when the best methods are followed, some error is inevitable. However, if surveys are properly conducted, they can produce valid, appropriate measures of performance. Otherwise, state agencies should use customer feedback cautiously, since results could be misleading.

Guidelines for Customer Satisfaction Surveys

We compiled a set of 24 guidelines for customer satisfaction surveys. These guidelines, based on the advice of experts, constitute the steps we recommend state agencies follow in planning surveys, identifying customers, constructing and asking questions, editing and archiving data, and analyzing data and results. The same steps are appropriate for practitioners in the public and private sectors. In our view, they are also the only effective means of producing data that can adequately inform the public and policy makers about customers' satisfaction with agencies' performance. For the most part, the guidelines are practical, economical, and easy to find in books and manuals.

Two concepts are particularly important in conducting valid customer surveys: (1) random sampling and (2) representativeness. Random sampling is the process of selecting random subsets of customers in order to draw conclusions about all customers of given types. No one may be drawn into such samples except by the laws of chance, which must be strictly invoked. Representativeness means that those who respond share important characteristics with all customers of given types. For example, representative samples of Minnesotans would include women and Twin Citians in close proportion to their share of the state population, or be statistically adjusted to offset differences.
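To make these two concepts concrete, the following minimal sketch (ours, not drawn from any agency's practice; the customer records, field names, and proportions are invented for illustration) draws a simple random sample and then checks one aspect of representativeness by comparing the sample's makeup to the population's.

```python
import random

# Hypothetical customer records; in practice these would come from an
# agency's complete, current customer list.
population = [{"id": i, "region": "metro" if i % 3 else "greater_mn"}
              for i in range(10_000)]

# Random sampling: every customer has an equal chance of selection.
random.seed(1995)  # a fixed seed lets an auditor reproduce the draw
sample = random.sample(population, k=400)

# Representativeness check: compare the sample's makeup to the
# population's on a known characteristic (here, region).
def metro_share(group):
    return sum(1 for c in group if c["region"] == "metro") / len(group)

print(f"metro share, population: {metro_share(population):.3f}")
print(f"metro share, sample:     {metro_share(sample):.3f}")
```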

Despite the most careful procedures, all surveys involve potential errors that can introduce uncertainty or bias. For the results to be credible, error must be reduced whenever possible, or at the very least agencies should make users aware of its potential impact. There are two basic types of error: sampling and nonsampling. Sampling error occurs unavoidably when only a fraction of the customer population is studied; it is commonly reported as the "margin of error," expressed as a number of percentage points. Common nonsampling errors include nonresponse (customers' failure to participate), measurement bias (respondents misinterpreting questions), and technical errors in tabulating data.
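For readers unfamiliar with how a margin of error is derived, the sketch below computes the conventional 95 percent margin of error for a proportion estimated from a simple random sample; the satisfaction rate and sample size are illustrative numbers of our own, not results from any agency survey.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent margin of error for a proportion p estimated from a
    simple random sample of size n (z = 1.96 for 95 percent confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: 70 percent of 400 sampled customers are satisfied.
moe = margin_of_error(p=0.70, n=400)
print(f"satisfaction: 70% +/- {moe * 100:.1f} percentage points")  # about +/- 4.5
```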

If the results for a sample are to represent the opinions of the specified population of customers, a sample of the correct size should be randomly drawn. The necessary sample size can be calculated statistically, but it varies depending on the size of the population, the amount of sampling error that state agencies and policy makers can tolerate, the level of certainty they would like, and the variability of responses. The sample size also depends on the level of detail needed in the analysis and presentation of results. For example, a sample of 400 may be adequate to estimate the statewide level of satisfaction, but not the level within each region of the state.
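The standard calculation can be sketched as follows; the population sizes and the 5-point tolerance are hypothetical inputs chosen for illustration, not figures from any agency.

```python
import math

def required_sample_size(N, e, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion to within +/- e
    (e.g., e=0.05 for 5 percentage points) at the confidence level
    implied by z, for a customer population of size N. p = 0.5 is the
    most conservative (largest-variance) assumption."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)      # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))   # finite-population correction

print(required_sample_size(N=50_000, e=0.05))   # about 382 statewide
# A region-level estimate needs a comparable sample in each region:
print(required_sample_size(N=5_000, e=0.05))    # about 357 per region
```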

In surveying customers, agencies need to ensure that those who respond are representative of all who received questionnaires, so that results may be generalized to the larger population of customers who were not surveyed. Ensuring representativeness reduces the risk of "nonresponse bias," the chance that respondents differ significantly from nonrespondents. For example, research shows that poorly educated people are less likely to return mail surveys than highly educated ones. If such differences are not corrected, survey results may not yield a true estimate of all customers' level of satisfaction. The responses may be overly positive, overly negative, or simply atypical. Those who respond may be a collection of people with more time and motivation than others, for example, those with an ax to grind or a desire to ingratiate themselves.
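One common correction, sketched below with invented group shares and satisfaction rates, is post-stratification weighting: respondents in underrepresented groups (here, by education, the characteristic the paragraph mentions) are weighted up so the weighted sample matches known population shares.

```python
# Post-stratification weighting, a common correction for nonresponse
# bias. All figures below are invented for illustration.
population_share = {"no_degree": 0.40, "degree": 0.60}
respondent_share = {"no_degree": 0.25, "degree": 0.75}  # less educated respond less

# Weight = population share / respondent share, per group.
weights = {g: population_share[g] / respondent_share[g] for g in population_share}

# Group-level satisfaction rates among respondents (invented):
satisfied = {"no_degree": 0.55, "degree": 0.80}
unweighted = sum(respondent_share[g] * satisfied[g] for g in satisfied)
weighted = sum(respondent_share[g] * weights[g] * satisfied[g] for g in satisfied)
print(f"unweighted: {unweighted:.1%}; weighted: {weighted:.1%}")  # 73.8% vs 70.0%
```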

To minimize nonresponse bias, staff of federal agencies, including the Office of Management and Budget and the General Accounting Office, told us they work to achieve response rates of at least 70 or 75 percent. When sound methods and techniques are used, including follow-up with nonrespondents, experts suggest that response rates of 60 to 70 percent can be achieved.

Just as important as obtaining responses from representative groups of customers are the questions, response choices, and instructions that customers receive. Ambiguous, superficial, or leading questions may not elicit a fair and accurate measure of customer satisfaction. Overall, each aspect of a customer satisfaction survey should be designed to extract information that is clear, unbiased, sufficient, and appropriate to the agency's plan to document and improve customer service.

Common Problems in State Agency Customer Satisfaction Surveys

In our study, we found that four major problems often limit state agencies' ability to use customer satisfaction data as credible evidence in performance reports:

1. Survey results may not be representative of state agencies' customers.

With a few exceptions, agencies have provided little or no evidence that survey results apply to all of their customers for selected products and services. Neither have state agencies always cautioned readers about important limitations on customer satisfaction data. Yet, in some cases, data come not from random samples but from self-selected customers who chose to return questionnaires or voluntarily compliment agency officials. Also, very few respondents rated some services. For example, one agency obtained a 19 percent overall response rate to a survey, but only 3 percent of the customers rated certain services.

2. Survey results are not always useful for monitoring performance.

In several cases, state agencies have only recently begun to conduct customer satisfaction surveys, and they have not yet developed appropriate questions, sampling strategies, and performance measures. A related problem is that some agencies have changed the way in which they construct performance indicators from year to year, so that results cannot yet be compared meaningfully over time. In other cases, a combination of technical deficiencies casts doubt on the utility of customer satisfaction data that have been used in performance reports. Typically, the surveys were conducted for purposes other than performance monitoring.

3. The accuracy of some customer satisfaction data is questionable.

In some cases, we found that the results of customer satisfaction surveys are calculated incorrectly or misreported in performance reports. In a few cases, agency staff filled in data inappropriately or simply guessed at results. One agency used the same data for two different fiscal years and failed to catch an obviously mistaken claim of 99.6 percent satisfaction. Another agency combined the results from various evaluation forms into an approximate "+90 percent satisfaction rating." In other cases, we could not verify the accuracy of customer satisfaction data because agencies had discarded necessary documents.

4. Basic information needed to interpret customer satisfaction data is often missing.

Ideally, performance reports should provide the minimum amount of information that is necessary to understand and evaluate state agencies' major programs and objectives without consulting other sources. However, we found that state agencies rarely revealed the questions that were asked, the data collection methods that were used, who or how many answered, and how "satisfaction" was defined. In other cases, descriptive information in performance reports was vague or incorrect.

As a result of these and other problems, we conclude that:

  • For most agencies we reviewed, customer satisfaction data in the 1994 performance reports need to be improved.

However, several of the 10 agencies whose surveys we evaluated are producing internally useful performance data and making good use of the results. Among these are the Department of Employee Relations, which obtains high-quality data about state employees' satisfaction with health care and health plans, and the Department of Revenue, which uses customer satisfaction data to monitor sales taxpayers' satisfaction with the audit process. Also, we found that the Departments of Natural Resources and Trade and Economic Development have the in-house expertise necessary to conduct scientifically valid, useful surveys, and that the Department of Transportation and the Pollution Control Agency have successfully contracted with the University of Minnesota for high-quality, representative, statewide information. In addition, the agencies in our study typically displayed a positive, businesslike appreciation for customer satisfaction surveys, with which they are becoming increasingly familiar.

Recommendations

To address the problems we found in customer satisfaction data associated with performance reports, we have developed several general recommendations. First, the Department of Finance's most recent set of instructions for developing performance reports specifically tells state agencies to:

  • State clearly what is being measured and how the measure is derived or calculated.

  • Explain why the measure is relevant to the program or service being provided.

  • Identify the data source(s) used to calculate the measure and indicate how often the data are updated, including basic information on how and when the data were collected and where the data can be obtained.

  • Include a supplemental attachment with information and explanation of data sources, specific agency contacts, methodology, and other information required to evaluate agency data for legislative audit purposes. (Department of Finance, Annual Performance Report Instructions (St. Paul, June 1994), 16.)

We endorse these instructions and urge agencies to follow them more closely. In our view, agencies need to take greater responsibility for ensuring that their data on customer satisfaction are accurate, thorough, and consistent from year to year. They should: (1) demonstrate a more rigorous approach to survey data collection, analysis, and reporting, and (2) include basic descriptions of their methods.

Second, we recommend that:

  • State agencies should develop systematic data retention schedules that will allow interested parties to verify and further analyze customer satisfaction data.

State law requires the Office of the Legislative Auditor to biennially review and comment on the appropriateness, validity, and reliability of measures and data in performance reports. However, state agencies lack records retention policies that would realistically permit retrospective reviews of performance data. In some cases, agencies had only a summary of the results and not the individual responses that led to conclusions. Also, some agency staff found it difficult to recall how they developed performance measures from their surveys.

Third:
  • In creating performance measures from customer satisfaction surveys, state agencies should adhere to guidelines for valid survey research.

For purposes of routine management or quality improvement, any comments from customers may be useful, but casual comments or unrepresentative samples do not constitute adequate measures of customers' satisfaction with state agencies or their programs. Adequate measurement can only be accomplished by designing and using scientifically valid surveys. Such surveys provide the most accurate, dependable information for managers as well as policy makers.

Considering how much it costs to administer any questionnaire to a large group, it costs little more to conduct the project so that results can be generalized to the population of interest. Simple administrative steps that can minimize errors and other problems include obtaining an adequate number of respondents and determining that those respondents are representative of the agency's customers.

In conducting future customer satisfaction surveys that will be used in performance reports, we also recommend that:

  • State agencies should develop standard questions that they use consistently from year to year to assess and report customers' satisfaction.

Since customer satisfaction surveys tend to be new to the state agencies in our study, we found that several have changed the questions they use to measure satisfaction from year to year. But without consistent wording of questions, it is impossible to monitor performance over time. At the same time, agencies may need to develop some new questions to better measure future performance.

Finally, we recommend that:

  • The Department of Finance, on behalf of the executive branch, should give state agencies stronger, clearer direction and training to accompany its next set of instructions for writing performance reports.

Although state agencies are mainly responsible for the data in performance reports, the 1995 Legislature gave the Department of Finance a role in ensuring that performance reports are accurate, reliable, useful, and complete. We have shown the need for greater accuracy in some agency performance data, and we urge the Finance Department to oversee the reporting process more vigorously.

Conclusion

State agencies experienced numerous problems in conducting and presenting the results of customer satisfaction surveys in the 1994 performance reports, but most of the problems were technical in nature, which neither surprises us nor suggests willful distortion. In most cases, the surveys were developed for internal use and then used in performance reports, with variable success. In our opinion, the agencies need to develop better skills for conducting credible, performance-related survey research and take greater responsibility for ensuring that future performance data are reported accurately, thoroughly, and consistently.


More Information

This study was initiated by the Program Evaluation Division under the general authority given to the Legislative Auditor in the Performance Reporting Act to "review and comment" on agency performance reports. For a copy of the full report, entitled "State Agency Use of Customer Satisfaction Surveys" (PR95-23), 102 pp., published on October 23, 1995, please call 651/296-4708, e-mail Legislative.Auditor@state.mn.us, or write to Office of the Legislative Auditor, 658 Cedar St., St. Paul, MN 55155.

Staff who worked on this project were Marilyn Jackson (project manager) and Jan Sandberg. For more information, contact our office.

Office of the Legislative Auditor, Room 140, 658 Cedar St., St. Paul, MN 55155: legislative.auditor@state.mn.us or 651-296-4708