There’s little disagreement that data collection is a necessary starting point for assessing the quality of local government services and making improvements going forward. But pitfalls lie in wait for the unwary. Here are three common ones, along with suggestions for avoiding them.

Lack of consistent definitions: Dollars and labor are among the most commonly collected data. But does a dollar mean a dollar budgeted, expended, appropriated, or encumbered? Does a full-time equivalent work 7.5 hours a day, 8 hours a day, or something else? Are roads measured in lane miles or linear miles? Is vehicle “age” measured by years, miles driven, or hours operated?

Pinning down definitions is important not only to ensure that staff in different parts of the organization are collecting the same thing; it’s also crucial if you want to compare performance from year to year—or compare with other jurisdictions.

A good practice is to define and document the data you want to collect, ideally in a manual or data dictionary. That way, everyone involved is on the same page, and consistency is maintained through staff transitions. Such a document can also form the basis for broader consortium-based performance benchmarking with other jurisdictions and eliminates the need to constantly renegotiate the terms of comparison.
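
One way to make such a document actionable is to keep it in machine-readable form so that reports and scripts draw on the same agreed definitions. The sketch below is a minimal, hypothetical example; the measure names, units, and definitions are illustrative, not a prescribed standard.

```python
# A minimal, hypothetical data dictionary: each measure carries an agreed
# definition, unit, and source so every department collects the same thing.
DATA_DICTIONARY = {
    "dollars_expended": {
        "definition": "Actual dollars spent (not budgeted, appropriated, or encumbered)",
        "unit": "USD",
        "source": "General ledger at fiscal year end",
    },
    "fte": {
        "definition": "One full-time equivalent = 8 paid hours per workday",
        "unit": "FTE",
        "source": "Payroll system",
    },
    "road_length": {
        "definition": "Lane miles (centerline miles times number of lanes), not linear miles",
        "unit": "lane miles",
        "source": "Public works inventory",
    },
}

def describe(measure: str) -> str:
    """Return the agreed definition of a measure, or flag it as undefined."""
    entry = DATA_DICTIONARY.get(measure)
    if entry is None:
        return f"'{measure}' is not defined; add it to the dictionary before collecting data."
    return f"{measure}: {entry['definition']} ({entry['unit']}; source: {entry['source']})"

print(describe("road_length"))
```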

Unrepresentative sampling: It’s tempting to gather data by sampling rather than by polling the entire population, especially when a service affects a large number of residents. But unless the sample is designed with care, there’s a risk that key constituent groups will be left out or that the sample will ignore key variables. Mailed surveys may inadvertently target out-of-town property owners rather than residents. Internet surveys miss people without internet access, and phone surveys often miss those who have abandoned their landline phones for mobile options.

Similarly, point-of-service suggestion boxes and comment cards are easy to place or distribute, but the feedback they gather is often skewed toward the extremes of satisfaction and dissatisfaction. And they reach only the people who happen to be at the points where the boxes or cards are available, potentially leaving out key groups such as the elderly, students, or homebound or non-English-speaking residents.

Sampling bias may also be introduced by when the sample is taken. Utilization of some services obviously varies by month (parks, golf courses, swimming pools). Building permits, code complaints, and other activities may follow seasonal patterns. And for emergency response, the actual response time or its perception may be affected by time of day (rush hour) and weather (storm conditions).

Take variations like these into account as you design samples so that you can be sure you’re measuring what you intend. Rather than settling for a single, convenient option, discuss multiple strategies with a survey contractor to ensure the results you obtain will meet your needs.
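
To make this concrete, here is a minimal sketch of one such strategy, stratified sampling with proportional allocation, which keeps groups such as mobile-only or mail-only households from being left out. The strata names and population counts are hypothetical; a survey contractor would tailor the design to local data.

```python
# Hypothetical population counts by preferred contact channel; real figures
# would come from sources such as utility billing records or census data.
strata = {
    "landline_only": 4_000,
    "mobile_only": 12_000,
    "internet": 20_000,
    "mail_only": 4_000,
}

def proportional_allocation(strata: dict[str, int], sample_size: int) -> dict[str, int]:
    """Allocate a fixed sample across strata in proportion to their size,
    so no group is left out just because one survey mode is convenient.
    (Rounding can leave the total off by one or two; adjust as needed.)"""
    total = sum(strata.values())
    return {name: round(sample_size * count / total) for name, count in strata.items()}

print(proportional_allocation(strata, sample_size=400))
# {'landline_only': 40, 'mobile_only': 120, 'internet': 200, 'mail_only': 40}
```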

Collecting only input and output data: Inputs (e.g., dollars spent, labor hours utilized), outputs (e.g., number of inspections completed, number of library books circulated), and workload (e.g., number of agendas prepared or council meetings staffed) are relatively easy to quantify.

But while input and output data may tell you how much is being done, they leave unanswered the basic questions of how well it’s being done and whether it should have been done in the first place. Efficiency and outcome measures come closer to providing useful information.

Efficiency measures, for example, relate inputs to outputs (see the sketch after this list):

  • How many code complaints were investigated per full-time equivalent employee?
  • How much did the jurisdiction spend per purchase order issued?
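
Once inputs and outputs are defined consistently, these ratios are simple to compute. A short sketch, using hypothetical figures:

```python
# Efficiency measures relate an input to an output. All figures are hypothetical.
complaints_investigated = 1_250  # output: code complaints investigated this year
code_enforcement_ftes = 5.0      # input: full-time equivalent employees
purchasing_dollars = 480_000.00  # input: purchasing division expenditures
purchase_orders_issued = 3_200   # output: purchase orders issued

complaints_per_fte = complaints_investigated / code_enforcement_ftes
cost_per_purchase_order = purchasing_dollars / purchase_orders_issued

print(f"Code complaints investigated per FTE: {complaints_per_fte:.0f}")     # 250
print(f"Spending per purchase order issued: ${cost_per_purchase_order:.2f}")  # $150.00
```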

And outcome measures are the gold standard of performance measurement and management:

  • Do residents feel safe in the community?
  • Are the various components of emergency response time within target ranges?
  • Is the number, frequency, or severity of negative consequences minimized (e.g., recidivism, technology failures, problems not flagged during inspections)?
  • Does road quality meet objective pavement condition criteria?
  • Are residents satisfied with the maintenance of parks?

By identifying key desired outcomes and analyzing the related data, a jurisdiction can link performance to larger strategic goals and address policy questions and budget decisions with meaningful data.

Local government managers and staff can often avoid these pitfalls by applying what they know about their organizations and their communities. Sound data provide the foundation for meaningful analysis of current practices and potential improvements in service delivery—for both the organization and the community.

Based on sections in Getting Started: A Performance Measurement Handbook by Gerald Young, published by ICMA and available in epub or mobi format from the ICMA Online Bookstore.
