By Thomas Miller, President, National Research Center, Inc.

Despite the contemporary erosion of facts, it’s impossible to run large organizations – private or public – without credible observations about what’s happening and, separately, what’s working. Performance measurement helps with both and can be as deliberate as the Baldrige measurement criteria or as impromptu as “How’m I doing?” – the question former New York City Mayor Ed Koch famously put to random New Yorkers. Metrics of success, like compass readings, keep the ships of state on course and, because the enterprise is public, make the captain and crew accountable.

Over the years, thought leaders like David Ammons, Harry Hatry, and Marc Holzer have made the case for measuring performance in the public sector and offered conceptual frameworks for doing so, especially with an eye to comparing results among jurisdictions. Across the U.S. and Canada, scores of jurisdictions measure and share their performance data. Regional performance measurement consortiums (Florida, Tennessee, North Carolina, Michigan, Ontario, and the Valley Benchmark Cities in Arizona) remain active, and ICMA, though no longer offering a software platform for sharing performance data, continues “to advocate for the leading practice of creating benchmarking consortia.”

All performance measurement consortiums are in roughly the same business – publishing their members’ metrics and comparing them to other jurisdictions’ results or to their own results across time. Other jurisdictions track their performance and publish results without the benefit of knowing, or letting others know, how they compare.

For all of these places, measuring performance in the public eye is gutsier than it is complicated, so local governments actively involved in public performance measurement should be lauded for participating in a show and tell that doesn’t always reveal any one place to be best in class or prove improvement over time. Despite the value of measuring performance, especially when done collaboratively, the jurisdictions actively measuring and publicly reporting performance represent a small fraction of the 5,500 U.S. cities or counties with more than 10,000 inhabitants – those with enough revenue (probably between $8 million and $10 million) and staff to handle the effort. Across the six consortiums listed above, there are only about 120 participating jurisdictions.

So why don’t more jurisdictions participate in collaborative benchmarking? The fear of looking bad is no small deterrent, but neither are the stringent standards imposed to equate each indicator across jurisdictions. Although measuring performance is neither brain surgery nor rocket science, it does take meaningful staff time to define and hew to extensive collection criteria so that indicators are similar enough to be compared with other places or within the same place across years. For example, police response time sounds like a simple metric, but should the clock start when a call comes in, when the police receive the notice from dispatch, or when the patrol car begins to drive to the location? And should a non-emergency call be logged at all?
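To make the ambiguity concrete, here is a minimal sketch in Python, with invented timestamps and field names (no consortium’s actual data standard is assumed), showing how the same incident yields three different response times depending on where the clock starts:

```python
from datetime import datetime

# Hypothetical timestamps for a single police call; the field names
# are illustrative, not drawn from any real records system.
call = {
    "call_received":     datetime(2019, 6, 1, 14, 0, 0),   # caller reaches 911
    "dispatch_notified": datetime(2019, 6, 1, 14, 1, 30),  # dispatch alerts police
    "unit_en_route":     datetime(2019, 6, 1, 14, 3, 0),   # patrol car starts driving
    "unit_on_scene":     datetime(2019, 6, 1, 14, 9, 0),   # officer arrives
}

# One incident, three different "response times," depending on
# which timestamp starts the clock.
for start in ("call_received", "dispatch_notified", "unit_en_route"):
    minutes = (call["unit_on_scene"] - call[start]).total_seconds() / 60
    print(f"clock starts at {start}: {minutes:.1f} minutes")
```

Run against these invented timestamps, the same arrival produces response times of 9.0, 7.5, or 6.0 minutes, which is exactly why consortiums spend staff time pinning down collection criteria.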

When a large number of indicators is identified for tracking, the staff time required to collect them under rigorous collection protocols explodes. For example, in the 426-page Tennessee Municipal Benchmarking Project annual report, accessible from the home page of the project’s website, each of the 16 members reports 22 measures for code enforcement alone. And the report covers 10 other categories of municipal service in addition to building code enforcement.

We need to lower the barrier to entry and expand the value of participation. The “measure everything” approach (with thousands of indicators) has proven intractable, and the detailed work required to equate measures remains a tough hurdle. If we choose a small set of indicators that offers a stronger dose of culture (e.g., outcome measures of community quality) than of accounting (e.g., process measures of service efficiencies and costs), we will reduce the workload and, as a bonus, be more likely to attract the interest of local government purse holders – elected officials.

Imagine, across hundreds of places, a few key indicators that report on quality of community life, public trust, and governance, and a few that measure city operations. Then visualize relaxing the requirements for near-microscopic equivalence of indicators so that, for example, any measure of response time could be included as long as the method behind it is described. Statistical corrections could then be made to render different measures comparable. This is what National Research Center does in its benchmarking to equate survey responses gleaned from questions asked differently.
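National Research Center’s actual adjustment method isn’t detailed here, but a minimal sketch of one common approach – linear (z-score) equating to a shared reference distribution – gives the flavor. All scores below are invented:

```python
import numpy as np

def equate_to_reference(scores, ref_mean, ref_sd):
    """Linear equating: rescale one jurisdiction's scores so their
    distribution matches a reference mean and standard deviation."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return ref_mean + ref_sd * z

# Jurisdiction A asked a 5-point quality-of-life question; jurisdiction B
# asked a 10-point version. Rescaling both to a common reference scale
# (mean 50, SD 10) makes the two sets of responses roughly comparable.
a = equate_to_reference([4, 5, 3, 4, 4, 5], ref_mean=50, ref_sd=10)
b = equate_to_reference([8, 9, 6, 7, 9, 10], ref_mean=50, ref_sd=10)
print(a.round(1), b.round(1))
```

The point is not this particular formula but the principle: once each place documents its method, a documented statistical correction can substitute for rigid uniformity in how the question was asked.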

Besides the time and cost barriers to entry, there have been too few demonstrations of the value of the performance management ethos. We know it’s the right thing to do, but we also know that with relatively few jurisdictions collecting common metrics, researchers are hampered from exploring the linkages between government processes and community outcomes. Too often, comparisons among jurisdictions become a game of “not it,” in which staff explain away the indicators on which their jurisdiction scores poorly. When we expand the number of places willing to participate, we will have a better chance of offering a return on the investment in performance measurement. With many more participating governments, we can build statistical models that suggest promising practices by linking processes to outcomes.
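As a sketch of what such a model might look like – with entirely invented numbers and only two process measures, where a real analysis across many governments would use far richer data – ordinary least squares can relate process measures to an outcome measure:

```python
import numpy as np

# Hypothetical cross-jurisdiction dataset; every value is invented.
# Process measures: spending per capita ($100s), median response time (min)
X = np.array([
    [12.0, 6.5],
    [15.0, 5.0],
    [ 9.0, 8.0],
    [14.0, 5.5],
    [11.0, 7.0],
])
# Outcome measure: percent of residents rating safety "good" or "excellent"
y = np.array([62, 74, 51, 70, 58])

# Ordinary least squares: outcome ~ intercept + process measures.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept, spending effect, response-time effect:", coef.round(2))
```

With only a handful of participating governments, such estimates are little more than anecdotes; with hundreds, they begin to suggest which processes actually move community outcomes.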

We can broaden participation in comparative performance monitoring when common metrics are few, targeted to outcomes, easy to collect, and proven to matter. It’s a good time to make these changes.

This post is reprinted with minor modifications from the ASPA National Weblog.
