By Brent Stockwell and David Swindell

Eleven Phoenix-area jurisdictions have been working together since 2011 with the Alliance for Innovation (AFI), the Center for Urban Innovation at Arizona State University (ASU), and the International City/County Management Association (ICMA) to improve local government performance. The Valley Benchmark Cities group tracks demographic, financial, and performance data to enhance reporting and understanding, uncovers best practices, and collaborates on new approaches to service delivery.

This article makes the case for comparative performance management and shares lessons learned for starting a similar collaboration in other metropolitan areas.

The Case for Comparison

Local government professionals, elected officials, and the media have an ongoing interest in how their communities compare with one another. Because of the complexity of each of our organizations, however, it is often difficult to make apples-to-apples comparisons.

Sometimes the differences seem overwhelming, too difficult, or too time-consuming to research and explain, so we give up. But the truth is that we learn by comparing. Comparisons are helpful because they give us context for decision making.

Have you ever had to decide in the grocery store which brand or quantity of a product was the best deal? Grocery stores and drugstores now display a unit cost for each item, so consumers can more easily compare brands and quantities.

And yet, we often ask our residents to trust us and have confidence in the work we do—what they pay and receive for services—without similar information. As humans, we are interested in comparisons to better understand ourselves and the world around us.

So it’s only natural that others push us to do the same. Our councils, local newspapers, and residents are seeking comparative information to better understand what we do, why we do it, and how well we do it. It’s clear that if we don’t compare, others will . . . and they do.

With reliable comparative data that we share with our publics, we can validate our performance claims, track progress toward community goals, and help rebuild confidence in government.

A proactive comparison effort lets you frame the discussion, rather than having to react to questions pushed upon you by others. The effort can also help build a sense of collegiality among jurisdictions in the region. In our case, the effort has helped some of our member communities establish new benchmarks for their services, while others are integrating the comparative data into their performance management programs.

Creating a Consortium

The Valley Benchmark Cities group began in October 2011 as a consortium of staff from the 11 largest cities in the Phoenix metropolitan area: Avondale, Chandler, Gilbert, Glendale, Goodyear, Mesa, Peoria, Phoenix, Scottsdale, Surprise, and Tempe. Populations ranged from 72,300 (Goodyear) to 1.5 million (Phoenix).

The initial impetus was a series of information requests, along with articles appearing online and in print in the regional newspaper. The media were reporting inaccurate and incomplete comparisons, which were then repeated in letters to the editor and at city council meetings.

One of the local communities requested a meeting of the jurisdictions being compared in these stories, and all 11 agreed to work collaboratively. We agreed to identify common information to share and discuss so we could better understand the similarities and differences among our operations.

The ultimate aim was to improve local government performance and to provide accurate responses to the media and the public. Each community had at least one consistently involved representative, though some had as many as three. Formal rules governing the number of participants proved unnecessary.

ASU’s Center for Urban Innovation and AFI agreed to host and staff the effort. ASU’s downtown Phoenix campus provided a central and neutral location for the group to meet.

The effort was slow-going in the beginning. For the first two years, the representatives met primarily in the fall, and everything was put on hold in the spring during budget season. To build rapport and understanding of each other’s operations, the group initially focused on presentations about research efforts under way in each community and national research into performance management.

This led to presentations on property taxes, building permit fees, and utility comparisons, as well as other benchmarking and performance measurement efforts elsewhere in the nation.

Repeatedly, the group found that we were not limited by what could be measured, but by a lack of consensus about what should be measured and how. A key moment occurred when the group began focusing on measures from the perspective of a typical resident as opposed to the perspective of city staff. When a resident calls 911, for example, the person is fundamentally interested in how long it will take the police or fire department to arrive.

Not surprisingly, with every additional data point, staff members grew more curious about differences among communities and the reasons for them. Why does this community issue building permits more quickly than the average? Why is that community’s cost so low? What difference does that make?

Answering “why?” is exactly the point of comparative benchmarking exercises. Exploring the differences helped us recognize many aspects of the services we produce that residents might not see.

Assembling and Reporting Comparative Data

After two years of building rapport and trust, and learning about the value of comparative measures, we were finally prepared to begin real work on an initial project of our own, with the intent of collecting comparative data and publishing it in a report.

We began by bringing city managers from the member communities together to help identify key topics to include in the report. We selected seven major service areas for comparison: fire; police; libraries; parks and recreation; streets and transportation; water, sewer, and trash; and finance and administration.

The next nine months were spent doing the work: constructing measures, compiling data, and discussing and analyzing it across the seven service areas with the specific intent of presenting information in a way residents could easily understand. The group completed and released the comparative report in August 2015 (icma.org/valley-benchmark).
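
To give a flavor of what constructing a resident-friendly measure involves, here is a minimal sketch in Python; the city names and dollar figures are hypothetical illustrations, not data from the report.

    # A minimal, hypothetical sketch of one comparative measure:
    # library operating cost per resident. The names and figures
    # are illustrative only and are not drawn from the report.
    cities = {
        "City A": {"population": 250_000, "library_cost": 9_500_000},
        "City B": {"population": 80_000, "library_cost": 2_400_000},
        "City C": {"population": 500_000, "library_cost": 21_000_000},
    }

    # Normalize by population so communities of very different sizes
    # can be compared on the same footing.
    per_resident = {
        name: data["library_cost"] / data["population"]
        for name, data in cities.items()
    }

    group_average = sum(per_resident.values()) / len(per_resident)

    for name in sorted(per_resident):
        value = per_resident[name]
        print(f"{name}: ${value:.2f} per resident "
              f"({value - group_average:+.2f} vs. group average)")

A ratio like this is only a starting point for the “why” questions discussed above; it says nothing by itself about differences in service levels, facilities, or delivery methods.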

Several tasks were completed between the initial meeting with the managers and publication of the final report. Outside of the monthly meetings, a rotating team collaborated each month on a service-related topic area, highlighting a range of measures of value to residents.

Then, at the monthly meetings, the group as a whole reviewed and discussed the work and made suggestions for improving the information. The overall group also used the monthly meetings to review reports from other benchmarking efforts, ultimately settling on a format similar to the one used by the Ontario (Canada) Municipal Benchmarking Initiative.1

Representatives from individual cities volunteered to serve as the primary organizers of the initial draft of each service-area section of the report. This process typically involved collecting data from specific departments within each participating community and performing an initial comparative analysis.

This allowed team members to engage with others in their own organizations as well as in other communities, creating a great learning experience and relationship-building opportunity. The teams then presented their preliminary analyses to the whole group, which had the opportunity to ask questions and raise issues regarding the validity of the comparisons. Often, subject matter experts from the service-related departments attended the meeting covering their area of expertise, in addition to the usual municipal representative.

For each of the service areas, the report has a section with three components:

  • A descriptive overview of the service to help residents understand what is included under the umbrella of that service.
  • The common factors that may potentially influence the quantity and/or quality of that service in a jurisdiction.
  • A presentation of the comparative performance measures, listing each community.

In addition to service measures, the report includes demographic and background information about each community, including its phase of physical development, the age and condition of existing infrastructure, the services provided, and service delivery methods—important and often unique factors that can affect comparability.
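
As a rough sketch of how such background factors can sit alongside a measure (the field names and values here are hypothetical, not taken from the report), a community profile can be kept as a simple record:

    # Hypothetical sketch: a community profile pairing a performance
    # measure with contextual factors that affect comparability.
    # Field names and values are illustrative only.
    community_profile = {
        "name": "City A",
        "population": 250_000,
        "development_phase": "largely built out",
        "median_infrastructure_age_years": 35,
        "trash_collection": "in-house",  # vs. contracted out
        "street_cost_per_lane_mile": 4_200,
    }

    print(f"{community_profile['name']}: "
          f"${community_profile['street_cost_per_lane_mile']:,} per lane mile "
          f"(infrastructure age: {community_profile['median_infrastructure_age_years']} years)")

Readers weighing the street figure can then see at a glance the contextual factors, such as infrastructure age and delivery method, that help explain differences from other communities.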

The end of the report includes a glossary of terms as well as an appendix of additional measures that we discussed in the monthly meetings, but decided not to include in the body of the report.

Success Factors

Several factors were key to the success of this endeavor. A safe working environment, supported by the right technology, helped information flow freely among participants. Several facilitative leaders emerged within the group; these individuals helped maintain an agenda and momentum, and enabled the group to collaborate productively.

On the technology side, the group used a private group site on the Knowledge Network hosted by ICMA to collaborate across organizations and share information and discussion threads. This ensured that draft information could be shared privately until it had been reviewed, verified, and released.

Creating this forum for discussing drafts greatly increased participants’ willingness to share information. Although data held by individual communities is public information, the group wanted to ensure that it did not publish comparisons that failed to take account of institutional, geographic, or other factors that can explain differences across cities.

This online structure ensured that early comparisons were not publicly released if they did not adequately address the differences across local governments. No formal privacy requirements or nondisclosure agreements were signed by any participants in the group.

The group is also coordinating with the ICMA Insights™ program to collect and analyze performance data, both on measures in common with that national benchmarking effort and on custom measures of specific interest to Phoenix-area communities.

The partnership with ASU’s Center for Urban Innovation provided the communities with a central and neutral location to meet so that each participant had equal standing in the discussions. It also allowed for academic researchers to participate in the development of the benchmarking effort.

The Center, along with AFI, provided a Marvin Andrews Fellow from ASU’s Graduate Program in Urban Management to support the group. Together with the technology, this partnership created a safe space for the various participants to discuss and refine the comparisons.

The “X factor” in a comparative performance effort like this one is a key individual who plays a critical role in bringing the group together, driving it forward, and participating in the analyses every step of the way. Such an individual needs to enjoy learning about other communities and be comfortable with the “wonky” side of data.

Identifying such individuals across the staffs of multiple jurisdictions is tricky, but not impossible. Similar efforts have failed because no one was willing to play this role. Finally, it is critical that this role be filled by one of the community representatives rather than an outside party.

Creating Your Own Consortium

This approach to developing comparative data can be replicated in other metropolitan areas. A key requirement is for interested communities to reach out to a local university or research organization, or vice versa, to explore the level of interest and initiate a research partnership.

For our efforts, the partnership was essential because it provided the neutral third party that facilitated trust and cooperation among the participants. It also provided a forum where the participants could discuss and revise comparisons in relative privacy.

While some of the conditions that prompted the formation of the Valley Benchmark Cities are unique, the value of undertaking comparative benchmarking is clear, and many of our efforts can be duplicated elsewhere.

We encourage communities to commit to identifying potential comparison jurisdictions, within their region or nationally, with which they can explore joint measurement interests. We recommend focusing regionally because going outside a shared region can introduce factors that affect service performance, including climate, geography, demand levels, political environment, and funding constraints.

Beyond this, communities must identify leaders who are willing to engage, to meet regularly to share and identify common data, to discuss differences and similarities, and to commit to reporting the information publicly. Other organizations can follow these key steps:

  • Identify and invite key leaders from each organization you want to compare with.
  • Identify potential partners among local universities or research organizations with a commitment to their local communities and broad community outreach.
  • Build trust and rapport through discussions and presentations from outside the region about the value of comparative measures.
  • Discuss efforts already under way to understand what each community is doing.
  • Begin collecting and sharing information.
  • Consolidate key findings and results into a report that can be shared with others, including the public.

ENDNOTE

1 Ontario Municipal CAO’s Benchmarking Initiative: http://mbncanada.ca/.

 
