FAQs

Benchmarking in the public sector can be defined as an ongoing process of comparing an agency’s practices and outcomes with those of similar agencies or organizations, with the ultimate goal of improving performance by adopting noteworthy practices that have proven successful in similar agencies.

The data for public measures are collected from the public sources identified in the platform.

The data for benchmarking network measures are collected directly from the agencies themselves.

The Peer Selection Similarity Score is calculated from the summed, normalized, absolute differences across all selected characteristics. For each numeric characteristic, values are normalized to a 0 – 1 scale by subtracting the minimum and dividing by the range. The absolute difference between your DOT agency and each other agency is then calculated for every characteristic, and these differences are summed. The sum is divided by the number of selected characteristics to give an average difference, which is subtracted from one and multiplied by ten to produce a score between 0 and 10, where 10 is perfect similarity and 0 is perfect dissimilarity. When categorical variables (e.g. region) are used, the normalized absolute difference is set to 0 for states sharing the same categorical value (e.g. states in the same region), and 1 otherwise. Because a categorical mismatch always contributes the maximum possible difference of 1, this scheme gives categorical characteristics more weight than most numeric ones.
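The calculation above can be sketched in a few lines of Python. This is an illustrative reading of the description, not the platform’s actual implementation; the function name, characteristic names, and values below are hypothetical.

```python
# Sketch of the Peer Selection Similarity Score as described above.
# Assumptions: numeric characteristics are min-max normalized using the
# min/max observed across all agencies; categorical characteristics
# contribute a difference of 0 when values match and 1 otherwise.

def similarity_score(own, other, numeric_ranges, categorical_keys=()):
    """Score similarity between two agencies on a 0-10 scale.

    own, other       -- dicts mapping characteristic name -> value
    numeric_ranges   -- dict mapping numeric characteristic -> (min, max)
                        across all agencies, for 0-1 normalization
    categorical_keys -- characteristics compared by equality (e.g. region)
    """
    diffs = []
    for key, (lo, hi) in numeric_ranges.items():
        # Normalize to 0-1 by subtracting the minimum and dividing by
        # the range, then take the absolute difference.
        a = (own[key] - lo) / (hi - lo)
        b = (other[key] - lo) / (hi - lo)
        diffs.append(abs(a - b))
    for key in categorical_keys:
        # Categorical: 0 when values match (e.g. same region), else 1.
        diffs.append(0.0 if own[key] == other[key] else 1.0)
    # Average difference, inverted and scaled so 10 = perfect similarity.
    return 10 * (1 - sum(diffs) / len(diffs))


# Hypothetical example: two mid-sized states in the same region.
ranges = {"annual_budget_musd": (500, 9000), "lane_miles": (10000, 250000)}
me   = {"annual_budget_musd": 2000, "lane_miles": 80000, "region": "SASHTO"}
peer = {"annual_budget_musd": 2500, "lane_miles": 90000, "region": "SASHTO"}
score = similarity_score(me, peer, ranges, categorical_keys=("region",))
```

An agency compared with itself scores exactly 10, and an agency at the opposite extreme on every characteristic scores 0, matching the scale described above.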

Once an organization decides to take up benchmarking, the question becomes who to benchmark against. TCRP Report 141 posits that peer selection is “perhaps the most important step in the benchmarking process.” Measurement approaches that do not account for differences among entities can generate irrelevant or misleading performance comparisons, making subsequent work on performance improvement less effective (or worse, counterproductive). For example, comparing congestion among all 52 AASHTO members would find higher congestion levels in heavily urbanized states compared to predominantly rural states, but likely would not generate much opportunity for sharing lessons learned between these states. As a general principle, comparisons should occur between similar peer states to allow for the appropriate practices to be shared. Depending on its size, a benchmarking network may only include entities that are, on the whole, appropriate “peers”. However, given that appropriate peers can vary from one performance area to another, practitioners in a benchmarking network can still consider the similarity of other members on the most relevant characteristics for any given measure. There are numerous agency characteristics that can be considered when selecting peers.

The platform provides state characteristics to support peer selection based on these categories:

  1. Geographic region (AASHTO regions);

  2. Climate zones or factors (heating degree days);

  3. Socio-economic and demographic (e.g. average income);

  4. Agency size (based on annual budget, percent of road network lane miles that are urban, road network mileage, and bridge deck area).

Most practitioners researched in this review choose to provide information on peer similarity, but leave final selection to users. A 2004 workshop of DOT practitioners revealed a strong preference among attendees to have the ability to select their own peers. Having this control likely gives participants a greater sense of comfort, especially on a topic considered politically charged by some DOT interviewees. However, TCRP Report 141 warns that when an agency self-selects, there can be a bias against including agencies that might be performing better.

Key Considerations for Selecting a Meaningful Peer Group:

  1. There is no single set of state peer groupings that will work for all performance measures.

  2. Different agency characteristics are important for different performance areas.

  3. Most practitioners researched in this review choose to provide information on peer similarity, but leave final selection to users.

  4. Proper normalizing can allow for comparisons between organizations that appear different at first glance.
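The normalization point above can be made concrete with a short sketch. The figures below are hypothetical and for illustration only: raw fatality counts make a large urban state and a small rural state look incomparable, but dividing by exposure (vehicle miles traveled) puts them on the same scale.

```python
# Hypothetical figures, for illustration only. Raw fatality counts are
# dominated by state size; normalizing by exposure (here, per 100 million
# vehicle miles traveled) makes the two states directly comparable.

states = {
    # name: (annual_fatalities, annual_vmt_in_millions_of_miles)
    "Large urban state": (3600, 340000),
    "Small rural state": (250, 8000),
}

rates = {}
for name, (fatalities, vmt_millions) in states.items():
    # Fatalities per 100 million VMT = fatalities / (VMT in millions / 100)
    rates[name] = fatalities / (vmt_millions / 100)
    print(f"{name}: {fatalities} fatalities, {rates[name]:.2f} per 100M VMT")
```

Note that with these illustrative numbers the small rural state, despite far fewer total fatalities, has the higher fatality rate, which is exactly the kind of insight raw counts would hide.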

An independent benchmarking approach is usually characterized by a non-anonymous process in which individual agencies use centrally collected public data covering their peers (data that is easily accessible to any agency, hence not anonymous) to form peer groupings of their own choice and independently compare performance. It is left to the agency to take the initiative to reach out to high-performing peer agencies to learn what practices have brought success. Independent benchmarking can be established quickly and at low cost, since it is built on data that is already available. However, such efforts put long-term sustainability at risk, since they may fail to gather an invested audience that buys into the process, and they are limited in scope by their exclusive reliance on data that is already collected.

There are a number of reasons an agency may choose to independently benchmark. As noted just above, a lack of committed peers may be one reason to undertake benchmarking independently. Another may be if resources—whether funding or staff time—are too scarce, or leadership is not convinced that there is value in committing them. Finally, independent benchmarking could begin to approach the benefits found in benchmarking networks in areas where much of the important data elements are already collected in a centralized location, and where regular interaction among practitioners already occurs.

In all situations, the key consideration to remember is that measures to benchmark on are far more limited under an independent structure. Publicly available sources, subscription data services, and to some degree data that can be obtained from a practitioner’s existing network make up the landscape of options for measure selection. Because there will not be discussions up front to agree on definitions and processes, independent benchmarking works best for measures that already enjoy widespread acceptance and standardization across an industry.

The platform supports independent benchmarking using measures such as Safety: Fatal Injuries per 100 Million VMT, and Asset Condition: Percent of Structurally Deficient Bridges by Deck Area.

TCRP Report 141 defines a benchmarking network as, “a group of independent agencies who join together for the purposes of sharing benchmarking data, best practices, and research resources.” A network typically agrees on common reporting measures and measure definitions, and is often facilitated by an external third party. Facilitators promote formal benchmarking activities, but a number of sources cite the informal aspects of a network (simply having regular contact with peers experiencing the same issues, knowing who to talk to, etc.) as just as important as the formal ones.

A benchmarking network builds tight connections among groups of peers, in part by using self-reported data from each participant and by providing specialized information-sharing channels. These channels are what allow for efficient and effective exchange of successful practices that enable learning for participants over the long term. Benchmarking networks, like the Canadian Water and Wastewater Benchmarking Initiative or the International Bus Benchmarking Group for transit agencies in New York, London, Sydney and elsewhere, show how a robust benchmarking process can be achieved within a network environment, but also show how such efforts require considerable investment of time and money by their participants.

Pros – can be kept private, ability to dig deeper into data, DOTs may be more motivated to remain engaged once in a network, tend to be long-running

Cons – higher commitment needed from DOT leadership, greater cost to maintain, greater ‘care and feeding’ required, likely to be most suited to small groupings of peers

Generally, a benchmarking network is more robust, but more resource intensive, than an independent benchmarking effort. Most measures suitable for independent benchmarking could also work for a network, but other measures may only work in a network benchmarking context, particularly if no centralized data is available.