FAQs

Benchmarking in the public sector can be defined as an ongoing process of comparing an agency’s practices and outcomes with those of similar agencies or organizations, with the ultimate goal of improving performance by adopting noteworthy practices that have proven successful at similar agencies.

Non-DOT users may view “public” benchmarking measures. Any DOT user may use the platform on behalf of their agency.

One goal of comparative benchmarking is to foster collaborative continuous improvement. The platform’s Noteworthy Practice Statement functionality provides an easy, contextualized, and interactive method of sharing and exploring noteworthy practices between agencies.

To support collaborative learning and improvement, the platform facilitates information sharing between practitioners. Agency users may view and request Noteworthy Practice Statements, which explain the factors contributing to an agency’s achieved performance. The Noteworthy Practice Statement feature is designed to provide a method for inquiry and sharing, and to foster high-quality communication of practices and experiences between agency users.

Noteworthy Practice Statements are embedded in the Measure Comparison interface, allowing users to view or request them from within the context of a measure comparison. A user who wishes to inquire about a peer agency’s performance selects the noteworthy performance score and may use the platform to send the agency a request for more information about how it achieved its measured performance.

1. Is Your Account Assigned to a DOT Agency?

To foster a "comfortable" setting for practitioner discussions of agency practices, the ability to view Noteworthy Practice Statements is limited to users registered with a Department of Transportation email address or users directly supporting a DOT benchmarking initiative. If, when logged in, you see an icon next to your agency name in the upper right, your account is not associated with a DOT. If you believe this is in error, please contact admin@benchmarking.site.

2. Are there Noteworthy Practice Statements Available for Your Measure?

The measure you're viewing may not have any associated Noteworthy Practice Statements. Try selecting another measure.

All registered users affiliated with a DOT may view Noteworthy Practice Statements.

If nothing happens when you click a data point node, your account does not have permission to create Noteworthy Practice Statement requests. Only DOT agency users (with at least Agency User level permissions) are able to create new Noteworthy Practice Statement requests. If you believe you should have this functionality, please contact your agency administrator.

The data for public measures are collected from the public sources identified in the platform.

The data for benchmarking network measures are collected directly from the agencies themselves.

Email the site administrator with a request to define the measure and the group.

The Peer Selection Similarity Score is based on the summed, normalized, absolute difference across all selected characteristics. For each characteristic, values are normalized to a 0–1 scale by subtracting the minimum and dividing by the range. The absolute difference between your DOT agency’s normalized value and each other agency’s normalized value is then calculated, and these differences are summed across all characteristics. Finally, the summed difference is divided by the number of selected characteristics, multiplied by ten, and subtracted from ten to produce a score between 0 and 10, where 10 is perfect similarity and 0 is perfect dissimilarity. When categorical variables (e.g. region) are used, the normalized absolute difference is set to 0 for states with the same categorical value (e.g. states in the same region) and 1 otherwise. Because a categorical mismatch always contributes the maximum possible difference, this scheme gives categorical variables more weight than numeric ones.
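As a concrete illustration, below is a minimal Python sketch of this calculation, assuming the formula as described above. The function name, characteristic names, and sample values are hypothetical and do not reflect the platform’s actual implementation.

```python
# Minimal sketch of the Peer Selection Similarity Score (hypothetical names/values).

def similarity_score(own, other, numeric_ranges, categorical_keys=()):
    """Return a 0-10 score, where 10 is perfect similarity."""
    diffs = []
    for key, (lo, hi) in numeric_ranges.items():
        # Normalize each value to a 0-1 scale, then take the absolute difference.
        norm_own = (own[key] - lo) / (hi - lo)
        norm_other = (other[key] - lo) / (hi - lo)
        diffs.append(abs(norm_own - norm_other))
    for key in categorical_keys:
        # A categorical match contributes no difference; a mismatch contributes
        # the maximum difference of 1, weighting categorical traits more heavily.
        diffs.append(0.0 if own[key] == other[key] else 1.0)
    avg_diff = sum(diffs) / len(diffs)  # average normalized difference, 0-1
    return 10 * (1 - avg_diff)

# Example: two hypothetical states compared on annual budget and AASHTO region.
ranges = {"annual_budget_b": (0.5, 15.0)}  # min/max across all states, $ billions
state_a = {"annual_budget_b": 4.2, "region": "WASHTO"}
state_b = {"annual_budget_b": 5.1, "region": "WASHTO"}
print(round(similarity_score(state_a, state_b, ranges, ("region",)), 2))  # ~9.69
```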

Once an organization decides to take up benchmarking, the question becomes whom to benchmark against. TCRP Report 141 posits that peer selection is “perhaps the most important step in the benchmarking process.” Measurement approaches that do not account for differences among entities can generate irrelevant or misleading performance comparisons, making subsequent work on performance improvement less effective (or worse, counterproductive). For example, comparing congestion among all 52 AASHTO members would find higher congestion levels in heavily urbanized states than in predominantly rural ones, but would likely not generate much opportunity for sharing lessons learned between these states. As a general principle, comparisons should occur between similar peer states so that the appropriate practices can be shared. Depending on its size, a benchmarking network may include only entities that are, on the whole, appropriate “peers.” However, given that appropriate peers can vary from one performance area to another, practitioners in a benchmarking network can still consider the similarity of other members on the characteristics most relevant to any given measure. There are numerous agency characteristics that can be considered when selecting peers.

The platform provides state characteristics to support peer selection based on these categories:

  1. Geographic region (AASHTO regions);

  2. Climate zones or factors (heating degree days);

  3. Socio-economic and demographic (e.g. average income);

  4. Agency size (based on annual budget, percent of road network lane miles that are urban, road network mileage, and bridge deck area).

Most practitioners researched in this review choose to provide information on peer similarity but leave final selection to users. A 2004 workshop of DOT practitioners revealed a strong preference among attendees for the ability to select their own peers. Having this control likely gives participants a greater sense of comfort, especially on a topic some DOT interviewees considered politically charged. However, TCRP Report 141 warns that when an agency self-selects its peers, there can be a bias against including agencies that might be performing better.

Key Considerations for Selecting a Meaningful Peer Group:

  1. There is no single set of state peer groupings that will work for all performance measures.

  2. Different agency characteristics are important for different performance areas.

  3. Most practitioners researched in this review choose to provide information on peer similarity, but leave final selection to users.

  4. Proper normalizing can allow for comparisons between organizations that appear different at first glance (see the sketch following this list).
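To illustrate item 4, the short Python sketch below normalizes raw fatality counts by vehicle miles traveled. Two states that look very different on raw counts turn out to perform similarly once exposure is accounted for. All figures are invented for illustration.

```python
# Hypothetical illustration of normalization: raw fatality counts differ by
# almost 9x, but rates per 100 million VMT are comparable.

states = {
    "Large State": {"fatalities": 950, "vmt_100m": 1200},  # VMT in 100M-mile units
    "Small State": {"fatalities": 110, "vmt_100m": 130},
}

for name, s in states.items():
    rate = s["fatalities"] / s["vmt_100m"]  # fatalities per 100 million VMT
    print(f"{name}: {s['fatalities']} fatalities, {rate:.2f} per 100M VMT")
```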

An independent benchmarking approach is usually characterized by a non-anonymous process in which an individual agency uses centrally collected, public data covering it and its peers (data easily accessible to any agency, hence not anonymous) to construct peer group comparisons of its own choosing and independently compare performance. It is left to the agency to take the initiative to reach out to high-performing peer agencies and learn what practices have brought success. Independent benchmarking can be established quickly and at low cost, since it is built on data that is already available. However, such efforts put long-term sustainability at risk, since they may fail to gather an invested audience that buys into the process, and their exclusive reliance on data that is already collected limits their scope.

There are a number of reasons an agency may choose to benchmark independently. As noted above, a lack of committed peers may be one. Another may be that resources, whether funding or staff time, are too scarce, or that leadership is not convinced there is value in committing them. Finally, independent benchmarking can begin to approach the benefits of a benchmarking network in areas where much of the important data is already collected in a centralized location and regular interaction among practitioners already occurs.

In all situations, the key consideration to remember is that the measures available for benchmarking are far more limited under an independent structure. Publicly available sources, subscription data services, and, to some degree, data obtainable from a practitioner’s existing network make up the landscape of options for measure selection. Because there are no up-front discussions to agree on definitions and processes, independent benchmarking works best for measures that already enjoy widespread acceptance and standardization across an industry.

The platform supports independent benchmarking using measures such as Safety: Fatal Injuries per 100 Million VMT, and Asset Condition: Percent of Structurally Deficient Bridges by Deck Area.

TCRP Report 141 defines a benchmarking network as “a group of independent agencies who join together for the purposes of sharing benchmarking data, best practices, and research resources.” A network typically agrees on common reporting measures and measure definitions, and is often facilitated by an external third party. Facilitators promote formal benchmarking activities, but a number of sources cite the informal aspects of a network (simply having regular contact with peers experiencing the same issues, knowing who to talk to, and so on) as just as important as the formal ones.

A benchmarking network builds tight connections among groups of peers, in part by using self-reported data from each participant and by providing specialized information-sharing channels. These channels allow for the efficient and effective exchange of successful practices, enabling long-term learning for participants. Benchmarking networks such as the Canadian Water and Wastewater Benchmarking Initiative and the International Bus Benchmarking Group (for transit agencies in New York, London, Sydney, and elsewhere) show how a robust benchmarking process can be achieved within a network environment, but also how such efforts require considerable investment of time and money by their participants.

Pros – can be kept private, ability to dig deeper into data, DOTs may be more motivated to remain engaged once in a network, tend to be long-running

Cons – higher commitment needed from DOT leadership, greater cost to maintain, greater ‘care and feeding’ required, likely to be most suited to small groupings of peers

Generally, a benchmarking network is more robust, but more resource-intensive, than an independent benchmarking effort. Most measures suitable for independent benchmarking could also work for a network, but other measures may only work in a network benchmarking context, particularly if no centralized data source is available.

To allow non-DOT users to use the platform, and to minimize barriers to entry and participation for DOT users, the platform has been configured to support multiple user roles, summarized in the sketch following this list. These are:

  • Non-Agency User: a member of the public who is not affiliated with a DOT

    • View Performance Scores for all Public Measures

    • Manage and Create Peer Groups

    • Choose which DOT Agency to Interact with the Platform as

  • Agency Reader: a read-only user who is affiliated with a DOT (registered with a DOT-affiliated email address)

    • Manage and Create Peer Groups

    • View Performance Scores for all Public Measures and Measure Groups containing the user’s DOT

    • View Noteworthy Practice Statements

    • View performance scores for measures related to user’s DOT

  • Agency User: a user who is affiliated with a DOT (registered with a DOT-affiliated email address). Can create and fulfill Noteworthy Practice Statements.

    • Manage and Create Peer Groups

    • View Performance Scores for all Public Measures and Measure Groups containing the user’s DOT

    • Receive notification of Noteworthy Practice Statement requests (unless opted out)

    • Create Noteworthy Practice Statement Requests

    • Create Noteworthy Practice Statements

    • Edit / delete the user’s own Noteworthy Practice Statements

  • Agency Administrator: administrative user for a DOT. Can manage other users affiliated with the DOT, all Noteworthy Practice Statements for the DOT, and data for the DOT.

    • View Performance Scores for all Public Measures

    • Manage and Create Peer Groups

    • View performance scores for measures related to the user’s DOT

    • Receive notification of Noteworthy Practice Statement requests (unless opted out)

    • Fulfill incomplete Noteworthy Practice Statement requests

    • Create Noteworthy Practice Statement Requests

    • Upload / edit data for assigned custom performance measures

    • Receive notification whenever a new user associated with the same DOT registers

    • Define default registration level for new users affiliated with the same DOT (Non-Agency User, Agency Reader, or Agency User) (“Agency User” is the default)

    • Manage accounts associated with the DOT (reset password, delete, disable, change name/email address)

    • View and edit or delete all Noteworthy Practice Statements and Noteworthy Practice Statement Requests associated with the DOT

  • Platform Administrator: an administrative user that controls platform content and agency administrators.

    • Assign Agency Administrators

    • Create Measure Groups and assign DOTs access to Measure Groups

    • Create Performance Measures

    • Add / edit data for Performance Measures

    • Change the Date Range for Performance Measures

    • Add / update peer selection State Characteristics
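As a summary, the role hierarchy above can be expressed as a simple permissions lookup. The Python sketch below is illustrative only: role and permission names are shorthand for the bullets above, not the platform’s actual identifiers.

```python
# Hypothetical summary of the platform's user roles as a permissions table.
# Names are shorthand for the bullet lists above ("nps" = Noteworthy Practice
# Statement); the platform's real internals are not documented here.

BASE = {"view_public_measures", "manage_peer_groups"}

ROLE_PERMISSIONS = {
    "non_agency_user": BASE | {"choose_agency_to_interact_as"},
    "agency_reader": BASE | {"view_agency_measures", "view_nps"},
    "agency_user": BASE | {"view_agency_measures", "view_nps",
                           "create_nps_request", "create_nps",
                           "edit_or_delete_own_nps"},
    "agency_administrator": BASE | {"view_agency_measures", "create_nps_request",
                                    "fulfill_nps_requests", "upload_measure_data",
                                    "manage_agency_accounts",
                                    "manage_all_agency_nps"},
    "platform_administrator": {"assign_agency_admins", "create_measure_groups",
                               "create_measures", "edit_measure_data",
                               "update_state_characteristics"},
}

def has_permission(role: str, permission: str) -> bool:
    """Check whether a role includes a given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(has_permission("agency_reader", "create_nps"))  # False: readers cannot create
```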

Registered users who are affiliated with a Department of Transportation and have at least “Agency User” permissions may create a new benchmarking initiative for a benchmarking network by submitting a request to the platform administrator. To do so, go to Account Management and click the “Create new Benchmarking Initiative” link.

Before filling out the form, you’ll need to know who else should participate (other members of your “benchmarking network”), and the type of data (whole number, decimal, or percent).

The benchmarking platform includes safety and bridge condition performance measures covering performance that pre-dates the introduction of the FHWA transportation performance measures. These data are maintained on the platform to show historic performance.