Rubric Versioning

How rubric iteration works in MaestroQA

Written by Matt
Updated over a month ago

Overview

When updating your rubric, keep these two things in mind:

  1. What are the downstream reporting impacts of my changes?

  2. What do I need to update after I change this rubric?

The Rubric Versioning workflow is intended to:

  • Ensure historical evaluations remain accurate for the rubric that was used at the time

  • Prevent accidental overrides of rubric labels that could cause incorrect downstream data

  • Reduce the effort required to update any automations or metrics tied to the rubric after it is updated


Rubric Versioning Flow

What will cause a new Rubric Version to be created?

There are two conditions for Rubric Versioning to trigger:

  1. The updated rubric has been used to grade tickets

    1. AND those graded tickets have not been deleted and still exist in MaestroQA

  2. One of the updates made to the rubric matches one of the actions described in the next section
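The two conditions combine as a logical AND. As a rough sketch (illustrative only — the function name, parameters, and logic below are assumptions for this article, not MaestroQA's actual implementation):

```python
# Illustrative sketch only: models when the Rubric Versioning workflow
# triggers, per the two conditions above. All names are hypothetical.
def versioning_triggered(existing_graded_tickets: int, change_is_notable: bool) -> bool:
    # Condition 1: the rubric has graded tickets that still exist
    # (i.e., have not been deleted) in MaestroQA.
    rubric_in_use = existing_graded_tickets > 0
    # Condition 2: the change matches a notable action from the next
    # section (e.g., deleting criteria or changing point values).
    return rubric_in_use and change_is_notable

print(versioning_triggered(12, True))   # True: workflow triggers
print(versioning_triggered(0, True))    # False: rubric never used for grading
print(versioning_triggered(12, False))  # False: change is not notable
```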

What Rubric Updates/Changes Trigger Versioning?

The focus of Rubric Versioning is to preserve data cleanliness by keeping scores accurate relative to the state of the rubric at the time they were created.

As a result, any of the following updates will trigger the Rubric Versioning workflow:

  • Creating any of these section types:

    • Scoring

    • Auto-fail

    • Bonus

  • Deleting sections, criteria, or options

  • Updating Rubric Criteria Options such as:

    • Changing a rubric option to be an auto-fail option

    • Adding a new option that has a point value

    • Changing the point value of an option

    • Changing the weight value of sections

  • Making updates to sections, such as:

    • Changing a section's type (e.g., changing a section from Standard to Auto-Fail)

    • Changing section settings (e.g., adding required feedback fields)

What type of Rubric Updates will not trigger the workflow?

If your rubric updating actions only involve:

  • Updating the rubric's name

  • Updating wording of the Sections, Criteria, or Options

  • Updating the Criteria Explanation Field

  • Dragging and dropping sections, criteria, or options to reorder them

  • Creating a new non-scoring section

    • A non-scoring section is a section that contains only non-scoring criteria, or that is limited to only non-scoring criteria

  • Adding NEW options to non-scoring criteria

  • Adding NEW options to feedback fields

Then you will not be required to create a new Rubric Version.

What happens if I make an update and don't choose to create a new Version?

If you update a rubric and choose not to create a new Rubric Version, the change can still affect how historical scores appear. If you believe your change will impact historical results, it may be worth exporting your QA data before making the rubric update.

Here are some example scenarios to help illustrate this:

Scenario 1

You heard feedback from graders that they wanted an additional feedback field option they could select to more accurately capture their feedback.

Scenario 2

Changing a rubric option label can make it seem like graders historically selected that option when they may have selected something else.


Rubric Versioning Workflow

After making changes to a rubric that has been used for grading, you may be prompted to create a new version of the rubric.

If the change was a notable update, you will be required to create a version:

If the change was not notable, a new version isn't required; you can choose to create a new version or update the current version:

Creating a New Rubric Version

When creating a new version of a rubric, you’ll need to provide a unique name for this version. This helps distinguish it from previous versions and ensures accurate tracking and reporting over time.

In addition to naming the new version, you may also have the option to update any existing automations or custom metrics that used the previous version:

  • Agent QA Automations: If the previous rubric version was set as the default rubric in any Agent QA automations, you’ll see an option to update those automations to use this new version as the default. You can select which automations, if any, should adopt the new rubric version. This update will replace the old rubric in the selected automations

  • Custom Metrics: If the previous rubric version was being used in any custom metrics, you can choose to add the new rubric version to those metrics. This action will include the new version alongside the old version within each selected metric, preserving historical data while allowing the metric to reflect the latest rubric criteria (see "Example" for more details)

The screenshot below shows the options available when creating a new rubric version:

After creating a new version of the rubric, the new version can be found either on the Rubric page or in the Rubric Folder the original rubric is located in.

Example: Updating Agent QA Automations to use the new Rubric Version

After updating your Rubric Version to Customer Experience Scorecard v1.2, you can now choose what Agent QA automations should default to using this rubric on new automation runs.

Any historical automation runs will still reference the previous rubric, Customer Experience Scorecard v1.1, so we recommend making your rubric updates before your automation run time to ensure people do not accidentally create scores with an older rubric version.

Reminder: After QA Scores are created using the new Rubric Version, make sure to also update your Send Graded Feedback and Send Grader QA automations to include this new rubric.

Example: Updating a Custom Metric to Include a New Rubric Version

Let’s say you have a custom metric, "Resolution: Avg. QA Score," that tracks the average QA score specifically for the "Resolution" section of several rubrics.

This metric currently references the "Resolution" section from these two rubrics:

  1. "Customer Experience Scorecard v1.0"

  2. "Customer Experience Scorecard v1.1"

When you create a new version of the rubric, Customer Experience Scorecard v1.2, and select "Resolution: Avg. QA Score" as a metric to update, here’s what happens:

  1. Updates to the metric: The metric will now capture QA data from the latest rubric version and any historical rubric versions. As a result, the "Resolution: Avg. QA Score" metric will now include data from the "Resolution" section from these rubrics:

    1. Customer Experience Scorecard v1.0

    2. Customer Experience Scorecard v1.1

    3. Customer Experience Scorecard v1.2

  2. Unified Metric View: Any dashboard, report, or widget referencing the "Resolution: Avg. QA Score" metric will now display data from all three rubric versions.

In this way, selecting a metric like "Resolution: Avg. QA Score" for updates when saving a new rubric version allows you to track the "Resolution" section across rubric versions seamlessly, giving you a complete view of performance trends over time.
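The pooled behavior described above can be sketched with hypothetical data (illustrative only — these scores and the aggregation shown are made-up examples, not MaestroQA's actual calculation):

```python
# Illustrative sketch: how a unified metric averages the "Resolution"
# section across rubric versions. Scores here are hypothetical.
resolution_scores = {
    "Customer Experience Scorecard v1.0": [80, 90],
    "Customer Experience Scorecard v1.1": [85, 95],
    "Customer Experience Scorecard v1.2": [100],
}

# The metric pools scores from every rubric version it references, so
# dashboards show one continuous trend instead of a break at each update.
all_scores = [s for scores in resolution_scores.values() for s in scores]
avg_qa_score = sum(all_scores) / len(all_scores)
print(avg_qa_score)  # 90.0
```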

Using Rubric Versioning to update a metric will not update the name of the metric.
