Grader QA Workflow

Building out a Grader QA workflow in MaestroQA

Written by Matt

What is GraderQA?

GraderQA lets you benchmark how accurately your graders grade tickets. It uses a dedicated automation to assign graded tickets to a designated grader, who serves as the benchmark for how accurate QA scores are.

Terminology List

Here is a quick list of terminology to use when discussing GraderQA:

  • Agent QA: The process of grading an agent's performance

  • Grader QA: The process of grading a grader's performance

  • Original Grader: The grader who originally graded the ticket

  • Original Score: The QA score on the ticket

  • Benchmark Grader: The user who conducts GraderQA

  • Benchmark Score: The "correct" QA score for the ticket

  • Accuracy Score (also called the Alignment Score): How accurate the Original Score was relative to the Benchmark Score

How to set up GraderQA

Note: An Admin role is required to access the Role Permissions page. Admins can access Automations by default, while Limited Admins, Graders, and Managers can be given access to create and edit Automations within Role Permissions (be sure to check your configuration for details).

Step 1: Set up users who will be the Benchmark Graders

On the User roles > Role Permissions page, set Benchmark Graders through the "Allow these users to grade Graders" permission:

This is located right underneath the Role Permission table. Once done, ensure that the Benchmark Graders you've selected also have Agent Group Access to all the Agents they will receive graded tickets for.

Additionally, confirm that the Benchmark Graders have Rubric access to the Rubrics used by the original Graders.

NOTE: Benchmark Graders will be unable to see Assignments/Ticket Reviews created for them if they do not have access to the Agents or Rubric involved.

Step 2: Set up User Groups for graders to review

Right underneath the permission above, you can also configure which User Groups will have their graders reviewed through Grader QA benchmark scores.

Note: Any user who QAed a ticket in the past is considered a grader.

Admins can also decide which User Groups should have access to their Grader QA Feedback. This can be configured in Role Permissions as well.

Step 3: Create the GraderQA Automation

Go to Sharing Automations, click "Create New Automation", and select "Send Grader QA Assignments".

In the Grader QA Automation form, you will be required to specify which Rubric(s) the automation should source graded tickets from.

Only tickets associated with the selected rubric(s) will be assigned out for Grader QA.

Selecting multiple rubrics does not guarantee that tickets from each rubric will be distributed by the automation.

As an example:

  • IF you selected Chat QA, Email QA, and Phone QA as rubrics in this selector

  • WHEN the Grader QA automation runs, any ticket that was graded using Chat QA, Email QA, OR Phone QA will be selected

As a result of the above, it is possible that after selecting multiple rubrics, you still receive Grader QA assignments that are all associated with one rubric.

Step 4: GraderQA Workflow

After the automation runs, benchmark graders will receive their assignments, and they can freely check on these assignments from the Home page.

[Screenshot: Grader QA assignments on the Home page]

From here, benchmark graders can grade these tickets just as they would during Agent QA.

During the GraderQA process, benchmark graders cannot change:

  1. The grading type

  2. The agent being graded

  3. The rubric being used

These will be grayed out, as shown in the screenshot below.

[Screenshot: Grader QA grading view with grading type, agent, and rubric grayed out]

Step 5: Leave Written Feedback for Graders and Share Grader QA Results

After the Benchmark Grader creates a Benchmark Score, the Alignment Score is automatically calculated.

To provide additional insight, Benchmark Graders can then leave written feedback for the Original Grader. The screen shows a side-by-side view of both scores for each criterion, so Benchmark Graders can see how the Original Grader delivered their feedback and offer tips to help them improve.

  • Is the Original Grader too concise? Should they add more detail?

  • Should annotations be used to point the agent in the right direction?

  • How is the Grader's tone of voice? Is there a better way to suggest how an agent can handle a particular customer inquiry?

Similar to agents, Graders can benefit greatly from detailed and targeted feedback.

Once completed, Original Graders who have been given access can see their feedback. To find Grader QA results manually, a Grader can go to the Tickets tab and filter for benchmarked tickets. Click into the ticket search options on the Tickets tab and select Benchmarked Graded = True.

Press See Scores for any of these tickets. From here you should see an Agent QA score created by the Original Grader, and a Grader QA score created by the Benchmark Grader.

Once you click View for the Grader QA score, you will see a side-by-side comparison of the scores and qualitative feedback from both the Original Grader and the Benchmark Grader. This includes the annotations, checkboxes, and comments left by each Grader. The side-by-side view allows users to spot differences between the two scores with ease.

The top-right corner of the left panel shows:

  • The overall Rubric Alignment score

  • The Original QA Score by the Original Grader

  • The Benchmark Score by the Benchmark Grader

Throughout the ticket, each criterion/question will show the Original Score, the Benchmark Score, and the Alignment Score for that particular criterion.
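As a rough mental model of how these numbers relate, the sketch below is purely illustrative: MaestroQA calculates the Alignment Score automatically, and its exact formula is not documented in this article. The sketch assumes a simple hypothetical model where per-criterion alignment is 100% when the Original Score matches the Benchmark Score, and shrinks with the gap between the two; the function names and the example rubric are invented for illustration.

```python
# Hypothetical model only -- not MaestroQA's actual formula.
def criterion_alignment(original: float, benchmark: float, max_points: float) -> float:
    """Per-criterion alignment: 100% minus the score gap as a share of max points."""
    gap = abs(original - benchmark)
    return max(0.0, 100.0 * (1 - gap / max_points))

def overall_alignment(criteria: list[tuple[float, float, float]]) -> float:
    """Average the per-criterion alignment scores across the rubric."""
    scores = [criterion_alignment(o, b, m) for o, b, m in criteria]
    return sum(scores) / len(scores)

# (Original Score, Benchmark Score, max points) for three invented rubric criteria:
example = [(5, 5, 5), (3, 4, 5), (0, 2, 2)]
print(round(overall_alignment(example), 1))  # 60.0 under this toy model
```

The point of the toy model is simply that perfect agreement on a criterion yields full alignment, and larger disagreements pull the overall score down; the actual calculation inside MaestroQA may differ.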

Step 6: Sharing Feedback via GQA Feedback Automation

Though Graders can search for their feedback manually, Admins can also set up a workflow to send the feedback directly to Graders at a regular cadence!
