Intro to Reporting in MaestroQA

Save this resource as a guide to kickstart your Reporting needs! Quickly access links to articles detailing each of our reports.

Purpose

Congratulations! After extensive preparation, iteration, and execution of your QA program, your team has compiled enough graded tickets to start looking for insights.

Grading tickets is more than a pat-on-the-back exercise. CX teams perform QA to accumulate data that reveals trends which would be difficult to spot with the naked eye. Once you've collected a decent sample size of graded tickets, you can build graphs and charts that tell a story about how the support team is truly performing. Armed with that information, an admin can make actionable recommendations to their internal team that impact the people, processes, and products of their organization.


Who has access to Reporting?

Any user who is not an Agent has access to the Reporting tab and can view data for the Agents/Agent Groups they specifically have access to. Some charts are only visible to Admins and Limited Admins, as detailed later on.

Agents don't have access to the Reporting tab, but they can see graphs in their Coaching dashboard when they sign in to their accounts. Click here to learn more about the Agent's experience.


Your QA Program Inputs Impact Your Reporting Output

Grading tickets represents only one piece of the total equation when building effective reports in Maestro. To identify meaningful trends, you also need to confirm that your reporting inputs align with what you're investigating. Put another way, your inputs determine how you can slice and dice your team's overall QA Score.

Here are some of the inputs to keep in mind.


Reporting Input: Agent Groups/User Groups

Key stakeholders in your support organization may be interested in seeing data at the Agent level, but what other data points could be useful based on how your support team is constructed? Your QA metric becomes far more versatile when you decide early in your QA process how you'd like to split the data. Comparing performance across groups makes it easy to highlight top-performing and lower-performing teams. Admins can go to Settings > User Roles > Agent Groups or User Groups to build out these teams for further analysis.

Here are some common ways for Admins to break out Agent Groups for Reporting purposes:

  • Support Channel

    • Ex. Email team, Chat team, Phone team, etc.

  • Internal vs External/Outsourced teams

    • Ex. Internal team, Taskus team, FCR team, etc.

  • Team Lead/Supervisor teams

    • Ex. Team Daisy, Team Alex, Team Lizzy, etc.

  • Support Function/Support Org Teams

    • Ex. Tech Support team, Refunds team, Escalations team, etc.

  • Hiring classes

    • Ex. Q1 2021 Agent Hires, Q2 2021 Agent Hires, etc.

[Visual: Reporting Agent Groups]

With these groups created, users can easily compare QA scores across different CX teams. Remember: users can only see reporting for the Agents, Agent Groups, and User Groups they have access to. Admins can go to Settings > User Roles > User Accounts to grant the proper access to any team member with a Maestro account.
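If it helps to see the mechanics, here's a minimal sketch of the kind of roll-up this enables: group graded tickets by Agent Group, then average each group's QA scores. The records and field names below are hypothetical, not MaestroQA's actual export format.

```python
from collections import defaultdict

# Hypothetical graded-ticket records; the field names are illustrative
# and not MaestroQA's actual export schema.
graded_tickets = [
    {"agent": "Daisy", "agent_group": "Email team", "qa_score": 92},
    {"agent": "Alex",  "agent_group": "Chat team",  "qa_score": 78},
    {"agent": "Lizzy", "agent_group": "Email team", "qa_score": 85},
]

# Bucket scores by Agent Group, then average each bucket.
scores_by_group = defaultdict(list)
for ticket in graded_tickets:
    scores_by_group[ticket["agent_group"]].append(ticket["qa_score"])

for group, scores in sorted(scores_by_group.items()):
    print(f"{group}: {sum(scores) / len(scores):.1f}")
# Chat team: 78.0
# Email team: 88.5
```

The same roll-up works for any of the group types listed above (channel teams, outsourced teams, hiring classes, and so on); only the grouping key changes.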


Reporting Input: Ticket Attributes

Which ticket fields matter most to your support team? As Agents resolve thousands of tickets, they categorize those interactions through the ticket attributes they fill out at the ticket level. If a ticket attribute has a defined list of options for an Agent to select from, an Admin can "Add" that attribute to Reporting (Settings > Ticket/User Attributes - see visual below). A user can then view QA scores segmented by that particular Ticket Attribute.

[Visual: Ticket Attributes]

For example, many support teams utilize Tags to describe the topics discussed in a ticket. By adding the Tags ticket attribute to Reporting, an admin could compare the Email team's QA Scores on tickets tagged "billing" against their scores on tickets tagged "returns".

As we can see above, some Ticket Attributes can be added to Reporting and some cannot. Source, Status, and Tags are all Ticket Attributes with a finite list of selections an Agent can make when describing a ticket. For example, an Agent can only choose "Open", "Pending", "Closed", or "Solved" as a ticket's Status. Since Status has been added to Reporting, users could compare QA scores for tickets with a "Closed" Status against QA scores for tickets with a "Solved" Status. Ticket attributes like Subject have no such structure; customers can title their tickets in many different ways, which makes it difficult to categorize every possible option. Because of this, the Subject attribute cannot be used for Reporting.

[Visual: Ticket Attributes Reporting]
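Under the hood, segmenting by a Ticket Attribute is conceptually just another group-by, keyed on the attribute's value instead of the Agent Group. Here is a rough sketch, again with hypothetical records and field names:

```python
from collections import defaultdict

# Hypothetical records: each graded ticket carries its attribute values.
graded_tickets = [
    {"tags": ["billing"],            "qa_score": 90},
    {"tags": ["returns"],            "qa_score": 72},
    {"tags": ["billing", "returns"], "qa_score": 81},
]

# Average QA score per tag; a ticket with two tags counts toward both.
scores_by_tag = defaultdict(list)
for ticket in graded_tickets:
    for tag in ticket["tags"]:
        scores_by_tag[tag].append(ticket["qa_score"])

for tag, scores in sorted(scores_by_tag.items()):
    print(f"{tag}: {sum(scores) / len(scores):.1f}")
# billing: 85.5
# returns: 76.5
```

Note that this grouping only works because Tags has a finite set of values; a free-text field like Subject would put nearly every ticket in its own bucket, which is exactly why it can't be added to Reporting.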

To maximize the effectiveness of Ticket Attributes as a Reporting input, Admins should identify which attributes their Agents fill out consistently when handling a ticket. Which ticket fields are your Agents actually using in Salesforce when responding to a ticket? Why would it be valuable to view QA scores for a particular attribute, and what recommendations could you make if trends emerge? Answering these questions goes a long way toward optimizing your reports.

Important note: Reporting on Ticket Attributes is available to clients on the Team Package or Professional Package. If your team is on the Essentials Package and would like to take advantage of this functionality, please reach out to your CSM or customer support to discuss next steps.

Reporting Input: Rubrics

For a QA program, the Rubric(s) used to assess Agents are extremely important: they serve as your primary data collection tool. A carefully constructed Rubric lets you collect insights about your Agents/Agent Groups at varying levels of granularity, and how you set up your Rubric questions determines what kinds of trends you'll be able to find in your reporting for your support teams.

Maestro reporting allows a user to see data at four different levels within a Rubric:

  • Rubric level (least granular)

  • Rubric Section level

  • Rubric Criteria level

  • Options/Drivers level (most granular)
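To picture how those four levels nest, here's a hedged sketch of a Rubric represented as plain data. The structure, names, and point values are all hypothetical, not MaestroQA's internal model:

```python
# Illustrative only: one possible way to picture a Rubric as nested data,
# from the Rubric level down to the Options/Drivers level.
rubric = {
    "name": "Email Rubric",                      # Rubric level
    "sections": [
        {
            "name": "Communication",             # Rubric Section level
            "criteria": [
                {
                    "name": "Tone",              # Rubric Criteria level
                    "options": {"Friendly": 2, "Neutral": 1, "Curt": 0},
                },
                {
                    "name": "Grammar",
                    "options": {"Clean": 1, "Errors": 0},
                },
            ],
        },
        {
            "name": "Process",
            "criteria": [
                {
                    "name": "Correct macro used",
                    "options": {"Yes": 1, "No": 0},  # Options/Drivers level
                },
            ],
        },
    ],
}

# Reporting can aggregate grades at any of the four levels: a whole-rubric
# score, per-section scores, per-criterion scores, or counts of which
# option/driver graders selected.
for section in rubric["sections"]:
    max_points = sum(max(c["options"].values()) for c in section["criteria"])
    print(f"{section['name']}: max points = {max_points}")
# Communication: max points = 3
# Process: max points = 1
```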

Here are some ideas to consider at each Rubric level to help you determine the best ways to build out your Rubrics to gain actionable insights.

Rubric-Level Reporting Considerations

  • How many Rubrics do you need? Users can easily separate QA scores by Rubric, so each Rubric could represent a different evaluation you're interested in analyzing for your support team. Some examples/reasons for splitting Rubrics include (but are not limited to):

    • Rubrics for each channel of support

      • Ex. Email Rubric vs Phone Rubric

    • Rubrics by Agent tenure

      • Ex. New Hire First 90 Days Rubric vs Veteran Agent Rubric

    • Rubrics by Support Function/Department

      • Ex. Tech Team Rubric vs Tier 3 Support Rubric vs Collections Team Rubric

    • Rubrics for Different KPI Analyses

      • QA Scorecard vs CSAT Root-Cause Analysis

REMEMBER - Do what works best for your support team and your overall QA program goals! Some teams like multiple Rubrics because they allow a more targeted assessment of specific behaviors in each support flow. Other clients prefer fewer Rubrics to keep the overall QA process simple for their teams. Work with your CSM to determine what works best for you!
