Exporting Raw Scores

Understand in more detail the information provided in the raw data export.

Written by Domenic Nucci
Updated over 2 months ago

Overview

Users can export detailed CSVs of ticket data for customized review of graded tickets. This functionality is accessible via the Tickets tab and works only with graded tickets; ungraded tickets cannot be exported from MaestroQA.

In this article, we will walk through how to configure the export and explain the data provided.

Configuring a Raw Export

1. The easiest way to export raw scores for graded tickets is via the Tickets tab.

From there, you can Export Raw Scores in two ways: 

Method 1: Manually check off tickets and then select "Export Scores"

Method 2: Apply a search filter and then select "Export Scores"


If you would like to include Calibration Scores, Grader QA Scores, or Tickets & Scores from Unavailable Agents as well, be sure to select the corresponding options (e.g., "Include calibration scores") before you export!

2. Fill out the pop-up form. This includes editing the name of the resulting export, selecting which agents to include/exclude, the date range of tickets to include (for Method 2), and the recipients of the export. A message can also be included in the email that is sent.

3. Select "Send CSV of Raw Scores"

The resulting files can be accessed in your email (if included in the "Recipients" section above) or in the "Exports" tab under Tickets in the top menu or under Reporting, depending on your MQA configuration.

4. Select "Download" in either the email or the "Exports" tab.

The resulting download will be a folder containing several CSV files, which we describe in more detail below.

Understanding Results of Exports

After exporting the tickets, you will see multiple CSV files. If you wish to merge these CSV files with an existing sheet, we suggest merging on the gradable_id column, as it reflects the Ticket ID.

All time fields are automatically converted to GMT.
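For anyone working with the export programmatically, the two points above can be sketched as follows. This is a minimal example using hypothetical sample rows (the ticket ID, scores, and timestamps are made up); it joins two of the exported files on gradable_id and shifts a GMT timestamp into another time zone:

```python
import csv
import io
from datetime import datetime, timezone, timedelta

# Hypothetical sample rows standing in for two of the exported files.
individual = io.StringIO(
    "gradable_id,question,question_score,date_graded\n"
    "12345,Greeting,5,2024-01-15 14:30:00\n"
)
rubric = io.StringIO(
    "gradable_id,rubric_score\n"
    "12345,92\n"
)

# Index one file by gradable_id (the Ticket ID), then join the other onto it.
rubric_by_ticket = {row["gradable_id"]: row for row in csv.DictReader(rubric)}

merged = []
for row in csv.DictReader(individual):
    row.update(rubric_by_ticket.get(row["gradable_id"], {}))
    # Time fields arrive in GMT; attach the UTC zone, then shift as needed.
    graded = datetime.strptime(row["date_graded"], "%Y-%m-%d %H:%M:%S")
    row["date_graded_local"] = graded.replace(tzinfo=timezone.utc).astimezone(
        timezone(timedelta(hours=-5))  # example offset: UTC-5
    )
    merged.append(row)

print(merged[0]["rubric_score"])       # joined from the rubric file
print(merged[0]["date_graded_local"])  # shifted out of GMT
```

The same join works in a spreadsheet (e.g., VLOOKUP on gradable_id) or with any data tool of your choice.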

A quick explanation about what each CSV contains can be found below: 

Individual_answers.csv

Rows: Each row represents one criterion (question) from each graded ticket included in the export.

Columns: The file includes the following columns, described below:

  • gradable_id: the Ticket ID

  • agent_id: distinct ID number for each agent. 

  • agent_name: Name of the agent

  • rubric_id: ID string associated with each rubric

  • rubric_name: The name of the rubric used when creating the rubric

  • question_id: ID string associated with each question

  • question: The relevant question

  • answer_label: The answer that was selected by the grader

  • feedback_answer_label: If checkbox or dropdown functionality is used in the feedback section of a question, the results will appear here.

  • question_score: The amount of points awarded for the given answer. This will be blank for N/A responses. Auto-fail questions will show 0 when selected.

  • max_question_score: The maximum amount of points available to receive for that question. (Auto-fail questions will have max_question_score of -100).

  • comment: Any free-form comment included in the feedback section after each question.

  • date_graded: The date and time the ticket grade was saved.

  • grader: The person that graded the ticket

  • grade_type: "Grading" or "Calibration". This will be driven by whether the grader selected "Start Grading" or "Start Calibration" when starting to score the ticket.

  • ticket_created_at: The date and time the ticket that this grade was for was created

  • date_first_started: The date and time the Grader selected an Agent and Rubric and began grading the ticket

  • date_first_graded: The date the ticket was initially graded and the score was saved

  • date_reported_as: If the ticket grade was not altered in any way, this indicates the original graded date. If a new graded date is manually selected for the ticket, then "Reported As" refers to the new, manually-selected graded date.

  • section_id: ID string associated with the section containing this question

  • section_name: The section name of the section containing this question

  • source: Helpdesk the ticket originated from (ex. Zendesk, Salesforce, Intercom)

  • total_grading_time: The amount of time it took the Grader to fully grade the ticket (displayed in seconds)

PLEASE NOTE: Most of these columns also appear in the Section_scores and Rubric_scores files. Only columns that vary among the three files are described below.
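The scoring columns above use two sentinel conventions worth handling explicitly: question_score is blank for N/A answers, and auto-fail questions carry max_question_score of -100. A minimal sketch (with hypothetical rows) of computing a per-question percentage while filtering both cases:

```python
# Hypothetical Individual_answers rows: question_score is blank for N/A,
# 0 for a selected auto-fail, and max_question_score is -100 for auto-fail.
rows = [
    {"question": "Greeting", "question_score": "5", "max_question_score": "5"},
    {"question": "Policy", "question_score": "", "max_question_score": "10"},        # N/A
    {"question": "Compliance", "question_score": "0", "max_question_score": "-100"}, # auto-fail
]

def question_pct(row):
    """Return the score as a percentage, or None for N/A / auto-fail rows."""
    if row["question_score"] == "":          # N/A answers export as blank
        return None
    if row["max_question_score"] == "-100":  # auto-fail questions use -100
        return None
    return 100 * float(row["question_score"]) / float(row["max_question_score"])

pcts = [question_pct(r) for r in rows]
print(pcts)  # [100.0, None, None]
```

How you fold auto-fail rows back into a final number is a reporting decision; excluding them from simple percentages avoids dividing by -100.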

Section_scores.csv

Rows: Each row represents one section from each graded ticket included in the export.

Columns: There are 12 columns; those unique to this file are described below:

  • section_id:  ID string associated with each section

  • section_name: The name of the section

  • section_score: Percentage of correct answers in that section. Auto-fail sections will show -100 when selected. A section with answers that are all N/A would show blank.

  • max_section_score: Total percentage available (100 for all rows)
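When aggregating section scores across tickets, the blank (all-N/A) and -100 (auto-fail) conventions above matter. A small sketch with hypothetical rows, skipping blank rows; whether to keep or drop the -100 auto-fail rows is your reporting choice (here they are kept):

```python
# Hypothetical Section_scores rows for one section across several tickets.
rows = [
    {"gradable_id": "1", "section_name": "Tone", "section_score": "80"},
    {"gradable_id": "2", "section_name": "Tone", "section_score": ""},      # all-N/A section
    {"gradable_id": "3", "section_name": "Tone", "section_score": "-100"},  # auto-fail
    {"gradable_id": "4", "section_name": "Tone", "section_score": "100"},
]

# Skip blank (all-N/A) rows entirely; keep auto-fail rows in the average.
scores = [float(r["section_score"]) for r in rows if r["section_score"] != ""]
avg = sum(scores) / len(scores)
print(avg)  # (80 - 100 + 100) / 3
```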

Rubric_scores.csv

Rows: Each row represents the rubric score for one graded ticket included in the export; in other words, each row represents a ticket.

Columns: There are 11 columns; those unique to this file are described below:

  • rubric_score: Final rubric score.

  • max_rubric_score: Total percentage available (100 for all rows)

  • comment: General comments left at the bottom of the ticket.
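Because each row in this file is one ticket, it is the natural file for per-agent rollups; agent_name is one of the shared columns noted above. A minimal sketch with hypothetical rows:

```python
from collections import defaultdict

# Hypothetical Rubric_scores rows; agent_name is one of the shared columns
# carried over from the Individual_answers layout.
rows = [
    {"gradable_id": "1", "agent_name": "Ana", "rubric_score": "90"},
    {"gradable_id": "2", "agent_name": "Ana", "rubric_score": "70"},
    {"gradable_id": "3", "agent_name": "Ben", "rubric_score": "100"},
]

# Group the final rubric scores by agent, then average each group.
totals = defaultdict(list)
for r in rows:
    totals[r["agent_name"]].append(float(r["rubric_score"]))

averages = {agent: sum(s) / len(s) for agent, s in totals.items()}
print(averages)  # {'Ana': 80.0, 'Ben': 100.0}
```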

Annotation.csv

Rows: Each row represents an annotation. A ticket may have multiple annotations or none.

Columns: There are 11 columns; those unique to this file are described below:

  • annotation_id: Random string that uniquely identifies the annotation

  • selected_text: the highlighted text

  • comment: the comment left by the grader associated with the highlighted text


Note: All graded dates in the export are reported in UTC.

CSAT.csv

If any of the tickets included in the export have a CSAT score in MaestroQA, this file will include all of those CSATs.

Rows: Each row represents a different ticket ID in the helpdesk.

Columns: There are 5 columns, described below:

  • gradable_id: the Ticket ID in the Helpdesk

  • csat_score: the CSAT score

  • csat_comment: the CSAT comment

  • ticket_created_at: when the ticket was created in the Helpdesk

  • source: the source of the CSAT
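Since not every graded ticket has a CSAT, joining this file back onto the rubric scores is effectively a left join on gradable_id. A minimal sketch with hypothetical file contents:

```python
import csv
import io

# Hypothetical file contents; real exports have more columns.
rubric_csv = io.StringIO(
    "gradable_id,rubric_score,max_rubric_score\n"
    "A1,85,100\n"
    "A2,100,100\n"
)
csat_csv = io.StringIO(
    "gradable_id,csat_score\n"
    "A1,4\n"
)

csat = {r["gradable_id"]: r["csat_score"] for r in csv.DictReader(csat_csv)}

# Left-join: tickets without a CSAT get None rather than being dropped.
combined = [
    {**row, "csat_score": csat.get(row["gradable_id"])}
    for row in csv.DictReader(rubric_csv)
]
print(combined[0]["csat_score"])  # '4'
print(combined[1]["csat_score"])  # None
```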
