Data Warehouse Table Overview

Describes the tables maintained by the data warehouse integration

Written by Matt
Updated over a week ago

Quick note: If you're just looking for what tables and data are included in our data warehouse, consult this Data Dictionary.

Introduction

The Data Warehouse integration allows you to view all of your Maestro grading data in your own internal database. This gives you the flexibility to create whatever reports your team needs. It also opens the door to much more sophisticated reporting, particularly the ability to combine MaestroQA data with all your other data, enabling you to tie your QA processes to the business metrics you care about.

This article describes how all your Maestro data is broken up into the different tables and columns. As a warning, it goes into some amount of technical detail. At the end of the article, we'll walk through an example query that mimics a basic report Maestro provides.

In addition to this document, see https://github.com/adtribute/looker-maestroqa for additional details, plus copyable schema and dashboard files that you can use directly in Looker or adapt to various other BI tools.

Table Overview

Rubric Tables

There are 5 tables which collectively describe all of your rubrics, i.e. the grading templates you create at https://app.maestroqa.com/settings/rubrics.
These tables are the SQL-normalized storage of your rubrics. In terms of actually using them to build queries, they essentially supply the named labels and descriptions that the corresponding 'Score' tables (described in 'Score Tables' below) reference via foreign keys.

The tables are:

  1. templates

  2. sections

  3. questions

  4. custom_options

  5. feedback_options

Score Tables

There are 5 tables which hold the information for each grade you make. There is one score table per rubric table, each representing that part of a template 'filled out'. These tables hold all your QA scores and will be the primary tables you build reports from. As mentioned above, each one has one or more foreign keys into the corresponding 'Rubric Tables' (see the join sketch after this list). These are described in detail in the 'Table Descriptions' section.

The tables are:

  1. answers

  2. section_scores

  3. question_scores

  4. option_selections

  5. feedback_selections
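
To make those foreign keys concrete, here is a minimal join sketch that attaches human-readable question text to each question score. The join column names (question_id, id, question_text, score) are illustrative assumptions; the Data Dictionary has the authoritative names.

    -- Attach rubric question text to each question score
    -- (column names are illustrative; see the Data Dictionary)
    SELECT
        questions.question_text,
        question_scores.score
    FROM question_scores
    JOIN questions
        ON question_scores.question_id = questions.id;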

Annotations Table

There is 1 table that holds the information for the annotations made on grades. This table has foreign keys to the answers table and to the templates table (see the join sketch after this list).

The table is:

  1. annotations
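
For example, here is a sketch that lines up each annotation with the grade and template it belongs to; the id and join column names are illustrative assumptions:

    -- Annotations alongside their grade and rubric
    -- (join column names are illustrative)
    SELECT
        templates.name      AS rubric,
        answers.updated_at  AS graded_at,
        annotations.id      AS annotation_id
    FROM annotations
    JOIN answers   ON annotations.answer_id   = answers.id
    JOIN templates ON annotations.template_id = templates.id;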

Agents / Group Tables

There are 2 tables which hold basic info about your agents & the Maestro groups they're in. This lets you easily create reports for specific groups by joining to these tables & filtering on the relevant groups (see the sketch after this list).

The tables are:

  1. helpdesk_id_email

  2. user_groups
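
For instance, restricting grades to a single group might look like this sketch; the agent_id and group_name columns are illustrative assumptions:

    -- Grades belonging to agents in one Maestro group
    -- (join key and column names are illustrative)
    SELECT answers.*
    FROM answers
    JOIN helpdesk_id_email agents
        ON answers.gradee_id = agents.agent_id
    JOIN user_groups
        ON agents.agent_id = user_groups.agent_id
    WHERE user_groups.group_name = 'Test Group';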

Appeals Tables

There are 3 tables that hold the information for appeals to grades. Each appeal has a unique appeal_id. Comments exchanged during the appeal process are in appeal_comments, and the original and updated grades are in appeal_answers (see the join sketch after this list).

The tables are:

  1. appeals

  2. appeal_comments

  3. appeal_answers
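
Here is a sketch of joining an appeal to its comment thread on the shared appeal_id; the comment column name is an illustrative assumption:

    -- Each appeal with its exchanged comments, joined on appeal_id
    SELECT
        appeals.appeal_id,
        appeal_comments.comment_text   -- illustrative column name
    FROM appeals
    JOIN appeal_comments
        ON appeals.appeal_id = appeal_comments.appeal_id;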

Coaching Tables

There is 1 coaching table that holds high level information for coaching sessions. Each coaching session has a unique coaching_sessions_id that will link to additional tables as they are released.

The table is:

  1. coaching_sessions

Example Query 

To give you a concrete example of how to use this data, we'll mimic a basic Maestro report. Let's take this group report as an example:

[Screenshot: example group report]

This report covers 1 group, 'Test Group', split across 3 specified rubrics, for grades made in March.

To mirror this report, we'll build the query in two steps.

We first set the relevant time zone. Since all dates are stored in UTC & we are based out of NYC, we'll use America/New_York:

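Assuming a Postgres-style warehouse (the exact statement varies by database), that's a one-liner:

    -- Timestamps are stored in UTC; report local times in US Eastern
    SET TIME ZONE 'America/New_York';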

Then, to get the three values in the report (one per rubric), we would use a query along these lines:

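Here is a sketch of what that query can look like. The table names and the updated_at, gradee_id, and answer_type fields come from this article; the score column, join keys, rubric names, and dates are illustrative assumptions, so adapt them against the Data Dictionary:

    -- One row per rubric: average score and grade count
    -- for 'Test Group', graded in March (illustrative year)
    SELECT
        templates.name           AS rubric,
        AVG(answers.total_score) AS avg_score,   -- illustrative score column
        COUNT(*)                 AS num_grades
    FROM answers
    JOIN templates
        ON answers.template_id = templates.id    -- illustrative join keys
    JOIN helpdesk_id_email agents
        ON answers.gradee_id = agents.agent_id
    JOIN user_groups
        ON agents.agent_id = user_groups.agent_id
    WHERE user_groups.group_name = 'Test Group'
      AND templates.name IN ('Rubric 1', 'Rubric 2', 'Rubric 3')
      AND answers.answer_type = 'grading'
      AND answers.updated_at >= '2021-03-01'
      AND answers.updated_at <  '2021-04-01'
    GROUP BY templates.name;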

There is a lot going on here, so let me point out 3 important pieces:

  1. Time filtering: 

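From the sketch above, this is the half-open date range on the grading timestamp (the year is illustrative):

    -- Includes all of March, excludes April 1 onward
    AND answers.updated_at >= '2021-03-01'
    AND answers.updated_at <  '2021-04-01'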

To mimic the Dashboard's 'graded at' time axis, we use the answers.updated_at field.

  2. Agent availability:

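Assuming helpdesk_id_email exposes an availability flag (the column name is an illustrative assumption), that filter is one extra predicate on the sketch above:

    -- Keep only agents who are currently 'Available'
    AND agents.availability = 'Available'   -- illustrative column name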

Some default Maestro reports will only count grades whose 'gradee_id' belongs to an 'Available' agent. Whether to include agents you are no longer grading doesn't have one consistent answer; it depends on what you want!

  3. Default score filters:

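The filter this article calls out explicitly is the answer type check from the sketch above:

    -- Keep real QA grades; exclude calibration grades
    AND answers.answer_type = 'grading'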

By default, almost all Maestro reports include a set of default score filters. If you are NOT doing calibration reporting, you probably only want to look at answers whose 'answer_type' is 'grading'.
