How to Run a BDC Call Calibration Session
A step-by-step guide to running a BDC call calibration session that builds consistent evaluation standards and aligns your team on what good looks like.
Call calibration is one of the most effective team training activities a BDC manager can run — and one of the least used. A calibration session is not a call review. It is a team alignment exercise that answers the question: do we all agree on what good looks like?
Without calibration, every manager in a multi-manager BDC scores calls differently, every rep gets inconsistent feedback, and "good" becomes a moving target that varies based on who is coaching.
With regular calibration, shared standards replace subjective impressions. Reps know what they are working toward regardless of who is coaching them.
What a Calibration Session Is
A calibration session is a structured team exercise where everyone independently evaluates the same call using the same scoring criteria, then compares and discusses the differences.
The goal is not to find the "right" answer for any single call. The goal is to surface the places where your team's standards are inconsistent, discuss what the standard should actually be, and leave with more aligned evaluation criteria.
A call that everyone scores the same reveals your team's shared standard. A call where scores vary widely reveals your team's ambiguity — where the standard is unclear or where people are applying it differently.
When to Run Calibration Sessions
Monthly calibration sessions are ideal for most BDC teams. Quarterly is the minimum to maintain consistent standards.
Run calibration sessions:
- When you introduce a new script or new evaluation criteria
- After a new hire joins and you want to establish shared expectations early
- When you notice significant variation in how different managers or senior reps evaluate calls
- Any time you realize "good" means different things to different people on your team
Preparing the Session
Select the Calls
Choose two to three calls that will illustrate different skill levels and different scenarios. A good selection might include:
- One strong call (most things working well)
- One developing call (some things working, some not)
- One call with a specific skill moment that is genuinely debatable
Avoid calls from current team members if possible; use anonymized recordings or calls from former reps instead. Evaluating peers' calls can create social dynamics that interfere with honest scoring.
If you must use current team calls, get the rep's permission and frame it as "your call is being used as a training example" rather than "your call is being evaluated."
Prepare the Scoring Criteria
Each participant should have a copy of your BDC call evaluation scorecard before the session. If you have not built one, the calibration session is a good opportunity to draft it collaboratively.
Set the Time and Location
Plan for 45-60 minutes to cover two to three calls. Schedule the session when the calling floor is less active, such as end of day or a slower midweek period.
Running the Session
Step 1: Frame the Purpose (3 minutes)
"Today we're doing a calibration session. We're all going to listen to the same calls and score them independently. Then we're going to compare our scores and discuss where we agree and where we see it differently. The goal is not to find the 'right' answer on any call — it's to make sure we're all applying the same standards when we coach and evaluate."
This framing matters. Reps who understand the purpose of calibration engage differently than those who think they are being tested.
Step 2: Listen to the First Call (the length of the call)
Everyone listens to the full call without discussion. Each person scores independently using the evaluation scorecard. No talking, no reactions, just listening and scoring.
Step 3: Independent Scoring (2 minutes)
Have everyone record complete scores before anyone shares an assessment. If you ask for scores before people finish scoring independently, you get social influence rather than independent evaluation.
Step 4: Score Reveal and Discussion (10-15 minutes per call)
Go section by section through the scorecard. Ask each person to share their score for that section before any discussion.
When there is agreement: "Good — we're all seeing [Section] the same way. This is part of what we're looking for."
When there is disagreement: "Interesting — we've got a range from [low score] to [high score] on [Section]. [Person who scored high], what did you hear that brought it there? [Person who scored low], what did you hear that brought it lower?"
Let people explain their reasoning. Do not referee early — let the disagreement surface before you weigh in with what the standard should be.
After the discussion: State clearly what the standard is or where the ambiguity in the scorecard needs to be resolved. "Based on this discussion, I think the criteria for a 3 on Value Bridge should be [specific clarification]. Let's update the scorecard after this session."
Step 5: Repeat for Remaining Calls
The second and third calls typically produce faster discussion because the first call established shared context and vocabulary.
Step 6: Debrief and Action Items (5-10 minutes)
End the session with:
- What areas of the scorecard had the most agreement? (These are your strongest shared standards.)
- What areas had the most disagreement? (These need clearer criteria.)
- What will you update in the scorecard before the next session?
- What did this session reveal about your team's current coaching focus?
Document the discussion and any scorecard updates. Distribute the updated scorecard within 24 hours.
Common Calibration Mistakes
Not scoring independently first. If people hear each other's scores before scoring themselves, social influence contaminates the results. Independent first, discussion second. Always.
Getting defensive about inconsistency. When one manager's scores are consistently lower or higher than the team average, that manager sometimes gets defensive. Frame the goal as calibration, not correction — everyone has slightly different standards, and the goal is alignment.
Selecting calls that are too clear-cut. A call where everyone agrees it is excellent or terrible reveals nothing about shared standards. Select calls with genuine complexity where reasonable people might score differently.
Skipping the action items. If the session ends without concrete updates to the scorecard or coaching criteria, the learning will fade. Always leave with at least one documented standard clarification.
Doing it once and stopping. Calibration is ongoing. Standards drift over time as new team members join, new scenarios emerge, and old habits creep back in. Monthly is the right frequency to maintain consistent standards.
Calibration for Multi-Location Teams
If you have BDC reps across multiple dealership locations with different managers, calibration becomes even more critical. Reps at different locations may be receiving inconsistent coaching on the same skills, creating disparate standards across the organization.
Run multi-location calibration sessions at least quarterly. Pull calls from multiple locations and score them together. Surface the differences in how each location's reps perform and ensure the standards being applied are consistent.
DealSpeak adds a useful dimension to multi-location calibration: you can compare AI practice session performance scores across locations to identify whether reps at different locations are demonstrating different skill levels on the same scenarios. This data point supplements call recording calibration with practice-based comparison.
Frequently Asked Questions
Should managers be evaluated in calibration sessions too? Yes. If multiple managers are coaching the team, calibrate the managers' scoring as well. This is often the most productive calibration exercise because manager inconsistency is frequently the root cause of rep confusion about standards.
What if a rep's call is used in a calibration and the team scores it poorly? This should be handled privately with the rep after the session. Calling someone out in front of peers for a poor call is not calibration — it is public humiliation. If you are going to use current team members' calls, brief them first and debrief them privately afterward.
How long does it take to build consistent standards across a team? Most teams reach meaningful alignment after two to three monthly calibration sessions. Complete alignment on nuanced scoring criteria can take four to six months of consistent calibration practice.
Consistent Standards Produce Consistent Results
Reps who receive consistent feedback from consistent standards develop more reliably than reps who get different answers depending on who is coaching them. Calibration is the mechanism that produces consistent standards.
Run it monthly. Update the scorecard based on what you learn. Watch your team's coaching conversations become more productive because everyone is working from the same definition of good.
Learn how DealSpeak supports BDC coaching consistency across your team and locations.