| Finding | Impact | Recommended Action |
|---|---|---|
| Subject | Run Rate | Target/Mo | Util % (30d) | Gap | Problem Type | Priority Action |
|---|---|---|---|---|---|---|

Util % (30d): 30-day utilization rate (recent 3 months). Arrow shows trend vs. older months: ↑ improving, ↓ declining, → stable. Counts show recent contracted vs. utilized.
| Subject | Run Rate | Target/Mo | Util % (30d) | Gap | Problem Type | Category | Recommendation |
|---|---|---|---|---|---|---|---|
How to Read This Table
- Utilization / Algorithm — Low tutor assignment rates suggest placement barriers (LSAT pattern). These could be hiding real supply shortages. Investigate algorithm before recruiting.
- True Supply — High utilization confirms the pipeline is genuinely short. External recruitment levers needed.
- No Util Data — No utilization data available to classify. Could be either problem type. Gather data to determine next steps.
| Subject | Run Rate | Target/Mo | Util % (30d) | Gap | Type | Category | Assessment |
|---|---|---|---|---|---|---|---|
BTS Portfolio Progress (Apr–Oct)
| Subject | Run Rate | Mar (Baseline) | Apr | May | Jun | Jul | Aug | Sep | Oct | BTS Total | Remaining | Pace |
|---|---|---|---|---|---|---|---|---|---|---|---|---|

Pace: shows whether a subject can meet its BTS goal at the current run rate. Formula: (Run Rate × Months Left) ÷ Remaining Need. 🟢 ≥100% on track; 🟡 80–99% at risk; 🔴 <80% will miss forecast. Only finalized months reduce "Remaining Need"; in-progress months are shown but don't shift targets or pace.
Upload Updated Forecast
Upload a new monitoring_table.xlsx from Pierre. This replaces the current forecast and triggers recalculation.
Upload Manual Forecast Adjustments
Upload a CSV with manual overrides for specific subjects for a specific month. Required columns: Subject Name and Final Forecast (the target for that month, including new subjects added on the goal sheet).
- Values apply to the upload month only; other months stay at zero for brand-new subjects until you upload later month files.
- If you use Commit to Repo with a GitHub token saved, the dashboard compares your file to the existing month CSV: any subject that appeared before but is missing from the new upload is written as 0 for that month only (removal from the list).
- Overrides persist across new model uploads.
- Legacy exports may include a separate Goal column, used only when Final Forecast is blank; the pipeline reads the same number either way.
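The missing-subject-becomes-zero behavior can be sketched as a small merge; the file paths, column names, and function name here are illustrative, not the dashboard's actual internals:

```python
import csv

def merge_month_overrides(existing_path, upload_path, out_path):
    """Sketch of the described merge: subjects present in the existing
    month CSV but missing from the new upload are written as 0 for that
    month only. Column names follow the required upload format."""
    def read(path):
        with open(path, newline="") as f:
            return {row["Subject Name"]: row["Final Forecast"]
                    for row in csv.DictReader(f)}

    existing, upload = read(existing_path), read(upload_path)
    merged = dict(upload)
    for subject in existing:
        if subject not in merged:
            merged[subject] = "0"  # removed from the list -> zero this month

    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["Subject Name", "Final Forecast"])
        w.writerows(sorted(merged.items()))
```

Subjects kept in the upload take their new values; everything else that existed before is zeroed for this month only, matching the removal semantics above.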
Advanced: Manual Data Overrides
These data sources are automatically synced from Looker daily. Use these uploads only if you need to manually override or correct the automated data. Changes will persist until the next Looker sync.
Upload Monthly Actuals
Override Looker actuals for a specific month. Format: Subject,Actual_Contracted
Upload Updated Run Rates
Override Looker run rates. Upload an updated run_rates.csv with the latest contracting run rate data.
Upload Updated Utilization Data
Override Looker utilization data. Upload an updated utilization.csv with current tutor utilization data.
GitHub Connection
To commit files directly from the dashboard, enter a GitHub Personal Access Token with repo scope. The token is stored in your browser session only and is cleared when you close this tab.
Don't have a token? Use the "Download File" buttons above instead, then upload to the data folder on GitHub.
Cumulative BTS Progress
| Type | File | Month | Subjects | Uploaded |
|---|---|---|---|---|
| No uploads yet | | | | |
Methodology Documentation
Version 2.6 | Last Updated: -
Overview & Purpose
This analysis compares back-to-school forecast demand (April-October 2026) against organic recruiting capacity to identify subjects requiring intervention. The critical innovation is distinguishing between true supply shortages and utilization problems where contracted tutors sit idle due to algorithmic barriers.
Key Innovation: Utilization-based problem classification extends the LSAT investigation methodology across the entire subject portfolio, identifying 24 subjects with the same algorithmic root causes.
Data Sources & Quality
1. Run Rate Baseline (Dec 2025 - Mar 2026)
Source: Looker query tracking new tutor contracting by subject by month
Calculation: Run_Rate = Mean(Dec 2025, Jan 2026, Feb 2026, Mar 2026)
- Excludes Nov 2025: Manual subject adds (adding subjects to existing tutors without consent) artificially inflated supply through January 2026. This practice ended February 2026.
- Excludes Pre-Dec 2025: Paid spend campaigns were active in earlier months, inflating run rates above organic baseline.
- TVA Stabilization: Late 2025 contracting process change (HireVue → Micro1 → TVA) reduced acceptance rates from ~50% to ~15%. Dec 2025 forward represents post-transition baseline.
- Strict SOP: February 2026 implemented stricter subject approval policy, affecting which subjects can be added.
Data Quality Concerns:
- High volatility: Month-to-month coefficient of variation ranges 32-51% for key subjects (Chemistry: 7-24 contracts/month)
- Small sample: Only 4 months of clean data limits confidence in averages
- Declining trend: Portfolio capacity down 15% from Nov-Feb to Dec-Mar (concerning trajectory)
Validation: Feb-Mar 2026 data shows consistent patterns with Dec-Jan 2026, suggesting baseline is stable despite volatility.
2. Forecast Data (V1.4 Model)
Source: Pierre's monitoring_table_wide.xlsx
Metric: forecasted_headcount - predicted new tutor contracting need by subject by month
Model Version: V1.4 (as of Week 13, 2026)
Known Model Characteristics:
- 4-month backtest (Nov 2025 - Feb 2026) showed ~31% MAE improvement over manual forecasts
- Buffered version showed ~46% improvement
- Buffer logic: If P90 time-on-net > 24 hours, select higher of buffer/non-buffer forecast
- Pending enhancement: Existing tutor capacity feature (will adjust forecasts based on tutors who can add subjects)
The forecasted_headcount metric predicts how many NEW tutors need to be contracted to meet demand, not how many will be utilized. This is why utilization classification is critical.
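The buffer rule above can be expressed as a small selector; the function and parameter names are illustrative, not the model's actual API:

```python
def select_forecast(p90_time_on_net_hours: float,
                    base_forecast: float,
                    buffered_forecast: float) -> float:
    """V1.4 buffer logic as described: when P90 time-on-net exceeds
    24 hours, take the higher of the buffered and non-buffered
    forecasts; otherwise use the base forecast."""
    if p90_time_on_net_hours > 24:
        return max(base_forecast, buffered_forecast)
    return base_forecast
```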
3. Utilization Data (Feb-Mar 2026)
Source: Looker query tracking tutor assignment timing
Metrics:
- Total_Contracted = New tutors contracted in Feb-Mar 2026
- Utilized_30d = Contracted tutors with ≥1 assignment within 30 days
- Util_Rate = (Utilized_30d / Total_Contracted) × 100
Period Rationale: Feb-Mar 2026 is cleanest period (post-manual-adds, post-TVA stabilization). Provides most honest view of current tutor utilization patterns.
Alternative Windows: Also calculated 60-day and 90-day utilization. 30-day chosen as primary metric because tutors should receive first assignment quickly (longer windows mask algorithm issues).
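The utilization metric is a direct ratio; a minimal helper (names are illustrative) makes the definition explicit:

```python
def util_rate_30d(total_contracted: int, utilized_30d: int) -> float:
    """30-day utilization: share of newly contracted tutors with at
    least one assignment within 30 days of contracting, as a percent."""
    if total_contracted == 0:
        return 0.0  # no cohort -> no utilization signal
    return utilized_30d / total_contracted * 100
```

For example, High School Chemistry's Feb-Mar cohort (9 of 21 utilized) gives roughly 43%, and SAT's (18 of 29) roughly 62%, matching the findings below.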
Analysis Steps
Step 1: Demand Adjustment (3-Month Group Smoothing)
Problem: Original forecasts show volatile month-to-month demand (e.g., SAT ramping from 27 in April to 46 in October). Additionally, tutors recruited too early are likely to attrit before they are actually needed.
Solution: Smooth demand within fixed 3-month groups. If any month in a group exceeds the run-rate cap, the group's total is averaged across its three months. This keeps demand concentrated near peak while preventing spikes that exceed contracting capacity.
Algorithm:
Group 1: Aug + Sep + Oct
Group 2: May + Jun + Jul
Standalone: Apr
For each group:
IF any month in group > Run_Rate:
total = sum of group
each month = floor(total / 3)
middle month gets remainder
enforce manual adjustment floors
April: left unchanged if Group 2 needed correction.
If Group 2 did NOT need correction and Apr > Run_Rate,
cascade Apr excess to May (up to May's available room).
Benefits:
- Keeps demand concentrated near peak months (no leaking back to April)
- Reduces tutor attrition risk (smoothing stays within 90-day windows)
- Total demand is preserved across each group
- Manual forecast adjustments are always respected as the floor
- Months that don't need correction are left untouched
Example - SAT (Run Rate: 11/mo):
| Month | Original | Smoothed | Change |
|---|---|---|---|
| Apr | 27 | 27 | — |
| May | 26 | 30 | +4 |
| Jun | 30 | 32 | +2 |
| Jul | 36 | 30 | -6 |
| Aug | 42 | 44 | +2 |
| Sep | 45 | 45 | — |
| Oct | 46 | 44 | -2 |
| Total | 252 | 252 | Demand preserved within groups |
Group 1 (Aug/Sep/Oct): 42+45+46 = 133 → 44, 45, 44. Group 2 (May/Jun/Jul): 26+30+36 = 92 → 30, 32, 30. April left as-is since Group 2 had corrections.
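The per-group smoothing step can be sketched as follows (function name and the floors parameter are illustrative; the actual pipeline may differ):

```python
def smooth_group(months, run_rate, floors=(0, 0, 0)):
    """Smooth one fixed 3-month group: if any month exceeds the
    run-rate cap, spread the group total evenly (floor division),
    give the remainder to the middle month, and enforce per-month
    floors from manual adjustments. Groups that never exceed the
    cap pass through unchanged."""
    if not any(m > run_rate for m in months):
        return list(months)
    total = sum(months)
    base = total // 3
    smoothed = [base, base + total % 3, base]
    return [max(s, f) for s, f in zip(smoothed, floors)]
```

Running the SAT groups through this reproduces the table: Aug/Sep/Oct (total 133) becomes 44/45/44 and May/Jun/Jul (total 92) becomes 30/32/30.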
Step 2: Capacity Gap Analysis
Metrics Calculated:
Gap = Adjusted_Target − Run_Rate
Gap_Pct = (Gap / Run_Rate) × 100
Max_Capacity = Run_Rate × 1.2 (20% stretch assumed achievable with focused effort)
Threshold Logic:
- If Adjusted_Target > Max_Capacity → flag as "Needs External Levers"
- If Adjusted_Target ≤ Max_Capacity → "On Track" (achievable with organic pipeline)
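The Step 2 metrics combine into a single check; this sketch (names are illustrative) applies the formulas and threshold above:

```python
def gap_metrics(adjusted_target: float, run_rate: float) -> dict:
    """Step 2 capacity-gap metrics; 1.2 is the assumed 20% stretch
    factor on organic contracting capacity."""
    gap = adjusted_target - run_rate
    max_capacity = run_rate * 1.2
    return {
        "gap": gap,
        "gap_pct": gap / run_rate * 100,
        "max_capacity": max_capacity,
        "flag": ("Needs External Levers"
                 if adjusted_target > max_capacity else "On Track"),
    }
```

SAT (target 36, run rate 11) yields a gap of 25 (~227%) and the "Needs External Levers" flag, consistent with Finding 2 below.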
Step 3: Utilization-Based Problem Classification
Innovation: This step separates supply problems from utilization problems using actual tutor usage data.
Classification Algorithm:
IF Util_Rate < 50% AND Gap > 20%:
→ POSSIBLE PLACEMENT ISSUE
→ Recommendation: Investigate placement/assignment before recruiting
→ Note: New tutors contracted but not assigned — may be algorithm or forecast issue
ELSE IF Util_Rate >= 50% AND Gap > 20%:
→ TRUE SUPPLY PROBLEM
→ Recommendation: External recruitment levers
ELSE:
→ ON TRACK
→ Recommendation: No action needed
Rationale for 50% Threshold:
- Utilization <50% within 30 days indicates systemic barriers (tutors should get first assignment quickly)
- LSAT investigation showed 22% new tutor utilization (well below 50%) correlated with algorithmic issues
- Industry standard for tutor marketplaces: 60-80% utilization within 30 days
- Below 50% signals algorithm prioritizing wrong tutors or capacity routing failure
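The classification algorithm above reduces to a short function (a sketch with illustrative names, using the stated 50% utilization and 20% gap thresholds):

```python
def classify(util_rate_pct: float, gap_pct: float) -> str:
    """Step 3 problem classification: a gap over 20% is a real
    shortfall; low utilization reframes it as a placement problem
    rather than a supply problem."""
    if gap_pct <= 20:
        return "ON TRACK"
    if util_rate_pct < 50:
        return "POSSIBLE PLACEMENT ISSUE"
    return "TRUE SUPPLY PROBLEM"
```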
Key Findings Explained
Finding 1: 53% of Flagged Subjects Are Utilization Problems
What we found: Of 45 subjects flagged as "needs external levers" based on gap analysis, 24 (53%) have utilization rates below 50%.
What this means: More than half of perceived "supply shortages" are actually algorithm failures. We're contracting tutors but they're not getting work.
Example - High School Chemistry:
- Feb-Mar contracted: 21 tutors
- Utilized within 30 days: 9 tutors (43%)
- Forecast says need: 38 tutors/month
- Analysis: If we contract 38/month at 43% utilization, only 16 will get used. We're not short on tutors - we're short on algorithm efficiency.
- Correct solution: Fix utilization to 75% → current 14/month run rate yields 10-11 utilized (vs. current 6)
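The arithmetic behind this example is effective supply: the run rate discounted by the utilization rate. A minimal sketch (names are illustrative):

```python
def effective_supply(run_rate: float, util_rate_pct: float) -> float:
    """Utilized tutors per month: contracting run rate discounted by
    the 30-day utilization rate."""
    return run_rate * util_rate_pct / 100
```

At Chemistry's 14/month run rate, 43% utilization yields ~6 usable tutors per month, while fixing utilization to 75% yields 10.5, without contracting anyone new.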
Finding 2: Only 10 Subjects Are True Supply Shortages
What we found: Only 10 subjects have both (a) good utilization (>50%) AND (b) insufficient run rate capacity.
What this means: External recruitment spend should be targeted to these 10 only. Other subjects need algorithm fixes, not more tutors.
Example - SAT:
- Feb-Mar contracted: 29 tutors
- Utilized within 30 days: 18 tutors (62% - GOOD!)
- Run rate: 11/month, Target: 36/month → Gap: 227%
- Analysis: Tutors ARE getting work when contracted (62% is healthy). Problem is we can't contract enough through organic pipeline.
- Correct solution: External recruitment levers (paid spend, InMail, opt-in)
Limitations & Caveats
Run Rate Volatility (High Concern)
Month-to-month capacity swings significantly:
| Subject | Dec | Jan | Feb | Mar | Mean | Std Dev | CV |
|---|---|---|---|---|---|---|---|
| SAT | 8 | 7 | 14 | 15 | 11.0 | 3.5 | 32% |
| High School Chemistry | 24 | 11 | 7 | 14 | 14.0 | 6.3 | 45% |
| AP Pre-Calculus | 6 | 3 | 3 | 9 | 5.2 | 2.5 | 47% |
Implication: Run rate is NOT a stable capacity baseline - it's a volatile trailing indicator. Using point estimates (single average) without confidence bands masks significant uncertainty.
Recommended Enhancement: Add confidence intervals (e.g., Run_Rate ± 1 Std Dev) to capacity estimates. Flag subjects with CV >40% as "high variance - monitor closely."
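The volatility statistics in the table can be reproduced with the standard library; note the table uses the population standard deviation (function name is illustrative):

```python
from statistics import mean, pstdev

def run_rate_stats(monthly):
    """Mean, population std dev, and coefficient of variation for a
    subject's monthly contracting counts; flags CV > 40% as high
    variance, per the recommended enhancement."""
    mu = mean(monthly)
    sd = pstdev(monthly)
    cv = sd / mu * 100
    return {"mean": mu, "std": sd, "cv_pct": cv, "high_variance": cv > 40}
```

SAT's Dec-Mar counts (8, 7, 14, 15) give mean 11.0, std 3.5, CV 32%; High School Chemistry (24, 11, 7, 14) gives mean 14.0, std 6.3, CV 45% and trips the high-variance flag.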
Forecast Accuracy Unknown
Pierre's V1.4 model has limited historical validation:
- 4-month backtest (Nov 2025 - Feb 2026) showed good MAE vs. manual forecasts
- But only 4 months of actuals available (small sample)
- Model assumes stable demand patterns (back-to-school surge may differ from historical)
- Existing capacity feature pending (may materially change forecasts)
Mitigation: Track actual Apr 2026 contracting vs. forecast as early validation. Adjust methodology if significant over/under forecasting detected.
Acceptance Rate Structural Constraint
Late 2025 contracting process change reduced acceptance rates from ~50% to ~15%. This is a 70% decline, requiring 3.3x more applications to achieve same contracted volume.
Impact: To contract 36 SAT tutors/month:
- At 50% acceptance: need 72 applications
- At 15% acceptance: need 240 applications (3.3x more!)
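The funnel arithmetic above is a simple ceiling division (function name is illustrative):

```python
import math

def applications_needed(target_contracts: int, acceptance_rate: float) -> int:
    """Applications required to reach a contracting target at a given
    funnel acceptance rate, rounded up to whole applications."""
    return math.ceil(target_contracts / acceptance_rate)
```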
Implication: Even with paid spend increasing application volume, 15% conversion constrains scalability. Funnel optimization may be higher ROI than recruitment spend.
Recommended Parallel Workstream: Track acceptance rate monthly (Dec-Mar 2026 trend analysis). If declining further, TVA process fixes become higher priority than forecast analysis.
Validation Against Prior Work
LSAT Utilization Investigation (Completed Week 11, 2026)
Findings validated by current analysis:
- LSAT showed 22% new tutor utilization vs. 66% veteran utilization
- Current analysis: LSAT utilization 33% (Feb-Mar 2026) - consistent with investigation
- Root causes: Four-stage exclusion cascade, capacity misdirection, ranking issues
- Conclusion: LSAT is utilization-constrained, not supply-constrained
Current analysis extends this finding: 24 subjects exhibit same pattern (low utilization + forecast gap). Recommendation: Apply LSAT investigation methodology to Chemistry and College Accounting as highest-volume representatives.
Stockout Analysis (Week 8, 2026)
Found 69% of stockouts were process-related, not supply-related. This aligns with utilization problem hypothesis - system barriers preventing tutor-student matches even when supply exists.
V1.4 Forecast Accuracy Backtest
Model performed well against manual forecasts (31% MAE improvement), providing confidence in forecast quality. However, historical accuracy doesn't guarantee future accuracy - especially for back-to-school surge pattern.
Assumptions & Constraints
Key Assumptions:
- 120% capacity threshold: Assumes recruiting can stretch to 20% above average with focused effort but not beyond
- Even distribution optimal: Assumes consistent monthly recruitment is operationally superior to reactive ramping
- 30-day utilization standard: Tutors should receive first assignment within 30 days (longer indicates system issue)
- Dec-Mar baseline stable: Assumes last 4 months represent go-forward capacity (may not account for seasonal patterns)
- Utilization patterns persistent: Assumes Feb-Mar utilization rates representative of ongoing patterns
Known Constraints:
- Cannot model seasonality: Dec-Mar may not represent summer/fall recruiting capacity (different from spring)
- Cannot predict SOP changes: Future policy shifts could alter run rates significantly
- Cannot predict acceptance rate: If TVA acceptance continues declining, capacity will worsen
- Cannot predict demand shifts: Back-to-school patterns may differ from historical (exam changes, curriculum updates, etc.)
Confidence Levels
| Component | Confidence | Rationale |
|---|---|---|
| Utilization Classification | HIGH | Validated against LSAT investigation. 50% threshold conservative. Data quality good (Feb-Mar 2026). |
| Adjustment Methodology | MEDIUM | Mathematically sound. Assumes even distribution optimal (likely true but unvalidated). |
| Run Rate Stability | LOW | High volatility (CV 32-51%). Only 4 months clean data. Declining trend concerning. |
| Forecast Accuracy | MEDIUM | Model performed well in backtest but limited history. BTS surge pattern untested. |
Recommended Use & Interpretation
Primary Use Cases:
- WBR Preparation: Weekly review of subjects requiring intervention
- Resource Allocation: Prioritize algorithmic fixes vs. recruitment spend
- Supply Team Planning: Consistent monthly targets for recruiting
- Trend Monitoring: Track run rate and utilization changes over time
Interpretation Guidelines:
- Utilization Problems: Do NOT recruit more tutors. Investigate algorithm barriers first. Expected ROI of fixes: 50-100% improvement in utilization.
- True Supply Problems: External levers required. Prioritize by gap size (>200% = critical).
- High Variance Subjects: Treat capacity estimates with caution. May need wider confidence bands.
- Declining Run Rates: If subject run rate declining month-over-month, investigate root cause before assuming forecast achievable.
Update Cadence & Maintenance
Automated Weekly Updates:
Every Monday 6:00 AM UTC (1:00 AM CST):
- Pull latest Looker data (run rates + utilization)
- Read current forecast (monitoring_table.xlsx)
- Re-run analysis with fresh data
- Update dashboard automatically
Manual Updates (As Needed):
- When Pierre updates forecast: Upload new monitoring_table.xlsx to data/ folder
- When major SOP changes: Re-baseline run rates with new clean period
- When utilization patterns shift: Investigate root causes, update classification thresholds if needed
Quarterly Review:
- Validate forecast accuracy against actuals (Apr-Jun 2026 will provide first real test)
- Assess whether adjusted targets improved operational outcomes
- Re-evaluate utilization threshold (50%) if patterns change
- Update run rate baseline as more clean months accumulate
Related Analyses & Context
- LSAT Tutor Utilization Investigation (Week 11, 2026): Root cause analysis identifying algorithmic barriers. Serves as validation for utilization classification methodology.
- Core Stockouts SOP Deployment (MOPS-NAT-001, Week 10, 2026): Hashtag-based reason codes separating process vs. supply stockouts. Found 69% process-related.
- V1.4 Forecast Accuracy Analysis (Week 12, 2026): 4-month backtest showing ~31% MAE improvement over manual forecasts. Buffered version ~46% improvement.
- Tutor Attrition Survey (Week 9, 2026): 72% of attrition preventable (niche subjects, communication gaps). Informed understanding of supply constraints.
- Contract Runway Dashboard (Q4 2025): Historical application volume analysis, Rx spend multiplier (~1.20x recommended).
Ownership & Contact
Analysis & Dashboard: Leigh Robbins (Tutor Forecasting Team) — Analysis design, methodology development, utilization-based classification framework, dashboard build and automation
Team Contributors:
- Darren Martinelli (Tutor Forecasting) - WBR presentation, stakeholder alignment
- Pierre (Model Development) - V1.4 forecast model, monitoring table
- Kevin (Manager) - Strategic direction, resource allocation
Questions, Issues, or Feedback:
- File GitHub issue on this repo
- Direct message Leigh Robbins