SPEC-001 · Analytics Platform · DEPLOYED

Customer Support Analytics Platform

Multi-portal classifieds platform · 5M+ monthly users
IMPACT: First-ever real-time KPIs · Manual Excel process eliminated
→ Background

Support quality at the company was essentially invisible. The team was tracking key metrics (user frustration rates, first contact resolution, handle times) but doing it manually in Excel spreadsheets. That meant data arrived days late, required a dedicated person to compile it, and was riddled with human error. By the time anyone saw a trend, it was already too late to act on it. The support leads knew things weren't great, but they couldn't prove it or quantify it in any meaningful way. When they escalated problems to leadership, they were doing it on gut feel rather than data. And because the data was so painful to produce, they only looked at it monthly. For a platform with millions of users, that's effectively flying blind.

→ What I did

I started by sitting with the support leads for a week. Not to gather requirements in the traditional sense, but to understand what decisions they were actually trying to make. What would they do differently if they had real-time data? What questions were they always trying to answer but couldn't? From those conversations, two metrics emerged as the most decision-relevant: user frustration signals (tickets reopened, escalation rates, negative CSAT scores within 48h) and first contact resolution. Everything else was vanity. I then worked with the data team to identify where these signals already existed in our systems. They were there, just scattered across three different tools with no aggregation layer. We built a real-time pipeline that unified them, designed a dashboard that surfaced the metrics support leads actually needed, and replaced the Excel process entirely. No new data collection needed. Just connecting what we already had.
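For illustration, here is a minimal sketch of what that aggregation layer can look like. The table and column names are hypothetical stand-ins, not the real schema, and it assumes the signals from the three tools have already been landed as one tickets table and one CSAT table:

import pandas as pd

def build_daily_kpis(tickets: pd.DataFrame, csat: pd.DataFrame) -> pd.DataFrame:
    """Daily first contact resolution plus a raw frustration signal.

    tickets: one row per ticket -- created_at (datetime),
        resolved_on_first_contact (bool), reopen_count (int), escalated (bool)
    csat: one row per survey response within the 48h window --
        responded_at (datetime), score (1-5)
    """
    tickets = tickets.assign(day=tickets["created_at"].dt.date)
    csat = csat.assign(day=csat["responded_at"].dt.date)

    daily = tickets.groupby("day").agg(
        fcr_rate=("resolved_on_first_contact", "mean"),
        reopen_rate=("reopen_count", lambda s: (s > 0).mean()),
        escalation_rate=("escalated", "mean"),
    )
    daily["negative_csat_rate"] = csat.groupby("day")["score"].apply(
        lambda s: (s <= 2).mean()
    )
    # First-pass frustration score: an equal-weighted blend of the three
    # signals. (This is the version that later turned out to be too noisy.)
    daily["frustration_score"] = daily[
        ["reopen_rate", "escalation_rate", "negative_csat_rate"]
    ].mean(axis=1)
    return daily.reset_index()

The output was a plain daily table, which made it straightforward to chart in Apache Superset without any custom visualization work.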

→ The critical moment

The turning point was the first week the dashboard went live. Within three days, the support lead noticed a spike in reopened tickets for a specific listing category. In the old world, this would have shown up in the monthly report three weeks later. Instead, they flagged it to the product team the same day, identified a UX issue in the listing flow, and got it fixed before it compounded. That one incident changed how the support team thought about their role. They stopped being reactive reporters and started being proactive monitors. The dashboard became the first thing the team lead checked every morning.

→ What didn't work

The first version of the frustration metric was too noisy. We were capturing too many signals and weighting them equally, which meant the score fluctuated constantly and people stopped trusting it. We had to go back and work with support to define what "frustration" actually looked like in their context. Not every reopened ticket is a frustrated user, but three reopens within 24 hours almost always is. The recalibration took two weeks but made the metric actually actionable.
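In code terms, the recalibrated rule is a small window check over reopen events rather than a raw count of reopens. This is a sketch with illustrative column names, not the production logic:

import pandas as pd

def is_frustrated(reopen_times: pd.Series) -> bool:
    """True if any three reopens of a ticket fall within a 24-hour window."""
    times = reopen_times.sort_values()
    # diff(2) measures the gap between each reopen and the one two reopens
    # earlier; any gap of <= 24h means three reopens landed inside one day.
    return bool((times.diff(2) <= pd.Timedelta(hours=24)).any())

# Usage, given reopen_events with one row per reopen (ticket_id, reopened_at):
# frustrated = reopen_events.groupby("ticket_id")["reopened_at"].apply(is_frustrated)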

→ What I'd do differently

I'd involve the support leads in the metric design earlier, before we touched any code. I assumed I understood what "user frustration" meant from the data, but the support team had a much more nuanced definition that only emerged through iteration. Starting with a co-design workshop would have saved two weeks of recalibration and built stronger ownership from day one.

01
Use existing data instead of building new collection

The signals we needed already existed across three systems. Connecting them was a weeks-long project; building new data collection would have been months. Speed mattered. Support leads needed something they could trust now, not eventually.

02
Focus on two metrics only (frustration + FCR), ignore everything else

The instinct was to build a comprehensive dashboard. I pushed back. More metrics means more noise and less action. We identified the two questions support leads were always trying to answer and built the dashboard around those. Everything else got cut.

03
Daily check-in rhythm before full launch

Rather than a big launch, we ran a two-week period where the support lead used the dashboard daily and flagged anything that felt wrong. This caught the noise problem with the frustration metric before it went org-wide and saved us from a trust crisis.

First-ever real-time view of support quality
Manual Excel reporting process eliminated entirely
Metric review cycle: monthly to daily
First issue caught in real-time within 3 days of launch
Support team shifted from reactive to proactive
Data
Python · SQL · ETL pipelines
Analytics
Real-time dashboards · KPI design · Apache Superset
Process
Metric design · Stakeholder alignment · Phased rollout