Customer Support Analytics Platform
Support quality at the company was essentially invisible. The team was tracking key metrics (user frustration rates, first contact resolution, handle times) but doing it manually in Excel spreadsheets. That meant data arrived days late, required a dedicated person to compile, and was riddled with human error. By the time anyone saw a trend, it was already too late to act on it. The support leads knew things weren't great, but they couldn't prove it or quantify it in any meaningful way. When they escalated problems to leadership, they were doing it on gut feel rather than data. And because the data was so painful to produce, they only looked at it monthly. For a platform with millions of users, that's effectively flying blind.
I started by sitting with the support leads for a week. Not to gather requirements in the traditional sense, but to understand what decisions they were actually trying to make. What would they do differently if they had real-time data? What questions were they always trying to answer but couldn't? From those conversations, two metrics emerged as the most decision-relevant: user frustration signals (tickets reopened, escalation rates, negative CSAT scores within 48h) and first contact resolution. Everything else was vanity. I then worked with the data team to identify where these signals already existed in our systems. They were there, just scattered across three different tools with no aggregation layer. We built a real-time pipeline that unified them, designed a dashboard that surfaced the metrics support leads actually needed, and replaced the Excel process entirely. No new data collection needed. Just connecting what we already had.
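The aggregation layer itself was conceptually simple: a join on ticket id across the three tools. A minimal sketch of that unification step, with illustrative function and field names (the real systems and their schemas aren't shown here):

```python
def unify_signals(helpdesk_rows, csat_rows, escalation_rows):
    """Join signals scattered across three tools into one record per
    ticket -- no new data collection, just connecting what exists."""
    tickets = {}
    for row in helpdesk_rows:    # e.g. {"ticket_id": ..., "reopens": ...}
        tickets.setdefault(row["ticket_id"], {})["reopens"] = row["reopens"]
    for row in csat_rows:        # e.g. {"ticket_id": ..., "csat": ...}
        tickets.setdefault(row["ticket_id"], {})["csat"] = row["csat"]
    for row in escalation_rows:  # presence of a row means an escalation
        tickets.setdefault(row["ticket_id"], {})["escalated"] = True
    for record in tickets.values():
        record.setdefault("escalated", False)
    return tickets
```

In production this ran as a streaming pipeline rather than a batch join, but the shape of the work was the same: key everything by ticket, merge, and let the dashboard read from one place.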
The turning point was the first week the dashboard went live. Within three days, the support lead noticed a spike in reopened tickets for a specific listing category. In the old world, this would have shown up in the monthly report three weeks later. Instead, they flagged it to the product team the same day, identified a UX issue in the listing flow, and got it fixed before it compounded. That one incident changed how the support team thought about their role. They stopped being reactive reporters and started being proactive monitors. The dashboard became the first thing the team lead checked every morning.
The first version of the frustration metric was too noisy. We were capturing too many signals and weighting them equally, which meant the score fluctuated constantly and people stopped trusting it. We had to go back and work with support to define what "frustration" actually looked like in their context. Not every reopened ticket is a frustrated user, but three reopens within 24 hours almost always is. The recalibration took two weeks but made the metric actually actionable.
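The recalibrated rule replaced equal-weighted signal counting with the pattern support actually recognized. A sketch of that rule, assuming reopen events carry timestamps (the threshold and window below mirror the "three reopens within 24 hours" definition; everything else is illustrative):

```python
from datetime import datetime, timedelta

def is_frustrated(reopen_times: list[datetime],
                  threshold: int = 3,
                  window: timedelta = timedelta(hours=24)) -> bool:
    """One reopen is ambiguous; `threshold` reopens inside `window`
    almost always signals a genuinely frustrated user."""
    times = sorted(reopen_times)
    # Slide over each run of `threshold` consecutive reopens and check
    # whether the run fits inside the window.
    for i in range(len(times) - threshold + 1):
        if times[i + threshold - 1] - times[i] <= window:
            return True
    return False
```

The point of the change wasn't sophistication, it was trust: a binary rule the support team could state in one sentence fluctuated far less than a weighted score nobody could explain.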
I'd involve the support leads in the metric design earlier, before we touched any code. I assumed I understood what "user frustration" meant from the data, but the support team had a much more nuanced definition that only emerged through iteration. Starting with a co-design workshop would have saved two weeks of recalibration and built stronger ownership from day one.
The signals we needed already existed across three systems. Connecting them was a weeks-long project; building new data collection would have been months. Speed mattered. Support leads needed something they could trust now, not eventually.
The instinct was to build a comprehensive dashboard. I pushed back. More metrics mean more noise and less action. We identified the two questions support leads were always trying to answer and built the dashboard around those. Everything else got cut.
Rather than a big launch, we ran a two-week period where the support lead used the dashboard daily and flagged anything that felt wrong. This caught the noise problem with the frustration metric before it went org-wide and saved us from a trust crisis.