How to measure a scaled customer success program with 1000+ customers

Dan Ennis
Scale Team Manager, Customer Success at Monday.com
Grow your Customers

What you’ll learn:

  • How Monday.com’s scaled customer success team of 17 manages thousands of accounts while maintaining a world-class GRR
  • Marketing tactics for scaled customer success programs
  • How to replicate this play at your business!

This playbook is right for you if:

  • You need strong success measures for your scaled CS program 
  • You want a proven strategy to launch or rebuild a scaled program successfully

Skip to the bonus content: 

Do you know how to effectively measure your scaled customer success program? Download the free scaled CS program checklist and metric bank to get you on track.

The Problem

Newer technologies and changing economic needs have driven a rise in the launch of scaled programs, but most leaders are building as they go, without a clear picture of how to measure their success. Given that lack of clarity, companies risk investing in inefficient programs, neglecting their customers, and, worse, losing revenue.

Play Intro

Monday.com offers comprehensive project management features to help businesses plan, track, and execute their projects. In 2020, Dan Ennis, now the Scale Team Manager, Customer Success (US), helped build the scale team, which is now a global team of 17 members overseeing thousands of accounts. For Dan and his team, the directive for building out the scaled program was clear: be willing to break things and iterate quickly, but measure everything. Establishing the importance of measurement has allowed the Monday.com team to create a well-oiled scaled CS machine.

“The reason why measuring a scaled customer success program is challenging is that teams are measuring the success of their scaled programs the same way they measure their high-touch, traditional customer success motions. That’s because most Customer Success leaders come from a much more traditional background. The things that they use to measure success in traditional customer success, like relationship and sentiment, can’t be used to measure their scaled program.”
- Dan Ennis

The Results:

  • A team of 17 that enables success for thousands of customers globally
  • 5x increase in ARR managed with just a 1.5x increase in headcount
  • Sustained world-class GRR

How Monday.com Ran the Play

Building today’s scaled program at Monday.com required a shift away from the autonomy of delivering a white-glove experience. The north star metric for the team is still gross retention, but developing programs indexed on leading indicators allowed them to create consistent, repeatable plays that accelerate their customers’ ability to realize ROI from the product.

Monday.com used its own platform and Looker to operationalize its programs. Here’s the step-by-step breakdown of the Play:

Step 1: Test freely and measure everything

The team launched their digital customer journey model in Q1 2021. This included flags for when customers were veering off course and needed intervention.

The team was then free to test, iterate, and report on their findings. Their goal was to identify what could be measured and tied to successful outcomes. Documenting these outcomes allowed the team to spot commonalities across the customer base, and those commonalities ultimately became the leading indicators used to measure success long term.

Monday.com's high-level strategy for its scale program (“Q” = quarter)

Step 2: Get clear on the north star of your team and go upstream

Like every organization, Monday.com had to have clear discussions to determine true ownership of metrics. Because the Sales and Account Management teams owned expansion targets, the Customer Success team’s main focus was protecting the base. This meant the Scaled team also rallied behind Gross Revenue Retention (GRR):

GRR = (Starting MRR – Churn – Contraction) / Starting MRR
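To make the formula concrete, here is a minimal worked example with hypothetical numbers (they are not Monday.com figures):

```python
# Hypothetical worked example of the GRR formula above (not Monday.com's figures).
def gross_revenue_retention(starting_mrr: float, churn: float, contraction: float) -> float:
    """GRR = (Starting MRR - Churn - Contraction) / Starting MRR; expansion is excluded."""
    return (starting_mrr - churn - contraction) / starting_mrr

# $500k starting MRR, $15k lost to churned accounts, $10k lost to downgrades.
print(f"GRR: {gross_revenue_retention(500_000, 15_000, 10_000):.1%}")  # GRR: 95.0%
```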

However, to understand how to impact this lagging indicator, Dan and his team looked upstream to identify leading indicators that would move the needle.

Product utilization was one of the vital leading indicators the scaled team relied on. They wanted to measure how people’s product utilization behavior changed and identify where they were engaging within the product.

Step 3: Introduce marketing methodologies

While product utilization was a leading indicator for Dan and his team, they still needed to uncover more indicators to understand the performance of their initiatives. Dan noted that marketing metrics such as open rates, click-through rates, and conversion rates turned out to be critical success measures for their scaled program as well.

Email campaigns and in-product messaging were core communication channels with customers. By measuring engagement performance, they could fine-tune their programs and make adjustments across their thousands of customers centrally.
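As a purely illustrative sketch, these engagement rates can be derived from basic campaign counts. The counts below are made up, and note that some teams compute click-through rate against opens rather than deliveries, so it is worth fixing the denominator up front:

```python
# Illustrative campaign engagement metrics computed from hypothetical counts.
from dataclasses import dataclass

@dataclass
class CampaignStats:
    delivered: int
    opened: int
    clicked: int
    converted: int  # e.g., completed the CTA the email was driving toward

    @property
    def open_rate(self) -> float:
        return self.opened / self.delivered

    @property
    def click_through_rate(self) -> float:
        # Denominator choice is a team decision; deliveries are used here.
        return self.clicked / self.delivered

    @property
    def conversion_rate(self) -> float:
        return self.converted / self.delivered

campaign = CampaignStats(delivered=4_000, opened=1_400, clicked=320, converted=95)
print(f"Open rate: {campaign.open_rate:.1%}")                    # 35.0%
print(f"Click-through rate: {campaign.click_through_rate:.1%}")  # 8.0%
print(f"Conversion rate: {campaign.conversion_rate:.1%}")        # 2.4%
```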

Dan’s advice for dealing with lower-performing communications? “Don’t change the content just because they didn’t open the email.” You have to put on your “marketer’s hat” and ask whether the communication reached the customer in the right way. Rather than spending calories reworking content that never got consumed, test different ways to get content in front of your customers so you can get the feedback you need on the content itself.

Associating the right metrics with the right actions is key to having a well-performing program.

Step 4: Review and action on metrics in a regular cadence

To separate noise from signal, Dan’s team put a methodical review cadence in place to ensure they were making changes at the right time, based on the right indicators. Below is a rundown of data points they review, when, and how they use their learnings:  

Information delivery success through email performance

  • Engagement (open and click-through rates): Immediate and campaign-based. Learnings are implemented with new launches.
  • Intended outcome (typically a related product utilization metric): Monthly/quarterly on a rolling basis for long-term programs. Learnings may drive iterations on content and calls to action (CTAs).

Adoption success through product utilization 

  • Breadth (Monthly Active Users, or MAU) and Depth (specific feature metrics) of usage: Quarterly on a rolling basis. Learnings are turned into new asynchronous campaigns, additional topics for office hour sessions, additional feedback to Product, identification of accounts requiring human intervention, and more (see the sketch after this rundown).

Risk mitigation success through human intervention

  • “Just in time” customer success (metrics like number of accounts touched and time to resolution): Quarterly review. Learnings are translated into program adjustments that minimize the risk requiring human intervention, as well as additional training for the team.
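To make the adoption metrics above concrete, here is a minimal sketch of computing breadth (MAU) and depth (users of a specific feature) on a single account from a raw usage-event log. The event structure and feature names are assumptions for illustration, not Monday.com's actual schema:

```python
# Hypothetical usage-event log: (account_id, user_id, feature, event_date).
from datetime import date

events = [
    ("acct-1", "user-a", "boards",      date(2024, 5, 2)),
    ("acct-1", "user-b", "automations", date(2024, 5, 10)),
    ("acct-1", "user-a", "automations", date(2024, 5, 21)),
    ("acct-2", "user-c", "boards",      date(2024, 5, 7)),
]

def monthly_active_users(events, account_id, year, month):
    """Breadth: distinct users on the account with any activity in the month."""
    return len({user for acct, user, _, d in events
                if acct == account_id and (d.year, d.month) == (year, month)})

def feature_users(events, account_id, feature, year, month):
    """Depth: distinct users on the account who used a specific feature in the month."""
    return len({user for acct, user, feat, d in events
                if acct == account_id and feat == feature and (d.year, d.month) == (year, month)})

print(monthly_active_users(events, "acct-1", 2024, 5))          # 2
print(feature_users(events, "acct-1", "automations", 2024, 5))  # 2
```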

It’s important to note that Dan and his team look at all of these metrics in conjunction with one another. For example, they may review whether last month’s customer cohort acted on the intended CTA; understanding whether there is a correlation with changes in product usage, or with the human intervention required on that topic, helps determine the success of the scaled playbook.
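A rough sketch of that kind of conjoint review, assuming a hypothetical per-account table with a CTA flag and before/after usage figures (pandas is used here purely for convenience):

```python
# Hypothetical cohort review: did accounts that acted on the CTA change usage differently?
import pandas as pd

cohort = pd.DataFrame({
    "account_id":   ["a1", "a2", "a3", "a4", "a5", "a6"],
    "acted_on_cta": [True, True, False, False, True, False],
    "mau_before":   [12, 30, 18, 25, 9, 14],
    "mau_after":    [16, 36, 17, 24, 13, 15],
})
cohort["mau_change_pct"] = (cohort["mau_after"] - cohort["mau_before"]) / cohort["mau_before"]

# Compare the average usage change for CTA responders vs. non-responders.
print(cohort.groupby("acted_on_cta")["mau_change_pct"].mean())
```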

Step 5: It all rolls up to a revenue metric

To keep track of their impact on revenue metrics, the team uses Looker to analyze the correlation of their leading indicators to retained and expanded revenue on a quarterly basis. 
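Monday.com runs this analysis in Looker; as a rough stand-in, and assuming a hypothetical quarterly account-level table, the same question can be asked with a simple correlation:

```python
# Hypothetical quarterly snapshot: does a leading indicator track retained revenue?
import pandas as pd

accounts = pd.DataFrame({
    "account_id":       ["a1", "a2", "a3", "a4", "a5"],
    "mau_growth_pct":   [0.20, -0.05, 0.10, -0.15, 0.30],  # leading indicator
    "retained_mrr_pct": [1.00, 0.85, 1.00, 0.70, 1.00],    # lagging outcome
})

# A positive correlation suggests the indicator belongs in the program's dashboards.
print(accounts["mau_growth_pct"].corr(accounts["retained_mrr_pct"]))
```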

Also, looking at accounts that have either churned or downgraded allows the team to flag larger themes of risk within their customer base, which they can then turn into new proactive triggers moving forward. The leadership team also identifies these opportunities by reviewing accounts that were flagged for risk but received no follow-up intervention.

Monday.com’s agenda for their quarterly review

Impact of the Play

The effect of this Play has been threefold for Monday.com: 

  1. Sustainable Headcount: The team has managed to take on a growing book every year while optimizing workflows, so that a 5x growth in managed ARR required only a 1.5x growth in headcount!
  2. Consistent Revenue Target: Monday.com has scaled its team’s managed book without having to compromise on retention, maintaining a world-class GRR standard.
  3. Reliable Customer Experience: Consistent tracking and iteration allowed Monday.com’s scaled program to grow efficiently while improving its customers’ experience.

Run The Play Yourself

Now it’s your turn! Here’s how to start measuring the effectiveness of your scaled CS program, broken down into three sections:

Establish Your Team's North Star Metric

  1. Identify the key metrics that drive revenue growth for your business.
  2. Determine which metric can be owned by the revenue team, as other teams might already own other metrics.
  3. Define the metric as your team's north star metric. This will guide your team's activities.

Identify Leading Indicators

  1. Identify leading indicators that your revenue team can impact, and that will drive your north star metric.
  2. Analyze these indicators to determine which ones to focus on to achieve your north star metric.
  3. Establish a clear measurement plan to track and monitor these indicators (a simple sketch of such a plan follows this list).
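One lightweight way to make that measurement plan explicit is to capture it as structured data the whole team can review. The indicators, cadences, and owners below are placeholders, not prescriptions:

```python
# Illustrative measurement plan: each leading indicator, its review cadence, and its owner.
measurement_plan = [
    {"indicator": "email click-through rate", "cadence": "per campaign", "owner": "Scaled CS"},
    {"indicator": "monthly active users",     "cadence": "quarterly",    "owner": "Scaled CS"},
    {"indicator": "key feature adoption",     "cadence": "quarterly",    "owner": "Scaled CS"},
    {"indicator": "accounts flagged at-risk", "cadence": "quarterly",    "owner": "CS leadership"},
]

for item in measurement_plan:
    print(f"{item['indicator']}: reviewed {item['cadence']} (owner: {item['owner']})")
```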

Test and Iterate

  1. Establish a process to test and iterate on new approaches.
  2. Measure and document the results of each test and use these findings to refine your measurement plan and approach.
  3. Continuously iterate on your approach to improve performance and achieve your north star metric.

How Catalyst Can Help

To build a world-class scaled program, you need the right technology. Set up just-in-time actionable alerts, measure the health of your customers (scaled and high-touch), and automate emails while still making them feel personalized. Catalyst has all the solutions you need to level up your scaled program; check out how we can help!
