Managing New User Satisfaction On A Daily Basis

As we get to the end of 2016, I’m in many conversations about 2016 performance and 2017 budgets. While 2016 isn’t over yet, most SaaS companies know how things are going to end up within a few percentage points. As a result, their focus on 2017 is an extrapolation from how they have been doing in 2016, typically building on month over month activity.

Since there are plenty of variables, the conversations are generally quantitative. In the midst of one last week, I said “Why aren’t we talking about increasing conversions and lowering churn?” This was in response to a CFO who had modeled conversion rate and churn at a fixed percentage each month throughout the year.

The CEO responded by defending the CFO, whom he’d worked with on the model. The CEO said “we are going to model conservatively, but we think we have lots of room for upside.”

I’ve been in this particular type of conversation 5,371 times over the past 20 years. I’m still nice, but I’m no longer patient with it.

I said, “If we don’t have a plan for increasing conversions and lowering churn, by June when we are on or slightly off plan, we’ll be happy, will have forgotten this conversation, and will not be doing anything to execute on the upside. Let’s stop the budget discussion ten minutes before the end of our scheduled time and talk about ways we can increase conversions and decrease churn.”

If you are wondering why I’m repeating the phrase “increase conversions and decrease churn” I point you at the John Lilly New York Times Corner Office article On The Role of Simplicity and Messaging.

We talked for more than ten minutes on the topic of increasing conversions and decreasing churn. We spent most of it on increasing conversions and tabled the discussion of decreasing churn for the next budget call (which was to finalize the budget). Either way, the modeling for the year was to include both a modest month-over-month increase in conversion and a modest month-over-month decrease in churn. There was an acknowledgement that if the numbers in the model didn’t change, no one would focus on changing them.
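To make the compounding concrete, here is a minimal sketch in Python. The signup volume, base rates, and monthly improvement steps are made-up numbers for illustration, not figures from this company’s plan.

```python
# Hypothetical illustration: compare flat assumptions with a plan that includes
# a modest month-over-month improvement in conversion and churn. All numbers
# are made up for the example.

MONTHS = 12
SIGNUPS_PER_MONTH = 1000        # new trial signups each month (assumed constant)
BASE_CONVERSION = 0.10          # trial-to-paid conversion rate in month one
BASE_CHURN = 0.03               # monthly churn of the paid base in month one

def year_end_customers(conv_gain_per_month=0.0, churn_cut_per_month=0.0):
    customers = 0.0
    for month in range(MONTHS):
        conversion = BASE_CONVERSION + conv_gain_per_month * month
        churn = max(BASE_CHURN - churn_cut_per_month * month, 0.0)
        customers = customers * (1 - churn) + SIGNUPS_PER_MONTH * conversion
    return customers

print("Flat assumptions:    ", round(year_end_customers()))
print("Modest monthly gains:", round(year_end_customers(conv_gain_per_month=0.002,
                                                        churn_cut_per_month=0.001)))
```

Even small monthly steps compound into a meaningfully different year-end number, which is exactly why they belong in the model rather than in a vague note about “upside.”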

We came up with a very simple operant conditioning loop framework. We used a binary measurement for each new user – they are either healthy or unhealthy.

On day one, every user is healthy since they just signed up.

On day two, there is a specific attribute measured for each member of the cohort. In this company’s case, let’s call it “create a record in the system.” Any user that creates a record is healthy and goes to day three. Any user that doesn’t is unhealthy and gets an operant conditioning action, which in this case is an email with instructions on how to create a record. The user stays at the day two level, but they are now in category b. The next day, if they still haven’t created a record, they get a phone call from customer care asking if they’d like some help creating a record.

I’m keeping the example – “create a record” – super simple so you can follow along. But until a user creates a record, they stay at day two. There are also category c, category d, and category e options. If by category e they still haven’t created a record, they drop out of the funnel.

On day three, another attribute is measured. If a user has achieved the attribute, they go to day four. If they don’t, they get the operant conditioning action, stay at day three, and are now in category b. The loop continues.
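Here is a minimal sketch of what that daily loop could look like in code. The day-by-day attributes, the category escalation, and the action function are hypothetical stand-ins – a real system would check your product database and trigger whatever email or customer care tooling you actually use.

```python
from dataclasses import dataclass

# Hypothetical attribute checks for each day of the onboarding process.
# In a real system these would query the product database.
DAY_ATTRIBUTES = {
    2: lambda user: user.created_record,
    3: lambda user: user.invited_teammate,   # assumed day-three attribute
}

# Escalating operant conditioning categories: b, c, d, e. A user who exhausts
# category e without achieving the day's attribute drops out of the funnel.
CATEGORIES = ["b", "c", "d", "e"]

@dataclass
class User:
    email: str
    created_record: bool = False
    invited_teammate: bool = False
    day: int = 1            # day of the onboarding process (not the calendar day)
    category: str = "a"     # "a" = healthy; b through e = escalating interventions
    dropped: bool = False

def send_action(user, category):
    # Placeholder: category b might be an instructional email, category c a
    # phone call from customer care, and so on.
    print(f"{user.email}: operant conditioning action for category {category}")

def daily_check(user):
    """Run once per calendar day for every user in the cohort."""
    if user.dropped or user.day not in DAY_ATTRIBUTES:
        return
    if DAY_ATTRIBUTES[user.day](user):
        user.day += 1       # healthy: advance to the next day of the process
        user.category = "a"
    else:                   # unhealthy: stay on this day, escalate the action
        next_index = 0 if user.category == "a" else CATEGORIES.index(user.category) + 1
        if next_index >= len(CATEGORIES):
            user.dropped = True   # exhausted category e
        else:
            user.category = CATEGORIES[next_index]
            send_action(user, user.category)

u = User(email="new.user@example.com", day=2)
daily_check(u)            # no record yet -> category b email
u.created_record = True
daily_check(u)            # record created -> healthy, advances to day three
```

The important design choice is that a user never advances on the calendar alone – they advance only when the day’s attribute is achieved, and every failed check escalates the intervention.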

The free trial period for this company is 14 days long. There are several operant conditioning actions that can extend it by a week or two weeks. For example, if someone is on their 13th day of the trial but is only on the 7th day of the process, they automatically get another week on the trial.
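The extension rule itself is simple to express. A small sketch, with the thresholds assumed from the example above rather than taken from the company’s actual configuration:

```python
TRIAL_LENGTH_DAYS = 14

def trial_length_for(trial_day, process_day):
    """Hypothetical extension rule: if the trial is nearly over but the user is
    well behind in the onboarding process, grant another week (or two)."""
    days_behind = trial_day - process_day
    if trial_day >= TRIAL_LENGTH_DAYS - 1 and days_behind >= 6:
        return TRIAL_LENGTH_DAYS + (14 if days_behind >= 9 else 7)
    return TRIAL_LENGTH_DAYS

# Someone on day 13 of the trial but only day 7 of the process gets another week.
print(trial_length_for(trial_day=13, process_day=7))   # -> 21
```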

Recognize that these are not drip campaigns or email triggers. Instead, it’s a very deliberate operant conditioning process. And, as part of any system that involves operant conditioning, there is a significant set of measures by cohort and across the system so the operant conditioning elements can be A/B tested and modified as we get more data.
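Even a crude tally of outcomes by cohort and action variant is enough to start A/B testing the operant conditioning elements. A minimal sketch, with hypothetical event fields:

```python
from collections import defaultdict

# Each event records the user's signup cohort, which variant of an operant
# conditioning action they received, and whether they eventually converted.
# Field names and values are hypothetical.
events = [
    {"cohort": "2017-01", "variant": "email_a", "converted": True},
    {"cohort": "2017-01", "variant": "email_b", "converted": False},
    {"cohort": "2017-01", "variant": "email_a", "converted": False},
    {"cohort": "2017-02", "variant": "email_b", "converted": True},
]

tallies = defaultdict(lambda: {"n": 0, "converted": 0})
for event in events:
    key = (event["cohort"], event["variant"])
    tallies[key]["n"] += 1
    tallies[key]["converted"] += event["converted"]

for (cohort, variant), t in sorted(tallies.items()):
    rate = t["converted"] / t["n"]
    print(f"{cohort} {variant}: {t['converted']}/{t['n']} converted ({rate:.0%})")
```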

Now, this might seem complicated on the surface, but it turns out to be more complicated to explain than it is to work out on a whiteboard. Putting the system in place varies based on the underlying tech you are using, but it’s not that difficult if you approach it with a clear framework.

The goal is deep and immediate engagement by new users. So many companies talk about increasing the number of prospects at the top of the funnel, but they spend remarkably little time making sure actions are taken – on a daily basis – to convert these prospects into paid users.

We have many rapidly growing SaaS companies in our portfolio. Over the past decade, I’ve observed too many companies see their growth rates decelerate, or even stall, because they didn’t focus enough on increasing conversion and decreasing churn, which henceforth I will refer to as ICDC.

2017 is the year for ICDC in my world.



  • Interesting. I guess my question would be this. It’s certainly important to focus on getting a process in place that has simple, useful metrics that managers (and investors) can track over time. However, what’s more important is the why. Whatever the conversion rate, why is it where it is? What is the essence of the product that is causing the rate to not be higher?

    • Yup – the why is a critical part of things.

  • If you can’t measure it, you can’t manage it. 🙂

  • Damon DeVito

    What adjustments do you make when the sales process is recurring revenue, but higher touch and relies more on humans at each interval? Assume, for the purposes of the question, that the human, high-touch part cannot be automated. As you can imagine, it is low volume, high $ per sale.

    • It’s basically the same process, just with high-touch interactions instead of automated ones.

  • The biggest takeaway is that you discuss the numbers to get to the issues. Not the other way around.

    Many times people want to put up a big number and expect that to push the issues: NO!

    Push the issues and then understand the numbers.

  • nathanlatka1

    Brad, are you able to share the tools this team is using to track if “an account is created” (or some action is taken) and what tools they’ll use to notify/help users meet those actions?

    I’m interested if they are going to build this sort of thing manually or rely on services like Intercom and SendGrid.

    Thanks

    • SendGrid is a primary one in the mix in this particular example.

  • Niall McCarthy

    Hi Brad, I feel ICDC could be applied to consumer-facing products/services. Do you agree? Have you seen this in your experience? Thanks

    • Yup! The metrics / dynamics are different but the concept holds.