Brad Feld


The Industry Analyst Evaluation Game

Jul 12, 2018
Category: Management

Over the past 25 years, I’ve invested in many startups that sell products to large enterprises. Many of these companies end up either creating or helping to create a new category. As the startups (or the category) become visible, they inevitably attract the attention of industry analysts, who write reports on the categories and the startups as part of their business.

Engaging with analysts can require significant investments of time, effort, and capital on the part of the startup. The choice is a complicated one: startups are often challenging the status quo, and industry analysts, while well-intentioned, don’t necessarily have a full grasp of the underlying industry changes taking place until well past the point that those changes, and the resulting trends, become obvious.

One of our portfolio companies recently engaged with an industry analyst for the first time, with a very disappointing outcome. In this case, the company has created and is leading a new category. As a result of growing their customer base quickly, they were invited to participate in an analyst evaluation. Having invested little in a relationship with this specific industry analyst, the company was hesitant, but the study was directly relevant and the industry analyst conducting the evaluation was prestigious, so the company decided to participate.

Unfortunately, it quickly became clear that the analyst conducting the review simply didn’t understand the problem being solved or why this company’s solution was so disruptive to the other vendors. The outcome was a deeply flawed report. In retrospect, the company would have been better off if they had never gotten involved.

While it’s easy to say “oops” and move on, this company will now have to deal with this report for a while in competitive situations. Rather than be pissed off about it, our feedback was to use this as a learning moment in the development of the company, figure out why this happened, and determine what could be done differently in the future.

Several of the issues were exogenous to the company, but one big one was under the startup’s control. And, in all cases, the startup should have been much more forceful about their perspective on each issue. The specific issues follow:

The terminology was loosely defined by the analyst. Big shifts in technology are often interpreted at first as evolutionary, not revolutionary. It was notable that several of the “leading” companies in the report introduced their products over a decade ago, well before the category being addressed was even invented. As a result, the younger companies approaching the problem in a completely new way were ranked poorly because the analyst missed the real value to the customer.

The analyst didn’t behave like a customer. In this product category, most customers perform an in-depth analysis of vendor capabilities, through a thorough review based on their own buying criteria, before deciding on a solution. This analyst didn’t feel the study warranted that deep a look and relied on vendor demos instead. This eliminated the opportunity for the analyst to understand the customer’s perspective and to compare and contrast the different solutions being evaluated. All decisions and scoring were based on vendor claims (also known as “marketing”), while the operational realities of the customer, and how the various products addressed them, were ignored.

The analyst went wide instead of deep. The magic of this company’s product and the new category they have helped pioneer is a result of focusing on a very specific, yet critically important, aspect of a broader problem. The analyst either didn’t understand this or didn’t focus on it and included a wide range of product capabilities in the evaluation, many of them irrelevant to the problem being addressed. As a result, the study favored broad tools (mainly from very large, established technology vendors) that covered more surface area but had shallower capabilities, especially in the new product category being addressed.

The company failed to fully engage the analyst. Since the company didn’t have a fee-based relationship with the industry analyst firm, there was no long-term relationship to draw on. To young companies, paying analyst fees can feel like extortion, but it’s an essential part of engaging with analysts and helping them better understand your product and how it’s different, especially when you are leading the creation of a new category. In this case (as in many others), the established vendors of broad products had spent years shaping analyst opinions. Even though those broad products didn’t compete effectively in the new category, their relationship with the analyst, who in this case relied on marketing information rather than real product engagement, won the day.

If you sell a product to large enterprises, neglect analyst relations at your peril. I generally categorize this activity in the same bucket as PR, even though they are different functions and often driven by different leaders in the company. Don’t assume that the industry analysts are all-knowing. Instead, start early and feed them regularly or risk having large, established companies win at this game.