When I first joined Terrene, I was given responsibility for internal organization. A key part of that was setting the company's Key Performance Indicators (KPIs), particularly on the business side. Setting KPIs was something I struggled with for a long time, and it's something I am still actively trying to improve. One problem I found was that so many of the KPIs I read about were designed for companies that were either already generating sales or had a very different focus (e.g. monthly active users for an app). I will share some of the lessons I learned and what we are trying to do now.

KPIs for Pre-Revenue Ventures

Lesson 1: Don't get caught up in the numbers

I love Excel, I have worksheets for everything, and I really like seeing numbers, so, when I set out to track KPIs, I wanted everything to have numbers that were easy to track so I could monitor change over time and make some nice graphs. I still think this is a good mindset, especially as you grow. However, in the early days, everything changes really quickly and, honestly, what matters to you now probably won't matter in six months anyway. Don't be afraid to track more qualitative KPIs if they matter now.

Lesson 2: Don't focus on marketing data

Because of my love for numbers, marketing analytics such as website views and Facebook likes were some of the first things I recorded. They are very tempting because you can see the change every week, they're easily trackable, and, if people are seeing your product, that's good, right? Well, for a B2B company that was still developing, like Terrene, they're pretty useless. Chances are, in the early days, everyone who likes your Facebook page is a friend of yours or of your co-founders, and all of your website views are from people trying to sell you consulting services. And if that's not the case and you have an actual web presence, you need to finish that product ASAP, because growing interest in a product you haven't built isn't very useful.

Since marketing data isn't useful, you have to decide what is useful for your company. For Terrene, when we started properly tracking KPIs, we were at the stage where we had a working prototype, so we started tracking the number of demo requests we got each week, the number of use cases we came up with, the number of new leads each week, and the number of key features we implemented. After a few weeks of tracking these KPIs, we realized they weren't working because they didn't directly correlate to how the company was doing and they were easy to inflate.

Lesson 3: Choose KPIs that directly correlate with success

This is a lesson we learned which seems pretty obvious in hindsight, but some KPIs that seem like they matter really don't once you stop and think about it. When we started tracking the number of use cases, our thought process was something along these lines: When we are talking to potential clients, we need to know how we can help them. Therefore, coming up with new use cases every week will make us better at selling to potential clients. Also, since we are building a product that is industry agnostic, coming up with use cases in different industries helps us determine a total market size.

This is not a good KPI, because, in reality, it doesn't matter if you can come up with one use case or a thousand. If you're not talking to anyone who the problem affects, you're not learning how to build the solution that they need. A much better way of tracking success in this situation would be to create a set of interview questions or a survey for potential clients. That way, you're finding out about real problems and you're meeting potential clients who could use your solution in the future.

Lesson 4: Set rules about your KPIs that make it hard to fake numbers

In the previous example, I mentioned creating a set of interview questions or a survey. In my mind, this is very important. Not because it's better to have a script when interviewing a user--in fact, I think you learn more when you don't--but because, if you don't have definable criteria for your KPI, it can sometimes be tempting to inflate your numbers to reach your goal.

We were in Techstars when we used the use case KPI, and in Techstars you present your weekly KPIs in front of your cohort every week. Looking back, I realize we did inflate our numbers occasionally. We weren't intentionally trying to misrepresent ourselves, but when we recorded our KPIs each week, there would be a small discussion between Francois and me that went something like this:

Francois: What use cases did we come up with this week?
Me: We talked about [list of 3 use cases].
Francois: Didn't we also talk about [use case 4]?
Me: We did. Oh, and we also talked about [use case 5].

Then we would tally five use cases. The problem is that use cases 4 and 5 were small things we might have mentioned once in passing. However, because we had no definition of what constituted a use case, and 4 and 5 were technically ways of using machine learning, we still counted them. Your criteria don't have to be as rigorous as a full set of interview questions or a survey, but you should at least set a minimum acceptable value for what you will tally to your KPI.

KPIs We Decided On

After many iterations, these are a few KPIs we used. A word of warning: we did not use these for long, so there is a chance they aren't much better than our previous KPIs and we just never realized it. All those iterations took time, and, shortly after choosing these, we changed our KPIs again as we entered sales.

The best KPI we tracked, and the only one we kept from Techstars, was our weekly rock and monthly boulder. These were not quantifiable KPIs but a set of weekly and monthly targets we set as a team and whose success rate we then tracked. This should not be a to-do list; it should be one to two goals each week or month. We usually set one for tech and one for business. The obvious problem is setting goals that are too easy or too hard, but, with some practice, that shouldn't be an issue.

Another helpful KPI we used was the percentage of product completion. One day, we got into a room and wrote down everything we needed to have to complete V1 of our product. Then, every ~2 weeks, we would do an update and see how close we were to completing the project. For this, we had to rely heavily on Kash's expertise as to how long each feature would take to implement.

The final metric was tracking progress to our first pilot. Just as with product completion, we estimated what steps we would need to complete with our first client to get a pilot (which became the basis for our sales pipeline). As we completed each step, we would estimate percent completion.

KPIs for Early Revenue Start-ups

We are a new early revenue start-up, so I'm sure we will learn about flaws in many of our KPIs in the coming months. For now, I will share with you what KPIs we are tracking and how we think they are working.

Sales Metrics

The obvious metric to track for sales is revenue--that should be pretty self-explanatory. But, besides total revenue, we track a few metrics. Our product is sold in three categories--small, medium, and enterprise--and can be sold in multiple industries, so we track revenue across all of those categories. This lets us see what industries we have traction in and what size of clients are our best performers. As a small company, you probably have a good idea of these insights already. However, revenue is a KPI you will want to continue tracking as you grow and it's good to get in the habit as early as possible.

As we grow and our team expands, we plan to also track revenue by salesperson/team. However, for now, Francois and I are both actively involved with every client, so we haven't assigned clients to individuals.

Revenue Velocity and Acceleration

Personally, I am a huge fan of revenue velocity tracking for a company at any stage, and I think it's a better early-days metric than revenue for a company with long sales cycles like ours. Because we only have a few clients and our sales cycle runs 4 to 8 months, our revenue plateaus for extended periods, which means it's not a great indicator of our week-over-week performance. Revenue velocity is the rate at which deals are moving through your sales pipeline, or, in other words, how much new revenue you should be generating each week (if you measure the length of the sales cycle in weeks). The formula for revenue velocity is:
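In plain terms (this is the standard pipeline-velocity formula, so treat the exact variable names as my reading of it): multiply the number of qualified deals in your pipeline by the average deal value and your overall win rate, then divide by the length of your sales cycle in weeks. A minimal sketch with made-up numbers:

```python
def revenue_velocity(deals, avg_deal_value, win_rate, cycle_weeks):
    """Expected new revenue per week from the current pipeline.

    deals:          number of qualified deals in the pipeline
    avg_deal_value: average value of a closed deal
    win_rate:       fraction of qualified deals that eventually close (0-1)
    cycle_weeks:    average length of the sales cycle, in weeks
    """
    return deals * avg_deal_value * win_rate / cycle_weeks

# Made-up example: 10 deals worth $20k each, a 25% win rate,
# and a 26-week (roughly 6-month) sales cycle.
print(revenue_velocity(10, 20_000, 0.25, 26))  # roughly $1,900/week
```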

In the early days, you will need to estimate the success rate of a deal at each stage in your pipeline but I recommend you track your success and conversion rates for the future.

I mentioned earlier that revenue velocity is a measure of how much new revenue you are generating each week, which is not quite true. For an established company with a full sales funnel, it's a good estimate, but when you're early in the sales process, your pipeline is heavily skewed toward the early stages. So it is better to think of revenue velocity as the rate at which deals are moving through the pipeline.

The reason I like revenue velocity is that it quantifies success at all stages in the pipeline. Maybe you didn't sign a new client this week, but you did bring one client from "give demo" to "follow-up meeting" and another from "proof of concept" to "pilot negotiations." The chance of success goes up with each stage in the pipeline so you can see your growth over time.

I also like tracking revenue acceleration, which is the rate of change in velocity (week 2's RV minus week 1's RV). Acceleration is useful for making sure you continue to grow at an increasing pace (as startups should). We haven't implemented this yet, but we think acceleration will also be good for quantifying the success of a new sales hire. Adding a salesperson should increase the rate at which you sell, so, if your revenue acceleration does not rise with a new hire, you should take some time to consider why the new person has not increased your sales output.
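The bookkeeping for acceleration is just a running difference of the weekly velocity figures. A quick sketch (the numbers are invented):

```python
def revenue_acceleration(weekly_rv):
    """Week-over-week change in revenue velocity.

    weekly_rv: list of weekly revenue velocity figures, oldest first.
    Returns a list one element shorter: week N's RV minus week N-1's.
    """
    return [b - a for a, b in zip(weekly_rv, weekly_rv[1:])]

# Four invented weeks of revenue velocity:
print(revenue_acceleration([1800, 1900, 2050, 2300]))  # [100, 150, 250]
```

A positive, growing sequence means the pipeline is speeding up; a flat or negative one is the signal to dig in.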

One last word of warning about revenue velocity: make sure you are increasing your revenue at each stage in the pipeline. I track the value of each stage and the average time a company spends in it to ensure revenue velocity is increasing across the board, not just because I keep adding a lot of companies to stages 2 and 3. I also purposely set the first stage of my sales pipeline to a 0% chance of success, which makes it harder to artificially inflate revenue velocity by dumping a lot of companies into the first stage.
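To make the 0%-first-stage idea concrete, here is a toy version of a stage-weighted pipeline (the stage names, probabilities, and deal values are illustrative, not our actual pipeline):

```python
# Toy pipeline: stage -> (win probability, deal values currently in it).
# Stage 1 is deliberately weighted at 0%, so piling companies into it
# cannot inflate the weighted pipeline value.
pipeline = {
    "1. first contact":     (0.00, [15_000, 20_000, 10_000]),
    "2. demo given":        (0.25, [20_000]),
    "3. proof of concept":  (0.50, [25_000]),
    "4. pilot negotiation": (0.75, [30_000]),
}

weighted_value = sum(prob * sum(deals) for prob, deals in pipeline.values())
print(weighted_value)  # 40000.0 -- stage 1's $45k contributes nothing
```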

Hiring Metrics

We are planning on expanding the team in the next few months, so hiring has been on my mind. We have hired two co-op students in the past and we tried tracking a few metrics, but, if I am being completely honest, I have not found any that I like. Also, I feel that until you are regularly hiring (e.g. one new hire every month or more) hiring metrics are not super crucial, but I like planning for the future so I have one less thing to think about.

Metrics I Don't Like

Last year, when we hired our current co-ops, we tested metrics such as interviews conducted and applications received. These metrics are not useful because they don't measure success. If I conduct 100 interviews and have 100 bad interviews, my selection process is bad. The same goes for applications: if we receive 1,000 applications but only 15 people are qualified, something is wrong with our job posting.

Alternatives I Plan on Trying

To plan out what metrics I am going to track the next time we hire, I've been going through what would constitute a successful hiring process. The first ingredient is a good job posting. For this, I am going to modify the old metric of applications received into the percentage of applicants who are qualified (i.e. get selected for an interview). To test for flaws in our selection process, I will also track the percentage of candidates who are invited back for a second interview.
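Both of those rates are simple ratios; a sketch with invented counts:

```python
def funnel_rate(advanced, total):
    """Fraction of candidates who advanced past a hiring stage."""
    return advanced / total if total else 0.0

# Invented numbers: 120 applications, 18 interviews, 6 second interviews.
print(funnel_rate(18, 120))  # qualified-applicant rate: 0.15
print(funnel_rate(6, 18))    # second-interview rate: ~0.33
```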

For tracking the actual interviews, there are two questions I want to answer. The first: did we sell the candidate on the job? The second: did we get an accurate assessment of the candidate's quality? Unfortunately, I don't think there is a quick answer to either. My plan for the first is to track offer acceptance rate, but, until we are hiring for more than one position, it will be hard to properly assess our interview process. As for determining a candidate's quality, that will likely come from a performance review conducted a few weeks or months into the position. Again, this will be hard to measure properly until we hire multiple people (unless we really mess up the first time).

Thanks for reading! I hope that this post will help you avoid some of the mistakes I made. I will probably post an update in a few months about how our KPIs have changed and any new lessons I have learned. I think we have better KPIs now than we did six months ago, but we can always improve.