Posted on February 27, 2017 @ 07:22:00 AM by Paul Meagher
A lean startup uses innovation accounting to properly measure the effect of design changes on customer behavior. A startup can fail simply because it is measuring the wrong things. The chapter "Measure" is about strategies we can use to make sure we are measuring the right things.
We discussed the concept of a Minimum Viable Product (MVP) in the last blog ("Test") of this blog series on Eric Ries' seminal book The Lean Startup (2011). One property of an MVP that I didn't discuss was its use to gather initial baseline measurements of your Key Performance Indicators (KPIs). When designing your MVP, keep in mind that one important role it can serve is to kick off the process of measuring baselines for KPIs like the number of registrations, number of downloads, number of customer logins, number of payments, and so on (sales funnel behaviors). Once you gather this baseline data, you can verify whether any future design changes you make actually have a significant effect on the levels of these KPIs.
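Here is a minimal sketch in Python of what establishing such a funnel baseline might look like (the stage counts are hypothetical, and how you log events is up to you):

```python
# Minimal sketch: baseline conversion rates for a sales funnel.
# The stage counts below are hypothetical; plug in whatever your MVP logs.

funnel = [
    ("visits", 5000),
    ("registrations", 400),
    ("logins", 250),
    ("payments", 30),
]

def baseline_rates(funnel):
    """Return the conversion rate from each funnel stage to the next."""
    rates = {}
    for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
        rates[stage + " -> " + next_stage] = next_count / count
    return rates

for transition, rate in baseline_rates(funnel).items():
    print("{}: {:.1%}".format(transition, rate))
```

The point is simply to record stage-to-stage conversion rates before you start changing anything, so that later comparisons have something real to be compared against.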
The term Innovation Accounting refers to an iterative three-step process: gather baseline measurements, make a design change intended to improve your KPIs, and then use new measurements to decide whether to pivot or persevere in your present course. The more times you successfully complete this cycle, the more actual value-adding innovation is happening.
Lots of startups measure the performance of their business, but you can still fail if you are measuring vanity metrics rather than actionable metrics. Vanity metrics are numbers that portray the startup in the best possible light but don't actually give us much insight into what is working and what is not. They often take the form of up-and-to-the-right graphs of gross numbers: total users registering or performing some other desirable action on a website. While those numbers look good, they may be masking problems with other, more critical metrics like conversions and sales. Ultimately, the problem with a vanity metric is that it is not fine-grained enough to tell us what is working and what is not. If we want to figure that out, we need to apply scientific/statistical techniques to the design process.
If we made the effort to measure baseline performance with our MVP, we are in a position to conduct A/B testing on a feature to see whether it affects our baseline numbers. A/B testing involves presenting potential customers with two versions of the product, with one major factor differing between the two versions. If we find that version A delivers more sales than version B, and that A also delivers more sales than our previous baseline, then we can start to develop a causal understanding of which factors are important to the success of our startup and which ones are not.
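To make the statistics concrete, here is a rough sketch (my own illustration, not a procedure from the book, and the counts are hypothetical) of a two-proportion z-test, a standard way to check whether version A's conversion rate differs significantly from version B's:

```python
# Minimal sketch: two-proportion z-test for an A/B test on conversion rates.
# The conversion counts below are hypothetical illustrations, not real data.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Version A: 120 sales from 2000 visitors; version B: 85 sales from 2000.
z, p = two_proportion_z_test(120, 2000, 85, 2000)
print("z = {:.2f}, p = {:.4f}".format(z, p))  # a small p suggests a real difference
```

A small p-value tells you the difference between the two versions is unlikely to be random noise, which is exactly the kind of evidence you need before attributing a KPI change to a design change.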
Eric unashamedly uses the term "cause-effect inferences" (p. 135) to describe the goal of measurement in the lean startup. He believes that A/B Testing and Cohort Analysis are both readily available techniques startups can use to achieve such understanding. He provides a detailed case study of how the educational startup Grockit applied A/B testing to figure out what was and wasn't working on their learning platform. They believed that peer learning was an underutilized aspect of learning and built lots of platform features to support it, but eventually realized the new features weren't producing improvements in their KPIs. This led to the realization that learners also want a solo mode for learning, which resulted in a pivot in their design approach to more fully support both peer-based AND solo modes of learning.
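Cohort analysis is also easy to sketch. As a minimal illustration (hypothetical data, not Grockit's actual setup), you group users by when they signed up and track what fraction of each cohort remains active over time:

```python
# Minimal sketch: cohort analysis of retention by signup month.
# The data is hypothetical: (user_id, signup_month, months_active) tuples.
from collections import defaultdict

users = [
    (1, "2017-01", 3), (2, "2017-01", 1), (3, "2017-01", 2),
    (4, "2017-02", 2), (5, "2017-02", 2), (6, "2017-03", 1),
]

def retention_by_cohort(users, horizon=3):
    """For each signup-month cohort, the fraction still active after each month."""
    cohorts = defaultdict(list)
    for _, signup_month, months_active in users:
        cohorts[signup_month].append(months_active)
    table = {}
    for month, activity in sorted(cohorts.items()):
        table[month] = [
            sum(1 for m in activity if m >= k) / len(activity)
            for k in range(1, horizon + 1)
        ]
    return table

for cohort, rates in retention_by_cohort(users).items():
    print(cohort, ["{:.0%}".format(r) for r in rates])
```

Comparing cohorts tells you whether users who arrived after a design change behave better than those who arrived before it, which is exactly the fine-grained signal that vanity metrics hide.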
I've discussed the book Getting to Plan B as an important influence on Eric's thinking. Chapter 2 of Plan B, "Guiding Your Flight Progress: The Power of Dashboards", offers more useful ideas and techniques around measuring what matters. Plan B advises using dashboards that list out what leaps of faith you are testing, how they translate into hypotheses, what metrics you'll use to decide whether each leap of faith is true, what your actual measurements are, and what insights and responses are appropriate given the results. Here is a simple dashboard for a lemonade stand which illustrates the basic ideas and the format/layout they advocate.
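(The numbers below are made up, purely to illustrate the layout of one dashboard row.)

Leap of faith:       People will pay a premium for fresh-squeezed lemonade
Hypothesis:          At $1.00 per cup, at least 25% of passersby will buy one
Metrics:             Passersby counted, cups sold, conversion rate
Actual measurements: 80 passersby, 12 cups sold (15% conversion)
Insights/response:   Premium pricing misses the target; test $0.50 per cup next weekend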
What Eric did was add many useful details to this framework: the need for baselines, MVPs, innovation accounting, split testing, and cohort analysis. These techniques help the lean startup more reliably find a value proposition and business model that works.
I'll conclude this blog by asking you to think about whether these ideas can be applied to developing new songs. Should a musician begin by developing a Minimum Viable Song that they expose to audiences to get baseline feedback? What key performance indicators might they measure? What variations might they experiment with to see if a change makes the song better (e.g., same lyric but a different melodic delivery)? Could they achieve a cause-effect understanding of which elements are contributing to the song's success? What vanity metrics might mislead them about how well their song is really doing?
I was recently listening to an interview with a musician who seemed to be using a sort of lean startup methodology to figure out how to develop new songs, and I thought it was an interesting domain of application for lean methods.