
Around 90% of Products Fail and More Than 60% of the Features of a Typical Software Product Are Rarely or Never Used: Here’s What You Can Do


Mary, the development team leader, was eager to start developing and happy when she got the requirements. She and her team went ahead and created the software right away. Afterwards, Paul tested the software against the requirements. As soon as the software fulfilled the requirements, Linda, the product manager, deployed it to the customer. The customer did not like the software and ignored it. Ringo, the head of software development, was fired. How come?

Most Ideas Fail to Show Value

Nowadays, we have tremendous capabilities for implementing nearly all kinds of business ideas with the help of software. We can apply agile practices for reacting flexibly to changing requirements, we can use distributed development, open source, or other means for creating software at low cost, we can use cloud technologies for deploying software rapidly, and we can get enormous amounts of data showing us how customers actually use software products.

However, the sad reality is that around 90% of new consumer products fail, and more than 60% of the features of a typical software product are rarely or never used. (You can read about the cost of features here.)


But there is a silver lining – an insight regarding successful features: around 60% of the successes stem from a significant change to the initial idea. This gives us a hint on how to build the right software for users and customers.

Many software projects fail to deliver value, or deliver only little value, due to wrong assumptions about requirements. A questionable assumption is, for instance, that customers or experts can come up with the right requirements. In consequence, projects usually have an upfront business analysis phase before development starts. There are, of course, projects such as large-scale contract software projects in well-understood domains where upfront analysis is feasible and successful. But we should consider that these projects represent a very small percentage of all software projects.

“If we’re not solving the right problem, the project fails.”

     – Woody Williams

Nowadays, nearly all software projects are conducted in complex environments where the relationship between cause and effect with respect to features and their success can only be understood in retrospect. Nobody “knows” upfront if and how features will create value for customers. Making decisions on what to develop based on opinions is highly risky in dynamic and non-predictable environments. Developing wrong features creates cost for development and maintenance as well as opportunity cost representing the missed opportunity to develop something of value instead.

A promising way to create products in complex environments is to quickly and systematically iterate an initial product idea towards success before running out of time and other resources. Simply speaking, this means that you need to create a plan A that describes the scope of the software, identify the underlying assumptions of this plan, test the riskiest assumptions, and iterate until you have a plan B that works. The initial ideas we come up with are seldom successful. Identifying, testing and refining multiple options helps to discover better ways to provide value for users or customers.

Every Business and Innovation Idea Can Be Tested with an Experiment

One means of doing this is to continuously conduct experiments that test assumptions and make being wrong cheaper. Insights from experiments directly influence what is given to the users. This process of continuous experimentation consists of three meta-steps:

  1. Break down your product idea into a product roadmap that can be efficiently tested. Be aware that the roadmap changes over time and is basically a list of goals and assumptions. Constantly reprioritize the assumptions.
  2. Run frequent and additive experiments to test assumptions. This includes systematically observing users’ behavioral responses to stimuli such as features. An example of a hypothesis is “The new posting feature will increase sign-ups of new users by 5% in two weeks” (a minimal evaluation sketch follows this list). If an experiment does not deliver the expected result, do not test another option at random. Carefully choose what to test next.
  3. Use results from experiments to iteratively modify your product roadmap. This might lead to an improvement of a product or a significant change of the strategy. It might also mean that you need to stop the project.
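
A hypothesis like the sign-up example in step 2 can be evaluated statistically once the experiment has run. The following minimal Python sketch uses a standard two-proportion z-test; all counts and names are illustrative, not data from a real experiment.

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """One-sided test of H1: variant B's sign-up rate exceeds variant A's.

        conv_*: number of sign-ups, n_*: number of exposed users.
        Returns the observed lift and the one-sided p-value.
        """
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return p_b - p_a, 1 - NormalDist().cdf(z)

    # Hypothetical two-week experiment: control vs. new posting feature.
    lift, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
    print(f"observed lift: {lift:.2%}, p-value: {p:.3f}")  # act only if p is small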


Success cases from companies such as Etsy, Amazon, Ericsson and Supercell show that an experimentation-based development approach helps companies to gain competitive advantage by reducing uncertainties and rapidly finding product roadmaps that work. However, experimentation is hard.

How Do We Find Good Hypotheses and Conduct the Right Experiments?

Customers and users are a questionable source for novel ideas. What they say often does not match what they will do. Consider users’ wish for privacy and the way they use Facebook. However, customers often have a good understanding of problems, and asking the right questions can help reveal good hypotheses in the problem space.

Developers are usually good at coming up with solution proposals. They are familiar with technical options for solving a problem and can be a good source for revealing hypotheses in the solution space. Intensifying the communication between users and engineers promises to be another good source for hypotheses.

A further source for identifying hypotheses is usage data. It can be used to gain insights and new ideas on what to develop if the right data is collected and appropriately analyzed. Further hypotheses to test, often hidden and not directly visible, can be found in the respective business models.

What about the HiPPOs? HiPPOs are the highest paid person’s opinions. HiPPOs currently dominate decisions about what to develop. However, there is no guarantee that their ideas are better or more likely to succeed. Listen to HiPPOs and take their ideas into account when prioritizing what to test. But make development decisions based on validated assumptions.


“One test is worth one thousand expert opinions.”

     – Wernher von Braun

The experimentation process follows the scientific method. It is important that you state upfront what you expect. Otherwise you just see what is going on. And many people are excellent at rationalizing what they see; they would be surprised if they had stated their expectations upfront.

“It’s not an experiment if you know it’s going to work.”

   – Jeff Bezos

There are many techniques available that support experimentation, such as multivariate tests, prototyping, or customer interviews. But consider that choosing the right experiment technique requires that you know what you want to learn. Do you want to better understand the problem? Do you want to test the feasibility of a solution? Do you want to compare solution alternatives? Do you want to understand a behavior change? Do you want to test the efficiency of a distribution channel? All these questions lead to different experiments.
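
One way to make this explicit is to write down the mapping from learning goal to experiment type before designing the experiment. The small Python sketch below encodes one such mapping; the pairings are our illustration of the idea, not an authoritative taxonomy.

    # Illustrative mapping from learning goal to a candidate experiment type.
    EXPERIMENT_FOR_GOAL = {
        "understand the problem": "problem/customer interviews",
        "test solution feasibility": "prototype or concierge test",
        "compare solution alternatives": "A/B or multivariate test",
        "understand a behavior change": "cohort analysis over time",
        "test a distribution channel": "landing page or ad campaign test",
    }

    def pick_experiment(goal: str) -> str:
        """Return a candidate technique, or a reminder to sharpen the goal."""
        return EXPERIMENT_FOR_GOAL.get(goal, "clarify the learning goal first")

    print(pick_experiment("compare solution alternatives"))  # A/B or multivariate test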

Overall, developing successful products and services requires a deep integration of testing critical assumptions in the overall development process. It emphasizes rapid and constant learning by empirical means in order to create software that provides value for users, customers, and the developing organization. Success with software is not luck. We all have the opportunity to deliver high-value software. What is your most critical assumption?

Key Takeaways

  • It’s more important to do the right thing than to do things right. – Peter Drucker
  • Success in highly dynamic application domains traces back to disciplined experimentation.
  • Defining and running the right experiments is hard.
  • Experimentation must be deeply integrated in the design and product development process.
  • Platforms for experimentation can be seen as a core part of future development environments.

References

Scott Anthony, David Duncan, and Pontus M.A. Siren. The 6 Most Common Innovation Mistakes Companies Make. Harvard Business Review, June 2015.

Emil Backlund, Mikael Bolle, Matthias Tichy, Helena Holmström Olsson, and Jan Bosch. “Automated User Interaction Analysis for Workflow-Based Web Portals”, Presentation Slides, 5th International Conference, ICSOB 2014, Paphos, Cyprus, June 2014.

Fabian Fagerholm, Alejandro Sanchez Guinea, Hanna Mäenpää, Jürgen Münch. Building Blocks for Continuous Experimentation. In Proceedings of the 1st International Workshop on Rapid Continuous Software Engineering (RCoSE 2014), Hyderabad, India, pages 26-35, June 2014.

Jim Johnson, Chairman of the Standish Group. “ROI, It’s Your Job” (keynote), Third International Conference on Extreme Programming, Alghero, Italy, May 26-29, 2002.

Eveliina Lindgren, Jürgen Münch. Raising the Odds of Success: The Current State of Experimentation in Product Development. Information and Software Technology, 77:80-91, 2016.

Carmen Nobel, Clay Christensen’s Milkshake Marketing, Harvard Business School Working Knowledge, 2011.

Ash Maurya. Running Lean. O’Reilly, 2012; Scaling Lean, 2017.

Eric Ries. The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Publishing, 2011.

Sezin Gizem Yaman, Myriam Munezero, Jürgen Münch, Fabian Fagerholm, Ossi Syd, Mika Aaltola, Christina Palmu, Tomi Männistö. Introducing Continuous Experimentation in Large Software-Intensive Product and Service Organizations. Journal of Systems and Software, 133:195-211, November 2017.

[An earlier version of this post has been published in Perspectives on Data Science for Software Engineering, chapter Continuously Experiment to Assess Values Early On, pages 365-368. Morgan Kaufmann, 2016.]

Talk: “The Wheels of Value Model: Driving Product Ideas to Their Fullest Strength”

The Wheels of Value Model is a tool for driving product ideas to their fullest strength by systematically unearthing critical product assumptions. Instead of identifying assumptions for each element of a business model, it generates a closed value chain among the right actors and ensures that you do not miss important links. By doing this, you can rapidly see what you need to validate. This talk explains the main elements.

Presenter: Prof. Dr. Jürgen Münch
When: 16 Sept. 2015, 10:15 am
Where: Pori, N4SQ3, Yyteri Hotel, Finland

[Figure: excerpt of the Wheels of Value Model]

How to Find Out Which Features to Implement in Popular Smartphone Apps?

In the world of smartphone applications, there is a rich set of user feedback and feature suggestions that is updated continuously. Especially for smaller development teams, prioritizing these requests is of utmost importance. Prioritization is usually based on techniques that rely on predictions and customer interviews such as “Would you buy that feature?”. However, predictions can be wrong, and customer interviews suffer from contextual biases.

We developed an approach that allows us to find out the real business value of a feature. The approach is based on mock purchases and allows product managers and developers to determine the real business value of a feature without having to implement it. Hence, the approach allows feature prioritization based on facts rather than predictions. The rationale behind the approach is to eliminate contextual biases. On top of that, the approach allows us to experiment with feature pricing.


Figure 1. Approach.

Figure 1 depicts the elements of the approach:

  • Product Backlog: Each user is assigned only one of the features from the backlog that are to be tested, so potential contextual biases are reduced to a minimum (a minimal assignment sketch follows this list).
  • Description Page: For each feature, a custom description page is created that describes the feature and contains a button to load the feature’s price.
  • Purchase Page: After loading the price, the amount is displayed on the page and a purchase button appears.
  • Acknowledgement Page: After purchasing, the user receives an acknowledgement that the feature has not been implemented yet. Moreover, the user is then asked whether he or she likes, dislikes, or very much dislikes the approach. On top of that, he or she can leave a custom message in case he or she needs the feature urgently or wants to complain.
  • Questionnaire Page: If the user cancels the purchase (i.e. tries to move away from the page after having loaded the price), he or she is asked why. Users can indicate that they are not interested in the feature, that it is too expensive, that they were disappointed by other in-app purchases, or that they do not spend money on apps. They can also leave a custom message.
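
The assignment in the Product Backlog step can be kept stable per user with deterministic hashing, so that a returning user always sees the same single variant. The following Python sketch is our own illustration of that idea, not the study’s actual implementation; the variant names merely mirror the feature/price notation used in the figures below.

    import hashlib

    # Two features at three base prices each, using the SSF/HF notation
    # from the study (the concrete list is illustrative).
    BACKLOG = ["SSF1", "SSF3", "SSF5", "HF1", "HF3", "HF5"]

    def assign_variant(user_id: str) -> str:
        """Map a user to exactly one feature/price variant.

        Hashing the user ID instead of choosing randomly guarantees that
        the same user always sees the same variant, which keeps the
        exposure stable and minimizes contextual bias.
        """
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        return BACKLOG[int(digest, 16) % len(BACKLOG)]

    print(assign_variant("user-42"))  # stable across app starts and devices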

The study was implemented in an app called Track My Life, which is used, amongst others, by Nokia CEO Stephen Elop and the Finnish Minister for European Affairs, Alexander Stubb. The app is a GPS tracker that automatically collects the user’s location information in the background and analyses the data upon opening the app. Thereby, it can answer questions such as “How much time do I spend at home, at work, and on my way to work?”. On top of that, it provides statistics such as how many kilometers the user travels per day, week, and month, and at which places he or she spends most of his or her time.

Moreover, the app leverages user feedback by providing several mechanisms, e.g. a Zendesk and a Jira client, for providing feedback to the developer.

The study started on April 5, 2013 and ended on April 23, 2013. Prior to that, the approach was implemented in both the iOS and the Windows Phone version of the app. The implementation took about 1.5 days per platform, mainly due to specifics such as the possibility to enable and disable the study remotely and the conversion of feature prices into the users’ local currencies using price intervals that correspond to their smartphone operating system’s pricing intervals (i.e. converting EUR 1 to a flat 70 rupees rather than to 71.54 rupees). Figure 2 shows the implementation of the approach on iOS (left) and Windows Phone (right).
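
The currency conversion mentioned above can be sketched as follows: convert the EUR base price and snap the result to the platform’s nearest allowed price tier. The exchange rates and tier values in this Python sketch are placeholders, not the actual store price matrices used in the study.

    # Placeholder exchange rates (EUR -> local) and platform price tiers.
    EUR_RATES = {"INR": 71.54, "USD": 1.31}
    PRICE_TIERS = {"INR": [50, 60, 70, 80, 90, 100],
                   "USD": [0.99, 1.99, 2.99, 4.99]}

    def localize_price(base_eur: float, currency: str) -> float:
        """Convert a EUR base price and snap it to the closest local tier,
        e.g. EUR 1 becomes a flat 70 rupees rather than 71.54 rupees."""
        raw = base_eur * EUR_RATES[currency]
        return min(PRICE_TIERS[currency], key=lambda tier: abs(tier - raw))

    print(localize_price(1.0, "INR"))  # -> 70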


Figure 2. Implementation on iOS and Windows Phone.

Finding the Right Features to Implement

Figure 3 depicts the number of purchases that were made as well as the hypothetical revenue attached to them. Each of the six columns represents one feature at a given price tag. In total, two features were investigated, each at the price tags EUR 1, EUR 3, and EUR 5. As an example, SSF3 means that feature one (SSF) was offered at a base price (i.e. the price that was converted into the users’ currencies) of EUR 3.

revenue

Figure 3: Number of purchases and revenue.

As anticipated, Figure 3 shows a correlation between the number of purchases and the price of a feature. Moreover, it allows us to compare the two features and to make judgements about feature pricing. For instance, EUR 5 for the HF feature yields a lower total revenue than EUR 3. In contrast, the maximum revenue for SSF seems to lie at an even higher price tag than EUR 5. Moreover, the revenue created with SSF is higher than that created with HF.
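
The comparison behind Figure 3 boils down to multiplying each price point by its number of mock purchases and comparing the totals per feature. The Python sketch below illustrates this computation; the purchase counts are hypothetical, since the actual numbers are only given in the figure.

    # Hypothetical purchase counts per (feature, base price in EUR);
    # the real counts are shown in Figure 3.
    purchases = {("SSF", 1): 30, ("SSF", 3): 18, ("SSF", 5): 14,
                 ("HF", 1): 22, ("HF", 3): 12, ("HF", 5): 5}

    # Revenue per variant: base price times number of mock purchases.
    revenue = {(feature, price): price * n
               for (feature, price), n in purchases.items()}

    for feature in ("SSF", "HF"):
        best_price = max((p for f, p in revenue if f == feature),
                         key=lambda p: revenue[(feature, p)])
        print(f"{feature}: revenue maximized at EUR {best_price}")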


Figure 4: Number of users who said that they did not buy the feature because it is too expensive.

Figure 4 shows the number of users who regarded the feature as too expensive when surveyed about why they cancelled the purchase (i.e. tried to navigate back from the feature’s description page after loading the price). It underlines the point that there is a correlation between a feature’s price and the number of purchases and also that fewer users are willing to pay for the feature HF at a price tag of EUR 5 than for SSF at the same price.

14% of the users were unsatisfied (i.e. selected “I dislike the approach”) or very unsatisfied (i.e. selected “I am very annoyed by the approach”). However, the vast majority of users understood the purpose of the experiment.

We showed that the approach allows us to determine the real return of a feature as well as its ideal price.

The study is described in the following article and will be presented at the Lean Enterprise Software and Systems Conference in Galway, Ireland (December 1-4, 2013). The reference for the article is the following:

  • Alexander-Derek Rein, Jürgen Münch. Feature Prioritization Based on Mock Purchase: A Mobile Case Study. In Proceedings of the Lean Enterprise Software and Systems Conference (LESS 2013, Galway, Ireland, December 1-4), volume 167 of LNBIP, pages 165-179. Springer Berlin Heidelberg, 2013.

    @inproceedings{LESS2013b,
      author    = {Alexander-Derek Rein and Jürgen Münch},
      title     = {Feature Prioritization Based on Mock Purchase: A Mobile Case Study},
      booktitle = {Proceedings of the Lean Enterprise Software and Systems Conference (LESS 2013, Galway, Ireland, December 1-4)},
      publisher = {Springer Berlin Heidelberg},
      series    = {LNBIP},
      volume    = {167},
      pages     = {165-179},
      year      = {2013},
      doi       = {10.1007/978-3-642-44930-7_11},
      url       = {http://link.springer.com/chapter/10.1007/978-3-642-44930-7_11}
    }