Product Thinking Playbook – Hypothesis Driven Validation
July 20, 2022
Estimated Reading Time: 5 minutes
Everyone has assumptions, and that’s not bad. On the contrary, it’s what drives inquisitive minds forward as they pave the way for new ideas, shining light on the collective unknown. That is, assuming those assumptions can be tested, validated, and used to inform decision-making when turning those ideas into new and innovative products.
Fortunately, our Product Thinking Playbook has a technique that can do all of that and more – Hypothesis Driven Validation.
Ready to start iteratively increasing your confidence?
What is Hypothesis Driven Validation?
Hypothesis-driven validation is a seven-step framework that focuses on turning product and customer assumptions into hypotheses, testing them, and using them to inform product decisions. Hypotheses may be related to customers, problems, or solutions under the lens of the four product risks (desirability, viability, feasibility, usability).
Why would product teams do it?
Hypothesis-driven validation allows product teams to validate or invalidate their most critical hypotheses, enabling leaders to make evidence-based decisions that significantly reduce the risk of subsequent investments. Invalidating a hypothesis can be just as powerful as validating one, if not more so, because it helps the team avoid major pitfalls and can lead to even better product decisions.
When should product teams use it?
Research Planning: Write hypotheses that expand on research objectives and add direction. Use early learnings from immersion activities to write informed hypotheses, prioritize them, and determine the appropriate validation methods to create a testing plan. Hypotheses about potential users or user groups may also inform a recruitment strategy (e.g. segmentation, screener criteria, etc.).
User Research: To inform and aid in creating research materials such as discussion guides or research activity content (e.g. a card sort). Reflect during the synthesis phase and determine if the hypotheses have been validated, invalidated, or require more data.
Prototyping: To guide the creation and testing of prototypes to validate a new product or feature concept. Hypotheses can be assessed, added to and evolved iteratively.
Demand Validation: To structure the testing of the desirability of a proposed strategy, concept, or feature. The goal is to ensure that the solution addresses a real, observed struggle and/or generates new demand.
How do I do it? (Best Practices)
In practice, hypothesis-driven validation breaks down into the following seven steps:
1. Identify your assumptions: List all the assumptions about potential product concepts and/or features based on your team’s existing knowledge. It’s essential to consider them in terms of the four product risk areas—feasibility, viability, desirability, and usability. Depending on where you are in the product lifecycle, your hypothesis development may be targeted toward certain risk areas. For example, you might focus on validating desirability early on and usability later.
2. Reframe assumptions as “hypotheses”: Reframe your list of assumptions as “We believe that…” statements or clear hypotheses. This helps to expose them as subjective opinions, not objective facts.
3. Rank them in order of importance: For each hypothesis, ask two questions: how critical is it to the product’s success, and how certain are you that it’s correct? If a hypothesis is vital to success but lacks certainty, it should be prioritized. On the other hand, if it’s critical and you’re already confident in it, move on and investigate riskier bets first.
4. Design and rank tests: Each hypothesis should be matched to one or more tests based on its risk level and which test would best identify risks and opportunities. You’re unlikely to have the resources or time to run every test on every hypothesis, so rank them by test effectiveness and prioritize high-risk hypotheses to accelerate learning. Focus on tests that provide reliable data that validates or invalidates your hypotheses. If you’ve identified a make-or-break hypothesis, consider designing a combination of testing approaches to triangulate data and generate the most confidence in your results.
5. Conduct the tests (Build, measure, test): Following the test plan you’ve outlined, it’s time to commence testing! Depending on the tests you’ve chosen, this step can take days, weeks, or even months – and remember that this testing cycle can occur iteratively! Don’t forget to record all of the data right from the start.
6. Synthesize your learnings: Debrief the team, synthesize data, and extract learnings. Some of your hypotheses will have been invalidated, while others will be proven true. But more often than not, the reality is not that straightforward. Allow flexibility when determining the result and next steps for each hypothesis.
7. Act: It’s time to put the learnings into action! This might mean scrapping an invalidated product idea or moving forward to high-fidelity prototyping if you’re working in a discovery phase. In the delivery phase, it might mean adjusting the roadmap or being given the confidence you need to bring in extra resources to accelerate development.
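To make steps 2 and 3 concrete, here is a minimal sketch of a hypothesis backlog in Python. The scoring model is an assumption for illustration: each hypothesis gets a criticality score (how important it is to the product’s success) and a certainty score (how confident the team is that it holds), both on a 1–5 scale, and critical-but-uncertain hypotheses float to the top. The field names and the priority formula are hypothetical, not part of the framework itself.

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    statement: str    # a "We believe that..." statement (step 2)
    risk_area: str    # desirability, viability, feasibility, or usability
    criticality: int  # 1 (minor) to 5 (make-or-break)
    certainty: int    # 1 (pure guess) to 5 (well-evidenced)

    @property
    def priority(self) -> int:
        # Step 3: high criticality and low certainty both raise the score,
        # so vital-but-unproven hypotheses get tested first.
        return self.criticality * (6 - self.certainty)


backlog = [
    Hypothesis("We believe that freelancers will pay for automated invoicing.",
               "viability", criticality=5, certainty=2),
    Hypothesis("We believe that users can complete onboarding unaided.",
               "usability", criticality=3, certainty=4),
    Hypothesis("We believe that the sync engine can run offline.",
               "feasibility", criticality=4, certainty=5),
]

# Highest-priority hypotheses first (step 4 then matches each to tests).
for h in sorted(backlog, key=lambda h: h.priority, reverse=True):
    print(f"{h.priority:>2}  [{h.risk_area}] {h.statement}")
```

Note that the viability hypothesis outranks the feasibility one even though both are fairly critical: the team is already confident the sync engine works, so testing it first would buy little new learning.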
Our Product Thinking Playbook is filled with tactics and techniques that help product teams build better products. Click here to download your copy of the complete playbook, and stay tuned as we share more from it in the coming weeks.