Test the promise of lofty ideas with just enough research.
“Ideas are worthless, execution is everything.”
If you operate in the VUCA world of creating new businesses, as we do, then you’ve probably come across some variant of the quote above. But does that quote really paint a holistic picture? Sure, execution matters, but is it enough? What about well-executed ideas that never took off? Remember Segway, the technological marvel that didn’t live up to the hype? Why did it fail? For that matter, if execution is everything, why does any well-executed idea fail at all?
In our humble opinion, execution is important — we’ve built a whole business around it. But is it everything? We don’t think so. A well-executed idea isn’t worth much if it fails to deliver value to its users. Gauging whether the idea is valuable is at least as important as executing it well.
How do we determine if users find value in the idea, before we expend any effort on its execution?
For us at Obvious, the answer lies in the three-step process that follows.
Step 1: Design the experiment to gauge usefulness
Our aim is only to gauge whether people resonate with the broad idea, not the details of its execution. Without getting lost in those details, we outline our learning outcomes so that they align with the broad business goals. For example, if the business goal is to check the appeal of voice search in an application, here's how we would craft our learning outcome, experiment design and success criterion —
- Learning outcome: Learn if users naturally gravitate towards voice search.
- Experiment design: Add a dummy Search Using Voice button to the interface and observe if users interact with it.
- Success criterion: Users tap the button expecting to be led to a voice search interface.
In the above case, tapping the button might not do anything at all, but the usefulness of voice search is established the moment the button is tapped, or debunked if users don't hit it.
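An experiment like this can be instrumented with only a few lines of code. The sketch below is a hypothetical illustration (the function names and the tap-rate success metric are our assumptions, not from the article): it records taps on the dummy button and computes what share of test users tapped it at least once.

```javascript
// Minimal sketch of instrumenting a dummy-button usefulness test.
// All names here are illustrative, not a real analytics API.
function createUsefulnessProbe(featureName) {
  const events = [];
  return {
    // Call this from the dummy button's click/tap handler.
    recordTap(userId) {
      events.push({ feature: featureName, user: userId, at: Date.now() });
    },
    // Share of test users who tapped at least once.
    tapRate(totalUsers) {
      const uniqueUsers = new Set(events.map((e) => e.user));
      return uniqueUsers.size / totalUsers;
    },
  };
}

const probe = createUsefulnessProbe("voice-search");
probe.recordTap("u1");
probe.recordTap("u1"); // repeat taps by the same user count once
probe.recordTap("u2");
console.log(probe.tapRate(5)); // 2 of 5 test users tapped the button
```

In a real study, the success criterion would be judged against the session observations as well, not the tap count alone.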
We often have several such usefulness tests built into a single prototype, all of which add up to paint the big picture regarding the usefulness of a potentially lofty idea, without having to build it in reality.
Step 2: Craft a Minimum Viable Prototype (MVPr)
To test our idea, we need to help our users imagine a world where the idea is a reality. Only when they can picture living with that idea can we know if they find it useful or not. The best way to do this is to create what we call a Minimum Viable Prototype (MVPr).
An MVPr isn’t the final manifestation of the idea. In fact, it’s quite the opposite. It’s a mere facade: it seems real enough to help users imagine living with it, yet has enough gaps to invite candid feedback. Users see it as something only partially complete, which they can mould to their specific needs by expressing their opinions freely. By gauging these opinions, we determine whether users find the idea useful.
So what does it take to craft an MVPr? At Obvious, we focus on the following —
- Make it just real enough
- Prioritize clarity over subtlety
- Build only if necessary
We ensure that the prototype is just real enough, but no more. If it feels too complete, users find it hard to talk about their specific needs because they get caught up in what they think the product offers, not what they require of it. On the other hand, if it feels too incomplete, users are unable to imagine living with it and usually digress into subjects that aren’t related to the idea, which often derails the whole test. Working at lower fidelity also ensures that we don’t invest a lot of time in the MVPr, which helps us stay unattached to the idea and keep our biases out of the picture.
We’ve noticed that 'The Law of the Vital Few' holds true for MVPrs: 80% of the learnings from our MVPrs come from a mere 20% of the features. Rather than building peripheral features that contribute to completeness but don’t deepen our understanding of how the idea is received, we focus on making those 20% of features extremely clear and see if users resonate with the idea. Sometimes this requires us to be unrealistic and over-emphasise certain elements — imagine really big buttons with large, bold text. Such elements are of course not representative of how the actual product might look, but they keep the test hygienic by ensuring that the user does not miss the message. Remember, we're testing the appeal of the idea, not the finesse of the product.
We make the best use of available resources in service of the MVPr. It’s far wiser to prototype using existing tools, even if the result doesn’t add up to the cohesive experience the end product might deliver. For example, if our idea is to test a flow using a chat-bot, rather than mocking up a chat-bot that sits well within the experience of the product, we prefer using existing chat tools like Typeform’s conversation platform, even if the experience seems slightly disconnected from the overall product. If the chat-bot isn’t exciting enough for users to look past the inconsistencies in the experience, it’s likely that it isn’t going to fly in the real product either.
Step 3: Test and Synthesize
Irrespective of whether testing and synthesis happen in parallel, or one after the other, the process remains the same. In addition to our time-tested approach for preparing, conducting, and synthesising user studies, we are mindful of the following aspects when testing for usefulness.
- Conversation mindset over task mindset
- Opinions matter

Unlike a usability study, where we get the user to go through a series of tasks and check for usability gaps, in a usefulness study we use the MVPr as an aid for facilitating a rich conversation around the broad idea. We encourage users to think out loud and venture into use cases we might not have thought of.

Before closing, we get the users to reflect upon what they just experienced by asking some opinion questions such as —

- How is the proposed idea likely to affect their lives?
- How could the idea be modified to bring them more value?
- What parts of the MVPr worked well for them? What parts didn’t?
“If I had asked people what they wanted, they would have said faster horses.” — attributed, perhaps apocryphally, to Henry Ford