The Product Thinking Playbook is our adaptable, customizable approach to designing project plans for building better products. Drawing on tactics, techniques, and milestones from design thinking, agile development, lean product strategy, and jobs-to-be-done theory, the Playbook facilitates conversations about what you will and (just as importantly) will not do to achieve your product goals.

On a recent project, Connected was engaged as an end-to-end product development partner to redesign the web experience through which a global workforce discovers and connects with the employer-provided health and wellness services that best fit their well-being journeys. This article is the second in a series highlighting specific Playbook tactics and techniques used during the project, with an emphasis on how we adapted them to work effectively in a remote setting.

In the previous piece in this series, we looked at the Research Planning tactic card. Today, we are shifting our focus to the Concept Evaluation technique card…

Objective: What are we trying to achieve at this stage in the project?

After the Immersion phase, and through research activities like the Service Blueprint workshop and exploratory interviews, the team gained an aligned understanding of the users’ jobs-to-be-done. We used this knowledge to formulate how-might-we prompts for Concept Generation workshops with multiple groups of multidisciplinary stakeholders. With more than 100 concepts generated, the team needed to conduct Concept Evaluation to select a subset of high-risk product concepts that we could bring into Concept Testing with users.

To do this, we needed to design the evaluation framework before regrouping with the broader client team and their subject-matter experts to conduct the prioritization exercise. In tackling these steps in a remote environment, the team went through trial and error to find the best ways to collaborate internally and with the client team. We learned when to diverge and converge so that our remote interactions were as effective and efficient as possible, and we practised building consensus with our clients throughout the remote product discovery journey. Now, we will dive deeper into these learnings as we walk through our approach to Concept Evaluation.

Approach: How do we action this Playbook technique and adapt it to a remote setting?

a. Designing the Evaluation & Prioritization Framework

Prior to working remotely, product discovery teams at Connected preferred to collaborate in a co-located team space (a ‘war room’). A common physical space allowed us to quickly tap someone on the shoulder and engage in spontaneous discussions. This flexibility in team collaboration was much more difficult to achieve in a remote environment.

We tried to mimic this type of co-located interaction by scheduling two- to three-hour working sessions. Often, these sessions began with free-flowing discussion of potential directions, but the team would eventually reach a point of paralysis due to missing context. As a result, we decided to shorten the group sessions and ‘converge’ only for debriefs, feedback, and initial brainstorming. We would then ‘diverge’ and divide and conquer, assigning an owner to carry on with information gathering and analysis independently. In the case of Concept Evaluation, the owner had to research and assess appropriate prioritization frameworks.

The delegated owner needed the autonomy to decide when group solutioning was most effective. While taking on the initial round of analysis, the owner also developed a coherent visual system to illustrate their thought process so that others on the team could understand the data. Visually organizing the analysis minimized the time needed for explanations, letting the team use its valuable time together to agree on extracted insights, discuss contrasting perspectives, and make key decisions. This added step of digitally translating work-in-progress incurred some overhead at first, but the outputs were digital assets that we were able to reuse later in the process.

In developing the evaluation and prioritization framework, the owner assessed potential methods such as the Kano model and the weighted scorecard. A weighted scorecard is a pointed, quantitative approach, and it is challenging for teams and client stakeholders to agree on an appropriate weight distribution. The Kano model, on the other hand, would require user satisfaction feedback, which was premature at this stage given the sheer number of feature concepts. After the owner shared this assessment in a group feedback session, the team decided that a lightweight prioritization method would better serve the immediate need of identifying high-risk concepts to test further with users. After another round of diverging and converging, the team landed on a nested impact-risk matrix as the prioritization method. The first matrix plotted a concept’s evaluated desirability against its potential business viability risks. Winning concepts (those evaluated as moderate to most impactful and viable) graduated to a second matrix that added the dimension of technical feasibility risk. All dimensions were evaluated on a 3-point scale.
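The nested matrix logic described above can be sketched in code. This is a minimal illustration, not the exact rubric we used: the concept names, scores, and graduation thresholds below are all hypothetical, and we have encoded the 3-point scale as integers for simplicity.

```python
from dataclasses import dataclass

# 3-point scale encoded as integers: 1 = least, 2 = moderate, 3 = most.

@dataclass
class Concept:
    name: str
    desirability_impact: int  # impact on the primary product value (1-3)
    viability_risk: int       # business viability risk (1-3, 3 = riskiest)
    feasibility_risk: int     # technical feasibility risk (1-3, 3 = riskiest)

def first_matrix(concepts):
    """Matrix 1: desirability impact vs. business viability risk.
    Concepts with moderate-to-most impact and acceptable viability graduate."""
    return [c for c in concepts
            if c.desirability_impact >= 2 and c.viability_risk <= 2]

def second_matrix(concepts):
    """Matrix 2: add technical feasibility risk, surfacing the high-impact,
    high-risk concepts worth bringing into Concept Testing first."""
    return sorted(first_matrix(concepts),
                  key=lambda c: (-c.desirability_impact, -c.feasibility_risk))

# Hypothetical concepts for illustration.
concepts = [
    Concept("Activity-based interest forum", 3, 2, 3),
    Concept("Static FAQ page", 1, 1, 1),
    Concept("Live coaching marketplace", 3, 3, 3),
]

for c in second_matrix(concepts):
    print(c.name)
```

The nesting keeps each conversation focused: viability is settled in the first pass, so the feasibility discussion only happens for concepts that have already earned their place.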

In developing the 3-point scales, the team split into two groups to make efficient use of our cross-functional expertise. The design, design research, and product team members determined the evaluation considerations for desirability. Based on our knowledge from the exploratory interviews, we first had to hypothesize the key product values that would address our users’ jobs-to-be-done.

As an example, our product could enable users to build personal connections with their workplace peers. By pinpointing a set of three to five product values, we were able to catalogue feature concepts by their primary values and then evaluate them by the level of impact (least, moderate, most) their functionality could have in realizing the specific product value. Continuing the earlier example, a concept built around an online forum for activity-based interests, such as running, aligned most closely with peer-to-peer connection as its primary value.
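The cataloguing step can also be sketched as a simple grouping. The product values and concepts below are hypothetical stand-ins; the actual values came from our exploratory interviews.

```python
from collections import defaultdict

IMPACT = {"least": 1, "moderate": 2, "most": 3}

# Hypothetical concepts, each tagged with a primary product value
# and an impact level on the 3-point scale.
concepts = [
    {"name": "Activity-based interest forum",
     "primary_value": "peer-to-peer connection", "impact": "most"},
    {"name": "Manager shout-out board",
     "primary_value": "peer-to-peer connection", "impact": "moderate"},
    {"name": "Benefits glossary",
     "primary_value": "service discoverability", "impact": "least"},
]

# Catalogue concepts under their primary value.
catalogue = defaultdict(list)
for c in concepts:
    catalogue[c["primary_value"]].append(c)

# Within each value, rank concepts by impact so the evaluation
# discussion can start from the strongest candidates.
for value, group in catalogue.items():
    group.sort(key=lambda c: IMPACT[c["impact"]], reverse=True)
    print(value, "->", [c["name"] for c in group])
```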

In parallel, our engineering team members evaluated relative implementation complexity and technical dependencies on a similar 3-point scale, from least to moderate to most complex to deliver and de-risk. The team then came together to share uncertainties that had surfaced during our separate analyses. Since product concepts at this stage were quite broad, we spent most of our time together building a shared understanding of each concept, finding comparable experiences from our competitive product scan before finalizing our evaluation.

b. Building consensus with client teams

In evaluating business viability risks, we leaned on our client stakeholders and subject-matter experts in their organization. Given that there was a large service delivery component required to realize most of these product concepts, we needed to ensure that the evaluation scale used was sufficient to address operational requirements that the service delivery and customer service teams would need to fulfill in the future.

Without the luxury of having our client teams available for spontaneous discussions and knowledge sharing, we conducted a remote prep session with our client leaders prior to the prioritization workshop. We used this session to build a common understanding of the product risks that the prioritization should consider, and we ran through prompting questions to brainstorm key assumptions across each risk area. This was also a good opportunity for us to introduce the nested prioritization process along with some sample evaluation parameters we came up with based on our knowledge of the existing service operations requirements. From this session, the client team not only understood the approach that we had designed but also had concrete examples of how these evaluations would be used in the prioritization to come.

Areas of Improvement: What can we do to ensure continuous improvement and progress?

In a remote setting, we learned to take extra steps to bring our client partners along in the discovery process, being far more deliberate about building consensus before moving on to next steps. This is a practice we have found valuable and will continue to refine even as we move into a hybrid work setting.

Diverge-and-converge collaboration, of course, is not new; co-located product teams have practised it for a while. In a remote environment, however, these strategies require fresh experimentation to address issues like ‘Zoom fatigue,’ lack of visibility into the team’s progress, and gaps in digital tooling. It is certainly a stress test for remote teams that must navigate the ambiguities of product discovery at a fast pace. Through trial and error, we were pushed to identify and correct inefficiencies in our remote process quickly, and we organically built trust as our collaboration evolved.

The Concept Evaluation card is often played at a key moment in a project: it signals an intention to push from early discovery into deeper exploration, a critical milestone in the effort to build better products. On this project, we saw clearly how important it is to find the right moments for individual creativity and for group brainstorming; by balancing the two, we arrived at a prioritized list of product concepts ready to be tested.

In the next article, we will look at how our team used the Iterative Design card on this project.