Some Questions Benefit from Group Discussion. Others Don’t.

by Bloomberg Stocks

Research on the concept of “collective intelligence” has shown that in many cases, groups come up with more accurate estimates after discussing a question than individual experts do on their own. However, a new study found that while this holds true for quantitative questions (e.g., “How long will the project take?”), groups are actually less accurate than individuals when it comes to yes/no questions, such as “Will the project be done before the deadline?” Based on this distinction, the authors offer three strategies for managers to reap the benefits of group deliberation without falling prey to its downsides: focus teams on discussing data, not predicting outcomes; separate “how much?” questions from “yes or no?” questions; and continuously capture data on group dynamics and team members’ strengths and weaknesses to inform future decision-making.

When you’ve got a difficult question to answer, do you consult multiple experts to get a sense of their individual views, or ask a group to deliberate together? Studies on the concept of collective intelligence suggest that, when managed properly, asking a group can lead to more accurate estimates than simply averaging the recommendations of multiple independent advisors.

However, our recent research suggests that while group deliberation can indeed increase the accuracy of forecasts, it can lead you astray when it comes to making a final decision.

Prior studies on the power of collective intelligence have largely focused on how groups estimate answers to quantitative questions, such as How long will it take to get this product ready for market?, How much will this project cost?, or What score would you give this job candidate? For these sorts of questions, deliberating groups generally arrive at more accurate forecasts than individuals do.

But when it comes to yes/no decisions, we found that group deliberation actually reduced the likelihood of making the correct choice. In other words, even when deliberation produces more accurate estimates, if the group then votes on the simple yes-or-no decisions related to those questions (e.g., Will this product launch on time?, Will we overrun our budget?, or Will this candidate meet our standards?), the group is actually less likely to make the correct decision.

Why is this? In our research, we found that group discussion usually either amplifies the initial majority opinion or, worse, turns what would initially have been a correct vote into an incorrect one (even as the group’s quantitative estimate improves). We also found that this effect holds even if the group waits to share votes until the end of the conversation. The same social dynamics that make quantitative forecasts more accurate through discussion can also transform an incorrect minority view into a majority one, or a slim, uncertain majority into a strong consensus.

This may seem like a contradiction — but time and time again, we saw groups’ average quantitative forecast become more accurate after discussion, while their votes on a related yes-or-no question became less accurate. How is that possible? Suppose you ask a five-person group to estimate how long it will take to complete a project (a quantitative question), as well as whether the project will be done within nine months (a yes/no question). Initially, the five people guess that it will take four, five, 10, 11, and 13 months to complete. Those estimates average out to 8.6 months, but if the group were to vote, a 3-2 majority would vote no, the project won’t be done by the nine-month deadline. Suppose also that, unbeknownst to the group, the project will take 10 months, meaning their initial vote is correct.

Now suppose that after discussion, the group members revise their estimates to eight, eight, nine, nine, and 13 months. That would increase the average to 9.4 months — a more accurate forecast — but if the group voted, a decisive 4-1 majority would now incorrectly say yes, the project will be on time. With only one dissenter projecting a late completion, managers would likely be convinced that the project would be finished on schedule with no urgent need for contingency planning.
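
To make the arithmetic concrete, here is a minimal Python sketch of the scenario above. The deadline, true duration, and estimates are the figures from the example; the voting rule (an estimate at or under the deadline counts as a “yes” vote) is our assumption about how each member would vote:

```python
# Minimal sketch of the worked example: the group's average estimate
# moves closer to the true value even as the majority vote flips from
# correct to incorrect. All numbers come from the scenario in the text.

DEADLINE_MONTHS = 9
TRUE_DURATION = 10  # unknown to the group

def summarize(estimates):
    avg = sum(estimates) / len(estimates)
    # Assumed voting rule: estimate <= deadline means a "yes, on time" vote.
    yes = sum(1 for e in estimates if e <= DEADLINE_MONTHS)
    no = len(estimates) - yes
    majority = "yes, on time" if yes > no else "no, late"
    return avg, yes, no, majority

before = [4, 5, 10, 11, 13]  # initial estimates, in months
after = [8, 8, 9, 9, 13]     # revised estimates after discussion

for label, estimates in [("Before discussion", before), ("After discussion", after)]:
    avg, yes, no, majority = summarize(estimates)
    print(f"{label}: mean = {avg:.1f} months "
          f"(off by {abs(avg - TRUE_DURATION):.1f}), vote = {yes}-{no} -> {majority}")

# Before discussion: mean = 8.6 months (off by 1.4), vote = 2-3 -> no, late
# After discussion: mean = 9.4 months (off by 0.6), vote = 4-1 -> yes, on time
```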

While this example illustrates a worst-case scenario, the general point is that improvement in a forecast does not guarantee improvement in a final vote. To explore this phenomenon, we analyzed a series of lab experiments with more than 450 groups of 10 to 40 participants each, in which we asked people to answer questions such as how many gumballs were in a jar, what percentage of Americans have health insurance, or what the U.S. per capita GDP is, both before and after a group discussion. We found that on average, even though their quantitative estimates improved, the groups had about an 80% chance of either reversing an initially correct vote or simply amplifying their initial majority opinion regardless of its accuracy.

Of course, this doesn’t mean there isn’t a place for group decision-making. There is still wisdom in the crowd, but only if you ask the right question, the right way, with the right management. Specifically, we’ve found three strategies that can help managers reap the benefits of group deliberation without falling prey to the effect described above.

Focus on Discussing Data, Not Predicting Outcomes

During discussion, managers should encourage people to share relevant information, such as personal experiences, facts, and data — but not numerical estimates or decision recommendations. For example, in a committee discussing a product launch, it can be useful to share factual information such as: Our last product for this customer was late because they asked to change a key feature, or The UX design team has finished its last three projects ahead of schedule. This type of information helps everyone move towards a more accurate understanding of the issue at hand.

However, managers should do their best to steer groups away from straw polls and public predictions. Comments such as I think it will be late or I think it will be done in three months tend to make opinions converge regardless of the reasons why group members feel the way they do, ultimately leading to less accurate votes when the time comes to make a decision. The natural tendency of social interaction to make people’s opinions more similar is the very thing that improves groups’ quantitative estimates, but it often ends up undermining accuracy in a final vote.

Separate “How Much?” from “Yes or No?”

One way to take advantage of the benefits of collective intelligence while still optimizing for accurate decision-making is to explicitly separate the forecasting discussion (how much?) from the decision (yes or no?). Consider asking a group to discuss a quantitative forecast, then take the average of their predictions and leave the final decision to a manager. For example, instead of asking your team whether a product will be ready on time, ask them only for a revised development time estimate, leaving the actual decision on your desk. Or if it’s important for a decision to be reached democratically, have one group make forecasts after discussion, then give those estimates to a second group and have that group take a vote without discussion, based only on the first group’s estimates.
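
As a rough illustration of this separation, here is a minimal sketch in Python. The function names and the decision rule are illustrative assumptions rather than a protocol from our study; the point is simply that the group contributes only revised estimates, while the yes/no call is applied to the pooled forecast instead of being taken by vote:

```python
# Sketch of the "separate how-much from yes-or-no" process: the group
# supplies post-discussion quantitative estimates, and a single
# decision-maker (or a second, non-deliberating group) applies the
# yes/no rule to the pooled forecast. Names and threshold are illustrative.

def aggregate_forecast(post_discussion_estimates):
    """Pool the group's revised estimates into one forecast."""
    return sum(post_discussion_estimates) / len(post_discussion_estimates)

def decide(forecast_months, deadline_months):
    """Yes/no call made on the pooled forecast, not by group vote."""
    return forecast_months <= deadline_months

estimates = [8, 8, 9, 9, 13]               # revised estimates from the earlier example
forecast = aggregate_forecast(estimates)   # 9.4 months
print(f"Pooled forecast: {forecast:.1f} months -> on time? {decide(forecast, 9)}")
# Pooled forecast: 9.4 months -> on time? False
```

On the numbers from the earlier example, the pooled forecast of 9.4 months yields the correct “no” that the group’s 4-1 vote would have missed.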

Importantly, we’ve found that nearly every decision involves some sort of forecasting, even when it’s not obvious. Market strategists facing the classic “build or buy” problem are not just asking what they should do, but how expensive and effective each option is likely to be. Public policy experts don’t just need to know whether a program will be successful; they need to forecast how many people will enroll, how much impact it will have on various metrics, and so on. And managers facing questions such as whether a budget will overrun, whether an investment will offer a positive return, or whether a risk exceeds some acceptable threshold can benefit from quantitative estimates of likely costs, returns, and risk levels. Even if the ultimate goal is to make a decision, it’s almost always possible to separate that decision from a crowdsourced forecasting discussion.

Capture Data on Group Dynamics

Both in our recent research and in our prior work, we’ve found that people who are more stubborn (that is, those who tend to make smaller revisions to their initial estimates after group discussion) also tend to be more accurate. By tracking both the evolution of group members’ opinions and ultimate project results, you can learn over time whose opinions to put the most stock in.

You’re also likely to find that different people are better at estimating different types of questions. You may, for example, discover that some people on your team are very good at estimating project speed, while others are better at estimating project cost. You might also find that the most confident people on your team are usually right, or that they’re usually wrong. Tracking this kind of information can help you ask the right people the right questions and optimize your forecasting and decision-making processes accordingly.
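
As a rough sketch of what such tracking might look like, the snippet below logs each member’s initial estimate, revised estimate, and the eventual actual outcome, then computes a simple stubbornness proxy (average revision size) and an accuracy proxy (average error) per person and question type. All names, fields, and records are hypothetical:

```python
# Hypothetical tracking log: record (person, question type, initial
# estimate, revised estimate, actual outcome) for each forecast, then
# summarize how much each person revises and how accurate they end up.

from collections import defaultdict

records = [
    # (person, question_type, initial, revised, actual)
    ("Ana", "duration", 10, 9, 10),
    ("Ana", "cost", 50, 48, 52),
    ("Ben", "duration", 4, 8, 10),
    ("Ben", "cost", 40, 47, 52),
]

stats = defaultdict(lambda: {"revision": 0.0, "error": 0.0, "n": 0})
for person, qtype, initial, revised, actual in records:
    s = stats[(person, qtype)]
    s["revision"] += abs(revised - initial)  # stubbornness proxy (smaller = more stubborn)
    s["error"] += abs(revised - actual)      # accuracy proxy (smaller = more accurate)
    s["n"] += 1

for (person, qtype), s in sorted(stats.items()):
    print(f"{person}/{qtype}: avg revision {s['revision'] / s['n']:.1f}, "
          f"avg error {s['error'] / s['n']:.1f}")

# Ana/cost: avg revision 2.0, avg error 4.0
# Ana/duration: avg revision 1.0, avg error 1.0
# Ben/cost: avg revision 7.0, avg error 5.0
# Ben/duration: avg revision 4.0, avg error 2.0
```

In this toy log, the person who revises least (Ana) also ends up closest to the actual outcomes, which is the pattern our research on stubbornness would lead you to look for in your own data.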

***

In many situations, groups are likely to arrive at more accurate estimates than any individual would come up with on their own. However, the same process of social influence that leads groups toward more accurate forecasts can also push them toward less accurate answers to yes/no questions. As such, managers should be careful to limit group discussions to facts rather than judgments, separate forecasting from decision-making, and continuously iterate on and improve their decision-making processes based on past data.
