Choosing the right format for your crowdsourcing contest: Joint or separate?

How can an innovative firm solve its practical problems by tapping into the wisdom of the crowd? One increasingly popular approach is the crowdsourcing contest, a prize competition launched over the internet to solicit creative solutions from scientists and experts across the globe. A typical crowdsourcing contest for a challenging problem is often funded with millions of dollars in prize money and attracts many solo inventors and small teams.

Figure 1. The crowdsourcing platform for the Pentagon's project

A crowdsourcing project often has multiple attributes or dimensions. One format is for the firm to launch multiple small contests, each dealing with one attribute of the project. For example, in 2013, the Pentagon launched a contest through a web portal called Vehicleforge.mil for the design of an amphibious vehicle for the US Marines (see Figure 1). The first sub-contest, with a one-million-dollar prize, covered the mobility and drivetrain subsystems of the vehicle. About six months later came a sub-contest for the design of the chassis and other subsystems, again with a one-million-dollar prize. An alternative is to run a single contest that requires participants to submit comprehensive solutions covering all the attributes. In fact, right after the separate contests for the military vehicle described above, the Pentagon launched a contest with a two-million-dollar prize in 2014. In contrast to the separate contests held in 2013, this joint contest required contestants to submit a single solution for the entire vehicle.

While both contest formats are common on crowdsourcing platforms such as InnoCentive and Kaggle, the question is which format to choose: a joint contest or separate contests? Ming Hu at the Rotman School of Management, University of Toronto, and Lu Wang at the College of Business, Shanghai University of Finance and Economics, investigate this question in a new article published in Management Science, entitled "Joint vs. Separate Crowdsourcing Contests." They conduct a game-theoretic study to identify the conditions under which each contest format is optimal.

The agency or firm hopes to find the best solution among the submissions to the prize contest. Whichever contest format, joint or separate, achieves a higher expected "best performance" is therefore preferred. The authors model a participant's performance as a combination of sweat and luck: more precisely, performance is the sum of effort and a random factor. The random factor reflects the participant's random inspiration, the uncertainty of the design or experimentation environment, or the judges' undisclosed tastes. The authors find that the separate contest is better at striking it lucky, because the combination of the random draws behind the best solutions from the separate sub-contests (which can come from different participants) tends to be better than the random draw behind the best complete solution (which must come from a single participant). As the old saying goes, "Two heads are better than one." However, the participants' equilibrium effort level is expected to be higher in the joint contest than in the separate contest, because the equilibrium effort depends on the size of the prize and the odds of winning: by pooling the prizes and random factors together, the joint contest is better at motivating participants to exert effort.
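The luck-harvesting advantage of the separate format can be illustrated with a quick simulation. The sketch below is not the authors' model; it simply holds effort fixed, draws an independent standard-normal random factor for each participant on each attribute, and compares the expected best total performance under the two formats. The numbers of participants, attributes, and trials are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_attributes, n_trials = 10, 2, 100_000  # hypothetical values

# Random "luck" draws: one per participant per attribute; effort is held fixed at zero.
noise = rng.standard_normal((n_trials, n_participants, n_attributes))

# Joint contest: the firm keeps the best complete solution, so every
# attribute's luck must come from the same participant.
joint_best = noise.sum(axis=2).max(axis=1).mean()

# Separate sub-contests: the best solution on each attribute can come
# from a different participant, so the maximum is taken attribute by attribute.
separate_best = noise.max(axis=1).sum(axis=1).mean()

print(f"Expected best performance, joint:    {joint_best:.3f}")
print(f"Expected best performance, separate: {separate_best:.3f}")
```

Because the sum of per-attribute maxima is never smaller than the maximum of per-participant sums, the separate format always harvests at least as much luck; any advantage of the joint format must come from the higher equilibrium effort it induces.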

Therefore, if participants' performance depends mainly on their effort, the project is called "effort-based," and the joint contest tends to be optimal. If performance is dominated by randomness, the project is called "randomness-based," and the separate contest tends to be optimal. The comparison also depends on the number of participants: when the number of participants is large, the separate contest is better than the joint contest; otherwise, the joint contest is better.
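The earlier simulation can be extended to hint at why the number of participants matters. The sketch below again holds effort fixed, so it captures only the randomness channel (not the equilibrium-effort channel that favors the joint contest), and the participant counts are arbitrary; it shows that the luck gap between the two formats widens as the pool of participants grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n_attributes, n_trials = 2, 50_000  # hypothetical values

# With effort held fixed, measure how the separate format's luck advantage
# changes as the number of participants grows.
for n_participants in (2, 5, 20, 100):
    noise = rng.standard_normal((n_trials, n_participants, n_attributes))
    joint_best = noise.sum(axis=2).max(axis=1).mean()      # best complete solution
    separate_best = noise.max(axis=1).sum(axis=1).mean()   # best per-attribute solutions
    print(f"{n_participants:3d} participants: luck advantage of separate = "
          f"{separate_best - joint_best:.3f}")
```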

Figure 2. Four categories of projects

The study has clear managerial implications for firms choosing a contest format. Figure 2 categorizes projects into four groups by their level of randomness (high or low) and the number of participants (large or small). It would be better for the firm to hold a separate contest for a randomness-based project with a large pool of participants, such as ideation projects and brainstorming for general business plans. A joint contest would be better for an effort-based project with a small group of specialists, such as government-sponsored R&D and theoretical challenges. Depending on the situation, either contest format can be preferred in the other two categories: effort-based projects with a large number of participants, such as data analysis, predictive modeling, and practical innovation challenges, and randomness-based projects with a small number of participants, such as specialized art design and knowledge sharing.

The authors also find that if the firm can choose the prize amount, the optimal prize in the joint contest should be no less than that in the separate contest. The rationale is that the joint contest is more efficient at motivating participants to exert effort, so it can achieve a higher return from a larger investment.

Read the full article at https://doi.org/10.1287/mnsc.2020.3683.

Reference:

Hu, M. and Wang, L. (2021). Joint vs. Separate Crowdsourcing Contests. Management Science, 67(5), 2711–2728.

