Team Size and “Disruptive” Science

 


THE SHORT READ:

The Question

 

How does the size of your research team impact your odds of creating breakthrough science and technology?

 

The Answer

 

The science produced by solo researchers and small teams is more likely to create new knowledge and inventions that disrupt existing scientific ideas.


THE LONG READ:

 

How the Study Was Designed and What the Researchers Learned

To explore the link between scientific products and team size, a group of researchers analyzed citation patterns in three large data sets:

  • published articles in the Web of Science database,
  • patents in the U.S. Patent and Trademark Office database, and
  • software projects in the GitHub database.

The investigators defined team size as the number of authors who contributed to an article, patent, or software version.

Using methods from sociology and statistics, the authors of the study went on to answer a number of interesting questions.

 

How does team size relate to creating science that “develops” or “disrupts”?

The main focus of this study was to identify links between a team’s size and the influence of the scientific products (paper, patent, or code) it produces.  Products were ranked on a continuum from “develop-to-disrupt”:

  • develop:
    • the scientific product solves or refines known challenges, “developing” existing scientific ideas
    • development is signaled when later papers that cite the work also cite the works it built on, i.e., it stays on equal footing with prior work
  • disrupt:
    • the scientific product opens new scientific pathways and challenges, “disrupting” existing scientific ideas
    • disruption is signaled when later papers cite the work without citing the works it built on, i.e., it eclipses prior work
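The continuum above rests on a citation-based disruption index (the study builds on the measure of Funk and Owen-Smith).  The core idea can be sketched in a few lines of Python; the function name and the toy data below are illustrative, not the paper's actual code or data:

```python
def disruption_score(focal_refs, later_papers):
    """CD-style disruption score for one focal work.

    focal_refs:   set of IDs the focal work cites.
    later_papers: reference sets of papers published after the focal
                  work; "focal" marks a citation to the focal work itself.
    """
    n_i = n_j = n_k = 0
    for refs in later_papers:
        cites_focal = "focal" in refs
        cites_prior = bool(refs & focal_refs)
        if cites_focal and not cites_prior:
            n_i += 1   # builds on the focal work alone (disruptive signal)
        elif cites_focal and cites_prior:
            n_j += 1   # cites the focal work alongside its ancestors
        elif cites_prior:
            n_k += 1   # bypasses the focal work, citing its ancestors only
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# Toy example: two later papers cite only the focal work, one cites it
# together with its reference "a", one bypasses it to cite "b" directly.
score = disruption_score({"a", "b"},
                         [{"focal"}, {"focal"}, {"focal", "a"}, {"b"}])
```

A score near +1 means later work cites the product while ignoring its ancestors (disruptive); a score near -1 means later work always cites it alongside the prior literature it built on (developmental).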

The investigators validated their develop-to-disrupt scale with three checks: commonly recognized disruptive papers (like those that won their authors a Nobel Prize) scored toward the disruptive end of the spectrum; papers that interviewed scholars named as disruptive examples in their field also scored toward the disruptive end; and “review” papers, which summarize existing research rather than present new science, scored toward the develop end.  The scale passed all three checks.

Next, they looked for trends relating team size to disruptiveness in their citation data.  Three key findings stood out:

  • The chance that a published work (journal article, patent, or open-source code) is disruptive decreases with each additional team member (defined as a listed author); in other words, smaller teams outperform larger ones on rates of disruptive output.
  • This trend holds however you slice the data: it is visible across product types, over the last 60 years, across many different scientific journals, and across sub-fields and topics.
  • In general, large teams are more likely to be high impact (cited more often) and to make developmental contributions, while small teams and solo researchers are more likely to make disruptive contributions.

The only parts of the data that did not follow these trends as clearly were engineering and computer science.  The study authors point out that most research in these two fields is published in conference proceedings, which are not included in their primary data set, the Web of Science database.

 

What search strategies did teams of different sizes use to find helpful prior research?

The investigators also wanted to know whether small and large teams search for prior research to build on in different ways, which could affect their odds of refining or upending existing scientific ideas.  To study this, the authors defined three measures of “search behavior” that could be extracted from the citation data each team included with its scientific product (e.g., a reference list or source version history):

  • Depth: average age of reference works (older = deeper)
  • Popularity: average citation rate of reference works (more cited = more popular)
  • Novelty: citing reference works from unusual fields (more atypical = more novel)

Some trends emerged across team sizes:

  • soloists and small teams more often cited older, less popular works;
  • large teams emphasized newer, more popular work, and this emphasis grew with team size; and
  • as team size grew, the works teams cited included fewer interdisciplinary references.
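As an illustration only, the three search-behavior measures could be computed from a reference list as follows.  The data schema (year, citation count, and field per reference) and the novelty proxy here are hypothetical simplifications, not the paper's actual method:

```python
from statistics import mean

def search_profile(references, current_year):
    """Summarize a team's search behavior from its reference list.

    `references` uses a hypothetical minimal schema: each entry has
    "year", "citations", and "field" keys.
    """
    fields = [r["field"] for r in references]
    own_field = max(set(fields), key=fields.count)  # dominant field
    return {
        # depth: average age of the cited works (older = deeper)
        "depth": mean(current_year - r["year"] for r in references),
        # popularity: average citation count of the cited works
        "popularity": mean(r["citations"] for r in references),
        # novelty, as a crude proxy: share of references outside the
        # dominant field (the study uses a more refined atypicality measure)
        "novelty": mean(1 if f != own_field else 0 for f in fields),
    }

refs = [{"year": 2010, "citations": 100, "field": "bio"},
        {"year": 2000, "citations": 10, "field": "bio"},
        {"year": 1990, "citations": 4, "field": "math"}]
profile = search_profile(refs, 2020)
```

In this toy reference list, the depth is 20 years, the average popularity is 38 citations, and one of three references lies outside the dominant field.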

 

How does funding affect a team’s scientific outcome?

The study authors pulled out a subset of articles whose acknowledgements listed support from government funding agencies.  Reviewing this subset, they found that this type of funding may undermine the disruption potential of small teams: government-funded small-team papers ranked toward the develop end of the scale (refining existing ideas rather than producing new ones), and with government funding the differences between small and large teams disappeared.

 

Do all teams’ efforts earn the same kind of recognition?

To answer this question the investigators used an existing tool called the “Sleeping Beauty Index”: it measures the delay between first publication and a later spike in citation rate, i.e., how long a paper takes to go from being overlooked to suddenly being a smash hit.

In applying this to small and large teams, the study found that large teams get more immediate citations, while small teams are more likely to produce Sleeping Beauty papers, which only get a lot of attention after a longer delay.
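One common formulation of the index (due to Ke and colleagues, which may differ in detail from the exact version the study applied) compares a paper's actual citation history against a straight line drawn from its publication-year citations to its citation peak; the longer and deeper the citation curve sits below that line, the higher the score.  A minimal Python sketch:

```python
def beauty_coefficient(citations_per_year):
    """Sleeping Beauty coefficient, following Ke et al.'s formulation
    (the study may use a variant).  citations_per_year[t] holds the
    citations received t years after publication."""
    c = citations_per_year
    t_m = max(range(len(c)), key=lambda t: c[t])  # year of citation peak
    if t_m == 0:
        return 0.0  # peaked immediately: no sleeping period
    c0, cm = c[0], c[t_m]
    # Sum, up to the peak, the gap between the straight line from
    # (0, c0) to (t_m, cm) and the actual curve, scaled by yearly citations.
    return sum(((cm - c0) / t_m * t + c0 - c[t]) / max(1, c[t])
               for t in range(t_m + 1))

# A paper whose citations rise steadily scores 0; one ignored for years
# before a sudden spike scores high.
steady = beauty_coefficient([0, 5, 10])
sleeper = beauty_coefficient([0, 0, 0, 0, 12])
```

The "sleeper" citation history scores far higher than the "steady" one, matching the pattern the study reports for small-team papers.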

 

Is it really team size, or something else, affecting the outcome?

On a final note, the authors did some preliminary investigation into whether qualitative differences between the scientists who join small versus large teams might be driving the team-size effect.  For example, they checked whether the trends still held when the data was split into theoretical versus experimental papers.

The average number of figures in a published article was used to distinguish the two types, with the study authors noting that theory papers tend to have fewer figures and experimental papers more.  The effects of team size remained, independent of contribution type (theory vs. experiment).

The effect even holds within a very small slice of the data: among review papers alone, those produced by smaller teams are more likely to rank toward the disruptive end of the scale than those with more authors.

 

What the Study Teaches Us

The authors of this study suggest some factors, based on other research on team dynamics and team size, that might shape the science of small versus large teams.

Large teams may have more reasons (tougher funding controls, more pressure to succeed, and the tendency of large groups to generate fewer ideas and narrow their perspectives) and may be better structured to produce science that builds on existing scientific ideas.

On the other hand, small teams may be more open to generating new scientific ideas because they have more to gain and less to lose.  But this distinction is lost for small teams that rely on government funding, which may face the same pressures around failure and reputation, and the same limits on perspective, that larger teams traditionally do.

This study is the first of its kind to observe large-scale patterns in the differing outputs of small and large groups in a scientific context.  The authors point out that it is hard to determine whether the variations are due to differences in the individuals who choose to participate in small versus large team work, or largely to team size itself.

However, the authors state that while their findings offer valuable insight, they should not be used to favor one team size over another across the broader scientific community.


PUT IT IN ACTION:

 

Three Things to Try

 

(1)  Work with as small a team as you can on your breakthrough discovery goals.

(2)  Review older research literature for good ideas you can build on.

(3)  Read research literature in fields you might have ignored as irrelevant.

 


THE FINAL WORD:

 

Best Quote from the Study Authors

“Both small and large teams are essential to a flourishing ecology of science and technology…These results suggest the need for government, industry and non-profit funders of science and technology to investigate the critical role that small teams appear to have in expanding the frontiers of knowledge, even as large teams rapidly develop them.” (pages 381-382)

 


Full Citation

Wu, Lingfei, Dashun Wang, and James A. Evans.  “Large teams develop and small teams disrupt science and technology.”  Nature, volume 566, 21 February 2019, pages 378-382.  (5 pages + 17 pages of supplementary material)

Categories:  Invention, Scientific Discovery

Tags:  insights from research