Meta-analytic choices and judgment calls: Implications for theory building and testing, obtained effect sizes, and scholarly impact

Abstract

The authors content-analyzed 196 meta-analyses comprising 5,581 effect-size estimates published in Academy of Management Journal, Journal of Applied Psychology, Journal of Management, Personnel Psychology, and Strategic Management Journal from January 1982 through August 2009 to assess the presumed effects of each of 21 methodological choices and judgment calls on substantive conclusions. Results indicate that, overall, the methodological choices available and judgment calls involved in conducting a meta-analysis have little impact on the magnitude of the resulting meta-analytically derived effect sizes. Thus, the present study, based on actual meta-analyses, casts doubt on previous warnings, primarily based on selective case studies, that judgment calls have an important impact on substantive conclusions. The authors also tested the fit of a multivariate model that includes relationships among theory-building and theory-testing goals, obtained effect sizes, year of publication of the meta-analysis, and scholarly impact (i.e., citations per year). Results indicate that the more a meta-analysis attempts to test an existing theory, the larger the number of citations, whereas the more a meta-analysis attempts to build new theory, the smaller the number of citations. Also, in support of scientific particularism, as opposed to scientific universalism, the magnitude of the derived effects is not related to the extent to which a meta-analysis is cited. Taken together, the results provide a comprehensive, data-based understanding of how meta-analytic reviews are conducted and the implications of these practices for theory building and testing, obtained effect sizes, and scholarly impact. © The Author(s) 2011.

Publication Title

Journal of Management