The finished logic of our articles and books is a façade, put on after the fact.
Experienced scholars describe writing as joining conversations within a particular field of interest (Patriotta 2017). Drawing on his experience of editing the Journal of Management Studies, Academy of Management Review, and Organization Studies, Gerardo Patriotta suggests that a text-building strategy has to start with the identification of a ‘good’ conversation that provides the baseline for a contribution. Not only must a ‘good’ conversation be found; the author must also convince others that the conversation is worthy of attention. Two recent papers show that, in social science, reviewers mostly evaluate how successful authors are in finding a ‘good’ conversation. If authors fail, reviewers suggest reworking the paper, which in many cases means changing its front and back ends.
What do reviewers usually focus on? Do reviewers prioritize the commitment to a research question connected to a theory and, in case of a misfit between theory and data, ask authors to adjust the data analysis? Or do reviewers operate under the assumption that research is guided by data availability, so that when a disconnect arises, it is the research framework that should be changed?
Misha Teplitskiy (2016) proposes an elegant research design that identifies which type of review prevails in sociology. His main idea is to compare already published research articles with the initial versions presented at an Annual Meeting of the American Sociological Association (ASA). No submission may be presented at the ASA’s Annual Meeting if the paper has previously been published or accepted for publication. Once a sample of these papers is linked to their subsequently published versions, one can analyze how significantly the original papers were changed during the peer-review process.
The central question was what changed: the data analyses or the theoretical frameworks? Thirty quantitative article pairs were analyzed both qualitatively and quantitatively. The quantitative analysis was based on measuring similarities between the original and published versions, along with changes to the set of references and the set of variables used in the data analysis. The main conclusion was that the data analysis changed only slightly between versions, while the theoretical framework was substantially expanded and reworked.
Teplitskiy concluded that the ‘published theoretical frame thus appears to be a result of negotiation between the authors, reviewers, and editors, rather than a finely specified theoretical question that motivated the study in the first place’ (Teplitskiy, 2016, p. 284). These findings contradict the conventional story of a linear research and writing strategy, in which an article is driven by a research question that generates data collection and data analysis. In reality, we all commit SHARKing (Secretly Hypothesizing After the Results Are Known), pretending that our theoretical framework was defined a priori, before we knew the results from the data (Hollenbeck & Wright, 2016).
David Strang and Kyle Siler (2015) came to the same conclusion after studying articles submitted to Administrative Science Quarterly between 2005 and 2009. Strang and Siler not only compared the originally submitted manuscripts with the published versions, but also conducted a survey asking the authors about the comments they received and the revisions they made. The respondents reported that the theory section was the most criticized and most heavily modified component of the paper, followed by the discussion, while the methods and results were only moderately reworked. For reviewers, the most important thing was to establish a connection with the right audience through changes to the theoretical framework. This task does not require changing the technical core: only five authors reported collecting additional data or measures.
The pattern of changes across submitted and published papers was consistent with the survey respondents’ answers. The results sections remained virtually unchanged, while the theory sections were heavily revised, sometimes to the point of reconceptualizing the motivation and raising a fundamentally different theoretical issue. Several observations led Strang and Siler to conclude that most papers faced an interpretive challenge that questioned the study’s theoretical framework.
First, satisfying reviewers’ comments required expanding the manuscript across all sections, especially toward the end of the paper. As a result, the discussion sections grew from an average of 1,648 words in the original submissions to 2,258 words in the published papers. Second, only 40 percent of the original hypotheses remained intact without substantive changes. Third, the bibliographies were revised considerably. References were not only added but also dropped: approximately 30 percent of the citations that appeared in the original submissions did not appear in the published paper.
What are the practical implications for early career researchers struggling with their first submission? It seems they could benefit from treating writing as a form of engagement in conversation with other scholars. Different journals are usually organized around different conversations, and this goes for higher education as much as for any other field or discipline. In this sense, the step of choosing a scholarly journal is crucial because it also represents the choice of an audience. A mistake at this step can result in an outright rejection even if the manuscript itself is of good quality: reviewers may not recognize the paper as a contribution because the journal participates in other conversations. Still, a conversation very often stretches across journals, and the author needs to put the pieces together. Either way, it is up to the author to do the convincing.