Müller, R., & Klein, G. (2019). Qualitative Research Submissions to Project Management Journal®. Project Management Journal, 50(1), 3–5. https://doi.org/10.1177/8756972818817625
This is the second in a series of editorials designed to provide authors with guidance for future submissions to Project Management Journal® (PMJ). We started the series in the October issue of PMJ (volume 49, issue 5) by identifying the lack of a theoretical contribution as one of the main reasons for rejection. We also provided a few recommendations to overcome this, such as combining interesting and contemporary subjects with established theoretical perspectives in order to gradually improve or expand existing theory. Nevertheless, we also welcome submissions that develop new theory, even though such submissions are quite rare compared with those that apply existing theories or theoretical frameworks. The aim of this editorial remains the same: to make the requirements for promising submissions to PMJ more transparent.
Generally speaking, the nature of qualitative inquiry, with its wide variety of ontological, epistemological, and methodological stances, calls for an equally wide variety of ways of writing about these studies. However, a manuscript should still satisfy some basic requirements for academic publishing and allow novice as well as experienced readers to follow the process of the study and the author's line of thought. We recommend consulting the Journal Article Reporting Standards for Qualitative Research published by the American Psychological Association (Levitt et al., 2018) before writing and submitting qualitative and mixed-methods work. In the following sections, we first provide a few general recommendations for all types of submissions to PMJ and then address the specific expectations for qualitative submissions.
All research starts with a research question (Makadok, Burton, & Barney, 2018). We refer here to the author guidelines on the PMJ website, which clearly state that submissions must have a research question and address a phenomenon whose relevance extends beyond merely "being interesting." Unfortunately, we continue to receive numerous submissions that have neither a research question (i.e., the what) nor a clearly identifiable justification (i.e., the why) for the study.
Equally critical is outlining the study's contribution to a specific academic discourse; doing so positions the study relative to others in the subject area, strengthens its justification, and clarifies the novelty of its findings. To accomplish this, an article should critically engage with both the current and the classic literature of a specific discourse, embedding the work in a focused and well-researched understanding of the phenomenon. Regardless of the chosen methodological approach, the findings are expected to be set in relation to those of existing publications in order to develop a verbal description of the observed regularity, leading to a theoretical contribution that expands existing knowledge on the subject.
Qualitative studies are a premier choice for answering research questions that start with how, such as questions about behavior or processes. Unique to these studies is the need to convince the reader through a coherent and exhaustive "story" that respects the worldview of informants, provides evidence for its claims, and contributes to theory development (Pratt, 2009). Lacking the "significance levels" of quantitative studies, qualitative studies aim for, and require, credible results. Credibility stems from the plausibility of the research and its design, as well as from rigor in execution. Hence, a qualitative paper must outline how the study was designed, that is, the coherence between the ontological and epistemological assumptions about reality on the one hand and the ways of data collection, analysis, and interpretation of results on the other. All these elements must fit together. Rynes and Gephart (2004) provide examples of frequently encountered mistakes, such as claiming a positivist stance while conducting a Grounded Theory study.
To avoid these inconsistencies, we recommend opening the methodology section of a submission with a description of the design process or framework that was followed and that led to the particular methodology. Many established design processes suggest starting with the underlying philosophical assumptions, which indicate the nature and generalizability of the results. For example, positivist studies aim for general laws, post-positivist studies aim for trends, and realist studies aim for a possible, but not necessarily the only, explanation of a phenomenon. Phenomenalism studies aim for an understanding of the subjective lifeworlds of individuals, and postmodernist studies consider contradictions and random phenomena. Depending on the underlying stance, the interpretation of the results from the data analysis will differ; therefore, study results can only be interpreted when the underlying philosophical stance is known and outlined in the paper (Biedenbach & Müller, 2011).
Research design frameworks come in varying levels of sophistication. Relatively simple frameworks, such as the "onion" framework by Saunders, Lewis, and Thornhill (2009), provide a robust, albeit normative, process of six decisions for designing a study. These are, in order: the underlying philosophy, approach, strategy, methodological choices, time horizons, and data collection and analysis techniques. More theory development–oriented frameworks address the wide variety of methodological choices at a higher level of abstraction during the design phase. Makadok et al. (2018) offer a taxonomy of design choices. Starting with the question of how to theorize, they define different modes of theorizing, which encompass the various philosophical stances but are named by their practical execution (such as process- versus variance-based modes); they continue with the who and where questions by defining the levels of analysis and the theoretical perspectives, respectively. They then address possible causal mechanisms (the why) and the constructs or variables to be investigated (the what), and finish the theorizing process by defining the boundary conditions (the when).
There are many more such frameworks than the two named here, and the choice depends on the nature of the study and the researcher's preferences. The main point we want to highlight is the need for an understandable and traceable description of the design process, supported by arguments for the design choices made, so that readers can follow the researcher's line of thought and come to trust the credibility of the study results.
An additional factor that contributes to credibility is underpinning analysis results with quotes or other evidence that supports the claims being made. This should be done whenever possible, but not overemphasized: short and concise quotations are more helpful than long and ambiguous elaborations on a theme. Tables are very helpful summaries that let readers and reviewers keep an overview of the "big picture" throughout the full discussion. If needed, the main text may contain short excerpts of the quotes, while longer quotations are collected in an appendix for the interested reader. Problems with this approach arise when two-level reflexive methods are used, such as that of Alvesson and Kärreman (2007), which builds on a reflection on the empirical evidence and a subsequent reflection (called reflexion) on that first-level reflection. In these cases, it is unlikely that direct quotes can be used to support the findings. Here, alternative chains of evidence should be provided; if this is not possible, the reader should be told why empirical evidence was not provided.
The step from analysis results to theory building requires translating the findings into an elaboration of the what, the how, and the why of the phenomenon investigated; in other words, a theory. Here again, transparency in the process supports the credibility of the results. This includes a clear statement of whether an existing theory is tested, expanded, or elaborated, or a new theory is developed (Fisher & Aguinis, 2017; Pratt, 2009). Not every study requires a Kuhnian paradigm shift (Kuhn, 1996). Rather, some studies test boundary conditions, which help define the protective belt around the kernel of repeatedly tested variables in a Lakatosian sense of research programs (Lakatos & Musgrave, 1970). Hence, a discussion of the underlying theory's particular life cycle stage (and the idiosyncratic contribution of the present study to this stage) contributes to the overall understanding and positioning of the study's contribution to theory (Müller & Shao, 2013).
In reviewing submissions to PMJ, we notice a dominance of traditional methodologies, such as case studies that use long-established analysis techniques following, for example, Glaser and Strauss (1967); Miles, Huberman, and Saldaña (2014); and Yin (2009). Traditional methodologies are typically well developed and rigorous, and thereby provide for systematic and comprehensive analysis. Choosing them often lowers the risk of using ill-defined procedures or unclear processes (Rynes & Gephart, 2004). Although such studies are needed and valuable, they may not lead to new insights or may not support the particular perspective taken in a study; in those cases, a contemporary method may be more appropriate. Contemporary methodologies, which are less proven over time and/or are developed from a more interpretive worldview, address some of the underlying assumptions of the more traditional methodologies differently. An example is Alvesson and Kärreman's (2007) "Mystery Construction" method, which deliberately aims for subjectivity, uses several theoretical frameworks, and fosters self-criticism in the analysis of the data.
New methodologies are those developed for a particular study and not yet tested in other studies; thus, they may suffer from process elements or analysis techniques that are not yet fully developed. New methodologies carry the highest risk of failing to satisfy skeptical reviewers because they have not yet been subjected to reference studies. Such submissions are typically either a whole paper on a new methodology for research in management (for which PMJ might not be the best outlet) or a paper on a research study in which the majority of the space is devoted to describing the methodology, compromising the level of detail that can be reported on the study itself.
At the beginning of this editorial, we said that credibility in a submission comes from clarity and transparency in the description of the research design and methodology. We can now add that the journal also needs to develop further in terms of the methodologies and research perspectives it publishes, within the obvious limits on the number of issues and articles published per year. To that end, we look forward to a better balance of traditional and contemporary methodologies in PMJ submissions. We encourage authors to submit papers using traditional, contemporary, and new methodologies, but to be sensitive to their differences.
Full text and references are available at the link above.
See also: Levitt, H. M., et al. (2018). Journal Article Reporting Standards for Qualitative Primary, Qualitative Meta-Analytic, and Mixed Methods Research in Psychology: The APA Publications and Communications Board Task Force Report. American Psychologist, 73(1), 26–46. http://dx.doi.org/10.1037/amp0000151
Tuesday, January 29, 2019