In this blog post we set out our concerns over the downgrade of Journal of Marketing Management (JMM) in the ABS Academic Journal Guide 2015, with specific reference to the validity of the quantitative and qualitative methodologies followed.
The ABS Academic Journal Guide was published in February 2015. Since then, ABS has become the Chartered Association of Business Schools, so we refer to them as CABS throughout.
In the 2015 Guide, Journal of Marketing Management was downgraded from a 3 to a 2 rating. This was surprising on both a quantitative and a qualitative level: JMM had performed strongly in the recent REF exercise, and the normalised values given for JMM in the Guide did not seem to reflect our own monitoring of JMM’s citation data.
To begin with, we contacted CABS to clarify the metric methodology they had followed.
CABS Citation Analysis: Incomplete Citation Data
CABS was unable to provide the exact dataset used in their calculations for copyright reasons. However, historical datasets are available on the Scopus website, and since CABS told us the data was collected in late 2013, we have based our analysis on the Scopus dataset published in September 2013. JMM was put forward for Scopus during 2010, and the first year for which JMM has a SNIP or SJR in this dataset is 2011. According to the Scopus website:
“Established journal (newly indexed in Scopus): … The first values will be published in the year after launch. So, if a journal is indexed in 2013, it gets 2014 values in 2014. The complete accurate citation impact of the journal will be visible after four years of publication history in Scopus” [our emphasis]. http://www.journalmetrics.com/faq.php
Scopus makes their SNIP and SJR calculations based on a 3 year citation window: the number of citations received in a year (Y) by scholarly papers published in the three previous years (Y-1, Y-2, Y-3), divided by the number of scholarly papers published in those same years (Y-1, Y-2, Y-3). The SNIP and SJR values use this citation window and then adjust for subject field (SNIP) and source prestige (SJR). Obviously, given the time taken to get through peer review to publication, particularly in the social sciences, older articles are likely to be cited more than recent articles within the citation window.
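As a concrete illustration, the raw three-year ratio underlying SNIP and SJR (before Scopus’s field and prestige adjustments, which we do not attempt to reproduce) can be sketched as follows. All counts here are invented for illustration:

```python
# Sketch of the raw 3-year citation-window ratio underlying SNIP and SJR,
# before Scopus's field (SNIP) and prestige (SJR) adjustments.
# All counts below are invented for illustration.

def raw_impact(citations, papers, year):
    """Citations received in `year` by papers from years Y-1..Y-3,
    divided by the number of papers published in those same years."""
    window = [year - 1, year - 2, year - 3]
    cites = sum(citations[year].get(y, 0) for y in window)
    pubs = sum(papers.get(y, 0) for y in window)
    return cites / pubs if pubs else None

# Citations received in 2012, broken down by the publication year of the
# cited article. The journal has no Scopus coverage before 2010, so the
# 2009 slice of the window is empty -- the incomplete-history problem
# described above for JMM.
citations = {2012: {2011: 40, 2010: 55, 2009: 0}}
papers = {2011: 120, 2010: 110}

print(raw_impact(citations, papers, 2012))
```

Note how the missing pre-2010 coverage removes both citations and papers from the window, so the ratio reflects only part of the journal’s history.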
We therefore ran the calculations as described in the Guide, based on the September 2013 dataset, using the values for 2008, 2009, 2010, 2011 and 2012 (as specified by CABS). We attempted to replicate the statistical analysis in order to confirm that the “raw” SNIP and SJR values from the Scopus dataset had been used with no adjustments. The average difference between our standardised SJR and the CABS standardised SJR across the Marketing list was -0.000016, so we are fairly confident that the raw values were used, and that in the dataset used by CABS, JMM only had values for 2011 and 2012.
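Our replication check can be sketched roughly as follows. We assume here that “standardised” means a z-score across the subject list (the Guide’s exact normalisation may differ), and every journal name, SJR value and published figure below is an invented stand-in:

```python
# Rough sketch of the replication check: average each journal's available
# yearly SJR values, standardise across the subject list, and compare to
# the published figures. We assume z-score standardisation here; the
# Guide's exact normalisation may differ. All values are invented.
from statistics import mean, pstdev

sjr = {                        # journal -> yearly SJR values
    "Journal A": [0.9, 1.1, 1.0, 1.2, 1.1],
    "Journal B": [0.5, 0.6],   # only two years available, as for JMM
    "Journal C": [2.0, 2.1, 1.9, 2.2, 2.0],
}

averages = {j: mean(v) for j, v in sjr.items()}
mu = mean(averages.values())
sigma = pstdev(averages.values())
standardised = {j: (a - mu) / sigma for j, a in averages.items()}

# Stand-in for the published figures (here: the same values, rounded).
# A near-zero average difference suggests the raw SJR values were used
# with no further adjustment.
published = {j: round(s, 3) for j, s in standardised.items()}
diffs = [standardised[j] - published[j] for j in sjr]
print(mean(diffs))
```

The key point is that a journal with only two years of values, like “Journal B” here, contributes an average built on a truncated history, yet is standardised against journals with full histories.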
The 2011 Scopus JMM values were based on only 2 years of publication history (citations in 2011 to publications in 2010), not the full 4 years required for an accurate picture of the journal’s citation impact. Likewise, the 2012 Scopus values for JMM were based on only 3 years of publication history (citations in 2012 to publications in 2010 and 2011), again short of the full 4 years required.
Therefore, as it seems CABS averaged the 2011 and 2012 values to calculate the standardised values, we believe that the figures they have calculated and published for JMM are based on incomplete and therefore inaccurate citation data. This was the basis for our concerns, which we communicated to the CABS Academic Journal Guide Management Board via the Guide Co-Editors.
The initial response was that we had “misunderstood what quantitative data was used to inform the Guide”. However, after we contacted CABS again and they had checked with the methodologists, their response to our quantitative findings was that “The subject specialists on the Scientific Committee were provided with the best data available at the time. Calculations can only be based on the data available and, as you know, for many journals there is no or limited data available. Incomplete data is not by definition inaccurate, it is just imperfect.” [our emphasis]
However, we would contend that, given Scopus clearly states that 4 years of publication data are required for the “complete accurate citation impact” of a journal, it would be sensible to use only values based on complete publication data when making these kinds of calculations.
It is therefore understandable that we were concerned that this data, or a ranking based upon it, might have been used to inform the qualitative analysis, or indeed the overall ranking, without due consideration of the fact that the metrics did not give an accurate reflection of the citation impact of JMM. So, we also took a look at the qualitative evidence …
We also asked CABS: “What specific qualitative procedures were undertaken by the subject experts consulted?”
Their response was “This is clearly explained in the methodology section of the Guide.” However, we would argue that no detailed, standardised qualitative methodology is provided there. The Guide says that “The subject experts were encouraged to consult with learned societies, professional associations and/or leading academics in their area” (p.8). It is unclear exactly what form this consultation took: how many leading academics were contacted by subject experts; on what criteria those academics were selected; what information was given during the consultation phase (e.g. were they given a ranking based on the metrics, or just a list of all the journals in their subject category?); and what specific instructions were issued to those consulted.
You can read a posting by Emeritus Professor Michael J Baker here, in which he discusses the steps he took in performing his own qualitative analysis. This procedure saw a number of senior UK and international Deans, 3 past Presidents of EMAC (the European Marketing Academy), 2 past Presidents of ANZMAC (the Australian and New Zealand Marketing Academy) and 2 former Editors of the Journal of Marketing, the pre-eminent publication in the field, offer supporting statements (17 in total) regarding the quality of JMM.
Additionally, other internationally recognised marketing academics were in correspondence with the JMM Editors regarding their support for JMM, and the comments from 6 of these are included in Mark Tadajewski’s commentary published in Volume 16, Issue 1-2: ‘Academic Labour, Journal Ranking Lists and the Politics of Knowledge Production in Marketing’.
JMM – Evidence from Peer Review
Recent research has also indicated that metrics do not tell the whole story: see Wilsdon, J., et al. (2015), The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management, DOI: 10.13140/RG.2.1.4929.1363, on the importance of peer review in any kind of assessment.
In the 2014 UK Research Excellence Framework (REF), JMM was the most submitted journal in the whole Business and Management Unit of Assessment (equal with British Journal of Management, both with 200 outputs submitted). It should be noted that the REF panel read every piece of work to assess it individually, rather than using metrics or journal ranking lists:
“Just over 1,100 marketing outputs were submitted to the sub-panel … 70 per cent of these outputs being recognised as either world-leading or internationally excellent in terms of originality, significance and rigour … marketing outputs assessed as world-leading were published in over 35 different marketing and non-marketing journals. Three journals, namely the European Journal of Marketing, the Journal of Marketing Management, and Industrial Marketing Management, account for almost 40 per cent of the total submissions.” (Available from Research Excellence Framework 2014: Overview report by Main Panel C and Sub-panels 16 to 26, p.63. Figures in [ ] from our own analysis of the REF data).
Furthermore, JMM is referenced as a key output from the research in 10 REF2014 Impact Case Studies, highlighting the relevance of the research we publish. You can read this blog post for more on JMM and the 2014 REF.
Other criteria used by CABS
In terms of the other criteria assessed by CABS, namely quality standards, rankings in other lists, and length of time the journal has been established, we believe that JMM bears comparison with the 3 ranking definition.
“3 rated journals publish original and well executed research papers and are highly regarded. These journals typically have good submission rates and are very selective in what they publish. Papers are heavily refereed. Highly regarded journals generally have good to excellent journal metrics relative to others in their field, although at present not all journals in this category carry a citation impact factor.” (ABS Academic Journal Guide 2015, p.7, adapted from Harvey et al. 2010)
JMM – Good submission rates and very selective: 2016 is Volume 32 of the Journal, and there are 18 issues each year, published as 9 double issues. Of these, 4 double issues are made up of standard papers. JMM receives on average around 440 submissions per annum, of which approximately two-thirds are standard (non-special-issue) submissions. Of these standard submissions, for all papers submitted since 1 January 2011 which had received a final decision by 17 May 2016, the acceptance rate is 9.2%, the desk-reject rate is 68% and the overall reject rate is 90.8%.
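For clarity, those rates relate to decided papers as follows. The decision counts in this sketch are hypothetical, chosen only to be consistent with the percentages quoted; the real underlying counts are not published here:

```python
# Hypothetical decision counts, chosen only to be consistent with the
# percentages quoted above; the actual underlying counts are not given.
decided = 1000                 # standard papers with a final decision
accepted = 92
desk_rejected = 680
rejected_after_review = decided - accepted - desk_rejected

acceptance_rate = accepted / decided
desk_reject_rate = desk_rejected / decided
overall_reject_rate = (desk_rejected + rejected_after_review) / decided

# Acceptance and overall rejection are complementary: 9.2% + 90.8% = 100%.
print(acceptance_rate, desk_reject_rate, overall_reject_rate)
```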
JMM – Papers are heavily refereed: All papers are read by the Editor and assigned to an Associate Editor, an expert in the relevant subfield, who reviews the paper again and, if they consider it suitable, assigns it to 3 referees for double-blind review. Papers generally go through multiple rounds of revision and review.
JMM also publishes 5 ‘Special’ double issues per annum, based around specific themes. These Special Issues are sometimes open calls for papers; at other times they contain papers which have already gone through a competitive review process and been selected for submission to JMM, e.g. following an event like the Academy of Marketing Annual Conference. All Special Issue papers submitted to JMM go through the same rigorous review process as standard papers.
JMM – Rankings in other lists: In terms of other listings used by CABS in their formation of the Guide, journals ranked 3 by CABS, such as European Journal of Marketing (EJM) or Marketing Theory (MT), do not appear on the Financial Times 45 (2012) or the Dallas List (2016). On the most recent CNRS list (May 2016), JMM, EJM and MT are all rated 3. On the Australian Deans list (2013), JMM and MT are A, and EJM is A*. Finally, on the latest VHB (2015), JMM, EJM and MT are all rated C.
Other metric issues
There are other issues with the use of metric data. Any citation analysis is, by necessity, a backwards-looking exercise. For the 2015 iteration of the list, the actual articles that citations were counted for were published in the period 2003-2011 (CABS, 2015, p.10). This lack of recency in the metric data is a worry, particularly as this list is intended to be valid until 2018, when the next list is due to be released.
Furthermore, for a number of years JMM has been steadily increasing the number of issues it publishes, due to the increasing amount of high quality submissions received. In 2010 the Journal moved from 10 to 14 issues per year, in 2013 it increased from 14 to 16 issues, and in 2014 it increased further to 18 issues per year. It takes time for the increased amount of content published to generate citations, but the volume of content has an immediate effect on the citation calculation: the denominator of the citation-window ratio grows as soon as output expands, while the citations in the numerator lag behind. As JMM has now stabilised at 18 issues, we would expect the metrics to continue to increase over the next couple of years as citations ‘catch up’ while the number of citable documents remains relatively stable.
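This catch-up dynamic can be shown with a toy model. All article counts and age-based citation rates below are invented, and the accrual pattern is a deliberate simplification; the point is only the shape of the effect, not the numbers:

```python
# Toy model of the 'catch-up' effect. Article counts and the age-based
# citation accrual rates are invented; real citation accrual is messier.

articles = {2009: 100, 2010: 140, 2011: 140, 2012: 140,
            2013: 160, 2014: 180, 2015: 180, 2016: 180}

# Invented citations per article, by age of the article in years:
# older articles in the window are cited more than recent ones.
rate = {1: 0.2, 2: 0.5, 3: 0.8}

def window_ratio(year):
    """Citations in `year` to papers from Y-1..Y-3, over papers in Y-1..Y-3."""
    window = [year - 1, year - 2, year - 3]
    cites = sum(articles[y] * rate[year - y] for y in window)
    pubs = sum(articles[y] for y in window)
    return cites / pubs

for year in range(2012, 2018):
    print(year, round(window_ratio(year), 3))
```

In this toy run the ratio dips in the years immediately after each expansion, because the window fills with young, lightly cited articles, and it returns to its steady-state value once the article counts stabilise, mirroring the recovery we expect for JMM’s metrics.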
The June 2016 SCOPUS data release (number of citations in 2015 to content published 2012, 2013, 2014) would seem to bear this out, with JMM rising 24 places up the Scimago marketing journal rankings by SJR between 2015 and 2016. JMM was ranked in Q1 for both marketing, and strategy and management, in this ranking in 2016.
Update 19 July 2016: JMM’s h5-index in the 2016 version of Google Scholar Metrics was 34, meaning JMM was ranked 13th in the 2016 Google Scholar Top Publications in Marketing.
Additionally, the UK has a strong tradition of research in Marketing which does not follow the positivistic approach favoured by many of the major American and European journals that are often very highly ranked on such lists. JMM is an important outlet for critical and interpretive work in the field. Read Mark Tadajewski’s commentary in JMM, ‘Academic Labour, Journal Ranking Lists and the Politics of Knowledge Production in Marketing’, for a more in-depth discussion of the qualitative and paradigmatic concerns with the process followed.
Where do we go from here?
We have made a submission to CABS regarding JMM for consideration when they are compiling the next iteration of the Guide due out in 2018.
JMM’s ambition is to publish eclectic, thought-provoking research in marketing, and to get the widest recognition for that research, so it can translate into impact. We help our authors share their work through our blog and social media channels, and will assist our authors in any way possible to promote their research. We hope you share these ambitions, and we look forward to welcoming your contributions and your continued collaboration.
This post is licensed under a Creative Commons Attribution 4.0 International License, unless otherwise stated. Third party materials remain the copyright of the original rightsholder.