
9. "What's Your Impact (Factor)?"

  • Writer: Bianca Blanch
  • May 1, 2020
  • 9 min read

Updated: Jul 27, 2020


Did you know it is not just important to publish your research? The quality of the journal you publish in is judged too. How do we know which journals are good quality? Should we focus more on the quality of our outputs, or the quantity? This week I explore the metrics researchers, employers and funders use to measure research quality.



Imagine you have started your first research study and you need to find one review paper to summarise the current knowledge in that field. There are thousands of journals to search, and no system to tell you the quality of the journals or their publications.


What do you do? How do you approach the literature search?


Should you look at methodological journals (those that only publish reviews) or content-area journals (which may publish a review of interest)? Do you search systematically, or haphazardly read any review you happen to come across until you find one you think is good?


As a researcher in the 1960s, this was the situation Eugene Garfield faced: thousands of articles and journals, with no way to tell how the journals related to each other or what their quality was. His solution was to create the Journal Citation Reports (JCR). This was the first iteration of what we now know as the journal impact factor.


What is a Journal Impact Factor (JIF)?


A JIF is the average number of citations received in a given year by the papers a journal published in the previous two years. If a journal's JIF is 4.325, each paper it published in the past two years was cited an average of (roughly) 4.3 times that year. Because the calculation window moves each year, JIFs are always changing!


For those who want the formula, the JIF for 2020 is calculated as:


Number of citations received in 2020 by items published in 2018-2019


Divided by


Number of citable items* published in 2018-2019


*A citable item is a research paper or review. Non-citable items include letters to the editor, editorials, news items and obituaries. Citations to editorials still count in the numerator, but editorials are not counted in the denominator.
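
If you prefer to see the arithmetic spelled out, here is a minimal sketch in Python. All of the counts below are invented for illustration:

```python
# A minimal sketch of the 2020 JIF calculation described above.
# All counts are invented for illustration.

# Citations received in 2020 by items the journal published in 2018-2019:
citations_2020 = {
    "research papers": 840,
    "reviews": 310,
    "editorials": 25,  # citations to editorials count in the numerator...
}

citable_items_2018_2019 = 265  # ...but editorials are excluded from this count

jif_2020 = sum(citations_2020.values()) / citable_items_2018_2019
print(f"JIF 2020 = {jif_2020:.3f}")  # 4.434
```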


What are Journal Impact Factors Used For?


Originally, Garfield created the Journal Citation Reports (JCR) to show the connections between journals, and to show which journals were actually useful to researchers. So if you were publishing a review on opioid and alcohol misuse, the JCR would tell you which journals were the most connected to your key topics, and which ones people read.


Today, JIFs still serve this purpose: journals with high impact factors are more likely to be read by scientists in that field, and publications in these journals are assumed to be high-quality research.


Unfortunately, JIFs are increasingly used by employers and funders to compare researchers. They were NEVER intended for this purpose! In fact, Garfield himself, along with multiple researchers, research bodies and funders, has widely criticised this practice.


Despite this disagreement, the importance of JIFs as a proxy for quality research is now ingrained in research. If your research is published in high-impact factor journals, you are more likely to get funding than a researcher publishing in low-impact factor journals. Academics often plan a study question and methodology with the intention of submitting it to a specific high-impact factor journal.

One of the main issues with using JIFs to compare researchers is that JIFs vary widely between research fields. For example, ‘CA: A Cancer Journal for Clinicians’ currently has the highest impact factor, sitting at 223.7, meaning every paper it published in the past two years has been cited an average of (almost) 224 times! Compare that to my field of expertise, pharmacoepidemiology, where the leading journal is ‘Pharmacoepidemiology and Drug Safety’ (PDS) with an impact factor of 2.9.


Quite different. Are we lesser researchers because our work isn't as highly cited as in other fields?


The Lessons: What Else Do I Need to Know About Journal Impact Factors?


Do I want my paper published in a high-impact factor journal?


Yes! The higher the impact factor of the publishing journal, the more people will likely read your research, and the more likely it will be cited, which is another important metric for researchers. See 'The Academic Playbook' for researchers' other measures of productivity.


Also, if your work is published in a journal like The New England Journal of Medicine, especially as a junior researcher, your chances of securing post-doc funding will increase exponentially.


My experience: I had the privilege of working with a research group that had a recent publication in The New England Journal of Medicine. Multiple researchers from the lab were invited to influential conferences around the world to present their work, their advice was sought after for editorials in multiple prestigious journals (which increased their publication count), and, not surprisingly, they received multiple grants in the 1-2 years following the NEJM publication. The Group Head said it was all due to the NEJM paper.


One paper, career changer!


What are the Downsides to Publishing in High-Impact Factor Journals?


Everyone wants a publication in a high-impact journal. It is a common practice in academia to submit your paper to a journal that is just out of reach, to see if they are interested.


However, the higher the JIF, the more people submit their manuscripts to that journal for consideration. Many more people submit articles than the journal can publish, and some journals even state on their website the time frame you should expect between submission and a response, so you may wait a long time to hear back. Trying for a high-impact factor journal is a gamble, and you should discuss the pros and cons with your co-authors.


But this strategy of submitting your paper to the journal with the highest impact factor may change in the future.


A few years ago, I came across journals asking you to list all journals you had already submitted your paper to, and their reasons for rejection. I did not like this practice, as these past comments would influence the new journal's opinion of your manuscript. Also, what journal wants to know it was your fifth choice? It does not set the right tone for an objective review of your work.


If this ‘submission transparency’ becomes more common in the future, researchers may rethink their strategy and play it safer, submitting their manuscript to a lower impact factor journal as a first option.


Is a Journal Impact Factor the Only Metric That Measures A Researcher's Impact?


No, of course not. In research, there is rarely one metric that everyone agrees on. Other common metrics include:


  • Number of citations. The total number of times the papers you (co-)authored have been cited. This is why self-citation is common in academia.


  • The i10-index. The number of papers you (co-)authored that have been cited 10 or more times. This is a simple index for examining quality independent of the journal’s impact factor.


  • The Field-Weighted Citation Impact (FWCI). This score compares your paper's citations to those of similar articles published in the same field and timeframe. A score of 1.00 means it is cited as expected, greater than 1.00 means more citations than expected, and less than 1.00 means fewer citations than expected (see the sketch after this list). The exact calculation method has not been made public.


  • Source Normalised Impact per Paper (SNIP). This quality metric weights your research based on your field. It determines the total number of citations per field: a single citation is given a higher weight in fields where citations are scarce, and a lower weight in fields where citations are plentiful. This is considered a fairer way to compare scientists across fields.


  • The h-index. The h-index is a more complicated metric, as it reflects both the quantity and quality of your publications. Bear with me through the explanation below; it is a hard metric to explain, but a simple concept to understand.
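
To make the field-weighted idea concrete, here is a minimal sketch of the FWCI as a simple ratio of actual to expected citations. The 'expected' figure is something the database supplies; the numbers below are invented:

```python
def fwci(actual_citations: int, expected_citations: float) -> float:
    """Field-Weighted Citation Impact: actual citations divided by the
    average citations of similar papers (same field, type and year)."""
    return actual_citations / expected_citations

# Hypothetical paper: 18 citations, where similar papers average 12.
print(f"FWCI = {fwci(18, 12.0):.2f}")  # 1.50 -> cited more than expected
```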


How to Work Out Your H-Index


To work out your h-index, create a spreadsheet (see figure below). In the first column, list your paper titles; in the second, each paper's number of citations; and in the third, an 'order' number. Each row represents one paper.


Sort your papers by number of citations, with the highest number of citations at the top and the lowest at the bottom.


In the 'order' column, start in the top row and write the number 1, then increase the count by one for each row, e.g. 1, 2, 3, 4, 5 etc. Keep going until the number in your ‘order’ column is larger than that paper's number of citations. Go back to the last 'order' number that was less than or equal to its paper's citation count. That is your h-index.
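
The same procedure is easy to script. Here is a minimal sketch in Python; the citation counts are invented, chosen to mirror my example below:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h of your papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for order, cites in enumerate(ranked, start=1):  # the 'order' column
        if cites >= order:
            h = order  # this paper still qualifies
        else:
            break  # citations have fallen below the order number
    return h

# Hypothetical citation counts for 15 papers:
papers = [120, 85, 60, 44, 38, 30, 27, 25, 22, 19, 17, 15, 13, 12, 9]
print(h_index(papers))  # 13 -> paper 14 has only 12 citations
```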


According to my Google Scholar research profile (below), I have an h-index of 13: for paper 14, the number of citations is 12, which is lower than 14. An h-index of 13 means my top 13 publications have been cited at least 13 times each. The higher your h-index, the better. Please note, I am not actually a co-author on publication number 7.


Are There Any Social Media Research Metrics?


Yes, and my prediction is that there will be more in the future!


  • Attention Score/Altmetric. I was looking up an article recently and found an Attention Score/Altmetric associated with my article. The summary said my article had been cited 79 times, and then referred to an Attention Score of 90. I had never heard of this score, so I clicked on the icon, and the picture below emerged. I was quite impressed! It gives you a really good insight into the impact your research has had on the academic and non-academic world. I had never seen this metric until a few days ago, so I am not sure how widely it is used or how it collects its data.



  • Twitter metric (?). There is chatter on social media that a new Twitter metric may be developed to consider the number of people who see your research. Stay tuned to see if this is developed in the future! See 'Book Review: Twitter for Scientists' to up your Twitter game.


My experience: There are many metrics to demonstrate your impact. If a funding application does not ask for one specifically, use the metric that makes you and your research look the best.


For your CV, enter all the quality metrics in your publications section, if they are flattering. This will demonstrate the impact of your work to potential employers and that you are aware of these research quality metrics.


I add these figures to the top of my ‘Publications’ section like this:

Number of citations = 755; i10-index = 15; h-index = 13. I then list my publications in reverse chronological order, i.e. most recent first.


Number of citations, i10-index and h-index are all recorded in Google Scholar, so you don’t need to keep track of your metrics yourself. Just sign up for a Google Scholar account and it will do the hard work for you. Be careful using Google Scholar metrics for grant applications, though, as Google Scholar can inflate citation counts and be inaccurate. Scopus is another citation database widely used for these metrics; however, not all journals are indexed in Scopus, so it can underestimate your research impact.


What Do I Do If The Journals In My Field Have Low-Impact Factors?


Some grant applications require you to write the JIF for every paper you have published. If you report a JIF, you should also tell your reviewer the typical JIF for your field. Otherwise, you will look like a lesser researcher if your JIFs are between 2 and 4 while another researcher's papers are published in journals with impact factors around 7.


My experience: When I applied for post-doc funding, in my application I said plainly that PDS is the leading international journal in pharmacoepidemiology and its impact factor is 2.9. This was the benchmark by which all of my papers should be judged. I then highlighted the number of publications published in journals with higher impact factors to show the impact of my work to other fields as well as pharmacoepidemiology.


Love them or hate them, JIFs have made our lives easier in some ways and infuriating in others! What do you think of JIFs? Is it necessary to compare journals and academics? Is another system preferable? Please let me know your thoughts by leaving a comment below or emailing me at AuthenticResearchExperiences@gmail.com


BB



Additional Resources:

History of the impact factor: https://arxiv.org/pdf/1801.08992.pdf


I came across this article when researching this week's blog post. I was not excited to read it; I thought it would send me to sleep! But I actually found it quite interesting. I had never heard the criticisms of the JIF, nor that most thought it was not a good tool for comparing researchers. So if you are so inclined and have a spare 30 minutes, you may be intrigued to read the history of the JIF!


Related Posts:

I will write a new post every Friday about another aspect of the research world. Please email me to subscribe to my blog. AuthenticResearchExperiences@gmail.com


I am also an avid reader of start-up stories, and of research that passionate people have embarked upon, across all topics. Click here if you want some new book recommendations.


