Evaluation of Scientific Journals – Impact Factor and Alternatives

Over time, many metrics have been developed to compare the influence, prestige, or relevance of individual scientific journals. Such evaluations can inform an author's choice of journal for a publication, and they also feed into personal performance measurement once a paper has successfully passed a journal's review process.

Impact Factor and Related Metrics

These metrics relate the number of citations a journal receives to the total number of "citable items" it has published, so the following definition is assumed:

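As a standard formulation (providers differ in the exact counting rules), a metric with a citation window of $n$ years is computed for year $y$ as

\[
\text{metric}_y = \frac{\text{citations received in year } y \text{ by items the journal published in the } n \text{ preceding years}}{\text{number of citable items the journal published in the } n \text{ preceding years}}
\]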
The journal that achieves the higher value in this calculation is considered the "better" one. The best-known index of this kind is the Impact Factor, which uses a window of the last two years. Similar indices are the Immediacy Index (1 year), the 5-year Impact Factor, and the Impact per Publication (3 years).

Criticism (see Further Reading):

  • Editorial notes can be used to increase the Impact Factor: they are not counted as citable items in the denominator, but citations to them are included in the numerator.
  • The size of the Impact Factor depends strongly on the subject discipline.
  • The value depends on the database that is used.
  • Journals with many review articles, or with particularly interesting and topical subjects, usually receive more citations.
  • In addition, there is a language- and subject-specific bias in the selection of journals: English-language journals dominate, so important journals in other languages receive little consideration.

Prestige-Weighted Metrics

With the Impact Factor, all citations are weighted equally. To address this, prestige-weighted metrics take the influence of the citing journal into account: a citation from an important journal counts more than one from an insignificant journal. Examples of such metrics are the Eigenfactor and the SCImago Journal Rank; the sketch below illustrates the underlying idea.
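As a minimal sketch, assuming an invented three-journal citation matrix: prestige-weighted metrics compute something close to a PageRank-style eigenvector centrality on the journal citation graph. Real metrics such as the Eigenfactor and the SJR add damping, self-citation handling, and further normalisations on top of this idea.

```python
import numpy as np

# Hypothetical citation matrix: C[i, j] = citations from journal j to journal i
# (three invented journals; real metrics use a full database of journals).
C = np.array([
    [0, 5, 1],
    [2, 0, 1],
    [1, 1, 0],
], dtype=float)

# Column-normalise so that every journal distributes one unit of influence.
P = C / C.sum(axis=0)

# Power iteration: a journal is prestigious if prestigious journals cite it.
prestige = np.full(3, 1 / 3)
for _ in range(100):
    prestige = P @ prestige

print(prestige / prestige.sum())  # prestige shares of the three journals
```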


Determination

The Journal Citation Reports from Clarivate Analytics are compiled every year and provide the Impact Factor, the Eigenfactor, the Immediacy Index, the 5-year Impact Factor, and many more metrics not mentioned here. They evaluate the journals indexed in the Web of Science and their citations.

The Journal Metrics, which are based on the Scopus database, are freely available on the web. Included metrics are, among others, the Impact per Publication and the SCImago Journal Rank.

The Google Scholar Metrics evaluate journals by the h5-index, a variant of the h-index restricted to articles published in the last five complete years. For the definition of the h-index, see the section "Measuring the Impact of Scientists".

Further Reading

  • Bergstrom CT, West JD, Wiseman MA (2008): The Eigenfactor Metrics. Journal of Neuroscience, 28(45): 11433-11434.
  • Delgado López-Cózar E, Cabezas-Clavijo Á (2013): Ranking journals: Could Google Scholar Metrics be an alternative to Journal Citation Reports and Scimago Journal Rank? Learned Publishing, 26(2): 101-114.
  • González-Pereira B, Guerrero-Bote VP, Moya-Anegón F (2010): A new approach to the metric of journals' scientific prestige: The SJR indicator. Journal of Informetrics, 4(3): 379-391.
  • Ha TC, Tan SB, Soo KC (2006): The journal impact factor: Too much of an impact? Annals Academy of Medicine, 35(12): 911-916.
  • Kaltenborn K-F, Kuhn K (2003): Der Impact-Faktor als Parameter zur Evaluation von Forscherinnen/Forschern und Forschung [The impact factor as a parameter for the evaluation of researchers and research]. Klinische Neuroradiologie, 13(4): 173-193.
  • Rousseau R, STIMULATE 8 Group (2009): On the relation between the WoS impact factor, the Eigenfactor, the SCImago Journal Rank, the Article Influence Score and the h-index (preprint).

Measuring the Impact of Scientists – h-Index and Alternatives

Comparing scientists and their academic achievements can matter in several scenarios: such benchmarking can play a role in recruitment and in the acquisition of third-party funding, but also in budget allocation. This section presents general metrics for the performance evaluation of researchers; the following section deals with the evaluation of individual publications.

Classic Metrics

There are a number of metrics that are easy to state. First, the number of publications can be interpreted as a measure of productivity. Impact, in turn, can be measured by the total number of citations received across all publications, or by the average number of citations per publication (see the sketch below).
Such simple counts do not take into account factors such as the length of the academic career, the number of authors per publication, or the number of self-citations.
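As a minimal sketch with invented citation counts (one entry per publication), these classic measures are trivial to compute:

```python
from statistics import mean

# Invented citation counts, one entry per publication.
citations = [8, 5, 3, 2, 1]

print("Publications:", len(citations))        # productivity: 5
print("Total citations:", sum(citations))     # overall impact: 19
print("Average per paper:", mean(citations))  # impact per publication: 3.8
```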

h-Index

The h-index, published by Hirsch in 2005, has become very popular. The aim of this metric is to express productivity and influence in a single number.

The h-index is the largest number h such that at least h publications have been cited at least h times each. It can be determined graphically by sorting all publications in descending order of citations and plotting the citation counts against their rank: the intersection of this curve with the angle bisector (the line y = x) gives the h-index. In the example below, h = 3.
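A minimal sketch of this definition, using the invented citation counts from the classic-metrics example above:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([8, 5, 3, 2, 1]))    # 3
print(h_index([100, 5, 3, 2, 1]))  # still 3: the top paper's extra citations do not count
```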

Although Hirsch already noted in 2005 that self-citations do not have a major influence on the h-index, factors such as discipline, length of research career, or the age of publications are not taken into account in the calculation. Nor does the h-index reflect whether a few publications were particularly successful: if the most cited article in the example had been cited 100 times, the result would still be h = 3.

Alternatives to the h-index

Point of criticism                  | Examples of alternative metrics
Most frequently cited publications  | g-Index, e-Index, A-Index, R-Index
Age of publication                  | Contemporary h-Index, AR-Index
Number of authors                   | mostly normalisations
Length of scientific career         | see e.g. Harzing (2007)
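To illustrate one entry from the table: the g-index is the largest number g such that the g most cited papers together have received at least g² citations, so, unlike the h-index, it rewards exceptionally successful publications. A minimal sketch with the same invented citation counts as above:

```python
def g_index(citations: list[int]) -> int:
    """Largest g such that the g most cited papers together have >= g*g citations."""
    ranked = sorted(citations, reverse=True)  # most cited first
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

print(g_index([8, 5, 3, 2, 1]))    # 4, versus h = 3 for the same list
print(g_index([100, 5, 3, 2, 1]))  # 5: here the top paper's citations do count
```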

Calculation

Databases such as Web of Science or Scopus offer an author search. The search results can then usually be analysed, and this analysis includes the h-index.

Alternatively, the analysis can be done with Google Scholar. If the search for a scientist returns their Google Scholar profile, it also includes the h-index. The Publish or Perish software uses Google Scholar as the basis for determining the h-index and many of its alternatives. With the Google Calculator, a browser add-on, various metrics are analysed directly while searching in Google Scholar. In general, the h-index of the same person differs between databases, since each provider selects which journals are evaluated; Google Scholar usually delivers the highest h-index.


Further Reading

  • Alonso S, Cabrerizo FJ, Herrera-Viedma E, Herrera F (2009): h-Index: A review focused in its variants, computation and standardization for different scientific fields. Journal of Informetrics, 3(4): 273-289.
  • Andrés A (2009): Measuring Academic Research: How to Undertake a Bibliometric Study. Oxford: Chandos Publishing, 1st ed.
  • Harzing AW (2007): Publish or Perish, available from www.harzing.com/pop.htm
  • Hirsch JE (2005): An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the USA, 102(46): 16569-16572.

Measuring the Impact of Publications

Of course, the impact of an individual publication in the (scientific) world can also be considered. With the help of such metrics, the most important publications can be identified, e.g. for a third-party funding application.

Classic Metrics

The simplest metric is based on the citations of a publication, although this is easily manipulated through self-citations. In addition, it takes a relatively long time for the influence to become visible: other scientists must first read the article and cite it in a new paper, and this new paper must itself pass through the publication process.

Metrics of the journal in which the article was published (e.g. the Impact Factor) are also often used instead. However, the journal's metric only reveals the "potential" of a publication, i.e. that the article has survived the review process. Of course, a journal with a high Impact Factor will also contain publications with few or no citations.

Alternative Metrics

Alternative metrics are also called article-based metrics, open metrics, or altmetrics. The aim is to use the new possibilities of the internet to measure impact earlier than citation counts allow: searches, downloads, and online discussion are taken into account.
This applies first of all to views and downloads on the internet. Inclusion in users' libraries in reference management software can also be evaluated. Studies have shown, for example, that there is a correlation between download numbers and citations.
Furthermore, mentions of the publication can be counted in blogs, Twitter, news portals, Wikipedia, discussion platforms, YouTube, GitHub, and so on. Here it was shown that frequent mentions in blogs do not correlate with a high number of citations, because articles frequently shared in social media are often popular-science or topical publications.

Calculation

Some journals (e.g. PLOS, Nature, AIP, ...) offer metrics directly on the individual article pages that show the influence of the publications on the internet. In some cases, such metrics are also applied to books. The company Altmetric evaluates many internet sources to calculate a score; the result can be visualised in more detail and analysed further. As a free product, one can download a bookmarklet that triggers a search for mentions of the article one is currently viewing and displays the score calculated from them.
Other products in this area are Plum Analytics, Webometric Analyst, and Impact Story.
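Altmetric also exposes a public, rate-limited API that returns the score and per-source counts for a given DOI. A minimal sketch (the DOI is just an example, and the response fields shown here may change over time):

```python
import json
import urllib.error
import urllib.request

# Example DOI; replace with the DOI of the article of interest.
doi = "10.1038/nature12373"
url = f"https://api.altmetric.com/v1/doi/{doi}"

try:
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    print("Altmetric score:", data.get("score"))
    print("Tweets:", data.get("cited_by_tweeters_count"))
    print("Blog mentions:", data.get("cited_by_feeds_count"))
except urllib.error.HTTPError as err:
    # A 404 means no altmetric data has been recorded for this DOI.
    print("No data found:", err.code)
```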

Further Reading

  • Costas R, Zahedi Z, Wouters P (2014): Do "altmetrics" correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. Journal of the Association for Information Science and Technology: 1-17.
  • Nicholas D, Rowlands I (2011): Social media use in the research workflow. Information Services & Use, 31(1): 61-83.
  • Opthof T (1997): Sense and nonsense about the impact factor. Cardiovascular Research, 33(1): 1-7.
  • Peters I, Jobmann A, Eppelin A, Hoffmann CP, Künne S, Wollnik-Korn G (2014): Altmetrics for large, multidisciplinary research groups: A case study of the Leibniz Association. Libraries in the Digital Age (LIDA) Proceedings, 13.
  • Peters I, Jobmann A, Hoffmann CP, Künne S, Schmitz J, Wollnik-Korn G (2014): Altmetrics for large, multidisciplinary research groups: Comparison of current tools. Bibliometrie – Praxis und Forschung, 3: 1-12.
  • Priem J, Piwowar HA, Hemminger BM (2012): Altmetrics in the wild: Using social media to explore scholarly impact. arXiv:1203.4745 [cs].