Selected bibliometric indicators
Put simply, bibliometrics measures the impact on science of a person (author-level indicators), a work (publication-level indicators) or a journal (journal-level indicators). All indicators are essentially based on measures of publication output (e.g. number of publications) or of reception by the scientific community (e.g. number of citations).
More recent indicators also draw on social media activity (altmetrics), on cited references (as a measure of disruptiveness), and on the context, function and value of citations (citation content/context analysis).
This page presents a selection of indicators, discusses them briefly and recommends further literature.
Author level indicators
Publications and citations are counted and evaluated at the level of individual authors.
Hirsch-Index (h-Index)
The largest number h such that the author has published h publications that have each been cited at least h times (e.g. h = 7 means the person has 7 publications that have each been cited at least 7 times).
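The definition above translates directly into a short calculation: sort the citation counts in descending order and find the largest rank at which the count still reaches the rank. A minimal sketch in Python (the function name and example data are illustrative):

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank  # this publication still "supports" an h of this size
        else:
            break
    return h

# Five publications with 10, 8, 5, 4 and 3 citations:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Only four publications have at least four citations each, so the fifth paper (3 citations) no longer raises the index.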
How can the extreme popularity of this measure be explained?
- (comparatively) easy to calculate and compare
- combines measure of productivity (number of publications) with measure of response (citations)
- robust both to a few very highly cited publications and to many rarely cited ones
Criticism
- distortion due to scientific age
- insensitive to declines in a researcher's performance or productivity (the index never decreases)
- no indication about the distribution of citations in publications, for details see Bornmann / Daniel (2009), DOI: 10.1038/embor.2008.233
In response to this criticism, several variants have been proposed, e.g. the annual h-index, the g-index and Google Scholar's i10-index.
By using an ID, e.g. ORCID, researchers can contribute to the correct attribution of publications and citations and increase their visibility. In Scopus, an ID is automatically assigned to each indexed author. Access and correction requests are possible via the SupportCenter.
Publication level indicators
Citations and literature references are counted and evaluated at the level of individual publications. Beyond simply counting citations, the informative value can be increased by taking the citation behaviour of the scientific discipline into account. This so-called (field) normalization is usually based either on the field's average citation rate or on frequently cited publications/journals ("Highly cited" labels).
Normalized indicators
- based on the average: Field Normalized Citation Impact (FNCI)
- FNCI < 1 implies below the field average, e.g. 0.85 means 15% fewer citations than comparable publications
- FNCI > 1 implies above the field average, e.g. 1.50 means 50% more citations than comparable publications
- based on frequently cited publications: Top1%, Top5% or Top10% of the most frequently cited publications of a subject area and period
- based on frequently cited journals: published in Top1%, Top5% or Top10% of the most cited journals in a subject area and period
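The average-based variant can be sketched as a simple ratio: actual citations divided by the expected (field- and year-specific average) citation count. A minimal illustration (function name and figures are hypothetical):

```python
def fnci(citations, field_baseline):
    """Field-Normalized Citation Impact: actual citations divided by the
    average citation count of comparable publications (same field,
    same publication year, same document type)."""
    return citations / field_baseline

# A paper with 17 citations in a field where comparable papers
# average 20 citations:
print(round(fnci(17, 20), 2))  # 0.85 -> 15% below the field average
print(round(fnci(30, 20), 2))  # 1.5  -> 50% above the field average
```

The choice of baseline (which publications count as "comparable") is exactly where the field-delineation problem discussed below comes in.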
Criticism and alternative approaches
This concept of normalization presupposes a clear separation of subject areas. The informative value of these indicators can therefore be limited, especially for interdisciplinary publications.
Altmetrics
These can be very useful for measuring the impact of a publication outside the scientific community. In particular, the impact in social media (blogs, Twitter, Facebook, YouTube etc.) is reflected.
A typical indicator is Altmetric Attention Score.
When looking at these metrics, one should be aware that they reflect the perception of the online community and are thus strongly biased towards trending topics and easily communicated issues. Furthermore, they are much more susceptible to manipulation than conventional metrics.
Journal level Indicators
This class of indicators is intended to capture the importance of individual journals.
Impact Factor
One possibility is to relate the citations gained in a year by all articles in a journal to the number of articles published in a reference period. One of the oldest and best-known indicators, the Impact Factor, is based on this principle.
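In its classic two-year form, the Impact Factor for year Y divides the citations received in Y by articles from the two preceding years by the number of citable items published in those years. A minimal sketch with hypothetical figures:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Classic 2-year Impact Factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# 450 citations in 2024 to articles from 2022/2023,
# 180 citable items published in 2022/2023:
print(round(impact_factor(450, 180), 2))  # 2.5
```

The 5-year variant mentioned below uses the same ratio, only with a five-year publication window in the denominator.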
Criticism
- Annual values are prone to fluctuation; the indicator is therefore also calculated over 2- and 5-year windows.
- no consideration of field-related differences in citation behaviour
- The average number of citations of the publications in a journal is mainly determined by a small proportion of highly cited publications; the citation impact of a journal is therefore not representative of the influence of a single publication in that journal.
Normalized indicators
The principle is similar to the Impact Factor, except that the citation counts are normalized, for example by weighting citations according to the source they come from. The resulting ranking within the discipline then serves as the indicator of the journal's importance. Two main approaches are used.
Citing-Side Normalization
This alternative normalization approach corrects for the length of the bibliography. This idea is based on the assumption that the differences in citation density between subject areas are mainly due to the fact that publications in some subject areas tend to have longer reference lists than in other subject areas. An indicator based on this concept is the SNIP.
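The core idea can be sketched as fractional citation counting: each citation is down-weighted by the length of the citing publication's reference list, so fields with long bibliographies no longer have a built-in advantage. This is a simplified illustration of the principle behind SNIP, not the actual SNIP formula:

```python
def weighted_citations(citing_ref_list_lengths):
    """Citing-side normalization sketch: each citation counts 1/r, where r
    is the number of references in the citing publication. Citations
    from long-bibliography fields thus contribute less."""
    return sum(1.0 / r for r in citing_ref_list_lengths)

# Three citations, from papers with 10, 20 and 50 references:
print(round(weighted_citations([10, 20, 50]), 2))  # 0.17
```

A citation from a paper with 10 references (typical of, say, mathematics) counts five times as much as one from a paper with 50 references (typical of biomedicine).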
Recursive Citation Impact Indicators
Another normalization approach, for so-called Recursive Citation Impact Indicators, weights the value of the citation depending on the source in which it was published. The idea is that a citation in Nature or Science, for example, should be weighted higher than a citation in a less prestigious journal. Typical examples of this are SJR and Eigenfactor.
Criticism
Again, a conclusion from the citation impact of a journal is not representative of the influence of an individual publication in that journal (see above).
Further reading
The following links lead to detailed explanations, including a critical discussion, of the indicators used by InCites and Scopus.
An excellent overview and critical discussion of bibliometric indicators is provided by Waltman (2016).
A quick introduction to common bibliometric indicators is provided by Metrics Toolkit.