Overview of metrics
This page summarises some of the key data sources and commonly used bibliometric indicators. Please see the additional pages for information on when and where certain metrics can be used (and the limits of their use), on the responsible use of metrics at Durham, and on the support and services available at Durham.
A citation refers to a source or underpinning set of data, usually to acknowledge its relevance to the topic of discussion.
The number of citations an article receives is one indicator of the "academic impact" of the article, providing an indication of its popularity in terms of how many people have read and then applied or referred to that research. A high citation count is not a direct indication of high quality, however. Read about the Limitations of metrics for further information.
In order to monitor citations, you need as comprehensive a citation dataset as possible to make the collection, counting and analysis meaningful.
Four key sources of citation data are outlined below:
Open Citations (via Library Discover)
The Initiative for Open Citations (I4OC) is a collaboration between scholarly publishers, researchers, and other interested parties to promote the unrestricted availability of scholarly citation data.
It recognises that in order to best enable researchers, and the wider public, to keep up with new and significant developments in any field, it is "essential to have unrestricted access to bibliographic and citation data in machine-readable form" and that citation data are "not usually freely available to access, they are often subject to inconsistent, hard-to-parse licenses, and they are usually not machine-readable".
Further information about I4OC.
Open citation data is provided by many academic publishers and is typically available within a few days through the Crossref REST API (which feeds into our Library Discover service).
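As an illustration, the minimal Python sketch below (assuming the requests library is installed) looks up the open citation count that Crossref records for a given DOI; the `is-referenced-by-count` field holds the number of citations Crossref knows about. The DOI shown is a placeholder, not a real identifier.

```python
import requests

def crossref_citation_count(doi: str) -> int:
    """Return the number of citations Crossref has recorded for a DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    # 'is-referenced-by-count' is Crossref's open citation count for the work
    return resp.json()["message"]["is-referenced-by-count"]

# Substitute a real DOI before running (the one below is a placeholder)
print(crossref_citation_count("10.xxxx/placeholder"))
```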
Web of Science
Previously provided by Thomson Reuters, and now by Clarivate Analytics, the Web of Science is the original 'Citation Index' for published academic research, originating with the Science Citation Index (SCI) in 1964 (later followed by the 'Social Sciences Citation Index' and the 'Arts and Humanities Citation Index').
Further information about Web of Science content coverage.
Citation data from the Web of Science is used to calculate the Journal Impact Factor (JIF) and the Eigenfactor Score (see below).
Scopus
Provided by Elsevier, Scopus was launched in 2004 as a competitor to Web of Science.
Citation data from Scopus is used to calculate CiteScore, the SCImago Journal Rank (SJR) and the Source-Normalised Impact per Paper (SNIP) (see below).
Citation data from Scopus was used in REF2014, and forms part of the calculations behind several global university rankings.
Google Scholar
Unlike Web of Science and Scopus (which require subscription access), Google Scholar is a free-to-access service which provides citation data. Many academics create a Google Citations Profile to track citations for their own publications, or use Publish or Perish (free software) to download and calculate various metrics from the data available.
Some key journal-level metrics are summarised below, and some common uses (and limitations) of these are summarised here.
Journal Impact Factor (JIF)
Calculated from the previous two years' worth of citation data found in the Web of Science (Clarivate Analytics) database. It gives an approximate measure of the average number of citations that articles published in the journal over the previous two years received in a given year (so a 2015 JIF is the average number of citations received in 2015 by articles published in 2013-14). Citations are not weighted, and you cannot draw conclusions from comparing journals across subject boundaries, as the JIF does not take into account differences in publication or citation culture.
Further information: http://wokinfo.com/essays/impact-factor/
JIF Scores: Available via Web of Science (Journal Citation Reports) - Library Subscription
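The underlying arithmetic is straightforward; a minimal Python sketch with hypothetical figures:

```python
def journal_impact_factor(citations_received: int, citable_items: int) -> float:
    """2015 JIF = citations received in 2015 to items published in 2013-14,
    divided by the number of citable items published in 2013-14."""
    return citations_received / citable_items

# Hypothetical journal: 410 citations in 2015 to 150 citable items from 2013-14
print(round(journal_impact_factor(410, 150), 3))  # 2.733
```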
Eigenfactor Score
Calculated from the previous five years of citation data as tallied in the Journal Citation Reports (Web of Science, Clarivate Analytics). Citations are weighted based upon where they come from. Eigenfactor scores are scaled so that the scores for all journals listed in the JCR sum to 100, meaning a journal with an Eigenfactor score of 1.0 has 1% of the total "influence" of all indexed publications. Over 11,000 journals are ranked, with PLoS One having the highest Eigenfactor score as of 2016 (1.81924).
Further information: http://www.eigenfactor.org/index.php
Eigenfactor Scores: Available via Web of Science (Journal Citation Reports) - Library Subscription
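The weighting works along the lines of eigenvector centrality (similar to PageRank): a citation from an influential journal counts for more. The Python sketch below is a simplified illustration on a toy citation matrix, not the full Eigenfactor algorithm (which also excludes self-citations, damps the iteration and weights by article counts).

```python
import numpy as np

# Toy matrix: C[i, j] = citations from journal j to journal i
C = np.array([[0, 3, 5],
              [2, 0, 1],
              [4, 6, 0]], dtype=float)

# Column-normalise so each citing journal hands out one unit of influence
P = C / C.sum(axis=0)

# Power iteration: a journal's influence is the citation-weighted
# sum of the influence of the journals that cite it
v = np.ones(3) / 3
for _ in range(100):
    v = P @ v

# Scale so scores sum to 100: a score of 1.0 means 1% of total influence
print(100 * v / v.sum())
```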
CiteScore
Calculated from the previous three years' worth of citation data found in the Scopus (Elsevier) database. Launched in December 2016, CiteScore is similar to the JIF, but is updated monthly as well as annually. It gives an approximate measure of the average number of citations that articles published in the journal over the previous three years received in a given year (so a 2016 CiteScore is the average number of citations received in 2016 by articles published in 2013-15). Citations are not weighted, and you cannot draw conclusions from comparing journals across subject boundaries, as CiteScore does not take into account differences in publication or citation culture.
Further information: Elsevier press release
CiteScore Rankings: Available via Scopus Journal Metrics
SCImago Journal Rank (SJR)
Calculated from the previous three years' worth of citation data found in the Scopus (Elsevier) database. Citations are weighted based upon where they come from (a journal with a higher or lower SJR), and normalised based upon the set of documents which cite its papers, thus providing a 'classification free' measure for comparison.
Further information: http://www.scimagojr.com/
SJR Scores: Available via Scopus Journal Metrics
Source-Normalised Impact per Paper (SNIP)
Calculated from the previous three years of citation data found in the Scopus (Elsevier) database. A journal's 'subject field' is taken into account, normalising for subject-specific citation cultures (average number of citations, amount of indexed literature, speed of publication) to allow a more ready comparison of scores for journals between different subject areas.
Further information: https://www.elsevier.com/solutions/scopus/features/metrics
SNIP Scores: Available via CWTS Journal Indicators
Some author-level metrics are summarised below, and some common uses (and limitations) of these are summarised here. The h-index is by far the most widely used of the author metrics presented.
The Hirsch index (or Hirsch number) was first proposed in 2005 as a measure for the academic productivity and impact of a researcher's publications over their career. An author's h-index will increase over time, as they publish more papers and their published papers attract more citations.
The h-index is defined as follows:
"An author has an h-index of h, if a number h of their papers have h or more citations"
Example: An author has published 22 publications. Of these publications, at least 8 have received at least 8 citations each. The author does not have 9 publications which have received at least 9 citations. Therefore, that author has an h-index of 8.
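A minimal Python sketch of this calculation, using hypothetical citation counts that match the example above:

```python
def h_index(citations: list[int]) -> int:
    """h is the largest number such that h papers each have at least h citations."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# 22 papers; 8 of them have at least 8 citations each, but not 9 with at least 9
papers = [50, 30, 20, 15, 12, 10, 9, 8] + [3] * 14
print(h_index(papers))  # 8
```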
Calculating your h-index
You can view or calculate your h-index using data from Google Scholar, Web of Science or Scopus. Note that each uses a different dataset, so each will give a different figure. If quoting your h-index, make sure you are using the correct data source (where one is prescribed) and clearly indicate which dataset you have used.
- Find your h-index using Web of Science
- Find your h-index using Scopus
- Find your h-index using Google Scholar (and Publish or Perish software)
The g-index, proposed by Leo Egghe in 2006, is similar to the h-index but aims to give some additional weight to highly-cited papers.
The g-index is defined as follows:
"[Where a given set of articles are] ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g2 citations."
Example: An author has published 22 publications. Of these, the sum of the citations of the top 12 articles (by number of citations) is at least 144 (12 squared). However, the sum of the citations of their top 13 articles is less than 169 (13 squared). Therefore their g-index is 12.
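And a matching Python sketch for the g-index, again with hypothetical citation counts:

```python
def g_index(citations: list[int]) -> int:
    """g is the largest number such that the top g papers together
    have at least g squared citations."""
    running_total, g = 0, 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        running_total += count
        if running_total >= rank * rank:
            g = rank
    return g

# 22 papers; the top 12 total 145 citations (>= 144), the top 13 only 146 (< 169)
papers = [60, 25, 15, 10, 8, 7, 6, 5, 3, 2, 2, 2, 1] + [0] * 9
print(g_index(papers))  # 12
```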
The m-index, or m-quotient, was also proposed by Hirsch in 2005. It aims to allow a fairer comparison between academics of differing career lengths.
An author's m-value is found by dividing their h-index by the number of years the author has been actively publishing (measured as the number of years since their first published paper).
Example: An author with an h-index of 18 who has been actively publishing for 6 years will have an m-index of 3. An author with an h-index of 30 who has been actively publishing for 15 years will have an m-index of 2. If the two authors are publishing in the same field of study, this may give a fairer way of comparing the impact of each author's publication output over the length of their publishing career.
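Both worked examples reduce to a single division:

```python
def m_quotient(h: int, years_publishing: int) -> float:
    """m = h-index divided by years since the author's first publication."""
    return h / years_publishing

print(m_quotient(18, 6))   # 3.0
print(m_quotient(30, 15))  # 2.0
```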
Most of the metrics identified below can be derived from data provided by Scopus or Web of Science, or from a citation analysis service such as SciVal (which uses Scopus data). See also common uses (and limitations) of these as summarised here.
Number of Outputs
The most basic metric which can be used as a measure of productivity is the number of publications produced by an individual or group of individuals.
Citation Count
The total sum of citations received by the research outputs of an author or group of researchers.
Citation Impact (Mean Citations per publication)
The mean citation rate of a group of research outputs.
Cited Publications
Either the total number of publications which have received at least 1 citation, or the percentage of total publications which have received 1 or more citations.
Field-Weighted Citation Impact (FWCI) - calculated from Scopus citation data
A comparison of the actual number of citations received by a single output, or a large group of outputs, with the number of citations they might have been expected to receive, based upon the mean number of citations received by all similar publications (i.e. normalised by output type, output age and field of study). A worked sketch follows the examples below.
- A FWCI of 1.00 indicates that a group of outputs have been cited exactly in line with the global average for similar outputs.
- A FWCI of 1.82 indicates that a group of outputs have been cited 82% more than the global average for similar outputs.
- A FWCI of 0.77 indicates that a group of outputs have been cited 23% less than the global average for similar outputs.
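FWCI therefore reduces to a ratio of actual to expected citations. A minimal Python sketch with hypothetical figures (in practice, Scopus derives the expected value from its own normalised baselines):

```python
def fwci(actual_citations: float, expected_citations: float) -> float:
    """Actual citations divided by the mean citations received by
    similar outputs (same type, age and field of study)."""
    return actual_citations / expected_citations

# Hypothetical: an article cited 13 times, where similar articles average 7.1
print(round(fwci(13, 7.1), 2))  # 1.83, i.e. 83% above the global average
```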
% Outputs in Top Percentiles
The % of a group of outputs which are in the global top 1/10/25% most cited outputs.
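A sketch of how such a share might be computed, assuming you have citation counts for both the group and a stand-in global distribution (real services derive the thresholds from their full databases):

```python
import numpy as np

def share_in_top(group: list[int], world: list[int], top_pct: float = 10) -> float:
    """% of a group's outputs at or above the global top-N% citation threshold."""
    threshold = np.percentile(world, 100 - top_pct)
    return 100 * sum(c >= threshold for c in group) / len(group)

# Hypothetical counts: 2 of 5 outputs clear the global top-10% threshold
world = list(range(100))  # stand-in for the global citation distribution
print(share_in_top([95, 90, 40, 10, 3], world))  # 40.0
```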
% Outputs in Top Journals
The % of a group of outputs which are in the global top 1/5/10/25% of journals, when ranked by an identified journal metric (e.g. by JIF, CiteScore, SJR or SNIP).
Collaboration Impact metrics (based on co-authorship of outputs)
Some metrics also look at the Citation Impact of those outputs within a group which have a co-author whose affiliation does not belong to the parent group.
For example, this might offer a comparison of the Citation Impact of a group of articles with international (e.g. where a co-author's affiliation does not belong to the author's institution and is outside that institution's country) or corporate co-authors, compared to the Citation Impact of the whole group of articles.
Traditional bibliometrics, when looking at publication impact, focus on traditional scholarly activity in the form of citations. These can have a number of limitations in providing a full picture of the impact of a scholarly output: