Scholarly Communications

Assessment

Bibliometric indicators

Publication and citation metrics are used as indicators of research quality by league tables, funders and, increasingly, employers. Whilst controversial and imperfect, they are here to stay, so it is important that individuals understand what indicators are available, together with their strengths and weaknesses. Citation indicators are affected by factors such as the subject field, the age of the output and the document type, which is why many of the indicators below attempt to normalise for one or more of these.

Table 1 outlines a selection of publication and citation indicators commonly used for ‘royalty-free’ literature such as journal articles and conference papers. An entity could be an individual, research group, school, university, group of universities, publication set, journal or country: anything, in fact, that has outputs associated with it. Most of the indicators listed can be derived from SciVal, a citation benchmarking tool whose outputs (taken from Scopus) include journal articles, reviews, editorials, short surveys and conference papers. Loughborough University subscribes to SciVal and any member of the University can access this tool; for help, please contact researchpolicy@lboro.ac.uk. Other citation tools may make similar calculations but call them something different.

The SNIP and SJR values are also derived from Scopus data, but are based on slightly different document types. The SNIP value is based on articles, reviews and conference papers, further excluding documents that contain no cited references. SJR values are based on articles, reviews, conference papers and short surveys. See this video on how the SNIP and SJR are calculated.

Internal discussions have taken place around the use of indicators for the visibility and impact of monographs; however, there are as yet no established, normalised and benchmarkable sources of data for such indicators.

Table 1: A selection of publication and citation indicators with their advantages and disadvantages. Each entry gives the indicator's definition, its advantages, its disadvantages, and what it is most useful for. (A worked sketch of several of these calculations follows the table.)

H-index
Definition: The number of publications, n, with at least n citations each. (There are a number of variations on the H-index; see http://www.harzing.com/resources/publish-or-perish#metrics for a list.)
Advantages: In common usage and well understood. Easy to calculate using a variety of sources (e.g. Web of Science, Scopus and Google Scholar), some of which include books.
Disadvantages: Not normalised by field. Correlates with career length and therefore disadvantages early career researchers.
Useful for: Comparing entities in the same field and of similar age.

H-5 index
Definition: The number of publications published in the last 5 years, n, with at least n citations each.
Advantages: Offers less of a disadvantage to early career researchers (those who have been publishing for more than 5 years) than the H-index.
Disadvantages: Only offered by Google Scholar and Publish or Perish (based on Google Scholar data). Google Scholar data is not as robust as other sources.
Useful for: Comparing entities in disciplines, such as the Arts & Humanities, that publish in non-journal outputs.

Field-Weighted Citation Impact (FWCI)
Definition: Actual citations versus expected citations, taking into account the age, field and document type of the publication(s). The Mean Normalised Citation Score (MNCS) from Thomson Reuters is similar.
Advantages: Normalised by document type, age and subject field.
Disadvantages: Not easy to reverse engineer. Small publication sets can skew the results.
Useful for: Comparing larger entities such as departments, universities, countries, etc.

Citations per publication (mean citation rate)
Definition: The total number of citations divided by the total number of publications.
Advantages: Easy to calculate. Correlates well with the FWCI.
Disadvantages: Not normalised by field, age or document type; older publications will have had longer to accrue citations.
Useful for: Comparing entities in the same field, of similar age and document type.

Cited publications
Definition: The number or percentage of publications that have been cited at least once (self-citations may be included or excluded).
Advantages: Easy to calculate.
Disadvantages: Not normalised by field, age or document type; older publications will have had longer to accrue citations.
Useful for: Highlighting groups of publications that have had no citation impact.

% outputs in top percentiles
Definition: The percentage of an entity's outputs in the top 1/10/25% most cited outputs in the world.
Advantages: Based on percentiles, so a single highly cited paper should not skew the results so heavily.
Disadvantages: Not normalised by field.
Useful for: Evaluating absolute citation impact, independent of discipline.

% outputs in top percentiles (field weighted)
Definition: The percentage of an entity's outputs in the top 1/10/25% most cited outputs in their relevant subject categories worldwide.
Advantages: Normalised by age, subject and document type. Based on percentiles, so a single highly cited paper should not skew the results.
Disadvantages: Hard to reverse engineer.
Useful for: Comparing entities of any size in any discipline.

% outputs in top journal percentiles (SNIP or SJR)
Definition: The percentage of an entity's outputs in the top 1/10/25% of journals according to their SNIP or SJR value.
Advantages: The SNIP and SJR are both subject normalised (see caveats above). Correlates with citation indicators in some disciplines. Targeting certain journals or conferences is more achievable than targeting citations.
Disadvantages: Conference proceedings tend to have lower or no SNIP or SJR values, so this may disadvantage disciplines that rely on conference publication. Does not directly assess the output itself: an output in a high-SNIP journal is not necessarily excellent, and an output in a low-SNIP journal may be excellent.
Useful for: Comparing entities of any size in any discipline.

% international collaborations
Definition: The percentage of an entity's outputs that are internationally co-authored.
Advantages: Easy to calculate. Correlates with citation indicators.
Disadvantages: Not normalised. International collaboration is not appropriate for all subject areas, nor is co-authorship.
Useful for: Comparing international co-authorship levels within the same discipline.

Field-weighted international collaborations
Definition: Actual internationally co-authored papers versus expected co-authored papers, taking into account the age, field and document type of the publication(s).
Advantages: Normalised by age, subject and document type.
Disadvantages: Hard to reverse engineer.
Useful for: Comparing the international co-authorship levels of entities in different disciplines.

SNIP (Source Normalised Impact per Paper)
Definition: Actual citations versus potential citations to a journal.
Advantages: A subject-normalised indicator, calculated on a journal-by-journal basis, so suitable for comparing one discipline with another, or interdisciplinary titles that fall between subject categories.
Disadvantages: Journals with high SNIP values do not correlate as well with citation indicators as journals with high SJR values.
Useful for: Comparing the citation impact of journals in different disciplines.
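
Several of the indicators in Table 1 are simple enough to compute by hand. As an informal illustration (not how SciVal itself is implemented), the following Python sketch calculates the H-index, H-5 index, mean citation rate and a simplified FWCI for an invented publication set; real citation counts and expected-citation baselines would come from a source such as Scopus, Web of Science or Google Scholar:

```python
from datetime import date

def h_index(citations):
    """Largest n such that n publications have at least n citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for n, cites in enumerate(ranked, start=1):
        if cites >= n:
            h = n
        else:
            break
    return h

def h5_index(publications, window=5):
    """H-index restricted to publications from the last `window` years."""
    cutoff = date.today().year - window
    return h_index([cites for year, cites in publications if year > cutoff])

def mean_citation_rate(citations):
    """Citations per publication: total citations / total publications."""
    return sum(citations) / len(citations) if citations else 0.0

def fwci(actual_vs_expected):
    """Simplified Field-Weighted Citation Impact: the average, per output,
    of actual citations divided by the citations expected for outputs of
    the same age, field and document type (1.0 = world average)."""
    return sum(a / e for a, e in actual_vs_expected) / len(actual_vs_expected)

# Invented example: five publications as (year, citation count) pairs.
pubs = [(2010, 12), (2012, 7), (2014, 4), (2015, 4), (2016, 1)]
cites = [c for _, c in pubs]
print(h_index(cites))             # 4: four papers each have at least 4 citations
print(h5_index(pubs))             # result depends on the current year
print(mean_citation_rate(cites))  # 5.6
# The expected-citation baselines (second value in each pair) are invented.
print(fwci([(12, 6.0), (7, 7.0), (4, 8.0)]))  # ~1.17, above world average
```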


Further indicators may be found at the following locations:

  • Harzing.com has a list of key metrics relating to individuals
  • Journal Metrics has a list of three key metrics relating to journals

Semantometrics

Semantometrics is the name given to a new branch of research evaluation that assesses the full text of a research paper, rather than its citation count alone, to gauge its impact. It works on the assumption that a publication's contribution to science can be measured by the semantic distance between the papers it cites and the papers that go on to cite it. A publication thus forms a 'bridge' between what is already known and what has been discovered, and the longer the 'bridge', the greater the contribution the paper makes to scholarship. A paper could therefore be very highly cited (e.g. a review paper) yet not make any significant advance as measured by semantometric indicators. Read more in this paper by Knoth and Herrmannova.
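
To make the 'bridge' idea concrete, here is a toy Python sketch of one way such a semantic distance could be estimated. It is a simplified interpretation, not the exact measure defined by Knoth and Herrmannova: it uses TF-IDF vectors and cosine distance via scikit-learn, and the text snippets are invented stand-ins for full texts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def bridge_length(cited_texts, citing_texts):
    """Toy semantometric contribution score: the mean semantic distance
    (1 - cosine similarity of TF-IDF vectors) between each paper the
    publication cites and each paper that later cites it."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(cited_texts + citing_texts)
    cited = matrix[:len(cited_texts)]
    citing = matrix[len(cited_texts):]
    distances = 1 - cosine_similarity(cited, citing)  # |cited| x |citing|
    return distances.mean()

# Invented snippets standing in for the full texts of four papers.
cited = ["methods for counting citations to journal articles",
         "citation counts and journal ranking league tables"]
citing = ["machine learning models for full text analysis",
          "semantic distance between research papers"]
print(bridge_length(cited, citing))  # nearer 1 = a longer, more novel 'bridge'
```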

An alpha service based on semantometric principles, covering papers in Computer Science, is available at http://semanticscholar.org/