Journal Evaluation & Measuring Author Impact

Article Citation Count

Citation count measures the impact of an article by counting how many times it has been cited by other works. However, it is not a direct measure of quality. Many factors can influence how often an article is cited, such as: the reputation of the author, the language in which the article is written, the type of article (e.g., review articles tend to be cited more), the length of the article, how visible and available the full text of the article is, and the relevance of the topic being discussed to popular topics of the day.

Citation counts for articles can be found in many tools and databases, including Scopus and Google Scholar.
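Scopus and Google Scholar are typically searched through their web interfaces. As a minimal illustration of retrieving a citation count programmatically, the sketch below uses the open Crossref REST API instead (an assumption on my part, not a tool named above); the DOI shown is a placeholder.

```python
# Minimal sketch: look up one article's citation count via the public
# Crossref REST API (used here as an open stand-in for Scopus / Google Scholar).
import requests

def citation_count(doi: str) -> int:
    """Return the number of times Crossref records this work as cited."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    # Crossref reports incoming citations as "is-referenced-by-count".
    return resp.json()["message"]["is-referenced-by-count"]

if __name__ == "__main__":
    print(citation_count("10.1000/example-doi"))  # placeholder DOI, not a real article
```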

Citation counts are best used to compare articles written by authors in the same discipline who are at the same general point in their careers. Authors who have been publishing for longer will tend to have more citations, and disciplines have varying citation practices, with some citing much more or much less than others.

It is also important to remember that a simple citation count lacks context. While publications with more citations may, on average, be more likely to have a higher impact, articles can be cited for both positive and negative reasons. The more citations a publication has, the more additional citations it tends to receive, and some of those citations may be perfunctory in nature. Author citation behavior can also be biased: articles with a female lead author tend to receive fewer citations than those with a male lead author, and authors may be more likely to cite friends and colleagues than competitors. Citations in the methods or discussion section of a paper also tend to reflect more actual impact than citations in the literature review section.

Field Weighted Citation Impact / Benchmarking

In addition to article citation count, Scopus offers two article-level metrics that are field-normalized indicators, meaning they allow fairer comparisons of articles from different fields. The Field-Weighted Citation Impact shows how often an article has been cited compared to other similar articles; a value greater than 1.00 means the article is cited more than is average for the discipline. Citation Benchmarking places an article in a percentile based on how often other articles in the same discipline are cited; an article in the 99th percentile is among the top 1% of articles in its field.
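The sketch below is an illustration of the general idea behind field normalization and percentile benchmarking, not Scopus's exact method: divide an article's citations by the average citation count of comparable articles (same field, document type, and year), and rank the article among those peers. The peer values are invented for the example.

```python
# Toy field-normalized impact and percentile benchmark for one article,
# given the citation counts of comparable articles in its field.
from bisect import bisect_left

def field_weighted_impact(article_citations: int, peer_citations: list[int]) -> float:
    expected = sum(peer_citations) / len(peer_citations)
    return article_citations / expected  # > 1.00 means above the field average

def citation_percentile(article_citations: int, peer_citations: list[int]) -> float:
    ranked = sorted(peer_citations)
    return 100 * bisect_left(ranked, article_citations) / len(ranked)

# Hypothetical field of ten comparable articles:
peers = [0, 1, 2, 3, 4, 5, 8, 12, 20, 45]
print(field_weighted_impact(20, peers))  # 2.0  -> cited twice the field average
print(citation_percentile(20, peers))    # 80.0 -> top 20% of this small sample
```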

A potential weakness of these metrics is that they rely on the set of journals that Scopus assigns to a particular "discipline." Those assignments can sometimes be inaccurate, or a field can be defined too broadly to be useful. Articles published in multidisciplinary journals are also not included in the metrics.

Relative Citation Ratio

Developed by the NIH to help evaluate grants, the Relative Citation Ratio (RCR) is a non-proprietary metric, introduced in 2015, designed to allow comparison of citation counts across disciplines. Typical article citation counts, as described above, do not allow for this.

The RCR works by normalizing the number of times an article has been cited. It does this not by dividing journals into predefined disciplines, but by using citation data from NIH-funded papers published in the same field and year, essentially creating a custom field for each paper based on its co-citations.
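The following is a highly simplified sketch of that normalization idea: an article's citations per year divided by the expected citation rate of its co-citation "field." The published method derives the field rate from co-cited papers and benchmarks it against NIH-funded papers; this toy version, with made-up numbers, simply averages the co-cited articles' own citation rates.

```python
# Toy version of the RCR idea: article citation rate / field citation rate.
def relative_citation_ratio(article_cites_per_year: float,
                            cocited_cites_per_year: list[float]) -> float:
    field_citation_rate = sum(cocited_cites_per_year) / len(cocited_cites_per_year)
    return article_cites_per_year / field_citation_rate

# Hypothetical values: the article earns 6 citations/year; the papers cited
# alongside it average 4 citations/year, giving an RCR of 1.5.
print(relative_citation_ratio(6.0, [2.0, 3.0, 5.0, 6.0]))  # 1.5
```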

A paper with an RCR of 1.0 has received more citations per year than 50% of NIH-funded papers in its field. Multiple PubMed IDs can be entered into NIH's iCite tool at the same time, so the impact of a group of researchers can be measured together.
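As a rough sketch of such a batch lookup, the code below queries iCite's public web API for several PubMed IDs at once; the endpoint and field names reflect my understanding of that API and should be checked against the current iCite documentation, and the PMIDs shown are placeholders.

```python
# Batch RCR lookup against NIH's iCite service (endpoint and field names
# are assumptions; verify against the current iCite API documentation).
import requests

def fetch_rcr(pmids: list[str]) -> dict[str, float | None]:
    resp = requests.get(
        "https://icite.od.nih.gov/api/pubs",
        params={"pmids": ",".join(pmids)},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json().get("data", [])
    return {str(rec["pmid"]): rec.get("relative_citation_ratio") for rec in records}

print(fetch_rcr(["12345678", "23456789"]))  # hypothetical PubMed IDs
```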