The San Francisco Declaration on Research Assessment [Editorial]


Jordan W. Raff, Editor-in-Chief

Sir William Dunn School of Pathology, University of Oxford, South Parks Road, Oxford OX1 3RE, UK; bio.editor@biologists.com

On December 16, 2012, a group of editors and publishers of scholarly journals gathered at the Annual Meeting of The
American Society for Cell Biology in San Francisco, CA, USA to discuss current issues relating to how the quality of research
output is evaluated and how the primary scientific literature is cited.

The impetus for the meeting was a consensus that the impact factors of many cell biology journals do not accurately reflect
the value of the work they publish to the cell biology community; the same concern extends to other fields in the
biological sciences. The group therefore wanted to discuss how measures of journal and article impact could be better aligned
with journal quality.

There is also an alarming trend toward citing reviews rather than the primary literature, driven in part by the space limitations
that some journals impose. Because this lowers the citation indices of journals that focus mainly on primary research,
the group discussed ways to combat this trend as well.

The outcome of this meeting and of further discussions is a set of recommendations referred to as the San Francisco Declaration
on Research Assessment, published in May 2013. The recommendations are listed below, or you can read the entire Declaration
here: http://www.ascb.org/SFdeclaration.html.

The Company of Biologists (COB) and its journals Biology Open, Journal of Cell Science, The Journal of Experimental Biology and Development fully support this initiative. In accordance with the recommendations, all COB journals present the impact factor alongside
a variety of other journal-based metrics; request an author contribution statement for all research articles; place no restrictions
on the reuse of reference lists; and impose no limit on the number of references. The COB is also working with its online
host, HighWire, to provide a range of article-level metrics.

It is our hope that this initiative will help to ensure that research assessment remains informed and fair.

General Recommendation

  1. Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research
    articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

For funding agencies

  1. Be explicit about the criteria used in evaluating the scientific productivity of grant applicants and clearly highlight, especially
    for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the
    identity of the journal in which it was published.

  2. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software)
    in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research
    impact, such as influence on policy and practice.

For institutions

  1. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for
    early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity
    of the journal in which it was published.

  2. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software)
    in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research
    impact, such as influence on policy and practice.

For publishers

  1. Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor
    or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor,
    SCImago, h-index, editorial and publication times, etc.) that provide a richer view of journal performance.

  2. Make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of
    an article rather than publication metrics of the journal in which it was published.

  3. Encourage responsible authorship practices and the provision of information about the specific contributions of each author.

  4. Whether a journal is open-access or subscription-based, remove all reuse limitations on reference lists in research articles
    and make them available under the Creative Commons Public Domain Dedication.

  5. Remove or reduce the constraints on the number of references in research articles, and, where appropriate, mandate the citation
    of primary literature in favor of reviews in order to give credit to the group(s) who first reported a finding.

For organizations that supply metrics

  1. Be open and transparent by providing data and methods used to calculate all metrics.

  2. Provide the data under a licence that allows unrestricted reuse, and provide computational access to data, where possible.

  3. Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about what constitutes inappropriate
    manipulation and what measures will be taken to combat this.

  4. Account for the variation in article types (e.g., reviews versus research articles), and in different subject areas when metrics
    are used, aggregated, or compared.

For researchers

  1. When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific
    content rather than publication metrics.

  2. Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give
    credit where credit is due.

  3. Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published
    articles and other research outputs.

  4. Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice
    that focuses on the value and influence of specific research outputs.

Biology Open 000, 1-2

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly
attributed.

