What is the future of scientific publishing and evaluation?

Publications are our primary scientific currency and play a major role in how we are evaluated when applying for jobs, tenure, and funding. Thus, the editorial, peer-review, and communication practices of major journals are important for all of us and can affect the progress of our careers, as well as the progress of science.

However, publishing has gotten out of control, with scientists valuing journal brand names over real discovery, and with hiring and funding committees relying on Journal Impact Factors to evaluate individual scientists. In addition, scientists are now spending more time than ever on the review process, slowing the pace of science and especially affecting young scientists who need publications to advance. Has this gone too far? How are we going to tackle these critical problems, and what’s being done already?

To address the growing concerns about the publication process, the MPI-CBG recently hosted a discussion featuring Jennifer McLennan, head of marketing and communications at the journal eLife, and three MPI scientists at different stages of their careers: Felipe Mora-Bermúdez (senior postdoctoral fellow), Simon Alberti (junior Group Leader), and Tony Hyman (senior Group Leader and MPI Director). A core part of the eLife mission is to inspire change in research communication across the entire system. As a researcher-driven initiative, eLife wants to know what the scientific community values in order to help shape what happens next, and the main purpose of Jennifer’s visit was to listen to our ideas and concerns.

Jennifer kicked off the discussion by familiarizing us with the goals of eLife and updating us on some exciting new developments at the journal, including:

  • eLife Advances – a platform for researchers to build in increments on their published papers, with even a single experiment if it advances the story in an exciting way.
  • Early-career Spotlight – a number of ways in which eLife supports and promotes graduate students, postdocs, and young group leaders. These initiatives include letters of recommendation from Senior Editors, interviews and podcast highlights on their website, and sponsored presentations at high-profile (otherwise closed) meetings organized by HHMI, Max Planck, and Wellcome Trust.

A few recurring themes emerged over the course of the discussion and Q&A:

  • Quality of the Review Process
  • Transparency
  • Reproducibility
  • Metrics

Quality of the Review Process
Many expressed frustrations with elements of the typical review process. Common scenarios include: receiving a laundry list of lengthy experiments to perform; being asked for something outside the scope of the paper (e.g., “come back when you have a mouse model/crystal structure/mechanism/etc.”); receiving harshly negative comments; and receiving reviews that indicate the reviewer did not take the time to thoroughly read or understand the paper. As Felipe said, “We are demanded to produce high quality research, and we demand back that our research is evaluated with the highest standards possible. You cannot expect great research if you have poor evaluation.”

Many of these problems are addressed with eLife’s streamlined review process, a key feature of which is consultation among the reviewers. The identities of the reviewers are revealed to one another, and they sit down together for an online discussion about the paper in order to come up with a consensus review, which is later published alongside the paper. The outcome is a single, clear set of instructions, leading to a more focused revision process. Tony, who is a Reviewing Editor for eLife, stressed that they will not ask for any experiments that would take more than 2 months to do.

Tony also discussed the importance of including positivity and removing the vitriol from reviews, two points he emphasizes during his reviewer consultations. “We all know how to be positive,” he said, so include positive feedback alongside constructive criticism, keeping in mind that people spent years of their lives on the work you’re reviewing. Tony quoted Bruce Alberts as saying that the “journal club culture,” where young scientists are encouraged to rip papers apart, is at fault for many of today’s vitriolic and nitpicky reviews, especially when junior scientists are asked to review papers for their PIs. Tony wants to have a discussion about positivity with all incoming Reviewing Editors at eLife, in order to begin to change this culture. In a related point, Marino Zerial called for creating a Code of Conduct for reviewers, which should be applied across all journals, not just eLife.

Transparency
In many ways, the theme of transparency was closely linked to the discussions of a high quality review process. While the question of a double-blind review process was raised, Simon and others felt strongly that this would not work, because one can almost always figure out from which lab a paper originated. Felipe argued in favor of as much transparency as possible, as opposed to complete anonymity. Holding reviewers accountable by name to their fellow reviewers (as in the eLife process), or even to the author and the public, ensures that they will put more time and effort into their review, while also mitigating any political motivations a reviewer might have, good or bad.

Pavel Tomancak suggested reforms even more “radical” than those eLife has put into place thus far, such as including the authors in the discussion with the reviewers so that they may defend their ideas among their peers.

Felipe discussed how junior scientists would like to be better informed about how decisions are made, from why someone is hired, to why a grant is funded, to why a paper is published. The eLife review process adds transparency to publication, and the San Francisco Declaration on Research Assessment (DORA) encourages funding agencies and research institutions to “be explicit about the criteria used” to evaluate researchers, which should help demystify the process for those being evaluated.

Metrics
At the heart of DORA, and the mission of eLife, is the desire to “eliminate the use of journal-based metrics”¹ (i.e. Impact Factor) as a means for research assessment. Much of our discussion on this theme focused on two questions: 1) Can we really trust those deciding our fate not to take journal names and impact factors into account going forward? 2) If we move away from the impact factor, what metrics do we use instead?

Tony told the story of publishing a recent paper in the journal Biology Open, which quickly became the most highly cited paper in the journal’s history. Despite the paper’s success, Tony was asked by a postdoc, “but does it count?” He expressed bewilderment and frustration that the younger generation is more focused on journal name than on the actual discovery. However, Felipe countered that young scientists are reacting to the general feeling that “there’s the best, and there’s the rest,” knowing that journal names stand out very boldly when you apply for a job, and he asked senior scientists to show leadership in changing this culture. According to Tony, the MPI-CBG implemented the principles of DORA in its most recent search for new group leaders: in their discussions, no one was allowed to bring up where the candidates’ papers were published, so Tony didn’t even really look. After the candidates were selected, he went back to look at the journal names and saw that some had Cell/Nature/Science papers, but some did not. Tony emphasized that “believing in DORA” is the key to changing the hiring culture.

So what metrics should replace the impact factor? In Tony’s opinion, the only good metrics for a paper are, “how many people have read it? …and did they make a discovery?” though the latter can only really be determined in the long term. When journals openly state that they don’t care about Impact Factors, they are free to make publishing decisions based on discovery potential rather than citation potential. When it comes to evaluating job candidates, ideally, the hiring committee would read all the papers of each candidate, but for practical reasons, this isn’t usually possible. To address this, Jennifer said that eLife creates “a plain language summary of every single paper” and asks the authors to write “a statement on the impact that can go straight into your NIH profile” or job application.

Felipe also discussed the idea that ‘impact’ has become a maligned word, which should be refocused to address the real impact of a paper. In the long-term, your paper’s online webpage could be “like the seed of a tree from which you can follow all of the ramifications that your paper produced,” including confirmations by other groups and advances built on your findings, thus tracing “the real, tangible impact of a paper.”

Reproducibility
Felipe’s idea relates to both impact and reproducibility: linking published confirmations of a paper to the original work would make the story stronger over time. A commenter in our discussion argued for the validity of publishing work even if it has just been scooped, since publishing it confirms and strengthens the findings. Tony fully agreed, and recounted a recent scenario where he received a paper for review that had just been scooped by another paper in eLife, and they decided to send it out for review anyway for exactly that reason: “If it’s reproducing another paper in eLife, why wouldn’t you publish it? Obviously it came in close enough that they couldn’t have read the paper and then gone and repeated it.” The current official policy of eLife is that they will proceed with publication if you’re scooped while under review, but not if you’re scooped before you submit. Tony’s story shows that different outcomes are possible, however, and he said he would continue to push the idea of publishing contemporaneous papers at eLife.

Another commenter brought up the idea of asking renowned colleagues in your field to reproduce your experiments, either creating a separate publishable object, or adding it as a section in the original paper, recognizing those who did the reproduction. This would be a way to serve the scientific community, while also getting recognized for the work.

As for eLife’s current stance on reproducibility, Jennifer said that they have a reproducibility checklist underway. In the meantime, eLife is supporting the Reproducibility Project: Cancer Biology by “publishing the registered reports and replication studies for the 50 top cancer biology papers from 2010 to 2012.”

Join the discussion!

You can now watch the panel discussion event on YouTube! And please take our survey about the event, where you have a chance to voice your comments, opinions, and questions on these topics (whether or not you were able to attend in person). We will compile your answers and add an update here.

