Reviewing “Meet Science: What is Peer Review?” Part Two
Continuing my comments on the Boing Boing article on scientific peer review by Maggie Koerth-Baker. As I said before, I only slapped myself in the head a couple of times while reading this article, so I consider it pretty good. Once we got past the difficulties in describing accurately what a journal article is and is supposed to do, the description of what goes on during the peer review process was (scarily) accurate.
Part Two, in which we talk about the behind-the-scenes aspect of peer review: who are peer reviewers and what really goes on during the peer review process?
When an article is submitted to a peer-reviewed journal, the editor chooses two or three people in the field and sends them the article to read and comment on. As I said in Part One, the reviewers can comment on any part of the work, from the initial hypothesis to the inclusion of references. The only guideline is usually that the article fit the quality standards and theme of the journal. These are admittedly very vague criteria.
The theme of the journal is usually pretty obvious; no one submits an article on cells of the immune system to a journal called “Neuroscience” unless it contains information they think will be important to people who study the brain and nervous system. But for journals like Cell, Science and Nature, the criteria for fitting the theme are vaguer, and usually come down to “high impact” papers (read: from a large, well-known lab, unless your idea could change the world, in which case maybe they’ll let you in).
If the theme of a journal is vague, the quality standards necessary for publication are even more so. These standards are largely unwritten and, as Ms. Koerth-Baker points out, left to the individual reviewing the paper to determine.
“For the most part, scientists are not formally trained in how to do peer review, nor given continuing education in how to do it better. And they usually don’t get direct feedback from the journals or other scientists about the quality of their peer reviewing.”
This is true, for the most part. Unless a student’s advisor gives them articles to review and then checks over their work to provide feedback, there are few ways to learn how to critique an article and little feedback on how relevant or helpful the comments are. Since advisors have very little time to even review papers, let alone get someone else to help them and guide their progress, this only happens if a student is very lucky and has an advisor dedicated to the well-rounded training of students.
There is one formal mechanism where young scientists are given training and feedback in peer review, at least where I was trained: journal clubs. Journal club is like a book club.* One member of the group picks an article that they think will interest the group. Then everyone reads it, and the person who chose the article presents a summary of the article and their thoughts on the quality of the work. Graduate programs often have journal clubs for their students. Everyone thinks they are a big pain in the you-know-what because they take up a lot of time that we (and our advisors) feel that we should be spending doing research. But they can be valuable tools not only for learning about new articles that come out in a field, but for learning how to read and critique a report on a scientific study.
“If a paper is peer reviewed does that mean it’s correct? In a word: Nope. Papers that have been peer reviewed turn out to be wrong all the time. That’s the norm. Why? Frankly, peer reviewers are human.”
Um, frankly, scientists are human. If the ideas in a paper turn out not to be “correct,” it could be that the peer reviewers didn’t do a very good job. Articles that are a load of crap get published all the time. Usually, the less well known the journal, the more likely this is to happen.
But even well reviewed papers in good journals have ideas that look correct at first, and then turn out in the end to be not really the way things work. That’s just how science is. Ideas get published, then people in other labs look at those ideas and think about ways that they can refine and extend them. Sometimes in refining and extending them, they find out that the idea in the original paper wasn’t exactly correct. Or that there were aspects of the idea that the first group didn’t see or didn’t get around to addressing. This is why, like we said in Part One, it is very rare that one journal article is the definitive word on any scientific idea.
“You should think critically and skeptically about any paper—peer reviewed or otherwise.”
Yes, so true. Whenever I read anything someone has written, my critical thinking cap is always on. The main question to keep in mind whenever you read a piece of scientific research is “does this make sense in the context of the field?” Context is very important whenever scientific ideas are discussed, which is why every paper begins with a discussion of the field in general and the context in which the new ideas presented should be placed. If the data in the paper can connect to other ideas published by other groups, this makes the whole field stronger. If the data in the paper are refuting other people’s ideas, or appear to be in conflict, the authors should carefully explain why they think that others’ ideas may not be entirely correct, or why their ideas are different but still fit into the overall framework of the field.
This brings up a related point that some science journalists miss, which is that in order to adequately review a paper you must first know the field, and if you are taking your review to a wider audience, you must first explain the field. The right context for scientific ideas is not always the same as an individual’s worldview, but when the current knowledge is not explained properly, that’s often the only context a reader has.
“Scientists are frustrated that most journals don’t like to publish research that is solid, but not ground-breaking.”
This is true. If you are the first person to come out with something truly novel, you may get published, if your lab is headed by a big-name researcher, or you may not, if your lab is more obscure. If you come along somewhere near the beginning, after the idea has been accepted, you’ll very likely get published. But if you are following up on an idea that has been “done,” especially if it’s to publish more evidence that it is right, you will have a hard time finding a journal for your ideas. Which is odd, considering the point from the previous paragraph: context is critical, and connecting to and being supported by other ideas in the field is an important thing for a new publication to achieve.
Ironically, the opposite is true when it comes to funding. Scientists are also frustrated that government agencies like the National Institutes of Health seem to want to fund research that is more of the same solid stuff that has been done before and will work again, even though they say they want to fund research that is risky: potentially ground-breaking but potentially a spectacular failure. But that’s a story for a different day.
“They’re frustrated that most journals don’t like to publish studies where the scientist’s hypothesis turned out to be wrong.”
Ah yes, “The Journal of Stuff that Didn’t Work.” How we would love to have that journal. It would be the largest, most widely read journal in all of science, right next to “The Journal of Lovely Preliminary Data for Ideas Ultimately Too Hard to Complete.” Why? Time for research is limited, so if someone else already tried a series of experiments and it didn’t work out, it would be nice to know so as not to waste time going down the same path.
From the comments section:
“I expect the cost for most modern scientific experiments to be rather exorbitant — time, materials, equipment, staffing etc. I also understand that existing research is based on findings of earlier research, but in that case, how many scientists will first go back and verify the original findings, before going forward with their own research?”
All the time. We have to. Repeating a couple of experiments from someone else’s publication is a good way for a lab to figure out whether the system they have set up matches the system another lab used to create their data. If the two systems are similar, it will be easier to fit the ideas from both labs into the same context and advance the field. However, these repeats are rarely published, because journals are often looking only for the new and novel, not independent confirmation of exactly the same experiments. This is why labs often create elaborate new systems in which to test their ideas, instead of building on others’. It increases the novelty of their work, but also the difficulty of comparing it to and linking it with other work in the field.
*I know this is an odd thing to say when in Part One I ragged on the author of the original article for likening journal articles to book reports, but, such is life.