While I have complained here before a few times (justifiably, I think) about peer review, and about the occasional poor quality of some published papers (pointing no fingers; even the best of us write bad papers), peer review certainly acts as a barrier to truly dreadful things getting published and serves as a general quality filter on the science reaching the printed pages of journals. To a degree I think this is very important, and I’ll expound on that a little in the context of the debates / random thoughts going on at SV-POW! right now (try here, here and here for starters).
I came a bit late to these discussions owing to being in the field and thus have yet to jump in properly (so I may be repeating things others have already said, or getting things backwards; apologies if this is the case), but the comments of Mike Taylor especially touched on one point that I’d like to expand on. The issue is one of trust: what can you trust in the scientific literature or outside of it, and how can you judge this? That is, how likely is it that something you are reading accurately reflects reality (be it the data, the statistics, or even the opinion of an individual), or is partly or even very wrong? As Mike (and other posts on that blog) notes, the walls are starting to come down, with blogs, online media, journals, personal comments, Twitter and the rest all getting bundled together, lines blurring and starts-and-ends becoming increasingly difficult to spot. (Mike has a list of sources in order of relative trustworthiness about half-way down the linked post.)
However, I would argue that one important area has been omitted from this debate (or at least from what I have read of it; apologies if I have missed it): the difficulty of assessing how good or bad something is. The debate as I see it is being conducted by people who know a lot about science (or here, at least, vertebrate palaeontology), or care a lot about it, or both, and who thus have a good working knowledge of what kind of research is good, who is doing that work, in what way, and in which fields. Even coming into something blind, like plant physiology say, I think I have enough of a basic scientific mental checklist of what to look for in a paper or body of work to spot the good from the bad (or at least to say which of two given papers is better, or which hypothesis is better supported). However, many people lack this experience and skill set, most notably students and those just coming into science.
(As a side note, I have actually ‘tested’ former undergraduate students under my tuition by giving them strongly conflicting papers and asking them to evaluate the quality of the work / arguments in each – they often picked what I would call the ‘worse’ paper / article over the one with better science.)
This is an important issue. Even within the framework of peer-reviewed research in recognised journals, those without the necessary training and experience will struggle to pick out good science over bad. If the review process were eliminated (hypothetically; I don’t think anyone is arguing for that), or even if the boundaries were blurred much more than they currently are, then it would be much harder to do just this. How much harder would it be to tease apart the BAD-BAND stuff from, say, BCF if one could not distinguish a peer-reviewed, respectable online journal from an online, freely published, un-reviewed one without extensive digging into the background of the journal, its editors, and the authors themselves? You’d end up reading more and more papers and doing more and more research just to find out whether the research you wanted to read was any good. Similarly, the more licence we give to people to publish any thoughts / ideas / critiques etc. online, the harder it is to pick between them. Pterosaur research, for example, is a narrow field – there are probably only a dozen people regularly publishing pterosaur-related research (and I’m not sure I’d count myself in that list with all my current theropod work), but there are at least as many blogs dealing regularly with pterosaur research (admittedly with some people featuring on both lists), and still more commentary on forums and mailing lists. If you have 50 people ‘publishing’ stuff for every 10 researchers, with the quality of those comments and blogs varying from the moronic and uninformed through to the academic, how does a non-expert separate the wheat from the chaff as ever more words are piled in front of him?
With the retention of peer review, two things happen simultaneously. First, we get a kind of minimum-quality kite-mark attached to anything published. It might still be awful, but at least some referees and editors thought it worthy of publication, so you can have at least a basic faith in the quality of the research and, by extension, the researcher. Secondly, however, it also acts as a ‘volume’ filter. No matter how fast and streamlined peer review gets, how many journals show up, how many new specimens appear and how many new authors write scientific papers, there will always be far, far more blogs, e-mails and online media than there are peer-reviewed papers (with luck I’ll produce 10 papers this year, a huge number for me, and mostly as a junior author, but I’ll write over 300 blog posts in that time on my own). One can therefore also use the existence of peer review as a filter to cut past the sheer volume of material out there to that which is most likely to be worth reading, or at least to identify people whose non-peer-reviewed material is likely to be of high quality and worth pursuing.
I always remember the trauma of adjusting from being a zoologist interested in animal behaviour / ecology, fish locomotion and a bit of systematics, first to hardcore taxonomy and macroevolution, and later to reptile anatomy and pterosaur flight mechanics. Identifying key papers, key researchers, important concepts, and areas of controversy and consensus was not always easy (and was often very hard, most especially as a raw undergraduate suddenly shifting from a set textbook to being let loose in an academic library and told to write about ‘archosaurs’ [archo-what?]). I imagine doing that now would be much harder, and indeed, as noted above, there are worrying signs that students are not good at separating the wheat from the chaff. Eroding one of the few obvious and clearly defined barriers to this would only make it much harder still.