Posts Tagged 'science'

Constructing hypotheses on behaviour in the fossil record

Those keeping up with papers on palaeoethology may well have noticed that a number of papers have gone online in the Journal of Zoology of late with a common theme. Darren Naish has a paper on the behaviour of fossil birds, Andy Farke has one on combat in ornithischians, and Pete Falkingham has a paper on the interpretation of trackways. This is not a coincidence, but part of a special issue of the journal, out today, on behaviour in the fossil record, and all of these contributions will eventually be published together with a number of others in a collection I have assembled as a guest editor. The volume has ended up rather dinosaur-biased, which is unfortunate: a number of papers promised from other fields (including on whales and the Burgess Shale) never appeared, giving the set a more dino-centric appearance than I had planned or hoped for.

Added to this is, in fact, my own paper in the volume. This was something I had been working on for a while before being asked to compile the special issue (indeed the fact that I was working on it, and that it was intended for the journal, may have precipitated the invitation), and in context it was the perfect home for the paper. As with similar cases, I had nothing to do with the handling of my own manuscript: it was submitted separately and edited and refereed independently by the journal, and only after acceptance could it be added to the list. Most of the papers are reviews of one form or another, and in my case the paper, written with my friend Chris Faulkes, looks primarily at issues of hypothesis creation on behaviours for fossil taxa.

Our main contention is that in the past palaeontologists have been a bit overzealous in the production of hypotheses, and the way in which they have been generated has made them difficult to assess or even simply discuss; in at least a few cases they should probably not have been suggested at all. We don’t think it inappropriate to generate hypotheses that cannot be immediately tested, or that are generally difficult to assess, but a hypothesis must have at least some support behind it to make it valid in the first place, and poor use of terms, lack of specificity, or even use of fundamentally flawed concepts have meant that there are problematic ideas in the scientific literature.

Mutual sexual selection is perhaps a good example here. I’ve now penned a number of papers with various authors about the issues surrounding this idea and how it may fit into archosaur evolution. The point is not whether or not we are right about this, but more the fact that this was something hinted at by Darwin, written about by Huxley and extensively studied by numerous ethologists for decades, and yet many palaeontological papers discussed sexual selection purely in terms of dimorphism, or the assumption that sexually selected features should feature in only one gender, or indeed that sexual selection should be mutually exclusive of other functions. None of these things are true, so hypotheses that rely on one of these as a starting point are going to be fundamentally flawed, or at least problematic.

Thus the paper sets out to identify some key areas where we feel mistakes have tended to be made (myself drawing on examples from dinosaurs and pterosaurs, Chris from his area of expertise, the mammals) and then also to try to find a set of guidelines that might help the better generation of hypotheses, allowing for reduced confusion and better testing. Naturally we think this is going to benefit researchers, but given the rampant hypothesising that often accompanies discussions of the behaviours and ecology of extinct animals online and in other informal venues, it might just help clean up some of the more egregious suggestions that can be put forward based on the most tenuous of links. Some of it may sound excessively simple and even obvious, but that doesn’t mean it hasn’t been an issue in the past. I actually had a chat with an ecologist the other day who bemoaned a similar set of problems in her field, and I think the issue is more one of advancement and general improvement than systematic errors or poor science.

Naturally we did try hard not to pick on individual papers (or people) but we did also want to point to some specific examples of the kinds of problems we were discussing and so a few things get the finger pointed at them, but they have mostly had specific rebuttals in the literature already, or were very much generic issues. Hopefully then, we’ve not bent any noses out of joint. I was certainly grateful to Andy Farke for reading an earlier version to check for overall tone and to see if it was working the way we wanted. Anyway, here are a few of the things we looked at.

Terms need to be more specific. Talking about, say, ‘parental care’ in general terms isn’t very helpful when this can encompass pre- or post-natal care, or both, and differing degrees of commitment from parents over very different timescales. So a statement like ‘X showed parental care and Y didn’t’ may not mean much if the parental care shown was minimal, or two papers might say this where one is referring to all parental care and the other only to post-natal care, making them hard to compare.

Overlooking counterexamples or complexity. Describing species or clades as ‘social’ has been creeping into the literature on dinosaurs, and yet even if you do somehow have super evidence for sociality in a species, applying that to other taxa, or even other members of the same species, is not necessarily a great idea. While we do have highly social species that basically can’t function when not in a group (like some mole-rats), even famously social animals like lions often spend part or much of their time apart, and some, like cheetahs, can be incredibly plastic, switching between social and solitary multiple times in their lives. And yet it would be a big mistake to suggest tigers are fundamentally social because their nearest relatives, the lions, are.

Extreme examples or oddities are useful to provide context or even limits on ideas. Some species have incredibly specific requirements or only live in certain environments, while others are much more adaptable. You don’t really find sand cats outside of deserts or dry environments, and while lions show up in quite a few places, you can find pumas in everything from high mountains to prairies, deserts and rainforests, yet there’s nothing especially obvious about their osteological anatomy to suggest they could occupy so many more environments than the others.

Make sure the analogy, or the reasoning behind it, is actually correct. Not too long ago it was suggested that azhdarchids had long necks to reach into the carcasses of large dinosaurs. However, given that the heads of the biggest azhdarchids (estimated at getting on for 2 m) are already about as long as the longest sauropod ribs we know of (2 m), any kind of neck is redundant in this context, let alone a long one, and vultures do fine with short necks and heads while feeding on carcasses of animals many times their size. The analogy that the hypothesis is based on is fundamentally false, and if that is the sole support for it as a concept, then it’s really not much of a hypothesis.

The short version of much of this could well be summarised as “look more at the behaviour of extant organisms”. I know Darren bangs this drum a lot on TetZoo, and I’ve said it in plenty of talks and to lots of people, if less so online. It is confounding when people say that such-and-such a behaviour isn’t seen in reptiles when it plainly is, or that only animals with feature Y can perform a behaviour when it’s known in numerous species that are just less specialised towards it (or even show no obvious adaptations – like tree-climbing crocs). True, this may not be common or normal, but to assume that it’s impossible, or that there is a perfectly consistent correlation, is incorrect.

Part of the difficulty is a lack of good data on many of these things. Ethologists can simply observe behaviours and therefore don’t necessarily go looking for osteological or other correlates that we might be able to detect in the fossil record. That does make things harder, but we need to try and avoid getting trapped by ‘we don’t know if this correlates, therefore this hypothesis is valid since we don’t know’. I am actually not against (in principle) hypotheses that are difficult if not currently impossible to test, but as with the azhdarchid neck example, there is a difference between something that can’t be tested and something which is not even supported at the most basic level. A hypothesis has to have some support, and some specificity about that will go a long way to making things clearer and more amenable to testing, and will allow a better fit of existing and future data.

What is most remarkable is how far things have come so quickly. So many modern analyses are using things like FEA and functional morphological analyses, are looking for correlates of behaviour (or aspects of ecology that link to behaviour), and more and better comparisons to extant forms and their anatomy are being used. Such important work for our understanding of the biology of extinct animals should not be let down by poor hypotheses, and we do hope that, while things are improving already, this will help better communication and understanding of ideas.


Hone, D. W. E. and Faulkes, C. G. 2014. A proposed framework for establishing and evaluating hypotheses about the behaviour of extinct organisms. Journal of Zoology 292: 260–267.


Butterflies & moths

Another little display from the Carnegie I’ve had sitting in my files for too long. OK, so there’s nothing here that’s linked to archosaurs, or even evolution in general. But what it does do is address just the kind of question that often bugs people. I think a very big proportion of the public would recognise that moths and butterflies are close relatives and that they are different, but aside from the diurnal/nocturnal split and the fact that butterflies tend to be more colourful, they would probably struggle to say how you could tell them apart, or for that matter what linked them together.

My experiences with Ask A Biologist suggest this kind of thing is really common. People have bits of knowledge and part of the full picture, but don’t realise they have only part of the story, and even if they did, they don’t know how to go about filling in the gaps or putting their knowledge into context. In the case of AAB, someone has realised that they don’t know the full picture, or has had their interest piqued by some incident.

In this case it’s actively prompting people – it’s easy to imagine someone looking at this and thinking “Oh yeah, what *is* the difference?”. The headline is a nice attention grabber and it’ll get people to read the short captions below and, hopefully, get them thinking a little more about taxonomy and diversity (if not in those terms) and the world around them. In short, neat idea, well done. I can easily see this being a nice series too – a line of panels of ‘What’s the difference between a shark and a fish?’ or frogs vs toads, newts vs salamanders, goats vs sheep and the like.

What is also nice about this is how much is conveyed in such a small amount of space and so few words. Maximum communication, but without filling the space or making people struggle through dense text to get the message across, and all the time filling in other gaps in their knowledge with little extras like the addition of skippers or the relative numbers of species. Great stuff.

Referee selection roulette

The other day I had a little Twitter exchange with Andy Farke (of the Open Source Paleontologist) about the issues of finding referees for papers as an editor. Andy noted that there was not only a high refusal rate (people not wanting to review papers) but also some referees being repeatedly nominated as choice targets by the authors of papers. I’ve not done that much as an editor, and that’s probably why I’ve not seen as much of this as he has, but I can certainly see how it can be an issue.

Either as an author suggesting referees or as an editor picking them, there are lots of people to try and avoid. Clearly you’re not supposed to go for close collaborators or former students of the authors, as they might be biased, and equally you should avoid people with an axe to grind (oddly, many researchers don’t like you publishing papers that take down their pet hypotheses). You also need to try and pick people who provide good, fair reviews, and on time. I’ve catalogued some of my own travails with late referees before, and it’s not a lot of fun to wait months and months for a reply only to get a few lines’ worth of comment.

Of course the referee also needs to be an expert in the area(s) concerned. It’s perhaps not a big surprise that this can prove tricky. By the time you’ve eliminated the referees that can’t or won’t review something, the ones that are always late, the nemesis of the lead author, his former students and best friends, and the ones you have asked 10 times already this year you can imagine the pool runs very shallow indeed. If that starting pool is small enough or has a lot of antagonists (he said while totally not thinking about pterosaurs at all) then it’s perhaps not a surprise that editors can struggle.

While the pool can’t easily be expanded, it would appear that some people do need to be more willing to review at all, or on time if they do. I do know that some editors will keep a list of good and bad referees, but I wonder if any journals or editors offer feedback to referees (if they do, I’ve never had any or heard of it). It’s odd: we go to a lot of trouble for authors to reply to and comment on the feedback they get from referees and argue things through, but why is less attention paid to the referees themselves? They can be every bit as influential on the work, and certainly I’ve come across reviews that paint the referee in far from a good light. Is it time to start handling and even reviewing referees’ performances?

Not a fossil

One occupational hazard of being a palaeontologist is that it’s quite a rare field to be in, and yet fossil animals, and especially dinosaurs, are familiar to the public. Thus odd rocks and cattle bones can, to the untrained eye, look very exciting. With a regular feed of media stories along the lines of “Sam Smith found an odd bone on the beach and it turned out to be a new dinosaur”, it’s no great surprise that people are eager to push these to the nearest palaeontologist or geologist, no matter how far-fetched the idea or unlikely the interpretation. Pseudofossils cause particular problems, but any old chunk of rock or bone can be prized as a shell, dinosaur bone or mammoth tusk.

It’s a common enough problem that people in the past have had to take action. I got hold of this recently from one of the curators at the Natural History Museum in London. It’s rather old (the fact that it has a telegram address is rather a giveaway, as is the style of the phone number) and shows that even what was (I’d guess) 70 years ago or more, it was considered an important time-saver to have these printed up with the most obvious candidates pre-listed and ready to be checked off.

This is true of other fields as well. Archaeology perhaps unsurprisingly suffers from a near identical syndrome (prompting this piece of humour), but others get it too. I recall from Simon Singh’s superb book on Fermat’s Last Theorem that so many mad and bad attempted proofs of the theorem were sent to a university professor, who was supposed to assess them, that he had thousands of cards printed, similar to that above, that ran along the lines of “Dear…., the first error in your proof is on page….. line….., thus the proof is flawed”. It’s an ongoing struggle and one we cannot win. But in the meantime it can at least be fun.

The incredible links across science

A few days ago I discovered that the paper on the flexion of theropod wrists in the ancestors of birds that I contributed to has been cited in a journal I would never have expected. Namely ‘Frontiers in Psychology’, and the intriguingly titled paper “Sea Slugs, Subliminal Pictures, and Vegetative State Patients: Boundaries of Consciousness in Classical Conditioning”. The paper appears to be open access, so you can read it here if you so wish. Naturally this is quite cool and odd, but it does make the point about just how connected very disparate bits of science can be. When we wrote the paper we were thinking purely in terms of bird anatomy, evolution and behaviour, and no thought ever passed that this would be linked to sea slugs, let alone psychology.

I’ve been asked enough times (generally in a friendly, rather than confrontational, manner) why we should fund palaeontology etc. with no apparent applications to mankind. There are two stock answers to this. First, that knowledge should be cherished in its own right and we should try and learn about our world and its past, present and future. The second, which I think is more intriguing, is that it’s hard, even impossible, to see how some bits of science might fit together. I’d never have predicted our work would be used to make a point in a psychology paper.

This does show just how interlinked very different branches of science can be and how they can interrelate. Ultimately all science is linked of course, but it need not be just by the obvious slight overlaps between say dinosaurs and fossil birds to living birds to flight to biomechanics to aerodynamics to physics etc. but that huge leaps across the science network are possible, even if they’re not that common.


Online resources for palaeontologists

I was chatting to Mike Taylor the other day about Cladestore, as I couldn’t find the page I needed, and was surprised he didn’t know of it. To be fair, it did start off well and then rather sank, but the principle is sound and it seemed relevant enough that he might know of it. It is, in short, an archive for the various files and datasets used in phylogenetic analyses. Obviously these are generally published alongside any paper they feature in, but typing them out again, or taking the raw data and formatting it into a usable form, can be a pain, and it’s not always easy to get things out of the original authors. The idea therefore was to create an archive for these files so they were easily accessible to all. Since this does seem little known, it’s well worth advertising. And I should add that despite its slight antiquity, I believe they still take submissions, so send ‘em your nexus and tre files.
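For anyone facing that reformatting chore, pulling a character matrix out of a simple NEXUS file takes only a few lines of scripting. This is just a rough sketch – the taxa and file layout below are hypothetical, and real NEXUS files can be much messier (interleaved matrices, comments, quoted taxon names), so a proper phylogenetics library is a better bet for anything serious:

```python
# Rough sketch of reading a character matrix from a NEXUS-style file.
# Assumes a simple one-taxon-per-line MATRIX block; the example taxa
# and scores are made up purely for illustration.
def read_nexus_matrix(text):
    """Return {taxon: character string} from a simple MATRIX block."""
    matrix = {}
    in_matrix = False
    for line in text.splitlines():
        line = line.strip()
        if line.upper() == "MATRIX":
            in_matrix = True          # start of the character matrix
            continue
        if in_matrix:
            if line.startswith(";"):  # a lone semicolon ends the block
                break
            if line:
                taxon, chars = line.split(None, 1)
                matrix[taxon] = chars.replace(" ", "")
    return matrix

example = """#NEXUS
BEGIN DATA;
DIMENSIONS NTAX=2 NCHAR=5;
MATRIX
Tyrannosaurus 01010
Velociraptor  01110
;
END;
"""
print(read_nexus_matrix(example))
# {'Tyrannosaurus': '01010', 'Velociraptor': '01110'}
```

Once the matrix is in a dictionary like this, it is easy to recode characters, drop taxa or re-export for a new analysis, which is exactly the kind of reuse an archive like Cladestore is meant to support.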

Coupled with my reference to the Paleobiology Database earlier, this got me thinking. It would be nice if there were a single, simple, one-stop shop for all manner of palaeo websites and online resources that are useful to researchers and those interested in the field. So I’ll try and create one, as it’ll help me learn and, I expect, help my colleagues. So, anything you can think of, do submit it below. I’m thinking general stuff – a database of tyrannosaur specimens or pterosaur papers is fine, but it won’t be of much use to too many people, so it’s not really worth putting here. I’m thinking of major resources that cover whole fields or are simply so vast in their data collection that they are must-know-abouts.

Here are the few I can think of; add yours below and I’ll package them all up. And do spread the word please – blog and tweet this. This could, I think, be very useful to a lot of people.

VertNet – online registry of vertebrate specimens (recent and fossil)

iDigBio – index of specimens in museums (often with photos)

Cladestore – archived phylogenetic datasets

Morphobank – more phylogenetic datasets

FigTree – creates phylogeny diagrams for publication

Palaeobiology Database – data of fossil specimens, deep and wide set of data

Tree of life – phylogenetic tree of the whole diversity of life

Palaeontology Journals – Jerry Harris’ lists of journals, major and minor, that publish palaeo papers

Rankings of Palaeo Journals – Kenneth de Baets’ list of journals and things like IF, SJR, OA etc.

Polyglot Paleontologist – translations of non-English papers

The Marsh Archive – PDFs of papers by Marsh

Stratigraphy.net – archive of stratigraphic data

Phylogeny programs – list of phylogenetics software

Morphometrics – various resources for morphometric analyses

Morphobank – hmm, link doesn’t load for me…

Digimorph – digital anatomy archive of extant and extinct taxa

Comparative osteology database – mostly mammals and a few birds, but very good

3D skulls – Witmer Lab visualisations and scans of various taxa extant and extinct

Paleoportal – search museum collections for specimens

Data Dryad – data of all kinds from published papers

Figshare – data of all kinds from unpublished studies.

Biomesh – FEA models and properties.

Biodiversity library – huge archive of books and papers.

Microstrat – stratigraphy database

I’ve started adding these as the comments come in so it’s easier for people to see and avoid duplicates rather than have to hunt through the comments to see if they have been suggested or not.

On review papers

Papers that act as summaries, syntheses of data, or basic, outright reviews are both important and successful parts of science. The prominence and importance of journals like the Quarterly Review of Biology and especially Trends in Ecology and Evolution shows their relevance, and let’s not forget that classic academic texts like The Dinosauria or Romer’s Osteology of the Reptiles are more or less reviews of the existing literature. Yes, of course there are new interpretations tacked onto these and corrections made to taxonomies, anatomy and the rest, but mostly they are a compilation of the most important and significant papers on the subject and present a consensus view of the current scientific positions.

Reviews are really useful. After all, it’s impossible, or at least exceedingly hard, to dive into a new subject from scratch (or even keep up to date effectively, or simply refresh your memory). This is true of course for academics, but also for students of all levels, technicians and the general public – we all have to start somewhere. Review papers (in general) provide a foundation on a subject, giving the first principles of the issues at hand and the outline of what is known, and how, and what it means. A review is not an end in itself; anyone with serious pretensions to work in that subject should be reading much further and wider, but it will be the place to start and of course provides a great resource for a given topic.

However, oddly, it seems an awful lot of journals don’t really like publishing them. While there are dedicated review journals out there, my recent experience with them is that they are overflowing with submissions, or with requests from people to produce submissions, so clearly lots of people are writing them or want to write them. The huge interest in things like TREE, and the massive citations accrued by papers in them or by something like The Dinosauria, shows people are using them. However, a great many journals simply say that they will not publish reviews or review-like manuscripts (or similarly useful things like catalogues of specimens, lists of localities or whatever). We seem to be in the odd position where people want to write something, readers recognise its value, it is widely used and cited, but the journals don’t want it.

Moreover, even the review journals can be difficult when it comes to reviews! A paper of mine was ultimately rejected from QRB because one referee demanded that we include new primary data and an analysis of it. I don’t disagree that a review can still contain new data and new ideas, but really? A review paper in a review journal has to contain new work and analyses? Even those journals that do publish reviews seem to go in for either the tiny (TREE papers are just a few pages) or the monolithic (Annual Review of Earth and Planetary Sciences), with little scope for something in, say, the 10-page range.

A few more journals willing to accept such manuscripts or even a couple more journals dedicated to reviews would seem to benefit all and sundry and I for one hope this can be encouraged.

On why I really, really dislike all things that rank papers

It’s a long one, best get a cup of tea and make sure you’re in a comfy seat… Ready? OK.

Science has long struggled to rank the worth of the actual science itself. There are all kinds of metrics to rank journals and papers and the contributions of authors. I have yet to meet one that I don’t profoundly dislike, and it really all comes back to the same central point: all of them seem to be so massively dependent on factors that have very little to do with the actual quality or use of the research. I also know that (and understand why) hiring committees need quick and dirty ways to cut through several hundred applications to a few dozen, and these kinds of things will scream out at them for use. I am, frankly, worried that I (and many other colleagues) might be missing out badly for no other reason than that these metrics can be biased against or towards various kinds of research or researcher, regardless of their actual ability or the quality of their work.

Have a few too few citations or lose a few points on an index and you might never make the shortlist no matter how good or valuable your research is. Moreover, this will promote practices that are at best, not conducive to good science.

Let’s start with the recent version set up on Google Scholar (for those interested, here’s me). This is pretty standard as they go, it looks at my citations, top papers, and a couple of indices that basically look at how often my papers get cited. I have a few problems with some of the things it’s doing at the moment and it’s new and needs work, but the basic principles are the same here as on other metric sites.
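For those unfamiliar with how these indices work, the one Scholar leans on most heavily, the h-index, is simple to compute: it is the largest h such that you have h papers with at least h citations each. A quick sketch (the citation counts here are entirely made up) shows how two records with the same total citations can score very differently:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # still have `rank` papers with >= `rank` citations
        else:
            break
    return h

# Two hypothetical publication records, both with 60 citations in total:
print(h_index([50, 4, 3, 2, 1]))      # one blockbuster paper -> 3
print(h_index([12, 12, 12, 12, 12]))  # evenly cited output    -> 5
```

So the index rewards a particular shape of citation distribution as much as any overall impact, which is part of what makes comparisons across fields so fraught.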

So what’s the beef? If my work is good and is being read and cited then my ranks will go up right? Well yeah, in theory. But are these things happening correctly? What things might skew how often (or rarely) a paper is cited (or counted as being cited) and how relevant is that?

First off, there’s an obvious one. If you’re cited in a paper, you get one citation. But someone whose work is critical to a manuscript might be cited dozens of times, while a single tangential point or general review paper might pick up a mention somewhere. Both are scored equally – one citation. So right off the bat, two papers can appear to be equal when they are not. I was recently delighted to get a copy of the new Scipionyx monograph from Cristiano Dal Sasso. Included was a note telling me I was being gifted a copy as I was one of the most cited authors in the paper. Only one of my papers was cited, in a reference list that ran to something like 400 papers – so, in short, on this occasion at least, a single ‘ping’ doesn’t represent the significance of the paper.

Secondly, things can be cited often even if, or even because, they are wrong. How many papers are there out there on birds and their dinosaurian ancestry which mention the BAND group? Most of them give it at least a token mention in their introductions, which means the same half dozen BAND papers rack up the citations even though they are only ever being cited by people saying they are wrong! On a not unrelated note, a big pool of papers that make the same point may be sampled more or less at random (no need to cite 50 papers to say birds are dinosaurs), or the same few pick up all the hits, even if there are better or more appropriate ones out there.

There are also a lot of journals out there which simply don’t get picked up by the indices at the moment, because they’re not considered of sufficient calibre or are simply rather obscure, and so citations in those journals won’t be added to the list. You can make a case that if only minor papers are citing your work then it can’t be that important, but I think this isn’t right. After all, the biggest journals count just as much as the smallest ones – there’s no direct rank by journal quality – and I’d argue there’s a bigger gap between the biggest and smallest journals that do count than between the lowest ones that are ticked off and the best that are not.

Subjects with numerous researchers are likely to rack up citations far faster than smaller research groups. There are probably four or five people working on theropods for every one who works on pterosaurs, so assuming people publish similar papers at similar rates, theropod papers might get four citations for every one a pterosaur paper picks up, even if both are of hypothetically similar value in quality and usefulness. Chance can play a big part here too: I remember a theropod-worker colleague of mine noting wryly that his one paper on a mammal (a tooth he’d happened to find in the field, which turned out to be very important) had accrued more citations than his entire back catalogue of dinosaur research combined.

Some of these ranks are dependent on rates of citations too, or only count those accrued within the first 2-3 years of publication. Well again, some journals are much faster than others, indeed some entire fields are. I know in some branches of science, 2-4 weeks in review is normal, and submission to publication can be in weeks. There are few palaeo journals that are not measured in many months for those kinds of turnaround times, so it’s simply harder to get a few citations that quickly.

So all of these have obvious problems. Someone can write a terrible paper on HIV say, but with lots of researchers out there, and all of them keen to stick the knife in, it could rack up hundreds of hits fast in major journals. But a truly brilliant and groundbreaking paper in a relatively obscure palaeo journal on a subject with only a handful of specialists might take years to get half a dozen. According to these indices (or for that matter an outside observer or non-expert) the former will look much more appealing than the latter.

Moreover, these things can also be manipulated, or at least have the potential to be. People can cite themselves where they don’t need to, to get a few more hits in. Cartels might form of people citing each other to jack their citations up, or supervisors (or even referees and editors) can pressure people to cite their work. People might start splitting big papers into multiple smaller ones, each of which can then cite a few things and bulk the number up again. Or you can put each other on your papers to bump up the number of papers you have apparently contributed to and get all the free citations that go with it down the line. A brilliant student might still struggle to get papers published in good journals if they are not getting the support they should, and a poor student can be gifted credit on papers in major journals by a generous and talented researcher (and I know the latter already happens – it’s dispiriting to meet an alleged author of a paper and discover they don’t speak English, or on one memorable occasion, realise they are on the paper you’re talking to them about….).

Other metrics have been tried or are being considered, like numbers of views or downloads, or number of pages published. Again, these will vary enormously between different fields but can also be screwed up. I remember my Microraptor paper coming out, and a colleague got it early and e-mailed it to a couple of massive mailing lists. Within minutes, hundreds of researchers had a PDF (whether they wanted it or not). A few days later I checked the PLoS metrics, and according to those about half a dozen people had downloaded it, and only a few dozen had visited the page. But then that would happen – no-one needed it because they already had it! But not to worry, it could always be jacked up: just set it as required reading for a course taught to a few hundred undergrads and the numbers can soon skyrocket. Or be savvy enough to get it pimped on the right media site and you can drive thousands of people to the page.

What about numbers of pages published? Stick to small format journals, make sure your figures are big, pack in extra references and use some nice big tables. The number of pages will soon go up.

In short, I have yet to see a metric that is anything but highly capricious or that takes any real account of these problems. Bad papers in popular fields with fast turnaround times and short manuscripts will surge ahead of a field with few researchers who tend to turn in long papers of superb quality. Moreover, there's an obvious risk of escalation – people can start tailoring their work to these ends, focusing on more popular fields, keeping papers short, bumping up their citations (especially to their own work or that of close colleagues) and so on. None of this is good for science.

Discussions with a number of colleagues show that hiring committees, promotion boards and grant bodies are actually using these metrics, or ones like them, to decide things like who gets money or a job. For someone working in a field where turnaround times are huge, papers often long, and the number of colleagues small, you can see why I'm worried. I may be competing for positions with people who apparently have a much greater academic record simply because they work in a popular field. I can't and don't expect a prospective employer to read, let alone understand, a whole bunch of papers on theropod ecology, HIV transmission and fish mechanics, but equally, if your only evaluation is an H-index or the number of citations in 2 years, it's clearly weighted (or can be) for one field and against another. Sure, a theropod researcher is going to spot the better student of a pair of people working in the field, or understand that the egg specialist is likely to suffer from a lack of citations compared to the maniraptoran worker, but that's always been the case.

I freely admit that there's no obvious solution (better minds than mine have looked, I'm sure). And yes, there is certainly something to be said for these metrics: good papers will, I'm sure, on average, get more citations than bad ones. But at the same time it's hard to look at these metrics and how they are built and think they are entirely fair, and 'on average' is fine until you discover you're the one at the end of the statistical tail getting shafted by it. Some fields, and some people, are going to suffer. And these metrics look like they can be manipulated relatively easily, in ways that will benefit not the subject but those who bother to do so.
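For readers unfamiliar with the H-index mentioned above, its definition is simple: it is the largest number h such that h of your papers each have at least h citations. A minimal sketch of the calculation (the citation counts here are entirely made up for illustration) shows exactly the field-size bias I'm complaining about – a researcher with many moderately-cited papers can out-score one with fewer but far more heavily cited ones:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    # Rank papers from most to least cited; the h-index is the last
    # rank at which the citation count still meets or exceeds the rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical numbers: a small-field researcher with a few well-cited
# papers versus a popular-field researcher with many modestly-cited ones.
small_field = [50, 30, 22, 15, 6]
popular_field = [12, 11, 10, 9, 9, 8, 8, 7, 5, 4]

print(h_index(small_field))    # -> 5, despite 123 total citations
print(h_index(popular_field))  # -> 7, despite only 83 total citations
```

Note too how easy the number is to nudge: a handful of self-citations spread over the right papers is enough to push a borderline paper over the threshold and raise h by one.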

Where are my papers?

Let’s face it, it’s been a while since I had a good complain about something, so in the usual holiday manner (the spirit is supposed to be merriment, but let’s face it, the tradition isn’t!) here’s someone grumpy complaining. There’s another moan to follow tomorrow, but I’ll sweeten the deal by following this with comments on theropod sociality and my reviews of the zoo and aviary in Pittsburgh.

………………………….

It’s customary for me to whine about reviewers and editors periodically, and for once it has been a while since my last effort. However, the Christmas break has allowed me to try and catch up with a few little things, one of which has been to see what has befallen various papers I’m involved in and whether there has been any news of them. While I do have a pretty large volume of manuscripts with various journals, to be honest it’s not pretty reading. Now sure, there are valid reasons for papers being delayed (and of course the Christmas period doesn’t help), but you would hope that the occasional paper would run to time, or be processed in a timely manner.

By my count I currently have 9 manuscripts out with journals. Of these, based on the ideal review times listed by the journals, or what I can remember from being asked to review for them, 8 are now late. The last one will be late if it’s not back to me this week, and I have good reason to think it won’t be.

There are a couple more which have recently been returned, one in a timely manner and one late, and there are a couple of book chapters which are literally years overdue. More than that, despite contacting editors about them, in some cases I have no news at all of what is happening to the paper (including one submitted in July!), and in one case the manuscript is awaiting assignment of referees when it is a resubmission. You’d assume they’d be sending the paper back to the same people, and even if those people refuse to review something a second time, does it really take three months to send it out?

In short, my manuscripts are late by the standards of the journals themselves. In more than one case things are profoundly late, and in a couple I can’t even find out what has happened to the manuscript. Referees seem to run late as a near matter of course, and often they are given months to review something a handful of pages long anyway, something that annoys me profoundly. But when papers aren’t even being sent to referees for weeks, even months, it’s very annoying. Even if a referee is superb and turns around a review in a few days, if it didn’t reach him for weeks, or the review doesn’t reach me for weeks, then the whole thing is going to be late. It’s especially galling when journals try all these little tricks, like publishing uncorrected proofs and the like, to get papers out as early as possible. So it’s clear they value a paper that’s ready for publication being hurried into availability, but then they make no effort at all to actually have papers edited or refereed in a timely manner.

Now sure, maybe this is happening to everyone, but really, is that an excuse? As I’m fond of saying about this, writing that review, or mailing out to ask for referees, or checking a set of corrections, or whatever, takes the same amount of time to do today as it will in 2 weeks, or 6 weeks, or even 6 months. And while you might be busy this week, and even next, I don’t think it’s excusable to sit on something for months at a time. It does the author a disservice, and for that matter both the journal and the field as a whole. Science is not served by papers, perhaps important papers, being held up by months, even years, because people won’t do the work they said they would.

Is it really this bad for palaeo, or am I profoundly unlucky? Looking back over my past papers and various submissions, I would say the average review time for a manuscript of mine is about 5 months, and I’ve had half a dozen that were over 6 months from submission to return. Conversations with colleagues suggest that I have had some bad luck, and that the extremes I’ve occasionally suffered (over a year on 2 occasions, and several more over 6 months each) are the exception. Even so, I’d be intrigued to know what this is like for my colleagues and indeed for those in other fields of science and research.

Mutual sexual selection in dinosaurs and pterosaurs

As of last night my latest paper has come out, coauthored with Darren Naish and Innes Cuthill. Those with access to the journal Lethaia can get it here. Believe it or not, I’ve been juggling with the idea of whether or not to blog about this for quite some time. This is, I think, the most significant paper that I’ve produced and it’s the product of literally years of work (though at least part of that was as a result of very difficult editors and referees at various times – this was started back in 2007!) and I’m really rather proud of it.
Then why not blog it? Well, the short answer is that this is a long and complex paper and it ultimately deals with a huge range of difficult issues (and not at the length we’d have liked – it had to be cut down severely to fit the journal and we still incurred page charges). It touches at various times on pterosaurs, sauropod body size, various ornithischian lineages, theropod sociality, and the origins of feathers, among other themes. All of this means that it’s very hard to blog about and cover the salient points for a non-expert audience without writing thousands upon thousands of words and, well, I did that for the paper.

This is obviously counterintuitive for a blog that is effectively about science communication, but I can’t do everything all the time (I certainly haven’t blogged all of my papers of the last few years). Moreover, in my experience, a paper like this, which rather stomps a bit over some much cherished hypotheses, tends to attract a huge number of comments along the lines of “but what about *this* contrived example!”, which I can assure you gets very annoying when people won’t let it drop.

None of this means I *won’t* be blogging it at length. But I know it’s likely to be covered a bit elsewhere on the web, and thus it’ll look odd that I’m not doing it right away, so it seemed sensible to provide an explanation up front. What I will at least talk about is mutual sexual selection – it’s right there in the title and the abstract and is, I suspect, a concept unfamiliar to most, perhaps nearly all, readers. It is, after all, something almost entirely absent from the literature on dinosaurs and pterosaurs: Darren and I could only find two other references to it ever, and one of those was what we put into the Taylor et al. paper on sauropod necks, while the other sprang from Portsmouth. So it’s something that’s only really just coming into the literature.

Sexual selection is probably familiar – the idea that some traits are selected for by the opposite sex and can drive the development of bright colours, crests, displays and all manner of other things. The obvious example that’s endlessly used is the train of the peacock, which makes the male look very different to the drab female. This is typically coupled with sexual dimorphism (again, like the peacock) where the male is bigger than the female and has the extra ornaments etc., and males compete for females, with the best males advertising their fitness through the size and quality of their ornaments (though in some cases, like jacanas, this is reversed, with bigger females).

So far, so simple. Mutual sexual selection is simply an extension of this to both genders. Both males *and* females are ornamented (or rather, have sexually selected traits) and just as males are competing with other males for the best females, so too the females are competing with each other for the best males. This means that dimorphism is limited or even absent – both genders having such traits. This is, in fact, well known for quite a number of bird species, and the number of papers on the subject in living species is increasing in leaps and bounds.

In the paper we hypothesise that this may have been common in the ornithodirans. It explains (potentially) quite a lot and solves a couple of previous paradoxes about crest evolution and development. Critically, it means that you *don’t need* dimorphism of a feature for it to be sexually selected – both genders can have a crest and it can still be a sexually selected feature. This needs testing; the paper does little more than lay out the conceptual issues and the evolutionary biology and ecology behind the hypothesis, but at the same time, I think we do have some pretty good support for our ideas.

But as ever, what really needs to happen is for you to go and read the paper! And yes, I do have a PDF if you want it.

 

HONE, D. W., NAISH, D. and CUTHILL, I. C. (2011), Does mutual sexual selection explain the evolution of head crests in pterosaurs and dinosaurs? Lethaia. doi: 10.1111/j.1502-3931.2011.00300.x

Traps for journalists to avoid

Quite some time ago I put together a post advising journalists on how not to screw up their coverage of palaeontology. It seemed to have mixed results, but at least it’s out there. Recently a friend of mine asked me if I had any more general advice (knowing how to write clade names is not really much use in a story on physics) and I decided to have a crack at it. Some of what I put first time around is still relevant, but here I thought I would focus on how bad stories make it into the news – or rather, stories that should never have been reported.

Any researcher will tell you that there are regular stories in the media that are built on nothing but hyperbole and BS. Now, this is not necessarily the journalist’s fault – he’s chasing a good story and here is one on a plate. It sounds good, has enthusiastic backing from the researcher who is giving up their time to promote it, so let’s run with it. So what’s wrong with it? Here are a few tell-tale warning signs.

 

Is there actually a proper paper? If the story is coming from a conference abstract, grant proposal, self-published manuscript, website etc. then simply leave it be. If the work cannot get past peer review, or has not tried, it has not even passed the most basic test of the scientific process. You’re simply asking to be taken in by a nutty idea that has slipped, unreviewed, into a conference (and quite possibly sneakily – the content of a talk can be quite different to the title). If there is at least a proper paper in a proper journal, that’s a good start. (Note: even some ‘proper’ journals publish non-reviewed papers occasionally. This is dropping away, but it does happen.)

 

Does the content of the paper match what you are being told? Again, a dishonest researcher can easily publish a paper on, say, ankylosaur taxonomy, but then push a press release about his amazing new hypothesis on how they could run at 50 mph backwards. So read the press release and read the paper. Do the two match, or are you being pushed something that’s not really supported, or even mentioned, in the supposedly ‘groundbreaking’ research paper?

 

Is this really odd? For sure, some amazing papers appear on occasion and can be well supported and taken to heart, as it were. But if something looks very odd, and if it’s only appearing in a very short manuscript with little text and few figures or references, then I’d be smelling a rat. It seems too good to be true – something this cool and new, yet it can all be explained away in just a few hundred words and a drawing? Hmmmm. If so, call / email a few people. Ask around. And try to avoid regular collaborators of the person in question – their friends might well support them. But if you keep hearing “he said that? really?” then be careful. It might have got through peer review, but no-one seriously buys it.

 

Stick to these and you should be able to avoid a mountain of stupidity and disingenuousness. Sure, some other guys are going to report on these stories and very occasionally you might miss out. But ultimately, if your job is to inform the public, you are doing them a far greater disservice by putting out confident and supportive articles on utter nonsense than you are by occasionally missing something. If a major percentage of what you tell people is wrong (and let’s face it, these big, exciting stories are really appealing because they are so shocking or seemingly impossible) then you might as well not bother. So stick to the well-reviewed papers and make sure they match what you’re being sold. It’ll benefit you, the reader and the researcher.

Extinct: a Horizon guide to dinosaurs

The third and final show of Wednesday’s hat-trick of dinosaur shows was an odd Horizon special. For those who don’t know, Horizon is (or sadly, rather was) the UK’s flagship science show on the BBC, with really detailed explanations of properly cutting-edge science. It has rather fallen away in the last 10 years or so and become a bit more about flashy graphics and controversy, though it’s still important. (Oddly enough, I managed to chat with a former producer of the show a couple of years back and he lamented how far he felt it had fallen, so this is not just me whinging.)

This show was an odd conglomeration of clips from various Horizon shows over the last thirty-some years, showing how our impression and understanding of dinosaurs has changed. As a result, this was for me almost the opposite arc to ‘How to build a dinosaur’, in that it would probably have been of more interest to an expert than to the general public. The clips themselves were fascinating, and it was genuinely great to hear people like John Ostrom and Luis Alvarez talking about their then brand-new discoveries, to see the reaction these brought from their contemporaries, and to look at how all this was presented and explained to the public at the time.

However, in order to cram in a fair bit of this kind of stuff there was a noticeable lack of real background to each clip, and the whole thing was a bit disjointed. That’s no surprise really: Horizon generally does a great job of building the story, giving the audience the background, and showing why the experts are at the heart of things and how they got there. Shorn of that, you’re left with little more than a series of talking heads and quick exposition on a long and complex subject – and of course one that was novel perhaps decades ago.

As such, I found it fascinating, as I knew the history and the science and the people involved, so it all fitted together fine for me. However, I do wonder if the casual viewer was not a bit lost, being somewhat bombarded with three decades of developments in dinosaur science and dotting around through bird origins, the K-T extinction, homeothermy and other topics, all in an hour. Still, it was great from a scientific and historical perspective and I at least enjoyed it thoroughly.

 

