Welcome! This week we take a look at a super important topic for all academics (and their egos) – research impact.
I have to start by coming clean again. In my never-ending naivety I thought that academic life would be one of harmony and endless intellectual debate. There would be no right or wrong; no better or worse; just good arguments and productive discourse. Most importantly, no politics, personal or otherwise. Turns out I was dead wrong. The more I delve into this world, the more I realise that academia is just as much a competition as everything else in life. The life of an academic is a constant race where there can be only one winner (and you'd damn better make sure it's you).
In academia, you get evaluated based on how productive you are and how much of an impact your research seems to have. In other words: how many peer-reviewed articles you've published, in which journals and how quickly; who you've cited, and who's cited you back… You basically get assessed based on the quantity and quality of your scholarly output. Sounds pretty fair, yeah? But how does it work in practice – how do we decide who's the very best (like no one ever was)?
Well, we, in all our wisdom, have created different tools for this. Of course we have. We've even developed an entire field of science around it – it's called Bibliometrics. Look it up – it's a thing. (You may also be interested in checking out Altmetrics, a more contemporary (read: more relevant) competing approach.) I'm not going to go into much detail here (read the wiki pages if you really care). I'll just leave this here:
“The underlying assumption of bibliometrics is that, by citing, scientists are engaging in an ongoing poll to elect the best-quality academic papers. But we know the real reasons that we cite. Chiefly, it is to refer to results from other people, our own earlier work or a method; to give credit to partial results towards the same goal; to back up some terminology; to provide background reading for less familiar ideas; and sometimes to criticize.
There are less honourable reasons, too: to boost a friend’s citation statistics; to satisfy a potential big-shot referee; and to give the impression that there is a community interested in the topic by stuffing the introduction with irrelevant citations to everybody, often recycled from earlier papers” (Werner, 2015).
So yeah, that's basically how your friendly neighbourhood academic is evaluated among their peers. Harmonious? No. Fair? Nuh-uh. Unbiased? Yeah, right.
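If you're wondering what one of these bibliometric scores actually boils down to, here's a rough sketch of the best-known one, the h-index: the largest number h such that you have h papers with at least h citations each. The citation counts below are made up for illustration – real services compute this from their own citation databases.

```python
def h_index(citations):
    """Largest h such that there are h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # every later paper has even fewer citations
    return h

# A hypothetical publication record: one citation count per paper.
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
```

Note what the metric rewards: a long tail of barely-cited papers does nothing, and one blockbuster paper doesn't either – which is exactly the kind of design choice people argue about.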
Disclaimer: I agree that there's good science and bad. It only follows that there are good scientists and bad ones, and this leads to the need to somehow measure it all. I get that. I just think that metrics are often given way too much value, and I think it's important to say it out loud every now and again. In my opinion, research shouldn't be carried out just to boost one's personal ratings. It should be carried out because it's meaningful, because it has a real impact. Because it does good out there, in the real world. But how to measure good… That is the question!
See you later!