Community Content
This article has been written and edited by the research team. It has not passed through the quality control procedures applied to content produced by the Research Outreach team.
April 27, 2022

Peer review, fake knowledge, and the quest for simple and useful science

Reports of scientific results in the media are often prefaced by the warning 'not yet peer reviewed', implying that they are only provisional. But does peer review really, and reliably, distinguish sense from nonsense, or truth from fake knowledge? In practice the process is often unreliable, mistakes are made, and it encourages the fragmentation of academic knowledge into narrow specialisms whose audience is often largely confined to workers in the same area. The scholarly communication ecosystem needs to be opened up by extending the evaluation process beyond the peer group.

Peer review is the process that most academic journals use to check the credibility of papers submitted to them. Typically, the editor chooses two or three ‘peers’ – academics working in the same field – and sends them a copy of the paper to review. When the reviews come back, the editor decides whether to publish the paper unaltered (rare in many fields), to ask the author to revise it in the light of the reviewers’ comments, or to reject it. This process can sometimes take a long time if successive revisions are sent back to the reviewers for comment: my record from submission to acceptance is four years.

Having your paper published in a peer-reviewed journal is important for the careers of academics. Promotions and reputations depend on it. The status of the journal is important, which is why media reports often name the journal to enhance the credibility of an article. Peer review is – supposedly – what differentiates ‘science’ from ‘fake science’.

But, if we step back and look at this system from the perspective of an outsider, the problems are almost too obvious to be worth stating. Papers are reviewed only by peers – people working in the same field. What other profession could get away with ignoring the views of other stakeholders: potential users of or audiences for the research, experts in areas such as the statistical techniques the research may rely on, and so on? Furthermore, there are typically only two or three reviewers, chosen by the editor, whose identities are, in most cases, not revealed, and who are not formally paid or rewarded for their work, so may not be as diligent as they should be. This is the opposite of a transparent process. All the reader gets to know about the result of the reviewing process is the fact that the paper is published in a named, hopefully prestigious, journal. But what criteria were used in the decision to accept? And there is occasionally the suspicion that editors might be more motivated by helping their mates than by advancing human knowledge.

Are two or three reviewers sufficient to come to a reliable and accurate assessment? I got interested in this area when I asked a colleague organising a conference if I could look at the results of the reviewing process. What I found astonished me. Each paper submitted to the conference was reviewed by two people: in an ideal scenario we might expect each pair of reviewers to agree – both would say either accept or reject the paper. In fact, the number of pairs of reviewers who agreed with each other was slightly less than would be expected if they were choosing accept or reject by tossing a coin (the exact figures are in Wood et al., 2004). Even apart from the problems discussed in the paragraph above, this reviewing process was so unreliable as to be useless. This was an extreme example: the degree of agreement is likely to vary depending on the academic field and other factors, but in general, reviewers agree less than one might expect. And even when they agree, they may not be right. Relying on just two or three peer reviewers means that mistakes will happen; peer-reviewed journals do sometimes retract articles, and fail to publish important and credible ones (see Wood, 2021a for some examples).
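The coin-tossing baseline is easy to check with a short simulation. This sketch is illustrative only (the paper counts and acceptance rate are assumptions, not figures from Wood et al., 2004): two reviewers who each decide accept or reject independently at random, with a 50% acceptance rate, will agree on about half of all papers.

```python
import random

def agreement_rate(n_papers: int, p_accept: float = 0.5, seed: int = 42) -> float:
    """Simulate two independent reviewers who each 'accept' a paper
    with probability p_accept, and return the fraction of papers on
    which the pair agrees (both accept, or both reject)."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(n_papers):
        reviewer_1 = rng.random() < p_accept
        reviewer_2 = rng.random() < p_accept
        if reviewer_1 == reviewer_2:
            agree += 1
    return agree / n_papers

# With p_accept = 0.5 the expected agreement is exactly 50%:
# P(both accept) + P(both reject) = 0.25 + 0.25 = 0.5.
print(agreement_rate(100_000))
```

Note that for any acceptance rate other than 50%, chance agreement actually rises above 50% (for an acceptance probability p it is p² + (1−p)²), which makes observed agreement at or below the coin-tossing level even more striking.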

The defence of restricting reviews to peers is that, given the technical nature of much academic writing, peers, colleagues in the same academic discipline, are the only people who know the field well enough to make meaningful comments. This makes sense, but the danger is that they may be so steeped in the prejudices of their discipline that they fail to see things that are obvious to outsiders. There are strong arguments for input from workers in other relevant fields, and other stakeholders.

There is also a problem with specialist journals. The obvious outlet for some research on the risks of bungee jumping may be the Annals of Bungee Jumping, which aims to bring together all research on this topic. However, there may be relevant material in other journals – perhaps Trampolining Studies, or general risk management journals. There is a very large number of academic journals (one estimate is 30,000), which may make it difficult for readers to find all the relevant research on their topic. And even what they do find, they may not understand if they are not familiar with the jargon and conventions of each specialist field.

A few years ago I wrote an article on the problems caused by the growing volume and complexity of academic knowledge, suggesting that it is often possible to simplify it without losing any of its usefulness. This could have a range of obvious benefits: specialists reaching the frontier of their discipline sooner, and greater accessibility for other academics and the general public. Human knowledge is getting too complicated, so it is always worth trying to make new developments as simple as possible. However, I was then faced with the problem of finding a journal: I couldn’t find one specialising in the simplification of knowledge. This is a minor example of another problem with specialist journals: they make it difficult for new ideas to get a foothold. I eventually published the article in an education journal (Wood, 2002), but the argument went far beyond education. I have since written a further, more general, article on this theme (Wood, 2021b).

I think the best way forward for the academic ecosystem is rather obvious. The process of distributing academic papers (and other outputs) should be separated from the process of review and validation. There are many repositories where academic papers can be posted on the web before they have been accepted by, or submitted to, a journal. Some of these (e.g. arxiv.org, ssrn.com) are restricted to particular subject areas, but at least one (preprints.org) is multidisciplinary. In an ideal world, there might be a single global repository so that the problem of the 30,000 journals is completely avoided. However, what is missing is any sort of evidence that the paper has been vetted by the appropriate experts: this could be provided by organisations whose purpose is to review academic work on a particular topic.

There might, for example, be a reviewing organisation devoted to assessing research on COVID-19. It could consider papers published in any repository, or in any journal, and examine them from medical, statistical, economic, behavioural and whatever other perspectives were deemed appropriate. This would include both peer review and review from other standpoints – which should encourage the criterion of simplicity to be taken more seriously than it is at the moment. It could offer both a quick ‘OK from perspective X’, and a more detailed review for people who are interested. Under the peer-reviewed journal system papers can only be published in one journal, but under the system I am proposing here it would be possible for another reviewing organisation (perhaps one focusing on infectious diseases in general) to assess the same paper. This system would be far more flexible and responsive to the changing environment and the needs of different audience groups than the present one. Like journals, reviewing organisations would depend for their reputation on the quality of their reviews.

I think it is inevitable that the scholarly ecosystem will evolve in this direction over time. Some aspects of it are already in place in a few corners of academia, but the stranglehold of peer-reviewed journals, and their publishers, is still strong. I have given a more detailed analysis of this proposal in Wood (2021a), which also has references to other suggestions and publications on this theme.

References

Wood, M. (2002). Maths should not be hard: the case for making academic knowledge more palatable. Higher Education Review, 34(3), 3-19.
Wood, M., Roberts, M., & Howell, B. (2004). The reliability of peer reviews of papers on information systems. Journal of Information Science, 30, 2-11.
Wood, M. (2021a). Beyond journals and peer review: towards a more flexible ecosystem for scholarly communication. arxiv.org/abs/1311.4566v2
Wood, M. (2021b). If knowledge were simpler we would all be wiser. papers.ssrn.com/sol3/papers.cfm?abstract_id=3911835.

Written By

Michael Wood
University of Portsmouth

Contact Details

Email: michaelwoodslg@gmail.com
Telephone: +07411145424
