Across publishing, standards of evidence have become practically random, IMHO. I was agog with disbelief when I realized that staid, respectable scientific journals were blatantly presenting false information as fact, in some cases destroying a century of hard-won integrity in a single paper. Being able to track adherence to standards of evidence, or the failure to adhere to them, may go far toward restoring trust in resources whose integrity has been discounted merely because they sit in fields where others have abandoned it. I'm thinking of BMJ, for example, which has not abandoned its integrity where JAMA and NEJM did. Most people may not be aware of this because they haven't read these journals and discerned which ones lapsed and which held to their standards; they just discount the entire edifice of science publishing, and even science itself.
It's interesting you bring up academic journals - that level of academia operates on peer review! However, anonymity in the peer review process makes auditing and transparency difficult. The other thing about academia is that different disciplines have different standards of evidence. Each community of discourse establishes its own, and I think there's value in those differences that a p2p system could express.
This is true. There are also the ways reviewers are assigned to papers, and other mechanisms that can undermine the integrity of the peer review process.
Thanks!