5 Comments
Sorcelators

As far as peer review goes, TBH I think the whole system we have now is kinda dumb. In physics, we have arxiv.org, where pretty much everyone puts their papers, often as pre-prints or even drafts, before they're published. This is great because it makes it easy to find anything, and it provides a good home for articles that are valuable but would be difficult to publish in a journal (like notes from a conference, or some review articles).

So I think that is a good solution for the "availability" part, and I think it's important to divorce it entirely from the review part, since the two have very different purposes and incentives.

"Review" is always tricky because sub-specialties are often small enough that everyone knows everyone anyway, so any anonymity can be difficult. Making only the referees anonymous ends up with the bad incentives we have now, where you get asked to cite irrelevant papers, add unrelated trivia, and so on.

I think the right thing to do isn't to modify procedures like you're proposing, but to carefully re-engineer the review part of the system to have better incentives for high quality.

Eg, one can imagine a system where referees are anonymous while reviewing, authors do not have to accept their advice to get published, and once the paper is published, the referees' names become public along with all of their comments, addressed and unaddressed.

It would be very embarrassing to publicly see your name next to your ignored "please cite my papers" suggestion!

It would also be nice to see comments and questions posted under papers, though it would be hard to keep that useful, even if you restrict commenters to previously-published scientists or something.

For methodology, it would make sense to split the paper into two independently published and reviewed parts: a pre-registered methodology part that's done at conception, and a (possibly years-later) results part. It's much harder to sneak in methodology changes that way. And if referee comments are public in both cases, you get benefit for each part independently, without mixing review-related incentives for results with good methodology.

Sorcelators

I'm glad to see someone outside of the hard sciences understanding that results per se are not interesting. That's a key reason that there's no replication crisis in hard sciences.

Though I would say it's not quite the "methods" that are critical. They're very important, but what's more important is knowing the underlying model as well as possible, how reliable it is, what its nature is, and carefully keeping track of its assumptions and the argument's assumptions.

For example, in intro physics, you learn that the force due to friction is equal to the normal force times a material-dependent coefficient of friction 'mu'. But it's important to know that this is *fundamentally* an approximation (ie, it is *never* exactly true), that there are multiple approximate parts to it (mu being a material-dependent constant, the equation being linear in N, no area dependence, etc), that the approximation was chosen primarily for computational convenience, and that it's easy enough to check, per situation, whether the approximation is good enough.
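
To put the rule itself on the page (the standard intro-physics form, stated here only for concreteness): F_friction ≈ mu * N, with mu a constant for a given pair of surfaces. The honest way to write it is with "≈" rather than "=", since each of the simplifications above can fail.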

But that's very different from, eg, F=ma, which is (more-or-less) definitionally true. Or the statement that energy is conserved, which is a theorem you can prove from how Newton's laws are structured (Noether's theorem) and which must apply exactly to every system, even ones whose descriptions are approximate or unknown.
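
To spell out the contrast for non-physicists (a standard classical-mechanics fact, added only as background): energy conservation isn't adopted for convenience the way the friction rule is. If the laws of motion have no explicit time dependence, Noether's theorem guarantees a conserved energy E with dE/dt = 0 exactly, for every system those laws describe, however messy or incomplete your description of that particular system is.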

When I read papers outside of the hard sciences, authors rarely seem to make a clear distinction between these cases, and moreover often inappropriately assume that their model is of a different "kind" than it really is.

Ruben C. Arslan

You haven’t heard of registered reports? Peer review happens both before and after the results, but the publication decision is based on the intro and methods.

Nicholas Decker

That doesn't make much sense in the context of economics, given that the data exists and needs only to be analyzed. And what do you do if there are hurdles which can't be cleared? If the data doesn't exist?

Ruben C. Arslan

The data that will actually be analyzed do not always exist, and not just in experimental economics. Coding, merging data sources, etc. is often laborious enough that people might not want to repeat the process after being told to change something at Stage 1 by reviewers. We did it like that for a recent paper, and I think it helped the process.

But even if the data exist, you could have review and decisions before results to decrease bias, then another review to test computational replication (robustness checks should ideally also be formulated before results or at least it should be clear which were post hoc).

Of course it’s more work to review twice, but to me it’s work I’m more willing to do, since my modal response to being asked to review is to think "this paper should never have happened; its method killed it before any data were collected" (I’m a psychologist). Also, it might not be that much more work, since it might reduce the number of review rounds.

Re hurdles: as I envision it, you’d try to get this right in the planning stage. But if something seems like it would work in a well-laid plan and then doesn’t, that might also be interesting to hear about, so at least people don’t repeat the effortful mistake.
