On paper, academia is a dream for curious, independent people. In reality, the publish-or-perish cycle, with peer review at its heart, dramatically limits the creativity of scientific work.
I love my job. I think academia is a loophole in the universe. Not only do we get to follow our curiosity, we’re also granted unparalleled freedom. We choose when and where to work (“Let’s meet at 11 a.m., there’s a quiet coffee shop by the beach where we can sit and work on our draft”). Reading counts as working. We’re surrounded by smart and interesting people. Our efforts don’t need to have a bottom line; it’s enough that we’re doing something we find interesting. We travel and get to see the world. We can take sabbaticals.
That’s all great, in theory. But for some unknown reason, or maybe just due to historical accident, we spoiled it. Instead of using our privileges to maximize creativity, we trapped ourselves in the “publish or perish” cycle and outsourced publishing. This ruins it for me. Everything we do is viewed through the publishing lens: “Would it pass review?”, “Is it enough for a publication?”, “Can I graduate if I don’t have three first-author papers?”
There are many problems with scientific publishing, but I think the review system at its heart is a major source of failure. Peer review, in its current form, is a suboptimal (to put it nicely), archaic invention that limits our academic freedom and dramatically reduces the creativity of scientific work. Importantly, it also drains the joy out of the job. Tragically, we’re the ones to blame—we gave up control over the review process.
Everyone knows that publishing papers is painful. My Twitter account was dedicated to making fun of it. Publishing takes years, and scientists waste significant amounts of time and energy trying to “get past the editor” or “satisfy the reviewers.” We’re fooling ourselves. Moreover, peer review is often a humiliating experience, especially for students and postdocs. You can’t blame a young scientist for wanting to quit after reading cruel reviews. And to tell the truth, even I, despite being a tenured professor, fantasize about quitting every time a paper of mine gets rejected.
Some reviewers selflessly dedicate huge amounts of their time to provide high-quality feedback that genuinely helps their peers. That is fantastic and greatly appreciated. But it’s not always the case. It’s just as common for reviewers to behave unprofessionally. Reviewers are human, and humans are not perfect; it’s part of the deal. But even if you set aside the tone and attitude of the occasional bad actor, there are several systemic reasons why peer review often fails to assess scientific work properly:

1. It’s very hard: only a handful of experts can properly evaluate a given study.
2. Scientists are overworked: properly reviewing a paper takes many hours or even days, and we barely have free time.
3. Scientists aren’t really paid or credited for reviewing.
4. There are many potential conflicts of interest: the person who best understands your paper might be a competitor, or someone who dislikes you (or maybe just someone who skipped breakfast). Be careful not to step on anyone’s toes.
Multiple studies have shown reviewer bias, for example (though not only) against women and minorities. If you’ve published papers, I bet you’ve experienced the shortcomings of peer review firsthand. There are too many papers, and too few qualified reviewers.
The problem is: we really need review! We are desperate for high-quality feedback. There’s no doubt that good review improves the work. We need review to improve our own papers, and to identify solid science we can build on and trust. That’s how science progresses.
I started q.e.d with Niv in an attempt to reimagine review and scientific publishing. We want to make it pleasant, constructive, fair, and fast. We want to give the power back to the authors. The idea, in a nutshell, is to build a strong AI reviewer that provides authors with deep, insightful feedback. We want you to use this feedback to improve your work and, eventually, to refine and strengthen the entire scientific corpus.
Can AI do that? Despite their many incredible capabilities (e.g., data retrieval), AI systems are notoriously poor at critical thinking and often fail to distinguish between what matters and what doesn’t. What kind of feedback would they be able to give you on your research? Well, we’re building a “critically thinking AI” to overcome exactly these challenges. Our system keeps improving and can already do many of the good things that expert, hardworking, well-intentioned human reviewers do, while avoiding many of the bad things that lazy or malicious reviewers do.
Importantly, we know that an AI reviewer is not a “peer,” so we don’t think of AI-based review as peer review. “Peers” were originally nobles entitled to certain feudal rights; today, peers are people of equal status or ability. The q.e.d AI isn’t a person and it’s not our equal, but that might be an advantage. We want to democratize the review and publishing process by adding new approaches. AI review will do things most of us can’t or won’t do. For example:

1. It will know the entire literature.
2. It will read every word, including the supplemental materials.
3. It will re-run your analyses and check your calculations.
4. It won’t succumb to emotion (you won’t need to “thank the reviewer for the helpful comment” if it isn’t helpful).
5. It won’t compete with you.
For these reasons, I believe AI can complement human peer review and improve it. But you know what: just try it yourself! Upload a paper (or just an early draft) and the q.e.d AI will review it, suggest improvements, and, if you choose, generate a “bottom-line report”. It’s up to you what to do with it; the power is in your hands. You can share q.e.d reports, attach them to your preprint, or submit them alongside your cover letter. If you think a q.e.d report highlights your manuscript’s strengths, the editor might think so too, and it might improve your chances. Maybe the q.e.d report will be enough. Or maybe, with a q.e.d report in hand, the editor will only need to consult one additional human reviewer instead of three (saving months). The reports are live documents that continue to update as new evidence arrives. The platform also lets you respond to the AI’s comments and add clarifications, sparing you the trouble of writing a traditional “response to reviewers” (again, saving months).
We want review to be an interactive process, something you look forward to, something you do with colleagues. We want to change review from something you fear to something you embrace. We want the q.e.d platform to be a mentoring tool, helping team members and collaborators improve each other’s work every day.
Join us!