We built q.e.d to scale scientific critical thinking, evolve evidence-based answers, and empower research decisions across disciplines and time.
Critical thinking is integral to everything we do. From the moment we wake up, we make countless judgment calls: deciding what to wear, evaluating whether this is the right bus to take, or choosing which car to buy. In our day-to-day decision-making we’re constantly required to apply sound judgment, and in many cases we can’t, so we just hope for the best.
In the work of a scientist, it’s even more crucial, abundant, and challenging. It shows up when evaluating a surprising observation (is this a real phenomenon?), a published paper (can I trust these claims?), the robustness of a control experiment (what am I missing here?), a meta-analysis, a new hypothesis, a reagent, and more.
When we started consolidating our thoughts about developing an AI capable of critical thinking applied to the life sciences, we quickly realized that it requires a deep dive into the philosophy of science. Popper famously taught us that you can’t “prove” things in science (you can only falsify them), but in real life this is often not practical, especially when the very thing we aim to falsify (or validate) is still being defined in real time. If there is no “ground truth” for a novel claim, how can we establish whether our study design and our findings are valid, reproducible, and complete?
q.e.d stands for quod erat demonstrandum: “that which was to be demonstrated.” In mathematics this signifies that a theorem has been logically proven, a binary yes or no. But science thrives in the gray. While you might be able to prove some elements of your scientific claim to some extent, producing a holistic, bulletproof body of work is, oftentimes, elusive.
There are numerous examples of studies that have attempted to reproduce previously reported discoveries, only to find that they couldn’t. The reason, as we see it at q.e.d, isn’t necessarily misconduct or incompetence. It’s simply that science is, well, hard! True findings discovered in one lab, under that lab’s specific conditions, can sometimes hold for that lab alone. As scientists we are often faced with a choice between multiple compromises, constrained by time, resources, and attention. Should I sequence 1,000 samples to be 90 percent sure? I could, but it’s probably impractical.
We all, even non-scientists, often turn to the scientific corpus for answers. How much coffee is good for my health? Should I start a keto diet? Is this the right medication for this specific patient? All of these are questions whose answers live within the scientific corpus. But because the scientific corpus is far from perfect, with varying and unpredictable degrees of validity, making a sound, logical judgment becomes very hard.
To address this, century-old mechanisms were developed to assess and establish the validity of discoveries. At the core lies the traditional peer review system. But reviewing a scientific paper is hard. It requires extremely high attention, hours of work, deep specialization in a specific field, and pure, unbiased judgment. And it’s imperfect because we, as human beings, are imperfect. But what if we could empower, elevate, and refine this process with specialized solutions? The ripple effect could run wide. Papers and entire domains could be evaluated at scale. A single question, for example “how many cups of coffee should one drink?”, could now be answered accurately. Imagine a world where every paper that touches upon a specific question is evaluated efficiently, quickly, and accurately. The combined validity of that body of work could be assessed, and a time-relevant answer could be produced. And next week, when a solid new paper comes out and undermines a previous conclusion, the status of the “truth” would be automatically updated.
No more static literature reviews, but instead, answers that evolve. Bits of knowledge, constantly evaluated, constantly refined.
This would empower everything from time-critical research decisions to understanding mechanisms of action, determining which experiments are best suited to a given question, and even answering non-scientific questions (about coffee, for example).
And it all starts with the ability to judge a specific scientific claim or set of claims, a task that is as scientific as it is philosophical. That’s where q.e.d began.
We aim to tackle the years-long, tedious, and often imperfect process of evaluating a research discovery, release that bottleneck, and scale it to unlock the true potential of the scientific corpus.
But even before that, our focus is on easing the pain of the review process itself for scientists around the world, helping them get their research validated, published, and cherished.