Breaking down the analysis:
Newspaper articles, verdicts, blog posts, and other investigative pieces aren’t equipped to break the analysis down into manageable components, and even if they could, they wouldn’t be able to integrate them into a final conclusion correctly (read why). Rootclaim provides a platform for such a breakdown, and uses proven mathematical models to combine the results into an accurate likelihood for each hypothesis.
Objective calculations counteract flawed human intuitions:
Since human intuition is innately limited when it comes to probability, using mathematically sound models ensures that our faulty intuitions don’t lead us astray. In addition, readers can easily miss flaws in the arguments when they’re hidden in well-written, well-structured prose. Since Rootclaim analyses are a direct mathematical product of the evidence, they can’t rely on rhetorical or literary devices to disguise faulty arguments.
No cherry-picking of evidence:
Investigative articles don’t have to consider all the evidence; they can cherry-pick only the information that supports their views. A Rootclaim analysis must explicitly list all the relevant evidence under the “Evidence” section, which is open to public scrutiny, encouraging the crowd to ensure the inclusion of all relevant evidence, from all sides of the issue.
Seeing the whole picture:
Since the author of an article paints a certain picture in order to convince you that their conclusion is correct, they won’t necessarily consider all the alternative hypotheses. A Rootclaim analysis must explicitly list all relevant hypotheses, and assess their likelihood in the same process.
Considering source reliability:
Some articles rely on eyewitness accounts, reports, and statistics whose accuracy or relevance is questionable. Conversely, they may discount other sources entirely just because their accuracy is debatable. To deal with both problems, a Rootclaim analysis assesses each source probabilistically, incorporating it while accounting for the possibility of mistakes or fabrications.
Other analytical methods such as court proceedings and investigative committees address some of these shortcomings (especially getting a full picture of the evidence and hypotheses), but still rely on human intuition to combine all the evidence into a conclusion - something we know is well beyond the brain’s capabilities.
Humans like to think that their reasoning is very “logical”, and if only the other side would listen to their sound logic, they would be convinced.
Sadly, logic has little applicability to real-world problems, and very few people have ever made an interesting logical deduction in their entire lives.
Read more about this issue on our blog post about logic.
Probabilistic models are flexible, fundamentally sound, and definitive, and they let us incorporate all the available information into one model. Once we’ve correctly assessed all the inputs, Rootclaim calculates the results, processing the connections in a way that the human brain can’t.
Probability theory allows us to break a complex issue into many smaller questions that are easier to answer using research, statistics, and human reasoning. Once these questions are answered, the model provides a clear conclusion that is calculated (not estimated!) mathematically, and is therefore the most accurate answer the inputs can support.
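The combination step described above can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not Rootclaim’s actual model: each small question yields, for every hypothesis, an assessed probability of seeing that piece of evidence, and Bayes’ rule integrates them into a posterior.

```python
# Minimal sketch (illustrative numbers only) of combining many small
# likelihood assessments into one posterior via Bayes' rule.

def posterior(priors, likelihoods_per_evidence):
    """priors: {hypothesis: prior probability}
    likelihoods_per_evidence: list of {hypothesis: P(evidence | hypothesis)}
    Returns normalized posterior probabilities per hypothesis."""
    scores = dict(priors)
    for likelihoods in likelihoods_per_evidence:
        for h in scores:
            scores[h] *= likelihoods[h]
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

# Hypothetical two-hypothesis case with three pieces of evidence,
# assumed independent given each hypothesis:
priors = {"A": 0.5, "B": 0.5}
evidence = [
    {"A": 0.9, "B": 0.3},
    {"A": 0.6, "B": 0.4},
    {"A": 0.8, "B": 0.7},
]
print(posterior(priors, evidence))
```

No single input decides the outcome; each piece of evidence shifts the balance multiplicatively, which is exactly the kind of bookkeeping intuition handles poorly.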
Indeed, reality is very complex, and any attempt to model it will, by definition, be a simplification. However, not being able to perfectly model everything does not imply that a meaningful "simplification" can’t go a long way in reducing uncertainty—and this is the goal: minimizing uncertainty as much as possible.
Rootclaim utilizes the best methods humanity has developed to reduce uncertainty. Other approaches (opinion pieces, academic papers, court decisions, investigative committees, etc.) may be valid, but are usually less accurate representations of reality than the models created here.
Simply put, when relying on the Rootclaim method, you will be wrong less often.
Structure of the model:
The probabilistic model itself is a mathematical calculation, so it doesn’t ‘know’ or care which information was entered by which side of the issue (or by how many different people).
If information is missing or incomplete, this could definitely lead to a flawed result. However, this is rare, since the analysis only requires that the main evidence for each side be provided, and this can be done by even one person. A piece of evidence doesn’t weigh more just because 100 people provided it rather than a single person—if the evidence ends up supporting a less popular hypothesis, then that’s what the results will show.
Assessment of the inputs:
Similarly, the likelihood estimates are not based on voting, but rather on statistics and sound reasoning. The structure of the analysis forces us to break down this reasoning into small increments, minimizing the potential for bias and manipulation.
In addition to our internal research methodologies that mitigate bias, we make each analysis open to the crowd from all sides and encourage anyone to improve the analysis and add any missing evidence that may influence the outcome. We regularly add evidence, update inputs, and even add entire hypotheses based on input from the crowd. In order to help improve future Rootclaim analyses, and to find out when we publish new analyses, you can follow us on Facebook or Twitter.
What defines probability is not the observed event on its own, but the uncertainty of the observer with respect to the event. Probability is a representation of the uncertainty that the observer has, regardless of when the event may have happened.
For example, if a die has already been rolled inside a closed box, the observer is still uncertain about the outcome. The fact that there has already been a real outcome - say, a six - does not affect the observer’s uncertainty, which remains at ⅙ for each outcome, just as it was before the throw. Read more
Interestingly, this mistake was also made by the English Court of Appeal.
Probability reflects the uncertainty of the observer with respect to the event. Since probability is a representation of this uncertainty, it can differ between observers, and change over time for a single observer:
For example, two friends, David and Steven, consider the probability of rolling a six. David uses a computer vision system to track the die’s position and velocity immediately after the throw, and based on that information assesses the probability of having rolled a six at 90%, while for Steven the probability of having rolled a six remains at ⅙.
If David uses his tracking system to continually refine his assessment of the probabilities as the die flies through the air, the system will provide fine-tuned assessments at every moment. In that case, a single observer (David) has different probability assessments for the same event at different times. Read more
Rootclaim shines when there is lots of evidence regarding an event, but no clear winning hypothesis as to what actually occurred. Since human intuition does not work well in complex situations with lots of interwoven details, Rootclaim’s proven probabilistic analysis lets us get to the bottom of the issue with much less bias, and much greater accuracy than any other method available.
Currently, Rootclaim focuses on factual controversies with a discrete number of hypotheses. For example: Who shot down Malaysia Airlines flight 17? In the future, Rootclaim will expand to also handle quantitative analyses (e.g. How much carbon dioxide was in the atmosphere 10,000 years ago?), specialized models (e.g. What is causing global warming?), forecasting (What will be the earth’s temperature in 2050?), and decision-making (e.g. Should the US increase taxes on carbon emissions?).
After taking into account all of the pieces of the puzzle (including the “starting point” of a hypothesis and its compatibility with all the evidence), the Rootclaim probabilistic model integrates all of the elements, and concludes how likely each hypothesis is overall.
So what does it really mean if a hypothesis turns out to have a 90% probability? If we run 10 different analyses and in each one the leading hypothesis results in a 90% likelihood, we should expect about 9 of those 10 hypotheses to be true.
Note, however, that the analyses consider only the most reasonable and interesting hypotheses—not necessarily every possible variation of events. This means that the conclusions tell us how likely each hypothesis is, relative to the other hypotheses. It would be rare for an analysis to overlook a hypothesis which is dramatically different than the ones considered, but it is possible.
Wherever possible, Rootclaim uses statistics to determine the effect of each piece of evidence. For example, Flight MH370 didn’t broadcast a distress signal before it went missing. To estimate the effect of this evidence, we use statistics of past crashes, showing no cases of distress signals during pilot suicide, compared to at least 75% of hijackings that did send a distress signal.
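The MH370 example boils down to a likelihood ratio. The sketch below uses stand-in numbers consistent with the text (no known suicide case sent a distress signal, hedged slightly below 1.0; at least 75% of hijackings did send one), not Rootclaim’s actual inputs:

```python
# Illustrative likelihood ratio for the evidence "no distress signal was sent".
# Numbers are stand-ins suggested by the statistics quoted in the text.
p_no_signal_given_suicide = 0.98    # hedged below 1.0 despite zero observed cases
p_no_signal_given_hijacking = 0.25  # since at least 75% of hijackings sent one

ratio = p_no_signal_given_suicide / p_no_signal_given_hijacking
print(f"Absence of a distress signal favors suicide by a factor of ~{ratio:.1f}")
```

A ratio near 4 means this single piece of evidence makes the suicide hypothesis about four times more likely relative to hijacking than it was before, whatever the starting point.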
Even when there are no applicable statistics or exact calculations, it is still important to quantify the effect of the evidence using the available knowledge. In human reasoning, conclusions are reached by integrating the evidence in the mind, without explicitly quantifying the effect. But since a conclusion is reached, and the effect does exist, there is some number implicit in the process. It is far better to make a best-effort estimate, even an inaccurate one, than to ignore the evidence altogether.
Using a different example, it is known that approximately 40% of suicides include a suicide note, while there are no publicly available statistics on the prevalence of fake suicide notes in homicide cases staged to look like a suicide.
In cases like this, we estimate the effect using common sense and analysis of the motivations. In order to protect the conclusion’s robustness from mistakes in these estimates, it is our policy to skew estimates so they support the less likely hypotheses. Such estimates are marked by the keyword ‘conservative’ or ‘generous’, as appropriate. Future versions of Rootclaim will integrate the confidence of the estimates into the analysis, avoiding the need for this protection.
A common mistake in human investigations is to focus on only those pieces of evidence that match the assumed narrative, without considering them as part of a larger process generating large amounts of evidence, some of it matching and some of it not matching.
For example, a judge who is presented only with the matching information derived from an interrogation could wrongly deduce that the suspect knows many details about the murder scene - which is very unlikely if he is innocent. However, when all the questions asked and all the wrong guesses are also considered, the distribution may turn out to be exactly what is expected from an innocent person trying to guess details to please his interrogator: he gets the easy guesses right at the frequency expected by chance (e.g. correctly guessing around half the questions that have two equally likely answers). By examining all the evidence from the source together - including mismatches and missing evidence - its significance can be assessed more accurately.
It is interesting to note that if enough information is gathered and we have a good reason to believe it wasn’t filtered and cherry picked (for example, if it’s all taken from a continuous recording from the interrogation), then the number of hits, misses, and their probabilities could together provide very strong evidence for one of the hypotheses - meaning that that exact distribution is extremely unlikely under all the other hypotheses.
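The hit/miss reasoning above is a simple binomial question. The numbers here are hypothetical, chosen only to show the shape of the calculation: if an innocent suspect faces n two-option questions and guesses at random, the number of correct "hits" follows a Binomial(n, 0.5) distribution.

```python
# Sketch with assumed numbers: how surprising is a given hit count
# if the suspect is merely guessing between two equally likely answers?
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k hits out of n independent guesses."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, hits = 20, 11  # hypothetical interrogation: 20 questions, 11 correct
prob = binom_pmf(hits, n)
print(f"P(exactly {hits} hits by pure chance) = {prob:.3f}")  # around 0.16
```

A hit count close to n/2 is entirely consistent with guessing, whereas a count far into the tail would be strong evidence of genuine knowledge - precisely the comparison between hypotheses described above.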
If a hypothesis is reasonably likely even before examining the evidence, one would need less evidence to back it up. However, a hypothesis that starts out already unlikely needs more evidence to prove that it’s true. As Carl Sagan said, “extraordinary claims require extraordinary evidence.”
Imagine that you and your friend frequent a neighborhood swimming pool. One day, your friend comes back from the pool and tells you, “My friend Alice was at the swimming pool today.” You are likely to believe that Alice was there and the evidence is true. The next day, your friend tells you, “George Clooney was at the swimming pool today.” You would be less likely to believe that the famous actor was there and the evidence is true. The day after that, your friend tells you, “Aliens from Mars were at the swimming pool today.” You won’t believe that aliens from Mars were there or that the evidence is true.
In all three cases, you got your evidence from the same source (your friend), and the evidence was stated in exactly the same way (“[X] was at the swimming pool today”). The only difference is how plausible each situation is in general. It is more plausible that a random neighbor would be at the pool than George Clooney. And both of these are much more plausible than aliens from Mars. Although it’s not absolutely impossible that aliens from Mars would show up at your neighborhood swimming pool, you’d certainly need a lot more evidence before being convinced.
This difference in prior probabilities is crucial to understanding how likely each competing hypothesis really is to begin with. If you only look at the evidence specific to the case at hand (i.e. “your friend told you that [X] was at the swimming pool”), you risk missing a very important part of the story. Ignoring the “starting points” could lead you to conclude that a specific hypothesis is much more likely than it actually is. There are many court cases where the evidence pointed towards a suspect’s guilt, leading to a conviction. However, had the prior probability been considered, it would have been clear that the suspect was very unlikely to be guilty. In other words, the suspect’s likely involvement in a crime was so remote that a heavy load of evidence should have been required to conclude otherwise.
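The swimming-pool example can be made concrete with a small Bayes’ rule sketch. All the numbers here are illustrative assumptions (the friend is taken to be 90% likely to report someone who really was there, and very unlikely to report someone who wasn’t); the point is that identical testimony produces wildly different posteriors depending on the prior.

```python
# Illustrative sketch: same evidence, different priors, different conclusions.
def update(prior, p_report_if_true=0.9, p_report_if_false=0.001):
    """P(X was at the pool | friend says so), via Bayes' rule.
    Reliability parameters are assumed values for illustration."""
    num = prior * p_report_if_true
    return num / (num + (1 - prior) * p_report_if_false)

# Hypothetical priors for each visitor:
for name, prior in [("Alice", 0.3),
                    ("George Clooney", 1e-5),
                    ("Aliens from Mars", 1e-12)]:
    print(f"{name}: posterior = {update(prior):.2e}")
```

With these assumptions, Alice’s presence becomes near-certain, Clooney’s remains below 1%, and the aliens stay vanishingly improbable - the friend’s word alone is nowhere near the “extraordinary evidence” the alien hypothesis would require.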
Rootclaim’s analyses are transparent and open to constant scrutiny by the crowd. The sources for each piece of evidence, and the reasoning behind each number, are clearly displayed and available for inspection. We calculate the conclusions based on the inputs using probability theory and display our calculations.
Thus, to trust the conclusion you don’t need to verify every detail by yourself, but rather assume there is at least one other person out there who would publicly expose any mistakes or missing elements.
Another approach is to examine how past Rootclaim analyses withstood the test of time.
Yes, it is possible, but depending on the already available evidence, it may be very unlikely. For example, consider a murder case. Imagine that all available evidence, forensics, and witness testimony (and related reasoning) point to the conviction of the primary suspect. How likely is it for a video to surface of someone else committing the murder? If this new documentation existed, how likely would it be that all the previous evidence, pointing in the opposite direction, also existed?
New pieces of evidence countering existing conclusions are more likely in situations of higher uncertainty. As we get closer to certainty, we expect fewer surprises. With high uncertainty, such as if two competing hypotheses were equally likely, we would not be surprised by evidence in either direction.
In more colorful words: if a Rootclaim analysis reached a confident conclusion (say over 95%), based on public evidence, and an intelligence agency or other authority claims to have confidential information that proves this is wrong, then it is more likely that they are misinterpreting their data due to human reasoning flaws (or, maybe, just outright lying).
Rootclaim Launches Open Analysis Platform That Surpasses Human Reasoning
A substantial body of research has shown that the human brain is unreliable when it comes to accurately assessing complex problems. This means the only way to navigate a sea of half-truths is to complement humanity's fallible intuition with objective probabilistic analysis.
Anti-Fraud Experts Launch News-Accuracy Site, Find U.S. Probably Blamed Wrong Side for Syria Chemical Attack
In applying the fraud-detection approach, Rootclaim seeks to break news events into similar bite-sized pieces and assign values to the individual pieces of evidence, factoring for uncertainty and source reliability. The individual pieces are then loaded into an algorithm that draws big-picture conclusions.