Q: How does TruthFilter avoid just being one more place for people to argue personal opinions based on political bias?

A: Unlike commenting systems that accept opinions of every kind, TruthFilter solicits only actual evidence. If a submission is merely opinion, other users flag it and it is not elevated to supported-evidence status.


Q: How is TF different from other fact-checking organizations?

A: The TruthFilter system raises the level of discourse above opinion and addresses the main complaint leveled at fact-checking sites: that they carry a liberal bias. In our system the fact-checkers can fact-check each other.


Q: How does the system teach critical thinking and media literacy?

A: All evidence submitted by users is evaluated by other users against a well-defined scale of evidence reliability. If a user is unfamiliar with an evaluation criterion such as 'hearsay' or 'anecdotal evidence', hovering over that evaluation choice reveals a deeper description of the criterion, including examples. The level of explanation available can be set by the host site.
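As a sketch of how these hover descriptions might be organized (the criterion names, wording, and the `hover_text` helper are illustrative assumptions, not TruthFilter's actual list or API), each evaluation choice can map to explanations at more than one level of detail, with the host site choosing which level to show:

```python
# Illustrative sketch: evaluation criteria with two levels of explanation.
# Criterion names and wording are hypothetical, not TruthFilter's real list.
CRITERIA = {
    "hearsay": {
        "short": "Secondhand report, not witnessed by the source.",
        "long": ("A statement repeated from someone else rather than "
                 "observed directly, e.g. 'a friend told me the vote "
                 "was miscounted.'"),
    },
    "anecdotal evidence": {
        "short": "A personal story used in place of systematic data.",
        "long": ("A single experience generalized to a broad claim, "
                 "e.g. 'my uncle smoked and lived to 95, so smoking "
                 "is harmless.'"),
    },
    "illogical conclusion": {
        "short": "The conclusion does not follow from the premises.",
        "long": ("A non-sequitur: a conclusion or statement that does "
                 "not logically follow from the previous argument or "
                 "statement."),
    },
}

def hover_text(criterion: str, level: str = "short") -> str:
    """Return the explanation shown when a user hovers over a choice.

    The host site sets `level` to control how much detail appears.
    """
    entry = CRITERIA.get(criterion.lower())
    if entry is None:
        return "No description available."
    return entry.get(level, entry["short"])
```

For example, `hover_text("hearsay")` returns the one-line summary, while `hover_text("hearsay", "long")` returns the fuller definition with an example.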

For example, hovering over the 'illogical conclusion' choice may reveal the definition of a 'non-sequitur': a conclusion or statement that does not logically follow from the previous argument or statement. This is one of the most common forms of misleading information on the Web. The optional AI module, which adds a weighting factor to the algorithm based on the overall reliability of the source, can transparently flag words that hint a statement is an opinion, e.g. the use of the word 'should' in contrast to 'is' as a value judgment, or unsupported claims such as an assertion of damage that is not backed up by additional facts.
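A minimal sketch of this kind of transparent word-level flagging might look like the following. The marker-word list is invented for illustration; the actual AI module described above presumably uses a trained model rather than a fixed list:

```python
import re

# Hypothetical opinion markers for illustration only; a real module
# would rely on a trained language model, not a hand-picked word list.
OPINION_MARKERS = {"should", "must", "ought", "best", "worst", "terrible"}

def flag_opinion_words(statement: str) -> list[str]:
    """Return words suggesting the statement is opinion rather than fact."""
    words = re.findall(r"[a-z']+", statement.lower())
    return [w for w in words if w in OPINION_MARKERS]
```

So `flag_opinion_words("The city should ban cars")` would flag 'should', while a purely factual sentence like "The bridge is 40 meters long" would produce no flags.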


Q: How is color-coded text used to reveal the reliability of content?

A: An algorithm calculates the collectively determined reliability of a statement and then converts the aggregate score to an HTML text color, ranging from 'true-blue' for the most highly corroborated statements to 'warning red' for the most highly disputed ones. Any unwanted political overtones of the color blue can be avoided by substituting shades of green for supporting evidence. The algorithm considers the ratio of supporting to disputing evidence, the level of evidentiary corroboration, and a weighting factor determined by an AI machine-learning module that examines the reliability history of the source and the use of editorial versus factual language structure.
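The score-to-color conversion could be sketched as follows. The score range, linear interpolation, and exact color endpoints are assumptions for illustration; the document does not specify them:

```python
def score_to_color(score: float, supporting_hue: str = "blue") -> str:
    """Map an aggregate reliability score in [-1.0, 1.0] to an HTML hex color.

    Assumed convention: -1.0 = most disputed ('warning red'),
    +1.0 = most corroborated ('true-blue'). Pass supporting_hue="green"
    to avoid any political overtones of the color blue.
    """
    score = max(-1.0, min(1.0, score))   # clamp to the valid range
    t = (score + 1.0) / 2.0              # normalize to [0, 1]
    red = round(255 * (1.0 - t))         # red fades as support grows
    cool = round(255 * t)                # blue/green grows with support
    if supporting_hue == "green":
        return f"#{red:02x}{cool:02x}00"
    return f"#{red:02x}00{cool:02x}"
```

Here `score_to_color(1.0)` yields pure blue (`#0000ff`), `score_to_color(-1.0)` yields pure red (`#ff0000`), and intermediate scores blend between them; the green variant simply swaps the supporting channel.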

Please address any questions or comments to: