Raising the Bar on Critical Thinking: Factmata’s AI-Based Scoring System Helps Users Evaluate the Quality, Safety, and Credibility of Online Content

TL;DR: Factmata’s content verification platform empowers businesses, consumers, and researchers to critically evaluate online information using machine learning, curated feedback, and expert knowledge. The transparent, AI-based tool targets the proliferation of problematic news by flagging and scoring hateful, deceptive, biased, and incorrect information within each article. Now, with plans to take reputation management to the next level, Factmata is furthering its mission to preserve the quality of the internet.

Misinformation is a lot like the stomach flu. It’s dangerous, it can cause public panic, and — according to a recent study of news stories distributed via Twitter from 2006 to 2017 — it’s highly contagious. MIT researchers found that inaccurate news is 70% more likely to be retweeted than the truth, suggesting that Twitter is something of a breeding ground for falsehoods.

Fortunately, there may soon be a cure for this epidemic. Organizations such as Factmata are combining expert knowledge with the latest in AI technology to help identify problematic content and prevent consumers and businesses from inadvertently supporting it.

Factmata’s goal is to help ensure a more credible internet.

“Factmata has a pipeline that scores content using machine-learning models to predict how hateful, hyperpartisan, deceptive, or incorrect a given piece of content is,” said Mike Hamilton, Head of Platform at Factmata. “Those scores then serve as an indicator of misinformation.”
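Factmata has not published its models or code, but the pipeline Mike describes follows a familiar shape: several text classifiers, one per risk signal, each returning a probability that then serves as an indicator. The sketch below illustrates that shape in Python; the signals, toy training examples, and flag threshold are assumptions for illustration, not Factmata’s implementation.

```python
# Illustrative sketch of a multi-signal content-scoring pipeline.
# The signals, toy training data, and threshold are assumptions for
# illustration; they are not Factmata's models or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One toy classifier per signal; a production system would train each on a
# large, expert-annotated corpus rather than a handful of examples.
TOY_TRAINING = {
    "hateful": (
        ["those people are subhuman filth", "the council approved a new park",
         "we should hurt them all", "lovely community event this weekend"],
        [1, 0, 1, 0],
    ),
    "hyperpartisan": (
        ["the other party wants to destroy this country", "lawmakers passed the budget bill",
         "only traitors would vote for them", "the committee meets on tuesday"],
        [1, 0, 1, 0],
    ),
    "deceptive": (
        ["secret cure they do not want you to know about", "the study reports a 3% increase",
         "doctors hate this one weird trick", "officials confirmed the schedule change"],
        [1, 0, 1, 0],
    ),
}

def build_models():
    """Fit one text classifier per risk signal."""
    models = {}
    for signal, (texts, labels) in TOY_TRAINING.items():
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(texts, labels)
        models[signal] = model
    return models

def score_content(text, models, flag_threshold=0.7):
    """Return a per-signal score in [0, 1] plus an overall misinformation flag."""
    scores = {signal: float(model.predict_proba([text])[0][1])
              for signal, model in models.items()}
    scores["flagged"] = any(v >= flag_threshold for v in scores.values())
    return scores

if __name__ == "__main__":
    models = build_models()
    print(score_content("secret cure the elites do not want you to know about", models))
```

The key design point is that each signal is scored independently, so downstream users can weigh hate speech, hyperpartisanship, and deception differently depending on their use case.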

The company believes in tackling misinformation through an open and honest approach. For example, its algorithms were developed based on expertise from a wide range of journalists and social science researchers. In an effort to be transparent about possible bias from within these groups, Factmata makes demographic information about annotators openly available and provides its datasets for public scrutiny.

“We also make use of community feedback to improve our algorithms on an ongoing basis,” Mike said.

This information strengthens the system, helping consumers, businesses, and researchers better evaluate online content in terms of quality, safety, and credibility. Over time, and with the help of domain expert groups, NGOs, and researchers, Factmata aims to help improve collective critical thinking in the fight against misinformation, and ultimately slow its spread.

Gain Insights From Experts, Machine Learning, and Crowd Feedback

Factmata, founded in 2017 by Andreas Vlachos, Dhruv Ghulati, and Sebastian Riedel, received a significant financial boost in its formative days from Google News Initiative’s Digital News Innovation Fund. Since then, the team has raised $1 million in venture capital from investors including Mark Cuban, Founder of Broadcast.com; Biz Stone, Co-Founder of Twitter; and Craig Newmark, Founder of Craigslist.

The founders used their previous research in automated fact-checking, as well as contributions from a distributed team of natural language processing researchers, professors, and scientists from across the globe, to build the company.

Factmata leverages the power of AI to flag misinformation and, hopefully, slow its spread.

“They explored the business market in automated fact-checking, but there was already lots of great work being done by the likes of Snopes and others — so that seemed to be a game better played by those types of organizations,” Mike said.

Instead of focusing on fact-checking, Factmata turned its attention to slowing the spread of misinformation through the use of expert knowledge and AI-assisted tools. Today, CEO and Research Scientist Dhruv Ghulati leads a full-time staff of diverse talent dedicated to doing just that.

“We are able to identify political bias, clickbait, racism, obscenity, sexism, insults, and threatening language,” Mike said. “What we do now, and what we will continue to do, is to build tools that help improve the quality of the online ecosystem — and that’s very important to most people.”

Helping Consumers, Businesses, and Researchers Evaluate Content

In a world where people routinely share articles without actually reading them, tools like Factmata are proving increasingly valuable for those looking to save face.

“These days, people see a headline that matches their opinions and instantly share it,” Mike said. “Our goal is to offer tools that sort of slow people down, keep them informed, and stop the spread of misinformation.”

To that end, the company is working with developers of browsers, apps, community platforms, and ad blockers to integrate the Factmata trustmark into their software as an indicator of reliable content. A beta subscription to Factmata is currently available. Alternatively, individuals can screen any English-language news article for hate speech and political propaganda simply by pasting a link to it in the trial bar on the Factmata site.

From the business perspective, Factmata is all about understanding content. For example, if you’re a business looking to advertise your product on a website, a subscription to Factmata would help you screen the site for content you wouldn’t want to be associated with.

A wide range of use cases: Factmata helps individuals, businesses, and researchers alike achieve their respective goals.

“You can send us the online property you’re thinking of advertising on and make your decision based on how the site scores according to our models,” Mike said. “In that way, hopefully, we’ll start to demonetize a lot of these fake or hateful sites.”
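As a rough illustration of that workflow, the sketch below screens a prospective ad placement by combining a hypothetical domain blocklist with per-signal content scores such as those produced by the earlier pipeline sketch. None of the names, thresholds, or example domains here come from Factmata’s actual service.

```python
# Rough sketch of a brand-safety screen: combine a domain blocklist with
# per-signal content scores (e.g. from the pipeline sketch above).
# The blocklist, threshold, and score function are hypothetical.
BLOCKLIST = {"fake-news-example.com", "hateful-site-example.net"}

def safe_to_advertise(domain, sample_texts, score_fn, max_score=0.5):
    """Return (decision, reason) for a prospective ad placement."""
    if domain in BLOCKLIST:
        return False, "domain is on the blocklist"
    worst = 0.0
    for text in sample_texts:
        scores = score_fn(text)  # dict of per-signal scores in [0, 1]
        worst = max(worst, max(scores.values()))
    if worst > max_score:
        return False, f"content scored {worst:.2f} on at least one risk signal"
    return True, "sampled content scored within tolerance"

if __name__ == "__main__":
    # Toy score function standing in for a trained model.
    fake_scores = lambda text: {"hateful": 0.9 if "slur" in text else 0.1, "deceptive": 0.2}
    print(safe_to_advertise("example-news.org", ["an article containing a slur"], fake_scores))
```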

But Factmata is not in the business of censoring the web — it’s intended to help existing researchers and organizations do their jobs more quickly and efficiently.

“Imagine a small organization has to review 1,000 pieces of potential hate speech in a day; there’s no way they’re going to get through it all,” Mike said. “But they could send it to us so we can order it for them and say, ‘This 20% is definitely fine, this 20% is definitely hateful, and here are some that you should review with your own expertise.’”
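Factmata hasn’t published how that ordering works internally; the snippet below is a minimal sketch of the idea Mike describes: bucket items by model score so that only the ambiguous middle band goes to human reviewers. The thresholds and toy scores are hypothetical.

```python
# Minimal sketch of score-based triage: auto-clear the low end, auto-flag the
# high end, and hand only the uncertain middle to human reviewers.
# The thresholds and the toy scores below are hypothetical.
def triage(items, score_fn, low=0.2, high=0.8):
    """Split items into cleared, flagged, and needs-human-review buckets."""
    cleared, flagged, review = [], [], []
    for item in items:
        score = score_fn(item)  # probability the item is hateful, in [0, 1]
        if score <= low:
            cleared.append((score, item))
        elif score >= high:
            flagged.append((score, item))
        else:
            review.append((score, item))
    review.sort(reverse=True)  # reviewers see the most severe ambiguous items first
    return cleared, flagged, review

if __name__ == "__main__":
    # Toy stand-in for a trained model's hate-speech probability.
    demo_scores = {
        "post about gardening": 0.05,
        "borderline insult thread": 0.55,
        "explicit threat-filled rant": 0.97,
    }
    cleared, flagged, review = triage(demo_scores, demo_scores.get)
    print(f"cleared: {len(cleared)}, flagged: {len(flagged)}, needs review: {len(review)}")
```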

An Open and Transparent Mission to Fight Misinformation

Mike said Factmata is a key resource for NGOs, charity organizations, and researchers working to combat hate speech. The company often partners with such groups in what amounts to a symbiotic relationship.

“It’s a nice trade — they’ll have loads of content to review and only a few people to do it, so our technology makes a significant difference,” he said. “Then they share insights that allow us to hold our algorithms to account and make sure they’re doing an unbiased job.”

Of course, the company is careful not to claim it has reached anything near perfection in that regard.

“Behind these algorithms are real people contributing data,” he said. “Hopefully we can come to a place where we feature a nice representative spread of the population that enables the human beings in this whole process to make better, more informed judgments.”

Factmata stands apart from the competition by focusing on analyzing the content itself, rather than assessing how the content is created. Ultimately, Factmata hopes to supply a missing piece of the online credibility puzzle, augmenting existing organizations that focus on ethical journalism, civil liberties, and freedom of speech.

“I think it will take a few different angles to really crack this, and that’s why we’re more focused on analyzing actual content,” he said. “We have a blacklist of domains that we think advertisers don’t want to be on, and the way to get a site off of that list is to change the content instead of just promising to change your editorial practices.”

In the Pipeline: Next-Level Reputation Management

As Factmata evolves, its goal is to continue to push technical boundaries. The company is currently working on a product intended to help public relations firms and other organizations detect rumor propagation so they can craft counter-narratives.

“The first one that we’re working on is around vaccination,” Mike said. “The idea is to provide a good view of the landscape in terms of what arguments people are making to help either a PR or a medical team run a messaging campaign to get more people vaccinated.”

By arming such organizations with the intelligence they need to do their jobs — which, in this case, would improve societal health — Factmata hopes to conquer misinformation on a new level.

“It’s exciting to me because it’s something that I haven’t seen out there yet,” Mike said.
