Disinformation: Its Origin, Evolution, and the Solutions to Curb Its Spread


Mark Little, CEO and co-founder of Kinzen, an Irish-based start-up that protects communities from the threat of disinformation, defines disinformation as organized campaigns created to deceive communities and cause real-world harm. Disinformation is more than just the false or outlandish things your family and friends share online; it aims to erode trust in everything we see on the internet.

Today, we create, share, and consume information at an unprecedented scale. As social media constantly evolves, people are bombarded with content daily. Without a designated gatekeeper, consumers are overwhelmed and exploited, and that is where disinformation takes hold. Here are some specific ways to curb its spread.

Intervene at the Inflection Point

Disinformation is a bottom-up phenomenon. It begins as seemingly low-level chatter that is then picked up by a “super spreader,” an account that pushes false information into mainstream channels and disrupts the network.

“That’s the key to disinformation: spotting those networks, mapping them, and constantly knowing that this is a particularly dangerous account. [The account] has an alias… What you want to do is not restrict the freedom of speech, but the freedom of reach. You want to stop things from going viral. We’ve gotten to be much more about preemption and early warning: understanding that what is happening on one platform [today] is going to be our problem tomorrow,” Little stressed.
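To make the distinction between freedom of speech and “freedom of reach” concrete, here is a minimal sketch of the kind of network mapping Little describes. It is an illustration only, not Kinzen’s actual tooling: the share graph, account names, thresholds, and the super-spreader heuristic are all invented for the example.

```python
from collections import defaultdict, deque

# Hypothetical share graph for illustration: an edge (a, b) means
# account b reshared a post from account a. All names are invented.
shares = [
    ("fringe_account_1", "alias_account"),
    ("fringe_account_2", "alias_account"),
    ("alias_account", "mainstream_page_1"),
    ("alias_account", "mainstream_page_2"),
    ("mainstream_page_1", "follower_a"),
    ("mainstream_page_1", "follower_b"),
]

graph = defaultdict(list)     # who amplifies whom
in_degree = defaultdict(int)  # how many sources feed an account
for src, dst in shares:
    graph[src].append(dst)
    in_degree[dst] += 1

def downstream_reach(start):
    """Breadth-first count of every account a post can cascade to."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) - 1  # exclude the starting account itself

# A crude "super spreader" heuristic: an account that picks up chatter
# from several sources AND can push it far downstream. The thresholds
# are arbitrary; the point is to flag reach, not to restrict speech.
for account in list(graph):
    if in_degree[account] >= 2 and downstream_reach(account) >= 3:
        print(f"early warning: limit the reach of {account}")
```

The heuristic flags only the account that both collects chatter from multiple sources and can relay it widely, which is where an early-warning intervention on reach, rather than on speech, would apply.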

Counter with “Good” Information

Information gaps are opportunities for spreaders. They fill these gaps with false information, deceiving people into believing that everything else they see on the internet is a lie. So even as verified information circulates, the public’s trust has already been eroded.

As a countermeasure, it is important to work closely with verified subject matter experts, build clear algorithms, and design product solutions that supply verified information. Little suggests using recommendation systems that surface verifiable information while filtering out content that is untrue.
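As a rough sketch of that kind of recommendation logic (the scoring, labels, and items below are hypothetical, not any platform’s actual system), a ranker might drop items flagged as false and weight expert-verified items higher:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float            # baseline popularity signal
    verified: bool = False       # endorsed by subject matter experts
    flagged_false: bool = False  # failed fact-checking

VERIFIED_BOOST = 2.0  # arbitrary weight for the example

def recommend(items, k=3):
    """Drop known-false items, then rank verified content higher."""
    eligible = [item for item in items if not item.flagged_false]
    return sorted(
        eligible,
        key=lambda item: item.engagement * (VERIFIED_BOOST if item.verified else 1.0),
        reverse=True,
    )[:k]

feed = [
    Item("Expert Q&A on vaccine safety", engagement=0.5, verified=True),
    Item("Miracle cure doctors won't tell you about", engagement=0.9, flagged_false=True),
    Item("Local weather update", engagement=0.6),
]
for item in recommend(feed):
    print(item.title)
```

Even with a lower raw engagement score, the verified item ranks first, while the flagged item never enters the feed at all.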

According to Little, recommendation systems are designed by engineering teams, yet require collaboration with content moderation and policy teams. 

“[We need to have] a very tight feedback loop with engineering teams to gain precise data on the evolution of language. Policy teams also need to support them to evolve and match the way policy is executed, but also to formulate strategies to know [when] new threats have arisen,” Little added.
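A minimal sketch of such a feedback loop, with invented terms and data structures, might look like this: content moderators log newly observed coded language, and engineering folds those findings into the detection watchlist.

```python
# A minimal sketch of the feedback loop described above. The terms,
# data structures, and workflow are invented for illustration.
watchlist = {"plandemic"}  # coded terms the detection system already knows

def log_moderator_finding(findings, term, context):
    """Content moderators record how the language of disinformation evolves."""
    findings.append({"term": term.lower(), "context": context})

def update_watchlist(findings):
    """Engineering ingests moderator findings into the detection watchlist."""
    for finding in findings:
        watchlist.add(finding["term"])

findings = []
log_moderator_finding(findings, "v@ccine", "obfuscated spelling in health posts")
update_watchlist(findings)
print(sorted(watchlist))  # ['plandemic', 'v@ccine']
```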

Invest in Human Solutions and Innovative Technology

Algorithms can help track the agile, fast-moving nature of disinformation, but they need human ingenuity to “understand the nuances that define disinformation.” Invest in technology that empowers human solutions and in better data for more responsible and ethical AI.
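One common way to pair the two, sketched here with invented scores and thresholds rather than any real pipeline, is to let a model act automatically at the confident extremes and route the ambiguous middle band, where nuance matters most, to human moderators:

```python
def triage(post_id, model_score):
    """Route a post using a hypothetical disinformation classifier score.

    Confident extremes are handled automatically; the ambiguous middle
    band, where nuance matters most, goes to human moderators. The
    thresholds are arbitrary placeholders.
    """
    if model_score >= 0.95:
        return f"{post_id}: limit reach now, queue for human confirmation"
    if model_score <= 0.10:
        return f"{post_id}: allow"
    return f"{post_id}: route to human review"

for post_id, score in [("post_1", 0.98), ("post_2", 0.55), ("post_3", 0.03)]:
    print(triage(post_id, score))
```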

Battling disinformation requires collaboration between humans and technology. Empowering ordinary people with the right information and tools will allow them to become agents of positive change. Little suggests redefining the idea of an influencer “and creating systems where they can build reputation and authenticity, rather than simply just throwing it open to the world to try to crowdsource what’s going on.”

Empowering people also involves recognizing that disinformation poses a threat to mental health—especially to content moderators who are exposed to extreme content. 

TaskUs has developed a comprehensive, global psychological health and safety program for content moderators, guided by the practice of evidence-based psychology and grounded in neuroscience. You may also learn more about our Content Security service here.

For more insights, watch the Forward Webinar: Flattening the Curve of Disinformation on-demand via this link.