To fight disinformation in a chaotic election year, Ruth Quint, a volunteer for a nonpartisan civic group in Pennsylvania, is relying on tactics both extensively studied and frequently deployed. Many of them, however, may also be futile.

She has posted online tutorials for identifying fake social media accounts, created videos debunking conspiracy theories, flagged toxic content to a collaborative nationwide database and even participated in a pilot project that responded to misleading narratives by using artificial intelligence.

The problem: “I don’t have any idea if it’s working or not working,” said Ms. Quint, the co-president and webmaster of the League of Women Voters of Greater Pittsburgh, her home of five decades. “I just know this is what I feel like I should be doing.”

Holding the line against misinformation and disinformation is demoralizing and sometimes dangerous work, requiring an unusual degree of optimism and doggedness. Increasingly, however, even the most committed warriors are feeling overwhelmed by the onslaught of false and misleading content online.

Researchers have learned a great deal about the misinformation problem over the past decade: They know what types of toxic content are most common, the motivations and mechanisms that help it spread and whom it often targets. The question that remains is how to stop it.

A critical mass of research now suggests that tools such as fact checks, warning labels, prebunking and media literacy are less effective and less far-reaching than imagined, especially as they move from pristine academic experiments into the messy, fast-changing public sphere.
