Facebook Pages that “repeatedly” share stories that have been marked as false by third-party fact-checkers will no longer be able to buy ads on the site, the social platform announced.

While individual ads promoting fake news are already removed from the site once flagged, Facebook is taking it a step further by punishing repeat offenders. The announcement did not specify how many offenses count as too many, but it did say that once a page stops sharing fake news, it may become eligible to purchase ads again in the future.

According to an official blog post by Facebook product managers Satwik Shukla and Tessa Lyons, this latest update was designed to prevent people from monetizing false information. Sensational posts, especially politically motivated ones, often direct readers to a site that earns money through advertising.

Earlier this year, Facebook partnered with independent, third-party fact-checkers such as Snopes, the Associated Press, and FactCheck.org to review flagged stories on the site. Stories determined to be misinformation are marked as “disputed” and ranked lower in feeds. Users are notified of the disputed status before a story can be shared. Sharing disputed stories is still possible, but Facebook hopes the warning will help deter the unintentional spread of misinformation.

According to a Pew Research Center survey from December, 64 percent of US adults said false news stories had caused a great deal of confusion about the basic facts of current events, and 23 percent said they had shared a fake news story, whether knowingly or not.

“People told us that Related Articles gave them more context about topics and that having more context helps them make more informed decisions about what they read and what they decide to share,” Lyons told TechCrunch. “Seeing fact-checkers’ articles in Related Articles actually helps people identify whether what they’re reading is misleading or false.”

This tactic assumes that readers will take the time to fact-check on their own, and that seeing Related Articles that all mirror the original story’s claim won’t simply reinforce confirmation bias. Readers must also trust Facebook and the third-party fact-checkers; those who don’t may interpret a disputed label as an attempted cover-up.

Such was the case with a fabricated story about Irish slavery in the US, published by a site called Newport Buzz. A disclaimer on the site’s About page says, “If we ever do anything that resembles ‘real’ journalism, it was purely by mistake.” When Facebook marked the story as disputed, several readers took that to mean that Facebook was trying to hide the truth. The result was not fewer shares, but a whole lot more.

“A bunch of conservative groups grabbed this and said, ‘Hey, they are trying to silence this blog—share, share, share,’” Christian Winthrop, the site’s editor, told The Guardian. “With Facebook trying to throttle it and say, ‘Don’t share it,’ it actually had the opposite effect.”

This dynamic was especially prevalent during last year’s presidential election, when hundreds of fake news stories circulated around the world. After Donald Trump won the presidency, many wondered whether the viral spread of fake news on Facebook had contributed to the outcome—especially since these stories allegedly outperformed mainstream journalism on the social network.

Although CEO Mark Zuckerberg dismissed the idea that Facebook influenced the election, his company is nonetheless feeling pressure to keep hoaxes away from its readers. Forty-seven percent of teenagers name Facebook as their go-to source for news, according to a recent study by Common Sense Media.

“The bottom line is, we take misinformation seriously,” Zuckerberg said. “Our goal is to connect people with the stories they find most meaningful, and we know people want accurate information. We’ve been working on this problem for a long time and we take this responsibility seriously. We’ve made significant progress, but there is more work to be done.”