Deepfakes and false reporting – how to separate fact from fiction


Italian Brainrot memes – surreal AI-generated creatures with grandiose Italian names that went viral on TikTok – are just the latest internet trend spreading AI-generated content.

The Italian Brainrot content is obviously fake. But the increasing sophistication of AI means that so-called “deepfakes” – AI-generated images, videos, or audio designed to deceive – are becoming more common. And so are articles and social media posts designed to distort the truth. How can facts be separated from fiction?

Many experts use misinformation as an umbrella term covering the spread of false or misleading information, whether intentional or not.

Disinformation, more specifically, is the intentional spread of falsehoods, usually to manipulate public opinion or influence politics. In many cases, these operations are covert: the people behind them create fake profiles, impersonate others, and persuade unwitting influencers to spread their message.

Young people are “particularly vulnerable” to misinformation, according to Timothy Caulfield, a law professor at the University of Alberta. “It’s not because they’re not smart, it’s because of exposure,” he says. “They are constantly bombarded with information.”

At the same time, users must contend with changes to content policing on large social media platforms such as X and Meta (owner of Facebook and Instagram). Instead of employing professional teams to fact-check content, for example, these platforms now rely on users themselves to add context to posts.

Historically, experts in the field of misinformation have pointed to telltale signs for spotting deepfakes. Perhaps the edges of a person’s face are slightly blurry, or the shadows in an image make no sense.

But “AI only keeps progressing,” says Neha Shukla, founder of a youth-led movement that campaigns for the responsible use of technology. “It’s not enough to tell students to look for anomalies, or for someone with 13 fingers.”

Instead, Shukla says, “This is when we need to think critically.”

It’s not just knowing the facts – it’s knowing how people can sway you

This means understanding how tech platforms work. Their algorithms are designed to keep users engaged for as long as possible in order to show ads, and controversial content tends to do that best. Algorithms can play on your emotions and fears. As a result, misinformation and disinformation, if persuasive, can spread faster than the truth.

Shukla points out that when Hurricane Helene devastated Florida in September 2024, disinformation spreaders racked up tens of millions of views on X, while “the fact-checkers and truth-tellers got thousands.”

“Students need to know that many of these platforms are not designed to spread the truth,” Shukla says.

Meanwhile, Dr. Jen Golbeck, a professor at the University of Maryland who focuses on social media, says that people who push misinformation may have different reasons for doing so.

Some may have an “agenda” – often political. But she warns that others have no agenda and “just want to make money.”

Checklist for spotting misinformation

- Think critically about the content – ask who created it and why
- Understand how social media platforms decide which content to show you, and what their incentives are
- Cross-check information with reliable sources
- Take time offline and seek out other views
- Look for signs that an image may be fake – maybe it’s a bit blurry, or the shadows don’t make sense

Against this background, it is important to consider the sources. “Think through the incentives people may have to present something in a certain way,” says Sam Hiner, 22-year-old executive director of the Young People’s Alliance, a nonprofit focused on advocacy for young people’s issues.

“We need to understand what other people’s values are, and whether they can be a source of trust. It’s not only knowing the facts, but also understanding how people sway you and what language they use to do so,” he adds.

Cross-checking also helps, Shukla says. Because some AI-generated news outlets flood the internet with multiple versions of the same false article, simply copying and pasting a headline into Google search is not the answer. Instead, she adds, it works better to check whether, for example, a verified journalist or an official government source has reported the same thing.

Experts are split over the utility of Meta’s and X’s new crowdsourced moderation systems, known as Community Notes, in which people with different perspectives work together to decide whether to add notes giving posts context.

Hiner says that this type of shared decision-making will “probably be the future” when it comes to helping young people establish facts.

However, others believe these labels can be gamed and, because they rely on non-experts, may not be accurate. “Because of these changes, young people may think that the truth is not objective, but something that you can discuss, debate, and settle on a middle ground,” Shukla says. “That’s not always the case.”

Simply going offline is one of the best ways to ensure you are thinking critically rather than being sucked into an echo chamber or inadvertently manipulated by algorithms. Hiner also recommends finding people with different opinions offline “to gain true diversity in perspective.”

Despite the dangers, Shukla is optimistic. “If anyone has the ability to handle this information integrity crisis, it’s young people,” she says. “If the pandemic taught us anything, it’s that Gen Z is scrappy and resilient and can handle a lot.”
