A video that went viral on social media this week is the latest proof that we can’t believe everything that is posted. It purported to show White House Press Secretary Karine Jean-Pierre stating that Americans and Ukrainians together won the Second World War, fighting against Soviet leader Joseph Stalin.

The nearly half-minute-long clip was first posted on Wednesday on X, the social media platform formerly known as Twitter, by user @Sprinterfactory—who is believed to be posting from Russia.

The same clip was shared a day later on Facebook. It shows Jean-Pierre answering a question from Andrew Feinberg, White House correspondent for the British news outlet The Independent, about the U.S. not condemning the use of Nazi symbolism in Ukraine.

“Seventy-nine years ago, the United States and our Ukrainian allies joined forces to combat the oppressive regimes of (Adolf) Hitler and (Joseph) Stalin. While Berlin is now an ally, the threat from the east persists. That’s why we’re committed to standing by Ukraine and offering our full support in any way we can.”

That is, of course, incorrect: the United States and the Soviet Union were allies in defeating Nazi Germany, and Wednesday marked the 79th anniversary of the end of the Second World War in Europe.

Seeing Has Been Believing—But No Longer

PolitiFact and other fact-checking organizations have confirmed that the video, which appeared with Russian subtitles, had been manipulated. In reality, Feinberg had asked the White House press secretary about famine in Gaza and disruptions to humanitarian aid; his audio was altered in the clip posted to X. Keen-eyed viewers would have noted that the audio didn’t match the video.

Yet, on both social media platforms, some users appeared to believe that the video was real and accused the White House of either spreading misinformation or being completely ill-informed.

In fact, the video itself was the misinformation, and judging by the responses, it was somewhat effective.

“Deep Fakes are creating two problems tied to confirmation bias. The first is that videos that are fake, sometimes obviously so, are accepted as real by people that believe them to be likely. On the other hand, it is training people to distrust videos, so videos that are real, but that people disagree with are thought to be fake—Trump has been arguing unflattering real videos of him are actually fake,” explained technology industry analyst Rob Enderle of the Enderle Group.

“These fakes are decoupling us from reality where we believe the videos we agree with are real, and the ones that challenge our views, are fake regardless of whether they are fake or real,” Enderle added. “The decisions we make based on these videos, fake and real, will increasingly lead to the wrong decisions because our determinations of whether they are to be believed will be increasingly based on what we want to see, not what actually happened.”

Social Media’s Response—Should It Do More?

Both X and Meta were quick to put up disclaimers that the video wasn’t real, but more could still be done.

“Social media companies who care, invest in teams and technology to try to combat misinformation can be successful in limiting the spread of misinformation,” suggested Roger Entner, technology analyst for Recon Analytics. “If you don’t care and don’t even try anymore like X then misinformation spreads widely as it is hitting the proverbial fan.”

It could be further argued that social media platforms have made it easier for fallacious content to be spread, especially content that maintains the attention of users.

“Posts that make users angry tend to get a lot of views, so generating new forms of rage bait has become easier with artificial intelligence. Bots have contributed to the proliferation of AI-generated content, which increases the potential for these sentiments to spread,” warned Dr. Julianna Kirschner, lecturer in the Annenberg School for Communication and Journalism at the University of Southern California.

Ideally, every social media user would check where the content they consume originated, but few actually take the time. Social media has become such an echo chamber of beliefs that users often dismiss news they disagree with.

Users Need To Spot AI-Generated Content

Artificial intelligence-generated content will continue to make the rounds, but there are steps users can take to identify it.

“In AI-generated photos, pay attention to the smaller details, like the number of fingers, limbs, or other extremities. A person with seven fingers can be easy to spot with some scrutiny,” said Kirschner. “Humans also possess the ability to sense when something appears to be human but is not, or what we typically refer to as uncanny valley. If something does not look quite right, even if the viewer is not sure why, chances are the media they are viewing is AI-generated.”

Moreover, the technology to manipulate video is improving so quickly that telling the difference is becoming increasingly difficult. In the end, it could be as simple as asking oneself whether the content is even remotely believable.

“Similar to the uncanny valley phenomenon, if it seems too good to be true, it probably is,” said Kirschner.

And finally, users simply shouldn’t trust everything on social media. A quick fact check can confirm whether a video is real. Media organizations seek confirmation before publishing, and social media users should do the same.
