Last week, BuzzFeed CEO Jonah Peretti called out social media platforms TikTok and Facebook parent Meta in an open letter to their respective CEOs, Zhang Yiming and Mark Zuckerberg, writing that they no longer “care very much about content” and instead have become “much more interested in technology and AI.”

Peretti further suggested that the social networks prioritize content that provokes anger and other negative emotions in order to increase user engagement. While Peretti’s comments also served to announce that BuzzFeed would be entering the social media space with a platform designed to “spread joy” and return to “playful creative expression,” the BuzzFeed chief put the spotlight on how artificial intelligence and other technology have changed social media.

And possibly for the worse.

“Social media platforms increasingly leverage advanced machine learning techniques – e.g., deep neural networks and reinforcement learning – to shape feeds, moderate content, and incentivize user engagement,” explained Dr. Pablo Rivas, assistant professor of Computer Science at Baylor University.

Rivas said those systems excel at predicting individual preferences and surfacing the most clickable material, which can inadvertently reward sensational or polarizing content.
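To make that dynamic concrete, here is a minimal, hypothetical Python sketch of engagement-only feed ranking. Every name, field, and number below is invented for illustration; production rankers are vastly larger, multi-objective systems.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_ctr: float   # hypothetical output of a learned click model
    outrage_score: float   # hypothetical 0-to-1 inflammatory-content rating

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement optimization: sort by predicted clicks alone.
    # Nothing here targets outrage, but because outrage often correlates
    # with clicks, the inflammatory post still floats to the top.
    return sorted(posts, key=lambda p: p.predicted_ctr, reverse=True)

feed = rank_feed([
    Post("calm-explainer", predicted_ctr=0.04, outrage_score=0.1),
    Post("angry-hot-take", predicted_ctr=0.09, outrage_score=0.9),
])
print([p.post_id for p in feed])  # ['angry-hot-take', 'calm-explainer']

It doesn’t have to be that way, however.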

“If used thoughtfully, AI can serve as a powerful force for positive online interactions by spotlighting fact-based, meaningful contributions,” Rivas added. “This requires not just robust algorithmic design but also a commitment to ethical principles—transparency, fairness, and respect for user autonomy.”

AI Is Taking Out the Social Component

There has been no shortage of stories about how AI enables the spread of misinformation and disinformation, as the technology can be used to create misleading images and videos. As a result, AI is already making social media far less social.

“AI is playing a major role in how we see the world. For better or worse, machine learning algorithms have enabled a new and incredibly profitable business model – surveillance capitalism – by which platforms spy on users to pander to their biases, impulses, fears and desires,” warned Dr. Arthur O’Connor, academic director of data science at the City University of New York School of Professional Studies.

With more and more Americans relying on social media for news and information, the ability of AI to manipulate what users see and read is a major concern.

“For all the imagined dystopias of Skynet or the Matrix, perhaps the biggest risk of AI is not how we change and advance AI, but how AI changes us,” O’Connor added. “When algorithms can instantly generate content optimized for quick consumption, it creates a feedback loop: shorter attention spans lead to demands for simpler content, which further reduces attention spans. This means that as machines get smarter, most of us may get dumber.”

O’Connor noted that psychologists call this “cognitive offloading”: the tendency to rely on external tools rather than develop internal capabilities.

“Instead of wrestling with a problem, we simply prompt AI for an immediate answer or solution,” he continued. “In doing so, we risk atrophying our critical thinking and reasoning – doing what the calculator did to our arithmetic skills, smartphones did to our memory of phone numbers, and GPS navigation is doing to our sense of direction.”

Moreover, chatbots and virtual assistants simulate social interaction without the challenges and growth opportunities of real human dialog, offering a kind of pseudo-sociality that can exacerbate alienation and loneliness.

“AI use in social media certainly didn’t create the social fragmentation and isolation of our times, but it’s clearly not making it any better,” suggested O’Connor.

Not the Focus on AI, but How It Is Being Used

Though Peretti may have tried to blame the social media companies’ increased focus on AI as the problem, the real issue may simply be how the technology is being used. AI has the potential to improve social media, but that would require some key changes.

The social media platforms would have to fundamentally rethink how they’re using AI, said Dr. Mel Stanfill, associate professor in the Texts and Technology Program and the Department of English at the University of Central Florida.

“The recommendation algorithm would have to optimize not just for attention but also for things like posts that result in productive, respectful conversation,” noted Stanfill.

“That’s a thing that could be figured out and implemented, but it’s contrary to the platforms’ interests in attention above all else,” added Stanfill. “Similarly, for moderation, it’s a bit of a game of whack-a-mole where, as the tactics of platforms change to try to decrease harmful use, so do the tactics of people trying to be terrible to each other. But it’s totally possible to use human judgment to understand those issues and change how content moderation algorithms respond based upon that understanding.”
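As a rough illustration of the re-weighting Stanfill describes, the hypothetical Python sketch below blends an engagement estimate with a conversation-quality signal. The names and the 0.5 weight are assumptions for illustration, not any platform’s actual formula.

# Hypothetical blended ranking: trade engagement against a learned
# "conversation quality" signal, such as a classifier scoring threads
# for productive, respectful replies.

def blended_score(predicted_ctr: float, conversation_quality: float,
                  quality_weight: float = 0.5) -> float:
    # quality_weight = 0 reproduces attention-only ranking;
    # quality_weight = 1 ranks purely on conversation quality.
    return (1 - quality_weight) * predicted_ctr + quality_weight * conversation_quality

# (post id, predicted_ctr, conversation_quality) - all values invented
posts = [
    ("angry-hot-take", 0.09, 0.10),
    ("calm-explainer", 0.04, 0.80),
]
ranked = sorted(posts, key=lambda p: blended_score(p[1], p[2]), reverse=True)
print([p[0] for p in ranked])  # the constructive post now outranks the hot take

Any nonzero weight on quality trades away some raw attention, which is exactly the conflict with platform incentives that Stanfill identifies.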

In other words, generative AI isn’t inherently a harmful technology, and it can do amazing things. Yet, like many technologies, it can cause harm and needs human oversight to be used properly. AI may also need to be better “trained,” and such training is already being used to prevent the technology from generating responses to prompts deemed anti-social or dangerous.

“These techniques basically play the same role as human content moderators at some of the major social media and internet companies in removing hate speech from user-generated content, although many firms have recently abandoned such efforts,” said O’Connor.
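For a sense of how such automated moderation slots into the pipeline, here is a deliberately simplified Python sketch. The keyword stub stands in for the fine-tuned classifiers real platforms use, and every threshold and term is hypothetical.

# Simplified moderation gate (illustrative only): a toxicity score screens
# user-generated content before publication. toxicity_score is a toy
# keyword stub; real systems use calibrated, fine-tuned language models.

BLOCK_THRESHOLD = 0.8    # hypothetical cutoffs, not any platform's policy
REVIEW_THRESHOLD = 0.4

def toxicity_score(text: str) -> float:
    flagged = {"hate_term_a", "hate_term_b"}   # hypothetical lexicon
    words = text.lower().split()
    if not words:
        return 0.0
    return min(1.0, 5.0 * sum(w in flagged for w in words) / len(words))

def moderate(text: str) -> str:
    score = toxicity_score(text)
    if score >= BLOCK_THRESHOLD:
        return "removed"        # clear violation: drop automatically
    if score >= REVIEW_THRESHOLD:
        return "human_review"   # borderline: escalate to a person
    return "published"

print(moderate("a perfectly ordinary post"))          # published
print(moderate("hate_term_a repeated hate_term_b"))   # removed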

Moreover, some of that training can be time-consuming and would require large teams.

“That’s expensive compared to just letting machine learning do its thing, but it would go a long way toward more positive uses of these technologies,” Stanfill continued.

The final question is what role, if any, social media companies should play in regulating content that has AI’s digital fingerprints on it.

“Social media companies favoring any type of point of view – positive or negative – conflicts with the principle of freedom of speech, although there’s plenty of evidence that major news outlets are associated with some degree of political bias,” said O’Connor. “At the end of the day, it’s really about what end users choose to post, subscribe and read.”
