5 Ways Generative AI is Poisoning Social Media

Generative AI is driving big changes, especially as social media platforms rush to adopt it. But it also creates serious problems for social media users. Here are five ways generative AI is poisoning social media.

Overflow of AI Junk

If you use social media platforms like Facebook regularly, you might have noticed a lot of what’s called AI slop. AI slop refers to low-quality, junk content created by generative AI. This includes strange, surreal artwork and bizarre, meaningless recipes. There are even dedicated accounts for sharing this kind of AI-generated nonsense.

AI slop often comes from spammy pages aiming to go viral. Since Facebook’s algorithm promotes content that gets a lot of attention, your feed may end up filled with more of this type of junk. As generative AI tools have become more accessible, creating these spammy posts has become easier. As a result, genuine, high-quality content can get lost in the flood of AI-generated material on social media.

Losing Even More Authenticity

Social media has never been a beacon of authenticity, with influencers often showcasing curated versions of their lives that are designed to project perfection or promote products. But the rise of generative AI has taken this lack of authenticity to new extremes.

Platforms like TikTok are experimenting with virtual influencers—digitally created avatars that businesses can use to market products to users. Instagram is also testing a feature that allows influencers to create AI bots of themselves. These AI versions can interact with fans, responding to messages as if they were an actual person, further blurring the line between what’s real and what’s artificial. This move was highlighted by Meta CEO Mark Zuckerberg on one of his Instagram broadcast channels.

The impact of AI on authenticity goes beyond influencers. AI-generated content is increasingly being used on social media platforms like Reddit and X (formerly Twitter), making it harder to distinguish between posts created by real people and those generated by machines. The advanced capabilities of large language models mean that even experienced users can struggle to tell the difference between genuine content and AI-created material. Every day, accusations fly on Reddit as users suspect others of using AI to craft posts or stories, highlighting the growing mistrust and uncertainty fueled by generative AI.

AI’s Social Media Missteps

AI platforms on social media are still being developed and refined, which means they’re not perfect and can sometimes make significant mistakes. These errors can lead to misinformation, confusion, or even a loss of trust in the platform among its users.

For instance, Meta AI, the AI tool developed by Meta, once responded to a post in a Facebook group, pretending to be the parent of a “2e” (twice-exceptional) child in a gifted and talented program. While this might seem harmless at first, it raised concerns because the response wasn’t genuine. Fortunately, Meta AI’s responses are labeled, so users could tell that the reply came from a machine and not a real person. Still, the incident raised questions about how reliable AI tools like Meta AI are, especially when they insert themselves into discussions where they might not belong.

Another example comes from Grok, X’s (formerly Twitter) AI chatbot, which has been criticized for spreading misinformation. In one case, Grok mistakenly accused NBA player Klay Thompson of vandalizing houses, misunderstanding the basketball slang “bricks,” which refers to missed shots rather than actual vandalism.

While some of these AI-generated mistakes can be amusing, others are more concerning, as they can spread false information with real-world consequences. These issues highlight the challenges of relying on AI in social media, where the line between helpful assistance and harmful misinformation can sometimes be thin.

Rise of the Dead Internet Theory

The Dead Internet Theory suggests that most content online is now generated by bots rather than real people. This idea was once dismissed, but with the increasing flood of AI-generated spam and responses on social media, it’s starting to feel more believable.

Social media companies are even incorporating bots as users, making this theory more realistic. An example is the launch of Butterflies AI, a social media platform where some users are actually AI bots. While bots can serve useful purposes, the idea of them posing as real users is unsettling for many.

Generative AI has also made it much easier for spam bots to mimic real people. For instance, when I recently posted on X (formerly Twitter) to commission a piece of art, my inbox was overwhelmed with replies from bots. It’s becoming harder and harder to distinguish between real users and AI bots.

Protecting Content from AI Scraping

As AI continues to evolve, many users are concerned about their content being used to train AI models without their consent. Protecting your content from being scraped by AI isn’t always straightforward. If your posts are public, they’ve likely already been used in AI datasets.

To combat this, users are exploring various methods to safeguard their content. Some are switching to private profiles, while others are turning to techniques like data poisoning, where they deliberately corrupt their data to mislead AI systems. While using tools like Nightshade to poison artwork doesn’t change how people view the images, other forms of data poisoning could affect what content we see on social media.

If more users move to private profiles, it will become harder to discover new users and content you enjoy on public networks. Additionally, as more artists leave platforms that contribute to AI training, those who simply want to appreciate their work may miss out unless they migrate to niche platforms.

Although generative AI has some uses on social media, many argue that it’s unnecessary and has already brought significant changes to how these platforms function. Whether or not we need AI on social media, it’s clear that it’s reshaping the landscape in fundamental ways.
