
You’ve definitely seen your fair share of AI slop lately. Uncanny videos with too many fingers, faces that melt at the edges, hands that fuse or dissolve into props. The term ‘AI slop’ was even named the Macquarie Dictionary’s People’s Choice 2025 Word of the Year.
Dr Jay Rosenbaum, a Melbourne-based artist and researcher working at the intersection of AI, digital media and critical theory, describes slop bluntly: “devoid of meaning and life.” And yet, this supposedly hollow content is everywhere.
In September 2025, Meta launched Vibes, a feed of AI-generated videos embedded in the Meta AI app. From fluffy cats to oddly perfect scenery and endless loops of vaguely soothing but emotionally vacant content, Rosenbaum says the feed “isn’t ‘bad’ but it isn’t ‘anything’.”
Another example is OpenAI’s Sora, a text-to-video model and TikTok-style social media app where users create and share AI-generated videos.
On the surface, it’s just another platform for endless doom-scrolling. But Rosenbaum’s research suggests something more concerning is happening.
Research shows that platforms like YouTube, Twitter/X and Meta have pushed users toward increasingly extreme content, even when they begin from apolitical or mildly curious entry points. In August, Meta collaborated with Robby Starbuck to ensure its AI tools are free of “ideological bias.” Who is Starbuck? A conservative influencer who gained attention by campaigning against American brands that took stands on social issues, such as adopting diversity, equity and inclusion (DEI) programs, supporting LGBTQIA+ communities, or working to slow climate change.
It’s a sign of how these platforms actively promote far-right content. According to Rosenbaum, AI slop also serves as a vehicle for right-wing propaganda — particularly for centrists, apolitical users, and people unfamiliar with AI systems.
Rosenbaum’s analysis of “I need a husband” AI slop pages on Facebook shows how algorithmic bias results in “depicting women with narrow beauty, which caters to the male gaze, that seeks to control and possess the female form.”
These biases become even more apparent when considering LGBTQIA+ communities. Rosenbaum notes that AI’s gender classifiers are binary and inherently transphobic, and depict queer and trans people with stereotypical attributes, like body type or hair colour.

Beauty standards, misogyny and fascism’s obsession with purity all come together in the three phases of alt-right radicalisation, a theory developed by Dr Luke Munn, a media studies scholar investigating the sociocultural impacts of digital cultures.
In stage one, the content is normalised, and users are desensitised through repeated jokes, memes and ironic humour.
The second stage is acclimatisation. Here, the content builds on desensitisation by shifting the viewer’s baseline of what feels normal and acceptable. In Rosenbaum’s experiment, they found they were being fed AI-generated Christian nationalist content featuring “godly women” who were white and blonde, while “evil women” had darker hair.
The third and final stage is dehumanisation, in which the algorithm pushes clear “us versus them” narratives. In Rosenbaum’s experiment, the AI content started to echo how the alt-right uses anti-Muslim and hyper-nationalist imagery.
The trajectory from “humorous” misogynistic memes to hateful right-wing content shows how nefarious these algorithms can be, without the user ever including any of these ideas in their AI prompts.
But that doesn’t mean AI is inherently right wing. In fact, a 2025 study found that users overwhelmingly perceive some of the most popular large language models, like ChatGPT, as having left-leaning political slants.
Rosenbaum adds that when they began looking at AI a decade ago, it was “a wonderful space” because it was about research. “Once corporations and capitalism got involved, they started turning more right-wing as a result, capitulating to conservative boards and billionaires.”
Dr Raffaele Ciriello, a senior lecturer in Business Information Systems at The University of Sydney, says the issue is less that AI reflects coherent political ideologies and more that platform incentives push it toward bias.
“It is more about how commercial AI systems amplify narrow body ideals, sexual stereotypes, and gendered scripts that cut across political boundaries.”
Ciriello’s research on AI companions found that, surprisingly, the link between AI-generated sexual content and right wing, “tradwife” aesthetics is not as strong as you might think.
“While there are communities within the manosphere using generative systems to reproduce hyper-feminine or nostalgically conservative portrayals of women, the broader ecosystem of AI companionship is far more [diverse],” Ciriello says.
As research on AI and biases continues to develop, what we know is that AI wasn’t ‘born evil’ — it’s just a product of computer science, data and problem solving. It is companies that are raising it to promote right-wing content as the norm.
Both experts agree that corporate accountability is key to tackling the dangerous biases in AI slop platforms and content.
Dr Ciriello says accountability could include “age restrictions, limits on data profiling, transparency obligations, independent audits, and constraints on addictive designs.” He also believes public scrutiny and evidence-led critique of corporate behaviour have an important role to play.
Dr Rosenbaum shares a similar sentiment, noting that corporations won’t change without genuine penalties for their actions. “The best way to stand against it is to have meaningful government oversight and penalties. And we need to push back as well.”
“The less we use it, the less we pay for it, the less money they make. It is all capitalism, so we can speak with our wallets!”
AI is a powerful tool, but it’s another case of “with great power, comes great responsibility.” In the hands of billionaires and corporations, AI slop isn’t just empty, harmless ‘fun’ — it’s part of a risky ride to the right.