Internet Regulation Sucks Right Now. Is It An Impossible Challenge?

We’re caught in a tough spot with Internet regulation. We know we need strong regulation online – to prevent criminal activities, curb the spread of dangerous ideologies, and make sure people are safe in a space where we spend a lot of time. But the rules, laws and regulations that governments around the world have been giving us are… lacking. While we can’t leave Internet regulation up to private individuals (*cough* Elon) or companies, government action tends to encroach too far on individual privacy and push conservative ideals under the guise of ‘safety’.

With the rapid rise of AI and the new era of the Internet it’s ushering in, getting online regulation right is crucial. But is striking the right balance actually possible? Can we have adequate online protection while maintaining our digital freedom? Good questions, tough answers.

Why isn’t the current regulation cutting it?

The UK, US and Australia have all recently taken a shot at passing laws to make the Internet safer. And while the intentions may have been pure (really stressing the ‘may have’), the proposals are harsh and have serious consequences for privacy and freedom of speech. A few examples…

United States: The Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act was proposed in the US in 2020 with the aim of preventing online child sexual exploitation. However, the original version of the bill raised concerns that it could undermine end-to-end encryption and weaken individuals’ privacy and security. While the bill has undergone revisions (and has now been introduced in Congress three times), there are still concerns it could lead to unintended consequences and threaten digital rights.

The US has also recently proposed the Kids Online Safety Act (KOSA) – great name, sinister repercussions. The bill has faced criticism from activist groups like the ACLU, GLAAD, and the National Center for Transgender Equality, who have voiced concerns about its ambiguous wording around what counts as ‘dangerous’ online content for children. The vague language leaves the door open for conservative-led censorship to target and erase LGBTQ+ content, reproductive health care resources, and many other topics, under the false premise that this information is dangerous for kids to learn.

United Kingdom: The UK first introduced the Online Safety Bill in 2021 to tackle harmful content and protect users, particularly children, from online harms. While the primary goal of the bill is to promote online safety, critics argue that it grants broad powers to the government which could be used to censor and restrict access to certain types of content. Again, the bill’s definition of harmful content is broad and subjective (a recurring issue in this kind of legislation). Critics worry that this broad definition could allow authorities to target content that is not necessarily harmful, but may be considered controversial or unpopular. The bill also gives regulatory authorities the power to issue fines and block access to non-compliant websites. These measures could be used to suppress freedom of expression, limit dissenting opinions, and control the flow of information.

Australia: Australia’s approach has been similar to the US and UK (unsurprising, since we’re all part of the same super secret intelligence club, the Five Eyes Alliance). Australia introduced the Online Safety Bill in 2021 to enhance online safety and combat harmful content, cyberbullying, and abuse. However, as we wrote at the time, the bill includes provisions that grant the government and regulatory bodies broad powers to restrict sexual content, issue takedown notices and impose significant fines. This isn’t the only time the Australian government has wanted to use its power over the internet. Scott Morrison introduced the Social Media (Anti-Troll) Bill during his term, and while he claimed it was to address online trolling, the bill was really a tool that allowed public figures (including politicians) to more easily sue social media users for defamation. Thankfully, he ran out of time to push it through before the 2022 Federal election and it has since been abandoned.

While censorship may not be the explicit intent of these regulations, what they all have in common is a highly subjective interpretation of what is ‘dangerous’, paired with broad powers to block content on that basis. In the wrong hands, these laws have a chilling effect on freedom of speech and expression.

Why can’t we get it right?

The general public desperately wants regulations that keep the darker parts of the Internet in check without completely invading the privacy of every normal online user. Why is it so difficult to strike that balance?

Samantha Floreani from Digital Rights Watch says governments are giving tech platforms too much responsibility. “A lot of policymakers just automatically [assume] these tech companies can determine what is and isn’t harmful and take it down. As though it’s really simple. The trouble is that context really matters.” The way things like racism, misogyny, homophobia, transphobia, and even child exploitation are presented online is constantly changing and depends heavily on context. Automated moderation tools can’t always accurately assess what’s harmful. “They aren’t magic, as sometimes I think people imagine.”
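To see why context is such a hard problem for automation, here’s a deliberately naive sketch (our own toy example, not any real platform’s system) of a keyword-based filter. The blocklist, example posts and function name are all hypothetical.

```python
# A deliberately naive keyword filter -- a toy illustration (not any
# real platform's system) of why context-blind automation struggles.
# The blocklist and example posts are all hypothetical.

FLAGGED_TERMS = {"attack", "kill", "queer"}

def flag_post(text: str) -> bool:
    """Flag a post if any word in it appears on the blocklist."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & FLAGGED_TERMS)

posts = [
    "We should kill this bug before the next release.",  # harmless dev chat
    "Proud to speak at a queer youth support group!",    # community content
    "You people don't deserve to exist.",                # abusive, no listed word
]

for post in posts:
    print(flag_post(post), "-", post)
# Prints True for the two harmless posts and False for the abusive one:
# the filter matches surface features, not meaning.
```

Real moderation systems are far more sophisticated than this, but the failure mode is the one Samantha describes: matching surface features while missing meaning, so benign posts get flagged and coded abuse slips through.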

Another huge issue is that many (older) policymakers really don’t understand how the Internet and social platforms work. We all saw that video of TikTok CEO Shou Chew explaining how wifi works to a US politician during a congressional hearing! “They just slap a technical solution onto complex social problems,” Samantha says. “You end up with the industry kind of scrambling trying to meet the requirements of the legislation and ending up having to do all kinds of things that they might not otherwise design into their products.”

Lastly, Samantha points out that our understanding of ‘safety’ itself is part of the problem. “It is a deeply ingrained approach to safety being tied to surveillance and policing.” That’s why so many of the ‘solutions’ offered feel so invasive. “By approaching these complex problems with a policing mentality, we end up developing solutions that can be harmful in and of [themselves].”

Of course, it’s not just the government to blame – a lot of these issues wouldn’t exist if digital platforms had different motivations. “Underneath, a lot of issues come back to what these platforms, these technology companies, are trying to do — they’re for-profit companies. The underlying profit motive that leads to the algorithms that push for ever-increasing engagement and amplification has really harmful consequences,” Samantha says. “So when we try to attack or defend against the symptoms, like misinformation or the spread of abuse, we’re not really targeting the underlying issue, which is that it benefits these companies financially to operate in this way.”

Samantha urges us to consider a different type of internet: “What would the internet and digital platforms look like if they weren’t designed and built and governed for profit? What would it mean if we had public ownership or collective governance over our online spaces?” Seriously, what if?!

This is not an impossible challenge

There are efforts to regulate the internet that take our needs into account, and they’re happening right now. These usually come in the form of proportional, risk-based approaches: instead of imposing blanket restrictions or bans, regulations can be tailored to the specific risks posed by different activities or platforms. This keeps intervention proportionate to the potential harm while minimising unintended consequences and preserving digital freedoms.

An example from this year is the European Union’s Digital Services Act and Digital Markets Act. These acts aim to address the power and influence of digital platforms while taking into account the potential risks they pose to competition, consumer rights, and societal well-being. By focusing on targeted regulation of specific practices (mostly content moderation) and applying the heaviest rules only to a list of Very Large Online Platforms (those with over 45 million users in the EU), the Acts try to strike a balance between fostering innovation and protecting public interests. It’s a more measured approach than the blanket TikTok bans being proposed in the US and Australia.
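As a rough illustration of what ‘proportionate’ means in practice, here’s a toy sketch of a tiered rule in the spirit of the DSA. The 45-million-user figure is the DSA’s real threshold for Very Large Online Platform designation; the obligation lists and function name are simplified inventions for illustration.

```python
# A toy sketch of risk-proportionate regulation, loosely modelled on the
# DSA's tiering. The 45M figure is the DSA's real VLOP threshold; the
# obligation lists below are simplified inventions for illustration.

VLOP_THRESHOLD = 45_000_000  # monthly active users in the EU

def obligations(monthly_active_eu_users: int) -> list[str]:
    """Return the (illustrative) duties that apply at a given scale."""
    baseline = ["publish moderation policies", "offer an appeals process"]
    if monthly_active_eu_users >= VLOP_THRESHOLD:
        # Only the very largest platforms carry the heaviest duties.
        return baseline + [
            "assess systemic risks annually",
            "submit to independent audits",
            "give vetted researchers data access",
        ]
    return baseline

print(obligations(2_000_000))   # a small platform: baseline duties only
print(obligations(90_000_000))  # a VLOP: the full set
```

The design choice is the point: duties scale with reach, so a small forum isn’t regulated like a platform hosting a tenth of the EU’s population.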

While the EU seems to be in the minority in taking a targeted approach to internet regulation, we’re hopeful the global nature of the internet (and its problems) will lead to more international cooperation and the development of common standards for effective regulation. It is possible to create laws built on evidence-based decision-making and inclusivity, laws that preserve the rights and freedoms essential in this digital age. Protection alongside freedom isn’t too much to ask, and it’s not too late for our needs to be listened to.

