
Big Tech: Balancing Autonomy and Federal Overreach

Image source: Sergey Zolkin on Unsplash

Tech companies in the United States have incredible autonomy over how they regulate their platforms. But with current legislation in Congress and cases before the Supreme Court, how we view big tech, and how we interact in the digital sphere, are likely to change.

By Rose Joyce

Last year, the New York Times released a report on a website that both directed users on how to take their own lives and fostered a community for suicidal ideation. Members of the site nudged others toward suicide, provided them with instructions, and then applauded their plans, calling them “heroes” and “legends.” As more and more young people took their lives in eerily similar and elaborate ways, the governments whose citizens had been most affected (the United States, Italy, Scotland, Germany, England, and Australia) moved to have the site taken down.

Yet officials immediately ran into roadblocks. In the United States, search engines refused to delist the site, citing both the First Amendment and Section 230 of the Communications Decency Act (CDA), which protects companies from being held responsible for the content their users post and enables them to moderate content on their platforms in good faith. Specifically, Jess Miers, a legal policy specialist at Google’s Trust and Safety division, argued that the site was legal and that, since both the site and Google were protected by the CDA and the First Amendment, it would not be removed from search results.

The New York Times investigation is a simplified illustration of the complex debate in the United States over the ethical implications of allowing companies to regulate themselves. Over the past decade, lawmakers from both sides of the aisle have tried to overhaul and revise Section 230. Most recently, a group of U.S. representatives introduced the Stop Online Suicide Assistance Forums Act (SOSAFA), which would override part of Section 230 and make it a felony to use the mail or interstate communications, such as online forums, to assist in another person’s suicide attempt. This legislation, and the outcome of current discussions on Section 230, would be consequential because it would indirectly affect a core business model of tech companies: recommendation algorithms built on user data. While tech companies may not directly promote harmful content to users, their algorithms do, which opens a loophole in the CDA that the SOSAFA and current Supreme Court cases aim to close.

Right now, the Supreme Court is considering two cases relating to Section 230, Gonzalez v. Google and Twitter v. Taamneh, both of which directly relate to whether a platform can be held responsible for the content it promotes, or more precisely, whether tech companies can continue to use recommendation algorithms in their products. Even more notable when discussing these cases is that Justice Clarence Thomas has already questioned the broad protections Section 230 provides to companies. In Knight First Amendment Institute v. Trump (2021), where lower courts had held that President Donald Trump’s blocking of seven users from an official account violated their First Amendment rights before the Supreme Court dismissed the case as moot, Thomas wrote a solo concurrence suggesting that social media companies should be regulated as common carriers: companies that operate in service to the general public, like buses, and therefore must provide their services without discrimination. Common carrier law in the social media sphere would prevent companies from using their power as private corporations to regulate users.

If Thomas’s logic is applied to the final decisions in Gonzalez v. Google and Twitter v. Taamneh, social media companies would lose their autonomy, and federal and state governments would have the authority to compel companies to host certain content and users. Theoretically, this outcome would allow a state legislature that found it politically beneficial to support misinformation to decide that a piece of false or dangerous media, like the widespread conspiracy theory that the 2020 election was stolen, could not be removed from online platforms. While it is already ethically questionable whether companies regulate their content in good faith, since their algorithms have historically rewarded misinformation, opening regulation to state and federal review would also open the floodgates for state endorsement of misinformation.

Even if social media companies are not defined as common carriers, there is historical precedent that could be applied to Gonzalez v. Google and Twitter v. Taamneh, in which private entities including shopping centers, universities, and cable television companies have been required to host speakers they would not otherwise host. For example, in Turner Broadcasting v. Federal Communications Commission (1994), the Supreme Court held that must-carry provisions, which require cable operators to carry local broadcast stations, could be consistent with the First Amendment because they served important government interests. Applying this concept broadly to social media companies would create the opportunity for government officials’ rhetoric to go unmoderated in the digital sphere.

Given the sheer volume of legislation and policy discourse surrounding big tech’s algorithms, a decision on their future will likely be made within the next year. With the current conservative makeup of the Supreme Court and the bipartisan legislative movement to revise Section 230, the outcome will most likely go against the autonomy of tech companies. While tech companies regulating their own user content isn’t ideal, private autonomy is the lesser of two overreaching evils, and it leaves room for specific content-based revisions like those introduced in the SOSAFA. Yet any resulting decision, whether it be the destruction of algorithms, the designation of companies as common carriers, or the creation of a must-carry provision, will severely curb the autonomy of social media companies while simultaneously creating a new (and perhaps dangerous) ethical standard for content moderation.

Out of all of these potential outcomes, the most impactful regulation for both consumers and producers would be the destruction of algorithms. At their core, social media companies aren’t platforms: they’re businesses that use algorithms to drive their profits. Tech companies make their money through advertising, which is maximized both by user engagement and by the collection of user data to personalize the user experience. In this business model, a user and their data act as product and consumer in a self-reinforcing cycle, which, perhaps unethically, encourages companies to keep users on the site for as long as possible. For example, if a user searches for dangerous content, algorithms will not only show the user that content but will continuously promote it to keep them engaged.
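To make that feedback loop concrete, here is a minimal, purely hypothetical sketch of an engagement-maximizing feed ranker. The topics, scores, and weighting below are illustrative assumptions, not any real platform’s code; the point is only that a ranker optimized for engagement keeps resurfacing whatever a user already dwells on, with nothing in the scoring that asks whether the content is safe.

# Hypothetical sketch: an engagement-maximizing feed ranker (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Item:
    topic: str
    base_quality: float  # how "good" the content is, independent of the user

@dataclass
class User:
    # seconds of dwell time accumulated per topic
    engagement_by_topic: dict = field(default_factory=dict)

def score(item, user):
    # Rank almost entirely by the user's own past engagement with the topic.
    # Nothing here checks whether the topic is safe or accurate.
    past_engagement = user.engagement_by_topic.get(item.topic, 0.0)
    return 0.9 * past_engagement + 0.1 * item.base_quality

def build_feed(items, user, k=3):
    return sorted(items, key=lambda it: score(it, user), reverse=True)[:k]

if __name__ == "__main__":
    catalog = [Item("cooking", 0.8), Item("sports", 0.7), Item("self-harm", 0.1)]
    user = User()
    for step in range(3):
        feed = build_feed(catalog, user)
        print(f"step {step}: feed =", [it.topic for it in feed])
        # The user first lingers on the harmful item; the ranker then keeps promoting it.
        watched = feed[0] if step else catalog[2]
        user.engagement_by_topic[watched.topic] = (
            user.engagement_by_topic.get(watched.topic, 0.0) + 60.0
        )

Run as written, a single session of dwelling on the harmful topic is enough for the sketch to rank that topic first on every subsequent pass, which is the cycle critics of engagement-driven algorithms describe.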

Historically, this business model has sparked outrage. In 2021, the Wall Street Journal released a series of reports called the Facebook Files, which revealed that Facebook’s own research had determined that the Instagram Explore page could send users, especially young girls struggling with their body image, into a harmful spiral. However, as a company, Facebook was incentivized to continue putting profits over people and leave its algorithm unchanged.

Since algorithms are not directly protected under the CDA, any legislation passed or ruling handed down will determine whether tech companies can continue to use the code that both drives their profits and shapes how users interact in the digital sphere. While algorithms remain ethically dubious, destroying them would have a massive impact on how humans interact online, at the cost of the hyper-personalized content they provide. Further, under purely autonomous regulation, tech companies retain the option to take down misinformation and adjust their algorithms to exclude it. Yet in a fully government-regulated environment, there is room for a dominant political party to reward political rhetoric over public safety and livelihood. Essentially, companies are flexible, but government is bureaucratic, and once passed, legislation isn’t easily changed.

It’s clear that there are ethical issues with the current business models of tech companies. Yet on the other hand, forms of regulation like common-carrier designation and must-carry requirements would set an authoritarian precedent for the digital age, with far-reaching consequences of federal overreach and politicized media regulation in the United States. Therefore, broad government regulation is not the answer to the future of big tech. Instead, a more balanced solution is found in legislation like the SOSAFA, which, if revised with greater clarity regarding algorithmic regulation, would prioritize public safety, leave companies with a degree of structural autonomy, and incentivize Big Tech to put people over profits.
