A single case coming before the highest court in the U.S. could determine how the internet operates going forward, and you can bet your bottom dollar there are plenty of tech companies, civil liberties groups, and politicians on both sides of the aisle willing to offer an opinion on the matter.
Last October, the Supreme Court announced it would hear Reynaldo Gonzalez v. Google, a case that could very well upend nearly every aspect of the way the internet currently works. It all centers on the famous (or infamous) Section 230 of the 1996 Communications Decency Act, a small section of law that has kept tech companies and websites from being treated as publishers of the content users post on their platforms.
Section 230, or often just “230,” remains a cornerstone of the internet, allowing companies to keep their websites running on user-made content without having to worry about outside entities and governments either suing them or censoring their platforms. In the decades since the law passed, algorithm-based content feeds have become par for the course on the world’s most-visited social media sites, and platforms have come to differ considerably in how they moderate content. Gonzalez v. Google rests on the question of whether 230 has gone too far in protecting companies from liability for the content allowed on the internet.
The Gonzalez family sued Google over content shown on YouTube. The case relates to the 2015 terror attacks in Paris, in which 130 people were killed and many more injured; the Islamic State claimed responsibility. Nohemi Gonzalez was a U.S. citizen killed in the attacks, and her family has claimed Google is responsible for the YouTube content that radicalized the people who carried them out. Google, for its part, has said it works constantly to remove such material from its platforms. The company has also said that losing 230 would break the “central building block” of the internet.
Because the stakes couldn’t be higher, politicians, trade groups, civil liberties groups, and a number of other tech companies big and small have submitted briefs to the court, most of them extolling the benefits of Section 230, though a few Republican members of Congress and a handful of other organizations took issue with how the law has been interpreted since 1996.
In an amicus brief supporting 230, Santa Clara University School of Law professor Eric Goldman said that the law as it currently stands doesn’t just promote freedom of speech; it has created a national standard that gives new companies more room to break through a crowded and near-monopolized online space. Goldman wrote that without automated content systems, companies would be pushed toward “more costly solutions that would circumscribe author-users’ abilities to publish content of all types.”
While Goldman has also come down against states’ attempts to limit content moderation decisions, he’s not the only one claiming that 230 is the main way users’ speech is protected online. Numerous groups across the tech sphere and beyond filed friend-of-the-court briefs, the vast majority of them supporting keeping 230 the way it is. Of course, these are just arguments, and the justices on the nation’s top court are under no obligation to even glance at them.
So yes, I read through a whole load of amicus briefs to gauge where various groups and prominent individuals stand on the Section 230 controversy. Here are a few of the most notable arguments. The Supreme Court is set to hear oral arguments in just a month, starting Feb. 21.
Democratic Senator Ron Wyden and Chris Cox, a former Republican congressman from California, were, in part, the original authors of the Communications Decency Act, and of 230 in particular. They told the court in a brief that the original intent behind 230 was to make the person who created the content, whether that’s a video, a post, or a tweet, the one legally responsible for it, rather than the site or medium where it was posted. They further argued the original bill was drafted in a “technology-neutral manner” to apply to any future means of content distribution.
The two original authors argued that companies were already doing content recommendations back when the law went into effect in 1996.
“The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike,” the pair wrote.
There are more than a few Republicans who have a thing against big tech. One brief submitted by Texas Senator Ted Cruz and Congressman Mike Johnson, along with 15 other members of Congress, relies on the usual anti-big tech rhetoric and erroneous complaints of conservative censorship (AKA content moderation, the very thing that upending 230 would encourage more of). The members of Congress argued that tech companies censor “opposing viewpoints… without the slightest fear of legal liability.”
Though the brief isn’t in explicit support of one side or the other, Cruz and the other Republicans asked the court to “return 230 to its textual scope,” which they argue does not provide “immunity” to tech companies.
Senator Josh Hawley, a fellow conservative known for criticizing big tech, filed his own brief saying that 230 shields platforms from publisher liability, “not distributor liability.” He further argued that the court should make companies liable for content they have “actual knowledge” of on their platforms, though that line of argument flies in the face of how content algorithms actually work.
Industry groups including NetChoice and TechFreedom put their names to briefs in support of 230. TechFreedom argued that the internet has “flourished” under 230 since it fills the internet with “different speech environments” where people online can “feel comfortable speaking.” Of course, that has a dark side too: there have been multiple such “communities,” like the online harassment cesspool known as Kiwi Farms, whose members attacked people online for years, safe in their own home forums until last year.
Still, the trade groups praised the safety net that lets platforms operate without fear of censorship. TechFreedom specifically cited Cruz’s brief and said that if the anti-big tech politicians got what they wanted, they would limit free speech for everyone, including conservatives.
The Chamber of Progress, a tech trade group, said that without 230, platforms would be “discouraged” from moderating content such as hate speech against LGBTQ+ people. The Chamber argued YouTube uses algorithmic ranking to deprioritize videos by groups sharing misinformation, such as the PragerU channel. Never mind the fact that both LGBTQ+ and far-right YouTubers have complained that YouTube discriminated against their videos in the past.
Yes, even the social aggregation and discussion site Reddit came out to support 230, and it had one of the more interesting arguments of the bunch. Reddit, for its part, argued that 230 doesn’t just protect its site but also the volunteer moderators who run its online discussion boards. A spokesperson for Reddit told Gizmodo “everyday people, including our users, could face lawsuits for participating in these activities.” The company pointed to mods of r/Screenwriting who were sued for leaving up posts questioning whether a competition was a scam. Reddit was also sued by a user whom r/StarTrek mods banned for saying one of the show’s characters was a “soy boy.”
Of course, Reddit by no means invented the idea of community mods, but its brief points to just how much the discussion-oriented parts of the internet depend on community organizing. The idea of volunteer moderators being in the legal crosshairs over internet content has a very different ring to it than the platforms themselves being responsible for what users post.
The Wikimedia Foundation, which runs Wikipedia and other wiki sites, has an enormous stake in the game. As the foundation pointed out in its brief, practically all of the content on its sites is user generated, as is the content posted to the many other sites using the wiki format.
The foundation argued that sites like its own would land squarely in the crosshairs of lawsuits over user-generated educational posts, even though they don’t have the legal resources of the major tech companies. It also argued that the URLs it “in part creates” would make it liable for the content itself should somebody take umbrage at a post.
“Section 230 ensures that websites with small budgets but large impacts can exist and compete against the big players,” the foundation wrote in its brief.
Many online businesses depend entirely on not being targeted over the content their users upload, and it’s not just the largest social media platforms. A whole range of websites and tech companies, from Etsy to Pinterest to Roblox and Vimeo, came together in one brief to argue that defeating 230 wouldn’t just hurt the “handful of large social media platforms”; it would “disrupt the basic operation of the entire internet.” The sites further argued that without 230, “even a website’s users could be held liable for content created by other people.” This is similar to what Google has said, raising the prospect that users who like, comment on, or repost content could be held liable for it.
ZipRecruiter, Yelp, and even the venerable Craigslist also filed separate briefs in support of 230.
Craigslist called itself an extension of the classified ads that have run in newspapers for centuries, adding that this means it fills a classic “‘publisher’ function.” The company said the features that let users organize and filter user-generated content on the site are all protected under Section 230. Most importantly, the company said there’s no inherent difference between somebody “searching” for content and a company “recommending” content, since both are ways of organizing and prioritizing the vast quantities of information online.
The nonprofit journalism advocacy group Reporters Committee for Freedom of the Press wrote in its brief that 230 tempers companies’ willingness to remove controversial speech, “including lawful, public interest journalism,” out of fear it could be subject to litigation. Without that protection, companies might even pre-screen content before it’s published, meaning controversial subjects would come under even heavier scrutiny.
For journalists and their work, that means breaking news can proliferate quickly, and reporters can identify and investigate sources in public forums without fearing repercussions for doing so.
At the same time, media advocacy group Free Press Action instead argued that both sides of the debate are missing the point. Google, it said, should not receive blanket immunity for its systems when they host and recommend videos that, for example, promote terrorism.
Though 230 means Google is not necessarily a publisher of user content, “it does not preclude classifying Google as a distributor and obligating it to remove content it knows to be unlawful.” Essentially, only if Google knew it was distributing specific instances of harmful content would it lose its defense.
All the other major tech companies with an enormous presence online have a massive stake in keeping 230 alive. Meta, Twitter, and Microsoft all wrote briefs arguing the merits of 230.
Meta and its flagship product Facebook, for one, essentially created the current digital ecosystem that relies on monetizing user information and serving users a stream of personalized ads. The company argued that it’s “impossible” to operate an online platform without recommending content, comparing the practice to editing a story anthology, where editors have to “recommend” which story appears first in the pages.
“Given the sheer volume of content on the internet, efforts to organize, rank, and display content in ways that are useful and attractive to users are indispensable,” the company wrote.
Twitter noted that when users open the “following” tab in its apps, the company prioritizes newer posts from the people they follow over older ones, but the exact way the algorithm works wouldn’t matter to an outside viewer.
“In such circumstances, any message implied by the selective display of content cannot be distinguished from its mere publication,” the company wrote.
A few groups submitted briefs that weren’t explicitly supporting one side of the debate. The Anti-Defamation League argued that while 230 isn’t bad in itself, the courts have allowed sites to shield themselves from accountability for amplifying hate. Further, the ADL suggested that sites have been so focused on keeping users clicking and scrolling with algorithm-based content that they have “actively pulled users toward extremism and hate.” The anti-hate organization argued these sites have not done enough to fight extremism because they are shielded by 230.
Still, organizations like the Electronic Frontier Foundation, alongside a few other internet-focused civil liberties groups, argued that narrowing the scope of 230 would lead to more online “censorship.”
Similar to what Google said in its response brief, the EFF claimed doing away with 230 would create an “artificially stunted” online environment, making for a “sanitized, bland, homogenous online experience.” That could also greatly harm all kinds of user-generated content: “a content creator or original poster may be able to see the content they uploaded, but online platforms would prevent other users from knowing where to find it or how to share it.”
"company" - Google News
January 24, 2023 at 01:30AM
https://ift.tt/lzRVX2p
Here's Where Tech Companies, Politicians Stand on Section 230 - Gizmodo
"company" - Google News
https://ift.tt/WINnoBK
https://ift.tt/PHCOlj2
Bagikan Berita Ini
0 Response to "Here's Where Tech Companies, Politicians Stand on Section 230 - Gizmodo"
Post a Comment