Google has warned that an upcoming Supreme Court (SC) case could strip away key protections that shield companies from lawsuits over content moderation decisions involving artificial intelligence (AI).
Section 230 of the Communications Decency Act of 1996 currently provides a broad "shield from liability" covering how companies moderate content posted to their platforms.
However, as reported by CNN, Google argued in a legal filing that if the SC rules in favor of the plaintiff in Gonzalez v. Google, a case concerning YouTube algorithms that recommended pro-ISIS content, the Internet could become flooded with offensive, dangerous, and extremist material.
Moderation through automation
As part of a nearly 27-year-old statute, one already targeted for reform by US President Joe Biden, Section 230 is not well equipped to address recent developments such as artificial intelligence algorithms. This is where the problems begin.
Google's argument centers on the fact that the Internet has grown so much since 1996 that integrating AI into content moderation is now a necessity. The company stated in its filing that virtually no modern website could function if users had to sort through content manually.
Faced with a glut of content, technology companies must use algorithms to present information to users in a manageable form, from search engine results to flight deals to job recommendations on job boards.
Google also noted that, under current law, tech companies could avoid liability entirely by simply refusing to moderate their platforms at all, a perfectly legal approach that would nonetheless put the Internet at high risk of becoming a virtual cesspool.
The tech giant also pointed out that YouTube's Community Guidelines explicitly prohibit violence, terrorism, and "any other dangerous and offensive content," and that it constantly adjusts its algorithms to keep banned content from surfacing.
It also claimed that 95% of the videos violating YouTube's violent extremism policies were detected automatically in the second quarter of 2022.
The petitioners in the case claim that YouTube failed to remove all ISIS-related material and that it helped ISIS "rise" to prominence.
Seeking to distance itself from liability, Google responded that YouTube's algorithms recommend content to users based on similarities between a piece of content and material the user has already shown interest in.
This is a complex case. While it is easy to accept that the Internet is too big for manual moderation, it is also compelling to argue that companies should be held responsible when their automated solutions fail.
Tech giants can offer users filters and other tools, such as parental controls, but there is no guarantee that these take effective action in blocking offensive content.