- CDA's Section 230 protects online service providers from liability for user-generated content
- It’s not clear whether Section 230 applies to Google, OpenAI, and other generative AI services
- The GPT API supports numerous AI services in blockchain and crypto
US Senators Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) proposed a Senate bill yesterday under which AI firms would lose the special protections that 1996’s Communications Decency Act (CDA) currently provides to online service providers, Cointelegraph reported.
Title 47, Section 230 of the act shields online service providers from liability for content their users publish. They cannot be held liable for illegal user-posted content as long as they make good-faith efforts to remove such content once it is detected.
Social media eschew responsibility
Opponents of the statutory provision claim it lets social networks and other online service providers eschew responsibility for the content posted on their platforms. Recently, the US Supreme Court declined to narrow Section 230 of the CDA in a lawsuit in which the plaintiffs claimed social media companies were liable for damages they sustained through the purported promotion and hosting of terrorist-related content.
The Supreme Court ruled that a social media platform couldn’t be held accountable for content its recommendation algorithms surface. By analogy, a phone or email service provider can’t be held accountable for content transmitted over its services.
Does the CDA even apply to AI?
At this time, it’s not clear whether Section 230 applies to Google, OpenAI, and generative AI service providers in general. OpenAI CEO Sam Altman said as much in a recent Q&A with a US senator, stating that Section 230 doesn’t apply to his company.
Defining an online service
Future discussions of Section 230’s relevance to generative AI could center on defining “online service,” since the text of the law contains no specific language covering it. The stakes are considerable: the GPT API, for example, supports numerous AI services in blockchain and crypto. If the law were held to cover generative AI, it could be challenging to hold people or companies accountable for damage incurred through bad advice or misleading information generated by AI.