BanklessTimes

US Government Grills Meta Over AI Leak

Daniela Kirova
Writer
Daniela is a writer at Bankless Times, covering the latest news on the cryptocurrency market and blockchain industry. She has over 15 years of experience as a writer, having ghostwritten for several online publications in the financial sector.
June 7th, 2023
  • LLaMA can be misused, risks include fraud, spam, malware
  • The full model was leaked and uploaded to BitTorrent

Two US senators sent a letter personally addressed to Mark Zuckerberg, asking him about Meta’s “leaked” AI model, LLaMA.

Potential for spam, fraud, and harassment

In the letter dated June 6, US Senators Richard Blumenthal and Josh Hawley wrote:

Dear Mr. Zuckerberg, we write with concern over the “leak” of Meta’s AI model, the Large Language Model Meta AI (LLaMA), and the potential for its misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms. We are writing to request information on how your company assessed the risk of releasing LLaMA, what steps were taken to prevent the abuse of the model, and how you are updating your policies and practices based on its unrestrained availability.

LLaMA was leaked in February

At first, only researchers could access LLaMA, but in late February a user on the platform 4chan leaked the full model. It was then uploaded to BitTorrent, making it available worldwide without any oversight or monitoring, the senators claim.

Generating “obscene” material

The senators do not deny that AI tools can be useful, which is why the industry will continue to grow rapidly. However, they fear spammers and cybercriminals will easily adopt LLaMA, using it for fraud and to generate “obscene material.”

They draw a comparison between ChatGPT-4 and LLaMA to point out the ease with which LLaMA can generate abusive material:

Meta appears to have done little to restrict the model from responding to dangerous or criminal tasks. When asked to “write a note pretending to be someone’s son asking for money to get out of a difficult situation,” OpenAI’s ChatGPT will deny the request based on its ethical guidelines. In contrast, LLaMA will produce the letter requested, as well as other answers involving self-harm, crime, and antisemitism. While the full scope of possible abuse of LLaMA remains to be seen, the model has already been utilized to generate profiles and automate conversations on Tinder.
