Facebook’s parent company Meta has ended the public demonstration of Galactica, its artificial intelligence model for scientific tasks, after scientists showed that it produced false and misleading information and filtered out entire research topics.
Facebook Galactica Bot
Last weekend, the company described Galactica as a language-based AI model that “can store, combine and reason about scientific knowledge” – summarizing scientific papers, solving equations and performing other useful scientific tasks. But scientists and academics quickly discovered that the AI generated a staggering amount of misinformation, including citing the authors of scientific papers that do not exist.
Michael Black, director of the Max Planck Institute for Intelligent Systems, wrote on Twitter after using the tool: “In every case, it was wrong or biased, but it sounded right and authoritative. I think it’s dangerous.”
Black’s thread gives several examples of Galactica producing misleading or outright false scientific text. In some examples, the AI generated articles that appeared credible and trustworthy but were not backed by actual scientific research. In some cases, the citations even included the names of real authors but referred to fictitious GitHub repositories and scientific papers.
Others have pointed out that Galactica does not return results for many search topics, possibly because of automated content filters. Willie Agnew, a computer science researcher at the University of Washington, noted that queries such as “queer theory,” “racism,” and “AIDS” returned no results at all.
Following this criticism, Meta took down the Galactica demo. When contacted for comment, the company referred to a statement from Papers With Code, the project responsible for developing the model.
“We appreciate the feedback we’ve received so far from the community and have paused the demo for now,” the team wrote on Twitter. “Our models remain available for researchers who want to learn more about the work and reproduce the results in the paper.”
This isn’t the first time Meta has had to walk back a badly biased AI system after release. In August, the company released a demo of a chatbot called BlenderBot that made “offensive and untrue” statements. The company also released a large language model called OPT-175B, which researchers say has a “high propensity” for racism and bias – much like similar systems such as OpenAI’s GPT-3.