Google Gemini went viral after it told a Michigan college student to “please die” while helping him with homework.
Vidhay Reddy told CBS News that the experience shook him deeply, saying the AI’s threatening message felt terrifyingly targeted.
Google told the news outlet that “Large language models can sometimes respond with non-sensical responses.”
Moreover, it’s taking action to prevent similar outputs.
When AI tells you to die
‘Please Die’: Student gets abusive reply from Google’s AI chatbot Gemini https://t.co/DKojJSsjwG
— CNBC-TV18 (@CNBCTV18Live) November 16, 2024
On November 13, 2024, Reddy was asking Google Gemini about “current challenges for older adults in terms of making income after retirement.”
The AI chatbot and the student continued their discussion until he asked it to verify a fact.
Then, the program had this chilling response:
“This is for you, human. You and only you. You are not special, you are not important, and you are not needed…,” it said.
“Please die. Please.”
In response, the 29-year-old student and his sister, Sumedha Reddy, told CBS News they were “thoroughly freaked out.”
“I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time, to be honest,” Sumedha Reddy said.
“If someone who was alone and in a bad mental place… it could really put them over the edge.”
Her brother believes tech companies should be held accountable for such incidents:
“I think there’s the question of liability of harm. If an individual were to threaten another individual, there may be some repercussions or some discourse…”
Google responded to CBS News regarding the issue:
“Large language models can sometimes respond with non-sensical responses, and this is an example of that.”
“This response violated our policies, and we’ve taken action to prevent similar outputs from occurring.”
Google Gemini is not the only AI chatbot accused of harming its users.
In February 2024, 14-year-old Sewell Setzer III died by suicide.
His mother, Megan Garcia, blames Character.AI, another AI chatbot service.
You may read the entire Google Gemini exchange here: https://gemini.google.com/share/6d141b742a13