Friday, November 22, 2024
Friday, November 22, 2024
Home Tech Hub Please die: Google Gemini tells college student asking for help with homework

Please die: Google Gemini tells college student asking for help with homework

by PratapDarpan
3 views

Please die: Google Gemini tells college student asking for help with homework

A college student in Michigan was shocked when Google’s AI chatbot Gemini gave him harmful advice instead of help for a school project.

listen to the story

Advertisement
Please die: Google Gemini tells college student asking for help with homework
Google’s Gemini AI chatbot tells an American student to “please die”. (Symbolic photo from Getty Images)

Imagine asking a chatbot for help with homework and being told, “Please die.” This is exactly what happened to Vidha Reddy, a 29-year-old college student from Michigan. Vidya was working on a school project about helping aging adults and turned to Google’s AI chatbot, Gemini, for ideas. Instead of getting useful advice, she got a shocking and hurtful message. The AI ​​told him, “You are a burden on society” and “Please die.” Please.”

Advertisement

Reddy was naturally shaken. “It just didn’t feel like a random error. It felt targeted, like it was speaking directly to me. I was scared for more than a day,” he told CBS News.

His sister Sumedha was with him when this incident happened and the experience left her equally shaken. “It completely freaked us out. I mean, who expects this? I wanted to throw every appliance in the house out the window,” she said.

While some tech-savvy people might brush it off as a glitch, Sumedha argued that it was not your typical tech hiccup. “AI manipulation certainly happens, but this seems too personal and malicious,” he said, highlighting the potential dangers such reactions pose to vulnerable users.

Google, for its part, has acknowledged the incident, calling it a “non-sensitive response” that violated their security policies. “We have taken action to prevent similar output,” the company said in a statement.

But Reddy was not satisfied. “If any person said something like this, there would be legal consequences. Why is it any different when a machine does this?” Vidya asked.

This is not the first time that AI chatbots have been in the headlines for harmful or bizarre outputs. Earlier this year, Google’s AI reportedly suggested eating rocks for vitamins — yes, actual rocks — and other chatbots have made similar dangerous mistakes.

What is more worrying is that Gemini is not alone in the hot seat. Another chatbot allegedly encouraged a teen in Florida to take his own life, sparking a lawsuit against its creators.

The Reddy siblings are concerned about the bigger picture. “What if someone who is already in a dark place reads something like this?” Sumedha asked. “It could push them over the edge.”

As AI becomes increasingly integrated into daily life, incidents like this remind us that even the smartest machines can mess up – sometimes in ways that would make humans nervous. Maybe it’s time for machines to learn a lesson in empathy.

You may also like

Leave a Comment