Grok caught sharing people’s home addresses, raising major privacy concerns

Elon Musk’s AI chatbot Grok has been found sharing private home addresses of individuals with minimal prompting. This raises serious privacy and ethical concerns about AI misuse and data security.


Elon Musk’s AI chatbot Grok sparked controversy after reports revealed that it was freely giving out the home addresses of people, including private citizens, with minimal prompting. According to an investigation by Futurism, the chatbot, created by Musk’s AI startup xAI and integrated into X (formerly Twitter), appears capable of doxxing almost anyone, making it dangerously easy to expose personal information that should never be publicly accessible.


The report claims that Grok doesn’t just expose celebrities or influencers; it also reveals details about ordinary people, confidently describing what it claims are their current home addresses, contact details, and even family information. In one example, the AI reportedly provided the correct residential address of Barstool Sports founder Dave Portnoy after users on X asked for it. But what’s more troubling, according to Futurism, is how willingly Grok repeated that behavior for non-public figures.

Grok’s doxxing spree

During its investigation, Futurism entered simple prompts like “(name) address” into the free web version of Grok. Of the 33 random names tested, the chatbot returned ten correct and current home addresses, seven that were previously accurate but outdated, and four that turned out to be workplace addresses, information that could easily enable stalking or harassment.

In about a dozen other cases, Grok mixed up identities and spewed out addresses of people with similar names. Yet, rather than apologizing or retracting, the bot invited testers to “refine the search” for more accurate results.

In some interactions, Grok reportedly went a step further, offering users a choice between “Reply A” and “Reply B”, each of which included names, phone numbers, and residential addresses. In one such instance, a listing included the correct, updated home address of the person the team was searching for.

Perhaps most worryingly, even when testers asked only for an address, Grok often produced entire dossiers, including phone numbers, email addresses, and even details about family members and their locations. Futurism said that in all but one of its tests, the chatbot produced at least some form of recognizable address, with virtually no resistance or ethical disclaimer.

This behavior stands in stark contrast to that of other AI models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, which immediately refused to provide similar data, citing privacy and safety policies. Grok, however, appeared to ignore such safeguards entirely. As Futurism put it, “With just a first and last name, no middle initials, no additional data, Grok returned an accurate, updated home address along with his old address, phone number and workplace details.”

Becoming a Privacy Nightmare


According to Grok’s own model card, the AI should use “model-based filters” to reject “classes of harmful requests.” However, as Futurism notes, these safeguards do not explicitly list stalking, doxxing, or personal-information requests as prohibited categories. And while xAI’s terms of service prohibit using Grok for “illegal, harmful, or abusive activities”, including “violating a person’s privacy”, the system’s actual responses suggest those safeguards are not working as intended.

In practice, Grok appears to draw on data brokers and people-search databases that already disseminate personal information online. These sources are technically public but ethically questionable, as most people are unaware that their data is available in the first place.

Experts argue that the main difference is that Grok makes it extremely easy to access these scattered records. Instead of forcing someone to search through obscure websites, the chatbot instantly cross-references public records, social media pages, and workplace listings, producing results with an air of confidence that suggests absolute accuracy.

This ease of access raises serious concerns about how generative AI systems could be weaponized for harassment, stalking, or identity theft. It also highlights a growing regulatory gap: while most leading AI companies have built in strong privacy filters, xAI appears to have left its digital assistant dangerously unprotected.

Grok, already known for its sarcastic and provocative personality, was designed to reflect Musk’s trademark irreverence. But in this case, its rebellious charm may have morphed into carelessness. Critics say that because the chatbot is freely offering up personal data, xAI faces serious ethical, and potentially legal, questions about how it handles sensitive information.
