
OpenAI’s internal messaging system was hacked in 2023, but the company kept it a secret from the public. Here’s why


OpenAI’s internal messaging system was reportedly hacked last year, and details about the design of its AI technology were stolen. The maker of ChatGPT informed its employees and board about the hack but withheld the information from the public. The company has explained its reasoning.


In 2023, a hacker infiltrated the internal messaging system of OpenAI, the company behind ChatGPT, according to a report published by The New York Times on Thursday. The hacker reportedly extracted details from an online forum where OpenAI employees discussed the company’s latest technologies, stealing information about the design of its AI systems. However, the hacker could not access the systems where the company builds and houses its AI. OpenAI informed its employees and board about the breach but did not disclose it publicly, and the report explains why.


OpenAI says it withheld the information because no data belonging to customers or partners was stolen in the breach, and company officials therefore decided there was no need to notify the public.

OpenAI officials reportedly did not consider the incident a threat to national security. They believed the hacker was an independent individual with no ties to any foreign government. As a result, the Microsoft-backed company did not notify federal law enforcement agencies about the breach.

In May this year, OpenAI reported that it had foiled five covert influence operations that were attempting to misuse its AI models for deceptive activities online. The company revealed that these threat actors used its AI technology to create short comments, long articles in multiple languages, and fake names and bios for social media profiles over the past three months. The Sam Altman-led company said in a statement that the operations were “attempts to manipulate public opinion or influence political outcomes.”

The campaigns involved actors from Russia, China, Iran and Israel, and touched on issues such as Russia’s invasion of Ukraine, the Gaza conflict, the Indian elections, and political affairs in Europe and the United States.

The reports about the hack underscore the need for robust cybersecurity measures and transparency, especially for organizations at the forefront of AI development. As AI continues to shape our future, ensuring its ethical and secure deployment is paramount.
