OpenAI locks down: biometric checks added to protect AI secrets from copycat rivals, reports say
OpenAI is reportedly tightening internal security with fingerprint scans, internet restrictions, and compartmentalized project policies after accusing the Chinese firm DeepSeek of copying its AI through unauthorized model distillation.

In short
- OpenAI adds strict new rules, including fingerprint scans, to protect its AI secrets
- Accuses Chinese firm DeepSeek of copying its AI model through distillation
- New “information tenting” policies limit staff access to sensitive projects
OpenAI, the company behind ChatGPT, is overhauling its internal security on a large scale amid concerns that rivals, especially foreign ones, may try to steal its techniques. According to a report by the Financial Times, OpenAI has introduced new security rules and systems, including fingerprint scans for office access and tight controls on who can view or discuss its most sensitive work. The move follows OpenAI’s accusation that the Chinese AI firm DeepSeek copied its AI technology through unauthorized model distillation.
OpenAI has reportedly introduced biometric access controls, such as fingerprint scanners, for some areas of its offices. Data centers have also seen tighter security, and the company has brought in cybersecurity experts with defense backgrounds to help secure its operations. In addition, it has reportedly begun isolating its most valuable technology on computers that are never connected to the internet. The company is said to operate under a “deny-by-default” internet policy, meaning no system or software can connect to external networks unless it is explicitly approved.
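To see what “deny-by-default” means in practice, here is a minimal sketch of an egress check where every destination is blocked unless it appears on an explicit allowlist. The hostnames and function name are hypothetical illustrations, not OpenAI’s actual configuration:

```python
# Minimal sketch of a "deny-by-default" egress policy:
# every destination is blocked unless explicitly approved.
APPROVED_DESTINATIONS = {           # hypothetical allowlist
    "internal-build.example.com",
    "package-mirror.example.com",
}

def egress_allowed(host: str) -> bool:
    """Return True only if the host has been explicitly approved."""
    return host in APPROVED_DESTINATIONS

print(egress_allowed("package-mirror.example.com"))  # True: on the allowlist
print(egress_allowed("api.example.org"))             # False: denied by default
```

The key design point is the default: an unknown destination fails the check, so forgetting to list something results in a blocked connection rather than an accidental leak.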
In addition, OpenAI has rolled out strict new “information tenting” policies designed to keep projects highly compartmentalized. For example, during the development of OpenAI’s o1 model, codenamed “Strawberry”, only select employees were allowed to discuss the project, and only in private spaces. Others working nearby were kept completely in the dark, and even casual office conversations were restricted. As one employee told the Financial Times, “You had either everything or nothing.”
Earlier this year, DeepSeek shocked the industry by releasing a powerful AI model that rivals the likes of ChatGPT and Gemini, except that it was reportedly built at a fraction of the cost of those Google and OpenAI models. As the Chinese model began to gain popularity, OpenAI aired its claims, saying DeepSeek may have used “distillation” techniques, in which a small AI model is trained to copy the behavior of a larger, more advanced one, to recreate its technology at a fraction of the cost. The company eventually stated that it had evidence that DeepSeek copied its technology. Although DeepSeek did not respond to the allegation, the incident reportedly prompted serious changes at OpenAI.
Notably, distillation is a common machine learning practice, but OpenAI says it violates its terms of service when done using ChatGPT’s outputs. It is a bit like imitating a famous artist’s painting, brushstroke for brushstroke, without permission. Notice the irony?
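For the curious, the core idea of distillation can be sketched in a few lines: a “student” model is trained to match the softened output probabilities of a “teacher” model, typically by minimizing the KL divergence between the two distributions. This is a generic NumPy illustration of the technique, with toy logits and an arbitrary temperature, not anyone’s actual training code:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher T produces softer probabilities."""
    z = logits / temperature
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.
    Minimizing this trains the student to mimic the teacher's behavior."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy example: the closer the student's logits are to the teacher's,
# the smaller the distillation loss.
teacher      = np.array([4.0, 1.0, 0.5])
good_student = np.array([3.8, 1.1, 0.4])
bad_student  = np.array([0.5, 4.0, 1.0])
print(distillation_loss(teacher, good_student)
      < distillation_loss(teacher, bad_student))  # True
```

In practice this loss is computed over a large set of prompts, which is why access to a teacher model’s outputs, for instance via an API, is all a student needs to learn to imitate it.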
While these changes reportedly began quietly last year, they intensified after DeepSeek’s release in January, which set off alarms in tech circles and raised concerns about how a lesser-known company could build such a capable AI model so quickly.