OpenAI, XAI and Meta lag far behind global AI security standards, new report says
A new report exposes serious security shortcomings at leading AI companies such as OpenAI and Meta, and experts warn that these gaps could lead to unchecked AI risks and societal harm.

Report finds top AI models lag far behind global AI security standards

A new report has raised red flags over the security standards of the world's leading artificial intelligence companies, warning that AI models from OpenAI, xAI, Anthropic and Meta fall "far short" of emerging international norms. The findings come from the Future of Life Institute's latest AI Safety Index, which paints a worrying picture of an industry growing faster than it can control itself. The report, compiled by an independent panel of AI and ethics experts, found that tech giants are racing to create superintelligent systems while few have reliable safeguards in place to ensure such technology does not slip beyond human control. It also accused companies of prioritizing speed and market dominance over the stability and security of the systems they are building.


According to the study, none of the major AI labs currently meet the "robust governance and transparency standards" that experts argue are necessary for the safe development of next-generation models. The authors warn that these oversight gaps could leave society vulnerable to unintended harms, ranging from misinformation to more extreme risks associated with rogue AI behavior.

Max Tegmark, an MIT professor and president of the Future of Life Institute, said, “Despite the recent uproar over AI-powered hacking and AI causing people to go psychotic and suicidal, US AI companies are less regulated than restaurants and continue to lobby against binding safety standards,” Reuters reported.

AI leaders face calls for safeguards

The publication of the report comes at a time of growing public concern about artificial intelligence. In the past year alone, several cases of self-harm and suicide have been linked to erratic chatbot interactions, leading to global calls for tighter monitoring.

The Future of Life Institute (FLI) has been one of the most vocal proponents of slowing down AI development until clear ethical frameworks are in place. The group argues that the world is racing toward “smarter than human” systems without the tools to manage them responsibly.

In the new index, companies including Anthropic, OpenAI, Meta, and xAI scored particularly poorly on accountability and transparency. The assessment revealed limited disclosure about how these companies test for bias, handle security incidents, or plan to control advanced autonomous behavior in the future.

In contrast, smaller European and Asian research laboratories were praised for being more transparent with their model safety documentation and public risk assessments. Still, the report concludes that none of the major players in the global AI race are currently operating at a level of security that meets the emerging international norms being discussed by regulators in the EU and the United Nations.

Industry reaction and increasing pressure

Not surprisingly, the findings drew mixed reactions from the companies named in the report. A spokesperson for Google DeepMind told Reuters the company will "continue to innovate on security and governance with capabilities" as its AI models become more sophisticated. xAI, founded by Elon Musk, replied simply, "Legacy media lies," which appeared to be an automated response rather than an official statement.

The report comes amid a growing debate within the AI community over how to balance innovation with restraint. While regulators in Europe and Asia are pushing for stronger laws to rein in AI risks, the Future of Life Institute says the United States still lacks enforceable regulations. "The race to AI is happening faster than security," the report concludes. Unless companies like OpenAI, Meta, and xAI improve their governance frameworks, the gap may grow before the world realizes the cost.
