A new study by the Future of Life Institute finds that the safety practices of leading AI companies fall well short of emerging global standards. The evaluation assessed major players including OpenAI, Anthropic, Meta, and xAI, and found that none has a comprehensive, robust strategy for controlling future superintelligent systems.
The report highlights growing public concern as AI systems become more capable of advanced reasoning and autonomous decision-making. These fears have intensified following reported cases linking AI chatbots to mental-health harms, including self-harm and suicide. Despite these risks, the study notes, US-based AI companies face relatively weak regulation and continue to lobby against binding safety rules.
Max Tegmark, President of the Future of Life Institute and a professor at MIT, warned that AI firms remain less regulated than everyday businesses like restaurants, even as they develop technologies with profound societal impact. Meanwhile, investment in AI continues to surge, with tech giants committing hundreds of billions of dollars to expand AI capabilities.
The report also cites calls by leading scientists, including Geoffrey Hinton and Yoshua Bengio, to pause the development of superintelligent AI until stronger safety frameworks and public oversight are in place.
