
Meta to suspend AI systems with ‘severe risks’ under new guidelines


Meta has introduced a new artificial intelligence (AI) risk assessment framework that could lead to the suspension of AI systems deemed too dangerous.

In a recent policy report titled the Frontier AI Framework, Meta outlined a structured approach to evaluating potential risks.

The company’s framework categorises AI risks into three levels: critical, high, and moderate.

Meta said models posing the highest level of threat will not be developed or released.


The process begins with identifying “catastrophic” threat scenarios, such as cyberattacks or biological weapon creation.

AI models are then assessed on whether they have the capabilities to enable such threats. The third step evaluates the model’s risk level based on its potential impact.

According to the report, a system is classified as critical risk if it can uniquely enable a catastrophic scenario, while a high-risk model is one that improves the chances of such an outcome.


Meta added that a small group of experts will have limited access to high-risk AI models under strict oversight.

Meanwhile, AI models considered moderate risk are those that do not significantly increase the likelihood of a harmful scenario being carried out; the company says it will still apply appropriate safeguards depending on how the model is released.
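As a rough illustration of how the published tiers relate to one another, the sketch below encodes the decision logic described above in Python. It is not Meta's tooling; the field names, function name, and boolean inputs are hypothetical, chosen only to mirror the framework's wording.

```python
# Minimal sketch of the three-tier risk logic described in the report.
# NOT Meta's implementation: field names and inputs are hypothetical.

from dataclasses import dataclass

@dataclass
class ThreatAssessment:
    uniquely_enables_catastrophe: bool      # model alone makes the scenario feasible
    significantly_uplifts_catastrophe: bool # model meaningfully raises its likelihood

def classify_risk(assessment: ThreatAssessment) -> str:
    """Map a threat-scenario assessment to one of the framework's risk tiers."""
    if assessment.uniquely_enables_catastrophe:
        return "critical"   # per the report: not developed or released
    if assessment.significantly_uplifts_catastrophe:
        return "high"       # limited access for a small group of experts
    return "moderate"       # released with safeguards tied to the release strategy

# Example: a model that raises the odds of a scenario but does not uniquely enable it
print(classify_risk(ThreatAssessment(False, True)))  # -> "high"
```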

The framework comes amid growing scrutiny of Meta’s AI strategy, particularly its open-source approach. Unlike OpenAI and Google, which restrict access to their AI models, Meta has made its Llama models widely available, leading to concerns over misuse.

Last year, reports surfaced that Chinese military researchers used Llama to develop a defense chatbot.


Regulatory challenges have also forced Meta to adjust its AI plans.

In 2024, the company halted plans to train AI models on user data in the European Union and United Kingdom after regulatory pushback.

It also paused the rollout of some AI systems in Europe due to legal uncertainties.
