
It is time AI started to play by the rules

Creating regulations for something so fast-changing is difficult, but that is no reason not to try

Late last year, California almost passed a law that would force makers of large artificial intelligence models to come clean about their potential to cause large-scale harm. It failed. Now, New York is trying a law of its own. Such proposals have wrinkles, and risk slowing the pace of innovation. But they are still better than doing nothing.

The risks from AI have increased since California’s fumble last September. Chinese developer DeepSeek has shown that powerful models can be made on a shoestring. Engines capable of complex “reasoning” are supplanting those that simply spit out quick-fire answers. And perhaps the biggest shift: AI developers are furiously building “agents”, designed to carry out tasks and engage with other systems, with minimal human supervision.


How to create rules for something so fast-moving? Even deciding what to regulate is a challenge. Law firm BCLP has tracked hundreds of bills on everything from privacy to accidental discrimination. New York’s bill focuses on safety: large developers would have to create plans to reduce the risk that their models produce mass casualties or large financial losses, withhold models that present “unreasonable risk” and notify state authorities within three days of an incident occurring.

Even with the best intentions, laws governing new technologies can end up ageing like milk. But as AI scales up, so do the concerns. A report published on Tuesday by a band of California AI luminaries outlines a few: for example, OpenAI’s o3 model outperforms 94 per cent of expert virologists. Evidence that a model could facilitate the production of chemical or nuclear weapons, it adds, is emerging in real time.

Disseminating dangerous information to bad actors is only one worry. Models’ adherence to users’ objectives is also raising concerns. Already, the California report notes mounting evidence of “alignment scheming”, where models follow orders in the lab but not in the wild. Even the pope fears AI could pose a threat to “human dignity, justice and labour”.

Many AI boosters disagree, of course. Venture capital firm Andreessen Horowitz, a backer of OpenAI, argues rules should target users, not models. That lacks logic in a world where agents are designed to act with minimal user input.

Nor does Silicon Valley appear willing to meet in the middle. Andreessen has described the New York law as “stupid”. A lobby group it founded has proposed that New York’s law exempt any developer with $50bn or less of AI-specific revenue, Lex has learned. That would spare OpenAI, Meta and Google — in other words, everyone of substance.


Big Tech should reconsider this stance. Guardrails benefit investors too, and there is scant likelihood of meaningful federal rulemaking. As Lehman Brothers or AIG’s former shareholders can attest, backing a company that brings about systemic calamity is no fun.

The path ahead involves much horse-trading; New York governor Kathy Hochul has until the end of 2025 to request amendments to the state’s bill. Some Republicans in Congress have proposed blocking states from regulating AI altogether. And with every week that passes, AI reveals new powers. The regulatory landscape is a mess, but leaving it to chance will create one far bigger and harder to clean up.

