
OpenAI: How to Design Industrial Policy for the AGI Era (Full Translation)


This is a translation of the 13-page policy white paper OpenAI just released, "Industrial Policy for the Intelligence Age: Ideas to Keep People First"; the full text follows, paragraph by paragraph.

OpenAI has just released a 13-page policy document on how industrial policy should be designed for the AGI era. A few things about it are worth noting:

First, in a formal policy document, OpenAI acknowledges that "economic gains may concentrate within a small number of firms (including OpenAI itself)."

Second, it proposes a set of concrete measures, including a public wealth fund, a four-day workweek, a right to AI, and adaptive safety nets.

Third, the safety and governance sections are equally noteworthy: a model-containment playbook (what to do once a dangerous model has been released), an incident-reporting mechanism (similar to near-miss reporting in aviation), and a call for frontier AI companies to adopt mission-aligned corporate governance structures.

Fourth, this is the policy counterpart to Sam Altman's "The Intelligence Age" blog post, moving from vision to the operational level.

Whatever you think of OpenAI as a company, the document itself is worth a read for practitioners. The translation follows for reference.

https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf


Industrial Policy for the Intelligence Age, OpenAI, April 2026

Opening

The drive to understand has always powered human progress—creating a flywheel from science to technology, from technology to discovery, and from discovery onward to more science. That inexorable forward movement led us to melt sand, add impurities, structure it with atomic precision into computer chips, run energy through those chips, and build systems capable of creating increasingly powerful artificial intelligence.

In just a few years, AI has progressed from systems capable of fast, narrow tasks to models that can perform general tasks people used to need hours to do. Now, we’re beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI. No one knows exactly how this transition will unfold. At OpenAI, we believe we should navigate it through a democratic process that gives people real power to shape the AI future they want, and prepare for a range of possible outcomes while building the capacity to adapt. That’s what this document is for—to start a conversation about governing advanced AI in ways that keep people first.

The promise of superintelligence is extraordinary. Just as electricity transformed homes, the combustion engine remade mobility, and mass production lowered the cost of essential goods, superintelligence will speed up scientific and medical breakthroughs, significantly increase productivity, lower costs for families by making essential goods cheaper, and open the way for entirely new forms of work, creativity, and entrepreneurship.

Today, AI’s impact on work is often measured by the time required for tasks that systems can reliably complete. Frontier systems have advanced from supporting tasks that take people minutes to complete, to tasks that take them hours to complete. If progress continues, we can expect systems to be capable of carrying out projects that currently take people months. This shift will reshape how organizations run, how knowledge is created, and how people find meaning and opportunity. It will also highlight the limitations of today’s policy toolkit and the need for more ambitious ideas to keep people at the center of the transition to superintelligence.

While we strongly believe that AI’s benefits will far outweigh its challenges, we are clear-eyed about the risks—of jobs and entire industries being disrupted; bad actors misusing the technology; misaligned systems evading human control; governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared.

Indeed, we highlight these risks here to raise awareness of the need for policy solutions to address them. Unless policy keeps pace with technological change, the institutions and safety nets needed to navigate this transition could fall behind. Ensuring that AI expands access, agency, and opportunity is a central challenge as we move towards superintelligence. We should aim for a future where superintelligence benefits everyone, and where we:

1. Share prosperity broadly

Share prosperity broadly. The promise of advanced AI is not just technological progress, but a higher quality of life for all. Everyone should have the opportunity to participate in the new opportunities AI creates. Living standards should rise and people should see material improvements through lower costs, better health and education, and more security and opportunity. If AI winds up controlled by, and benefiting only a few, while most people lack agency and access to AI-driven opportunity, we will have failed to deliver on its promise.

2. Mitigate risks

Mitigate risks. The transition toward superintelligence will come with serious risks—from economic disruption, to misuse in areas like cybersecurity and biology, to the loss of alignment or control over increasingly powerful systems. Without effective mitigation, people will be harmed. Avoiding these outcomes requires building new institutions, technical safeguards, and governance frameworks so that advanced systems remain safe, controllable, and aligned—reducing the risk of large-scale harm, protecting critical systems, and ensuring people can rely on AI in their daily lives. As capability scales, safety must scale with it.

3. Democratize access and agency

Democratize access and agency. As capabilities advance, some systems may need to be controlled for safety. But broad participation in the AI economy should not depend on access to the most powerful models—it should depend on access to AI that is useful, affordable, preserves people’s privacy and expands their individual agency. Avoiding a concentration of wealth and control will require ensuring that people everywhere can use AI in ways that give them real influence at work, in markets, and through democratic processes.

Why We Need a New Industrial Policy

The Case for a New Industrial Policy. Society has navigated major technological transitions before, but not without real disruption and dislocation along the way. While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education.

History shows that democratic societies can respond to technological upheaval with ambition: reimagining the social contract, mediating between capital and labor, and encouraging broad distribution of the benefits of technological progress while preserving pluralism, constitutional checks and balances, and freedom to innovate. The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone.

On this path to superintelligence, there are clear steps we need to take today. People are already concerned about what AI will mean for their lives—whether their jobs and families will be safe, and whether data centers will disrupt their communities and raise energy prices. AI data centers should pay their own way on energy so that households aren’t subsidizing them; and they should generate local jobs and tax revenue. Governments should implement common-sense AI regulation—not to entrench incumbents through regulatory capture but to protect children, mitigate national security risks, and encourage innovation.

But the magnitude of the changes we expect and the potential risks we foresee demand even more. We are entering a new phase of economic and social organization that will fundamentally reshape work, knowledge, and production. It requires not just incremental policy responses but ambitious policy ideas for tomorrow that we must start discussing today. This is the moment to start the conversation: to think boldly, explore new ideas, and collaboratively develop a new industrial policy agenda that ensures superintelligence benefits everyone.

In normal times, the case for letting markets work on their own is strong. Historically, competition, entrepreneurship, and open economic participation have lifted living standards and expanded opportunity. Capitalism, imperfect as it is, remains an effective system for translating human ingenuity into shared prosperity.

But industrial policy can play an important role when market forces alone aren’t sufficient—when new technologies create opportunities and risks that existing institutions aren’t equipped to manage. It can help translate scientific breakthroughs into scaled industries and broad-based economic growth.

A new industrial policy agenda should use government’s existing toolbox for aligning public and private activities: research funding, workforce development, market-shaping tools, and targeted regulation. But governments should not act alone. Nongovernmental institutions should pilot new approaches, measure what works, and iterate quickly, then governments should reinforce successes by aligning incentives and scaling what works through procurement, regulation, and investment. This public-private collaboration should stave off regulatory capture and centralized control, instead preserving the freedom to innovate while ensuring that the onset of superintelligence isn’t dominated by the most powerful forces in society.

We don’t have all, or even most of the answers. Different paths will require different policy responses, and no single set of tools will be enough in any scenario. But we should aim to build an AI economy that is both open and resilient through policies that expand participation, broaden access to opportunity, and ensure that society has the safeguards and institutions needed to manage risk.

This document offers initial ideas for an industrial policy agenda to keep people first during the transition to superintelligence. It is organized in two sections: 1) building an open economy with broad access, participation, and shared prosperity; and 2) building a resilient society through accountability, alignment, and management of frontier risks. OpenAI is offering these ideas to help start a broader conversation about the kinds of policies and institutions needed to navigate the transition. These ideas are intentionally early and exploratory, offered not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process.

They also focus on the United States as a starting point, but the conversation—and the solutions—must ultimately be global. The transition to superintelligence is not a distant possibility—it’s already underway, and the choices we make in the near term will shape how its benefits and risks are distributed for decades to come.

Part 1: Building an Open Economy

The promise of advanced AI is that it can benefit everyone by translating abundant intelligence into extraordinary progress. It can lower the cost of essential goods, expand opportunity, and give people more time for what is meaningful, relational, and community-building. It can help solve scientific challenges that still elude human effort: curing or preventing diseases, alleviating food scarcity, strengthening agriculture under climate stress, and speeding up breakthroughs in clean, reliable energy. The benefits of major investments in science could emerge within a single lifetime and reach communities far beyond traditional research hubs.

Yet the same capabilities making this progress possible will also disrupt jobs and reshape entire industries at a speed and scale unlike any previous technological shift. Some jobs will disappear, others will evolve, and entirely new forms of work will emerge as organizations learn how to deploy advanced AI.

These changes will not arrive evenly. Without thoughtful policies, AI could widen inequality by compounding advantages for those already positioned to capture the upside while communities that begin with fewer resources fall further behind, excluded from new tools, new industries, and new opportunities. There is also a risk that the economic gains concentrate within a small number of firms like OpenAI, even as the technology itself becomes more powerful and widely used. Workers using AI might well agree that it’s increasing their productivity without believing they’re seeing the benefits.

Maintaining an open economy that is easily accessed and participatory will require ambitious policymaking. The enclosed ideas include proposals to ensure that workers have a voice in the AI transition, since workers have deep knowledge about how work is actually performed and where AI can make work better and safer. Other proposals suggest new mechanisms to share returns from AI-driven growth by expanding access to capital, sharing economic gains more widely, and aligning the benefits of AI-enabled growth with higher living standards. And they aim to modernize economic security by helping people navigate transitions, access new opportunities, and maintain stability as work changes.

Worker perspectives

Worker perspectives. Give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety, and respects labor rights. Workers have deep knowledge about how work is actually performed and where AI can improve outcomes. They will be critical voices in understanding how AI can be used in workplaces to ensure that technological change will not only lead to improved productivity, but also lead to better jobs and stronger, safer workplaces.

Allow workers to prioritize AI deployments that improve job quality by eliminating dangerous, repetitive, administrative, or exhausting tasks so employees can focus on higher-value work. At the same time, set clear limits on harmful uses of AI that could erode job quality by intensifying workloads, narrowing autonomy, or undermining fair scheduling and pay.

AI-first entrepreneurs

AI-first entrepreneurs. Help workers turn domain expertise into new companies by using AI to handle the overhead that usually blocks entrepreneurship (e.g., accounting, marketing, procurement). Pair microgrants or revenue-based financing with practical “startup-in-a-box” supports such as model contracts and shared back-office infrastructure so that new small businesses can compete quickly. Worker organizations could serve as enablers by offering training, providing shared services, and helping workers negotiate fair commercial terms and protect IP.

Right to AI

Right to AI. Treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy, or to make sure that electricity and the internet reach remote parts of the globe. (The internet still isn’t fairly deployed across the globe or even the US; learn from this and seek to rectify those issues when it comes to AI.) Expand affordable, reliable access to foundational models—the building blocks of modern AI systems—and make a baseline level of capability broadly available, including through free or low-cost access points. Support the education, infrastructure, connectivity, and training needed to use these systems effectively, and make sure that workers, small businesses, schools, libraries, and underserved communities are not excluded from the capabilities that drive productivity and opportunity.

Modernize the tax base

Modernize the tax base. As AI reshapes work and production, the composition of economic activity may shift—expanding corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes. This could erode the tax base that funds core programs like Social Security, Medicaid, SNAP, and housing assistance—putting them at risk. Tax policy should adapt to ensure these systems remain durable.

Policymakers could rebalance the tax base by increasing reliance on capital-based revenues—such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns—and by exploring new approaches such as taxes related to automated labor. These reforms should be paired with wage-linked incentives that encourage firms to retain, retrain, and invest in workers, similar to existing R&D-style credits. Together, these changes would help stabilize funding for essential programs while supporting workforce transitions in an AI-driven economy.

Public Wealth Fund

Public Wealth Fund. Create a Public Wealth Fund that provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth. While tax reforms help ensure governments can continue to fund essential programs, a Public Wealth Fund is designed to ensure that people directly share in the upside of that growth.

Policymakers and AI companies should work together to determine how to best seed the Fund, which could invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI. Returns from the Fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth, regardless of their starting wealth or access to capital.

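As a toy illustration of the distribution mechanism described above, a pro-rata citizen dividend could be computed as below. All figures, names, and the payout design are hypothetical placeholders, not numbers or a mechanism proposed in the document:

```python
# Toy sketch of a Public Wealth Fund dividend: returns on the fund's
# diversified assets are paid out equally to every citizen, regardless
# of starting wealth or access to capital. All inputs are hypothetical.

def citizen_dividend(fund_value: float, annual_return: float,
                     payout_rate: float, population: int) -> float:
    """Equal per-citizen payout from one year of fund returns."""
    returns = fund_value * annual_return  # gains generated by the fund
    payout_pool = returns * payout_rate   # share of gains distributed
    return payout_pool / population       # identical stake for everyone

# e.g. a $2T fund returning 7%, paying out half of gains to 330M citizens
per_citizen = citizen_dividend(2e12, 0.07, 0.5, 330_000_000)
```

The point of the sketch is only that the payout is independent of each recipient's existing portfolio, which is what distinguishes the Fund from market participation alone.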
Give every citizen a stake in AI-driven economic growth


Accelerate grid expansion

Accelerate grid expansion. Establish new public-private partnership models to finance and accelerate the expansion of energy infrastructure required to power AI. Use these models to address financing constraints, permitting delays, and siting risks that have limited high-voltage interstate and interregional transmission—and to deliver infrastructure at speed and scale, limit taxpayer risk, and share the upside with the public. Partnerships should be structured to minimize taxpayer exposure to commercial losses and ensure that expanded energy infrastructure translates into lower energy costs for households and businesses.

Efficiency dividends

Efficiency dividends. Convert efficiency gains from AI into durable improvements in workers’ benefits when routine workload declines and operating costs fall, including incentivizing companies to increase retirement matches or contributions, cover a larger share of healthcare costs, and subsidize child and eldercare.

Incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both.

Adaptive safety nets

Adaptive safety nets that work for everyone. Make sure the existing safety net works reliably, quickly, and at scale, because if the transition to superintelligence is going to benefit everyone, the systems designed to provide economic and health security need to deliver without delay or gaps. That starts with unemployment insurance, SNAP, Social Security, Medicaid, and Medicare that are not just in place but fully functional, accessible, and responsive to the realities people will face during the transition.

Next, invest in clear, real-time measurement of how AI is affecting work, wages, job quality, and sectoral dynamics, using public metrics such as unemployment rates and indicators of regional or industry-specific displacement. These systems should provide policymakers with timely visibility into where disruption is occurring and how severe it is.

Then, define a package of temporary, expanded safety nets (e.g., expanded or more flexible unemployment benefits, fast cash assistance, wage insurance, training vouchers) that activates automatically when these metrics exceed pre-defined thresholds. When disruption rises above those levels, support would scale up; as conditions stabilize, it would phase out. This ensures that assistance is targeted, time-bound, and proportional to the scale of disruption, and also avoids a permanent expansion of programs.

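The trigger mechanism described above amounts to a small state machine: support activates when a displacement metric crosses a pre-defined threshold and phases out once conditions stabilize. A minimal sketch, in which the metric name and both thresholds are hypothetical placeholders rather than figures from the document:

```python
# Sketch of an automatically triggered safety-net package. A lower exit
# threshold than entry threshold (hysteresis) keeps support from
# flapping on and off around a single cutoff. Thresholds are illustrative.

ACTIVATE_AT = 0.08    # hypothetical displacement rate that triggers support
DEACTIVATE_AT = 0.05  # lower exit threshold for phasing support out

def update_safety_net(active: bool, displacement_rate: float) -> bool:
    """Return whether the temporary benefit package should be active."""
    if not active and displacement_rate >= ACTIVATE_AT:
        return True    # disruption exceeds threshold: scale support up
    if active and displacement_rate <= DEACTIVATE_AT:
        return False   # conditions stabilized: phase support out
    return active      # otherwise keep the current state

# A rising-then-falling displacement series activates, then retires, support
states = []
active = False
for rate in [0.04, 0.09, 0.07, 0.06, 0.04]:
    active = update_safety_net(active, rate)
    states.append(active)
# states == [False, True, True, True, False]
```

This is what makes the assistance time-bound and proportional: it is keyed to the measured scale of disruption rather than to a permanent program expansion.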
Portable benefits

Portable benefits. Over time, build benefit systems that are not tied to a single employer by expanding access to healthcare, retirement savings, and skills training through portable accounts that follow individuals across jobs, industries, education programs, and entrepreneurial ventures. Public programs can decouple key benefits from employment status by expanding access to retirement and training support regardless of where or how someone works. Implementation can run through portable benefit platforms that pool contributions from multiple sources and route them into standardized accounts attached to the individual, not the job. Retirement systems can also be modernized through pooled structures that allow workers to accrue benefits continuously across employers, reducing gaps and preserving continuity over time.

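The platform described above, pooling contributions from multiple sources into a standardized account attached to the individual, can be sketched as a simple data structure. The account fields, category names, and source identifiers are illustrative assumptions, not a specification:

```python
# Sketch of a portable-benefits account: contributions from any source
# (employers, gig platforms, public programs) are routed into one
# account keyed to the person, not the job, so benefits accrue
# continuously across employers. Field names are hypothetical.

from collections import defaultdict

class PortableAccount:
    def __init__(self, person_id: str):
        self.person_id = person_id
        self.balances = defaultdict(float)  # e.g. "retirement", "training"
        self.history = []  # (source, category, amount) audit trail

    def contribute(self, source: str, category: str, amount: float) -> None:
        """Route a contribution from any source into this person's account."""
        self.balances[category] += amount
        self.history.append((source, category, amount))

# The same account accrues benefits across jobs and programs
acct = PortableAccount("worker-123")
acct.contribute("employer-a", "retirement", 250.0)
acct.contribute("gig-platform-b", "retirement", 40.0)
acct.contribute("state-program", "training", 500.0)
```

The design choice the sketch highlights is the decoupling: no balance is owned by an employer, so changing jobs changes only where future contributions come from, not where they accumulate.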
Pathways into human-centered work

Pathways into human-centered work. Expand opportunities in the care and connection economy—childcare, eldercare, education, healthcare, and community services—as pathways for workers displaced by AI. Although AI can enhance these roles by reducing administrative burdens and enabling greater personalization, human connection will remain an essential part of the profession. As AI reshapes the labor market, these sectors can absorb transitioning workers if supported with investments in training, wages, and job quality. Governments can build training pipelines, support transitions into care roles, and incentivize employers to raise pay and improve conditions in fields facing chronic shortages.

These initiatives could be complemented with a family benefit that recognizes caregiving as economically valuable work and supports evolving work patterns. This benefit could help cover childcare, education, and healthcare while remaining compatible with part-time work, retraining, or entrepreneurship. Together, these efforts would expand access to care, strengthen communities, and create meaningful, human-centered work.

Accelerate scientific discovery and scale the benefits

Accelerate scientific discovery and scale the benefits. Build a distributed network of AI-enabled laboratories to dramatically expand the capacity to test and validate AI-generated hypotheses at scale. These labs would integrate AI systems directly into experimental workflows by automating routine processes, capturing high-quality data, and enabling rapid iteration between hypothesis generation and testing.

Then, build the physical systems and infrastructure needed to translate validated discoveries into real-world use at scale. This includes expanding the capacity of organizations to deploy new technologies, upgrading facilities and systems required for implementation, and aligning financing and incentives to support adoption. It also includes a sustained investment in people: training scientists, technicians, and operators to contribute to AI-enabled science. These investments ensure that breakthroughs move beyond laboratories and into widespread use, while strengthening the workforce and operational systems required to build, maintain, and run the infrastructure that supports AI-enabled discovery. Both laboratory and production infrastructure should be deployed broadly across universities, community colleges, hospitals, and regional research hubs, not concentrated in a small number of elite institutions.

Part 2: Building a Resilient Society

As AI systems become more capable and more embedded across the economy, they may introduce new vulnerabilities alongside new abundance. Some systems may be misused for cyber or biological harm. Others may create new pressures on social and emotional well-being, including for young people, if deployed without adequate safeguards. AI systems may act in ways that are misaligned with human intent or operate beyond meaningful human oversight. And as advanced AI reshapes how people, organizations, and governments operate, it may place new strain on the institutions and norms that societies rely on to remain stable, secure, and free.

We should be clear-eyed about the resilience required here. These new risks won’t be isolated or suitable for addressing one at a time—AI will reshape how work is performed, how decisions are made, how organizations operate, and how states interact. Building resilience therefore means making sure people and institutions can adapt quickly, maintain meaningful agency over how these systems are used, and preserve broadly shared prosperity even as economic and social structures evolve.

Over the past several years, leading AI developers including OpenAI have focused heavily on upstream safeguards: development of global standards, transparency around evaluations, mitigations, and risks, and investments in model testing, red teaming, and usage policies designed to identify and mitigate risks before deployment. Policymakers have also focused here, codifying requirements in the EU AI Act and in US state-based regulation. These upstream efforts should continue.

But as AI systems become more capable and more widely deployed, resilience will also depend upon what happens after deployment—when systems must be monitored in real time, operate under uncertainty, and integrate into institutions not designed for agentic workflows.

This is not a new challenge. As electricity spread, societies built safety standards and regulatory institutions. As automobiles transformed mobility, safety systems reduced risk while preserving freedom of movement. In aviation, continuous monitoring and coordinated response systems made flying one of the safest forms of transportation. In food and medicine, testing and post-market surveillance helped ensure safety in everyday use. In each case, resilience was not automatic—it was built with the luxury of time.

這不是一個新挑戰。電力普及時,社會建立了安全標準和監管機構。汽車改變出行時,安全系統降低了風險同時保留了出行自由。航空領域,持續監控和協調響應系統使飛行成為最安全的交通方式之一。食品和藥品領域,測試和上市后監測幫助確保了日常使用中的安全。在每種情況下,韌性都不是自動產生的,而是在時間相對充裕的條件下逐步建設起來的

As we move toward superintelligence, building a resilient society will require a similar but speedier effort that kicks into gear now. The ideas below are a slate of ambitious approaches to building a more resilient society. They focus on building and scaling safety systems that operate in real-world conditions by establishing mechanisms for trust, accountability, and auditing. They suggest opportunities for strengthening governance so that advanced AI remains controllable, transparent, and aligned with democratic values. And they suggest approaches to improve coordination across companies, governments, and countries so that risks can be identified early, information can be shared, and responses can be executed quickly when needed. Together, these proposals extend important safety work already underway and represent initial ideas to keep AI safe, governable, and aligned with democratic values.

向超級智能邁進的過程中,建設有韌性的社會將需要類似但更快速的努力,而且現在就要啟動。以下是一系列建設更有韌性社會的大膽方案。它們聚焦于通過建立信任、問責和審計機制來構建和擴展在真實世界條件下運行的安全系統。它們提出了加強治理的機會,使高級 AI 保持可控、透明,并與民主價值一致。它們還提出了改善公司、政府和國家之間協調的方法,以便盡早識別風險、共享信息,并在需要時快速執行應對。這些提案共同延續了已經在進行中的重要安全工作,代表了保持 AI 安全、可治理和與民主價值一致的初步想法

應對新興風險的安全系統

Safety systems for emerging risks. Research and develop tools that protect models, detect risks, and prevent misuse across high-consequence domains, including cyber and biological risks as well as other pathways to large-scale harm. Expand the use of advanced AI systems for threat modeling, red teaming, net assessments, and robustness testing to identify and anticipate novel risks early and inform mitigation strategies. Develop and scale complementary protective systems; for example, rapid identification and production of medical countermeasures in the event of an outbreak and expanded strategic stockpiles to prepare for future risks. Then, catalyze competitive safety markets by creating sustained demand for these capabilities through procurement, standards, insurance frameworks, and advance-purchase commitments. Over time, this approach can make safeguards an output of innovation and competition, ensuring that defenses improve as quickly as the risks they are designed to address.

研發保護模型、檢測風險和防止濫用的工具,覆蓋高后果領域,包括網絡和生物風險以及其他大規模傷害途徑。擴大高級 AI 系統在威脅建模、紅隊、凈評估和魯棒性測試中的使用,以盡早識別和預測新型風險,并為緩解策略提供依據。開發和擴展互補保護系統,比如在疫情爆發時快速識別和生產醫療對策,以及擴大戰略儲備以應對未來風險。然后,通過采購、標準、保險框架和預購承諾創造對這些能力的持續需求,催化競爭性的安全市場。隨著時間推移,這種方法可以使保障措施成為創新和競爭的產出,確保防御措施與其所針對的風險同步改進

AI 信任棧

AI trust stack. Research and develop systems that help people trust and verify AI systems, the content they produce, and the actions they take—especially as these systems take on more real-world responsibilities. Advance the development of provenance and verification standards and tools that can build trust in AI systems while preserving privacy. This could include enabling secure, verifiable signatures for actions such as generating content or issuing instructions, and developing privacy-preserving logging and audit systems capable of supporting investigation and accountability without enabling pervasive surveillance.

研發相應系統,幫助人們信任和驗證 AI 系統、其產出的內容及其采取的行動,尤其是當這些系統承擔更多現實世界職責時。推進溯源和驗證標準及工具的開發,在保護隱私的同時建立對 AI 系統的信任。這可以包括為生成內容或發出指令等行為提供安全、可驗證的簽名,以及開發能支持調查和問責但不會導致普遍監控的隱私保護日志和審計系統

These types of solutions should capture key information about system behavior and use while minimizing the collection of sensitive data, and be designed to support investigation or intervention under clearly defined legal or safety conditions. This work could also include developing and testing governance frameworks that clarify responsibility within organizations, including how accountability could be assigned to specific roles and how delegation, monitoring, and escalation processes could function as systems become more capable. Over time, these efforts could establish a foundation for accountability by building trust in AI interactions and helping ensure that when harm occurs, responsibility can be appropriately allocated.

這類解決方案應在最小化敏感數據收集的同時捕獲關于系統行為和使用的關鍵信息,并被設計為在明確定義的法律或安全條件下支持調查或干預。這項工作還可以包括開發和測試治理框架,明確組織內部的責任,包括如何將問責分配給特定角色,以及隨著系統變得更強大,委托、監控和升級流程如何運作。隨著時間推移,這些努力可以為問責奠定基礎:在 AI 交互中建立信任,并幫助確保當傷害發生時,責任能被適當分配
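上述「可驗證簽名 + 隱私保護日志」的思路,可以用一個極簡的 Python 草圖來示意。以下代碼是假設性的示例(密鑰、字段名均為虛構,實際系統遠比這復雜),只演示兩點:對一條 AI 行為記錄生成可驗證的簽名,以及只記錄敏感內容的哈希而非原文:

```python
import hashlib
import hmac
import json

# 假設的簽名密鑰,實際應由獨立的密鑰管理設施托管
SECRET_KEY = b"demo-signing-key"

def log_action(actor: str, action: str, content: str) -> dict:
    """生成一條帶簽名的審計日志條目(敏感內容僅存哈希)"""
    entry = {
        "actor": actor,
        "action": action,
        # 只保存內容摘要:支持事后比對,但不暴露原文
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify(entry: dict) -> bool:
    """驗證日志條目自簽名以來未被篡改"""
    entry = dict(entry)
    sig = entry.pop("signature")
    payload = json.dumps(entry, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

實際部署中,這類日志還需要考慮時間戳、防重放、密鑰輪換與可審計的訪問控制等問題,這里僅示意「可驗證 + 最小化收集」這兩條設計原則。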

審計制度

Auditing regimes. Strengthen institutions such as the Center for AI Standards and Innovation (CAISI) to develop auditing standards for frontier AI risks in coordination with national security agencies. Use tools such as government procurement, advance-purchase commitments, insurance frameworks, and standards-setting to create and scale a competitive market of auditors and evaluators capable of assessing AI systems and products for safety and security risks, building auditing capacity alongside the technology. Standards should be designed for international adoption to reduce fragmentation and avoid creating unnecessary compliance burdens for small companies, as well as those operating across jurisdictions.

強化 AI 標準與創新中心(CAISI)等機構,與國家安全機構協調制定前沿 AI 風險的審計標準。利用政府采購、預購承諾、保險框架和標準制定等工具,創建和擴大能夠評估 AI 系統和產品安全與安保風險的審計師和評估師競爭性市場,使審計能力與技術同步增長。標準應為國際采納而設計,減少碎片化,避免為小公司和跨轄區運營的公司造成不必要的合規負擔

As we progress toward superintelligence, there may come a point where a narrow set of highly capable models—particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks—require stronger controls, including pre- and post-deployment audits using the standards developed in advance. Apply these requirements only to a small number of companies and the most advanced models, preserving a vibrant ecosystem of less powerful systems and the startups building on them. This approach maintains broad access to general-purpose AI while applying targeted safeguards where failures could create the greatest harm, avoiding unnecessary barriers that could limit competition or enable regulatory capture.

隨著向超級智能推進,可能會到達一個節點:少數能力極強的模型(特別是那些可能實質性加劇化學、生物、放射、核或網絡風險的模型)需要更嚴格的控制,包括按照預先制定的標準進行部署前和部署后審計。這些要求僅適用于少數公司和最先進的模型,保留由較弱系統及基于它們構建的初創企業組成的活躍生態。這種方法保持了對通用 AI 的廣泛訪問,同時在失敗可能造成最大傷害的地方實施有針對性的保障,避免可能限制競爭或導致監管捕獲的不必要壁壘
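「僅對少數最先進模型適用審計要求」這一思路,可以用一個極簡的篩選規則來示意。以下 Python 草圖中的兩個閾值(訓練算力、高危能力評分)純屬假設,僅用于說明「窄范圍觸發」的設計意圖,并非任何現行法規或 OpenAI 的實際標準:

```python
# 假設的觸發閾值,僅作示意
AUDIT_COMPUTE_FLOPS = 1e26   # 虛構的訓練算力門檻
AUDIT_RISK_SCORE = 0.8       # 虛構的 CBRN/網絡風險評分門檻(0~1)

def requires_audit(training_flops: float, cbrn_cyber_risk_score: float) -> bool:
    """判斷模型是否落入需要部署前/部署后審計的窄范圍。

    只有超過算力門檻或高危能力評分門檻的模型才觸發審計,
    其余模型和初創生態不受影響。
    """
    return (training_flops >= AUDIT_COMPUTE_FLOPS
            or cbrn_cyber_risk_score >= AUDIT_RISK_SCORE)
```

這種「按能力分層」的規則結構,與文中「對通用 AI 保持廣泛訪問、只在傷害最大處設防」的主張相一致。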

模型遏制手冊

Model-containment playbooks. Develop and test coordinated playbooks to contain dangerous AI systems once they have been released into the world. As AI capabilities advance, societies may face scenarios where dangerous systems cannot be easily recalled—because model weights have been released, developers are unwilling or unable to limit access to dangerous capabilities, or the systems are autonomous and capable of replicating themselves. In these cases, the challenge is containment: limiting the spread of dangerous capabilities, reducing harm, and coordinating responses under real-world constraints. Experience from other high-consequence domains, such as cybersecurity and public health, shows that even when full containment is not possible, coordinated action can still meaningfully reduce impact.

制定和測試協調手冊,在危險 AI 系統已經釋放到世界后進行遏制。隨著 AI 能力推進,社會可能面臨危險系統無法輕易召回的情景:模型權重已經公開,開發者不愿或無法限制對危險能力的訪問,或系統是自主的且能自我復制。在這些情況下,挑戰是遏制:限制危險能力的擴散,減少傷害,在現實世界約束下協調響應。網絡安全和公共衛生等其他高后果領域的經驗表明,即使完全遏制不可能,協調行動仍能有意義地減少影響

使命對齊的公司治理

Mission-aligned corporate governance. Frontier AI companies should adopt governance structures that embed public-interest accountability into decision-making, such as Public Benefit Corporations with mission-aligned governance. These structures should include explicit commitments to ensure that the benefits of AI are broadly shared, including through significant, long-term philanthropic or charitable giving. At the same time, harden frontier systems against corporate or insider capture by securing model weights and training infrastructure, auditing models for manipulative behaviors or hidden loyalties, and monitoring high-risk deployments so no individual or internal faction can quietly use AI systems to concentrate power.

前沿 AI 公司應采用將公共利益問責嵌入決策的治理結構,如采用使命對齊治理的公共利益公司(Public Benefit Corporation)。這些結構應包含明確承諾,確保 AI 的收益廣泛共享,包括通過重大的長期慈善捐贈。同時,通過保護模型權重和訓練基礎設施、審計模型是否存在操縱行為或隱藏的效忠傾向、監控高風險部署,使前沿系統免受企業或內部人員捕獲,確保沒有任何個人或內部派系能悄悄利用 AI 系統來集中權力

政府使用 AI 的護欄

Guardrails for government use. Have policymakers establish clear rules for how governments can and cannot use AI, with especially high standards for reliability, alignment, and safety. These standards should be codified in law and reinforced through technical safeguards. At the same time, use AI to strengthen democratic accountability. As more government decisions are made through AI-assisted workflows, these systems will create clearer digital records of government reasoning and action that can be logged alongside other public records. With appropriate safeguards, oversight institutions such as inspectors general, congressional committees, and courts could use AI-enabled auditing tools to detect abuse, identify harms, and improve accountability at scale.

由政策制定者建立關于政府如何使用和不使用 AI 的明確規則,對可靠性、對齊性和安全性設置特別高的標準。這些標準應被編入法律并通過技術保障加以強化。同時,利用 AI 加強民主問責。隨著更多政府決策通過 AI 輔助工作流做出,這些系統將創建更清晰的政府推理和行動的數字記錄,可以與其他公共記錄一起歸檔。在適當的保障下,監察長、國會委員會和法院等監督機構可以使用 AI 賦能的審計工具來檢測濫用、識別傷害,并大規模提升問責能力

Also, modernize transparency frameworks (including the Freedom of Information Act) to allow citizens and watchdog organizations to use AI to review targeted questions about government actions while protecting sensitive information. This could include clarifying when AI-interaction logs and agentic action logs constitute federal records that must be retained for specified periods.

此外,現代化透明度框架(包括信息自由法),允許公民和監督組織使用 AI 審查關于政府行為的針對性問題,同時保護敏感信息。這可以包括明確 AI 交互日志和 Agent 行動日志何時構成必須保留指定期限的聯邦記錄

公眾意見輸入機制

Mechanisms for public input. Create structured ways for public input so that alignment isn’t defined only by engineers or executives behind closed doors. As advanced AI makes more decisions that affect people’s lives, societies need shared clarity about what these systems are supposed to do, what values should guide them, and how well they are performing. Make alignment more democratic, legible, and accountable through transparent specifications, evaluation frameworks, and representative input processes. Developers should publish model specifications that describe how systems are intended to behave and share information about how those systems are evaluated. Governments and public institutions should help shape these standards by anchoring them in democratic laws and values, while establishing mechanisms for representative public input to be considered alongside traditional business stakeholders. Together, these approaches help ensure that the advancement of AI reflects the perspectives of the societies that must live with its consequences.

創建結構化的公眾意見輸入渠道,使對齊不僅僅由工程師或高管在閉門會議中定義。隨著高級 AI 做出越來越多影響人們生活的決策,社會需要就這些系統應該做什么、什么價值觀應指導它們、以及它們表現如何達成共同的清晰認知。通過透明的規格說明、評估框架和代表性輸入流程,使對齊更加民主、可讀和可問責。開發者應發布描述系統預期行為的模型規格書,并分享系統評估的信息。政府和公共機構應通過將這些標準錨定在民主法律和價值觀中來幫助塑造它們,同時建立機制讓代表性的公眾意見與傳統商業利益相關者一起被考慮。這些方法共同幫助確保 AI 的發展反映必須與其后果共存的社會的視角

事件報告

Incident reporting. Establish a mechanism for companies to share information about incidents, misuse, and near-misses with a designated public authority. The system should emphasize learning and prevention over punishment, with appropriately scoped public disclosures that ensure transparency and democratic oversight while protecting sensitive technical, national security, and competitive information. Near-miss reporting could include cases where models exhibited concerning internal reasoning, unexpected capabilities, or other warning signals—even if safeguards ultimately prevented harm—so the ecosystem can learn from close calls before they become real incidents.

建立企業向指定公共機構共享事件、濫用和未遂事件信息的機制。該系統應強調學習和預防而非懲罰,通過適當范圍的公開披露確保透明和民主監督,同時保護敏感的技術、國家安全和商業競爭信息。未遂事件報告可以包括模型表現出令人擔憂的內部推理、意外能力或其他警告信號的案例,即使保障措施最終防止了傷害,生態系統也可以在險情變成真正事故之前從中學習
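文中的「未遂事件報告」機制,可以用一個假設性的報告結構來示意。以下 Python 草圖中的字段名與分類均為虛構示例(并非任何現行報告標準),僅演示「強調學習而非懲罰、公開披露時保護敏感信息」這一設計意圖:

```python
from dataclasses import dataclass, field, asdict
from enum import Enum
import json

class IncidentType(str, Enum):
    MISUSE = "misuse"                      # 實際濫用
    NEAR_MISS = "near_miss"                # 未遂事件:保障措施最終阻止了傷害
    CAPABILITY = "unexpected_capability"   # 意外能力

@dataclass
class IncidentReport:
    incident_type: IncidentType
    summary: str                           # 面向公開披露的概述
    warning_signals: list = field(default_factory=list)  # 如:令人擔憂的內部推理
    harm_prevented: bool = True            # 保障措施是否最終生效
    sensitive: bool = False                # 是否含技術/國安/商業敏感信息

    def public_disclosure(self) -> str:
        """生成適當范圍的公開披露:敏感報告只公開類型與概述"""
        data = asdict(self)
        if self.sensitive:
            data = {"incident_type": data["incident_type"],
                    "summary": data["summary"]}
        return json.dumps(data, ensure_ascii=False)
```

類似航空業的 near-miss 報告,這種結構讓監管方拿到完整信號,而對外披露時可以按敏感級別收窄范圍。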

國際信息共享

International information-sharing around AI capabilities, risks, and mitigations. Strengthen national evaluation institutions as the foundation for international coordination, beginning with expanding the role of the CAISI as a trusted technical body for evaluating frontier systems, assessing safeguards, and informing government understanding of advanced AI capabilities. Building on this foundation, develop a global network of AI Institutes that collaborate through shared protocols for information exchange, joint evaluations, and coordinated mitigation measures.

圍繞 AI 能力、風險和緩解措施的國際信息共享。以強化國家評估機構作為國際協調的基礎,首先擴大 CAISI 作為評估前沿系統、評估保障措施和促進政府理解高級 AI 能力的可信技術機構的角色。在此基礎上,發展一個全球 AI 研究所網絡,通過共享的信息交換協議、聯合評估和協調緩解措施進行合作

Over time, this network could evolve into an international framework akin to the other multilateral institutions focused on safety and standards, one that gives trusted public authorities visibility into frontier AI development; and creates secure cross-lab and cross-country channels for sharing evaluation results, alignment findings, and emerging risks; and likewise supports communicating during crises. To enable effective collaboration, policymakers should ensure that companies can share safety- and risk-related information through these channels without running afoul of antitrust or competition constraints, using clear safe harbors and narrowly scoped information-sharing rules. This system should expand beyond a narrow focus on national security to include a broader range of societal risks, including impacts on youth safety and well-being.

隨著時間推移,這一網絡可以演變為類似于其他專注于安全和標準的多邊機構的國際框架:給可信的公共機構提供對前沿 AI 開發的可見性,創建安全的跨實驗室和跨國渠道用于分享評估結果、對齊發現和新興風險,并同樣支持危機期間的溝通。為實現有效合作,政策制定者應確保企業能通過這些渠道分享安全和風險相關信息,而不違反反壟斷或競爭約束,辦法是使用明確的安全港和范圍嚴格限定的信息共享規則。該系統應超越對國家安全的狹隘關注,納入更廣泛的社會風險,包括對青少年安全和福祉的影響

開啟對話

We offer these ideas not as fixed answers but as a starting point for a broader conversation about how to ensure that AI benefits everyone. That conversation should be inclusive and ongoing—engaging governments, companies, researchers, civil society, communities, and families—and should be mediated through democratic processes that give people real power to shape the AI future they want. It also needs to expand globally—bringing in the perspectives of cultures, societies, and governments around the world.

我們提出這些想法不是作為固定答案,而是作為關于如何確保 AI 惠及所有人的更廣泛對話的起點。這場對話應該是包容的和持續的,納入政府、企業、研究者、公民社會、社區和家庭,并應通過賦予人們真正權力來塑造他們想要的 AI 未來的民主程序來進行。它也需要擴展到全球,引入世界各地文化、社會和政府的視角

These ideas are our first contribution to that effort, but only the beginning. Progress will depend on continued iteration, experimentation, and collaboration across institutions and sectors. To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.

這些想法是我們對這一努力的第一份貢獻,但只是開始。進展將取決于跨機構和跨部門的持續迭代、實驗和合作。為維持勢頭,OpenAI 正在:(1)通過 newindustrialpolicy@openai.com 收集和組織反饋;(2)設立試點項目,提供研究獎學金和最高 10 萬美元的專項研究資助,以及最高 100 萬美元的 API 額度,資助基于這些及相關政策構想的研究;(3)在 5 月即將于華盛頓特區開幕的新 OpenAI Workshop 召集討論

https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf
