
Should the Training of AI Models More Powerful Than GPT-4 Be Paused?


The release of GPT-4 has once again set off a wave of AI adoption around the world. At the same time, a number of well-known computer scientists and technology industry figures have voiced concern about the breakneck pace of AI development, which carries unpredictable risks for human society.

On March 29, Beijing time, Tesla CEO Elon Musk, Turing Award winner Yoshua Bengio, Apple co-founder Steve Wozniak, and Sapiens author Yuval Noah Harari, among others, signed an open letter calling on all AI labs worldwide to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, so that humanity can manage the risks effectively. If commercial AI research organizations cannot pause their development quickly, the letter argues, governments should step in and enforce a moratorium through effective regulation.

The open letter lays out the case for pausing AI model training in detail: AI technology has grown powerful enough to compete with humans in certain areas and will bring profound changes to human society. Everyone must therefore weigh AI's potential risks: information channels flooded with fake news and propaganda, large numbers of jobs automated away, and AI that may one day be smarter and more capable than humans, costing us control of human civilization. Powerful AI systems should be developed and trained further only once we are confident that their effects will be positive and their risks manageable.

As of 11 a.m. today, the open letter had collected 1,344 signatures.


Are the major technology companies really advancing AI too quickly, to the point of threatening humanity's survival? In fact, OpenAI founder Sam Altman has himself expressed unease about ChatGPT's runaway popularity: he is "a little bit scared" of how AI could affect the labor market, elections, and the spread of disinformation, and he believes that regulating AI requires both government and society, with user feedback and rule-making playing an important role in curbing AI's negative effects.

If even the builders of AI systems cannot fully understand, predict, and effectively control their risks, and the corresponding safety planning and management have not kept pace, then current AI research and development may indeed have descended into an "out-of-control" race.

Appendix: Full Text of the Open Letter

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
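Of the governance mechanisms the letter calls for, provenance and watermarking are among the most concrete from an engineering standpoint. As a rough illustration of the idea only, the toy Python sketch below implements a statistical "green-list" watermark in the spirit of Kirchenbauer et al. (2023): generation is softly biased toward a pseudorandom subset of the vocabulary derived from the preceding token, and a detector checks whether that subset occurs far more often than chance. The vocabulary, parameters, and function names here are all illustrative assumptions, not any lab's actual system.

```python
# Toy sketch of a statistical "green-list" text watermark.
# Everything here (vocabulary, parameters, names) is an illustrative
# assumption for this article, not a production system.
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in for a tokenizer vocabulary
GREEN_FRACTION = 0.5                      # share of vocab marked "green" each step
BIAS = 0.9                                # probability the generator picks a green token

def green_list(prev_token: str) -> set:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))

def generate(n_tokens: int, seed: int = 42) -> list:
    """Generate toy 'model output' that is softly biased toward green tokens."""
    rng = random.Random(seed)
    out = ["tok0"]
    for _ in range(n_tokens):
        greens = green_list(out[-1])
        if rng.random() < BIAS:
            out.append(rng.choice(sorted(greens)))  # watermark-preferred token
        else:
            out.append(rng.choice(VOCAB))           # occasional unbiased token
    return out

def green_rate(tokens: list) -> float:
    """Detection statistic: fraction of tokens in their step's green list."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

if __name__ == "__main__":
    marked = generate(300)
    rng = random.Random(7)
    unmarked = ["tok0"] + [rng.choice(VOCAB) for _ in range(300)]
    print(f"watermarked green rate: {green_rate(marked):.2f}")   # ~0.95
    print(f"unmarked green rate:    {green_rate(unmarked):.2f}") # ~0.50
```

In this sketch, watermarked text lands in its green lists roughly 95% of the time versus a 50% base rate for unmarked text; that statistical gap is what a provenance detector tests for, and it is the kind of mechanism the letter's call for watermarking systems envisions.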

Editor: Wu Xiaoyan | Source: 安全牛