Will Superintelligent AI Be Uncontrollable and Bring Disaster to Humanity?


Video description: AI technology has been advancing rapidly in recent years, with applications in business, academia, the military, and the daily lives of the general public. However, some AI experts believe that a future superintelligent AI could pose a catastrophic risk to humanity. In this episode of TM說, I will explain the reasoning behind these experts' warnings, and then offer my own analysis of their views.

—————————————————————

Please subscribe and follow:

Main YouTube channel 「TM說」: https://www.youtube.com/@TMSyut
Second YouTube channel 「TM說2」: https://www.youtube.com/@TMSyut2
X:https://www.twitter.com/TMSyut
Mastodon:https://www.mastodon.social/@TMShuet

—————————————————————

Business inquiries: tmsyut@proton.me

—————————————————————

References for this video:

AI: Unexplainable, Unpredictable, Uncontrollable, Roman V. Yampolskiy, CRC Press, 23 Feb. 2024: https://www.routledge.com/AI-Unexplainable-Unpredictable-Uncontrollable/Yampolskiy/p/book/9781032576268

There is no Proof that AI can be Controlled, According to Extensive Survey, Taylor & Francis, 12 Feb. 2024: https://newsroom.taylorandfrancisgroup.com/there-is-no-proof-that-ai-can-be-controlled-according-to-extensive-survey/

Building Safer AGI by introducing Artificial Stupidity, Michaël Trazzi & Roman V. Yampolskiy, 11 Aug. 2018: https://arxiv.org/abs/1808.03644

HUMANS AND INTELLIGENT MACHINES — CO-EVOLUTION, FUSION OR REPLACEMENT?, David Pearce, 2021: https://www.biointelligence-explosion.com/parable.html

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis, Adi Robertson, The Verge, 22 Feb. 2024: https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical

Pause Giant AI Experiments: An Open Letter, Future of Life Institute, 22 Mar. 2023: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

AI Is as Risky as Pandemics and Nuclear War, Top CEOs Say, Urging Global Cooperation, Billy Perrigo, TIME, 30 May 2023: https://time.com/6283386/ai-risk-openai-deepmind-letter/

The Godfather of A.I. Has Some Regrets, Sabrina Tavernise and others, The New York Times, 30 May 2023: https://www.nytimes.com/2023/05/30/podcasts/the-daily/chatgpt-hinton-ai.html

How Rogue AIs may Arise, Yoshua Bengio, 22 May 2023: https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, Eliezer Yudkowsky, TIME, 29 Mar. 2023: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, Oxford University Press, 5 May 2015

Gödel, Escher, Bach, and AI, Douglas Hofstadter, The Atlantic, 8 Jul. 2023: https://www.theatlantic.com/ideas/archive/2023/07/godel-escher-bach-geb-ai/674589/

On the Controllability of Artificial Intelligence: An Analysis of Limitations, Roman V. Yampolskiy, Journal of Cyber Security and Mobility, 2022 Vol. 11 Iss. 3: https://journals.riverpublishers.com/index.php/JCSANDM/article/view/16219

Is Artificial Intelligence Permanently Inscrutable?, Aaron M. Bornstein, Nautilus, 29 Aug. 2016: https://nautil.us/is-artificial-intelligence-permanently-inscrutable-236088/

What are the ultimate limits to computational techniques: verifier theory and unverifiability, Roman V. Yampolskiy, Physica Scripta, Vol. 92 No. 9, 28 Jul. 2017: https://iopscience.iop.org/article/10.1088/1402-4896/aa7ca8/meta

The Last Invention of Man: How AI might take over the world, Max Tegmark, Nautilus, 25 Sep. 2017: https://nautil.us/the-last-invention-of-man-236814/

Would You Survive a Merger with AI? The cost of brain enhancement may be your identity, Susan Schneider, Nautilus, 2 Oct. 2019: https://nautil.us/you-wont-survive-a-merger-with-ai-237563/

“Will AI Destroy Us?”: Roundtable with Coleman Hughes, Eliezer Yudkowsky, Gary Marcus, and me, Scott Aaronson, Shtetl-Optimized: https://scottaaronson.blog/?p=7431

Ingenious: Scott Aaronson - From computational complexity to quantum mechanics, Michael Segal, Nautilus, 29 Jan. 2015: https://nautil.us/ingenious-scott-aaronson-235267/

Superintelligent, Amoral, and Out of Control - AI is no longer playing games. Are we prepared?, Toby Ord, Nautilus, 22 Apr. 2020: https://nautil.us/superintelligent-amoral-and-out-of-control-237785/

https://twitter.com/ylecun/status/1671926268122611727

The Monk Who Thinks the World Is Ending - Can Buddhism Fix AI?, Annie Lowrey, The Atlantic, 25 Jun. 2023: https://www.theatlantic.com/ideas/archive/2023/06/buddhist-monks-vermont-ai-apocalypse/674501/

The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines, Hugo de Garis, Etc Publications, 28 Feb. 2005

The Case Against an Autonomous Military, Sidney Perkowitz, Nautilus, 9 Apr. 2018: https://nautil.us/the-case-against-an-autonomous-military-237050/

The AI-Powered, Totally Autonomous Future of War Is Here, Will Knight, WIRED, 25 Jul. 2023: https://www.wired.com/story/ai-powered-totally-autonomous-future-of-war-is-here/

AI weapons pose threat to humanity, warns top scientist, Madhumita Murgia, Financial Times, 29 Nov. 2021: https://www.ft.com/content/03b2c443-b839-4093-a8f0-968987f426f4?segmentID=1c805c2b-5bd2-5477-363a-67e0fc4cf094

https://openai.com/blog/introducing-superalignment

The Monk’s Perspective: A Grim Outlook on the World’s End, Don Williams, Vigour Times, 25 Jun. 2023: https://vigourtimes.com/the-monks-perspective-a-grim-outlook-on-the-worlds-end/

https://x.ai/blog

https://www.tam-hunt.com/

Artificial Superintelligence: A Futuristic Approach, Roman V. Yampolskiy, Chapman and Hall/CRC, 19 Jun. 2015

—————————————————————

Courtesy:

The Verge
TIME
The Atlantic
Nautilus
WIRED
Financial Times
Vigour Times

—————————————————————

This video is produced for informational and/or educational purposes. I have made diligent efforts to ensure that the use of any media in this video does not constitute copyright infringement.
