#431 – Roman Yampolskiy: Dangers of Superintelligent AI
Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
– Yahoo Finance: https://yahoofinance.com
– MasterClass: https://masterclass.com/lexpod to get 15% off
– NetSuite: http://netsuite.com/lex to get a free product tour
– LMNT: https://drinkLMNT.com/lex to get a free sample pack
– Eight Sleep: https://eightsleep.com/lex to get $350 off
Transcript: https://lexfridman.com/roman-yampolskiy-transcript
EPISODE LINKS:
Roman’s X: https://twitter.com/romanyam
Roman’s Website: http://cecs.louisville.edu/ry
Roman’s AI book: https://amzn.to/4aFZuPb
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips
SUPPORT & CONNECT:
– Check out the sponsors above; it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman
OUTLINE:
Here are the timestamps for the episode. On some podcast players, you can click a timestamp to jump to that point.
(00:00) – Introduction
(09:12) – Existential risk of AGI
(15:25) – Ikigai risk
(23:37) – Suffering risk
(27:12) – Timeline to AGI
(31:44) – AGI Turing test
(37:06) – Yann LeCun and open source AI
(49:58) – AI control
(52:26) – Social engineering
(54:59) – Fearmongering
(1:04:49) – AI deception
(1:11:23) – Verification
(1:18:22) – Self-improving AI
(1:30:34) – Pausing AI development
(1:36:51) – AI safety
(1:46:35) – Current AI
(1:51:58) – Simulation
(1:59:16) – Aliens
(2:00:50) – Human mind
(2:07:10) – Neuralink
(2:16:15) – Hope for the future
(2:20:11) – Meaning of life