Content provided by Chad Woodford. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Chad Woodford or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://pl.player.fm/legal.

Impediments to Creating Artificial General Intelligence (AGI)

52:23
 
Manage episode 428954827 series 3396760

Send us a text

Artificial general intelligence, or superintelligence, is not right around the corner, despite what AI companies want you to believe, and that's because intelligence is really hard.
Major AI companies like OpenAI and Anthropic (as well as Ilya Sutskever’s new company) have the explicit goal of creating artificial general intelligence (AGI), and claim to be very close to doing so using technology that doesn’t seem capable of getting us there.
So let's talk about intelligence, both human and artificial.
What is artificial intelligence? What is intelligence? Are we going to be replaced or killed by superintelligent robots? Are we on the precipice of a techno-utopia, or some kind of singularity?
These are the questions I explore as I try to offer a layman's overview of why we're far away from AGI and superintelligence. Among other things, I highlight the limitations of current AI systems, including their lack of trustworthiness, their reliance on bottom-up machine learning, and their inability to provide true reasoning and common sense. I also introduce abductive inference, a rarely discussed type of reasoning.
Why do smart people want us to think that they’ve solved intelligence when they are smart enough to know they haven’t? Keep that question in mind as we go.
YouTube version originally recorded July 1, 2024....
Support Chad
James Bridle’s Ways of Being (book)
Ezra Klein’s comments on AI & capitalism
How LLMs work
Gary Marcus on the limits of AGI
More on induction and abduction
NYTimes deep dive into AI data harvesting
Sam Altman acknowledging that they’ve reached the limits of LLMs
Mira Murati saying the same thing last month
Google’s embarrassing AI search experience
AI Explained’s perspective on AGI
LLMs Can’t Plan paper
Paper on using LLMs to tackle abduction
ChatGPT is Bullshit paper
Philosophize This on nostalgia and pastiche
Please leave a comment with your thoughts, and anything I might have missed or gotten wrong. More about me over here

Support the show

Want to talk about how Chad can assist you with your own transformation?
Book a free consultation!
Join Chad's newsletter to learn about all new offerings, courses, trainings, and retreats.
Finally, you can support the podcast here.


Chapters

1. Intro (00:00:00)

2. What Is Intelligence? (00:03:19)

3. Overpromising and Underdelivering on AGI (00:04:20)

4. Apple Intelligence (00:08:05)

5. What Is AGI? (00:08:33)

6. AI as Cultural Mirror (00:11:00)

7. Defining Intelligence (00:11:57)

8. The Brain and the Mind (00:18:00)

9. The Mind and Reasoning (00:19:48)

10. Current AI Systems (00:24:28)

11. What's Missing (00:27:17)

12. Abductive Reasoning (00:35:43)

13. Asking the Chatbots (00:41:41)

14. Other Challenges (00:43:25)

15. AI & Trust (00:44:22)

16. Getting To AGI (00:45:13)

17. OpenAI Acknowledges the Limits of LLMs (00:46:15)

18. Conclusion (00:47:15)

17 episodes

