Irina Rish–AGI, Scaling and Alignment
Manage episode 344527957 series 2966339
Irina Rish is a professor at the Université de Montréal, a core member of Mila (Quebec AI Institute), and the organizer of the Neural Scaling Laws workshop "Towards Maximally Beneficial AGI".
In this episode we discuss Irina's definition of Artificial General Intelligence, her takes on AI alignment and AI progress, current research in scaling laws, the Neural Scaling Laws workshop she has been organizing, phase transitions, continual learning, existential risk from AI, and the alignment work currently happening at Mila.
Transcript: theinsideview.ai/irina
Youtube: https://youtu.be/ZwvJn4x714s
OUTLINE
(00:00) Highlights
(00:30) Introduction
(01:03) Defining AGI
(03:55) AGI means augmented human intelligence
(06:20) Solving alignment via AI parenting
(09:03) From the early days of deep learning to general agents
(13:27) How Irina updated from Gato
(17:36) Building truly general AI within Irina's lifetime
(19:38) The least impressive thing that won't happen in five years
(22:36) Scaling beyond power laws
(28:45) The neural scaling laws workshop
(35:07) Why Irina does not want to slow down AI progress
(53:52) Phase transitions and grokking
(01:02:26) Does scale solve continual learning?
(01:11:10) Irina's probability of existential risk from AGI
(01:14:53) Alignment work at Mila
(01:20:08) Where will Mila get its compute from?
(01:27:04) With Great Compute Comes Great Responsibility
(01:28:51) The Neural Scaling Laws Workshop At NeurIPS