The show that goes beyond different. #BlueWaterSound
The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.
Deep, Funky, Soulful, Jackin' House Music. 420 Ceis, Acumen, Adriatique, Alex Augello, Alex Niggemann & Superlounge, Alexander East, Andrade, Andrew Chibale, Andrew Mataus, Andry Nalin, Andy Clockwork, Andy Meston, Anhanguera, Aphreme, Arco, Armbar, Artie Flexs, Arts & Leisure, Audio Soul Project, Bang Bang, BeatPimps, Belocca, Bleep District, Boo Williams, Brandon Bass, Brent Vassar, Brett Valentine, Bucked Naked, Butch, Canard, The Candy Dealers, Carleto, Chanson E, Chemars, Chris Lauer, C ...
Owain Evans - AI Situational Awareness, Out-of-Context Reasoning (2:15:46)
Owain Evans is an AI Alignment researcher, a research associate at the Center for Human-Compatible AI at UC Berkeley, and is now leading a new AI safety research group. In this episode we discuss two of his recent papers, “Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs” and “Connecting the Dots: LLMs can Infer and Verbalize Latent S…
[Crosspost] Adam Gleave on Vulnerabilities in GPT-4 APIs (+ extra Nathan Labenz interview) (2:16:08)
This is a special crosspost episode where Adam Gleave is interviewed by Nathan Labenz from the Cognitive Revolution. At the end I also have a discussion with Nathan Labenz about his takes on AI. Adam Gleave is the founder of FAR AI, and with Nathan they discuss finding vulnerabilities in GPT-4's fine-tuning and Assistants APIs, FAR AI's work exposing…
Ethan Perez on Selecting Alignment Research Projects (ft. Mikita Balesni & Henry Sleight) (36:45)
Ethan Perez is a Research Scientist at Anthropic, where he leads a team working on developing model organisms of misalignment. Youtube: https://youtu.be/XDtDljh44DM Ethan is interviewed by Mikita Balesni (Apollo Research) and Henry Sleight (Astra Fellowship) about his approach to selecting projects for AI Alignment research. A transcript & wr…
Emil Wallner on Sora, Generative AI Startups and AI optimism (1:42:48)
Emil is the co-founder of palette.fm (colorizing B&W pictures with generative AI) and previously worked on deep learning for Google Arts & Culture. We were talking about Sora on a daily basis, so I decided to record our conversation, and then proceeded to confront him about AI risk. Patreon: https://www.patreon.com/theinsideview Sora: https://o…
Evan Hubinger on Sleeper Agents, Deception and Responsible Scaling Policies (52:13)
Evan Hubinger leads the Alignment Stress-Testing team at Anthropic and recently published "Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training". In this interview we mostly discuss the Sleeper Agents paper, but also how this line of work relates to his work with Alignment Stress-Testing, Model Organisms of Misalignment, Deceptive…
[Jan 2023] Jeffrey Ladish on AI Augmented Cyberwarfare and compute monitoring (33:04)
Jeffrey Ladish is the Executive Director of Palisade Research, which aims to "study the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever". He previously helped build out the information security program at Anthropic. Audio is an edit & re-master of the Twitter Space on "AI Governance an…
Holly Elmore is an AI Pause Advocate who has organized two protests in the past few months (against Meta's open-sourcing of LLMs and before the UK AI Summit), and is currently running the US front of the Pause AI Movement. Prior to that, Holly worked at a think tank and has a PhD in evolutionary biology from Harvard. [Deleted & re-uploa…
Podcast Retrospective and Next Steps (1:03:42)
https://youtu.be/Fk2MrpuWinc
Paul Christiano's views on "doom" (ft. Robert Miles) (4:53)
Youtube: https://youtu.be/JXYcLQItZsk Paul Christiano's post: https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom
Neel Nanda on mechanistic interpretability, superposition and grokking (2:04:53)
Neel Nanda is a researcher at Google DeepMind working on mechanistic interpretability. He is also known for his YouTube channel where he explains what is going on inside neural networks to a large audience. In this conversation, we discuss what mechanistic interpretability is, how Neel got into it, his research methodology, his advice for people…
Joscha Bach on how to stop worrying and love AI (2:54:29)
Joscha Bach (who defines himself as an AI researcher/cognitive scientist) has recently been debating existential risk from AI with Connor Leahy (previous guest of the podcast), and since their conversation was quite short I wanted to continue the debate in more depth. The resulting conversation ended up being quite long (over 3h of recording), with…
Erik Jones on Automatically Auditing Large Language Models (22:36)
Erik is a PhD student at Berkeley working with Jacob Steinhardt, interested in making generative machine learning systems more robust, reliable, and aligned, with a focus on large language models. In this interview we talk about his paper "Automatically Auditing Large Language Models via Discrete Optimization", which he presented at ICML. Youtube: https://you…
Dylan Patel on the GPU Shortage, Nvidia and the Deep Learning Supply Chain (12:22)
Dylan Patel is Chief Analyst at SemiAnalysis, a boutique semiconductor research and consulting firm specializing in the semiconductor supply chain from chemical inputs to fabs to design IP and strategy. The SemiAnalysis substack has ~50,000 subscribers and is the second biggest tech substack in the world. In this interview we discuss the current GPU…
Tony Wang on Beating Superhuman Go AIs with Adversarial Policies (3:35)
Tony is a PhD student at MIT, and author of "Adversarial Policies Beat Superhuman Go AIs", accepted as an Oral at the International Conference on Machine Learning (ICML). Paper: https://arxiv.org/abs/2211.00241 Youtube: https://youtu.be/Tip1Ztjd-so
David Bau on Editing Facts in GPT, AI Safety and Interpretability (24:53)
David Bau is an Assistant Professor studying the structure and interpretation of deep networks, and a co-author of "Locating and Editing Factual Associations in GPT", which introduced Rank-One Model Editing (ROME), a method that allows users to alter the weights of a GPT model, for instance by forcing it to output that the Eiffel Tower is in Rome.…
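To make the weight-editing idea concrete, here is a minimal, hypothetical sketch of a rank-one update to a single projection matrix. It is not the actual ROME algorithm (which derives the key and value vectors from the model's activations and a target fact); the tensor names, dimensions, and vectors below are made-up illustrations.

```python
import torch

# Hypothetical rank-one edit of one MLP projection matrix, in the spirit of ROME.
# k stands in for a "key" activation (e.g. the pattern evoked by "The Eiffel Tower"),
# v_new stands in for a "value" encoding the edited fact ("... is located in Rome").
d = 1024
W = torch.randn(d, d)        # stand-in for a pretrained weight matrix
k = torch.randn(d)           # key direction to rewrite
v_new = torch.randn(d)       # desired output for that key

# Rank-one update: after the edit, W_edited @ k equals v_new,
# while inputs orthogonal to k are left unchanged.
delta = torch.outer(v_new - W @ k, k) / (k @ k)
W_edited = W + delta

assert torch.allclose(W_edited @ k, v_new, atol=1e-4)
```

Because the change is rank-one, it only affects the matrix along one input direction, which is why such edits can target a single association without retraining the model.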
Alexander Pan on the MACHIAVELLI benchmark (20:10)
I've talked to Alexander Pan, a first-year PhD student at Berkeley working with Jacob Steinhardt, about his paper "Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark", accepted as an oral at ICML. Youtube: https://youtu.be/MjkSETpoFlY Paper: https://arxiv.org/abs/2304.03279
Vincent Weisser on Funding AI Alignment Research (18:07)
Vincent is currently spending his time supporting AI alignment efforts, as well as investing across AI, semi, energy, crypto, bio and deeptech. His mission is to improve science, augment human capabilities, have a positive impact, help reduce existential risks and extend healthy human lifespan. Youtube: https://youtu.be/weRoJ8KN2f0 Outline (00:00) …
[JUNE 2022] Aran Komatsuzaki on Scaling, GPT-J and Alignment (1:17:21)
Aran Komatsuzaki is an ML PhD student at GaTech and a lead researcher at EleutherAI, where he was one of the authors of GPT-J. In June 2022 we recorded an episode on scaling, following up on the first Ethan Caballero episode (where we mentioned Aran as an influence on how Ethan started thinking about scaling). Note: For some reason I procrastinated on e…
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI (1:29:58)
Curtis, also known on the internet as AI_WAIFU, is the head of Alignment at EleutherAI. In this episode we discuss the massive orders of H100s from different actors, why he thinks AGI is 4-5 years away, why he thinks we're 90% "toast", his comment on Eliezer Yudkowsky's Death with Dignity, and what kind of Alignment projects are currently going on a…
Eric Michaud on scaling, grokking and quantum interpretability (48:22)
Eric is a PhD student in the Department of Physics at MIT working with Max Tegmark on improving our scientific/theoretical understanding of deep learning -- understanding what deep neural networks do internally and why they work so well. This is part of a broader interest in the nature of intelligent systems, which previously led him to work with S…
Jesse Hoogland on Developmental Interpretability and Singular Learning Theory (43:11)
Jesse Hoogland is a research assistant at David Krueger's lab in Cambridge studying AI Safety. More recently, Jesse has been thinking about Singular Learning Theory and Developmental Interpretability, which we discuss in this episode. Before he came to grips with existential risk from AI, he co-founded a health-tech startup automating bariatric sur…
Clarifying and predicting AGI by Richard Ngo (4:36)
Explainer podcast for Richard Ngo's "Clarifying and predicting AGI" post on LessWrong, which introduces the t-AGI framework to evaluate AI progress. A system is considered t-AGI if it can outperform most human experts, given time t, on most cognitive tasks. This is a new format, quite different from the interviews and podcasts I have been recording …
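As a rough illustration of the t-AGI criterion described above, here is a hypothetical toy sketch; the task list, threshold, and function name are illustrative assumptions and are not taken from Ngo's post.

```python
# Toy sketch of the t-AGI criterion: a system counts as t-AGI if, given time budget t,
# it outperforms most human experts on most cognitive tasks (assumed data below).

def is_t_agi(beats_experts_at_t: dict[str, bool], majority: float = 0.5) -> bool:
    """beats_experts_at_t maps task -> whether the system beat most human experts with budget t."""
    if not beats_experts_at_t:
        return False
    return sum(beats_experts_at_t.values()) / len(beats_experts_at_t) > majority

# Example: evaluating a system at t = 1 hour on three illustrative tasks.
results_one_hour = {
    "write a short essay": True,
    "debug a small program": True,
    "design an experiment": False,
}
print(is_t_agi(results_one_hour))  # True -> would count as "1-hour AGI" under this toy reading
```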
Alan Chan and Max Kaufmann on Model Evaluations, Coordination and AI Safety (1:13:15)
Max Kaufmann and Alan Chan discuss the evaluation of large language models, AI Governance and, more generally, the impact of the deployment of foundation models. Max is currently a Research Assistant to Owain Evans, mainly thinking about (and fixing) issues that might arise as we scale up our current ML systems, but also interested in issues arising f…
Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines (1:45:04)
Breandan Considine is a PhD student at the School of Computer Science at McGill University, under the supervision of Jin Guo and Xujie Si. There, he is building tools to help developers locate and reason about software artifacts, by learning to read and write code. I met Breandan while doing my "scale is all you need" series of interviews at Mila,
Christoph Schuhmann on Open Source AI, Misuse and Existential risk (32:24)
Christoph Schuhmann is the co-founder and organizational lead at LAION, the non-profit that released LAION-5B, a dataset of 5.85 billion CLIP-filtered image-text pairs, 14x bigger than LAION-400M, previously the biggest openly accessible image-text dataset in the world. Christoph is being interviewed by Alan Chan, PhD in Machine Learning at Mila, and…
Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building (2:03:58)
Siméon Campos is the founder of EffiSciences and SaferAI, mostly focusing on alignment field building and AI Governance. More recently, he started the newsletter Navigating AI Risk on AI Governance, with a first post on slowing down AI. Note: this episode was recorded in October 2022, so a lot of the content being discussed references what was known …
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision (2:34:39)
Collin Burns is a second-year ML PhD student at Berkeley, working with Jacob Steinhardt on making language models honest, interpretable, and aligned. In 2015 he broke the Rubik’s Cube world record, and he's now back with "Discovering latent knowledge in language models without supervision", a paper on how you can recover diverse knowledge represented in la…
Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment (1:52:26)
Victoria Krakovna is a Research Scientist at DeepMind working on AGI safety and a co-founder of the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future. In this interview we discuss three of her recent LW posts, namely DeepMind Alignment Team Opinions …
David Krueger–Coordination, Alignment, Academia (2:45:19)
David Krueger is an assistant professor at the University of Cambridge and got his PhD from Mila. His research group focuses on aligning deep learning systems, but he is also interested in governance and global coordination. He is famous in Cambridge for not having an AI alignment research agenda per se, and instead he tries to enable his seven PhD…
Ethan Caballero–Broken Neural Scaling Laws (23:47)
Ethan Caballero is a PhD student at Mila interested in how to best scale Deep Learning models according to all downstream evaluations that matter. He is known as the fearless leader of the "Scale Is All You Need" movement and the edgiest person at MILA. His first interview is the second most popular interview on the channel and today he's back to t…
Irina Rish–AGI, Scaling and Alignment (1:26:06)
Irina Rish is a professor at the Université de Montréal, a core member of Mila (Quebec AI Institute), and the organizer of the neural scaling laws workshop towards maximally beneficial AGI. In this episode we discuss Irina's definition of Artificial General Intelligence, her takes on AI Alignment, AI Progress, current research in scaling laws, the neu…
Shahar Avin–Intelligence Rising, AI Governance (2:04:40)
Shahar is a senior researcher at the Centre for the Study of Existential Risk in Cambridge. In his past life, he was a Google Engineer, though right now he spends most of his time thinking about how to prevent the risks that would arise if companies like Google end up deploying powerful AI systems, by organizing AI Governance role-playing workshops. In …
Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk (1:41:14)
Katja runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of AI. She is well known for a survey published in 2017, "When Will AI Exceed Human Performance? Evidence From AI Experts", and recently published a new survey of AI experts, "What do ML researchers think about AI in 2022". We sta…
Markus Anderljung is the Head of AI Policy at the Centre for Governance of AI in Oxford and was previously seconded to the UK government office as a senior policy specialist. In this episode we discuss Jack Clark's AI Policy takes, answer questions about AI Policy from Twitter and explore what is happening in the AI Governance landscape more broadl…
Alex Lawsen—Forecasting AI Progress (1:04:57)
Alex Lawsen is an advisor at 80,000 Hours, released an Introduction to Forecasting YouTube series and has recently been thinking about forecasting AI progress, why you cannot just "update all the way bro" (discussed in my latest episode with Connor Leahy) and how to develop inside views about AI Alignment in general. Youtube: https://youtu.be/vLkas…
Robert Long–Artificial Sentience (1:46:43)
Robert Long is a research fellow at the Future of Humanity Institute. His work is at the intersection of the philosophy of AI Safety and AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever's "slightly conscious" tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digit…
Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming (2:01:26)
Ethan Perez is a research scientist at Anthropic, working on large language models. He is the second Ethan working with large language models to come on the show, but in this episode we discuss why alignment, not scale, is actually what you need. We discuss three projects he has been pursuing before joining Anthropic, namely the Inverse Scaling Priz…
Robert Miles–Youtube, AI Progress and Doom (2:51:16)
Robert Miles made videos for Computerphile before deciding to create his own YouTube channel about AI Safety. Lately, he's been working on a Discord community that uses Stampy the chatbot to answer YouTube comments. We also spend some time discussing recent AI Progress and why Rob is not that optimistic about humanity's survival. Transcrip…
Connor Leahy–EleutherAI, Conjecture (2:57:19)
Connor was the first guest of this podcast. In the last episode, we talked a lot about EleutherAI, a grassroots collective of researchers he co-founded, who open-sourced GPT-3-sized models such as GPT-NeoX and GPT-J. Since then, Connor co-founded Conjecture, a company aiming to make AGI safe through scalable AI Alignment research. One of the goals of…
Raphaël Millière Contra Scaling Maximalism (2:27:12)
Raphaël Millière is a Presidential Scholar in Society and Neuroscience at Columbia University. He previously completed a PhD in philosophy at Oxford, is interested in the philosophy of mind, cognitive science, and artificial intelligence, and has recently been discussing at length the current progress in AI with popular Twitter threads on GPT-3…
Blake Richards–AGI Does Not Exist (1:15:31)
Blake Richards is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Faculty Member at Mila. He thinks that AGI is not a coherent concept, which is why he ended up on a recent AGI political compass meme. When people asked on Twitter who the edgiest people at Mila were, his…
Ethan Caballero–Scale is All You Need (51:54)
Ethan is known on Twitter as the edgiest person at MILA. We discuss all the gossip around scaling large language models in what will be later known as the Edward Snowden moment of Deep Learning. In his free time, Ethan is a Master’s degree student at MILA in Montreal, and has published papers on out of distribution generalization and robustness ge…
Peter is the co-CEO of Rethink Priorities, a fast-growing non-profit doing research on how to improve the long-term future. In his free time, Peter makes money in prediction markets and is quickly becoming one of the top forecasters on Metaculus. We talk about the probability of London getting nuked, Rethink Priorities and why EA should fund projec…
9. Emil Wallner on Building a €25000 Machine Learning Rig (56:41)
Emil is a resident at the Google Arts & Culture Lab where he explores the intersection between art and machine learning. He recently built his own Machine Learning server, or rig, which cost him €25,000. Emil's Story: https://www.emilwallner.com/p/ml-rig Youtube: https://youtu.be/njbPpxhE6W0 00:00 Intro 00:23 Building your own rig 06:11 The Nvidia …
8. Sonia Joseph on NFTs, Web 3 and AI Safety (1:25:36)
Sonia is a graduate student applying ML to neuroscience at MILA. She previously applied deep learning to neural data at Janelia, worked as an NLP research engineer at a startup, and graduated in computational neuroscience at Princeton University. Anonymous feedback: https://app.suggestionox.com/r/xOmqTW Twitter: https://twitter.com/MichaelTrazzi Sonia's …
7. Phil Trammell on Economic Growth under Transformative AI (2:09:54)
Phil Trammell is an Oxford PhD student in economics and research associate at the Global Priorities Institute. Phil is one of the smartest people I know, when considering the intersection of the long-term future and economic growth. Funnily enough, Phil was my roommate a few years ago in Oxford, and last time I called him he casually said that he h…
6. Slava Bobrov on Brain Computer Interfaces (1:39:45)
In this episode I discuss Brain Computer Interfaces with Slava Bobrov, a self-taught Machine Learning Engineer applying AI to neural biosignals to control robotic limbs. This episode will be of special interest to you if you're an engineer who wants to get started with brain computer interfaces, or are just broadly interested in how this technology cou…
5. Charlie Snell on DALL-E and CLIP (2:53:28)
We talk about AI generated art with Charlie Snell, a Berkeley student who wrote extensively about AI art for ML@Berkeley's blog (https://ml.berkeley.edu/blog/). We look at multiple slides with art throughout our conversation, so I highly recommend watching the video (https://www.youtube.com/watch?v=gcwidpxeAHI). In the first part we go through Char…
4. Sav Sidorov on Learning, Contrarianism and Robotics (3:06:47)
I interview Sav Sidorov about top-down learning, contrarianism, religion, university, robotics, ego, education, Twitter, friends, psychedelics, B-values and beauty. Highlights & Transcript: https://insideview.substack.com/p/sav Watch the video: https://youtu.be/_Y6_TakG3d0
3. Evan Hubinger on Takeoff speeds, Risks from learned optimization & Interpretability (1:44:24)
We talk about Evan’s background @ MIRI & OpenAI, Coconut, homogeneity in AI takeoff, reproducing SoTA & openness in multipolar scenarios, quantilizers & operationalizing strategy stealing, Risks from learned optimization & evolution, learned optimization in Machine Learning, clarifying Inner AI Alignment terminology, transparency & interpretability…