Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://pl.player.fm/legal.
“Frontier Models are Capable of In-context Scheming” by Marius Hobbhahn, Alex Meinke, Bronson Schoen
This is a brief summary of what we believe to be the most important takeaways from our new paper and from our findings shown in the o1 system card. We also specifically clarify what we think we did NOT show.
Paper: https://www.apolloresearch.ai/research/scheming-reasoning-evaluations
Twitter about paper: https://x.com/apolloaisafety/status/1864735819207995716
Twitter about o1 system card: https://x.com/apolloaisafety/status/1864737158226928124
What we think the most important findings are
Models are now capable enough to do in-context scheming reasoning
We say an AI system is “scheming” if it covertly pursues misaligned goals, hiding its true capabilities and objectives. We think that in order to scheme, models likely need to be goal-directed, situationally aware, and capable enough to reason about scheming as a strategy. In principle, models might acquire situational awareness and stable long-term goals during training, and then scheme in pursuit of those goals. We only study the simpler case of “in-context scheming,” where [...]
---
Outline:
(00:28) What we think the most important findings are
(00:33) Models are now capable enough to do in-context scheming reasoning
(03:53) The scheming actions can sometimes be quite subtle
(05:05) Models sometimes double down on their deception
(06:35) Models sometimes scheme without any goal nudging
(07:34) Sonnet-3.5 and Opus-3 can sandbag in pursuit of helpfulness
(08:22) Sonnet-3.5 can sandbag without relying on CoT reasoning
(09:28) What we are not claiming:
(10:33) Takeaways
(10:36) Chain-of-thought monitoring
(11:42) Chain-of-thought access
(13:27) Inability safety cases now have to be more specific
The original text contained 7 images which were described by AI.
---
First published:
December 5th, 2024
Source:
https://www.lesswrong.com/posts/8gy7c8GAPkuu6wTiX/frontier-models-are-capable-of-in-context-scheming
---
Narrated by TYPE III AUDIO.
---