Inside OpenAI's trust and safety operation - with Rosie Campbell
No organisation in the AI world is under more intense scrutiny than OpenAI. The maker of DALL·E, GPT-4, ChatGPT and Sora is constantly pushing the boundaries of artificial intelligence and has supercharged public enthusiasm for AI technologies.
With that elevated position come questions about how OpenAI ensures its models are not used for malign purposes.
In this interview we talk to Rosie Campbell from OpenAI’s policy research team about the many processes and safeguards in place to prevent abuse. Rosie also talks about the forward-looking work of the policy research team, anticipating longer-term risks that might emerge with more advanced AI systems.
Helen and Rosie discuss the challenges associated with agentic systems (AI that can interface with the wider world via APIs and other technologies), red-teaming new models, and whether advanced AIs should have ‘rights’ in the same way that humans or animals do.
You can read the paper referenced in this episode, ‘Practices for Governing Agentic AI Systems’, co-written by Rosie and her colleagues: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf
Watch the video of the interview here: https://www.youtube.com/watch?v=81LNrlEqgcM