Unpacking AI Bias: Impact, Detection, Prevention, and Policy; With Guest: Dr. Cari Miller, MBA, FHCA
What is AI bias, and how does it impact both organizations and individual members of society? How can individuals detect whether they've been affected by AI bias? What can be done to prevent or mitigate it? Can AI/ML systems be audited for bias and, if so, how?
The MLSecOps Podcast explores these questions and more with guest Cari Miller, Founder of the Center for Inclusive Change and member of the For Humanity Board of Directors.
This week’s episode delves into the controversial topics of Trusted and Ethical AI within the realm of MLSecOps, offering insightful discussion and thoughtful perspectives. It also highlights the importance of continuing the conversation around AI bias and working toward creating more ethical and fair AI/ML systems.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform