
Content provided by Jupiter Broadcasting. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Jupiter Broadcasting or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://pl.player.fm/legal.

399: Ethics in AI

38:48
 

Machine learning promises to change many industries, but with these changes come dangerous new risks. Join Jim and Wes as they explore some of the surprising ways bias can creep in and the serious consequences of ignoring these problems.

Links:

  • Microsoft’s neo-Nazi sexbot was a great lesson for makers of AI assistants — What started out as an entertaining social experiment—get regular people to talk to a chatbot so it could learn while they, hopefully, had fun—became a nightmare for Tay’s creators. Users soon figured out how to make Tay say awful things. Microsoft took the chatbot offline after less than a day.
  • Microsoft's Zo chatbot is a politically correct version of her sister Tay—except she’s much, much worse — A few months after Tay’s disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and GroupMe.
  • How to make a racist AI without really trying | ConceptNet blog — Some people expect that fighting algorithmic racism is going to come with some sort of trade-off. There’s no trade-off here. You can have data that’s better and less racist. You can have data that’s better because it’s less racist. There was never anything “accurate” about the overt racism that word2vec and GloVe learned. (A minimal sketch of the post's experiment follows this list.)
  • Microsoft warned investors that biased or flawed AI could hurt the company’s image — Notably, this addition comes after a research paper by MIT Media Lab graduate researcher Joy Buolamwini showed in February 2018 that Microsoft’s facial recognition algorithm was less accurate for women and people of color. In response, Microsoft updated its facial recognition models and wrote a blog post about how it was addressing bias in its software.
  • AI bias: It is the responsibility of humans to ensure fairness — Amazon recently pulled the plug on its experimental AI-powered recruitment engine when it was discovered that the machine learning technology behind it was exhibiting bias against female applicants.
  • California Police Using AI Program That Tells Them Where to Patrol, Critics Say It May Just Reinforce Racial Bias — “The potential for bias to creep into the deployment of the tools is enormous. Simply put, the devil is in the data,” Vincent Southerland, executive director of the Center on Race, Inequality, and the Law at NYU School of Law, wrote for the American Civil Liberties Union last year.
  • A.I. Could Worsen Health Disparities — A recent study found that some facial recognition programs incorrectly classify less than 1 percent of light-skinned men but more than one-third of dark-skinned women. What happens when we rely on such algorithms to diagnose melanoma on light versus dark skin? (A toy version of this kind of per-group audit also follows this list.)
  • Responsible AI Practices — These questions are far from solved, and in fact are active areas of research and development. Google is committed to making progress in the responsible development of AI and to sharing knowledge, research, tools, datasets, and other resources with the larger community. Below we share some of our current work and recommended practices.
  • The Ars Technica System Guide, Winter 2019: The one about the servers — The Winter 2019 Ars System Guide has returned to its roots: showing readers three real-world system builds we like at this precise moment in time. Instead of general performance desktops, this time around we're going to focus specifically on building some servers.
  • Introduction to Python Development at Linux Academy — This course is designed to teach you how to program using Python. We'll cover the building blocks of the language, programming design fundamentals, how to use the standard library, third-party packages, and how to create Python projects. In the end, you should have a grasp of how to program.
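The ConceptNet post above is worth pausing on, because the failure it describes is easy to reproduce. Here is a minimal sketch of that experiment, assuming gensim and scikit-learn are installed and using short illustrative word lists rather than the blog's full sentiment lexicons:

```python
# Sketch of the ConceptNet blog's experiment: train a sentiment classifier
# on pretrained GloVe vectors, then score ordinary given names. Names carry
# no sentiment, so any score gap between them is bias absorbed from the
# training corpus. Word lists and names here are illustrative, not the
# blog's exact data.
import gensim.downloader
import numpy as np
from sklearn.linear_model import LogisticRegression

vectors = gensim.downloader.load("glove-wiki-gigaword-50")  # pretrained GloVe

positive = ["good", "great", "excellent", "happy", "love", "wonderful"]
negative = ["bad", "terrible", "awful", "sad", "hate", "horrible"]

X = np.array([vectors[w] for w in positive + negative])
y = np.array([1] * len(positive) + [0] * len(negative))
clf = LogisticRegression().fit(X, y)

def sentiment(word):
    # Probability of "positive" for a single word, per the toy classifier.
    return clf.predict_proba(vectors[word.lower()].reshape(1, -1))[0, 1]

# Names assumed to be in the GloVe vocabulary; the blog found systematic
# score gaps between names common among different ethnic groups.
for name in ["emily", "shaniqua"]:
    print(name, round(sentiment(name), 2))
```

Nothing in that pipeline is deliberately malicious; the skew rides in on the embeddings, which is exactly the blog's point.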
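Likewise, the facial-recognition statistic quoted above comes from the kind of analysis any team can run: slice a model's predictions by demographic group and compare error rates. A toy sketch with fabricated data (a real audit would use a labeled benchmark, as the Gender Shades study did):

```python
# Per-group error audit: group predictions by a demographic attribute and
# compare error rates. The numbers below are fabricated for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":     ["light_male"] * 4 + ["dark_female"] * 4,
    "label":     [1, 0, 1, 0, 1, 0, 1, 0],
    "predicted": [1, 0, 1, 0, 0, 1, 1, 0],
})

errors = results["label"] != results["predicted"]
print(errors.groupby(results["group"]).mean())
# dark_female 0.50 vs light_male 0.00 -- a gap like this is the red flag.
```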

TechSNAP · 549 subscribers · 243 episodes



Welcome to Player FM

Player FM scans the web for high-quality podcasts for you to enjoy right now. It's the best podcast app, and it works on Android, iPhone, and the web. Sign up to sync subscriptions across devices.

 

Quick Reference Guide