
Content provided by David Yakobovitch. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and made available directly by David Yakobovitch or his podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the procedure described here: https://pl.player.fm/legal.

How Platforms Leverage The Extended AI Community To Address Misinformation with Claire Leibowicz

Duration: 38:15
 

  • Claire Leibowicz currently leads the AI and Media Integrity program at the Partnership on AI. She holds a BA in Psychology and Computer Science from Harvard College, and a master’s degree in the Social Science of the Internet from Balliol College, University of Oxford, where she studied as a Clarendon Scholar.
  • Not only tech companies should be involved in creating good, responsible, ethical AI; civil society organizations, academic venues, other parts of industry, and especially the media should be involved as well.
  • AI and Media Integrity starts from a seemingly simple goal, good, healthy, beneficial information online, and asks how AI systems can be used in service of that goal.
  • Not everyone agrees on what type of content should be allowed online. Even humans don't agree on what misinformation is or what content should be shown to people through technology. Some tech companies feel empowered to take content off their platforms entirely: not just to apply a label or add context around a post, but to remove a public figure altogether, which is a real emboldening of platform agency in deciding who is allowed to speak and who is not.
  • The tactics of misinformation, how people create it and how they spread it, apply across social media generally: misinformation flows in WhatsApp groups, in texts, in all of these different venues. There is a real movement toward misinformation that is not a total misrepresentation of an event or a fact, but a slant, a leaning, or a caption that gives a post a different connotation than it would have if someone else had written it.
  • AI and Media Integrity seeks a public that can distinguish credible information from misleading information. Labeling is an interesting, almost in-between option: it doesn't limit speech, forbid sharing a post, or keep someone's information from being seen; it gives you more context. The idea is to find a middle ground in which platforms give users control, autonomy, and the ability to judge for themselves what is credible.
  • Some people are deeply skeptical of platforms. Labels might deepen the division between users who think labels help people be healthy consumers of content and users who find them biased, partisan, and error-prone. Automating label deployment is really complicated (a minimal pipeline sketch follows this list), and we don't yet know what the best intervention is to bolster credible content consumption.
  • With the de-platforming of Donald Trump, we're living in a new society in which we grant platforms the freedom to say, "we can remove content because we are serving our users' best interests," without acknowledging whether users actually want that.
  • The platforms have been emboldened, and that carries the connotation that they will become the arbiters of truth, something those who value free-speech principles might frown upon, since the internet was founded as a venue for democratizing speech and allowing people to speak. Platforms have other options for changing how content gets shown beyond labeling. Labels alone cannot resolve the question of what people trust, or why there is such general distrust in platforms' ability to self-regulate and in fact-checkers' and media companies' ability to offer non-politicized ratings.
  • We need to design interventions that don't patronize people but respect their intelligence and autonomy, raising awareness of checking sources and building media literacy.
  • Holistic digital-literacy and educational interventions should be combined with community-centric moderation, in which people in the community, rather than the platform itself, do the moderating; that might increase trust in how speech is labeled and ultimately decided upon.
  • Many platform policies about speech hinge on whether that speech causes real-world harm; a policy may say the platform won't label a post or act on it until there's a perception that it might prompt real-world harm. Manipulated media is basically any visual artifact that has been altered in some way by any means; some alterations pose no harm to the public square, while others, such as misleading political speech, might. So when we talk about manipulated media, it's really important to underscore what makes it misleading or problematic. Many people have advocated for AI-based solutions to deal with manipulated media (see the detection-versus-intent sketch after this list).
  • It's not just how an artifact has been manipulated that matters; it's partially the intent, why it was manipulated, and what it conveys. Just because something has been manipulated doesn't mean it's inherently misleading or automatically misinformation. What matters is the effect of that manipulation, and that's a really hard task for machines to gauge, let alone people.
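
To make the label-deployment point above concrete, here is a minimal sketch of a threshold-based labeling pipeline with a human-review band. Everything in it, the toy scoring heuristic, the thresholds, and the action names, is an illustrative assumption rather than any platform's actual system; a real deployment would use a trained classifier and fact-checker workflows.

```python
# Hypothetical sketch of automated label deployment. The scorer, thresholds,
# and action names are illustrative assumptions, not a real platform's API.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def misinformation_score(post: Post) -> float:
    """Toy stand-in for a trained classifier; returns a score in [0, 1].
    A production system would use a model trained on fact-checked examples."""
    suspicious_phrases = ("miracle cure", "they don't want you to know")
    hits = sum(phrase in post.text.lower() for phrase in suspicious_phrases)
    return min(1.0, 0.5 * hits)


def deploy_label(post: Post, auto: float = 0.9, review: float = 0.4) -> str:
    """Route a post based on classifier confidence."""
    score = misinformation_score(post)
    if score >= auto:
        return "attach-context-label"  # high confidence: label automatically
    if score >= review:
        return "human-review-queue"    # uncertain band: defer to fact-checkers
    return "no-action"                 # low score: leave the post alone


print(deploy_label(Post("1", "Miracle cure THEY don't want you to know about!")))
# -> attach-context-label
```

The uncertain middle band is the design choice that matters here: it keeps automation from acting alone precisely where, as the bullets above note, even humans disagree.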
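And here is the detection-versus-intent sketch referenced above. It separates the tractable forensics question ("was this media altered?") from the judgment call the episode says machines struggle with ("is it misleading?"). Both functions are illustrative stand-ins under stated assumptions, not real library calls.

```python
# Hypothetical sketch: detecting manipulation is separable from, and easier
# than, judging intent and effect. Both functions are toy stand-ins.


def manipulation_probability(image_bytes: bytes) -> float:
    """Stand-in for a media-forensics model (e.g., splice or GAN-artifact
    detection), returning a probability in [0, 1]. The length heuristic
    below exists only so the example runs."""
    return 0.8 if len(image_bytes) < 1024 else 0.1


def triage(image_bytes: bytes, caption: str) -> str:
    """Escalate manipulated media to humans rather than auto-judging intent."""
    if manipulation_probability(image_bytes) < 0.5:
        return "no-action"
    # Manipulated does not mean misleading: satire, art, and benign edits all
    # alter media without deceiving anyone. Judging intent and effect needs
    # context a pixel-level model lacks, so the sketch defers to human review.
    return "escalate-to-human-review"


print(triage(b"tiny-demo-image", "out-of-context caption"))
# -> escalate-to-human-review
```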

Show Notes Links

https://www.linkedin.com/in/claire-leibowicz-17156a65/

https://twitter.com/CLeibowicz

https://www.partnershiponai.org/manipulated-media-detection-requires-more-than-tools-community-insights-on-whats-needed/

https://medium.com/partnership-on-ai/a-field-guide-to-making-ai-art-responsibly-f7f4a5066ee

https://arxiv.org/abs/2011.12758

https://medium.com/swlh/it-matters-how-platforms-label-manipulated-media-here-are-12-principles-designers-should-follow-438b76546078

About HumAIn Podcast

The HumAIn Podcast is a leading artificial intelligence podcast that explores AI, data science, the future of work, and developer education for technologists. Whether you are an executive, data scientist, software engineer, product manager, or student in training, HumAIn connects you with industry thought leaders on technology trends that are relevant and practical. Frequently discussed topics include AI trends, AI for all, computer vision, natural language processing, machine learning, data science, and reskilling and upskilling for developers. Episodes focus on new technology, startups, and Human Centered AI in the Fourth Industrial Revolution. HumAIn is the channel to release new AI products, discuss technology trends, and augment human performance.

