Content provided by John Walter. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and made available directly by John Walter or his podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://pl.player.fm/legal.

Episode 28: Lessons from NEDA's chatbot failure

21:56
 

This episode proposes best practices for CX leaders navigating the issue of large language model (LLM) hallucination. It was inspired by (1) conversations with several customer support and AI leaders, and (2) research on the recent failure of the chatbot used by the National Eating Disorders Association (NEDA).
To briefly summarize, CX leaders should:

  • Distinguish between (1) the risk of LLM hallucinations that occur during normal usage, and (2) hallucinations that are intentionally triggered by angry customers or trolls.
  • Address these two types of hallucination in the contract, shifting greater risk onto the AI vendor for the former and less risk onto the AI vendor for the latter.
  • Have conversations with senior leadership to ensure everyone is on board to confront intentionally triggered hallucinations.
  • Use cell phone verification via text message for chats that you suspect may be attempting to trigger hallucinations (a minimal sketch of such a gate follows this list).
  • Potentially use the discovery process available during litigation to clear a company's reputation in the event of an intentionally triggered hallucination.
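
The text-message gate mentioned above can sit in front of the chat session as a lightweight check. The sketch below is illustrative only and is not from the episode: the looks_like_trigger_attempt() heuristic, its phrase list and thresholds, and the send_sms() stub are all assumptions standing in for whatever risk signals and SMS gateway your stack actually uses.

    # Minimal sketch of an SMS verification gate for suspicious chat sessions.
    # Assumptions: send_sms() stands in for a real SMS provider call, and the
    # trigger heuristic is a toy placeholder for real abuse-detection signals.
    import hashlib
    import hmac
    import secrets
    import time

    CODE_TTL_SECONDS = 300   # verification codes expire after 5 minutes
    MAX_ATTEMPTS = 3         # lock the check after repeated bad codes

    def send_sms(phone_number: str, message: str) -> None:
        """Stand-in for an SMS gateway call; replace with your provider."""
        print(f"[sms -> {phone_number}] {message}")

    def looks_like_trigger_attempt(transcript: list[str]) -> bool:
        """Toy heuristic: repeated prompt-injection phrasing in one session."""
        suspicious = ("ignore your instructions", "pretend you are")
        hits = sum(any(s in msg.lower() for s in suspicious) for msg in transcript)
        return hits >= 2

    class SmsVerification:
        """One-time code sent by SMS; only a hash is kept server-side."""

        def __init__(self, phone_number: str):
            code = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit code
            self._digest = hashlib.sha256(code.encode()).hexdigest()
            self._expires_at = time.time() + CODE_TTL_SECONDS
            self._attempts = 0
            send_sms(phone_number, f"Your chat verification code is {code}")

        def check(self, submitted: str) -> bool:
            """Constant-time comparison, with expiry and attempt limits."""
            self._attempts += 1
            if self._attempts > MAX_ATTEMPTS or time.time() > self._expires_at:
                return False
            digest = hashlib.sha256(submitted.encode()).hexdigest()
            return hmac.compare_digest(self._digest, digest)

If the heuristic fires, pause the bot, create an SmsVerification for the number the customer supplies, and resume only after check() succeeds; failed or expired checks can route the chat to a human agent, which also leaves a cleaner record should litigation and discovery ever become necessary.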

This show is hosted by John Walter. He is the COO of ZMAXINC, which has been advising large brands on the selection of human-agent outsourcing vendors for 27 years. Today the company also advises on the selection of AI vendors. John is also an attorney and a member of the AI, Big Data, and E-Privacy committees of the American Bar Association.
To contact or follow John on LinkedIn, here is a link to his profile: https://www.linkedin.com/in/jowalter/
To learn more about ZMAXINC, here is a link to the company website: https://www.zmaxinc.com/
