Content provided by Duncan Epping, Frank Denneman, and Johan van Amersfoort. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and made available directly by Duncan Epping, Frank Denneman, and Johan van Amersfoort or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://pl.player.fm/legal.

#076 - AI Roles Demystified: A Guide for Infrastructure Admins with Myles Gray

49:13

In this conversation, Myles Gray discusses the AI workflow and its personas, the responsibilities of data scientists and developers in deploying AI models, the role of infrastructure administrators, and the challenges of deploying models at the edge. He also explains quantization and why model accuracy matters, walks through the pipeline for deploying models, and contrasts unit testing, which exercises a single module or function within an application, with integration testing, which checks the interaction between different components or applications. The conversation also covers MLflow and other tools for storing and managing ML models, smaller models as an answer to the resource constraints of large models, the collaboration between personas needed for security and governance in AI projects, and the data governance policies that keep data quality and consistency in check.
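
As a concrete illustration of the quantization step described above, here is a minimal sketch using post-training dynamic quantization in PyTorch. The toy model, layer choice, and drift check are assumptions for illustration only, not something specified in the episode.

```python
# Hypothetical sketch: shrinking a model for edge deployment with
# post-training dynamic quantization in PyTorch. The architecture below
# is a stand-in for whatever trained network you actually deploy.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Convert the Linear layers to int8 weights; activations are quantized
# on the fly at inference time. This shrinks the quantized layers by
# roughly 4x at the cost of a small amount of accuracy.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Sanity-check how far the quantized outputs drift from the fp32 baseline.
x = torch.randn(1, 512)
print("max output drift:", (model(x) - quantized(x)).abs().max().item())
```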

Takeaways

  • The AI workflow involves multiple personas, including data scientists, developers, and infrastructure administrators.
  • Data scientists play a crucial role in developing AI models, while developers are responsible for deploying the models into production.
  • Infrastructure administrators need to consider the virtualization layer and ensure efficient and easy consumption of infrastructure components.
  • Deploying AI models at the edge requires quantization to reduce model size and considerations for form factor, scale, and connectivity.
  • The pipeline for deploying models involves steps such as unit testing, scanning for vulnerabilities, building container images, and pushing to a registry (a sketch of these stages follows this list).
  • Unit testing verifies a single module or function in isolation, while integration testing checks how different components or applications work together (illustrated in the testing sketch after this list).
  • MLflow and other tools are used to store and manage ML models (see the registration sketch after this list).
  • Smaller models are emerging as a solution to the resource constraints of large models.
  • Collaboration between different personas is important for ensuring security and governance in AI projects.
  • Data governance policies are crucial for maintaining data quality and consistency.
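
To make the pipeline takeaway concrete, here is a minimal sketch of those stages as a Python driver script. The image name, registry, and the use of pytest and Trivy are assumptions; a real setup would usually express the same stages declaratively in a CI system.

```python
# Hypothetical pipeline driver: unit tests -> vulnerability scan ->
# image build -> push. The image name and registry are made up.
import subprocess
import sys

IMAGE = "registry.example.internal/model-api:1.0.0"  # assumed name and tag

STAGES = [
    ["pytest", "tests/unit"],                  # run unit tests
    ["trivy", "fs", "--exit-code", "1", "."],  # fail on known CVEs in dependencies
    ["docker", "build", "-t", IMAGE, "."],     # build the container image
    ["docker", "push", IMAGE],                 # publish to the registry
]

for cmd in STAGES:
    print("running:", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit("stage failed: " + " ".join(cmd))
```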
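
The testing sketch referenced in the takeaways: a unit test that exercises one function in isolation next to an integration test that calls a deployed service. The preprocess function and the endpoint URL are hypothetical stand-ins.

```python
# Hypothetical pytest examples illustrating the unit vs. integration split.
import pytest
import requests


def preprocess(text: str) -> str:
    """Toy preprocessing step used by an imaginary inference service."""
    return text.strip().lower()


# Unit test: one function, no external systems involved.
def test_preprocess_strips_and_lowercases():
    assert preprocess("  Hello World  ") == "hello world"


# Integration test: exercises the deployed service end to end.
# The marker lets unit-only runs skip it (register it in pytest.ini).
@pytest.mark.integration
def test_inference_endpoint_returns_prediction():
    resp = requests.post(
        "http://model-api.example.internal/predict",  # hypothetical endpoint
        json={"text": "Hello World"},
        timeout=5,
    )
    assert resp.status_code == 200
    assert "prediction" in resp.json()
```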
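
And the MLflow registration sketch: logging a trained model so a versioned artifact is available to the deployment pipeline. The tracking URI, experiment, model, and registered name are assumptions for illustration.

```python
# Hypothetical MLflow example: log a trained model and register a version
# the deployment pipeline can pull. Server URI and names are made up.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # assumed server
mlflow.set_experiment("edge-classifier")

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Serializes the model and creates a new version in the model registry.
    mlflow.sklearn.log_model(
        model, "model", registered_model_name="edge-classifier"
    )
```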

Chapters

  • 00:00 Understanding the AI Workflow and Personas
  • 03:24 The Role of Data Scientists and Developers in Deploying AI Models
  • 08:47 The Responsibilities of Infrastructure Administrators
  • 15:25 Challenges of Deploying Models at the Edge
  • 20:29 The Pipeline for Deploying AI Models
  • 24:45 Unit Testing vs. Integration Testing
  • 28:22 Managing ML Models with MLflow and Other Tools
  • 32:17 The Emergence of Smaller Models
  • 39:58 Collaboration for Security and Governance in AI Projects
  • 46:32 The Importance of Data Governance

Disclaimer: The thoughts and opinions shared in this podcast are our own and those of our guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
