AI explained: AI and governance
Reed Smith emerging tech lawyers Andy Splittgerber in Munich and Cynthia O’Donoghue in London join entertainment & media lawyer Monique Bhargava in Chicago to delve into the complexities of AI governance. From the EU AI Act to US approaches, we explore common themes, potential pitfalls and strategies for responsible AI deployment. Discover how companies can navigate emerging regulations, protect user data and ensure ethical AI practices.
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Andy: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape globally. Today, we'll focus on AI and governance, with a main emphasis on generative AI, from a regional perspective looking at Europe and the US. My name is Andy Splittgerber. I'm a partner in the Emerging Technologies Group of Reed Smith in Munich, and I'm also very actively advising clients and companies on artificial intelligence. Here with me, I've got Cynthia O'Donoghue from our London office and Nikki Bhargava from our Chicago office. Thanks for joining.
Cynthia: Thanks for having me. Yeah, I'm Cynthia O'Donoghue. I'm an emerging technology partner in our London office, also currently advising clients on AI matters.
Monique: Hi, everyone. I'm Nikki Bhargava. I'm a partner in our Chicago office and our entertainment and media group, and really excited to jump into the topic of AI governance. So let's start with a little bit of a basic question for you, Cynthia and Andy. What is shaping how clients are approaching AI governance within the EU right now?
Cynthia: Thanks, Nikki. The EU has just received a big piece of legislation, which went into effect on the 1st of August, that regulates general-purpose AI and high-risk AI and bans certain AI practices. But that's only part of the European ecosystem. The EU AI Act will essentially interplay with the General Data Protection Regulation, the EU's Supply Chain Act, and the latest cybersecurity law in the EU, which is the Network and Information Security Directive No. 2. So essentially there's a lot for organizations to get their hands around in the EU. The AI Act has phased dates of effectiveness, but in terms of governance it lays out quite a lot, so it's a perfect time for organizations to start thinking about that and getting ready for the various aspects of the AI Act as they in turn come into effect. How does that compare, Nikki, with what's going on in the U.S.?
Monique: So, you know, the U.S. is still evaluating from a regulatory standpoint where it's going to land on AI regulation. That's not to say we don't have legislation in place. We have Colorado, with the first comprehensive AI legislation. And earlier in the year, we also had guidelines from the Office of Management and Budget to federal agencies about how to procure and implement AI, which have really informed the governance process. I think a lot of companies, in the absence of regulatory guidance, have been looking to the OMB memo to help inform what their process may look like. The one thing I would highlight, because we're operating in this area of unknown and yet-to-come guidance, is that a lot of companies are looking to their existing governance frameworks right now and evaluating, from a company culture perspective, a mission perspective, and their relationship with consumers, how they want to develop and implement AI, whether it's internally or externally. A lot of the governance process and program pulls guidance from some of those internal ethics as well.
Cynthia: Interesting. So I'd say it's somewhat similar in the EU, but I think, Andy, the US puts more emphasis on consumer protection, whereas the EU AI Act is more all-encompassing in terms of governance. Wouldn't you agree?
Andy: Yeah, that was also the question I wanted to ask Nikki: where she sees the parallels and whether organizations, in her view, can follow a global approach to AI governance. And on the question you asked, yes, the AI Act, the European one, is more encompassing. It puts a lot of obligations on developers and deployers, meaning companies that use AI. In the end, of course, it also has consumer and user protection in mind, but the rules directly relating to consumers or users are, I would say, limited. So, Nikki, you know US law well and you have a good overview of European laws, while we are always struggling with the many US laws. What's your thought: can companies follow a global approach in terms of AI governance?
Monique: In my opinion? Yes, I do think there will be a global approach. The way the US legislates, what we've seen is a number of laws governing certain uses and outputs first, perhaps because they were easier to pass than a comprehensive law. So we see laws that govern outputs in terms of use of likenesses and right-of-publicity violations. We're also seeing laws come up that regulate the use of personal information in AI as a separate category. Outside of the consumer and corporate consumer base, we're also seeing a lot of laws around elections. And finally, we're seeing laws pop up around disclosure for consumers that are interacting with AI systems, for example, AI-powered chatbots. But as I mentioned, the US is taking a number of cues from the EU AI Act. For example, Colorado did pass a comprehensive AI law, which speaks to obligations for both developers and deployers, similar to the way the EU AI Act is structured, and focuses on what Colorado calls high-risk AI systems as well as algorithmic discrimination. I think it doesn't exactly follow the EU AI Act, but it draws similar parallels and pulls in a lot of the same principles. That's the kind of law which I really see informing companies on how to structure their AI governance programs, probably because, the simple answer is, it requires deployers at least to establish a risk management policy and procedure and an impact assessment for high-risk systems. And impliedly, it really requires developers to do the same, because developers are required to provide a lot of information to deployers so that deployers can take the legally required steps to deploy the AI system. Inherently, to me, that means developers have to have a risk management process themselves if they're going to be able to comply with their obligations under Colorado law. So, because there are a lot of parallels between what Colorado has done, what we see in the OMB memo to federal agencies, and the EU AI Act, maybe I can ask you, Cynthia and Andy, to talk a little bit about some of the ways companies approach setting up the structure of their governance program. What are some of the buckets they look at, or some of the first steps they take?
Cynthia: Yeah, thanks, Nikki. It's interesting, because you mentioned company-specific uses, internal and external. One thing to note before we get into the governance structure, or maybe as part of thinking about it, is that the EU AI Act also applies to employee data and the use of AI systems for vocational training, for instance. So in terms of governance structure, certainly from a European perspective, it's not necessarily about use cases, but really about whether you're using high-risk or general-purpose AI, and about the documentation and certification requirements that might apply to high-risk versus general-purpose systems. The governance structure needs to take all of those things into account. So, obviously, guidelines and principles about how people use external AI suppliers, how AI is going to be used internally, and what the appropriate uses are. If it's going to be put into a chatbot, which is the other example you used, what are the rules around acceptable use by people who interact with that chatbot, and how is that chatbot set up in terms of what it would be appropriate to use it for? So what are the appropriate use cases? Guidelines and policies are definitely foremost for that. And within those guidelines and policies, there are other documents that will come along: terms of use, acceptable use, and then guardrails for the chatbot. One of the big things for EU AI is human intervention, to make sure that if there are any anomalies or somebody tries to game it, there can be intervention. So, Andy, I think that dovetails into the risk management process, if you want to talk a bit more about that.
Andy: Yeah, definitely. The risk management process in the wider sense, of course. How organizations start this at the moment is first by setting up teams or responsible persons within the organization who take care of this, and we're going to discuss a bit later on what that structure can look like. Then, of course, there are the policies you mentioned, not only regarding use but also which process to follow when AI is being used, and even the question of what AI is: how do we find out where we're using AI in our organization at all, and what is an AI system as defined under the various laws, also making sure we have a global interpretation of that term. A step many of our clients are taking at the moment is setting up an AI inventory, and that's already a very difficult and tough step. The next one is then, for each AI system that comes up in this register, to define the risk management process. And of course, that's the point where, in Europe, we look into the AI Act and ask what kind of AI system we have: high-risk or another defined category. Today we're talking a bit more about generative AI systems. There, for example, the European AI Act puts strong obligations on the providers of such generative AI, so less on companies that use generative AI and more on those that develop and provide it, because they have deeper knowledge of what kind of training data is being used. They need to document how the AI works, and they need to register this information with a centralized database in the European Union. They also need to give some information on copyright-protected material contained in the training data. So there are quite some documentation requirements, and also logging requirements, to make sure the AI is used responsibly and does not trigger higher risks. There are also two categories of generative AI that can be qualified. So that's roughly the risk management process under the European AI Act. Then, of course, organizations also look into risks in other areas: copyright, data protection, and also IT security. Cynthia, I know IT security is one of the topics you love. Can you add some more on IT security here, and then we'll see what Nikki says for the US?
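As an illustrative aside, here is a minimal sketch of the kind of AI inventory Andy describes: one record per AI system, tagged with an EU AI Act-style risk category and the open documentation questions attached to it. The class names, fields, and categories are assumptions made for illustration, not terms taken from the Act or from any specific governance tool.

```python
# Hypothetical sketch of an AI inventory entry, assuming a simple
# EU AI Act-style classification. Field names are illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskCategory(Enum):
    PROHIBITED = "prohibited"              # banned practices
    HIGH_RISK = "high_risk"                # high-risk systems
    GENERAL_PURPOSE = "general_purpose"    # general-purpose / generative AI
    MINIMAL = "minimal"                    # everything else


@dataclass
class AISystemRecord:
    """One row in a company-wide AI inventory."""
    name: str
    provider: str                      # vendor or internal team
    business_use: str                  # what the system is used for
    risk_category: RiskCategory
    training_data_documented: bool     # e.g. copyright / provenance notes on file
    human_oversight: bool              # can a person intervene or override?
    open_actions: List[str] = field(default_factory=list)


# Example: a generative AI chatbot procured from a (hypothetical) vendor.
inventory = [
    AISystemRecord(
        name="Customer support chatbot",
        provider="Vendor X (hypothetical)",
        business_use="External customer Q&A",
        risk_category=RiskCategory.GENERAL_PURPOSE,
        training_data_documented=False,
        human_oversight=True,
        open_actions=["Request training-data and copyright summary from provider"],
    )
]

# Surface systems that still lack the documentation the governance team needs.
for record in inventory:
    if not record.training_data_documented:
        print(f"Follow up on documentation for: {record.name}")
```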
Cynthia: Well, obviously NIS 2 is coming into force. It will cover providers of certain digital services, so it's likely to cover providers of AI systems in some way or other. And funnily enough, NIS 2 has its own risk management process involved, including supply chain due diligence, which would have to be baked into a risk management process as well. Then ENISA, the EU's cybersecurity agency, has put together a framework for cybersecurity for AI systems. It isn't binding, but it's certainly a framework that companies can look to for ideas on how best to ensure that their use of AI is secure. And then, of course, under NIS 2, the various CSIRTs will be putting together various codes and have a network meeting in late September, so we may see more come out of the EU on cybersecurity in relation to AI. But obviously, just like any user of AI, companies are going to have to ensure that the provider of the AI has ensured that the system itself is secure, including if they're going to be putting training data into it, which of course is highly probable. I just want to say something about the training data. You mentioned copyright, and there's a difference between the EU and the UK. In the UK, you cannot mine data for commercial purposes. At one point, the UK was looking at an exception to copyright for that, but it doesn't look like that's going to happen. So there is a divergence there, but it stems from historic UK law rather than from Brexit. Nikki, turning back to you again, we've talked a little bit about risk management. How do you think that might differ in the US, and what kind of documentation might be required there? Or is it a bit looser?
Monique: I think there are actually quite a few similarities with what we have in the EU. And Andy, this goes back to your question about whether companies can establish a global process. In fact, I think it's going to be really important for companies to see this as a global process, because AI development is going to happen throughout the world, and it's really going to depend on where it's developed, but also where it's deployed and where the outputs are deployed. So taking a broader view of risk management will be really important in the context of AI, particularly given that the nature of AI is to process large swaths of information, really on a global scale, in order to make analytics, creative development, and content generation processes faster. So, a quick aside: I actually think what we're going to see in the US is a lot of pulling from what we've seen in the EU, and a lot more cooperation on that end. I agree that really starting to frame the risk governance process means looking at who the key players are that need to inform the risk measurement and tolerance analytics, and the decision-making in terms of how you evaluate, how you inventory, and then how you determine whether to proceed with AI tools. One of the things that hopefully makes it a little bit easier, from a U.S. perspective, is being able to leverage existing compliance procedures, for example for SEC compliance, privacy compliance, or other ethics compliance programs, and make AI governance a piece of that, as well as expand on it. Because I do think AI governance brings in all of those compliance pieces. We're looking at harms that may exist to a company not just from personal information, not just from security, not just from consumer unfair or deceptive trade practices, not just from environmental standpoints, but from a very holistic view; not to make this a bigger thing than it is, but kind of every aspect comes in. And you can see that in some of the questions developers or deployers are supposed to be able to answer in risk management programs. For example, in Colorado, the information you need to address in a risk management program and an impact assessment really has to demonstrate an understanding of the AI system: how it works, how it was built, how it was trained, and what data went into it. And then, what is the full range of harms? So, for example, the privacy harms, the environmental harms, the impact on employees, the impact on internal functions, and the impact on consumers if you're using it externally. You have to be able to explain that. Whether you have to put out a public statement will depend on the jurisdiction, but even internally, you need to be able to explain it to your C-suite and make them accountable for the tools that are being brought in, or make it explainable to a regulator if they were to come in and ask, well, what did you do to assess this tool and mitigate known risks? So, with that in mind, I'm curious: what steps do you think need to go into a governance program? What are the first initial steps?
I always feel that we can start in so many different places, depending on how a company is structured or what its initial compliance pieces are. But I'm curious to know from you: what would be one of the first steps in beginning the risk management program?
Cynthia: Well, as you said, Nikki, one of the best things to do is leverage existing governance structures. If we look, for instance, at how the EU is setting up its public authorities to look at governance, you've got, as I mentioned at the outset, almost a multifaceted team approach. And I think it would be the same for companies. The EU anticipates that there will be an AI officer, but obviously there have to be team members around that person: people with subject matter expertise in data, subject matter expertise in cyber, and then people with subject matter expertise in relation to the AI system itself, the training data that's been used, how it's been developed, how the algorithm works, whether or not there can be human intervention, what happens if there are anomalies or hallucinations in the data, and how that can be fixed. So I would have thought that part of the implementation is looking at the governance structure and starting from there. And then, obviously, we've talked about some of the things that go into the governance. But we have clients who are looking first at the use case and then asking, okay, what are the risks in relation to that use case? How do we document it? How do we log it? How do we ensure that we can meet our transparency and accountability requirements? What other due diligence and other risks are out there, the blue-sky thinking, that we haven't necessarily thought about? Andy, any thoughts?
Andy: Yeah, that's, I would say, one of the first steps. Even though not many organizations now place the core AI topic in the data protection department, but rather in the compliance or IT area, we still see a lot of similarities to the GDPR governance structure in terms of the governance process and setting up that structure. I think back five years to the implementation of, or getting ready for, GDPR: planning and checking what other rules we need to comply with, who we need to involve, getting the plan ready, and then working along that plan. That's the phase where we see many of our clients at the moment. Nikki, more thoughts from your end?
Monique: Yeah, I think those are excellent points. What I have been talking to clients about is first establishing the basis of measurement that we're going to evaluate AI development or procurement on. What are the company's internal principles and risk tolerances, and how do we define those? Then, based on those principles and metrics, putting together an impact assessment, which borrows a lot, as you both said, from the concept of impact assessments under privacy compliance: implementing the right questions and putting together the right analytics in order to measure whether an AI tool that's in development, or something that we are procuring, is living up to those metrics, and then analyzing the risks that come out of that. The impact assessment is going to be really important in helping make those initial determinations. But also, and this is not just my feeling, it's something that is also required under the Colorado law, setting up an impact assessment and then repeating it annually, which I think is particularly important in the context of AI, especially generative AI, because generative AI is a learning system. It is going to continue to change, and there may be additional modifications made in the course of use that will require reassessing: is the tool working the way it is intended to work? What has our monitoring of the tool shown? And what processes do we need to put in place to mitigate the tool going a little bit off path, AI drift, more or less? If we start to identify issues within the AI, what processes do we have internally to redirect the ship in the right direction? So I think impact assessments are going to be a critical tool in forming the rest of the risk management process that needs to be in place.
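As a companion sketch to Monique's point about repeating impact assessments annually, the snippet below shows one simple way a team might track when a reassessment falls due. The record fields and the 365-day interval are assumptions for illustration only; the actual cadence and content should be checked against the applicable law.

```python
# Hypothetical sketch of an impact-assessment record with an annual
# re-assessment check. Names and the 365-day interval are assumptions.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Dict


@dataclass
class ImpactAssessment:
    system_name: str
    last_completed: date
    findings: Dict[str, str]   # e.g. {"privacy": "mitigated", "bias": "monitor"}

    def is_due(self, today: date, interval_days: int = 365) -> bool:
        """Flag the assessment for repetition once the interval has passed."""
        return today - self.last_completed > timedelta(days=interval_days)


assessment = ImpactAssessment(
    system_name="Resume-screening model",
    last_completed=date(2023, 9, 1),
    findings={"privacy": "mitigated", "algorithmic_discrimination": "monitor"},
)

if assessment.is_due(date.today()):
    print(f"{assessment.system_name}: annual impact assessment is due")
```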
Andy: All right. Thank you very much. I think these were a couple of really good practical tips, and especially good first next steps, for our listeners. We hope you enjoyed the session today, and if you have any feedback, we look forward to hearing it, either here in the comment boxes or directly. We hope to welcome you soon to one of our next episodes on AI and the law. Thank you very much.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.