AI explained: AI in the workplace
In our new series on artificial intelligence, we explore, over the coming months, the key challenges and opportunities in this rapidly evolving landscape. In this episode, our labor and employment lawyers, Mark Goldstein and Carl De Cicco, discuss what employers need to know about the use of AI in the workplace and the key differences between the U.S. and the UK.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Mark: Hi, everyone. Welcome to the Tech Law Talks podcast. We're starting a new series on artificial intelligence, or AI, where in the coming months we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the U.S. and UK workplaces. My name is Mark Goldstein. I'm a partner in Reed Smith's Labor and Employment Group, resident in our New York office. And I'm joined today by my colleague, Carl De Cicco, from our London office. We're going to talk today about some of the U.S. and UK implications of AI as it relates to the workplace. So, Carl, let me kick it over to you. Can you tell us, from a high level, what do employers in the UK need to know when it comes to AI-related issues in the workplace?
Carl: Thank you, Mark. So, yes, my name is Carl. I'm a partner here in the London Employment Group of Reed Smith. Essentially, I think the AI issues to be concerned about in the UK are twofold. The first is how AI pertains to day-to-day activities, and the second is how it relates to the management side of things.

On the day-to-day activities point, these are hopefully the things people are starting to see in their own workplaces: the use of particular generative AI programs or models to help generate content, which obviously increases the amount of output individuals can produce. On the one hand, that's quite good; on the other hand, there might be some issues to look at. For example, are people being overly reliant on their AI? Are they simply putting the request in and submitting whatever the AI system churns out as their work product? If so, that could be quite concerning, because while AI is obviously a very useful tool and is sure to continue improving as time goes on, where we stand right now AI is far from perfect, and you can get what are known as hallucinations. This seems to be quite a nice term of art for what are effectively errors: conclusions drawn on the basis of information that doesn't exist, or quotations of things that don't exist either. So really, the content produced by AI should be seen as a collaboration with the worker involved in the matter, rather than something for which AI is totally responsible. See it as a first pass rather than the finished product. You should be checking the output, not just making sure that sources stack up and that the conclusions trace back to the underlying data, but also making sure you're not straying into plagiarism. AI draws on what is available on the internet, and that can lead to circumstances where somebody's very good work that is already out there is simply being reproduced, if not word for word then substantially. That can obviously lead to issues not just for the person submitting the work, but for the employer who might use that particular piece of generated work for something they're doing.

Other benefits could be things like work allocation. One of the issues people look at in the DEI space is whether opportunities for work are being fairly and equally distributed: are people getting enough of a look-in on work, both in terms of amount and quality? Obviously, if you have a programme which is blind to who the work is going to, there's potential for work to be more fairly distributed, so that those who don't often get the opportunity to work on particular matters actually find themselves on the kind of work they weren't previously getting and would like experience of. Now, that's the positive side of it. The potential negative is that there might be some bias in the AI that underpins the resourcing programme. For example, it might not pick out the individuals who are less occupied than others in the way a business would, with a view to what's coming up over the next week or two. It might not pick up how the quality of work should be viewed through all the relevant lenses; it might have a particular skew on how quality of work is judged, and that could lead to an individual being even more pigeonholed than before.
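To make the "blind allocation" idea concrete, here is a minimal sketch of what such a routine might look like, assuming a simple in-house assignment tool; every name, field, and scoring rule here is a hypothetical illustration, not any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str                                     # identity; the allocator never looks at it
    current_load: int                             # open matters right now
    history: list = field(default_factory=list)   # matter types worked before

def allocate_blind(workers, matter_type):
    """Assign work while blind to who the worker is: rank only on
    current load, then prefer people for whom the matter type is new,
    so experience gets spread around."""
    def score(w):
        novelty = 0 if matter_type in w.history else -1   # new-to-them sorts first
        return (w.current_load, novelty)
    chosen = min(workers, key=score)
    chosen.current_load += 1
    chosen.history.append(matter_type)
    return chosen

team = [Worker("A", 3, ["M&A"]), Worker("B", 1, ["IP"]), Worker("C", 1, [])]
print(allocate_blind(team, "IP").name)   # "C": least loaded and new to IP
```

Note that even in a sketch this small, the scoring function is exactly where the bias Carl describes can creep in: if the load or quality signals are themselves skewed, the allocator reproduces that skew, which is why the human checker he mentions next still matters.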
So all of these things are potentially positive, but they need to be underpinned by, essentially, a second human checker. Whilst there are many, many positives, AI shouldn't be seen as a panacea. Well, how is that holding up against what you're seeing in the States, particularly New York?
Mark: I think that's absolutely right, Carl; similar principles apply here in the US. By way of background, let me go through where I've seen AI infiltrate the workplace, and I'll distinguish between traditional AI and generative AI. We've seen AI used by employers in the U.S. on a whole host of fronts: headhunting, screening job applicants, running background checks, conducting job interviews and coming up with a slate of questions, through to things like performance management for employees, and even as selection criteria in deciding which employees to select for a reduction in force or mass layoff. I've also seen employers use AI for simple administrative tasks, like guiding employees to policy documents or benefits materials, creating employee and workplace-related agreements, and implementing document retention and creation policies and protocols. In terms of generative AI, which, as you noted, is more on the content creation front, I've certainly seen employees use it to translate messages or documents and to perform certain other tasks, from drafting responses to manager inquiries to creating more substantive documents.

But as you rightly note, just as in the UK, there are a number of potential pitfalls in the US. The first is the risk, as you noted, of AI plagiarizing or using a third party's intellectual property; especially if the generative AI output is going to be used in a document that's outward-facing or external, you run substantial risk. So reviewing and auditing any materials created by generative AI, among other things to ensure there's no plagiarism or copying, especially when that material is going externally, is incredibly important. Beyond plagiarism, the content should also be reviewed simply to ensure general accuracy. There was a story out of New York federal court last summer about an attorney who had ChatGPT help write a legal brief and asked it to run some legal research and find some cases. Ultimately, the case citations it provided were fictional; they were not actual cases that had truly been decided. So it's a good reminder that, as Carl said, while generative AI can be useful, it is not an absolute panacea, and its output needs to be reviewed thoroughly. Similarly, you run a risk, if employees are using certain generative AI platforms, that they may be disclosing confidential company information or intellectual property on a third-party platform. So we want to make sure that even when generative AI is used, employees are doing so within the appropriate confines of company policy and their agreements covering things like confidential information, trade secrets, and intellectual property. I think it's important that employers look to adopt some sort of AI and generative AI policy, so that employees know what the expectations are: what they can do and, equally if not more importantly, what they cannot do in the workplace as it relates to AI and generative AI. And certainly we've been helping our clients put together those sorts of policies so employees can understand the expectations.
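One lightweight guard against the fictional-citation problem Mark describes is to flag anything that looks like a case citation in an AI draft for human verification before it goes out. A rough sketch follows; the regex is illustrative and deliberately over-inclusive, and it does not validate that a case is real, only that a person must check it:

```python
import re

# Loose pattern for "Party v. Party, 123 F.3d 456" style citations;
# intentionally broad so nothing slips through unreviewed.
CITATION = re.compile(r"[A-Z][\w.'-]+ v\.? [A-Z][\w.'-]+,?\s+\d+\s+\w+\.?\s*\d*d?\s+\d+")

def citations_to_verify(ai_draft: str) -> list[str]:
    """Return every citation-like string; each one needs a human
    to confirm the case actually exists before the draft is used."""
    return CITATION.findall(ai_draft)

draft = "As held in Smith v. Jones, 123 F.3d 456, the duty applies."
for cite in citations_to_verify(draft):
    print("VERIFY BEFORE FILING:", cite)
```

This is a triage step, not a substitute for the thorough human review both speakers recommend: it tells a reviewer where to look, nothing more.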
Carl, we've talked so far in general terms about implications for the workplace. Is there any specific legislation or regulation on the UK side that you've been monitoring, or that has come out?
Carl: The approach of the UK government to date has been not to legislate in this area, in what I think is an attempt to achieve a balance between regulation and growth. The plan so far has been, at some point, to introduce a voluntary, self-regulatory scheme which bodies sign up to. But we're recording this in June 2024, less than one month away from a UK general election, so matters of AI regulation and legislation are currently on the back burner, not to be revived for perhaps at least another two to three months. There is still, of course, a lot of interest in this area, and the TUC, which is a federation of trade unions in the UK, has published a framework proposal for what the law might look like. This is far from being legislation; there are obviously many hurdles to pass before it might even come before Parliament and, if it is put before Parliament, before it's approved there. But it looks at things very similar to what the EU is looking at, that is, a risk-based approach to legislation in this area. It draws a distinction between regular decision-making and what it calls high-risk decision-making. High-risk decision-making is really shorthand for decisions which might affect the employment of an individual, whether that's recruitment, disciplinary decisions, or termination decisions. Essentially, all the major employment-related decisions would go through a system of checking; under the framework, you couldn't rely, for example, on a decision made purely by AI. An individual would be required to sit alongside it, or at least to use the AI only tangentially to the decision they're making. And no emotion-recognition software would be allowed. For example, if a disciplinary hearing were being recorded, you could in principle use software designed to pick up on things like inflection and word patterns, things that might infer a particular motive or meaning behind what's been said; what this framework proposal says is that that kind of software or programming couldn't be used in that kind of setting. So what happens in the UK remains to be seen, but I think you guys are a bit further ahead than us and actually have some law on the statute book. How are things working out for you?
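The risk-based distinction Carl describes can be pictured as a simple policy gate: high-risk employment decisions require a named human decision-maker, and emotion-recognition tooling is refused outright. Here is a sketch under those assumptions; the categories are paraphrased from the framework as described above, and the code itself is purely illustrative:

```python
HIGH_RISK = {"recruitment", "disciplinary", "termination"}   # per the TUC framing above
BANNED_TOOLS = {"emotion_recognition"}

def approve_ai_use(decision_type: str, tool: str, human_reviewer: str | None):
    """Gate AI involvement in an employment decision: ban prohibited
    tools outright, and require an accountable human for high-risk calls."""
    if tool in BANNED_TOOLS:
        raise PermissionError(f"{tool} may not be used in employment decisions")
    if decision_type in HIGH_RISK and not human_reviewer:
        raise PermissionError(
            f"high-risk decision '{decision_type}' needs a human decision-maker"
        )
    return f"AI may assist with {decision_type}; accountable human: {human_reviewer or 'n/a'}"

print(approve_ai_use("recruitment", "cv_screening", human_reviewer="HR lead"))
```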
Mark: We've seen a lot of government agencies, as well as state legislatures, put an emphasis on this issue in terms of potential regulatory guidance or proposed legislation. To date, there has not been a huge amount of legislation passed specifically relating to AI in the workplace; most jurisdictions are still at the stage of considering legislation. That said, there was an extremely broad law passed by New York City a few years ago, which finally went into effect last July. We could have an entirely separate podcast just on the nuances of the New York City law, but in a nutshell, it bars employers from using an automated employment decision tool, or AEDT, to screen job candidates when making employment decisions unless three criteria have been satisfied. First, the tool has been subjected to an independent bias audit within the year prior to use. Second, a summary of the most recent bias audit results is posted on the employer's website. And third, the employer has provided prior written notice regarding use of the AEDT to any job applicants and employees who will be subject to screening by it. If any one or more of these three criteria isn't satisfied, then an employer's use of that AEDT with respect to any employment decision would violate the New York City Human Rights Law, which is one of the most employee-friendly anti-discrimination statutes in America. Other jurisdictions have used the New York City law as somewhat of a model for potential legislation. We've also seen the Equal Employment Opportunity Commission, or EEOC, weigh in and issue guidance which, though not necessarily binding, strongly cautions employers with regard to the use of AI and the potential for disparate impact on certain protected classes of job applicants and employees, and generally recommends that employers conduct periodic audits of their tools to ensure no bias occurs. Carl, do you have any final thoughts?
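Mark's three AEDT conditions lend themselves to a checklist. A minimal sketch of how an employer might encode them in an internal compliance gate follows; the function and field names are hypothetical, the legal criteria are as described above, and nothing here substitutes for counsel's review:

```python
from datetime import date, timedelta

def aedt_compliant(last_bias_audit: date, summary_posted: bool,
                   notice_given: bool, today: date | None = None) -> bool:
    """All three NYC criteria must hold before an AEDT screens candidates:
    1. independent bias audit within the prior year,
    2. audit summary posted on the employer's website,
    3. prior written notice to applicants/employees being screened."""
    today = today or date.today()
    audit_fresh = today - last_bias_audit <= timedelta(days=365)
    return audit_fresh and summary_posted and notice_given

print(aedt_compliant(date(2024, 1, 15), summary_posted=True, notice_given=True,
                     today=date(2024, 6, 1)))   # True: all three criteria met
```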
Carl: So whilst we're still a long way from legislation in the UK, there are things employers can be thinking about and doing now to prepare themselves for what I think will inevitably be coming down the road. Just a few suggestions on that front. Establish an AI committee, and take ownership of how AI is used in the business, whether that's in the performance of day-to-day tasks, content generation, and so on. As Mark said earlier, set out what can be done and what checks should be carried out, ensuring there's a level of quality control. On decision-making, ensure there is a policy employers can look to, so that, first, they don't fall foul of any future legislation, and second, if any decisions are challenged in future, they can not only point to the measures they've taken but also show those measures are consistent with a policy adopted and applied on an equal basis to all individuals going through any process that may give rise to complaints. Employers might also, for example, conduct a risk assessment and an audit of their systems. One of the things that will be key is not just saying that AI was used in a particular process, but knowing how that AI actually worked and how it filtered or made the decisions it did. So, for example, if you want to guard against an allegation of bias, it would be good to have a solid understanding of how the AI system that gave rise to the disputed decision made its determination in favour of one individual over another. That will help the employer demonstrate, in the event of a real challenge, first that they are an equal opportunities employer, and second that discrimination didn't occur. So those are the kinds of things employers can be thinking about and doing now. What kinds of things do you think people on your side of the pond might be thinking about?
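Carl's point about knowing how the AI "made the decisions it did" is, in practice, an audit-trail requirement: record, per decision, what the tool saw and the stated basis for its output, so a later challenge can be answered from records rather than memory. A hypothetical sketch, with invented field names:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(logfile, candidate_id, features, outcome, rationale):
    """Append an auditable record of one AI-assisted employment decision."""
    record = {
        "when": datetime.now(timezone.utc).isoformat(),
        "candidate": candidate_id,          # pseudonymous ID, not a name
        "features_considered": features,    # what the tool actually saw
        "outcome": outcome,
        "rationale": rationale,             # the tool's stated basis
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("aedt_audit.jsonl", "cand-0042",
                {"years_experience": 6, "skills_match": 0.82},
                "advance_to_interview", "skills_match above threshold")
```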
Mark: Yeah, so I think similar considerations apply for U.S. employers. Among them: if you're going to use an AI tool, consider the pros and cons of building your own, which some employers have opted for, versus purchasing from a third party. If purchasing from a third party, particularly given the EEOC's and other agencies' stated interest in scrutinizing how tools might create some sort of discriminatory impact, consider including an indemnification provision in any contracts you're negotiating. And in jurisdictions like New York City, where you're required to conduct an annual audit, but even outside New York City, especially given the EEOC's recommendation, consider periodic auditing of any employee and company AI use, to ensure, for instance, that tools aren't skewing in favor of or against a particular protected class during the hiring process. And again, I strongly recommend developing and adopting some sort of workplace AI and generative AI policy. Thank you all for your time today. We greatly appreciate it. Thank you, Carl. And stay tuned for the next installment in this series.
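The periodic audit Mark recommends often starts with the EEOC's long-standing four-fifths rule of thumb: if one group's selection rate falls below 80% of the highest group's rate, the tool deserves a closer look for disparate impact. A worked sketch, with all figures invented for illustration:

```python
def selection_rates(results):
    """results: {group: (selected, applicants)} -> {group: selection rate}"""
    return {g: sel / total for g, (sel, total) in results.items()}

def four_fifths_flags(results, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the best group's rate."""
    rates = selection_rates(results)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Hypothetical screening outcomes from an AI hiring tool
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_flags(outcomes))   # {'group_b': 0.625} -> investigate further
```

A flag here is a trigger for investigation, not proof of discrimination: the point, as both speakers stress, is to surface skew early enough for a human to examine and correct it.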
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith’s Emerging Technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.