
In this episode of our new podcast series, The AI Workplace, where we explore the latest advancements in integrating artificial intelligence (AI) into the workplace, Sam Sedaei (associate, Chicago) shares his insights on crafting and implementing effective AI policies. Sam, who is a member of the firm’s Cybersecurity and Privacy and Technology practice groups, discusses the rapid rise of generative AI tools and highlights their potential to boost productivity, spark innovation, and deliver valuable insights. He also addresses the critical risks associated with AI, such as inaccuracies, bias, privacy concerns, and intellectual property issues, while emphasizing the importance of legal and regulatory guidance to ensure the responsible and effective use of AI in various workplace functions. Join us for a compelling discussion on navigating the AI-driven future of work.

Transcript

Announcer: Welcome to the Ogletree Deakins Podcast, where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.

Hera Arsen: Hello and welcome to Ogletree’s The AI Workplace Podcast, where we discuss the latest developments in the use of artificial intelligence in the workplace. I’m Hera Arsen, the firm’s Director of Content, coming to you from our Torrance, California office. I’m here today with your AI Workplace podcast host, Sam Sedaei, who’s going to be offering us his insights on crafting and implementing an AI policy. Welcome, Sam.

Sam Sedaei: Thank you, Hera. Hello.

Hera Arsen: A little bit about Sam before we get started. Sam is a member of the firm’s Cybersecurity and Privacy Practice Group and the Technology Group. And in addition to hosting our podcast, he regularly advises employers on the use of AI and other forms of technology in the workplace, including in hiring, timekeeping, productivity monitoring, and performance assessment. As part of his practice, Sam prepares AI policies for clients and advises employers on compliance with a wide array of laws governing the use of artificial intelligence tools in the workplace, and that will be the focus of today’s podcast. So, let’s get started. Sam, why would an employer want to consider developing an AI policy?

Sam Sedaei: That’s a good question, Hera. We should start by acknowledging that AI tools are having a bit of a renaissance moment in 2024 and now 2025. They’re extremely helpful in a number of areas: they can enhance productivity, drive innovation, and provide valuable insights. The adoption of generative AI, or GAI, has been occurring at a rapid pace since the first tools launched back in November of 2022.
Now, specifically, employers are utilizing GAI to accomplish a number of tasks: they create content such as policies and job postings, they use GAI to train and engage employees, and they streamline tasks such as research and document creation, just to name a few. With that power comes responsibility, and there is a need for users to utilize these new and very powerful technologies responsibly. There are risks associated with these AI opportunities, and a significant level of complexity can be involved in an organization’s operations, sales, manufacturing, and human capital management when AI tools are in use. On top of all this, many jurisdictions have issued regulations to guard against AI’s misuse. Consequently, I think it is important that employers seek legal, ethical, and regulatory guidance when implementing AI platforms in their workplaces.

Hera Arsen: So, Sam, you touched on some of the risks and the corresponding responsibilities that employers would have. Let’s get into that. Can you tell us about some of the risks that are generally associated with the use of AI, and how employers can address those risks through their AI policies?

Sam Sedaei: So, there are several broad categories of risks associated with using AI. A big one, if not the biggest one, is the potential for AI tools to be wrong, to be inaccurate. A lot of people have a great deal of faith in AI tools, sometimes a little too much, and they think an AI can do tasks completely on its own with no need for a person to have any kind of oversight over what the tool is doing. But there is a lot of potential for inaccuracies.
The second category of issues involves bias. AI tools and AI models can sometimes be biased, and this can happen for several different reasons, often because of issues with the datasets on which the AI tool is trained. So, that is another category of issues.
Then, there are concerns with privacy, security, and the protection of confidential information. There is a potential, for example, for employees of an employer that possesses confidential information relating to clients or customers to put that information into an AI tool in an attempt to get a product out of it, whether a summary, a draft document, or some other output, all the while not knowing where the information might go.
Finally, there are concerns that relate to intellectual property rights. A lot of the content fed into AI tools and generative AI models for training purposes is protected by intellectual property rights, and sometimes that content is used without authorization. We have lawsuits going on right now in which owners of intellectual property rights are suing AI tool developers, arguing, in essence, “Our intellectual property rights are being violated, and proper authorization was never given to use our material to train AI tools.” So, these are just some of the categories of risks, and policies can address all of them. We can discuss them one by one, but there are different elements that can be included in an AI policy to address those concerns.

Hera Arsen: Certainly, a lot to think about, especially with some of those privacy and security concerns. That’s a lot for employers to take in, but I think now is a good time to back up the discussion a little bit and give listeners a bird’s-eye view of the legal landscape. What are some of the overarching ideas that companies may want to consider when drafting their AI policies? What are some of those big-picture issues?

Sam Sedaei: The first one is that companies may want to consider whether they need an AI policy at all. The process should be deliberate; they should have a purpose they want to accomplish through the policy. I don’t think it’s a good idea to adopt an AI policy just because it’s a fad or fashionable, or simply to make the company more appealing to clients. There needs to be a purpose behind the policy, and in particular, an employer should only put in place a policy it is willing to enforce. Otherwise, it is just a paper tiger.
Additionally, employers must fully understand what, if anything, they need to contribute to permit the use of AI tools. If there are specific tools they are going to permit internally, for example, does the employer’s human resources department need to provide information to make a tool useful? Is the data in a format that matches what the AI platform requires? So, before putting an AI policy in place and deciding which AI tools to approve, a company may want to consider whether it is well positioned to implement the policy and to make sure the use of those tools can be successful.
And finally, employers may want to test any AI tools they have in mind for bias, to make sure there is little chance the tool could operate in a way that is biased against specific groups of people; one common first-pass screen is sketched below. This becomes especially important if the AI tool is going to be involved in employment or hiring decisions. So, these are just some overarching ideas to think about.
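To make that testing step concrete, here is a minimal sketch of one common first-pass screen for adverse impact, the four-fifths (80 percent) rule used in employment-selection analysis, applied to hypothetical pilot results from an AI screening tool. The group labels and numbers are invented for illustration; a real bias audit would involve far more than this single check.

```python
# Minimal four-fifths (80%) rule check, a common first-pass screen for
# adverse impact when piloting an AI hiring or screening tool.
# All group names and counts below are hypothetical illustration data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the tool recommended to advance."""
    return selected / applicants

# Hypothetical outcomes from a pilot run of a candidate-screening tool.
outcomes = {
    "group_a": {"selected": 48, "applicants": 100},
    "group_b": {"selected": 30, "applicants": 100},
}

rates = {g: selection_rate(o["selected"], o["applicants"]) for g, o in outcomes.items()}
highest = max(rates.values())  # selection rate of the most-selected group

for group, rate in rates.items():
    ratio = rate / highest  # "impact ratio" relative to the highest rate
    flag = "review for adverse impact" if ratio < 0.8 else "within 4/5 threshold"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

A check like this is only a starting point; jurisdiction-specific requirements, such as those governing automated employment decision tools, may call for formal statistical analyses performed by an independent auditor.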

Hera Arsen: Right. Okay. So, let’s go from these overarching ideas, the purpose of the policy, and go from that into the nitty-gritty a little bit. Can we turn to the important topics to hit in an AI policy? So, what are some of those topics that you have included in AI policies and that you’ve seen other companies include?

Sam Sedaei: I have drafted AI policies for a variety of employers, and there’s really no one-size-fits-all policy. It depends on a variety of factors, including the industry the employer operates in, the type of work it does, the degree to which it uses AI tools, and the purposes for which its employees use them. But some of the common elements I have noticed in these policies are these: first, certain AI policies have sections that define AI, and I think that’s very important, because a lot of times people, whether employees, supervisors, or managers, think they know what AI is. But sometimes it can be broader than what people imagine, and sometimes it can be narrower.
Then, some policies include a list of approved AI platforms, if the company wants to make sure only certain AI tools are permitted. Policies at times address training, whether mandatory or recommended. We talked about confidentiality, data privacy, and data security; many policies address those and how to handle confidential information. Policies at times address accuracy and what steps employees need to take to ensure the final result is accurate. There are the intellectual property concerns we discussed; some policies address those so the company can minimize the chances that another party’s intellectual property rights are violated. Policies at times address bias, to ensure that the outcome of using a certain AI tool is not biased against specific groups of people. And companies sometimes decide they want a body or committee of some kind to engage in internal governance; those bodies can monitor the use of AI within the company and recommend changes to the policy.
Another section I have included in certain AI policies addresses how the company will monitor the policy and make periodic updates to it. That tells employees it is on them to keep up with any updates, and that a policy is not something written once where reading it a single time is enough. It makes it more likely that employees will stay vigilant and in tune with any updates to the policy.

Hera Arsen: Right, and especially as laws and regulations change, and as we hear new federal government policies on AI, I’m sure there will be updates. But let me go back; I want to talk about some of these topics with you. You started with defining AI. Why would companies want to define AI in their policy? It seems very basic, and you hinted that maybe some people think they know what AI is when they don’t. So, can you talk a little bit about why that definition is important in the policy?

Sam Sedaei: Sure. Part of it is just what we discussed: many tools are thought of as AI that may not be, and vice versa. So, it’s very important to have that definition. But secondly, a company’s policy may be intended to address a specific type of AI, and the general definition of artificial intelligence, if you simply put that in, could be very broad.
But the company in question may not want to regulate all forms of AI; there may be a specific subset of AI tools it wants to regulate. That’s where it becomes important for AI to be defined within the contours of the specific policy. Most policies I have seen focus on GAI, or generative AI, but without that definition, the policy could be read as applying to a much wider range of products.
When defining AI, I have learned that it’s helpful to avoid technical language. I always try to remember that the individuals who will be reading these policies are, in most cases, not engineers and are not familiar with technical terms. So, I have found it helpful to write definitions that are easily understood; for example, rather than referencing “large language models” or “neural networks,” a definition might simply describe tools that generate text, images, or other content in response to a user’s prompts. The more easily a definition is understood, the more likely the policy is to be used effectively. A policy that is not understood is not, I think, an effective way of regulating employee conduct and behavior.

Hera Arsen: Right. Makes sense. Let me ask you about another one of the topics you brought up, which was including in the policy a list of approved AI platforms. I understand some companies want to include a list of those. What’s the purpose of doing that, and what’s the benefit?

Sam Sedaei: There could be several benefits. First, companies can look at the different AI tools and determine which could be most helpful for their purposes. After making that assessment, they can say, “Okay, we have determined that these specific tools could be helpful. They could lead to increased productivity or some other benefit, so we are going to permit the use of these specific AI tools.” That gives people guidance, so employees aren’t going off on their own trying to figure out which tools could be helpful when the company has already made that assessment for them.
And second, there are some of the concerns we talked about, privacy and intellectual property issues and so on. A company could decide, you know what? Maybe it is not safe for our employees to go on the internet and put client information and other private information into some kind of public engine. Maybe we want an internal AI tool that we develop on our own, and we are going to permit employees to use only these specific tools.
And so, not only is the company taking that first step of determining which tools will be most effective for its specific purposes, but it is also protecting other interests, such as privacy or avoiding liability. That’s why, for some employers and companies, it is beneficial to have that kind of limited list, and it also makes it easier to control the use of AI.
Another reason to consider having a list of approved AI tools is the constant addition of new AI tools every day. We just saw how DeepSeek made a lot of news last week by creating a generative AI model that is very effective, and they were able to develop it at a fraction of the cost compared to some of the American developers of AI tools.
Some of these AI tools are developed in countries, for example, where privacy is not protected the same way we try to protect privacy in the U.S. So, simply permitting employees to use any AI tool for work purposes could be very risky. That’s why it’s worth considering a list of approved AI tools that the company can then amend and revise as time goes by; a sketch of how such a list might be enforced follows below. It’s a way of keeping some level of control over employees’ use of AI tools without an outright ban.
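To make that idea concrete, here is a minimal sketch of how an approved-tools list might be enforced technically, for instance in a web proxy or endpoint agent. The host names and the enforcement point are hypothetical illustrations; the practical point is simply that the list is maintained centrally and is easy to amend as new tools are vetted.

```python
# Minimal sketch of enforcing an approved AI tools list at a network
# chokepoint. Host names and the enforcement mechanism are hypothetical.

from urllib.parse import urlparse

# Hypothetical allowlist the company amends as it vets and approves new tools.
APPROVED_AI_HOSTS = {
    "internal-ai.example.com",   # company-developed internal tool
    "vetted-vendor.example",     # approved third-party platform
}

def is_approved(url: str) -> bool:
    """Return True only if the request targets an approved AI platform."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS

# Example: screen two outbound requests against the allowlist.
for url in ("https://internal-ai.example.com/chat",
            "https://brand-new-chatbot.example/ask"):
    verdict = "allow" if is_approved(url) else "block and log for review"
    print(f"{url} -> {verdict}")
```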

Hera Arsen: Okay. Makes sense. What about – you mentioned including a training component in the list of topics. Do you think you could maybe speak to that a little bit more? What would that look like? How could it be beneficial?

Sam Sedaei: Training can be effective in teaching employees how to use AI tools and their features in an effective way. And, if nothing else, it can be a great way to teach them about the specific AI policies the company has enacted. When you have people sitting in a room for a training, I think they’re much more likely to actually learn and understand the policies than if you were to just email them the policy and say, “By the way, this is the policy.” Those are just some of the reasons why it might make sense to require training before anybody can use AI tools.

Hera Arsen: One of the things we talked about earlier, one of the risks associated with using AI, was the potential inaccuracy of some of these tools’ output. Can you talk a little bit about what the concern is specifically, and how might an AI policy address this risk?

Sam Sedaei: Sure. So, some AI tools are known to produce results based on fictitious facts. Experts call these “hallucinations,” and we’ve heard stories of lawyers walking into court with filings citing cases that do not exist and getting reprimanded by judges. This happens in other, less public, situations as well. I think the most effective way, and possibly the only way, for employers to address that is to include a requirement in their AI policy that users independently verify the output of the AI platform before relying on it or submitting that work as their own.
Human oversight is still very much necessary. Maybe one day it won’t be, but I personally consider that requirement of human oversight to be a critical element in any AI policy. It also sends a message to employees: you are ultimately responsible for the work product you turn in, and you need to make sure that what you’re submitting is something you can stand behind as your own work.

Hera Arsen: One of the other things we talked about as a risk, and this is something I think a lot of us have heard about in the news, is the potential for bias associated with AI tools. Is that something that can be addressed in an AI policy?

Sam Sedaei: I think that is definitely something that can be addressed in the policy, but this is an area where the employer needs to do a lot of the work before it actually approves a specific AI tool. Employees are often not in a position to make a personal assessment of any biased impact a specific AI tool may have. The employer, by contrast, is in a position to test a product initially, see and assess the results, and make that determination before launching the AI tool or approving it for widespread use.
So, it’s something that can be addressed in a policy, but ultimately a lot of the initial setup work needs to be done by the employer, the company that wants its employees to use the AI tool. That’s also why you want internal stakeholders who are aware of any proposed AI tool, so they can provide feedback on what they think about launching it.
And finally, the policy can address the same point we were just discussing regarding accuracy: you’ll want to remind employees that ultimately there needs to be human oversight. So, even if a tool is approved by a company or an employer as one that is not very prone to producing biased results, you still want individual users to keep an eye out for any such bias when they get the final outcome or product from the platform, because even the best AI tool can make mistakes at times. Those are just some of the elements that could be considered for an AI policy.

Hera Arsen: I definitely hear you on human oversight. It seems like that’s going to loom large when crafting an AI policy. But as we wrap up, is there anything else, any additional concerns or insights, for employers crafting or implementing their AI policies?

Sam Sedaei: Nothing really beyond what we already talked about, though there are some things we didn’t discuss in detail that might be worth employers really thinking about. One would be those intellectual property rights. Another would be the question of governance, if employers think it might be helpful to have that kind of governance committee. And for a lot of employers, it makes sense to include notes about periodic updates in the policy and to have individuals who are charged with monitoring developments in the legal landscape and making adjustments to the policy to reflect them. Those are just some final thoughts I have.

Hera Arsen: So, a living policy, something that the employer is looking at regularly.

Sam Sedaei: Exactly. I just want to make sure that employers, when they draft these policies, continue to monitor the landscape, because it really is changing on a daily basis.

Hera Arsen: Well, thanks to you for keeping us updated on those. And thanks, Sam, for the really informative discussion on what to include in an AI policy and the risks that are out there for employers. Thank you to the listeners also for joining us today. If you haven’t already, we hope you’ll listen to the other episodes of The AI Workplace podcast series, and please stay tuned for the next episode, which will be out soon. Thank you, Sam.

Sam Sedaei: Thank you, Hera.

Announcer: Thank you for joining us on the Ogletree Deakins Podcast. You can subscribe to our podcast on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so we may continue to provide the content that covers your needs. And remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.

