Announcer: Welcome to the Ogletree Deakins Podcast, where we provide listeners with brief discussions about important workplace legal issues. Our podcasts are for informational purposes only and should not be construed as legal advice. You can subscribe through your favorite podcast service. Please consider rating this podcast so we can get your feedback and improve our programs. Please enjoy the podcast.
Lauren Watson: Hi, everybody. Lauren Watson here. I’m an attorney in the Raleigh office of Ogletree Deakins and a member of the firm’s Cybersecurity and Data Privacy Practice Group. I’m joined here today by my colleague, Benjamin Perry, who is a shareholder in our Nashville office and acts as the co-chair of the Cybersecurity and Data Privacy Practice Group.
Today we are going to be sitting down to talk through the use of artificial intelligence, specifically artificial intelligence note-taking and recording tools in the workplace. We’re going to talk through some of the risks of using those tools, and then we’ll address considerations associated with their use.
When we’re thinking about AI note-taking and recording tools, some of the benefits of those tools are really obvious, things like automated transcription that can keep you from having to furiously type stream-of-consciousness notes. I know I’m guilty of doing that. They can also prepare meeting summaries and identify action items after each meeting to make your business operations much smoother. There are also some less obvious benefits, things like identifying who said what on a call based on their voice, and in some instances, understanding a person’s feelings or sentiments on a call. Lots of good things can be done here. There are, however, a number of risks and legal compliance issues associated with the use of these tools.
First up is wiretapping considerations. Anytime you’re going to be recording someone, you want to be thinking about whether a wiretap law might apply. In the United States, we have different state-level wiretapping laws as well as a federal wiretap scheme. The state wiretap laws vary from state to state as to whether one party to the conversation has to consent to the recording or whether all parties have to consent to the recording.
Ben, I know that there are a number of legal requirements associated with recording conversations. Would you mind walking us through some of those considerations?
Ben Perry: Primarily, all of these laws talk about consent, but it’s really more about providing notice, making sure that the person understands, A, that the phone call is being recorded, and B, what the purposes of that recording are going to be. Ultimately, these laws only talk about recording. That’s why, when you call a customer service line, you hear the “this phone call may be monitored for quality and training purposes” disclaimer; those sorts of disclaimers are meant to address the wiretapping laws. As we get more into these AI-powered tools, where there’s all sorts of automated analysis being performed on people’s voices, I think those disclaimers are going to morph over time.
It’ll be interesting to see how companies adjust to that, because if you’re using it to train the tool, if it’s learning and performing some sort of analysis of your emotion or whether or not you might be telling the truth, first of all, let’s talk about what amounts to a polygraph test, right, Lauren? Are there other considerations there that employers might need to be thinking about in terms of almost like a human lie detector?
Lauren Watson: Yeah, there are a number of anti-polygraph laws that apply, especially in the hiring context. We are starting to see allegations, I think out of Massachusetts, that certain technologies that provide input on things like a speaker’s truthfulness may actually violate those laws. It’s definitely something that you want to think about before you start deploying this type of tool. I think that we are starting to see an evolution of wiretap claims in general. Ben, I know that you’ve done a whole lot of work helping companies deal with CIPA lawsuits under the California Invasion of Privacy Act. Are you able to speak to that at all?
Ben Perry: Yeah, this is something that’s been going on for several years. Of course, there are all sorts of online tracking technologies. When you visit a website, most people by now understand that there’s some level of tracking that goes on in terms of information that’s collected about what website you’re viewing, and that it’s shared with other providers for targeted advertising. The first wave of lawsuits that we saw over the past several years was plaintiffs suing, saying that their interactions with a website were intercepted by a third party.
A good example of that is a chatbot. You’re having a conversation with a little chat box that pops up on a website, and unbeknownst to you, or maybe you did know this, there’s a third party providing that service. They’re obviously getting that information in real time and oftentimes providing some sort of automated response. That was one of the many theories that people sued under.
There are also claims related to entering information into a form field. There may be trackers on that website so that whenever you click submit, that information is also shared with third parties, which use it to identify your interests and all sorts of other things that can be inferred about you for the purpose of serving you targeted ads later on. That was the first wave.
The second wave rests on a similar theory, but rather than intercepting what you’re saying, the claim is based on the mere fact that you visited a particular website. Plaintiffs have been bringing these sorts of claims under what’s called a trap and trace law, which is part of these wiretapping statutes. It’s kind of evolved since then, and AI seems like it’s going to be the next wave of those sorts of suits.
Lauren Watson: My understanding of these lawsuits is that a critical component of each suit is that the individual claims they didn’t know this was happening in the background, right?
Ben Perry: Correct.
Lauren Watson: I think that’s so important when we think about how these AI tools, these note-takers and recording tools, work generally. If you as a company seeking to deploy these tools don’t understand what is actually going on when you use them, if you don’t know things like what kind of information the tool is going to be collecting, what the tool is going to be using that information to do, whether the tool is training its model on your data, or whether it’s selling or sharing your data with others, it’s virtually impossible to provide appropriate notice to the people who are actually going to have their information inputted into these tools. This is where I think due diligence becomes so critically important. If you’re going to use one of these tools, before you even deploy it, make sure that you’ve got a really strong understanding of exactly what’s going on. Don’t be afraid to ask the really tough questions of the vendors.
I know, Ben, you’ve had a number of conversations with vendors on behalf of our clients. It’s my understanding that they’re willing to really give you some granular information about what they’re collecting and what they’re going to do with it. Is that your experience?
Ben Perry: They’re willing to say a lot of things about what their tools do. Obviously, there needs to be some level of your own diligence in terms of making sure that your IT teams are comfortable with the level of access the tools are being given, that they’re able to monitor, to the extent possible, what is actually occurring, and that it lines up with what the vendor says it’s collecting. These AI vendors are popping up a dime a dozen, and it’s a little concerning, because companies are obviously and understandably rushing to keep up and not fall behind. There are really great use cases for a lot of these things, really great tools for productivity and all sorts of other things, but at the same time, as with any new technology, you need to make sure you’ve done your diligence and not just rush in blindly and accept statements by vendors at face value.
Lauren Watson: It’s interesting that you bring up the productivity tool aspect, because we are seeing more and more clients look at using AI-enabled employee productivity tools to get a better sense of what their employees are doing, how they’re performing, and where there may be opportunities to improve performance.
But a little bit outside of the AI space, there are other important considerations that have to be addressed, as well. We’ve been talking about AI tools; if a tool is used to identify an individual in a meeting based on their voice, there might be biometric privacy implications there. If you’re going to use that tool in jurisdictions like Illinois or Colorado, there are some additional steps that you’re probably going to need to take to give appropriate notice and get consent before using that AI tool on your employees.
Similarly, if you’re in a state that has an employee monitoring law, you’re going to need to consider whether you need to give a separate notice to your employees that you’re going to use that tool, and probably to get their acknowledgement that the tool’s going to be used. It’s not just a narrow track where you’re considering AI implications only. You’ve really got to look at it legally from a very holistic, very broad viewpoint.
Ben, let’s say we’ve sat down, we’ve done our due diligence, we’ve connected with our IT folks, we have engaged in the contract negotiations. We’re comfortable with the representations that are being made about the collection and the storage of our information. There are some additional issues that need to be addressed internally. We need to be thinking about things like the access controls that we’re going to put in place to make sure that only the right people get access to our meeting transcripts. Can you talk just a little bit about what companies should be doing internally if they’re going to be using these tools?
Ben Perry: As a starting point, there need to be approved tools that people are allowed to use, because if there aren’t guardrails around which tools you can use for which purposes, and what you can and cannot do with them, people just don’t know. I think there were some early reports of people at some big technology companies plugging proprietary source code into these tools and having it resurface later in outputs. Even with some of these coders, who are very sophisticated and should probably be some of the folks with the best understanding of these tools and what they do with information, there is clearly still the potential for employee error, I guess I’ll call it.
Lauren Watson: That’s fair.
Ben Perry: Which, depending on what the information is, if that’s sensitive employee personal information, then you potentially have a data breach on your hands. I think the first step would be defining what the permissible tools are and having some fulsome employee training around them: letting employees know that these tools may hallucinate, that they shouldn’t rely on them as a source of truth, and that they need to do their own diligence. While these tools can be a great starting point for research, employees need to verify any sources the tool is providing and make sure that whatever answers it’s giving them, if they’re relying on it to answer questions, are actually accurate.
Lauren Watson: 100%. Verify, verify, verify. It’s so important because, in the legal space, we’re seeing, I want to say on a weekly basis, courts issuing show cause orders against attorneys who have used artificial intelligence and who haven’t double-checked to make sure that, yes, the cases are real, and yes, the holdings in those real cases are real. And it’s absolutely happening in every sector that is starting to use these artificial intelligence tools. I think that’s a really good point.
I absolutely agree with you with respect to the risk that uncontrolled input of information into artificial intelligence tools can significantly increase the risk of a data breach for your company. Going back to my earlier point about access controls, if you don’t have a good handle on where the information that is generated and transcribed by artificial intelligence is going, I think you also pretty markedly increase the risk of a data breach, in the sense that breaches happen. We do everything that we can to help our clients prepare for a breach and defend against a breach, but sometimes they just happen. One of the best ways to protect against and really limit the damage when a breach does actually happen is by having appropriate access controls in place internally. So, say for example-
Ben Perry: Access controls, sorry, not just access controls, retention, right?
Lauren Watson: Yeah. Retention is extremely important, as well. What you want to do is make sure that you’ve got your system set up so that only the people who actually need access to these transcripts have access to them. Then you’re absolutely right. You shouldn’t be holding onto these transcripts or recordings for any longer than you need them, because if someone does get into your system and is able to easily get access to these things, the scope of your data breach just got so much bigger. If you are holding onto information for years and years past when you actually need it, you’re looking at a much more impactful breach that is much harder to deal with than you might otherwise have had.
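To make the access-control point concrete, here is a minimal sketch in Python of the kind of allow-list check a company might layer on top of its transcript storage. Everything here, the role names, the User type, and the can_read_transcript helper, is hypothetical and simply illustrates limiting transcript access to the meeting’s organizer and an explicit set of privileged roles:

```python
from dataclasses import dataclass

# Hypothetical privileged roles allowed to read AI-generated meeting transcripts.
TRANSCRIPT_READER_ROLES = {"hr_admin", "legal"}

@dataclass
class User:
    username: str
    roles: set[str]

def can_read_transcript(user: User, meeting_owner: str) -> bool:
    """Allow access only to the meeting's own organizer or privileged roles."""
    if user.username == meeting_owner:
        return True
    return bool(user.roles & TRANSCRIPT_READER_ROLES)

# Example: an engineer who was not in the meeting is denied; legal is allowed.
alice = User("alice", roles={"engineering"})
bob = User("bob", roles={"legal"})
print(can_read_transcript(alice, meeting_owner="carol"))  # False
print(can_read_transcript(bob, meeting_owner="carol"))    # True
```

The design point is a default-deny posture: nobody sees a transcript unless a rule affirmatively grants it.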
Ben Perry: I’ve always told clients, let’s take recordings, for example, in terms of trying to manage retention: it’s much more efficient if any recordings or transcripts are saved to some sort of known location that can be periodically purged, or where you can set the retention period on that specific folder to a certain shorter time period. Just think of all the data breaches we’ve worked on and what a nightmare it would be if we had a million hour-long audio files, and how much that would cost in terms of data mining. I mean, it would be catastrophic.
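As a sketch of the folder-based retention sweep described here, the following Python assumes a hypothetical single known directory for recordings and an illustrative 90-day policy window; the path, window, and function names are not from any particular product:

```python
import time
from pathlib import Path

RECORDINGS_DIR = Path("/data/meeting-recordings")  # hypothetical known location
RETENTION_DAYS = 90  # illustrative policy window, set by company policy

def purge_expired_recordings(directory: Path, retention_days: int) -> list[Path]:
    """Delete files older than the retention window; return what was removed."""
    if not directory.exists():
        return []
    cutoff = time.time() - retention_days * 24 * 60 * 60
    removed = []
    for recording in directory.glob("*"):
        # Compare each file's last-modified time against the cutoff.
        if recording.is_file() and recording.stat().st_mtime < cutoff:
            recording.unlink()
            removed.append(recording)
    return removed

if __name__ == "__main__":
    # Run periodically (e.g., from a scheduled job) so nothing outlives the policy.
    for path in purge_expired_recordings(RECORDINGS_DIR, RETENTION_DAYS):
        print(f"Purged expired recording: {path}")
```

Keeping everything in one known folder is what makes this enforceable: the sweep only works if recordings cannot accumulate in scattered, unmanaged locations.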
Lauren Watson: Absolutely. I think a similar issue actually comes up in litigation. If you are recording employee conversations and you have, to your point, a million hour-long recordings, and you get sued and need to determine whether those recordings are responsive, it is going to be so expensive to engage in the discovery process.
Announcer: Thanks for joining us for part one. Stay tuned for part two where we cover a discussion of risk mitigation and forthcoming automated decision-making technology regulations.
Thank you for joining us on the Ogletree Deakins Podcast. You can subscribe to our podcast on Apple Podcasts or through your favorite podcast service. Please consider rating and reviewing so that we may continue to provide the content that covers your needs. Remember, the information in this podcast is for informational purposes only and is not to be construed as legal advice.