The increasing prevalence of artificial intelligence (AI) tools, and of generative AI in particular, gives employers new reasons to adopt workplace policies that tell employees whether use of these applications is appropriate and what limitations apply.
Quick Hits
- Employers may want to develop and implement policies regarding the appropriateness and limitations of generative AI in the workplace.
- Generative AI has shown its ability to improve efficiency and boost worker productivity, and many employers are seeking workers with ChatGPT experience.
- Concerns about generative AI include the accuracy of AI-generated information, the potential exposure of confidential or trade secret information, and potential infringement of third-party intellectual property rights.
Since the public release of ChatGPT on November 30, 2022, and the surge of interest that followed the upgrade to GPT-4 on March 14, 2023, generative AI has captured the public's attention. Generative AI refers to a type of application that can create content based on a user's text prompt. While large language model generative AI may currently dominate the conversation, it is not the only type of widely available generative AI. Other applications, such as OpenAI's DALL-E, can create images from textual descriptions entered by the user.
Employers are certainly taking note of the gains in efficiency and productivity that generative AI tools may deliver across a wide range of workplace settings. According to an April 2023 working paper published by the National Bureau of Economic Research, generative AI increased worker productivity by an average of 14 percent when used as a conversational assistant by customer support agents. Thus, it is not surprising that companies are beginning to seek employees with generative AI skills. Indeed, an April 2023 ResumeBuilder.com survey of business leaders found that 91 percent of respondents had sought workers with ChatGPT experience.
These developments give employers all the more reason to adopt workplace policies that communicate to employees whether the use of these applications is appropriate and any applicable limitations. Below are three concerns associated with generative AI applications that demonstrate the importance of drafting and implementing workplace policies on AI.
Accuracy of AI-Generated Information
If employees will rely on AI-assisted responses to perform job functions, a policy can help employers take precautions to ensure AI-generated information is accurate. Many users have reported that large language model generative AI programs produce inaccurate information in response to user prompts. The term "hallucination" has become common shorthand for a generative AI response that is not supported by the model's training data.
While generative AI may quickly and efficiently provide content, there can be dramatic consequences when employees rely on inaccurate results. Where the potential stakes through reliance on inaccurate information are particularly high, an AI policy may direct employees to verify and confirm support for the information produced by AI through human review.
Exposure of Confidential or Trade Secret Information
Additionally, employers may want to include in their policies instructions for employees on the exposure of confidential or trade secret information to generative AI. Employers may maintain a wide range of information that they have a strong interest in, and need for, keeping from third parties or the public in general. These interests may include preserving trade secret protections and safeguarding third-party customer data that companies are contractually bound to protect.
Many, if not most, of today's AI applications rely on machine learning to improve their responses through use. While machine learning may allow a program to improve over time, it also means that company information included in a generative AI prompt can become part of the collective information on which the application relies to create future responses. In other words, protected information may be disclosed and exposed if employees use generative AI without limitation.
Concerns over the disclosure of proprietary information have led several companies to prohibit employees from using generative AI. While this strategy may not be appropriate for all businesses, a workplace policy may educate employees on the risks of using confidential information in AI tools, the limits the company has set, and the necessary steps to follow to protect data.
Infringement of Third-Party Intellectual Property Rights
An inverse concern may also motivate adoption of an AI policy: generative AI introduces novel ways for employees to potentially infringe third-party intellectual property rights. The collective data from which generative AI draws to create its responses may include material that is subject to copyright, trademark, or other legal protection limiting or prohibiting its use.
Definitive answers do not yet exist regarding the intellectual property rights of companies whose content has been assimilated by AI applications. Greater clarity will likely take time to develop and may be established through litigation. Employers that permit the use of AI by employees may want to consider including in their policies limits on how workers use content created by these programs to guard against infringing on others’ intellectual property rights.
Key Takeaways
Individual interest in AI has exploded with the widespread availability of state-of-the-art generative AI tools. While these programs present the opportunity for significant productivity gains, they are not without legal risks. Employers may seek to mitigate the risks these applications present by clearly communicating to employees whether they are permitted to use them and, if so, what limits apply to protect against potential harms.
Ogletree Deakins will continue to monitor developments and provide updates on the Technology and Cybersecurity and Privacy blogs.
A version of this article first appeared in Legaltech News.