More and more organizations are adopting, or expanding their use of, artificial intelligence (AI) tools and services in the workplace. Despite AI's proven potential for enhancing efficiency and decision-making, the technology has raised a host of workplace issues that, in turn, have prompted an array of federal and state regulatory efforts, and those efforts are likely to increase in the near future.

Artificial intelligence, defined very simply, involves machines performing tasks that would ordinarily require human intelligence. The AI field comprises a number of subfields that tackle complex problems associated with human intelligence: for example, machine learning (computers using data to make predictions), natural-language processing (computers processing and understanding a natural human language like English), and computer vision or image recognition (computers processing, identifying, and categorizing images based on their content).
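
To make the machine-learning idea concrete, the following is a minimal, purely illustrative sketch in Python. It assumes the widely used scikit-learn library, and the candidate features, training data, and outcomes are entirely hypothetical, not drawn from any real screening tool.

```python
# A minimal sketch of "machine learning": a model learns from example data
# and then makes predictions about new cases. All data here is hypothetical
# and purely illustrative; it is not drawn from any real screening tool.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [years_of_experience, certifications_held]
# for past candidates, with the historical outcome (1 = hired, 0 = not hired).
X_train = [[1, 0], [2, 1], [5, 2], [7, 3], [0, 0], [6, 1]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)

# The fitted model now predicts an outcome for an unseen candidate.
print(model.predict([[4, 2]]))  # e.g., [1]

# Note: if the historical outcomes embed past bias, the model can learn and
# reproduce that bias, which is the core concern behind the regulatory
# efforts discussed below.
```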

One area where AI is becoming increasingly prevalent is talent acquisition and recruiting. Many organizations have adopted machine-learning-based software to auto-screen job candidates or facilitate their screening. AI also powers many of the increasingly common employee self-service applications that deliver quicker, more efficient answers to common employee relations questions.

The benefits of AI include improved efficiency, lower costs of products and services, improved quality, and fewer errors. But AI is not perfect. Indeed, government and media attention has centered on the potential for AI-driven tools to be biased or discriminatory. For example, during the Obama administration, the Executive Office of the President of the United States issued a detailed report on algorithmic systems, opportunity, and civil rights that highlighted AI's potential for positive impact while underscoring concerns about "the potential of encoding discrimination in automated decisions." The U.S. Equal Employment Opportunity Commission has likewise focused on these issues, emphasizing assessments of how employment discrimination protections apply, and can be used, in the context of AI-powered systems.

Initial City, State, and Federal AI Regulations Rolled Out

Despite this media and governmental attention, the pace of legislation over the last decade has been slow. There are signs, however, that this may be changing. A number of city, state, and federal regulations have been proposed or enacted with the goal of eliminating potential discrimination and increasing transparency related to AI. For example:

Facial Recognition Software Ban. In mid-May 2019, technology-friendly San Francisco banned the use of facial recognition software by police and other government agencies. The ban, which does not apply to private entities' use of the technology, makes San Francisco the first major city to outlaw the technology by legislation. Similar bans are under consideration in other jurisdictions (such as the Commonwealth of Massachusetts).

AI in Hiring in Illinois. One popular use of AI in the hiring process is the AI "interview bot," which evaluates personal characteristics such as an applicant's facial expressions, body language, word choice, and tone of voice. The software then provides employers with feedback that can be used in deciding whether to hire a candidate. On May 29, 2019, the Illinois General Assembly passed a first-of-its-kind measure that would restrict employers' use of this kind of artificial intelligence. Because the bill passed without a single "no" vote, Illinois's governor is expected to sign it shortly. The law, known as the Artificial Intelligence Video Interview Act, is a disclosure and informed-consent measure that would require employers to take the following steps before asking applicants to submit to video interviews:

  1. notify applicants for Illinois-based positions of plans to have their video interviews analyzed electronically;
  2. explain to the applicants how the artificial intelligence analysis technology works and what characteristics will be used to evaluate them; and
  3. obtain the applicants’ consent to the use of the technology.

Notably, Illinois has become something of an incubator for workplace technology legislation. It was the first state to pass legislation regulating employers' use of employee biometric information (e.g., retina scans, face scans, and fingerprints).

Federal Algorithmic Accountability Act. On April 10, 2019, congressional Democrats introduced the Algorithmic Accountability Act of 2019, which seeks to enhance federal oversight of artificial intelligence and data privacy. (In some respects, the proposed law is similar to the European Union General Data Protection Regulation.) If passed, the Algorithmic Accountability Act would regulate AI systems as well as any “automated decision system” that makes a decision or facilitates human decision-making that impacts consumers. For processes that fit the proposed statute’s definition, organizations would be required to audit for bias and discrimination and take appropriate corrective action to resolve any identified issues. The bill would give oversight responsibility to the Federal Trade Commission.
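
The bill does not prescribe any particular audit methodology. Purely as an illustration of what an automated bias check can look like, here is a minimal Python sketch of one familiar screen from employment-selection analysis, the "four-fifths" adverse-impact ratio; the selection counts are hypothetical.

```python
# Minimal sketch of one common bias check: the "four-fifths" (80 percent)
# adverse-impact rule of thumb used in U.S. employment-selection analysis.
# All counts are hypothetical, and the Algorithmic Accountability Act does
# not itself prescribe this (or any specific) audit method.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group that the automated system selected."""
    return selected / applicants

# Hypothetical outcomes of an automated screening tool for two groups.
rate_a = selection_rate(selected=50, applicants=100)  # 0.50
rate_b = selection_rate(selected=30, applicants=100)  # 0.30

# Impact ratio: the lower selection rate relative to the higher one.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Impact ratio: {impact_ratio:.2f}")  # 0.60
if impact_ratio < 0.8:
    print("Below the four-fifths threshold -- flag for closer review.")
```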

The Algorithmic Accountability Act is unlikely to pass Congress, but it could be a harbinger of things to come. This kind of legislation may gain momentum at the federal level depending on how the 2020 elections play out. And, regardless, proposed federal legislation often catches the attention of legislators in one or more states and spurs similar proposals at the state level. California, for example, has already passed the California Consumer Privacy Act, a sweeping data privacy law that takes effect on January 1, 2020, and whose scope and application to workplaces remain unclear.

Conclusion

As businesses increasingly rely on AI and other advanced technologies, the pace of legislative and regulatory activity relating to these technologies appears to be increasing as well. As we previously reported, in February 2019 President Trump issued the Executive Order on Maintaining American Leadership in Artificial Intelligence; the Office of Management and Budget is expected to issue draft guidelines for the AI sector this summer. Moreover, the White House recently launched AI.gov, designed as a platform for governmental agencies to share AI initiatives. We will keep you updated on major legislative and regulatory developments, along with jurisdictional analyses of potential compliance issues to consider before implementing new technology in the workplace.
