On October 30, 2023, President Biden issued an executive order kicking off official efforts to regulate artificial intelligence (AI) at the federal level. The order - panoramic in scope - sets the stage for a multi-agency effort to tame (without stifling) what has quickly become an explosive arena in US tech innovation.
The order has several implications for AI providers, consumers and the general public, but its total impact has yet to come into focus. For employers, the order will at minimum add yet another layer to their current AI strategies, which are no doubt still in flux. As states pass regulations of their own, employers can start by understanding what the executive order has in store for them.
The Executive Order
The order tasks a number of agencies with carrying out this effort, including, but not limited to, the National Institute of Standards and Technology, the Department of Homeland Security (which will establish the AI Safety and Security Board), the Department of Energy, the Department of Commerce and the Department of Labor.
The order is organized into eight principles, which will act as the federal government's blueprint for policy creation and regulation:
1. Safety and Security: Ensure that AI is safe and secure.
2. Innovation and Competition: Promote responsible innovation, competition and collaboration.
3. US Workforce: Remain committed to supporting American workers.
4. Equity and Civil Rights: Continue to advance equity and civil rights.
5. Consumer Protections: Uphold consumer protection laws and principles.
6. Privacy and Civil Liberties: Protect privacy and civil liberties.
7. Government Readiness: Ensure responsible AI use and upskilling of government employees.
8. Global Leadership: Lead the way to global societal, economic, and technological progress.
It then directs relevant agencies to take certain actions based on these principles.
What's in It for Employers?
Though all of the above principles could impact employers in some way, some provisions bear more directly on employers than others. Most relevant are the provisions related to safety and security, supporting US workers and advancing equity and civil rights.
Safety and Security
The order's safety and security provisions seek to build a framework for ensuring AI products meet quality, safety and security standards, including guidelines for thorough AI testing and reporting systems. These requirements are mostly directed toward AI providers; however, they will also affect employers that adopt AI tools or train AI in-house.
These forthcoming requirements will help employers vet potential AI providers and confirm that new tools do not expose the organization to additional cyber threats. Ensuring AI providers meet forthcoming standards could also protect the organization from potential liability for using substandard AI products. Though not noted in this order, recent guidance from the Equal Employment Opportunity Commission (EEOC) reminds employers that they could be held liable under Title VII for discrimination caused by AI products they use, even if those products were created by an outside provider. This common legal rule of imputed liability - that one party can be held responsible for another party's actions - should not be ignored.
To appropriately vet AI providers, employers should familiarize themselves with the forthcoming safety and security guidelines and incorporate them into their vetting process. Employers that plan to train and create their own AI products should be particularly careful to ensure compliance with safety and security requirements.
Protecting the US Workforce
The order also includes several provisions focused on protecting and preparing the American workforce, specifically addressing the potential for job displacement and the need to prioritize workforce development. It calls on relevant agencies to prepare for workforce disruptions by ensuring government programs can manage job displacement and by prioritizing employee upskilling to help the workforce take advantage of new opportunities in an AI-powered future of work.
Employers can support employees and their business by upskilling or reskilling employees most likely to be displaced, engaging in proactive succession planning, and adapting current positions to partner with AI (which will require certain employee skills). FrankCrum offers Training Hub for employers that would like to offer customized training programs to build employee skills through an online platform.
To learn more and to receive a quote, please contact FrankCrum’s TrainingHub Manager at TrainingHub@frankcrum.com.
Employers should be mindful of Fair Labor Standards Act (FLSA) compensation requirements, as the order calls for additional guidance on compensation as it relates to employees' use of AI to aid in their work. Additionally, the order signals support for employee labor rights, stating that "all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from" the opportunities provided by new jobs and industries created by AI. Employers need to understand their obligations under the National Labor Relations Act (NLRA).
Finally, the order seeks to ensure that AI used in the workplace advances employee well-being. What does that look like? According to the order, principles and best practices to mitigate AI's potential harms to employee well-being should address:
- Job displacement and career opportunities,
- Labor standards and job quality, and
- Implications for those who work for organizations that use AI to collect data about them.
Equity and Civil Rights
The executive order states that AI should advance equity and civil rights. Its provisions primarily address ensuring equity in public programs and benefits. However, it also asks for guidance for federal contractors to prevent unlawful discrimination caused by AI in hiring.
Employers that are federal contractors must be ready to comply with forthcoming guidance. This may involve evaluating current uses of automation in recruitment and hiring and creating new AI provider vetting practices to ensure compliance.
Though non-federal contractors are outside the scope of the order's hiring provision, they must not ignore non-discrimination requirements under existing laws. To help employers understand the role AI could play in discrimination, the EEOC has issued guidance on preventing AI-created discrimination under the Americans with Disabilities Act (ADA) and, as noted above, Title VII. Employers considering using AI in their organization should become familiar with these requirements, particularly when using AI to recruit new talent.
Preparing for an AI-Powered Future of Work
Employers will continue to explore, adopt and, in some cases, create new AI tools to enhance their work and productivity. Along the way, they will need to address new issues related to workforce development and labor rights, wage and hour laws, equal employment opportunity, employee well-being and more. Though the road ahead may be difficult, the executive order's principles can serve as a roadmap for operating in a regulated, AI-powered future of work.