Four Points for Your Artificial Intelligence Acceptable Use Policy
As ChatGPT and other artificial intelligence tools have entered the mainstream, millions of people have turned to them for help with everyday tasks, both personal and professional. These tools, however, are not without risks. To address those risks, a growing number of companies have adopted AI acceptable use policies. Here are four points to consider when drafting your company’s policy:
- Establish expectations. What are acceptable and unacceptable uses of AI? What data can be input into what types of AI tools? Under what circumstances? What are the consequences for failure to adhere to the AI acceptable use policy?
- Address AI-specific issues. An AI acceptable use policy can establish policies and procedures for AI-specific issues (e.g., ensuring that AI use does not violate legal requirements or contractual commitments). While some aspects could be folded into existing policies (e.g., revising an incident response plan to cover AI-related data incidents), a standalone AI acceptable use policy can be an efficient way to address AI comprehensively.
- Explain the “why” behind any restrictions or prohibitions. Many companies restrict AI use over concerns related to privacy, intellectual property, ethics, and regulatory oversight. Educating employees about these risks can help prevent accidental legal violations and increase compliance with the policy.
- Demonstrate diligence. From employee acknowledgments and training to expanded incident response plans, an AI acceptable use policy can evidence a company’s efforts to comply with legal requirements.
The precise restrictions and explanations in an AI acceptable use policy will vary significantly from company to company, but these four points provide a useful starting point.