Why you need an Acceptable Use Policy for AI … yesterday.

David Denara
Millennium Communications
8 min read · Feb 15, 2024


It seems every day there is another story about how generative AI solutions and large language models (LLMs) will transform our lives, for better or for worse. The upside is that everyone (myself included) is highly motivated and experimenting quickly with new applications; the downside is that these technologies are sometimes deployed with no forethought about the risks they present to people, privacy, brand reputation, ecosystems, and cultures.

Everything is moving fast, so this article focuses on how to protect your company by establishing best practices for generative AI cybersecurity and operational risk oversight via an AI Acceptable Use Policy (AUP).

What’s an AI AUP?

If you haven’t developed one yet: an AI AUP — whether written from scratch or added by amending your existing AUP language — establishes a framework and communication plan that educates your teams on what is and isn’t permitted when deploying new generative AI solutions in the course of operating the company.

During the process of writing an AI AUP, your risk/operations leadership team will quickly learn how important it is to have definitions and/or risk frameworks to oversee these exciting new technologies.

Let’s get started.

NIST’s AI Risk Management Framework 1.0 is a helpful reference; however, customizing the AUP so it matches your unique requirements is imperative. Answer the questions in the worksheet below to get a grip on your specific situation.

Questions to guide your content:

Because every company culture has different needs and values, industry regulations, and risk exposure, the questions below should guide you in putting together your list of dos and don’ts:

  1. What stakeholders need to be involved in establishing a generative AI AUP?
  2. How will my company build an inventory of all the tools employees are already using and/or want to use? Should a review/change-management process sit behind each new technology?
  3. How do I vet each of those tools to ensure they can be applied safely?
  4. What do our client legal agreements require?
  5. What regulations are applicable?
  6. What are our company’s values/ethics as they relate to reasonable usage, both to protect our reputation and to protect the external ecosystems that can be impacted by using AI or LLMs?
  7. How will AI be used by our teams?
  8. Is PII or other confidential data permitted to be used in conjunction with generative AI systems?
  • If yes, what steps are taken to ensure no data leakage or inadvertent violation of privacy, confidentiality, or regulatory compliance occurs?
  • What data classification applies to the data that is permitted to be used, and which data shouldn’t be allowed?

9. Is there any intellectual property (IP) at risk?

  • If so, how do we protect our company from either exposing its IP or infringing on another company’s IP rights? Court precedent is still evolving, and based on current guidance from the U.S. Patent and Trademark Office (USPTO) and the U.S. Copyright Office, it is unclear whether these protections apply to AI-generated IP and content.

10. Will our developers be permitted to produce AI-generated code? If yes, what are the further implications for our cybersecurity and compliance teams?

11. As AI-generated malware produces more advanced persistent threats, what steps do our cybersecurity and risk oversight teams need to take? What tools will they need to keep up with these types of advanced threats?

12. How will our internal rules be extended to third parties who are using these technologies?
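Question 8 is where many companies stumble first. As one illustration of a compensating control, the sketch below redacts likely PII from a prompt before it leaves the company boundary. The patterns and placeholder tokens are assumptions for demonstration only; a real deployment would rely on a dedicated PII-detection service rather than a handful of regexes.

```python
import re

# Hypothetical patterns for demonstration; a production system would use
# a dedicated PII-detection service with a far larger rule set.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before the prompt
    is sent to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```

A control like this sits well in the prompt path of any internally sanctioned tool, so the policy’s “no PII” rule is enforced mechanically rather than by memory alone.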

Let’s put it all together

You’ve completed the hard part, but there’s no need to reinvent the wheel — just copy the sample AI AUP language below and modify it based on your answers above.

— — — — — — — — — — — — — Start Sample AUP — — — — — — — — — — — — —

ARTIFICIAL INTELLIGENCE (AI) ACCEPTABLE USE POLICY

NO AI/ML TRAINING

When using our generative AI features, you agree that you will use them only for your creative work product and not to train AI/ML models. You also agree to follow our acceptable use policies, defined herein and in other policies — such as the latest version of our Information Security Policy, which provides more detail on which tools can be used and how to use them safely (Artificial Intelligence (AI) Tool Applications).

This means you must not, and must not allow third parties to, use any content, data, output, or other information received or derived from any generative AI features, including any outputs, to directly or indirectly create, train, test, or otherwise improve any machine learning algorithms or artificial intelligence systems, including any architectures, models, or weights.

BE RESPECTFUL AND SAFE

Do not use generative AI to attempt to create, upload, or share abusive, illegal, or confidential content. This includes, but is not limited to, the following:

  • Pornographic material or explicit nudity
  • Hateful or highly offensive content that attacks or dehumanizes a group based on race, ethnicity, national origin, religion, serious disease or disability, gender, age, or sexual orientation.
  • Graphic violence or gore
  • The promotion, glorification, or threats of violence
  • Illegal activities or goods
  • Self-harm or the promotion of self-harm
  • Depictions of nude minors or minors in a sexual manner
  • Promotion of terrorism or violent extremism
  • Dissemination of misleading, fraudulent, or deceptive content that could lead to real-world harm
  • Personal or private information of others in violation of their privacy or data protection rights
  • Please note that we may report any material exploiting minors to the National Center for Missing & Exploited Children (NCMEC).

BE AUTHENTIC

We disable accounts that engage in behavior that is deceptive or harmful, including:

  • Using fake, misleading, or inaccurate information in your profile
  • Impersonating other people or entities
  • Using automated or scripting processes (such as bulk or automated uploading of content through a script)
  • Engaging in schemes or third-party services to boost account engagement (artificially increasing the number of appreciations, views, or other metrics)

BE RESPECTFUL OF THIRD-PARTY RIGHTS

Using generative AI features to create content that violates third-party copyright, trademark, privacy, or other rights is prohibited. This may include, but is not limited to, entering text prompts to generate a third-party brand logo, uploading an input image that includes a third party’s copyrighted content, or using a third party’s personal information in violation of their privacy or data protection rights.

AUP GLOSSARY AND GUIDELINES FOR USING ARTIFICIAL INTELLIGENCE (AI) TOOLS AND SERVICES:

To ensure clarity and consistency in our generative AI AUP, please review the following glossary of terms and additional policy expectations when using internal and/or external generative AI technologies:

Artificial Intelligence Definition — The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, translation between languages, authoring of text, and creation of images.

Authorized AI Tools — AI tools sanctioned by the company’s IT division can be used. The IT division will keep an updated list of sanctioned AI tools. Employees are prohibited from using AI tools not on this list for company-related activities.

Tool Approval Process — Employees must submit a request to the IT division if they believe a new AI tool could be advantageous. The IT division will assess the tool for safety, privacy, and compliance before approving or rejecting the request.
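The two entries above are easier to enforce when the sanctioned list is machine-readable, so endpoint tooling or a web proxy can check it automatically. Here is a minimal sketch assuming a simple in-memory structure; the tool names, statuses, and review dates are hypothetical.

```python
from datetime import date

# Hypothetical allowlist the IT division would maintain; in practice this
# would live in a shared database or config repo, not in source code.
APPROVED_AI_TOOLS = {
    "internal-chat-assistant": {"status": "approved", "reviewed": date(2024, 1, 10)},
    "code-completion-plugin": {"status": "approved", "reviewed": date(2024, 2, 1)},
    "unvetted-image-generator": {"status": "pending-review", "reviewed": None},
}

def is_permitted(tool_name: str) -> bool:
    """A tool is usable only if it is on the list AND fully approved."""
    entry = APPROVED_AI_TOOLS.get(tool_name)
    return entry is not None and entry["status"] == "approved"

print(is_permitted("code-completion-plugin"))  # → True (approved)
print(is_permitted("some-new-llm-app"))        # → False (not on the list)
```

Keeping a "pending-review" status distinct from simple absence lets the approval process run through the same data structure employees already query.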

Data Accessibility — AI tools should only have access to the data required to perform their tasks. Employees must not grant AI tools access to excess data.

Data Confidentiality — AI tools must adhere to the company’s privacy policy. This includes respecting personal data privacy, confidential data, and proprietary information. Employees must ensure that any AI tool they use manages data in a manner consistent with this policy.

Data Safety — AI tools must implement sufficient security measures to protect data from unauthorized access, modification, or deletion. This includes data encryption, access control, and regular security updates.

Supervision and Auditing — The company will regularly supervise and audit the use of AI tools to ensure policy compliance. This includes verifying that only approved tools are being used, that they are being used correctly, and that they are not accessing or storing data inappropriately.

Incident Reporting — Employees must immediately report any suspected policy violations or issues related to AI tool usage and data privacy to the IT department.

Non-Compliance Penalties — Non-compliance with this policy may lead to disciplinary action, including termination. In some cases, legal action may also be pursued.

Education — Employees must be trained on how to use AI tools in a way that respects data privacy and security. This includes understanding the data that AI tools can access, how to restrict this access, and how to identify and respond to potential data breaches.

— — — — — — — — — — — — — End Sample AUP — — — — — — — — — — — — —

Additional Resources

Please visit the following AUP templates and references below. Alternatively, some folks are walking the walk and using generative AI tools to write their policies for them.

The policy is written, now what?

Once your new AI AUP is ready, how do you socialize the new policy and train the team so the company can innovate without knowingly or inadvertently taking on more risk? Here are a few suggested avenues:

  • Update your Employee Onboarding Documentation and make AI a section for employee onboarding. Cover the ways it can help with efficiency and how responsible usage means complying with the company’s AUP.
  • Include AI-specific training for all new employees. Pay special attention to application development teams to make sure there are compensating controls to check any coding against the AI AUP.
  • Share the AI AUP during a company all-hands meeting and upload it to your HR intranet, making sure to cover “why it matters” in addition to presenting the policy itself.
  • Present the new AUP at your regular cybersecurity refresher training awareness sessions.
  • As your teams use these new tools, encourage them to contribute “how to” documentation to your internal Wikis, LMS, or Slack/Teams channels that demonstrates the Dos-and-Don’ts.
  • Post the list of approved AI tools on your company’s intranet and wiki systems.
  • Create a Generative AI committee and leadership team to maintain and champion proper usage guidelines.
  • Share some FUD (Fear, Uncertainty, and Doubt) news and PR stories, because they will help everyone measure twice and cut once — before installing or applying tools, consider all of the possible risks.
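For the application-development training mentioned above, one compensating control can be a lightweight pre-commit scan that flags hard-coded secrets before any AI-assisted code lands in the repository. The sketch below is an illustration under stated assumptions, not a complete scanner; the patterns are hypothetical examples, and real secret scanners use much larger rule sets plus entropy analysis.

```python
import re

# Illustrative patterns only; a real rule set would be far larger.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key ID
    re.compile(r'''(?i)api[_-]?key\s*=\s*['"][^'"]+['"]'''),  # hard-coded API key
]

def find_secrets(diff_text: str) -> list:
    """Return any lines in a staged diff that look like hard-coded secrets."""
    return [
        line
        for line in diff_text.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

staged = 'API_KEY = "sk-test-not-a-real-key"\nprint("hello")'
print(find_secrets(staged))  # → ['API_KEY = "sk-test-not-a-real-key"']
```

Wired into a pre-commit hook or CI job, a check like this turns the AUP’s “review AI-generated code” expectation into something the pipeline enforces automatically.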

Final thoughts.

It’s easy to think we have time to deal with this new concern; however, the rapid progression and proliferation of generative AI systems make this issue too important to put off. Taking this first “must do” step of writing an AI Acceptable Use Policy will help your entire company recognize the other operational steps, risks, and cybersecurity controls that must be considered when integrating tools, services, and AI-contributed intellectual property into your work product and services.

This is just the beginning…don’t hesitate to define your AI strategy, guidelines, and expectations. By the end of 2024, Generative AI systems will be ever-present.

If you need help assembling your AI AUP and related policies/controls, or just want to discuss this topic or article further, you can reach out on LinkedIn or drop me an email.
