Generative AI policy template for in-house legal teams

Summarising emails, drafting content, simplifying jargon-filled explanations, answering questions, brainstorming ideas, speeding up everyday tasks: artificial intelligence is being used across departments and industries in a wide range of ways, sometimes officially, often informally.

AI, of course, brings opportunity: time saved, boosted productivity, improved communication, and less tedium. But there is also real risk, and the exposure is broad. IP infringement, data protection breaches, cybersecurity threats, biased data, unreliable outputs, reputational harm and ethical impacts can all stem from irresponsible use of AI. When your company has no guidelines, that use can quickly become a liability. And while AI use will look different from team to team and business to business, a solid policy that lays out what responsible AI use looks like is essential for all.

The real risk is having no policy in place

Informal use of AI tools means there are no guardrails: no rules for employees to follow, and no control over what data is exposed. It often starts innocently. Someone pastes a few clauses into an AI tool to “speed things up.” It feels efficient, harmless. But those clauses might be covered by an NDA or confidentiality agreement, and suddenly a shortcut has turned into a contractual breach no one intended.

And although it’s well-known that generative AI tools can make mistakes, their outputs are sometimes taken as fact. Confident errors have made their way into corporate reports, courtrooms and newspapers in the form of studies, cases and books that don’t actually exist.

This practical generative AI policy template helps you avoid that. It sets out what’s allowed, what’s prohibited, what must be reviewed and who has ownership of the rules.

What your generative AI policy should do

The goal is for your employees to come away from the policy knowing how to use AI ethically and responsibly. It should:

  • be easy to grasp.
  • outline clearly what’s allowed and what’s not.
  • define when human review is necessary.
  • establish basic governance.
  • encourage safe experimentation in low-risk situations.

A generative AI policy template you can use straightaway

This is a starting point. Every industry will use AI technologies differently and there’s no one-size-fits-all approach. Adapt this template to your needs.

Scope

This policy applies to any third-party or publicly available generative AI tools used to generate content or assist with tasks, rather than AI systems that have been formally reviewed and approved.

Purpose

Powerful and increasingly part of everyday business operations, AI is streamlining tasks and boosting efficiency. But it is also introducing risks, ones that grow as AI technologies develop. This policy sets out clear, practical guidelines for responsible and ethical use, helping teams use tools safely while protecting the company and adhering to regulations.

Terms and definitions

Generative AI: A form of machine learning capable of producing text, video, images and other types of content.
AI tool: A feature, application or program that uses artificial intelligence to perform tasks that would usually require human intelligence.
Confidential information: Privileged or proprietary information which, if compromised through alteration, corruption, loss, misuse or unauthorised disclosure, could cause serious harm to the organisation or person who owns it.
Personal information: Information relating to an identifiable individual.
Sensitive information: High-risk personal data and regulated data types (e.g. health data, financial account data, government identifiers).
Privileged information: Communications or work product protected by legal professional privilege (where applicable).

General principles

  1. Use AI as a tool, not a final decision-maker. Outputs can be incorrect or incomplete; treat them as drafts to review and edit.
  2. Only share the minimum data needed to perform the task.
  3. Protect sensitive content by never entering prohibited data into AI technologies.
  4. High-impact outputs should be reviewed by the appropriate personnel before use.
  5. Don’t disregard existing policies. This one complements those addressing confidentiality, security, privacy and records.

Acceptable use of AI

Responsible AI use is ethical, lawful and professional. Users shall not use these technologies in any way that is illegal or harmful; they shall respect the rights and privacy of others and shall not use the tools to produce or otherwise interact with digital content unrelated to company objectives, including obscene or potentially offensive content.

Prohibited use

Do not paste, upload or input sensitive business information into public or unapproved AI tools.

That includes:

  • Draft or executed contracts, deal terms, negotiation strategy, dispute materials or settlement positions
  • Privileged legal advice or internal legal analysis
  • Customer or employee personal data (including HR records or performance information)
  • Sensitive personal data, like health or financial account information
  • Passwords, API keys, access credentials or security configurations
  • Non-public financial information, pricing models, forecasts or M&A activity
  • Any information covered by an NDA or other confidentiality obligation
  • Any data that our policies or applicable laws require to stay within approved systems

If you are unsure whether something is appropriate to share with an AI tool, assume it isn’t and check first.

Privacy and data protection

Your use of AI tools shouldn’t compromise individuals’ privacy rights or breach applicable data protection laws or our internal privacy policies. Personal data and sensitive information shouldn’t be entered into third-party AI applications that have not been formally reviewed or approved.

Security

Access to any company-approved AI systems will be provided in line with job responsibilities. Those systems should be treated like any other corporate platform.

That means:

  • Protecting passwords and access credentials
  • Not sharing accounts
  • Installing updates and patches when required
  • Reporting any suspected security incident immediately

Any new AI tool must go through the usual security, privacy and vendor review process before it is used for work purposes.

AI-related incidents, including inappropriate data entry, unreviewed external sharing, suspicious system behaviour or harmful outputs, must be reported promptly through standard incident channels.

Accuracy, verification and citations

AI can produce polished and confident answers that are simply wrong. For this reason, all outputs should be treated as drafts and users are responsible for checking:

  • Factual accuracy
  • Numbers and calculations
  • Citations and links (which may be fabricated)
  • Alignment with internal policy and legal compliance

Human oversight

In some instances, explicit human review is required before anything is shared or implemented, such as outputs that:

  • Are distributed externally
  • Inform high-impact decisions
  • Are part of contracts, policies or formal notices

Check for accuracy, completeness, tone, policy alignment and any unintended sharing of sensitive information.

Intellectual property

Respect third-party intellectual property rights when using AI tools. Don’t ask AI tools to reproduce copyrighted content you don’t have the right to use or input third-party confidential information without authorisation.

Output that closely resembles an identifiable third-party work should be treated as high risk. Seek guidance before using it in any capacity.

Records and retention

Monitoring and periodic audits of AI use are critical for compliance with this policy and applicable laws and regulations. If AI outputs are used to inform business decisions, they should be stored appropriately in approved systems.

Training and awareness

Training is necessary for anyone using AI for work. It should cover what data must never be entered into AI tools, how to draft prompts safely (anonymising and minimising data), how to verify outputs, and when and how to escalate issues.

Enforcement

Any violations of this policy may result in disciplinary action and/or removal of access.

Review cadence

Regularly review this policy. As tools, AI risks, ethical implications and relevant laws and regulations evolve, so should this policy. Update it accordingly.

Implementing your generative AI governance policy so everyone knows how to use it responsibly

A policy only works if people can use it.

Week 1: Keep it simple and visible

Decide who owns the policy. Knowing this from the outset makes decision-making easier: you know who to turn to when legal and regulatory standards evolve or intellectual property laws change. Your organisation’s AI use will develop as the world around it does.

Start with a straightforward summary of what’s allowed and what’s not, with examples of safe prompts (anonymised and free of sensitive data), and clearly define what needs human review. While external communications destined for customer inboxes need a human sign-off, a routine, team-only email carries less risk and can likely be automated.

Weeks 2-4: Make compliance easy

Make the policy easy to find and ingrained from the get-go. For new hires, onboarding documentation is the perfect place to set out your company’s guidelines for the ethical and responsible use of AI. Add it to your internal knowledge base, so everyone can find it whenever they need it.

For common tasks, create templates and examples. For more complex operations, build a clear escalation path so that, no matter the query, all staff know who to ask when they’re unsure. If anything ever does go wrong, there needs to be a clear incident response. What happens if someone pastes in sensitive information, or if biased results are used to inform decisions?

Months 2-3: Improve quality, reduce risk and encourage continuous learning

As tools and systems evolve, so do AI ethics, risks and governance. Refresher training is critical to make sure ethical use of AI continues. And there’s no better learning tool than real life; when patterns and risks emerge as people use AI to create content or streamline business operations, you should see them as valuable insights and update the policy accordingly.

Different teams will require specific guidelines. While HR professionals are focusing on cover letters produced by AI, the marketing team is grappling with the ethical implications of AI-generated designs derived from copyrighted material. Guidance will become role-specific.

FAQ: Generative AI policy basics

Do we need a generative AI policy if we don’t “officially” use AI?

Yes. Chances are, even if you don’t use or develop AI tools officially, your team is already using AI in some form, whether that’s mainstream tools like ChatGPT or Gemini, automated decision-making systems or predictive analytics. And if people are using AI without clear guidance, they’re likely introducing risks that your company isn’t ready to handle. A policy reduces those risks and sets expectations.

Should we ban generative AI at work?

A blanket ban can be hard to enforce. It is unlikely to be effective in isolation and may push staff to use AI under the radar, bringing even more risk. It is far more sensible to be proactive and consider how AI can be used effectively in the workplace in a way that both complies with regulatory obligations and meets business needs.

Can employees paste customer or employee data into AI tools?

As a rule of thumb, no. That’s a high-risk practice and should be prohibited, unless a tool has been explicitly reviewed and approved for that purpose.

How do we handle AI hallucinations?

AI hallucinations are inevitable, and they should never make it into external or high-impact environments. That’s why outputs should always be treated as drafts that require verification. Fact-check information and citations to ensure they’re trustworthy and reliable.

How often should we update the policy?

When it’s first put in place, review the policy every 3-6 months to make sure it is working effectively. AI is developing quickly, and organisational usage patterns change with it.

 

 
