Guides

How to Use Claude/ChatGPT to Write Security Policies That Don't Suck

Marcus

Writing security policies is one of those tasks that everyone agrees is important and nobody wants to do. It's tedious, detail-heavy, and the output usually ends up as a PDF that lives in SharePoint and gets reviewed once a year when audit season rolls around. But policies are the foundation of your security program, and bad ones create real risk — either because they're too vague to enforce or too disconnected from how your organization actually operates.

AI can dramatically accelerate this process. I've used both Claude and ChatGPT to draft security policies over the past six months, and the results are genuinely good — with some important caveats. Here's my step-by-step workflow.

Step 1: Start With a Framework, Not a Blank Prompt

The single biggest mistake people make: opening ChatGPT and typing "write me an access control policy." You'll get something that reads well and is approximately 40% wrong for your environment.

Instead, start with a framework. NIST 800-53, ISO 27001 Annex A, CIS Controls — pick whatever your organization is aligned to (or needs to be aligned to). The framework gives AI the structure it needs to produce something useful.

Here's my opening prompt template:

I need to draft a [policy name] for a [company size/type] organization. We align to [framework]. The policy should cover the following control areas: [list specific controls]. Our environment includes [key technology details: cloud provider, identity system, endpoint management]. Write the policy in formal but readable language, with clearly numbered sections. Include an applicability statement, roles and responsibilities section, and exception process.

That single prompt, well-filled-out, will give you a first draft that's 70-80% of the way there. The framework reference is doing the heavy lifting — it gives the AI a known structure to work from rather than generating generic boilerplate.
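If you draft more than a couple of policies, it's worth turning that template into a tiny script so the structure stays consistent across documents. A minimal Python sketch — every field name and sample value below is illustrative, not from any real environment:

```python
# Fill the Step 1 prompt template from structured inputs, so the same
# prompt skeleton is reused for every policy you draft.
# All parameter names and example values are illustrative.

TEMPLATE = (
    "I need to draft a {policy_name} for a {org_profile} organization. "
    "We align to {framework}. The policy should cover the following "
    "control areas: {controls}. Our environment includes {environment}. "
    "Write the policy in formal but readable language, with clearly "
    "numbered sections. Include an applicability statement, roles and "
    "responsibilities section, and exception process."
)

def build_prompt(policy_name, org_profile, framework, controls, environment):
    """Return the filled-out drafting prompt as a single string."""
    return TEMPLATE.format(
        policy_name=policy_name,
        org_profile=org_profile,
        framework=framework,
        controls=", ".join(controls),
        environment="; ".join(environment),
    )

prompt = build_prompt(
    "Access Control Policy",
    "200-person SaaS",
    "NIST SP 800-53 rev. 5",
    ["AC-2 account management", "AC-6 least privilege"],
    ["AWS", "Okta for identity", "Intune for endpoints"],
)
print(prompt)
```

The point isn't automation for its own sake; it's that a second policy drafted six months later gets the exact same structure, which makes the resulting documents easier to review side by side.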

Step 2: Feed It Your Existing Documentation

If you have existing policies, even outdated ones, feed them in. This is where Claude's longer context window is genuinely useful — you can paste in a 30-page existing policy document and ask it to modernize the language, add missing controls, and restructure to match your target framework.

Here is our existing [policy name], last updated in [year]. Please review it against [framework] and: 1) Identify gaps where required controls are not addressed, 2) Flag language that is ambiguous or unenforceable, 3) Produce a revised version that addresses the gaps while preserving our organization-specific procedures.

This approach is dramatically better than starting from scratch because it preserves institutional knowledge. Your existing policy, even if it's poorly written, contains decisions about how your organization operates. AI starting from zero will miss all of that.

Step 3: Iterate Section by Section

Don't try to perfect the whole policy in one pass. Take the first draft and work through it section by section. This is where you catch the subtle errors.

For each section, I use prompts like:

Review section 4.3 (Access Review Procedures). Our current process is: managers review access quarterly using [tool name], with IT Security auditing a 10% sample. Revise this section to accurately reflect our process while ensuring it meets [framework control ID] requirements.

The section-by-section approach forces you to actually read what the AI produced and validate it against reality. This is not optional. Skipping this step is how you end up with policies that reference controls you haven't implemented.
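To make the section-by-section pass less tedious, you can split the draft mechanically and work through the pieces. A rough sketch that assumes numbered headings like "4.3 Access Review Procedures" at the start of a line — adjust the pattern to however your drafts are actually structured:

```python
import re

# Split a numbered policy draft into its sections so each one can be
# reviewed and re-prompted individually. Assumes headings of the form
# "4.3 Title" at the start of a line; the pattern is an assumption.

SECTION = re.compile(r"^(\d+(?:\.\d+)*)\s+(.+)$", re.MULTILINE)

def split_sections(text):
    """Return {section number: heading} for every numbered heading found."""
    return {num: title.strip() for num, title in SECTION.findall(text)}

draft = """4 Access Control
4.1 Provisioning
4.2 Deprovisioning
4.3 Access Review Procedures"""

print(split_sections(draft))
```

A real draft has body text under each heading, so in practice you'd capture the text between headings too — but even a heading inventory gives you a checklist to tick off as you validate each section against reality.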

Step 4: The Control Mapping Review

This is the step most people skip, and it's the most important one. AI is excellent at generating plausible-sounding control mappings that are subtly wrong.

I've seen ChatGPT map a password policy requirement to the wrong NIST SP 800-53 control — it looked right, the language was similar, but it was referencing an authentication control when the actual requirement was an identification control. Both sit in the same IA family, which is exactly why the error is easy to miss. The difference matters when an auditor reviews your documentation.

For each control reference in this policy, verify the mapping is correct. List each control ID, the requirement it maps to, and confirm the mapping is accurate. Flag any mappings where the policy language doesn't fully satisfy the control requirement.

Even with this prompt, you need to spot-check. AI will sometimes confirm its own incorrect mappings. Pull up the actual framework document and verify at least the critical controls manually.
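One cheap first-pass check can be automated before you even open the framework document: extract every control ID the draft cites and compare against the controls you've actually implemented. A sketch assuming NIST-style IDs like AC-2 — the implemented set and sample text are made-up example data:

```python
import re

# First-pass spot check: find NIST-style control IDs (e.g. AC-2, IA-4)
# cited in a draft policy and flag any not in your implemented set.
# The ID pattern and the example data below are assumptions; adjust
# both to your framework and environment.

CONTROL_ID = re.compile(r"\b([A-Z]{2}-\d+)\b")

def flag_unverified_controls(policy_text, implemented):
    """Return control IDs cited in the policy but not implemented."""
    cited = set(CONTROL_ID.findall(policy_text))
    return cited - implemented

draft = "Accounts are provisioned per AC-2 and reviewed per AC-7; MFA per IA-2."
implemented = {"AC-2", "IA-2"}

print(flag_unverified_controls(draft, implemented))  # → {'AC-7'}
```

This only catches references to controls you don't have — it can't tell you a mapping is semantically wrong. That part still requires pulling up the framework document and reading.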

Step 5: The Enforceability Test

A policy that can't be enforced is worse than no policy, because it creates a false sense of compliance. After your draft is solid, run it through an enforceability check:

Review this policy for enforceability. For each requirement, identify: 1) Can this be technically enforced or does it rely on user behavior? 2) How would compliance be measured? 3) Is the requirement specific enough to audit? Flag any requirements that are too vague to audit or too impractical to enforce consistently.

This prompt consistently surfaces problems like "users shall exercise caution when opening email attachments" — technically a policy statement, practically unmeasurable. AI is surprisingly good at identifying these because enforceability is a somewhat mechanical analysis: is there a clear action, a clear measurement, and a clear consequence?
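Because the analysis is mechanical, part of it can be scripted as a crude lint before you even prompt the AI. A sketch that flags common weasel phrases — the phrase list is a starting point I'd expect you to extend, not an exhaustive rule set:

```python
import re

# Crude enforceability lint: flag policy statements containing phrases
# that have no measurable action behind them. The phrase list is an
# illustrative starting point, not a complete rule set.

VAGUE = [
    r"exercise caution",
    r"\bappropriate\b",
    r"as needed",
    r"where possible",
    r"best effort",
    r"\breasonable\b",
]
VAGUE_RE = re.compile("|".join(VAGUE), re.IGNORECASE)

def flag_vague_statements(statements):
    """Return (statement, matched phrase) pairs that need rewriting."""
    hits = []
    for s in statements:
        m = VAGUE_RE.search(s)
        if m:
            hits.append((s, m.group(0)))
    return hits

policy = [
    "Users shall exercise caution when opening email attachments.",
    "Access reviews are performed quarterly by resource owners.",
    "The organization shall implement appropriate access controls.",
]

for stmt, phrase in flag_vague_statements(policy):
    print(f"VAGUE ({phrase}): {stmt}")
```

A statement passing this lint isn't necessarily enforceable — but a statement failing it almost certainly isn't, which makes it a useful filter for where to focus the AI-assisted rewrite.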

Common Pitfalls

The Plausible Hallucination

AI will generate policy language that sounds authoritative and references real frameworks but connects them incorrectly. It might cite "NIST SP 800-53 AC-7" when it means "AC-2." The control family is right, the specific control is wrong. These errors are hard to catch because they look right at a glance.

The Generic Trap

If you don't provide enough environmental context, AI defaults to generic enterprise language. "The organization shall implement appropriate access controls" means nothing. Push for specificity: which systems, which access controls, which review cycle.

The Compliance Theater Problem

AI is very good at producing policy language that satisfies a checklist without actually improving security. It will write beautiful paragraphs about risk assessment processes that nobody in your organization would ever follow. Your job in the review process is to ask: "Would we actually do this?"

Version Control Chaos

When you're iterating with AI, it's easy to lose track of which version is current. I keep a running document with a change log. Every significant AI revision gets noted with the date and what was changed. When the policy goes to review, the change log goes with it.
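The change log doesn't need tooling — a spreadsheet works — but even a few lines of Python can keep it honest. A sketch of the one-line-per-revision format described above, with made-up example entries:

```python
from datetime import date

# Running change log kept alongside each draft: one line per significant
# revision, recording when it happened, which tool (or human) made it,
# and what changed. The example entries below are fabricated.

def log_revision(changelog, tool, summary, when=None):
    """Append a dated entry to the change log and return it."""
    when = when or date.today().isoformat()
    changelog.append(f"{when} | {tool} | {summary}")
    return changelog

log = []
log_revision(log, "Claude", "Restructured section 4 to match target framework", when="2024-05-02")
log_revision(log, "manual", "Incorporated legal feedback on exception process", when="2024-05-09")

print("\n".join(log))
```

Whatever form it takes, the log travels with the policy into review, so reviewers can see what changed since the version they last read.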

The Review Workflow

AI generates the draft. You validate and revise. But the review process should still involve humans who weren't part of the drafting. My workflow:

  • Draft: AI + me, following the steps above (2-4 hours per policy)
  • Technical review: A colleague in IT/Security reads for accuracy (1-2 days)
  • Stakeholder review: Affected business unit leaders read for practicality (1 week)
  • Legal/compliance review: If the policy has regulatory implications (1-2 weeks)
  • Final revision: I incorporate feedback, sometimes using AI to reconcile conflicting comments
  • Approval: Formal sign-off per your governance process

The AI part — the actual drafting — went from two weeks to two hours. The total process still takes 3-4 weeks because of the human review stages, and that's fine: the review time was never the problem. The problem was getting a decent first draft that people could actually react to.

AI doesn't replace your policy team. It makes your policy team faster. That's a meaningful difference, and it's enough.