
AI-Assisted Impact Scoping Playbook

Equip CEQA teams with language models that can draft project descriptions, interrogate baseline datasets, and flag the impact topics most likely to trigger focused analysis—without sacrificing defensibility.

Level

Intermediate

Implementation window

4–6 week pilot

Core team

CEQA planner · Environmental analyst · NLP engineer

Key outcomes

Structured project narratives, validated baseline data, prioritized impact screening

Guide navigation

Stand up your impact scoping copilot

Move through discovery, model configuration, and operational rollout with reusable prompts, data validation routines, and stakeholder checklists built for CEQA timelines.

01 · Strategic framing

Anchor your scoping copilot to agency goals

Define how AI-assisted scoping advances agency priorities, from faster screening letters to defensible issue identification. Align stakeholders on scope, success metrics, and required safeguards before touching prompts.

Problem statements

Document current scoping bottlenecks and litigation risks so the AI program targets measurable gaps.

  • Late discovery of missing baseline datasets
  • Inconsistent Appendix G topic coverage
  • Manual effort stitching project descriptions

Value hypotheses

Estimate time savings, quality gains, and risk reduction when LLMs draft narratives and scan inputs.

  • Cut first-draft scoping memo turnaround by 40%
  • Flag 95% of missing required datasets before SME review
  • Produce consistent issue matrices across projects

Guardrails

Establish model usage boundaries, human sign-offs, and documentation standards; a logging sketch follows the list below.

  • Require planner validation on draft findings
  • Log prompts, model versions, and outputs
  • Segment sensitive location data at rest
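
The logging guardrail can start as a single append-only record per model call. Below is a minimal Python sketch assuming a JSON Lines audit file; the field names, the audit_log.jsonl path, and the log_interaction helper are illustrative choices, not a prescribed schema.

    # Append-only audit record for the "log prompts, model versions, and outputs" guardrail.
    # Field names and the JSONL destination are illustrative, not a prescribed schema.
    import hashlib
    import json
    from datetime import datetime, timezone

    def log_interaction(path, project_id, model_version, prompt, output, reviewer=None):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "project_id": project_id,
            "model_version": model_version,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # quick integrity check
            "prompt": prompt,
            "output": output,
            "reviewer": reviewer,  # filled in when the planner signs off
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_interaction("audit_log.jsonl", "CEQA-2024-017", "model-v1",
                    "Draft the project description for ...", "Draft text ...")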

02 · Data prerequisites

Audit baseline datasets before prompting

Impact scoping is only as strong as the inventories backing it. Stand up lightweight pipelines that catalog available data, score completeness, and route gaps for remediation.

Baseline data audit

Compile an inventory describing datasets, formats, vintages, and custodians, as sketched after the list below.

  • Create metadata cards for land use, traffic, air quality, biology, noise
  • Score freshness, spatial coverage, and regulatory adequacy
  • Track gaps that block Appendix G topic evaluation
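
One possible shape for a metadata card and a freshness check, written as a minimal Python sketch; the topic list, vintage thresholds, and pass/fail rule are assumptions to be replaced with agency standards.

    # Illustrative metadata card plus a freshness/coverage check for the baseline audit.
    # Topics, vintage thresholds, and the staleness rule are assumptions, not agency policy.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class MetadataCard:
        topic: str             # e.g. "traffic", "air quality"
        custodian: str         # responsible owner or publisher
        vintage: date          # date the dataset was last updated
        spatial_coverage: str  # e.g. "citywide", "project site only"
        file_format: str       # e.g. "GeoJSON", "CSV"

    MAX_AGE_YEARS = {"land use": 2, "traffic": 2, "air quality": 3, "biology": 1, "noise": 5}

    def audit(cards, as_of):
        """Return (topic, problem) pairs that block Appendix G topic evaluation."""
        gaps = []
        have = {card.topic: card for card in cards}
        for topic, max_age in MAX_AGE_YEARS.items():
            card = have.get(topic)
            if card is None:
                gaps.append((topic, "missing"))
            elif (as_of - card.vintage).days > max_age * 365:
                gaps.append((topic, f"stale (older than {max_age} yr)"))
        return gaps

    cards = [MetadataCard("traffic", "City DOT", date(2021, 6, 1), "citywide", "CSV")]
    print(audit(cards, date.today()))  # traffic is stale; the other four topics are missing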

Data access pipelines

Integrate datasets into model-ready stores with clear access controls; a quality-check sketch follows the list.

  • Normalize formats using ETL/ELT workflows with data quality checks
  • Provide planners a governed catalog with search and lineage
  • Segment proprietary or personal data with policy-based access
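
A minimal sketch of one pipeline step, assuming baseline layers arrive as CSV extracts; the required columns and row-level checks are placeholders for whatever rules the governed catalog enforces.

    # One ETL step with inline data-quality checks. Column names, the numeric
    # normalization, and the row-level rules are illustrative placeholders.
    import csv

    REQUIRED_COLUMNS = {"parcel_id", "land_use_code", "acres"}

    def load_and_validate(path):
        rows, issues = [], []
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
            if missing:
                issues.append(f"missing columns: {sorted(missing)}")
            for line_no, row in enumerate(reader, start=2):
                try:
                    row["acres"] = float(row["acres"])  # normalize before loading to the store
                except (KeyError, ValueError):
                    issues.append(f"line {line_no}: acreage is missing or non-numeric")
                    continue
                rows.append(row)
        return rows, issues  # issues feed the gap management playbook below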

Gap management playbook

Automate detection and routing of missing baseline inputs into remediation workflows, as sketched after the steps below.

Intake

The LLM generates gap tickets mapped to responsible data owners.

Resolution

Assign retrieval actions, such as commissioning surveys or sourcing GIS layers.

Verification

Require data stewardship sign-off before AI resumes scoping tasks.
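
A minimal intake sketch, assuming a simple owner roster and an optional LLM callable for drafting the request text; the DATA_OWNERS addresses and ticket fields are hypothetical.

    # Intake step: turn an audit gap into a ticket routed to the responsible owner.
    # The owner roster, addresses, and ticket fields are hypothetical; draft_with_llm
    # stands in for whatever model call the agency approves.
    DATA_OWNERS = {"traffic": "dot_modeling@agency.example", "biology": "bio_unit@agency.example"}

    def make_gap_ticket(topic, reason, project_id, draft_with_llm=None):
        ticket = {
            "project_id": project_id,
            "topic": topic,
            "reason": reason,
            "owner": DATA_OWNERS.get(topic, "data_stewards@agency.example"),
            "status": "open",  # closed only after data-steward sign-off (verification step)
        }
        if draft_with_llm is not None:  # optional: let the model draft the request narrative
            ticket["request_text"] = draft_with_llm(
                f"Write a two-sentence data request for the {topic} gap: {reason}")
        return ticket

    print(make_gap_ticket("traffic", "stale (older than 2 yr)", "CEQA-2024-017"))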

03 · Prompt engineering

Design modular prompts that mirror CEQA scoping logic

Translate agency templates, Appendix G checklists, and jurisdictional thresholds into structured prompt components. Keep them version-controlled and reusable across project types.

Project narrative scaffold

Use chain-of-thought steps to gather inputs and produce consistent project description drafts; see the prompt sketch after this list.

  • Solicit land use, construction phasing, utilities, approvals
  • Generate tables for project components and phasing
  • Insert citation placeholders linked to data sources
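
One possible scaffold, shown as a Python template string; the step ordering, placeholder names, and the [SOURCE]/[DATA GAP] markers are illustrative rather than a required agency template.

    # Illustrative chain-of-thought scaffold for project description drafts.
    # Placeholder names and the [SOURCE]/[DATA GAP] markers are assumptions.
    NARRATIVE_PROMPT = """You are drafting a CEQA project description.
    Work through the steps in order and keep your intermediate notes.

    1. Summarize the proposed land uses from: {land_use_summary}
    2. List construction phases and durations from: {phasing_table}
    3. Identify utility connections and required approvals from: {approvals_list}
    4. Draft the project description with a components table and a phasing table.
    5. Insert a citation placeholder [SOURCE: dataset_id] after every factual claim.
    Never state a fact that is absent from the inputs; write [DATA GAP] instead."""

    prompt = NARRATIVE_PROMPT.format(
        land_use_summary="120-unit multifamily infill with 8,000 sf ground-floor retail",
        phasing_table="Demolition (2 mo), grading (3 mo), vertical construction (14 mo)",
        approvals_list="Site plan review, encroachment permit, construction SWPPP")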

Baseline interrogation guide

Prompt the model to analyze data completeness and highlight missing supporting evidence; a prompt-builder sketch follows the list.

  • Ask the model to assess dataset timestamps versus policy thresholds
  • Flag missing field surveys by season or species coverage
  • Recommend targeted data collection tasks
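
A minimal prompt builder, reusing the metadata-card shape sketched in section 02; the freshness rules and the requested answer format are assumptions.

    # Build a baseline interrogation prompt from metadata cards (see the audit sketch
    # in section 02). Freshness rules and the requested answer format are assumptions.
    def interrogation_prompt(cards, max_age_years):
        inventory = "\n".join(
            f"- {c.topic}: vintage {c.vintage}, coverage {c.spatial_coverage}" for c in cards)
        rules = "\n".join(f"- {topic}: no older than {years} years"
                          for topic, years in max_age_years.items())
        return (
            "Compare the baseline inventory against the freshness rules.\n"
            f"Inventory:\n{inventory}\n\nRules:\n{rules}\n\n"
            "For each topic answer adequate, stale, or missing, then recommend one "
            "targeted data collection task (e.g., a seasonal survey or updated counts).")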

Impact screening matrix

Generate annotated matrices that align with Appendix G categories and local thresholds; one possible row shape is sketched below.

  • Score impact probability, severity, and data confidence
  • Reference relevant regulatory triggers and guidance
  • Highlight issues requiring SME interviews or fieldwork
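
One possible row shape for the matrix, written to CSV so it can travel with the administrative record; the 1-5 scales, column names, and example values are illustrative.

    # Illustrative impact screening matrix row exported to CSV. The 1-5 scoring
    # scales, column names, and example values are assumptions to calibrate locally.
    import csv

    FIELDS = ["appendix_g_topic", "probability", "severity", "data_confidence",
              "regulatory_trigger", "needs_sme_review", "rationale"]

    rows = [{
        "appendix_g_topic": "Biological Resources",
        "probability": 4,
        "severity": 3,
        "data_confidence": 2,  # low confidence routes the topic to fieldwork
        "regulatory_trigger": "CDFW protocol survey window",
        "needs_sme_review": True,
        "rationale": "No protocol-level survey on file for the current season.",
    }]

    with open("impact_matrix.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)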

04 · Automation modules

Operationalize the scoping copilot inside review workflows

Integrate LLM outputs with project management systems, GIS viewers, and document templates so planners see AI assistance within their existing tools.

Module 01

Intake brief generator

Convert applicant submittals into standardized project briefs with embedded data checks, such as the acreage check sketched below.

  • Parse PDFs, CAD exports, and application forms
  • Flag missing attachments or inconsistent acreage totals
  • Export briefs to project management tools
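
A minimal consistency check, assuming the pypdf library for text extraction; the acreage regex, tolerance, and function names are illustrative.

    # Flag acreage figures in a submittal that disagree with the application total.
    # Assumes the pypdf library; the regex, tolerance, and names are illustrative.
    import re
    from pypdf import PdfReader

    ACREAGE = re.compile(r"(\d+(?:\.\d+)?)\s*acres?", re.IGNORECASE)

    def acreage_mentions(pdf_path):
        reader = PdfReader(pdf_path)
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
        return [float(match.group(1)) for match in ACREAGE.finditer(text)]

    def check_acreage(pdf_path, application_total, tolerance=0.1):
        mentions = acreage_mentions(pdf_path)
        inconsistent = [a for a in mentions if abs(a - application_total) > tolerance]
        return {"mentions": mentions, "inconsistent": inconsistent}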

Module 02

Impact prioritization engine

Pair LLM reasoning with rule-based thresholds to rank impact topics for SME focus, as sketched after this list.

  • Crosswalk project features with Appendix G triggers
  • Blend in historical EIR review findings from similar projects
  • Push high-risk topics to SME queues with justification
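
A minimal blend of rules and model scores; the trigger thresholds, the 60/40 weighting, and the call_llm callable are assumptions to tune during the pilot.

    # Rank impact topics by combining rule-based triggers with model-assigned scores.
    # Trigger thresholds, the 60/40 weighting, and call_llm are assumptions.
    TRIGGER_RULES = {
        "Air Quality": lambda p: p["construction_months"] >= 12,
        "Noise": lambda p: p["distance_to_sensitive_receptor_ft"] < 500,
        "Transportation": lambda p: p["daily_trips"] > 110,
    }

    def prioritize(project, llm_scores, call_llm):
        queue = []
        for topic, rule in TRIGGER_RULES.items():
            rule_hit = rule(project)
            score = 0.6 * llm_scores.get(topic, 0) + 0.4 * (5 if rule_hit else 0)
            if score >= 3:  # push to the SME queue with a drafted justification
                justification = call_llm(
                    f"In two sentences, justify focused review of {topic} for: {project}")
                queue.append({"topic": topic, "score": round(score, 1),
                              "rule_hit": rule_hit, "justification": justification})
        return sorted(queue, key=lambda item: -item["score"])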

Module 03

Scoping memo assembler

Auto-assemble scoping memos and IS/MND outlines with traceable citations and reviewer tasks; an assembly sketch follows the list.

  • Insert baseline summaries and source references
  • Create issue-specific task lists and due dates
  • Generate export-ready Word or Google Docs packages
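
A minimal assembly sketch, assuming the python-docx package and the matrix rows and baseline summaries produced earlier; the headings, ordering, and [SOURCE] convention are illustrative.

    # Assemble a Word memo from baseline summaries and matrix rows. Assumes the
    # python-docx package; section headings and the [SOURCE] tags are illustrative.
    from docx import Document

    def assemble_memo(project_id, baseline_summaries, matrix_rows, out_path):
        doc = Document()
        doc.add_heading(f"Scoping Memo - {project_id}", level=1)
        doc.add_heading("Baseline Summary", level=2)
        for topic, summary in baseline_summaries.items():
            doc.add_paragraph(f"{topic}: {summary} [SOURCE: {topic}_dataset]")
        doc.add_heading("Impact Topics for Focused Review", level=2)
        for row in matrix_rows:
            bullet = doc.add_paragraph(style="List Bullet")
            bullet.add_run(f"{row['appendix_g_topic']}: ").bold = True
            bullet.add_run(f"{row['rationale']} Reviewer task due {row.get('due_date', 'TBD')}.")
        doc.save(out_path)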

05 · Quality controls

Embed defensibility checks at every handoff

Pair AI efficiency with rigorous human review, documentation, and continuous monitoring to satisfy agency counsel and external reviewers.

Validation loops

Set up tiered QA to catch hallucinations, missing citations, and misclassified impacts; a citation lint sketch follows the list.

  • Automated linting for style, citations, and dataset versions
  • Reviewer sign-offs for each impact topic within the issue matrix
  • Legal counsel review of thresholds and findings before release
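
A minimal lint pass, assuming drafts carry the [SOURCE: dataset_id] placeholders from the narrative scaffold; the tag syntax and paragraph-level rule are assumptions.

    # Lint a draft: every non-empty paragraph needs at least one [SOURCE: ...] tag,
    # and every tag must resolve to a cataloged dataset ID. Tag syntax is an assumption.
    import re

    SOURCE_TAG = re.compile(r"\[SOURCE:\s*([\w.-]+)\]")

    def lint_citations(draft_text, catalog_ids):
        findings = []
        paragraphs = [p for p in draft_text.split("\n\n") if p.strip()]
        for number, paragraph in enumerate(paragraphs, start=1):
            tags = SOURCE_TAG.findall(paragraph)
            if not tags:
                findings.append(f"paragraph {number}: no citation tag")
            for tag in tags:
                if tag not in catalog_ids:
                    findings.append(f"paragraph {number}: unknown dataset '{tag}'")
        return findings  # route non-empty findings back to the drafting planner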

Monitoring & feedback

Instrument usage analytics and escalate anomalies into remediation sprints; a change-log sketch follows the list.

  • Track prompt effectiveness, data gap recurrence, and review cycle time
  • Maintain a model change log with dataset refresh dates
  • Run quarterly calibration sessions with planners and SMEs
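
A minimal sketch of the model change log and one cycle-time metric; the field names and the JSONL file are illustrative, and dataset refresh dates are passed as plain strings.

    # Append a model change-log entry and compute median review cycle time.
    # Field names and the JSONL destination are illustrative; refresh dates are strings.
    import json
    import statistics
    from datetime import datetime, timezone

    def log_model_change(path, model_version, dataset_refresh_dates, reason):
        entry = {
            "date": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "dataset_refresh_dates": dataset_refresh_dates,  # e.g. {"traffic": "2025-03-01"}
            "reason": reason,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    def median_review_days(review_cycles):
        """review_cycles: list of (draft_date, signoff_date) date pairs."""
        return statistics.median((signoff - draft).days for draft, signoff in review_cycles)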

06 · Implementation

Sequence pilots, training, and enterprise rollout

Launch small, measure impact, and scale responsibly. The roadmap below structures work into digestible waves with clear exit criteria.

Phase 01

Discovery sprint

Assess data availability, select a pilot project, and formalize success metrics.

  • Stakeholder workshops
  • Data gap inventory
  • Legal review of guardrails

Phase 02

Pilot build

Configure prompts, integrate datasets, and run supervised scoping cycles.

  • Model configuration
  • Baseline QA dashboards
  • Reviewer training sessions

Phase 03

Pilot evaluation

Measure accuracy, cycle time, and planner satisfaction before scaling.

  • Calibration workshops
  • Gap remediation backlog
  • Executive go/no-go

Phase 04

Scale & sustain

Roll out production workflows across programs with continuous improvement loops.

  • Playbook publication
  • Quarterly retraining cadence
  • Enterprise-level support model

07 · Ops toolkit

Daily operating checklist

Use this quick reference to keep scoping copilots accurate, transparent, and aligned with CEQA best practices.

Before running prompts

  • Confirm baseline datasets are refreshed and cataloged
  • Load project-specific constraints (e.g., overlays, sensitive receptors)
  • Review guardrails or legal updates affecting impact thresholds

After outputs are generated

  • Validate citations and dataset references
  • Assign SMEs to high-priority impact findings
  • Archive logs and reviewer annotations to the audit trail

08 · Further reading

Templates, tools, and references

Extend the playbook with these starting points and benchmarks.

Template bundles

  • Appendix G impact screening matrix (CSV + prompt)
  • Project description chain-of-thought prompt library
  • Baseline data audit workbook (Google Sheets)

Reference standards

  • Governor’s Office of Planning and Research technical advisories
  • Regional air quality management plan thresholds
  • USFWS and CDFW species survey protocols