Purpose
Use AI to support teaching, learning, and operations while protecting students, staff, and families; complying with laws; and maintaining community trust.
Scope & Definitions
- AI tools: Software that generates content or insights or automates tasks using machine learning.
- Generative AI (GenAI): AI tools that produce text, images, code, or analyses in response to prompts.
- Public AI: Open web tools (e.g., public chat sites).
- Enterprise/approved AI: IT-vetted tools with security, logging, and admin controls.
Data Classification — “Traffic Light”
When in doubt, treat data as Red.
- Green (Allowed): Public or de-identified info
- Example: generic lesson ideas, public standards, de-identified scenarios, district policies already published.
- Amber (Conditional): Internal, non-sensitive info in approved tools only
- Example: anonymized class trends, scrubbed meeting notes, process docs without names or IDs.
- Red (Prohibited in AI unless written approval & enterprise controls):
- Student PII (names, addresses, emails, phone numbers, IDs, photos tied to identity).
- Grades, test scores, discipline records, IEP/504 information, health or counseling data (FERPA/IDEA/Section 504).
- Staff HR records, SSNs, payroll/banking data.
- Security credentials, network diagrams, access codes.
- Attorney-client privileged or litigation-related information.
Never paste Red data into public AI tools.
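As an illustration only (not a district-mandated tool), the sketch below shows how District IT might screen a draft for common Red-data markers before it is pasted into an approved AI tool. The patterns, student ID format, and function name are hypothetical examples and are not an exhaustive or authoritative PII detector; a pattern check never replaces human judgment about data classification.

```python
import re

# Hypothetical, non-exhaustive patterns for common Red-data markers.
# A real check would be maintained and reviewed by District IT.
RED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "student ID (assumed format)": re.compile(r"\bSTU-\d{6}\b"),
}

def flag_red_data(text: str) -> list[str]:
    """Return the names of any Red-data patterns found in the text."""
    return [label for label, pattern in RED_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Please summarize: contact jane.doe@example.org about STU-123456."
    findings = flag_red_data(draft)
    if findings:
        print("Do not submit - possible Red data detected:", ", ".join(findings))
    else:
        print("No obvious Red data found - human review is still required.")
```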
Approved Tools & Access
- Staff may use only AI tools on the Approved AI Tools List maintained by District IT.
- Access must use district accounts (SSO/MFA where available).
- Do not use personal AI accounts for official school work or with student data.
Acceptable Use
Examples of appropriate use of AI (with human review):
- Drafting: lesson outlines, rubrics, parent letters, newsletters, procedures, job aids.
- Instructional support: generating practice questions, writing prompts, examples (reviewed by the teacher).
- Operational support: summarizing policies, meeting notes, or public reports; drafting checklists and workflows.
- Data support: organizing non-identifiable data, suggesting visualizations (verification required).
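For example, "organizing non-identifiable data" might mean aggregating assessment results to class-level summaries and dropping all student identifiers before any AI-assisted drafting. The column names and values below are hypothetical; real exports would come from district-approved systems and be verified by the staff member.

```python
import pandas as pd

# Hypothetical row-level export; a real one would come from an approved district system.
scores = pd.DataFrame({
    "student_name": ["A. Student", "B. Student", "C. Student", "D. Student"],
    "class_period": [1, 1, 2, 2],
    "unit_3_score": [78, 91, 84, 69],
})

# Drop identifiers and keep only class-level aggregates (Amber, not Red).
class_trends = (
    scores.drop(columns=["student_name"])
          .groupby("class_period")["unit_3_score"]
          .agg(["mean", "min", "max", "count"])
          .round(1)
)

# Only this de-identified summary, never the row-level data, would be shared
# with an approved AI tool (e.g., to draft a narrative summary for staff use).
print(class_trends)
```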
Prohibited Use
AI must not be used to:
- Make final decisions about grades, student placement, discipline, special education, or hiring without human review and documented professional judgment.
- Enter Red data into any AI tool that is not explicitly approved for that data type.
- Bypass district security controls or export AI logs/outputs to personal devices or accounts.
- Generate content that is biased, harassing, discriminatory, misleading, or otherwise violates law or district policy.
Human Oversight (“Human-in-the-Loop”)
- Staff remain responsible for all AI-assisted work.
- Verify facts, dates, citations, and any numbers before use.
- Teachers must review AI-generated materials for age appropriateness, accuracy, and alignment with curriculum and district values.
- Communications to parents, the public, or the Board that were drafted with AI assistance must be reviewed by the sender and, when appropriate, by a supervisor.
Student Use of AI
- Student use of AI must follow the district’s Student Acceptable Use Policy and any classroom guidelines set by the teacher and principal.
- AI may not be used to cheat, plagiarize, or misrepresent academic work.
- Teachers must explicitly communicate when and how AI may be used on assignments and how to cite such use.
Records, Privacy, and Compliance
- AI-generated content used for official district business is subject to public records and retention laws and to student privacy laws (e.g., FERPA).
- Use only tools that support appropriate retention, export, and access controls.
- Do not store or process Red-classified student data in systems that are not covered by a signed data-privacy or vendor agreement.
Procurement & Vendor Requirements (for AI-enabled Products)
Before adopting AI-enabled tools or features, the district must ensure:
- Student and staff data is not used to train public models without explicit written consent from the district.
- Data ownership, security (encryption, breach notification), and audit rights are defined in the contract.
- Vendors disclose sub-processors and data locations and provide admin controls for logging, access, and retention.
- Tools comply with applicable privacy and accessibility requirements.
Training & Incident Reporting
- Staff will receive periodic training on safe AI use, data classification, student privacy, and academic integrity.
- Suspected data exposure, misuse of AI, or harmful outputs must be reported immediately to the building principal and District IT (following existing incident reporting procedures).
Governance & Changes
- New AI use cases or tools must be reviewed and approved by District IT and district leadership before use.
- This policy will be reviewed at least annually and updated as laws, technologies, and best practices evolve.