
AI in Schools Without Safeguards: How to Prevent a New Education Divide
Artificial Intelligence (AI) is making its way into classrooms at an unprecedented pace. While this rapid integration is promising, it raises an important question: will AI help bridge learning gaps, or will it exacerbate them? Without robust safeguards, we risk creating further inequality. This article examines the challenges and offers actionable strategies that district leaders, educators, vendors, and policymakers can use to move from excitement to safe, equitable outcomes.
Why This Matters Now
Schools are currently experimenting with various AI tools for tutoring, lesson planning, grading, translation, and accessibility. Early research indicates that AI can enhance teacher productivity and tailor learning experiences. However, access to these tools is inconsistent, implementations differ widely, and oversight remains inadequate. Failing to address these realities means AI might entrench the very disparities schools are striving to eliminate.
This piece builds on reporting by Government Technology, providing additional guidance and resources for educators to implement AI responsibly.
The Promise of AI in Education
When utilized responsibly, AI has the potential to greatly benefit classrooms:
- Personalized Support: AI can offer adaptive practice, formative feedback, and multilingual explanations tailored to meet students where they are.
- Relief for Teacher Workload: By drafting lesson plans, differentiating materials, and generating quizzes, AI enables teachers to focus on building relationships and effective instruction.
- Accessibility and Inclusion: Features like speech-to-text, text simplification, and translation help eliminate obstacles for students with disabilities and multilingual learners.
- Family Engagement: Tools that provide real-time translation and summaries help parents stay informed about their children’s progress and support learning at home.
The U.S. Department of Education encourages educators to explore these benefits while prioritizing human judgment, transparency, and a commitment to safety (US ED, 2023).
The Risks of AI Without Safeguards
If left unchecked, AI could amplify existing disparities in access, opportunities, and educational outcomes.
1) Uneven Access to Connectivity and Devices
Students without access to AI tools cannot benefit from their potential. A digital divide continues to affect millions. A Pew analysis shows that lower-income households are significantly less likely to have reliable broadband or a desktop/laptop, creating a persistent homework gap (Pew, 2021). The Affordable Connectivity Program, which provided discounted internet for over 20 million households, ended in 2024, raising concerns about access (FCC, 2024).
Even when schools supply devices, the availability of bandwidth and after-hours access can vary significantly by neighborhood. This exacerbates inequities when AI-driven homework, tutoring, or research assumes constant internet access.
2) Quality Gaps in Tools and Implementation
Wealthier districts can afford higher-quality AI platforms, custom models, and professional coaching. In contrast, less affluent areas often depend on free tools that may lack essential safeguards and sufficient data protection. Without standardized purchasing agreements across districts, student experiences and outcomes can vary drastically by ZIP code.
3) Biased Models Lead to Biased Outcomes
AI systems are trained on data that may not be representative of all learners. This can result in inaccurate predictions, stigmatizing labels, or inconsistent performance across languages and dialects. The NIST AI Risk Management Framework stresses the need for careful evaluation of fairness, validity, and specific harms before deployment (NIST, 2023).
4) Surveillance and Chilling Effects
Certain proctoring, monitoring, or emotion-detection tools have raised concerns regarding civil rights. Documented cases show that face detection and environmental scans can fail, particularly for students with darker skin tones or in atypical settings, prompting legal scrutiny (EFF, 2020; Washington Post, 2021). Excessive monitoring may deter students from seeking help or expressing their creativity.
5) Privacy Risks and Opaque Data Flows
Student data is sensitive, and it attracts malicious actors. Human Rights Watch found that, during the pandemic, many educational technology tools embedded advertising trackers and collected data well beyond what instruction required (HRW, 2022). Without strict data minimization and clear vendor contracts, AI tools could widen this exposure.
6) Accessibility and Language Barriers
If AI products are not built to WCAG 2.2 AA standards and tested with actual users, students with disabilities and multilingual learners will be left behind (W3C WCAG). High-quality translation, reading-level adjustments, and speech interfaces should be standard inclusions, not premium add-ons available only in select schools.
7) Gaps in Teacher Capacity
Effective professional development for teachers is crucial for positive outcomes. The RAND Corporation has reported that while interest in AI is high, training and guidance for classroom application are inconsistent, which could widen disparities from class to class and school to school (RAND, 2024).
What Safeguards Look Like in Practice
Implementing safeguards isn’t about banning AI; rather, it’s about establishing visible, measurable protections that allow schools to harness the benefits of AI while minimizing potential harms.
Anchor on Recognized Frameworks
- NIST AI Risk Management Framework: A practical guide for identifying, measuring, and managing AI risks throughout its lifecycle (NIST, 2023).
- U.S. Department of Education Guidance: Stresses human oversight, transparency, and protecting students as data subjects while evaluating efficacy (US ED, 2023).
- White House Executive Order 14110: Directs agencies to create standards for secure and trustworthy AI, including in educational contexts (White House, 2023).
- UNESCO and OECD: Focus on human rights, equity, and transparency in AI systems used for education and assessment (UNESCO, 2023; OECD, 2019).
Strengthen Privacy and Data Governance
- Data Minimization: Collect only what is necessary for educational purposes; no ad tracking or selling of student data (see the sketch after this list).
- Clear Data Protection Agreements: Require audits, breach notifications, sub-processor lists, and restrictions on using student data for model training.
- Legal Compliance: Adhere to U.S. laws like FERPA and COPPA, as well as stricter state privacy laws; globally, comply with GDPR and child-focused guidelines such as the UK Age Appropriate Design Code.
- Meaningful Opt-Outs: Offer alternatives that do not use AI for high-stakes decisions and implement accessible complaint channels.
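To make data minimization concrete, here is a minimal Python sketch that scrubs obvious identifiers before a prompt ever leaves the district's network. The regex patterns and the SID-style ID format are illustrative assumptions, not a complete PII detector; a production system would use a vetted detection library tuned to the district's own identifier formats.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# PII-detection library and the district's actual identifier formats.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "student_id": re.compile(r"\bSID-\d{6}\b"),  # hypothetical district ID format
}

def scrub(text: str) -> str:
    """Replace likely identifiers with placeholders before text leaves the district."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Explain fractions to jamie.lee@example.org (SID-204817, 555-867-5309)."
print(scrub(prompt))
# -> Explain fractions to [EMAIL REDACTED] ([STUDENT_ID REDACTED], [PHONE REDACTED]).
```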
Design for Equity and Accessibility from Day One
- Accessibility: Ensure compliance with WCAG 2.2 AA, backed by third-party testing and evidence. Support screen readers, captions, and keyboard navigation, and design to reduce cognitive load.
- Language Equity: Provide high-quality multilingual support for common school languages, establishing benchmarks for translation accuracy.
- Fairness Checks: Conduct bias audits and create model and system cards that disclose training data sources, limitations, and subgroup performance (a minimal audit sketch follows this list).
- Human-in-the-Loop: Ensure that human review is mandatory for all high-stakes decisions (such as placement and discipline) with clear rationales and appeal processes.
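To make the fairness-check idea concrete, here is a minimal sketch of a subgroup audit, assuming you have labeled evaluation records for each student group. The subgroups, records, and 10 percent gap threshold are invented for illustration; set your own threshold before the pilot begins.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute per-group accuracy from (group, prediction_correct) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Toy evaluation records: (student subgroup, was the AI's output judged correct?)
records = [
    ("english_learner", True), ("english_learner", False), ("english_learner", False),
    ("native_speaker", True), ("native_speaker", True), ("native_speaker", False),
]

rates = subgroup_accuracy(records)
worst, best = min(rates.values()), max(rates.values())
print(rates)
if best - worst > 0.10:  # illustrative gap threshold
    print(f"Accuracy gap of {best - worst:.0%} exceeds threshold; investigate before scaling.")
```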
Procure with Aligned Standards
- Interoperability: Require adherence to 1EdTech standards like LTI, OneRoster, and Caliper for secure data exchange (1EdTech; see the sketch after this list).
- Evidence of Impact: Conduct pilot studies with clear success metrics and mixed-method evaluations, publicly sharing results.
- Security Posture: Select vendors with SOC 2 or ISO 27001 certifications, strong vulnerability management practices, and student-focused incident response plans.
- Prohibited Features: Exclude tools that enable emotion recognition, invasive biometric surveillance, and persistent webcam monitoring.
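For a sense of what standards-based integration looks like in code, the snippet below pulls a page of student records from a OneRoster-style REST endpoint. The base URL, bearer-token auth, and response shape are assumptions; real deployments vary by vendor and OneRoster version, and often use OAuth signing instead.

```python
import requests

# Hypothetical endpoint; OneRoster defines the REST path shape, but the
# base URL and auth scheme vary by vendor and version.
BASE = "https://sis.example-district.org/ims/oneroster/v1p1"
TOKEN = "replace-with-a-real-credential"

resp = requests.get(
    f"{BASE}/students",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 100},  # OneRoster supports paging parameters
    timeout=10,
)
resp.raise_for_status()
for student in resp.json().get("users", []):
    # Pull only the fields the AI tool actually needs; data minimization again.
    print(student.get("sourcedId"), student.get("givenName"))
```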
Invest in Capacity, Not Just Licenses
- Ongoing Professional Learning: Provide continuous coaching on prompt design, lesson integration, bias awareness, and data protection, while ensuring time for teachers to experiment.
- Student Digital Agency: Teach students how to use AI responsibly and critically, including proper citation and data protection.
- Community Engagement: Transparently publish AI usage policies, host forums, and include students and families in governance processes.
Policy Guardrails and the Current Landscape
Schools function within a growing framework of laws, guidelines, and standards. Here’s a brief overview of existing regulations and where gaps persist.
Core Laws and Regulations
- FERPA: Governs access to student education records and parental rights; it predates AI and does not directly address model training on data derived from those records.
- COPPA: Restricts data collection from children under 13; enforcement varies for school-authorized applications.
- State Student Privacy Acts: Several states exceed federal law by limiting targeted ads and secondary data use.
- GDPR and International Regulations: Enforce strict consent and transparency requirements for many global vendors.
National Guidance and Standards
- U.S. Department of Education: Cautions against automated decision-making in high-stakes contexts, advocating for transparency and rigorous evaluation (US ED, 2023).
- NIST AI RMF: This voluntary standard is increasingly utilized by public agencies and vendors for structured risk management.
- Executive Order 14110: Tasks agencies with developing safety standards, advancing privacy-preserving techniques, and promoting equity in AI (White House, 2023).
- UNESCO/OECD Principles: Emphasize human rights, teacher agency, and transparency as essential elements in AI for education.
Funding and Infrastructure
- Connectivity: The FCC has extended E-Rate support to Wi-Fi on school buses to help close the homework gap (FCC, 2023). However, the end of ACP subsidies makes home access harder (FCC, 2024).
- Cybersecurity: With a rise in K-12 cyber incidents, districts should adopt best practices and consider cybersecurity provisions in procurement contracts (K12 SIX).
Real-World Lessons Schools Can Apply Now
Lesson 1: Pilot with Purpose
Begin small with specific goals. For example, a middle school could test an AI writing assistant aimed at improving revision skills. Track outcomes on several dimensions: gains on writing rubrics, student autonomy, teacher time savings, and any observed bias or accessibility problems. Publish the results and refine the program based on what you learn.
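One way to quantify "gains on writing rubrics" is a standardized effect size comparing the pilot group with a comparison group. A minimal sketch, with invented rubric scores on a 0-6 scale:

```python
from statistics import mean, stdev

# Hypothetical end-of-semester rubric scores from a small writing pilot.
control  = [3.1, 3.4, 2.8, 3.0, 3.6, 3.2, 2.9, 3.3]
ai_group = [3.5, 3.9, 3.2, 3.8, 4.1, 3.4, 3.6, 3.7]

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(b) - mean(a)) / pooled

print(f"Effect size d = {cohens_d(control, ai_group):.2f}")
```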
Lesson 2: Safeguards in the Contract
Ensure vendor contracts clearly codify safeguards, including:
- No training on student data without explicit approval and public disclosure.
- Timelines for data deletion and secure export on demand.
- Security controls, third-party assessments, and transparency in incident reporting.
- Accessibility, language support, and fairness benchmarks accompanied by remediation strategies.
- Prohibitions on targeted advertising and trackers.
Lesson 3: Avoid High-Stakes Automation
Retain human oversight for critical decisions in admissions, placement, discipline, special education, and grading. Use AI as a tool to support decision-making, ensuring clear explanations and documentation of educator judgments.
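Here is a minimal sketch of that "AI recommends, educator decides" pattern, with hypothetical names and fields: the system keeps the model's stated rationale on record, but nothing takes effect without a named reviewer's documented sign-off.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    student_id: str
    suggestion: str     # e.g., "advanced placement track"
    rationale: str      # model's stated reasoning, kept for the record

def finalize(rec: Recommendation, reviewer: str, approved: bool, notes: str) -> dict:
    """No high-stakes recommendation takes effect without a named educator's sign-off."""
    return {
        "student_id": rec.student_id,
        "action": rec.suggestion if approved else "no change",
        "ai_rationale": rec.rationale,
        "reviewed_by": reviewer,   # documented judgment supports appeals later
        "reviewer_notes": notes,
    }

rec = Recommendation("stu-1042", "advanced placement track", "strong formative scores")
decision = finalize(rec, reviewer="m.rivera", approved=False,
                    notes="Scores are strong, but the student requested the current track.")
print(decision)
```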
Lesson 4: Train for Equity
Professional learning should integrate pedagogy with safety. For example, teachers can learn to configure AI to generate leveled reading passages while scrutinizing for stereotypes, vocabulary complexity, and cultural relevance before assigning them.
Lesson 5: Communicate Openly
Families have the right to understand where and why AI is utilized. Share information about AI applications, data practices, and opt-out processes. Ensure communication channels are available for reporting issues in various languages.
A District Checklist for Equitable AI
Employ this checklist to mitigate risks and enhance outcomes:
- Equity Impact Assessment: Consider who benefits, who is disadvantaged, and how you will measure unintended effects.
- Purpose and Evidence: Clarify the learning problem being addressed and how you will measure the effectiveness of AI solutions.
- Data Governance: Define what data is collected, stored, shared, and for how long, alongside access controls.
- Human Oversight: Determine where educator judgment factors into decision-making.
- Accessibility and Language: Ensure compliance with WCAG 2.2 AA, compatibility with assistive technologies, and integrated multilingual support.
- Bias Testing: Conduct subgroup performance assessments and develop mitigation plans; require vendor model cards.
- Interoperability: Implement standards-driven integration and maintain clean data exports.
- Security: Ensure robust security measures, such as encryption at rest and least-privilege access, along with incident response procedures (see the encryption sketch after this checklist).
- Professional Learning: Prioritize coaching, allotted time for safe experimentation, and identify communities of practice.
- Transparency: Maintain public policy visibility, procurement disclosures, and pilot study outcomes.
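For the security item above, here is a minimal sketch of encrypting a student record at rest with the Python cryptography library. The record contents are invented, and in practice the key would live in a managed secret store under least-privilege access rather than being generated inline.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Illustration only: in production the key comes from a managed secret
# store with least-privilege access, never from source code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"sourcedId": "stu-1042", "iep_notes": "extended time on assessments"}'
token = fernet.encrypt(record)          # ciphertext safe to store at rest
print(fernet.decrypt(token).decode())   # only key holders can read it back
```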
Frequently Asked Questions
Is AI Actually Useful for Learning, or Is It Just Hype?
AI can be beneficial for practice, feedback, and easing teacher workloads when aligned with clear instructional goals. Evidence varies across products: run pilots, evaluate against non-AI alternatives, and rely on rigorous evaluations and peer-reviewed research.
What AI Uses Are Too Risky for Schools?
Automated discipline, emotion recognition, and biometric surveillance pose significant risks and potential biases. High-stakes decisions should always involve human review, clear justifications, and avenues for appeals.
Can We Use Generative AI Without Exposing Student Data?
Yes. Prefer education-grade deployments that isolate school data, disable training on input data, and provide administrative controls. Require data protection agreements that clearly outline training, retention, access, and deletion procedures.
How Do We Ensure Equity If Some Students Do Not Have Home Internet?
Design AI programs for offline or low-bandwidth scenarios, provide school-based access hours, loan hotspots, and leverage initiatives like E-Rate to improve infrastructure. Offer non-digital homework options that don’t rely on AI features.
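One low-bandwidth pattern worth sketching: a tutor app can cache answers to common questions while on school Wi-Fi and serve them offline later. This sketch uses Python's standard shelve module; the fetch callable stands in for whatever online AI call the product actually makes.

```python
import shelve

def cached_explain(topic: str, fetch) -> str:
    """Serve a stored explanation when offline; fetch and store when connected."""
    with shelve.open("tutor_cache.db") as cache:  # local file, works without a network
        if topic in cache:
            return cache[topic]
        answer = fetch(topic)     # placeholder for the online AI call
        cache[topic] = answer
        return answer

# Usage: pre-load common topics on school Wi-Fi so they work at home without internet.
print(cached_explain("equivalent fractions", fetch=lambda t: f"(stub answer for {t})"))
```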
What Should We Tell Families About AI?
Communicate openly about the tools being used, data being collected, and reasons for their usage. Provide guidance on opting out and avenues for obtaining help, ensuring this information is accessible in the primary languages spoken within your district.
Conclusion: Equity is a Design Choice
AI alone will not inherently reduce or increase inequality. The outcomes largely depend on the choices schools make today. With appropriate safeguards, careful procurement processes, robust data governance, and investments in human capital, AI can enable more inclusive and effective teaching and learning. Without such measures, it risks becoming yet another privilege for those who are already advantaged.
Sources
- Government Technology – Sans Safeguards, AI in Education Risks Deepening Inequality
- U.S. Department of Education – Artificial Intelligence and the Future of Teaching and Learning (2023)
- NIST AI Risk Management Framework 1.0 (2023)
- Executive Order 14110 on Safe, Secure, and Trustworthy AI (2023)
- UNESCO – Guidance for Generative AI in Education (2023)
- OECD AI Principles (2019)
- Pew Research Center – Digital Divides Persist (2021)
- Federal Communications Commission – Affordable Connectivity Program Wind-Down (2024)
- FCC – E-Rate Funding for Wi-Fi on School Buses (2023)
- Human Rights Watch – EdTech and Children’s Rights During COVID-19 (2022)
- Electronic Frontier Foundation – Proctoring Apps Concerns (2020)
- Washington Post – AI Proctoring Scrutiny (2021)
- RAND – Artificial Intelligence and the Future of Teaching (2024)
- W3C – Web Content Accessibility Guidelines (WCAG 2.2)
- 1EdTech – Interoperability Standards (formerly IMS Global)
- California Consumer Privacy Act – Reference for Broader Privacy Context
- ACLU – Call to Ban Emotion Recognition in Schools
- K12 Security Information Exchange (K12 SIX) – K-12 Cybersecurity Resources