Key Takeaways
Educators are already using AI in schools, but without a clear purpose or systemwide support, adoption could widen existing gaps in access and advantage, strain resources, create confusion around priorities, and fall short of true transformation.
Foundational conditions, including a shared instructional vision, infrastructure, policy, and educator capacity, are essential to enable safe and effective use of AI.
Leaders can foster safe, impactful experimentation by anchoring AI efforts in student-centered pedagogy.
What is the problem?
Schools, systems, and the broader education sector are under mounting pressure to respond to the rise of AI. Media narratives, hype cycles, policy shifts, and the rapid spread of these tools in classrooms have created a sense of urgency and uncertainty that’s difficult to ignore.
In response, many actions across the sector have been reactive. Some schools and systems have rushed to adopt tools without clear plans, while others have banned them outright. Even when well-intended, these approaches contribute to uneven adoption, disjointed outcomes, and growing confusion among educators and students. They can also lead to wasted resources, expose districts to data privacy risks, and erode trust among families and staff.
Why is it important?
Without intentional leadership, the use of AI in schools and classrooms risks reinforcing existing inequities, compromising student privacy, and overwhelming educators. Left unchecked, fragmented adoption and unclear guidance will only deepen these challenges.
But this moment also presents a powerful opportunity. When aligned to a shared instructional vision and supported by the right conditions, AI can extend teachers’ capacity, personalize learning, and open up new possibilities for student-driven inquiry, creativity, and collaboration.
By embedding AI into broader strategies for instructional improvement and powerful, student-centered learning, schools can move from reacting to proactively designing future-ready education systems that meet the needs of all learners.
The research says...
Clear instructional goals and aligned leadership are essential for achieving meaningful and sustained improvements in teaching and learning, particularly when schools integrate new technologies like AI.
Without thoughtful planning to mitigate risks, technology adoption can widen opportunity gaps and reinforce existing societal biases, especially for students in under-resourced communities.
Ongoing, job-embedded professional learning increases teachers’ ability to use AI tools effectively and ethically, making it more likely that new practices will lead to positive student outcomes.
Continuous improvement models support schools in managing uncertainty, learning from early efforts, and scaling successful innovations in ways that reflect real-world classroom needs.
What is the solution?
Building the foundation for powerful, responsible, and effective AI use starts with aligning strategy to instruction. Rather than focusing first on tools or risks, school and system leaders should create conditions that support thoughtful experimentation and meaningful integration. This means preparing the system for change, enabling innovation through small tests, and building lasting educator capacity.
The following guidance and examples come from the School Teams AI Collaborative, a partnership between The Learning Accelerator and Leading Educators that brought together ~80 innovative educators from 19 schools across the country to collaborate and surface promising ways to integrate generative AI into their teaching and learning practices.
Start with a systemwide vision and readiness plan.
Before adopting any tool or developing new policies, schools and systems must define a clear, student-centered instructional vision for AI. This vision should guide all decisions, ensuring technology is used to support high-quality teaching and deeper learning. Aligning teams around this purpose lays the groundwork for coherent, scalable, and purposeful implementation.
What this looks like in practice:
Develop a shared instructional vision for AI use across the system. With input from academic, operational, and IT leaders, co-create a vision that clearly defines how AI will enhance high-quality teaching. This vision should build on and align with the system’s broader instructional technology vision, ensuring coherence across tools, platforms, and instructional priorities. It should also articulate how AI supports student-centered learning that is targeted and relevant, actively engaging, socially connected, and growth-oriented.
Conduct an infrastructure, access, and accessibility audit. Assess student and teacher access to devices, internet connectivity, and AI-compatible tools, both in school and at home, as well as the accessibility of these tools for learners with disabilities or those using assistive technologies. Use this data to guide investments and ensure that all learners can meaningfully engage with AI. For additional tools to guide this process, TLA’s Digital Equity Self-Assessment Tool and EdTech Systems Audit offer frameworks and templates to assess your current state, identify gaps, and prioritize infrastructure investments across schools and communities.
Establish clear policies that embed guardrails to protect students and enable responsible innovation. System-level policies should define expectations for AI use, grounding decisions in instructional goals while setting clear boundaries to safeguard student privacy, address algorithmic bias, ensure appropriate tool selection, and manage responsible classroom use. Within these policies, guardrails help educators navigate where experimentation is encouraged, where caution is needed, and where clear limits apply. Well-designed policies provide both protection and flexibility, serving as a lever to support thoughtful innovation as understanding and practice evolve.
Establish cross-functional leadership teams to coordinate implementation. Bring together instructional leaders, IT directors, legal staff, and educators to create coherent guidance, problem-solve barriers, and ensure decisions reflect technical and pedagogical needs.
Below are examples of how school systems lay the groundwork for responsible, instructionally grounded AI adoption through shared vision, infrastructure, and policy.
Catalyze change with early adopters.
Pilots and small tests of change help systems learn what works in their specific context. Rather than waiting for perfect conditions, leaders can identify early adopters—teachers, coaches, and students—to explore promising AI practices. These efforts spark momentum, surface potential challenges early, and generate practical insights to guide next steps and build internal leadership capacity. Starting small also helps systems manage risk, creating space to identify and address challenges before broader implementation.
What this looks like in practice:
Identify and support early adopter teams. Select a small, diverse group of educators and students to pilot promising AI strategies in real-world instructional settings. Include representation across grade levels, content areas, and school types to ensure findings are inclusive and transferable.
Design structured pilot programs with learning goals and feedback loops. Clarify instructional goals and desired outcomes at the outset. Build in regular reflection, classroom observations, and stakeholder feedback to refine implementation over time. Use shared tools to capture what’s working, what needs adjustment, and what remains uncertain.
Use pilot data to inform broader system learning and continuous improvement. Treat these early use cases as test beds for strategy, not just tools. Leverage insights to refine policy, professional learning, and resource allocation before scaling. Pilots also help develop implementation champions: educators and leaders who can mentor peers and support systemwide scaling when the time is right.
The strategies below highlight how districts empower early adopters to safely test, learn from, and shape the future of AI use in classrooms.
Build teacher capacity through job-embedded learning.
When used well, AI can extend the reach of great teaching, supporting, but never replacing, the expertise and relationships that skilled educators bring to learning. Teachers need time, training, and collaborative spaces to develop fluency with AI and align it to effective instructional practice. Building this capacity requires consistent, embedded support that helps educators apply what they learn directly in their classrooms.
What this looks like in practice:
Use early pilot insights to inform professional learning system-wide. Leverage what a select group of educators learns from intentionally experimenting with AI in real classrooms to shape PD priorities, design adult learning experiences, and model meaningful use cases. This ensures that professional development stays grounded in real, high-value instructional opportunities and challenges.
Focus learning on pedagogy and purpose, not just tools. Help educators explore how AI can support core instructional moves like differentiation, formative feedback, and content scaffolding. Equip teachers to ask not just “how does this tool work?” but “how does this tool help my students learn?”
Create time and space for collaborative learning and peer support. Build regular opportunities for teachers to plan, experiment, and reflect together. Encourage team teaching, peer observation, and coaching models, reinforcing a shared vision for responsible AI use. Differentiate supports to meet teachers where they are, whether early adopters or those new to AI. Use pilot insights to design inclusive professional learning that not only builds AI fluency but also strengthens broader digital literacy and instructional technology skills.
The following examples illustrate how schools support teachers in building fluency, experimenting responsibly, and integrating AI into meaningful instructional practice.
Take it further
School and system leaders can draw on a growing set of field-tested tools, research, and expert guidance to build strong foundations for powerful, responsible AI use. The resources below offer strategies to engage communities, learn in real time, and align AI efforts with broader goals for student-centered learning. Together, they reflect a collective effort across the education field to shape AI use in ways that support great teaching and deeper learning.
TLA’s Hop, Skip, Leapfrog Framework: This guide supports continuous improvement through small tests of change and helps districts identify which practices are ready to scale.
TLA’s Starting Smart with AI Briefs: These guides, for state education agency (SEA) and local education agency (LEA) leaders, offer concrete early actions in policy, instruction, and educator capacity.
Leading Educators’ VATT Framework: This framework helps system leaders make coherent, strategic decisions about AI use by defining the impact they want technology and AI to have on teaching and learning. It fosters shared ownership across teams by linking tools to meaningful practice, not just adoption.
RPPL’s AI in Professional Learning Report: This report outlines design principles, risks, and cautions for using AI to support educator growth, helping leaders ensure their AI strategies elevate high-quality professional learning.
Michigan Virtual’s AI Guidance for Schools: This guidance offers practical advice and sample policy language to help districts develop locally grounded guidelines for AI use in schools.
All4Ed’s AI for All Report: This report explores how AI can accelerate learning and improve access for historically underserved students, while outlining clear principles for equitable implementation.
Transcend’s AI and Extraordinary Learning for All: This toolkit frames how AI can be used to reimagine the learning experience, focusing on relevance, relationships, and equity.
TeachAI’s Foundational Policy Ideas for AI in Education: This resource outlines key policy levers and governance frameworks to help state and local leaders ensure safe, equitable, and effective implementation of AI tools in K-12 settings.
TeachAI’s AI Guidance for Schools Toolkit: This toolkit provides templates, checklists, and starter guidance to help school systems create AI policies and protocols.
