How to Conduct an AI Readiness Assessment for Your School
Schools that rush into AI adoption without understanding their starting point tend to fail in predictable ways: they buy tools their infrastructure can't support, roll out AI without policies that address inevitable academic integrity questions, train teachers who aren't interested while ignoring teachers who are ready, and allocate budget without knowing their actual ongoing costs. A 2024 ISTE report found that schools completing a structured readiness assessment before AI implementation were 3.2 times more likely to report sustained adoption after two years compared to schools that adopted tools opportunistically.
Readiness assessment isn't about determining whether your school is "ready enough" for AI — every school is ready for some level of AI adoption. The purpose is to understand where you are strong, where you have gaps, and what sequence of actions will produce the most progress with the least friction. A school with excellent infrastructure but no AI policy needs different first steps than a school with strong teacher interest but unreliable Wi-Fi.
This guide provides a five-domain readiness framework, a self-assessment rubric you can use immediately, interpretation guidance for your results, and a gap-closing action plan.
The Five Domains of AI Readiness
| Domain | What It Covers | Why It Matters |
|---|---|---|
| 1. Infrastructure | Internet bandwidth, Wi-Fi coverage, device availability, SIS/LMS capabilities | AI tools require reliable connectivity and adequate devices; infrastructure failures create frustration that kills adoption |
| 2. Policy | Acceptable use policy, data privacy framework, academic integrity guidelines, procurement procedures | AI without policy creates confusion, inequity, and legal risk; teachers won't commit to tools that might be prohibited next semester |
| 3. People | Staff AI literacy, teacher comfort level, IT support capacity, administrative knowledge | AI tools are only as effective as the people using them; skill gaps create dependence on a few champions rather than sustainable adoption |
| 4. Culture | Innovation norms, risk tolerance, collaboration practices, attitude toward change | Schools with fear-based cultures suppress AI experimentation; schools with trust-based cultures enable organic adoption |
| 5. Budget | Current edtech spending, available funding streams, sustainability planning, total cost of ownership awareness | AI adoption without budget clarity leads to subscription sprawl, funding cliffs, and eventual tool abandonment |
AI Readiness Self-Assessment Rubric
Score each item 1-4 based on your school's current state. Be honest — an inflated assessment produces a useless action plan.
Domain 1: Infrastructure
| Item | 1 — Not Ready | 2 — Developing | 3 — Ready | 4 — Advanced |
|---|---|---|---|---|
| Internet bandwidth | Frequent slowdowns; streaming/video unreliable | Adequate for basic use; struggles during peak demand | Reliable for all standard uses; supports simultaneous cloud tools | High-bandwidth; no performance issues even at full capacity |
| Wi-Fi coverage | Dead zones common; teachers report frequent disconnections | Coverage in most areas; some classrooms unreliable | Full building coverage; reliable in all learning spaces | Full coverage plus outdoor areas; documented heat maps; monitored |
| Student devices | Shared devices; less than 1:2 ratio | 1:2 or better; students share during some activities | 1:1 or near-1:1; devices reliably available | 1:1 with take-home; devices current; replacement cycle planned |
| LMS/SIS integration | No LMS or SIS, or systems don't integrate with third-party tools | LMS/SIS available but limited integration; manual data transfer common | LMS/SIS with SSO; some third-party integrations active | LMS/SIS with comprehensive API; SSO for all tools; automated data flows |
Domain 1 Score: _ / 16
Domain 2: Policy
| Item | 1 — Not Ready | 2 — Developing | 3 — Ready | 4 — Advanced |
|---|---|---|---|---|
| AI acceptable use policy | No policy exists; AI use is unaddressed | Informal guidance (verbal or email) but no formal policy | Written AUP covering staff and student AI use; board-approved | Comprehensive AUP with grade-level guidelines, review schedule, and incident protocol |
| Data privacy framework | No DPA process; privacy decisions made ad hoc | DPA template exists but not consistently applied | DPA required for all tools; FERPA compliance documented | DPA + state law compliance; privacy review board; vendor audit process |
| Academic integrity | Traditional honor code only; AI not addressed | General statement about AI and integrity; no specific guidelines | Assignment categories (prohibited/assisted/integrated); grade-level guidance | Comprehensive integrity framework with detection alternatives, response protocol, and teaching components |
| Procurement process | No formal process; individuals purchase tools independently | Centralized purchasing for major items; small purchases untracked | All edtech purchases require approval and privacy review | Formal approval workflow with duplicate check, privacy review, instructional review, and budget verification |
Domain 2 Score: _ / 16
Domain 3: People
| Item | 1 — Not Ready | 2 — Developing | 3 — Ready | 4 — Advanced |
|---|---|---|---|---|
| Staff AI literacy | Most staff have not used AI tools; limited understanding of capabilities/limitations | Some staff have explored AI independently; awareness is inconsistent | Majority of staff have used AI tools; understand basic capabilities and limitations | Staff proficient in AI use; can evaluate AI output quality; understand bias and ethics |
| Teacher comfort level | Widespread anxiety or resistance to AI | Mixed — some enthusiastic, many apprehensive | Most teachers willing to try AI with support; concerns are specific and addressable | Teachers actively seek AI applications; peer mentoring happening organically |
| IT support capacity | IT is reactive; no capacity for new tool support | IT can support basic tool deployment; limited training capacity | IT can manage tool deployment, SSO configuration, and basic troubleshooting support | IT proactively evaluates tools; provides teacher support; monitors usage; manages integrations |
| Administrative knowledge | Administration has limited AI understanding; delegates all technology decisions | Administration is aware of AI trends but not engaged in planning | Administration engaged in AI planning; understands policy and budget implications | Administration leads AI strategy; can articulate vision, evaluate impact, and communicate with community |
Domain 3 Score: _ / 16
Domain 4: Culture
| Item | 1 — Not Ready | 2 — Developing | 3 — Ready | 4 — Advanced |
|---|---|---|---|---|
| Innovation norms | New ideas are met with skepticism; "we've always done it this way" is common | Innovation happens in isolated classrooms; not systematic | School encourages experimentation; structured sharing opportunities exist | Innovation is expected and supported; time, recognition, and resources provided |
| Risk tolerance | Failure is penalized; teachers avoid trying new approaches | Individual risk-taking tolerated but not encouraged; no safety net | Structured risk-taking supported; pilot programs with clear evaluation criteria | Failure is treated as data; rapid iteration is normal; learning from failure is celebrated |
| Collaboration | Teachers work in isolation; limited professional sharing | Some collaboration (grade-level teams or departments) | Regular collaborative structures (PLCs, peer observation, shared planning) | Deep collaboration including co-teaching, shared resource creation, and cross-school learning |
| Change attitude | Change fatigue is strong; initiative overload | Mixed — some openness, some resistance; depends on the specific change | Generally positive attitude toward purposeful change; questions are constructive | Staff actively seeks improvement; distinguishes between productive and performative change |
Domain 4 Score: _ / 16
Domain 5: Budget
| Item | 1 — Not Ready | 2 — Developing | 3 — Ready | 4 — Advanced |
|---|---|---|---|---|
| Edtech spending awareness | No central tracking of edtech spending; unknown total cost | Some tracking exists but incomplete; significant untracked spending | Comprehensive edtech inventory with costs, renewal dates, and usage data | Real-time subscription dashboard; annual ROI review; consolidation strategy |
| Funding source identification | Only local funds considered; federal funding streams not explored for AI | Awareness of federal funding options but not utilized for AI tools | Active use of Title I/II/IV-A or other federal funds for technology | Multiple funding streams strategically aligned to AI priorities; sustainability planned |
| Total cost awareness | Only subscription cost considered; training, support, and opportunity costs ignored | Subscription + some implementation costs considered | Full cost model: subscription + training + support + migration + opportunity cost | TCO analysis standard practice; includes multi-year projections and exit costs |
| Sustainability planning | No plan for funding beyond current year or grant period | General awareness of funding cliff risk; no specific plan | Sustainability funding identified for key tools before purchase commitment | Multi-year budget projection; transition plan for every grant-funded tool; contingency reserves |
Domain 5 Score: _ / 16
Interpreting Your Results
Overall Score: _ / 80
| Score Range | Readiness Level | What It Means |
|---|---|---|
| 60-80 | Advanced | Your school is well-positioned for ambitious AI adoption. Focus on optimization, advanced applications, and sharing your practices with other schools |
| 45-59 | Ready | Solid foundation exists. Address specific gaps (low-scoring items) before scaling AI use. You can adopt selectively while building capacity |
| 30-44 | Developing | Significant work needed in multiple domains. Start with foundational actions — policy, basic PD, infrastructure upgrades — before committing to AI tool purchases |
| 16-29 | Early Stage | Major foundational investments needed. Focus on the lowest-scoring domain first. AI tool adoption should wait until infrastructure and policy foundations are established |
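For teams that tally scores in a spreadsheet or script, the bands above reduce to a simple lookup. The sketch below is illustrative only; the function name and structure are ours, not part of the rubric.

```python
def readiness_level(total_score: int) -> str:
    """Map a total readiness score (16-80) to its band from the table above."""
    if not 16 <= total_score <= 80:
        # 16 items scored 1-4 each, so totals outside 16-80 indicate a tally error
        raise ValueError("Total score must be between 16 and 80")
    if total_score >= 60:
        return "Advanced"
    if total_score >= 45:
        return "Ready"
    if total_score >= 30:
        return "Developing"
    return "Early Stage"

print(readiness_level(52))  # Ready
```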
Domain-Level Analysis
The total score tells you your overall readiness. The domain scores tell you where to focus:
| Domain Score | Interpretation | Priority |
|---|---|---|
| 13-16 | Strong in this domain; maintain and optimize | Low — no immediate action needed |
| 9-12 | Adequate with areas for improvement | Medium — address specific low-scoring items |
| 5-8 | Significant gaps; this domain will limit AI success | High — address before expanding AI adoption |
| 4 | Critical gaps; this domain will block AI adoption | Urgent — prioritize above all other domains |
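The domain-priority mapping works the same way and can be sketched alongside the overall band. Again, this is an illustrative helper, not a prescribed tool; the function name is ours.

```python
def domain_priority(domain_score: int) -> str:
    """Map a single domain score (4-16) to the priority column above."""
    if not 4 <= domain_score <= 16:
        # 4 items scored 1-4 each, so totals outside 4-16 indicate a tally error
        raise ValueError("Domain score must be between 4 and 16")
    if domain_score >= 13:
        return "Low"
    if domain_score >= 9:
        return "Medium"
    if domain_score >= 5:
        return "High"
    return "Urgent"  # a score of exactly 4 means every item scored 1
```

Running each domain through this helper gives a team an at-a-glance priority list to pair with the total-score band.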
The Most Common Readiness Profile
Based on ISTE survey data (2024) and CoSN infrastructure reports, the most common profile for U.S. schools is:
- Infrastructure: 10-12 (Adequate — most schools have basic connectivity and devices)
- Policy: 5-8 (Significant gaps — most schools lack AI-specific policy)
- People: 7-10 (Developing — mixed comfort levels, limited systematic PD)
- Culture: 8-11 (Developing to Adequate — depends heavily on school leadership)
- Budget: 5-8 (Significant gaps — most schools don't track edtech spending comprehensively)
This means Policy and Budget are typically the binding constraints — not infrastructure or teacher skill. Schools often invest in devices and PD while neglecting the governance and financial frameworks that sustain AI adoption.
Gap-Closing Action Plan
Priority Sequencing
IF Infrastructure Score < 8:
→ Fix infrastructure FIRST
→ AI tools can't work without reliable connectivity
ELSE IF Policy Score < 8:
→ Build policy framework NEXT
→ Teachers won't adopt tools without clear guidelines
ELSE IF People Score < 8:
→ Invest in PD and support
→ Skill building requires time; start early
ELSE IF Budget Score < 8:
→ Build financial framework
→ Prevents subscription sprawl and funding cliffs
ELSE IF Culture Score < 8:
→ Culture work is ongoing
→ Addressed through leadership behavior more than programs
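The sequencing above can be expressed as an ordered check, which is useful if a district wants to apply the same logic across many school scorecards. The domain order and the threshold of 8 come from the sequence above; the function and data structure are a hypothetical sketch, not a prescribed implementation.

```python
def first_priority(scores: dict[str, int]) -> str:
    """Return the gap-closing action for the first domain scoring below 8.

    `scores` maps each of the five domain names to its 4-16 rubric total.
    Domains are checked in the fixed order given in the sequencing above.
    """
    order = [
        ("Infrastructure", "Fix infrastructure first: AI tools can't work "
                           "without reliable connectivity"),
        ("Policy", "Build the policy framework: teachers won't adopt tools "
                   "without clear guidelines"),
        ("People", "Invest in PD and support: skill building takes time, "
                   "so start early"),
        ("Budget", "Build the financial framework to prevent subscription "
                   "sprawl and funding cliffs"),
        ("Culture", "Culture work is ongoing, addressed through leadership "
                    "behavior more than programs"),
    ]
    for domain, action in order:
        if scores[domain] < 8:
            return action
    return "No domain is below 8: adopt selectively while optimizing"

# Example: the common profile described below (policy and budget weakest)
scores = {"Infrastructure": 11, "Policy": 6, "People": 9,
          "Budget": 7, "Culture": 10}
print(first_priority(scores))  # policy comes first even though budget is also low
```

Because the checks are `ELSE IF` chained, only the first sub-8 domain is returned; lower-priority gaps (here, Budget) are addressed after the first one is closed.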
Quick Wins by Domain
| Domain | Quick Win (Achievable in 30 Days) | Impact |
|---|---|---|
| Infrastructure | Map Wi-Fi coverage; identify and resolve dead zones | Removes the #1 teacher frustration with technology |
| Policy | Draft a Version 1.0 AI AUP (even a 1-page interim guideline) | Removes ambiguity; gives teachers permission to experiment within boundaries |
| People | Host a single 90-minute AI awareness workshop | Shifts conversation from fear to informed curiosity |
| Culture | Principal publicly tries an AI tool and shares the experience (including mistakes) | Signals that experimentation is safe and valued |
| Budget | Compile a complete edtech subscription inventory | Reveals consolidation opportunities and unknown spending |
How to Administer the Assessment
Who Should Complete It
The assessment is most valuable when completed by a small, cross-functional team — not by one person alone. The ideal completion team mirrors the AI committee structure:
- Principal or assistant principal (administrative perspective)
- IT coordinator (infrastructure and technical perspective)
- 2-3 teachers (classroom reality perspective)
- Curriculum coordinator (instructional perspective)
Process: Each team member completes the rubric independently, then the group meets to compare scores, discuss disagreements, and arrive at consensus scores. Disagreements are often the most valuable part — they reveal assumptions and blind spots.
When to Administer
- Initial assessment: Before any formal AI adoption planning
- Annual reassessment: Each spring, to track progress and reprioritize
- Post-implementation check: 6 months after major AI tool deployments, to identify emerging gaps
Key Takeaways
- Readiness assessment prevents the predictable failures of rushing AI adoption — tools without infrastructure, adoption without policy, spending without sustainability. Schools with structured assessments are 3.2x more likely to sustain adoption after 2 years (ISTE, 2024). See AI for School Leaders — A Strategic Guide to Transforming Education Administration for strategic context.
- Five domains determine readiness: infrastructure, policy, people, culture, and budget. Score each on a 1-4 rubric across 4 items per domain. Your total (out of 80) tells you overall readiness; domain scores tell you where to focus. Most schools score lowest on policy and budget.
- Policy and budget are typically the binding constraints, not infrastructure. Most U.S. schools have adequate connectivity and devices. What they lack is AI-specific policy (acceptable use, academic integrity, data privacy) and financial frameworks (subscription tracking, sustainability planning). Fix these first. See Building a Culture of Innovation — Leading AI Adoption in Schools for culture-building.
- Complete the assessment as a cross-functional team. Individual assessments reveal one perspective; group discussion reveals blind spots. Principal, IT, teachers, and curriculum coordinator should each score independently, then discuss disagreements. See Building an AI Committee — Who Should Lead Your School's AI Strategy? for team structure.
- Address the lowest domain first. Infrastructure below 8? Fix connectivity before buying tools. Policy below 8? Write the AUP before launching PD. Budget below 8? Inventory your subscriptions before adding new ones. See Managing AI Tool Subscriptions Across a District for subscription management.
- Quick wins build momentum. A 30-day action in each domain — map Wi-Fi, draft interim AUP, host one workshop, share a principal's AI experiment, compile subscription list — creates visible progress before the assessment even produces a formal plan.
See Creating AI Usage Reports for Stakeholders and Parents for communicating progress. See AI for Student Information Systems (SIS) and Administrative Tasks for SIS integration. See Best AI Content Generation Tools for Educators — Head-to-Head Comparison for evaluating tools — platforms like EduGenius that require only a web browser and internet connection have low infrastructure barriers, making them accessible even to schools in early readiness stages.
Frequently Asked Questions
Our infrastructure score is high but our policy score is very low. Should we start adopting AI tools anyway?
You can begin limited, structured piloting — but not broad adoption. Without policy, you'll face confusion about what's allowed, inconsistent practices across classrooms, potential privacy violations, and academic integrity questions you can't answer. Start a controlled pilot with clear interim guidelines: 3-5 volunteer teachers, one or two approved tools, explicit expectations documented in writing. Use the pilot period (60-90 days) to draft formal policy informed by real experience. This approach lets you build policy from practice rather than theory, which typically produces more practical and enforceable guidelines.
How do we address the "culture" domain when the problem is leadership, not teachers?
Culture scores below 8 almost always trace to leadership behaviors, not teacher attitudes. If leadership penalizes failure, resists change, or doesn't model technology use, no amount of PD will improve the culture score. The most effective intervention is principal behavior change — publicly experimenting with AI, sharing mistakes, protecting time for teacher exploration, and celebrating process over product. If the barrier is above the principal (superintendent or board resistance), the principal can still create a micro-culture of innovation within their building by using their discretionary authority to protect experimentation. Culture change starts at the top of whatever scope you control.
Should we share our readiness scores publicly (with staff, parents, board)?
Share selectively and strategically. Share with staff — transparency builds trust and invites contribution. Share domain-level results with your board — "we're strong in infrastructure but need investment in policy and professional development" — to justify resource requests. Be cautious about sharing specific item-level scores publicly; low scores can be taken out of context. Frame results as "baseline measurements that inform our improvement plan" rather than "grades." The assessment is a diagnostic tool, not a report card. The value is in trajectory — how scores improve over time — not in any single measurement.
How long does the full assessment process take?
Individual completion of the rubric takes 20-30 minutes. The group discussion and consensus-building meeting takes 60-90 minutes. The gap-closing action plan can be drafted in the same meeting or a follow-up 60-minute session. Total investment: 2-3 hours for a cross-functional team of 5-6 people. The most common mistake is spending too long on the assessment and too little on the action plan. Set a firm time limit for the discussion phase and ensure at least 30 minutes for action planning. Perfect scores don't exist and aren't the goal; directional accuracy with clear next steps is the goal.