When Gallup surveyed 2,232 U.S. public school teachers online from March 18 to April 11, 2025, six in 10 reported using an AI tool for their work during the 2024–25 school year. About one-third (32 percent) said they used such tools at least weekly.

Teachers who used AI tools at least weekly estimated the software trimmed an average of 5.9 hours per week from routine tasks such as lesson drafting and worksheet creation. Over a 37.4-week school year, that gain equates to roughly six workweeks.
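The conversion behind the six-workweek figure can be checked with a quick sketch. Note that the 40-hour workweek used here is an assumption for illustration; the survey reporting does not state which conversion was used.

```python
# Back-of-the-envelope check of the reported "six workweeks" figure.
# Assumptions (not from the survey methodology): a 40-hour workweek,
# and that the 5.9-hour saving applies in each of the 37.4 school weeks.
hours_saved_per_week = 5.9
school_weeks = 37.4
workweek_hours = 40

total_hours_saved = hours_saved_per_week * school_weeks  # about 220.7 hours
workweeks_saved = total_hours_saved / workweek_hours     # about 5.5 workweeks

print(f"{total_hours_saved:.1f} hours saved ≈ {workweeks_saved:.1f} workweeks")
```

Under these assumptions the total comes to about 5.5 workweeks, consistent with the "roughly six" characterization.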

That estimate now serves as a reference point in a broader shift in education. Systems are focusing less on whether to permit AI tools and more on how to capture time savings while ensuring that students, not algorithms, demonstrate mastery of the material.

Key Findings: AI and the New Classroom

  • About one-third (32 percent) of U.S. K–12 teachers used AI weekly in the 2024–25 school year, saving an estimated 5.9 hours each week, or roughly six weeks over a school year, according to a 2025 report from Gallup.
  • Teachers report reinvesting saved time in nuanced feedback, individualized lessons and family outreach, according to qualitative responses in the Gallup–Walton Family Foundation study.
  • UNESCO’s generative AI guidance, published in 2023 and updated in 2025, outlines a human-centered approach to governance, ethics and curriculum integration.
  • Equity and capacity-building, including inclusive access and educator training, are recurring themes in UNESCO’s 2024–2025 AI competency and guidance documents.
  • Australia’s TEQSA 2025 guidance describes an "assurance of learning at the unit or subject level" pathway that incorporates at least one secure assessment task in every unit or subject across a program.
  • Turnitin’s AI-detection tool showed an approximate 4 percent sentence-level false-positive rate in 2023, indicating why detection scores alone are a weak basis for high-stakes misconduct decisions.

Adoption Outpaces Early Expectations


The same Gallup poll indicates that teacher use of AI now spans most major preparation tasks. Lesson preparation topped the list of uses, with 37 percent of respondents turning to AI at least monthly.

Worksheet generation followed at 33 percent, while 28 percent of teachers said they used AI to modify materials to meet student needs. These figures reflect AI’s role in routine planning rather than only in isolated experiments.

Across tasks, the share of AI-using teachers who said AI improves the quality of their work ranged from 57 percent for grading and feedback to 74 percent for administrative work. Gallup notes that teachers who use AI at least weekly are often about twice as likely as less frequent users to describe the improvement as "much higher."

Taken together, these findings indicate that in K–12 settings, AI tools are moving from occasional or experimental use toward more regular inclusion in everyday workflows.

What Teachers Do With the Extra Time


Gallup’s analysis of qualitative responses reports that teachers who save time with AI often reinvest it in student-facing activities such as feedback, grading and family outreach.

Teachers described using AI for initial drafting of lessons, worksheets or communications. They then spend more time reviewing, editing and tailoring content for students with different reading levels or subject mastery.

Because the estimated time savings are measured in weeks across a school year, the findings raise questions about how schools might organize planning and professional-development time if AI-assisted preparation becomes standard practice.

The qualitative data also depict a gradual shift in emphasis from producing content to diagnosing student needs and providing targeted support, particularly in feedback and individualized planning.

Student Usage Raises Equity Questions


As teachers expand their own use of AI tools, classroom exposure to AI-generated explanations, practice questions and feedback is likely to increase for students across subjects and grade levels.

At the same time, access to AI-supported learning can depend on infrastructure, devices and institutional policies, which may differ across schools and communities.

UNESCO’s AI competency framework for students, first published in 2024 and updated in 2025, links AI education explicitly to the Sustainable Development Goal on inclusive and equitable quality education. It describes AI literacy as part of preparing students to participate responsibly in digital societies.

Policy discussions are moving toward disclosure rules that specify when AI support is acceptable and when students must demonstrate skills through unsupervised writing, live explanation or paper-and-pencil work. This makes expectations about assistance clear to both teachers and learners.

Global Frameworks Guide Classrooms


UNESCO's Guidance for Generative AI in Education and Research, first released in 2023 and updated in 2025, outlines a human-centered approach to generative AI with attention to governance, ethics and human agency.

The guidance discusses regulatory steps such as data protection and age-appropriate access. It also highlights opportunities to use generative AI in curriculum design, teaching, learning and research when safeguards are in place.

A related UNESCO publication, the AI competency framework for students, describes 12 competencies across four dimensions: a human-centered mindset, ethics of AI, AI techniques and applications, and AI system design.

The framework defines three progression levels labeled understand, apply and create. This provides curriculum planners with a way to map AI-related learning objectives from basic familiarity to more advanced design tasks.

Because both documents are linked to Sustainable Development Goal 4 on inclusive and equitable quality education, they serve as reference points for school systems and ministries that are updating curricula, teacher training and assessment policies around AI.

Assessments Built to Resist Shortcutting


In higher education, assessment design is becoming a central tool for managing AI-enabled shortcutting. Australia's higher-education regulator, TEQSA, notes in its 2025 resource "Enacting Assessment Reform in a Time of Artificial Intelligence" that one pathway to assuring learning focuses on incorporating at least one secure assessment task in every unit or subject across a program.

In this "assurance of learning at the unit or subject level" approach, the secure task is intended to show that students can meet key learning outcomes without unauthorized assistance from generative AI or other external sources.

TEQSA specifies that it should not be possible to pass a unit or subject without satisfactorily completing these secure tasks. This makes them a mandatory hurdle within the assessment structure.

The resource lists oral presentations, in-class tasks and supervised practical demonstrations among the formats that can serve as secure assessment, alongside other tasks conducted under controlled conditions.

The document also notes that detecting generative AI use with certainty is currently "all but impossible." It recommends that institutions prioritize redesigning assessment and establishing secure points of evidence rather than investing primarily in detection technologies.

Detection Tools Show Persistent Gaps


Detection tools themselves have documented limitations. Turnitin’s chief product officer Annie Chechitelli wrote in 2023 that the company’s AI-writing detector shows a sentence-level false positive rate of approximately 4 percent, according to Inside Higher Ed.

The same report notes that when the product was first released, Turnitin advertised a document-level false positive rate below 1 percent. Later communication focused on the higher sentence-level rate without updating the document-level figure.

In practice, these numbers mean that a notable share of human-written sentences may be labeled as AI-generated, especially in documents that mix human and machine text.
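A simple probability sketch shows how a sentence-level error rate compounds over a whole essay. The 50-sentence essay length and the independence of sentence-level flags are both simplifying assumptions for illustration, not claims about how the tool behaves.

```python
# Illustration: if each human-written sentence were independently flagged
# with probability 0.04 (the reported sentence-level false-positive rate),
# how would that scale to a whole essay?
fpr = 0.04
sentences = 50  # hypothetical essay length

expected_flags = fpr * sentences             # expected falsely flagged sentences
p_at_least_one = 1 - (1 - fpr) ** sentences  # chance the essay gets any false flag

print(f"Expected false flags: {expected_flags:.1f}")
print(f"P(at least one false flag): {p_at_least_one:.0%}")
```

Under these assumptions, a fully human-written 50-sentence essay would average about two falsely flagged sentences, and the chance of at least one false flag would approach 90 percent, which is why a flagged sentence or two is weak evidence on its own.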

Combined with TEQSA’s observation that generative AI use cannot be identified with certainty, the error rates underscore why relying on a single AI-detection score as the main basis for alleging misconduct carries a significant risk of misclassification.

Human Verification Remains Central


One response gaining attention is to make the process of using AI part of what is assessed, rather than focusing only on final products.

Assignments can require students to submit draft histories, prompt logs or short reflections that explain where AI tools were used, what they produced and how the student evaluated or modified the output.

In K–12 contexts, related structures can include allowing students to consult chatbots while planning an essay but requiring an unscripted oral defense of their argument. A live problem-solving session or a supervised written response can check retention and reasoning.

These models treat human interaction as a central verification layer for understanding. This is particularly important in high-stakes assessments where identity, authorship and independent reasoning need to be demonstrated directly.

They also assume that AI will remain part of students’ learning environments. The focus shifts from eliminating AI use to making expectations about assistance transparent and assessable.

Taken together, the Gallup–Walton Family Foundation data, UNESCO’s guidance and TEQSA’s assessment pathways suggest that education systems are moving toward a model in which AI is a routine layer in preparation and practice. Meanwhile, core demonstrations of learning remain human-centered and often face to face.

For now, the "AI dividend" of roughly six weeks per year for regular teacher users provides a tangible metric for administrators and policymakers who are weighing trade-offs between efficiency, integrity and equity.

If secure assessment practices expand alongside wider access to AI-aligned curricula and teacher training, future cohorts of students may experience AI as ordinary classroom infrastructure. It would support, rather than substitute for, the relationships and judgments that define education.
