New York regulates use of artificial intelligence in its public schools
The central goal is to ensure that A.I. is used as a tool to support learning, without replacing the fundamental role of teachers or human interaction.

The city of New York has taken a decisive step by publishing new guidelines for the use of artificial intelligence (A.I.) in its public school system, the largest in the country. The initiative, driven by the New York City Department of Education, seeks to establish a clear, safe and ethical framework for integrating these technologies into the classroom.
The regulation arrives in a context of rapid technological expansion, accompanied by concern among families and educators about the risks and limits of these tools.
In addition, the document was prepared with the participation of more than 1,000 members of the educational community, reflecting a collaborative approach and a shared vision on the responsible use of these tools.
Purpose: Innovation with responsibility
The guidelines state that A.I. should complement teaching, not replace it. They emphasize that the professional judgment of educators remains irreplaceable and that developing students' critical thinking should take precedence over reliance on automated systems.
The policy also responds to the need to prepare students for a work and social environment where A.I. is already part of everyday life, without ignoring the associated risks such as errors, biases or misuse of data.
The "traffic light" system: What is and what is not allowed
One of the most prominent elements of the regulation is the implementation of a traffic light classification system, designed to guide the use of A.I. in schools:
Red light (prohibited):
The use of A.I. in critical decisions about students, such as grading, discipline, emotional counseling, or defining academic trajectories, is prohibited. These applications are considered high risk to equity and privacy.
Yellow light (use with caution):
A.I. may be used under teacher supervision for activities such as research, creative projects, drafting communications, or handling school data, always with ethical criteria and human review.
Green light (permitted use):
A.I. is authorized for administrative and support tasks, such as lesson planning, idea generation, translation of non-sensitive materials, and content organization.
This model seeks to balance innovation with student protection, facilitating clear decisions for teachers and administrators.
Gradual implementation and public consultation
The policy will not take effect all at once. The plan calls for a phased rollout through June 2026.
A key element is the opening of a 45-day public consultation period, during which parents, teachers and other stakeholders will be able to submit comments and suggestions. This process will allow the regulations to be fine-tuned prior to their final version.
Reactions: From enthusiasm to concern
The initiative has generated diverse opinions. Some parents, including groups such as Parents for A.I. Caution in Educational Spaces, have expressed concern about the speed of implementation and the lack of robust mechanisms to foster critical digital literacy.
Among the main fears are:
- The potential overreliance on automated tools.
- The loss of essential cognitive skills.
- The use of students as a "testing ground" for new technologies.
Some students, especially in the early grades, have also expressed a preference for educational environments with less A.I. presence.
The official position: A.I. will not replace teachers
Many teachers agree that A.I. can be a useful assistant for planning and organization, but they insist that final pedagogical decisions must always rest with humans.
Privacy and control: Strict requirements
The use of A.I. in schools will be subject to strict controls. All tools must pass through the ERMA (Enterprise Request Management Application) evaluation system, which verifies compliance with regulations such as FERPA and New York State education law.
Key requirements include:
- Prohibition of using student data to train models.
- Obligation of transparency by vendors.
- Evaluation of algorithmic biases and their impact on fairness.
Only approved tools may be used with student or staff information.
A model with global impact
Because New York City runs the largest public school system in the country, its rules are likely to serve as a reference for other districts. The challenge, experts agree, will be to strike a balance between harnessing the potential of A.I. and preserving the fundamental values of education: critical thinking, human interaction, and equity.