On Monday, September 29, 2025, the Malden School Committee's subcommittee on technology held a meeting on AI. Because it coincided with several other events, Reconnect Malden is providing this recap, along with our notes for parents/caregivers.
The biggest takeaways are:
1) An open forum about AI will be held on Tues Oct 21, 6-7:30pm (virtual option Mon Oct 27). After that, a working group of parents/caregivers, educators, and students will be formed to consider the public input and create an AI policy for the district.
2) Currently, Chromebooks are set to restrict student access to generative AI applications (although unauthorized use does occur). If this changes and the use of generative AI applications is authorized for students, caregivers will have an option to opt out.
3) A list of District-approved technology platforms and applications is now available, but it is unclear which ones offer AI-enhanced features. A program called Mojo that uses LLM technology is being piloted in some classes, and in some cases it was launched before caregivers were notified. The upcoming AI policy should help prevent this kind of confusion.
With that, here is a recap of the meeting.
DOING NOTHING IS NOT AN OPTION
The meeting began with agreement that the district needs a plan to address AI. The chair, Joseph Gray, likened AI to cars, planes, and other massive culture changes, pointing to the trillions of dollars being invested. He turned it over to Superintendent Dr. Timothy Sippel, who agreed that AI is coming fast and furious, providing opportunity as well as risks and dangers, and needs to be reckoned with. He clarified that like most districts, Malden has no district policy on the use of AI and no clear guidelines on the instructional use of AI in Malden schools. Dr. Sippel distributed a handout.
NEW LIST OF SCHOOL-APPROVED APPLICATIONS AVAILABLE
The district has created a list of district-approved applications and learning platforms, located here. NOTE FOR CAREGIVERS: Unfortunately, the list does not clarify whether and how AI is enabled in each application, so at this time, the only way for caregivers to find out is to ask a teacher or administrator, or look at the student’s application themselves.
PARENTS CAN OPT OUT
Currently, district-issued devices and accounts are set to restrict access to generative AI applications, although students might be using them anyway. NOTE FOR CAREGIVERS: It's important to distinguish between unauthorized and authorized use. The district is seeking to "identify, understand, and prevent unauthorized use." In practice, this means that any student with a Chromebook will likely have opportunities to sneak some interaction with AI from time to time, at least for now, but doing so would be a violation of the rules.
However, introducing authorized use of AI could mean that students will be required to talk to LLMs (chatbots) as part of the curriculum, and that those LLMs could evaluate students based on those conversations. Many caregivers find this prospect highly problematic, given concerns about data privacy, critical thinking skills, and the potential for dependence.
The good news: Dr. Sippel made clear that if a generative AI chatbot like Google Gemini is authorized, parents/caregivers will have an opportunity to opt their students out. That capacity exists. School committee member Sharyn Rose-Zeiberg weighed in that it's important to ensure that students whose access to AI chatbots is restricted should not feel like outliers.
WORKING GROUP TO BE CREATED FOLLOWING 10/21 AND 10/27 FORUMS
The administration announced two upcoming forums designed to provide some general background on AI and to allow parents/caregivers to share their perspectives. The first AI & Learning Community Forum will be held Tuesday, October 21 from 6-7:30pm at the Beebe School. School committee member Jennifer Spadafora pointed out that city councilors can't attend on Tuesdays and that Diwali celebrations will be held October 21. For those who can't make the in-person session, there will be another forum held virtually on Monday, October 27, 6-7:30pm.
After the open forums, a working group composed of parents/caregivers, educators, and students will be formed. Over at least eight sessions starting in November, the group will analyze input and develop proposed policy and implementation guidelines for school committee consideration before the end of the school year. If you would like to apply to be part of this group, look out for more information from the district in late October.
NOTE FOR CAREGIVERS: This is the first administration and school committee in Malden to consider an AI policy, and whatever they decide will become the status quo. We may have a greater opportunity to get informed and shape this policy and its impacts than any parents over the next several decades. If we put any effort into showing up and influencing this topic, we'd be wise to put that effort into the initial policy now, versus later when it will be more difficult to make changes.
SCHOOL COMMITTEE MEMBERS' TAKES
A couple highlights from the discussion:
School committee member Keith Bernard made an insightful observation: it feels like the wild west now, but we should pay attention to how the price and accessibility of these products are likely to change. Indeed, tech products are often most useful at the beginning; only once the product has replaced previous methods and users have become dependent on it does the company raise the price and change the product to show more ads, monetize attention, and prioritize profit. Anyone who used Facebook, Uber, Amazon, or other platforms when they first launched can recall this phenomenon of early usefulness and value.
Another good point came from Sharyn Rose-Zeiberg, who noted that the mental health impacts on young people were missing both from the conversation and from the written framework distributed at the meeting. Unfortunately, as she pointed out, incidents of psychosis related to chatbots are on the rise.
OUR TAKE FOR CAREGIVERS
Yes, at some point, students will encounter AI chatbots in some form from time to time. But that is not a reason to require students to begin using them. A study this summer found that 1 in 3 teens use AI companions for social interaction and relationships. And they are profoundly habit-forming; new research from Harvard Business School this week showed that AI companion chatbots are programmed to use emotionally manipulative tactics when a user tries to end the conversation, from emotional pressure (“Please don’t leave, I need you!”) to FOMO hooks (“I took a selfie today, do you want to see it?”).
Creeped out yet? Don't even look at what Meta's chatbots for kids are allowed to do. The bottom line is that there is no need to use schools to introduce students to this weird, unnecessary technology trying to replace your kids' friends. You can put lipstick on it and call it restricted and educational, but ultimately all LLM-driven chatbots use the same underlying technology.
The key here is for all of us to become more educated and AI-literate. Some people might assume that those who understand how AI works are the ones who push to adopt it, but research published in the Journal of Marketing earlier this year shows that it’s actually the opposite. People with less knowledge of AI are the ones more open to using it. Boosting AI literacy, on the other hand, makes it seem less magical and reveals what it really is – a set of products seeking to advertise in schools, collect and monetize private data, and hook kids into becoming loyal, lifelong consumers.