A conceptual illustration showing a student climbing steps made of ChatGPT-style icons.
Illustration by Sébastien Thibault
Winter/Spring 2025 Features

An Intellectual Revolution?

At Lawrenceville, students are using artificial intelligence to assist understanding rather than as a tool to replace it.

It was chilly outside, an hour after sunset, when Jennifer Parnell P’23 headed over to Upper House at 7:30 p.m. one mid-October evening, hoping to run an idea past a faculty colleague who was on House duty that night. Inside the brightly lit common room, as Parnell waited among the cushy club chairs, couches, and billiards tables, a student struggling with a math problem appealed to her for help.

“I was like, no, I’m a history teacher,” she recalled. “I haven’t seen anything even slightly related to calculus since, like, 1979.”

But then Parnell, who came to Lawrenceville in 2021 after earning awards for her innovative work at Kodiak High School in Alaska, pivoted to offer the frustrated student a solution.

“I said, ‘AI can help you with that,’ and he’s like, ‘No, I already have the answer,’” Parnell continued, explaining that the student was trying to understand how to get to the solution himself.

“I said, ‘No, AI can help you understand how to get that answer.’”

As the boy earnestly positioned his laptop in front of him, Parnell mapped out a process that required no knowledge of calculus, only that the student lean into a basic tactic of the Harkness method — Socratic inquiry — but in this case using artificial intelligence, or AI, as a Socratic tutor. Parnell fed him some initial prompts to enter into ChatGPT, giving the platform context around his situation: You’re going to tell it the work is important. You’re going to tell it it’s meaningful to you. You’re going to tell it where you are — at a prestigious independent school that’s very competitive.

“And then you’re going to explain that you want AI to help you understand how to solve this problem, but not at any point to give you the answer,” Parnell told him. “Just keep asking it questions in a Socratic method.”

The student began a dialogue with ChatGPT, typing open-ended questions and receiving follow-up questions in return, laying down blocks of understanding along the way. When he couldn’t answer one of GPT’s questions, Parnell urged him: “Well, tell it you can’t, and it will adjust the level of what it’s asking you.”

After about ten minutes, Parnell saw a smile of satisfaction spread across the boy’s face. “I get it,” he quietly announced. “I get it.”

“Excellent. Could you teach it to someone else?” she recalled asking him. “And he said, ‘Yes, because the part that I didn’t understand was this step right here and now I understand that step.’”
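For readers who want to see the same pattern spelled out, here is a minimal sketch of a Socratic-tutor loop written against OpenAI’s Python client. It is an illustration only: the model name and the prompt wording are assumptions, not the exact prompts Parnell dictated.

# A minimal sketch of the Socratic-tutor setup described above, using
# OpenAI's Python client. The model name and prompt wording are
# illustrative assumptions, not the exact prompts Parnell used.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Context and ground rules, mirroring the instructions Parnell dictated:
# say the work matters, say where you are, and forbid direct answers.
messages = [
    {
        "role": "system",
        "content": (
            "I am a student at a very competitive independent school, and "
            "this work is important and meaningful to me. Help me understand "
            "how to solve my calculus problem by asking me Socratic "
            "questions, one at a time. Never give me the answer. If I say "
            "I can't answer a question, ask a simpler one."
        ),
    },
    {"role": "user", "content": "How do I differentiate x**2 * sin(x)?"},
]

while True:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat model would do
        messages=messages,
    )
    question = reply.choices[0].message.content
    print("\nTutor:", question)

    answer = input("You: ")  # typing "I can't" makes it adjust its level
    if answer.strip().lower() in {"quit", "i get it"}:
        break
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": answer})

The design choice that matters sits in the system message: the model is told to ask rather than answer, and to lower the difficulty when the student gets stuck, which is what let the platform meet the Upper House student at his level.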

It was marvelous. It was just really powerful to witness it happening in real time with a student who, at that moment, was struggling and how it could help him.

Jennifer Parnell P’23, history teacher, director of innovation and AI projects

With its public introduction on November 30, 2022, ChatGPT sparked a revolution that most people still don’t fully understand. Unlike other technological advances — the internet, for a prime example — this publicly accessible generative AI platform did not gradually infiltrate the culture over years while it improved incrementally. Rather, it struck like a thunderbolt, in one day and all at once. Though AI technology has developed over generations, revealing itself more recently and subtly through chatbots on retail websites or natural-language assistants like Apple’s Siri and Amazon’s Alexa, OpenAI’s launch of ChatGPT suddenly brought a user-friendly large language model to anyone with an internet-enabled device. At once, users were able to interact with this generative pre-trained transformer, or GPT, in a conversational way, using prompts and natural dialogue to have the model develop creative prose, formulate vacation plans, or think through sophisticated work processes. ChatGPT could ask follow-up questions to clarify a user’s input, admit and correct its mistakes as it worked through the prompts, and even reject requests it “knew” to be inappropriate.

ChatGPT resonated immediately, growing from 13 million unique daily users in December 2022 to 180 million users twelve months later, and adoption of the technology continues to spread. But as with any technological revolution in its nascent days, people remain uncertain, uneasy, and even divided on how to regulate the use of AI in institutional settings. A November 11, 2024, headline in The Chronicle of Higher Education asked, “Is It Time to Regulate AI Use on Campus?” In the story, reporter Lee Gardner writes, “As the technology spreads throughout all aspects of academe — and evolves at a pace measured in months, not years — experts and a burgeoning number of administrators believe that colleges need to establish guidelines about its use or face potential disaster.”

To be clear, a significant part of the need for policy, according to Gardner, involves data security more than students’ use of AI in their academic work: A staff or faculty member who feeds information to a generative AI platform also provides its deep-learning capabilities with training data that could then reappear in answers the platform gives another user. This could entail a violation of federal privacy laws or the inadvertent disclosure of admissions information or financial data that an institution would otherwise shield from its rivals.

A conceptual illustration of two students discussing artificial intelligence, with a boy holding a book from which a bubble of AI information rises.
Illustration by Sébastien Thibault

But the most immediate focus at just about every campus across higher and secondary education was academic dishonesty and preventing the use of AI as a workaround. In the earliest days of ChatGPT, administrators and faculty members at Lawrenceville reacted with concern about this, too.

“I think our first inclination when we started thinking about AI and our students was to teach ethics in AI,” explains Bernadette Teeley P’24, dean of academics, of the class titled AI Applications and Ethics, which launched in 2023. “That was the first [AI-related] class Lawrenceville offered, and I would say that was our reactive answer to what we were seeing. It was so steeped in our mission that our job is to prepare students to be ethical leaders and to live lives of integrity.”

Mission statements can ring hollow when institutions do not treat them as lighthouses in uncertain waters. In this case, Lawrenceville’s instinctive deference to its mission set the School on a course to become a pioneer among independent schools.

“And so that was really a keystone piece for us, which — I have to say — we were the only school doing that,” Teeley continues. “And still, when we look back on how schools first approached it, we were the group that approached it from that lens. And that has proven to be the right lens.”

The eleven-week, 400-level course had Fourth and Fifth Form students grappling with questions such as why a society would build AI, how the technology learns and adapts, whether it’s possible to prevent or limit bias in algorithms, whether AI boosts creativity or makes us less intelligent through thoughtless delegation, whether AI can be effectively regulated, and even whether it poses an existential threat to humanity. (This year’s Fifth Form Capstone course is also AI-focused.)

We’re trying to learn from other people, trying to bring experts here to have open conversations about this, including students and teachers, so everyone has a seat at the table.

Sathvik Samant ’26

Parnell, who last spring was named director of innovation and AI projects, says the School’s choice to focus on ethics was prescient, given the unrestrained pace of the technology.

“Academic work must be grounded in deeper issues and essential questions,” she says, adding that the student evaluations of the course were excellent, with students citing the content as “relevant, purposeful, and joyful.”

“Those are my three criteria,” Parnell says. “AI fits all of that. It’s purposeful. It’s absolutely relevant. And for a lot of these kids, it is joyful. They’re not doing some of this AI work on campus because they feel they have to do it or it’s a requirement. A bunch of them are doing it in their spare time.

“As Lawrenceville students…” She pauses, letting you contemplate just how they are allocating what might be their most precious resource for the sheer joy of learning. “They’re doing this in their spare time.”

Perhaps that is not a surprise. After all, who is more curious than a Lawrenceville student?

“You might not ever work with it, but why not learn a new skill? Why not explore something fun? Because I think it’s extremely useful, but it will only be useful to you if you practice with it,” says Lena Haefele ’25 of the many AI platforms. “It’s kind of freaky but it’s cool and it’s fun to work with.”

Haefele is part of Lawrenceville’s AI Council, a student-driven group that is setting the agenda for AI use on campus. The council seeks to integrate the potential of AI safely and ethically into traditional Lawrenceville principles. Its members have also enjoyed learning from experts in the field through events such as MIT’s AI at the Crossroads conference in New York in October. That opportunity made an impression on Haefele.

“I mean, something that was said at the MIT conference was, it’s not so much an industrial revolution,” she recalls, “but it’s an intellectual revolution, and you want to be part of that.”

Lawrenceville does, of course, have AI policies. There are, to be sure, ways students can use it to cheat, though it’s not easy. The School’s current position on generative AI is that unless a student has clear and specific permission from their teacher to use AI tools in completing an assignment, using them will be considered a form of academic dishonesty. Teachers are welcome to use AI or refuse its use in their classes. Toward that end, the AI Council is there to advise the community on its adoption, something the council believes is consistent with the tenets of Harkness.

“Our council is centered around striking a balance,” explains Sathvik Samant ’26, a founding member of the group. “Striking a balance between human and technology, striking a balance between face-to-face, Harkness-style communication, and also using AI as a resource.” Samant says the AI Council wants to create understanding around a technology that remains uncharted territory for many students and faculty members alike.

“We’re trying to learn from other people, trying to bring experts here to have open conversations about this,” he says, “including students and teachers, so everyone has a seat at the table.”

The AI Council was originally composed of equal numbers of students and adults, but Parnell says the adults have since left it to the young learners, who quickly proved capable leaders.

“They did slides for orientation. They’re preparing materials to help the freshmen come in. They’re working on guidance for different teachers in terms of how you could bring AI into an 80-minute class period, what you can do for assessments, what you can do for assignments, how you can start to use AI in a lot of ways,” she explains. “They’re entirely student driven.”

One tool developed by the council is the AI Scale of Use, a five-level chart that faculty members are using to establish clear expectations of students. Regardless of the degree to which AI is employed, there is an expectation that all work done in any class will reveal the student’s personal understanding and analytical skills.

It’s an intellectual revolution, and you want to be part of that.

Lena Haefele ’25

Samant is glad for the opportunities and the agency he and his peers were given.

“I think that our administration did a really good job of setting down ground rules and stuff like that,” he says, “but also not barring [AI] from existence, not pretending it didn’t exist.”

At the MIT conference in New York, which six students attended as guests of Latif Alam ’08, council members heard from early adopters and innovators who are already harnessing AI to transform industries, and came away with actionable insights into the rapidly evolving AI landscape and its impact on business and society.

“The opportunities that the AI Council has given me, like talking with alumni, being able to go to these conferences, made me realize how important the human aspect of AI is,” Haefele says. “And it’s really about how we use AI. The AI Council has made me realize … that I want to focus on how we interact with AI, because I think that’s the most applicable for education in our community.”

Haefele, who, along with several other members of the council, recently pitched Teeley on a pilot project for AI use in classrooms, says her primary aim with the group is to show that this technology is not a replacement for knowledge but, rather, a way to augment it.

“The main goal for what I’m trying to do in the AI Council,” she says, “is promote using AI as a tool to assist your understanding instead of using it as a tool to replace understanding.”

If we take a generative language AI that is prompted by a human prompt, and then gives a response, and then the human has the opportunity to ask another question, I think that’s exactly what’s happening at the Harkness table.

Bernadette Teeley P’24, dean of academics

Parnell’s session with the Fifth Form calculus student in the Upper House common room illustrates AI’s teaching potential, an encounter from which she draws inspiration.

“It was marvelous,” she recalls. “It was just really powerful to witness it happening in real time with a student who, at that moment, was struggling and how it could help him.

“I’m going to try and push a little bit more this year,” Parnell continues. “I’m getting into some of the Houses. I’m doing some speeches, presentations with tricks and tips about AI in different Houses at night during study hall, because I want to teach kids how to use it responsibly.”

That Socratic method of inquiry Parnell used to guide the calculus student toward his breakthrough is, she believes, an extension of the very ideals he and his peers cherish about Lawrenceville. In relating that story, she added that the student wondered whether he could run into disciplinary trouble if he used the same process in other classes. For Parnell, his concern seemed to underscore a belief she holds firm about her students.

 

A graphic listing some of the AI platforms Lawrenceville students are using this year.

“It’s this idea that we are all learning how to live in this new space and what the ethics are, what’s appropriate and how to use it responsibly and how to use it to help you learn — not to help you cheat — but that’s the path that we’re all navigating,” she says. “Because we have kids here who actually love to learn. They don’t want AI to do it for them. They want to be the one that says, I can do this on my own.”

To that end, Teeley sees that same process as being in full alignment with the tenets of Harkness education.

“If we take a generative language AI that is prompted by a human prompt, and then gives a response, and then the human has the opportunity to ask another question, I think that’s exactly what’s happening at the Harkness table,” she says. “When you’re interacting with AI in that very basic format, you’re … trying to ask a question that gives you the data that you’re really looking for. And I think the process of asking questions is at the heart of Harkness teaching and learning.”

* * *

Learn More:

Jennifer Parnell P’23, history teacher and Lawrenceville’s director of innovation and AI projects, is the author of an article titled “Survival Guide for the AI-pocalypse,” which appears in the spring 2025 issue of Independent School magazine, published by the National Association of Independent Schools. It is part of a collection of pieces on artificial intelligence titled “Survival Guide.”

Parnell and Ajay Dhaul P’24, senior vice president of global data solutions and applied AI for a Fortune 500 consumer products company, also recorded a two-part episode of 1810: The Lawrenceville School Podcast, on the relationship between AI and education.