By Michael Telek

Throughout history, when new technologies emerge, educators are often the early adopters. If it can increase learning in a meaningful way, that technology will find a home in the classroom.

Xerox introduced its photocopier in 1959, allowing for the mass production of materials. Electronic handheld calculators let us say goodbye to slide rules in the 1970s. The internet has grown from a rarity in classrooms to a necessity. In 1994, just three percent of public-school classrooms had internet access. By 2021, 45 percent of schools reported having an electronic device (laptop or tablet) available for every student.

The xerographic process was invented by Chester Carlson in 1938 and developed and commercialized by the Xerox Corporation. Credit: Xerox


For school leaders and teachers around the globe, ChatGPT and generative artificial intelligence (AI) are a new frontier. OpenAI released ChatGPT, a chatbot built on its generative pre-trained language model GPT-3.5, free to the public in November 2022. It is designed to understand context, generate coherent and contextually relevant responses, and engage in interactive dialogues. The model is trained on a diverse range of internet text, allowing it to grasp the intricacies of language, syntax, and context. Users interact with ChatGPT by providing prompts or queries, and the model responds swiftly and accordingly.

That’s what makes ChatGPT different from the AI you may already be familiar with: the chatbots helping you online or over the phone, Alexa and Siri understanding your commands, AI algorithms picking the perfect content for your social media timeline, streaming services suggesting your next show. ChatGPT uses algorithms that process natural language inputs and predict the next word based on what it has already seen. Within two months of its global launch, ChatGPT had more than 100 million monthly active users, and industry experts dubbed it one of the fastest-growing apps of all time. To give that claim a little more credence, the investment firm UBS, which tracks these kinds of metrics, said it took TikTok about nine months to add 100 million users and Instagram two and a half years to reach that feat. That success sent a seismic shift across Silicon Valley. Microsoft reportedly made a $10 billion investment in OpenAI to expand its generative AI work (OpenAI is also behind DALL-E 2, another AI program that can generate art from mere lines of text). Google’s stock dropped nearly 8 percent, wiping out more than $100 billion in value, when it botched the announcement of Bard, its answer to ChatGPT. The message from Wall Street was simple: generative AI is the future, and the future is now.
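To make “predicting the next word” concrete, here is a toy sketch of the idea in Python. It is far simpler than the neural networks behind ChatGPT: it merely counts which word most often follows each word in a tiny, invented sample of text, whereas large language models learn those patterns from billions of pages.

```python
# Toy next-word predictor: counts which word most often follows each word.
# This illustrates the *idea* of next-word prediction only; ChatGPT uses a
# large neural network, not frequency counts. The sample text is invented.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# For each word, count the words observed immediately after it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "cat" (follows "the" most often)
print(predict_next("sat"))  # -> "on"
```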


An AI-generated picture for “classroom” via Adobe Firefly.

Billed as the next great, transformative tech, generative AI has spent the year generating a lot of buzz, both good and bad. It promises to automate mundane tasks and increase productivity in the workplace. However, there are plenty of examples of the model making up answers. Legal experts are questioning whether this is just a plagiarism machine. And of course, bad actors are always lurking.

Is technology neutral?

“People think that technology is neutral. Right? Like people think that oh, technology, it’s a computer. It’s math. It in no way can discriminate. But it’s really just amplifying and recreating existing discrimination that exists in our life,” said Dr. Angela Stewart, an assistant professor at the University of Pittsburgh’s School of Computing and Information and a research scientist at the University’s Learning Research and Development Center (LRDC), where she explores the intersection of the learning sciences, artificial intelligence, and human-computer interaction. Much of that work centers on equity and creating culturally responsive technology. Dr. Stewart has been studying AI for nearly a decade, but in recent years she has focused on how people learn about AI, how they think about its usage, and how people create AI and technology systems through learning experiences.

“Equity has always been something that has been a particular passion of mine. As a Black woman in computing, there’s certainly not a lot of us, so I often think about how that influences my experience every day in computing as well as the kinds of things that get developed,” said Dr. Stewart. “I think that one of the many reasons why these biases exist is because of the limited perspectives of people creating these systems.”

During an episode that aired in March 2021, Jimmy Fallon was joined by Addison Rae, a white woman, to perform some of the most popular dances on TikTok. After a backlash, Fallon acknowledged that the original TikTok creators deserved recognition. Credit: The Tonight Show/YouTube


Like the people behind the technology, AI programs are far from perfect, and biases across different technologies have proven problematic on many levels: speech recognition software that cannot understand non-American accents; health care algorithms that err because they are trained only on a specific group of people or certain stages of a disease; facial recognition software used by law enforcement that can racially discriminate. TikTok’s algorithm required tweaks after it proved to be biased: the app would promote white content creators over people of color, who were often the originators of a popular trend. The views went to the white creators, and with them the brand endorsements and money that come from a viral hit.

Of course, having diverse voices in the room when technology is made, when the ideas are being initiated, is important. But if the leaders and decision-makers are not listening to those voices, or valuing them, the problems will persist. “We can’t just rely on the most marginalized people to fix all of these societal oppressions,” said Dr. Stewart. “In this space, it is important for people who have power, let’s say researchers who have power and are creating new AI systems. Administrators, principals, and school board leaders use that power to advocate for learners. To advocate for parents. To advocate for equitable and inclusive uses of these technologies. Part of that starts with awareness, understanding what is happening, what is going on. Then the second part is actively taking steps to combat [bias].”

“Machine learning demonstrates that we can actually make a machine learn and perform tasks, but with that comes a lot of implications,” said Dr. Cassandra Kelley, a researcher and lecturer specializing in AI and emerging technologies and a member of the team at the Center for Integrative Research in Computing and Learning Sciences (CIRCLS), a partner of the University of Pittsburgh that supports the LRDC in its mission. Dr. Kelley agrees with Dr. Stewart’s assessment and says a lot of the conversation at CIRCLS is about the person behind the machine. “For instance, there can be biases learned from data that lead AI to generate and spread misinformation. Furthermore, how AI is used can lead to unethical consequences—especially if we rely on AI to make decisions for us.”

Both Dr. Kelley and Dr. Stewart stressed their concern over users’ data. What’s happening to it? Where is it going? How long will it stay there? Is it being used responsibly? Machine learning requires lots and lots of data for an AI system to make predictions of what a human would do. If that data is used responsibly and respectfully, however, it could make a positive impact in the classroom. “[AI systems] could be a thought partner that you can utilize to brainstorm with, whether it’s thinking through problems or exploring creative ways to do something,” said Dr. Kelley. “From a productivity perspective, teachers might use AI to initially gather ideas or resources for lesson plans and assist with personalizing assessments for students’ varying needs. Essentially, AI could be leveraged to alleviate some of this workload and help to create with the teacher.”

Data could also be used to provide feedback not just to students but to teachers as well. Dr. Stewart is part of a team designing ClassInSight, an app for teachers that uses language-based AI techniques to visualize student and teacher discourse patterns in the classroom.
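As a rough illustration of what visualizing discourse patterns can involve, here is a minimal, hypothetical sketch, not the actual ClassInSight app; the transcript format and measure are invented for the example. It simply tallies how classroom talk is split between teacher and students.

```python
# Hypothetical sketch of classroom discourse analysis (not ClassInSight).
# Assumes a transcript already labeled with speaker roles.
from collections import Counter

transcript = [
    ("teacher", "What evidence does the author give for her claim?"),
    ("student", "She says the wells ran dry."),
    ("teacher", "Good. Can anyone add to that?"),
    ("student", "Kids missed school to carry water."),
]

# Count words spoken by each role as a simple measure of talk share.
word_counts = Counter()
for role, utterance in transcript:
    word_counts[role] += len(utterance.split())

total = sum(word_counts.values())
for role, count in word_counts.items():
    print(f"{role}: {count / total:.0%} of words spoken")
```

A real system would go much further, for example classifying the kinds of questions teachers ask, but even a simple talk-share ratio is the sort of pattern a teacher-facing dashboard could chart over time.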

Shock and AWE (Automated Writing Evaluation) 

Without writing, it is hard to advance through school, let alone life. That’s why we have federal and state standards for English language arts that include writing. Yet engaging students in the revision process for analytic, text-based writing is often lacking during the elementary years.

Teachers report feeling underprepared to teach writing, and the time needed to assess student writing is burdensome. When students do write, they rarely receive substantive feedback and rarely engage in cycles of revision that require them to apply feedback to strengthen their work.

“We’re addressing what I’m thinking is a bit of a crisis, which is most students in most schools are only getting one-shot assessments. They’re doing one piece of writing one day and they’re not returning to it,” said Rip Correnti, a professor at the LRDC. “They’re never engaging in revision. Part of our design is to build in the revision as an essential part of writing.”

Correnti is on the team that developed eRevise, an Automated Essay Scoring (AES) and Automated Writing Evaluation (AWE) system for improving fifth- to seventh-grade students’ skill in using text evidence. eRevise uses machine learning and natural language processing (NLP) techniques to predict evidence-use scores in students’ writing.

Architecture of eRevise

The score provided by eRevise is based on features of evidence use in a typical grading rubric. A rubric-based approach ensures that the features of “good text-evidence use” (e.g., number of pieces of evidence provided, specificity of evidence) are well represented by the scoring algorithm. The system was trained on more than 1,500 previously collected essays that were manually scored by humans.
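To give a flavor of rubric-based feature scoring, here is a minimal, hypothetical sketch in Python. It is not eRevise’s actual algorithm: the evidence phrases, thresholds, and feature names are invented for the example, and a real system would learn its scoring from the 1,500 hand-scored essays rather than rely on hand-set rules.

```python
# Hypothetical rubric-style essay scoring sketch (not eRevise's algorithm).
# Approximates "evidence use" as overlap with key phrases from a source text.
import re

# Invented key details a rubric might count as pieces of evidence.
SOURCE_EVIDENCE = ["wells ran dry", "children missed school", "crops failed"]

def extract_features(essay: str) -> dict:
    """Compute simple rubric-style features of evidence use."""
    text = essay.lower()
    pieces = sum(1 for phrase in SOURCE_EVIDENCE if phrase in text)
    words = re.findall(r"[a-z']+", text)
    return {"n_evidence": pieces, "length": len(words)}

def score(essay: str) -> int:
    """Map features to a 1-4 score with hand-set thresholds (for
    illustration; a trained model would learn this mapping from
    manually scored essays)."""
    f = extract_features(essay)
    if f["n_evidence"] >= 3 and f["length"] > 150:
        return 4
    if f["n_evidence"] >= 2:
        return 3
    return 2 if f["n_evidence"] == 1 else 1

print(score("The wells ran dry, so children missed school."))  # -> 3
```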

“We pretty easily can highlight surface level errors, spelling, grammar, whatever, but we’re talking more about trying to help students along with aspects of argumentation that are substantive and important,” said Elaine Wang, a policy researcher at RAND.

eRevise Student Interface with example feedback messages.

“We are motivated to make sure that kids are having opportunities to engage with a rich task. Students don’t often have a lot of opportunities to practice their argument writing. We were inspired by those two challenges in teaching practice,” said LRDC Associate Director and Senior Scientist Lindsay Clare Matsumura.

Moving forward, the group wants eRevise not only to provide feedback but also to give students direction on how to improve their revision skills, traditionally a difficult part of writing to learn. That’s where generative AI could play a role. As part of a new grant, they’re working to identify different revision patterns to bolster student revision strategies.

Finding Our Way in This New Frontier 

When ChatGPT was released, the first move of many was to block the technology until it could be better studied and understood. New York City public schools banned students’ use of the software. Four months later, the nation’s largest school district rescinded that ruling after holding learning sessions with industry experts.

Seckinger High School in Gwinnett County, Georgia, went in the opposite direction, embracing AI and becoming the first artificial intelligence-themed school in the country. About 1,500 students received a college preparatory curriculum taught through the lens of artificial intelligence this school year.

After early caution and trepidation, AI is becoming more common in the classroom. A recent survey from Education Week found that a third of teachers surveyed are using artificial intelligence-driven tools. While most admit to using them only “a little,” generative AI is creating lesson plans, building rubrics, and even composing emails to parents or writing letters of recommendation.

“My advice is to think about the nuance. It’s not all good, and it’s also not all bad,” said Dr. Stewart. “In particular, key into the ways that ChatGPT and other kinds of generative AI systems might be supportive of learning.”

We are in an era when misinformation runs rampant. Now more than ever, Dr. Stewart says, it will be important for students to think critically about sources and where they come from, and to determine what is real and true versus what is not.

It’s something governments of all shapes and sizes are currently wrestling with. At the federal level, there’s a bipartisan effort in the House to combat “deepfakes” by creating baseline protections against AI abuse and upholding Americans’ First Amendment rights online.

Since 2019, 17 states have enacted 29 bills focused on regulating the design, development, and use of artificial intelligence, according to the Council of State Governments. North Carolina is the latest state to release AI guidance for schools, supplying educators in the Tar Heel State with parameters for AI’s use.

“I really urge educators and administrators to think about it as a tool. When can the tool be used, and how, rather than thinking about it so strictly from this plagiarism perspective,” said Dr. Stewart.

Dr. Kelley shares a similar sentiment. She echoed Dr. Stewart’s point about the nuances of the new technology and the importance of discussing its positives and perils with not just teachers and students but also families.

With generative AI changing the game, educators and administrators will have to adapt. There will need to be new standards surrounding academic integrity and further questions asked about how we teach with and about AI. Digital literacy and digital citizenship should be at the forefront of our focus.

Another adjustment Dr. Kelley could see happening revolves around how we assess and gauge learning. She says that in this new world, teachers will have to consider different types of evaluations for students. With generative AI able to churn out responses to writing prompts and pass multiple-choice exams, it will be imperative that teachers incorporate more opportunities for collaboration and project-based work.

“We have to consider different approaches and prepare students for a new future,” said Dr. Kelley. “Students need to have a clear understanding of how these technologies are created and intended to work, as well as their imperfections. They need to recognize that AI is developed by people and with that can come biases and other ethical concerns. Additionally, they will need guidance on interacting with such technologies, which should include learning about how issues such as cyberbullying and deepfakes can be further amplified if these technologies are not used appropriately.”
