For years educators have been trying to glean lessons about learners and the learning process from the data traces that students leave with every click in a digital textbook, learning management system or other online learning tool. It’s an approach known as “learning analytics.”
These days, proponents of learning analytics are exploring how the advent of ChatGPT and other generative AI tools brings new possibilities for the practice, and raises new ethical questions.
One possible application is to use new AI tools to help educators and researchers make sense of all the student data they’ve been collecting. Many learning analytics systems feature dashboards to give teachers or administrators metrics and visualizations about learners based on their use of digital classroom tools. The idea is that the data can be used to intervene if a student is showing signs of being disengaged or off-track. But many educators are not accustomed to sorting through large sets of this kind of data and can struggle to navigate these analytics dashboards.
“Chatbots that leverage AI are going to be a kind of intermediary, a translator,” says Zachary Pardos, an associate professor of education at the University of California at Berkeley, who is one of the editors of a forthcoming special issue of the Journal of Learning Analytics that will be devoted to generative AI in the field. “The chatbot could be infused with 10 years of learning sciences literature” to help analyze and explain in plain language what a dashboard is showing, he adds.
Learning analytics proponents are also using new AI tools to help analyze online discussion boards from courses.
“For example, if you’re looking at a discussion forum, and you want to mark posts as ‘on topic’ or ‘off topic,’” says Pardos, it previously took much more time and effort to have a human researcher follow a rubric to tag such posts, or to train an older type of computer system to classify the material. Now, though, large language models can easily mark discussion posts as on or off topic “with a minimum amount of prompt engineering,” Pardos says. In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.
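The kind of lightweight, prompt-based tagging Pardos describes can be sketched in a few lines. The prompt wording and helper names below are illustrative assumptions, not anything from Pardos's own work, and a real setup would send the prompt to an LLM API rather than use a canned reply:

```python
# Illustrative sketch of zero-shot discussion-post classification.
# The prompt text and function names are assumptions for demonstration;
# no actual model is called here.

def build_prompt(course_topic: str, post: str) -> str:
    """Compose a minimal classification prompt for an LLM."""
    return (
        f"The course topic is: {course_topic}.\n"
        "Label the following discussion post as exactly "
        "'on topic' or 'off topic'.\n\n"
        f"Post: {post}\nLabel:"
    )

def parse_label(model_reply: str) -> str:
    """Normalize the model's free-text reply to one of two tags."""
    reply = model_reply.strip().lower()
    return "on topic" if "on topic" in reply else "off topic"

# Usage with a stand-in model reply (no API call made):
prompt = build_prompt("photosynthesis", "Anyone want to trade lab times?")
label = parse_label("Off topic")
print(label)  # → off topic
```

Once each post carries a tag like this, the labels can be counted and charted, which is the "turn it into numbers" step the article describes.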
Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems. “Traditional learning analytics models can track a student’s knowledge mastery level based on their digital interactions, and this data can be vectorized to be fed into an LLM-based AI tutor to improve the relevance and performance of the AI tutor in their interactions with students,” says Mutlu Cukurova, a professor of learning and artificial intelligence at University College London.
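One simple way to picture that pipeline: a mastery profile from a learning analytics model becomes both a fixed-order vector and a plain-language summary that a tutor prompt can be conditioned on. The skill names, threshold, and wording below are assumptions for illustration, not details of Cukurova's systems:

```python
# Illustrative sketch: converting a learning-analytics mastery profile
# into inputs for an LLM-based tutor. All names and thresholds here are
# hypothetical.

def mastery_to_vector(mastery: dict, skills: list) -> list:
    """Fixed-order vector of per-skill mastery estimates (0.0 to 1.0)."""
    return [mastery.get(skill, 0.0) for skill in skills]

def mastery_to_context(mastery: dict, threshold: float = 0.6) -> str:
    """Plain-language summary a tutor prompt could include."""
    weak = sorted(s for s, p in mastery.items() if p < threshold)
    if not weak:
        return "The student shows mastery across all tracked skills."
    return "The student needs support with: " + ", ".join(weak) + "."

profile = {"fractions": 0.9, "ratios": 0.4, "percentages": 0.55}
print(mastery_to_vector(profile, ["fractions", "ratios", "percentages"]))
print(mastery_to_context(profile))
```

The summary string would be prepended to the tutor's prompt so its hints target the skills the analytics model flags as weak.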
Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.
“The accuracy with which LLMs appear to be able to grade open-ended kinds of responses seems very comparable to a human,” he says. “So you may see that more learning environments now are able to accommodate those more open-ended questions that get students to exhibit more creativity and different kinds of thinking than if there was a single deterministic answer that was being looked for.”
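In practice, LLM grading of this kind usually means sending the model a question, a rubric, and the student's answer, then extracting a structured score from its reply. The rubric format, prompt wording, and score parsing below are assumptions sketched for illustration; given the reliability concerns discussed later in the article, real deployments would pair this with human spot-checks:

```python
# Illustrative sketch of rubric-based grading with an LLM. The prompt
# and parsing logic are hypothetical; no model is actually called.
import re

def build_grading_prompt(question: str, rubric: str, answer: str) -> str:
    """Compose a grading prompt that asks for a machine-readable score."""
    return (
        f"Question: {question}\n"
        f"Rubric: {rubric}\n"
        f"Student answer: {answer}\n"
        "Reply with 'Score: N' where N is an integer from 0 to 5."
    )

def parse_score(model_reply: str):
    """Pull the integer score out of the model's reply, if present."""
    match = re.search(r"Score:\s*(\d+)", model_reply)
    return int(match.group(1)) if match else None

# Usage with a stand-in model reply:
print(parse_score("Score: 4 -- clear reasoning, missing one example"))  # → 4
```

Requesting a fixed `Score: N` format is what makes open-ended grading aggregable: the free-text answers become numbers an instructor can review at a glance.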
Concerns About Bias
These new AI tools bring new challenges, however.
One issue is algorithmic bias. Such issues were already a concern even before the rise of ChatGPT. Researchers worried that when systems made predictions about a student being at risk based on large sets of data about previous students, the result could be to perpetuate historic inequities. The response had been to call for more transparency in the learning algorithms and data used.
Some experts worry that new generative AI models have what editors of the Journal of Learning Analytics call a “notable lack of transparency in explaining how their outputs are produced,” and many AI experts have worried that ChatGPT and other new tools also reflect cultural and racial biases in ways that are hard to track or address.
Plus, large language models are known to occasionally “hallucinate,” giving factually inaccurate information in some situations, which raises concerns about whether they can be made reliable enough for tasks like helping assess students.
To Shane Dawson, a professor of learning analytics at the University of South Australia, new AI tools make more pressing the issue of who builds the algorithms and systems that will have more power if learning analytics catches on more broadly at schools and colleges.
“There is a transference of agency and power at every level of the education system,” he said in a recent talk. “In a classroom, when your K-12 teacher is sitting there teaching your child to read and hands over an iPad with an [AI-powered] app on it, and that app makes a recommendation to that student, who now has the power? Who has agency in that classroom? These are questions that we need to tackle as a learning analytics field.”