COLING 2025 Tutorial
LLMs in Education: Novel Perspectives, Challenges, and Opportunities

New York University, NRC-CNRC, University of Cambridge, MBZUAI

About this tutorial

The role of large language models (LLMs) in education is an area of growing interest, given the new opportunities they offer for teaching, learning, and assessment. This cutting-edge tutorial provides an overview of the educational applications of NLP and the impact that recent advances in LLMs have had on this field. We will discuss the key challenges and opportunities presented by LLMs, grounding them in the context of four major educational applications: reading, writing, and speaking skills, and intelligent tutoring systems (ITS). This tutorial is designed for researchers and practitioners interested in the educational applications of NLP and the role LLMs have to play in this area. It is the first of its kind to address this timely topic.

Schedule

Slides

Time | Section | Presenter
2:00—2:15 | Section 1: Introduction | Ekaterina
2:15—3:00 | Section 2: LLMs for Writing Assistance | Bashar
3:00—3:45 | Section 3: LLMs for Reading Assistance | Sowmya
3:45—4:30 | Section 4: LLMs for Spoken Language Learning and Assessment | Stefano
4:30—5:15 | Section 5: LLMs in Intelligent Tutoring Systems (ITS) | Kaushal & Ekaterina
5:15—5:30 | Q & A

Reading List

We've compiled a comprehensive reading list for each topic covered in this tutorial. Each topic has a dedicated reading list page, along with the list of papers we'll discuss during the tutorial. We welcome additions and suggestions from the community for each topic. If you think we're missing essential papers, please submit a pull request.


Section 1: Overview


Section 2: LLMs for Writing Assistance


Section 3: LLMs for Reading Assistance


Section 4: LLMs for Spoken Language Learning and Assessment


Section 5: LLMs in Intelligent Tutoring Systems (ITS)


Section 6: Challenges & Opportunities

We will finish the tutorial by discussing open challenges and opportunities, as well as the most promising future directions. We solicit suggestions from the wider community on the material to cover in this section. If you would like to suggest papers we should cover, please submit a pull request.