Workshop Schedule

Time Slot   | Activity                                                                                                   | Type
09:00–09:30 | Welcome, introduction to the workshop, and introduction to XAI in education                               | Presentation
09:30–10:30 | Keynote presentation                                                                                       | Presentation
10:30–11:00 | Coffee break                                                                                               | –
11:00–11:45 | Group activity: Framing problems and needs of XAI in education                                             | Group work
11:45–12:30 | Accepted papers                                                                                            | Poster-like session
12:30–13:30 | Lunch                                                                                                      | –
13:30–14:15 | Panel discussion                                                                                           | Panel
14:15–15:00 | Group activity: Requirement definition and solution structuring                                           | Group work
15:00–15:30 | Coffee break                                                                                               | –
15:30–16:15 | Group activity: Group alignment and creating shared solution spaces (domain-specific and domain-agnostic) | Group work
16:15–17:00 | Open discussion, reflection, and closing thoughts                                                          | Discussion

Keynote Details

Portrait of Prof. Dr. Benjamin Paaßen

Photo credit: Susanne Freitag

Keynote Speaker

Prof. Dr. Benjamin Paaßen

Bielefeld University

From explainable to explaining systems - the role of social XAI in learning analytics and educational technology

While explainable AI (XAI) has made substantial progress in the past decades, most explanations are limited to selecting the most important features to explain a classification decision. In education, explanatory needs are much richer: we need to diagnose a learner's (or educator's) explanatory needs, adjust explanations to the context (person, topic, and situation), and communicate explanations in a way that makes them actionable for learners and teachers. This multi-step, dynamic paradigm of XAI has been dubbed social XAI (Rohlfing et al., 2020). The keynote will highlight research results that motivate the necessity of social XAI in learning analytics and educational technology, first steps in this direction, and open questions for future research.

Bio

Benjamin Paaßen is a junior professor for knowledge representation and machine learning at Bielefeld University. Their research focuses on interpretable, explainable, and domain-informed machine learning, especially for intelligent tutoring systems. As part of the large-scale research projects SAIL and KI-Akademie OWL and the collaborative research center TRR318 "Constructing Explainability", they engage in both foundational research and science communication, explaining the opportunities as well as the limitations of contemporary AI systems.

Accepted Papers

  • Daniel Mora Melanchthon and Andrea Horbach. “How reliable is that explanation? Intrinsic Evaluation of XAI methods in Automated Essay Scoring models”
  • Rania Ait Chabane, Armelle Brun and Azim Roussanaly. “Uncertainty-Aware Knowledge Tracing: Towards the Use of Subjective Logic”
  • Semyon Bosonogov and Alena Suvorova. “Problem of Overtrust in XAI Tools for Educational Data: Analysis of LLM Interpretations”
  • Reet Kasepalu, Kairit Tammets, Tobias Ley and Mutlu Cukurova. “AI-Supported Educational Decision-Making: Aligning Teacher Expertise Development and AI Teaming Levels”
  • Grzegorz Meller, Cédric Kestens and Tinne De Laet. “Layered Explainability for Advisor Support: A Visual-Conversational Interface for Predictive Learning Analytics”
  • Md Biplob Hosen, Houbing Herbert Song, Shuling Yang and Lujie Karen Chen. “NeuroSymRead: Symbolic Governance of Neural Generation for Adaptive Dialogic Reading”
  • Mehar Ali, Awais Ilyas Baig, Utsaha Joshi and Ekaterina Kuzmina. “TransitionIQ: An Explainable Learning Analytics Prototype for Cross-Discipline Transfer Readiness”