XAI-Ed 2026
Demystifying AI in Education and Learning Analytics through Explainability, Agency, and Transparency

Third Workshop on Explainable AI in Education, at the 16th International Conference on Learning Analytics & Knowledge (LAK 2026), Bergen, Norway, 27 April – 1 May 2026

Submit a Paper

Why XAI-Ed@LAK26?

As artificial intelligence (AI) becomes deeply embedded in educational technologies and Learning Analytics (LA), the demand for transparency, trust, and fairness has never been greater. Yet explainability remains an underexplored dimension of LA practice. Many analytics and AI models still operate as opaque “black boxes,” limiting educators’ and learners’ ability to interpret results, act upon insights, or evaluate fairness.

The XAI-Ed 2026 workshop responds to this challenge by positioning Explainable AI (XAI) and Explainable Learning Analytics (XLA) as essential enablers of trustworthy, participatory, and equitable education. It brings together researchers and practitioners to bridge the gap between algorithmic predictions and stakeholder understanding, linking technical advances in model interpretability with pedagogical principles, ethical responsibility, and institutional practice. Through presentations, panels, and collaborative sessions, the workshop will explore approaches ranging from intrinsic and post-hoc explainability to the use of large language models (LLMs) for adaptive explanations. It will foreground issues of bias, accountability, evaluation, and compliance, and emphasize stakeholder-sensitive, actionable explanations that empower educators and learners to make informed decisions.


    Workshop Goals and Objectives

    The overarching goal of XAI-Ed 2026 is to advance a shared agenda for human-centered, explainable learning analytics. Specifically, the workshop seeks to:

    • Promote the integration of explainability into the core of LA research and practice, emphasizing interpretability and stakeholder empowerment rather than post-hoc justification.
    • Foster interdisciplinary knowledge exchange across computer science, pedagogy, ethics, and institutional policy to co-design explainable and trustworthy systems.
    • Support equity and fairness by discussing the identification and mitigation of algorithmic bias in educational analytics.
    • Examine evaluation methods for XAI and XLA, including metrics for explanation faithfulness, usefulness, and usability.
    • Explore the role of LLMs and generative AI in producing stakeholder-aware, personalized explanations.
    • Develop strategies for institutional adoption of XAI-enabled learning analytics that align with educational values and regulatory frameworks.

    Expected Outcomes

    XAI-Ed 2026 aims to advance a transparent, participatory, and ethically grounded learning analytics practice: one that delivers not only accurate predictions but also actionable, explainable, and equitable insights that strengthen learner and educator agency. By the end of the workshop, participants will have:

    • A shared understanding of current challenges and opportunities in XAI and XLA.
    • Practical strategies for designing and evaluating stakeholder-sensitive explanations.
    • An understanding of the challenges and potential of integrating explainability into institutional analytics and policy frameworks.
    • New collaborative connections across technical, pedagogical, and policy domains.

    Call for Papers

    We welcome research and practitioner contributions to the workshop, as full papers or short papers (position papers, conceptual papers, or practitioner reports). Contributions are invited on (but not limited to) the following topics. More details on the CfP and the submission process are available here

    • Explainability and transparency in learning analytics
    • Evaluation metrics and frameworks for educational XAI/XLA
    • Pedagogical and cognitive foundations of explanations
    • Stakeholder-sensitive explanation design and user evaluation
    • Fairness, bias, and accountability in AI-enabled LA systems
    • Institutional and policy challenges in adopting XAI and XLA
    • Ethical, legal, and regulatory aspects of transparency and explainability in LA
    • Role of LLMs and generative AI in supporting explainability and learner reflection in LA
    • Case studies, prototypes, and empirical findings demonstrating XAI/XLA in educational contexts