Organizers

Hasan Abu-Rasheed
Hasan Abu-Rasheed is a postdoctoral researcher in the field of artificial intelligence in education, with a particular focus on explainable AI (XAI), knowledge graphs, and semantic technologies. He conducts his research at Goethe University Frankfurt, Germany, where he contributes to the development of AI-driven tools for higher education, ranging from intelligent dialogue systems (chatbots) to agent-based workflows for semantic information extraction as well as explainable learning analytics and feedback.

Jakub Kuzilek
Jakub Kuzilek is a researcher and research software engineer with the Learning Science in Higher Education research group at the CATALPA research center at FernUniversität in Hagen. His research focuses on the development of learning environments powered by machine learning, explainable machine learning in education, and predictive analysis of student behaviour in online environments.

Christian Weber
Dr. Christian Weber leads the research group Medical Informatics and Graph-based Systems (.MIGS), jointly with Prof. Dr.-Ing. Kai Hahn, in the Faculty of Science and Technology at the University of Siegen. His research focuses on graph-based systems, recommender systems, and AI, which he applies in medicine, education, and manufacturing. He represented the professorship of Medical Data Science at the University of Siegen from 2022 to 2024. He received his PhD from Corvinus University Budapest in Hungary as part of a Marie Skłodowska-Curie Actions Doctoral Network, where he laid new foundations for knowledge-intensive, individualized learning-path recommendations for vocational education and training in medical and industrial applications.

Hassan Khosravi
Associate Professor Hassan Khosravi is a recognised leader in Data Science and Artificial Intelligence in Education at The University of Queensland. His research sits at the intersection of learning sciences and human–computer interaction, with a particular focus on advancing the responsible and ethical use of AI in education. He has taught more than 15,000 students across a wide range of courses, authored over 100 peer-reviewed publications, and secured more than $6 million in competitive research funding. His contributions are shaping both scholarly discourse and practical innovation on the transformative role of AI in education.

Jeroen Ooge
Dr. Jeroen Ooge is an assistant professor at Utrecht University. His research is situated in human-computer interaction, which investigates how people interact with technologies. Dr. Ooge specialises in human-centred explainable artificial intelligence, which studies how the outcomes of AI models can be explained to different audiences in different contexts, for example, teenagers in an educational setting. He is interested in how transparency affects people’s trust in AI models, how it supports their decision-making, and whether it improves their understanding of those models, with a particular focus on how data visualisation can help in this endeavour.

Juan D. Pinto
Juan D. Pinto is a PhD student at the University of Illinois Urbana-Champaign. His research involves developing learner models using machine learning methods and tackling issues of AI interpretability in education. He currently conducts this work as a member of the Human-centered Educational Data Science (HEDS) Lab and the NSF AI Institute for Inclusive Intelligent Technologies for Education (INVITE).

Lea Cohausz
Lea Cohausz is a PhD student at the University of Mannheim. Her recent work includes research on how demographic variables influence predictions in EDM and the consequences for fairness (EDM 2023), as well as on identifying causal structures in educational data and their relationship to algorithmic bias (LAK 2024). She is interested in advancing our understanding of the complex relationships among the factors that influence students’ learning outcomes.

Luc Paquette
Luc Paquette is an associate professor in the Department of Curriculum & Instruction at the University of Illinois Urbana-Champaign. His research focuses on the use of machine learning, data mining, and knowledge engineering approaches to analyze and build predictive models of the behavior of students as they interact with digital learning environments such as MOOCs, intelligent tutoring systems, and educational games. He is interested in studying how those behaviors relate to learning outcomes and how predictive models of those behaviors can be used to better support students’ learning experience.

Luca Longo
Dr. Luca Longo received bachelor’s and master’s degrees in computer science, an MSc in statistics and one in health informatics, and a PhD in artificial intelligence from Trinity College Dublin. He also earned two MSc degrees in pedagogy from Technological University Dublin. He currently leads the Artificial Intelligence and Cognitive Load Research Laboratories and directs the Explainable Artificial Intelligence Centre. With his team of doctoral and postdoctoral scholars, he conducts fundamental research in eXplainable Artificial Intelligence, defeasible reasoning, and non-monotonic argumentation. He also performs applied research in deep learning and neuroscience, mainly on the problem of mental workload modelling using electroencephalography. He is the founder of the World Conference on eXplainable Artificial Intelligence and actively disseminates scientific material to the public, contributing to the non-profit TED Organisation.

Mutlu Cukurova
Prof. Mutlu Cukurova is affiliated with the UCL Knowledge Lab at the Institute of Education and the UCL Centre for Artificial Intelligence at the Faculty of Engineering, University College London. He investigates human-AI complementarity in teaching and learning contexts and leads the UCL Learning Analytics and AI in Education group at the UCL Knowledge Lab. He engages in policy-making activities as an external expert for UNESCO, the OECD, and the EC, authoring numerous influential policy reports (e.g. the UNESCO AI competency framework for teachers and Teachers’ agency in the age of AI). He was programme co-chair of the International Conference on AI in Education in 2020 and is named in Stanford’s Top 2% Scientists List. He is also Editor-in-Chief of the British Journal of Educational Technology and Associate Editor of the International Journal of Child–Computer Interaction.

Qianhui (Sophie) Liu
Qianhui (Sophie) Liu is a PhD student at the University of Illinois Urbana-Champaign. Her research in the HEDS Lab focuses on applying data mining methods in combination with learning science theories to improve the efficiency of teaching and learning in various educational settings. She is interested in closing the loop between machine learning and human stakeholders, delivering actionable insights through explainable models and techniques.

Tanja Käser
Tanja Käser is a tenure-track assistant professor in computer science at EPFL, where she heads the Machine Learning for Education Lab. Her research lies at the intersection of machine learning, data mining, and education. She is particularly interested in creating accurate models of human behavior and learning. Prior to joining EPFL, she was a senior data scientist with the Swiss Data Science Center at ETH Zurich and a postdoctoral researcher at the Graduate School of Education at Stanford University. She received her PhD from the Computer Science Department of ETH Zurich; her dissertation, focused on user modeling and data mining in education, was honored with the Fritz Kutter Award in 2015. She is a board member of the International Educational Data Mining Society and an associate editor of the International Journal of Artificial Intelligence in Education.

Vinitra Swamy
Vinitra Swamy is a postdoctoral researcher at EPFL. Her research with the ML4ED lab involves explainable AI for education, especially through the lens of reducing adoption barriers for neural networks. Her recent work focuses on uncovering disagreement among post-hoc explainers, working with learning science experts to validate explainer accuracy and actionability, and proposing interpretable-by-design neural network architectures.