Teachers’ Lived Experiences with Explainable AI in Classroom Decision-Making
Abstract
As artificial intelligence (AI) technologies become increasingly embedded in educational systems, explainable AI (XAI) is emerging as a way to support teachers in decisions about grading, student interaction, and learning diagnostics. This paper examines the experiences of secondary and post-secondary teachers who interacted with XAI-enabled systems in their classrooms. Using a phenomenological qualitative design, the study draws on in-depth interviews with teachers (n=12) from both state and private schools. Data were analyzed through thematic coding in NVivo, with attention to both technical familiarity and pedagogical impact. The study proceeds from two assumptions: that algorithmic decision-making plays a growing role in the classroom, and that AI output must be interpretable if teacher agency is not to be diminished. Unlike prior work that concentrates on student data analytics, it foregrounds the emotional, cognitive, and pedagogical tensions teachers experience when working with AI systems that shape their teaching. The paper concludes that building teacher confidence and competence with XAI requires professional development and a participatory approach to AI design. Future research should examine the longitudinal effects of XAI on teaching autonomy and student outcomes.


