Enhancing Fairness and Explainability in Student Performance Prediction Using Bias Mitigation Techniques
DOI: https://doi.org/10.5755/j01.itc.55.1.39399

Keywords: Student Performance, Fairness in AI, Bias Mitigation, Explainability, Adversarial Debiasing, Model Transparency, Educational Data Mining, Predictive Modeling, AI

Abstract
Ensuring fairness in artificial intelligence (AI)-driven student performance prediction remains a critical challenge, as biases in educational data can lead to unfair treatment of certain demographic groups. This study aims to develop fair and explainable AI models for predicting student performance in secondary education. Specifically, we investigate how bias mitigation techniques can be integrated with explainability methods to improve both fairness and interpretability without compromising predictive accuracy. We analyze a real-world dataset from Portuguese schools and apply machine learning models including Random Forest, XGBoost, and Logistic Regression. To mitigate bias, we implement fairness constraints and employ Adversarial Debiasing Representation Learning (ADRL). Post-hoc explainability is achieved using SHapley Additive exPlanations (SHAP) to reveal the most influential factors in model predictions. Our findings demonstrate that bias mitigation techniques substantially reduce fairness violations while maintaining high predictive performance: the Bias Severity Index decreases from 0.35 to 0.08, and the demographic parity difference improves from 15.3% to 4.2%. SHAP analysis reveals that factors such as study time, parental education, and previous grades have the most significant influence on student performance predictions. By integrating fairness-aware learning with explainability tools, this study helps ensure that AI models in education remain both equitable and interpretable.
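The demographic parity figures reported above measure the gap in positive-prediction rates between demographic groups. A minimal sketch of that metric is shown below; the function name and toy data are illustrative, not taken from the paper's implementation:

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: iterable of 0/1 predicted labels (e.g. pass/fail).
    group:  iterable of 0/1 flags for a binarised demographic attribute.
    """
    rates = []
    for g in (0, 1):
        preds = [p for p, m in zip(y_pred, group) if m == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])

# Toy example: group 0 is predicted to pass 75% of the time, group 1 only 50%.
y_pred = [1, 1, 0, 1, 0, 0, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # → 0.25
```

A reduction from 15.3% to 4.2%, as reported in the abstract, corresponds to this gap shrinking from 0.153 to 0.042 after bias mitigation.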
License: Copyright terms are indicated in the Republic of Lithuania Law on Copyright and Related Rights, Articles 4-37.