Description
Algorithmic decision-making in recruitment offers substantial efficiency gains but raises persistent concerns about embedded bias, opacity, and fairness. This study outlines a framework for bias detection and mitigation in AI-driven hiring systems, combining technical and procedural strategies for transparency and accountability. We first review established methods such as explainability techniques (e.g., SHAP, LIME), structured audit reporting, and human oversight mechanisms. Building on this framework, we conduct a comparative empirical assessment in which generative AI models evaluate synthetically generated CVs that are identical in structure and content but differ in gender and age indicators. Our findings show that assessments vary with these protected attributes, underscoring the challenge of ensuring fairness even with controlled prompts and uniform data. The results point to the importance of prompt design, audit trails, and continuous monitoring in mitigating bias.
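To illustrate the explainability techniques reviewed above, the following is a minimal sketch of using SHAP to surface which features, including protected attributes, drive a hiring classifier's predictions. The feature names, synthetic data, and random-forest model here are illustrative assumptions for demonstration, not the study's actual pipeline.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical tabular features parsed from CVs; gender_flag and age
# stand in for the protected attributes discussed in the abstract.
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, 200),
    "education_level": rng.integers(1, 5, 200),
    "gender_flag": rng.integers(0, 2, 200),  # protected attribute
    "age": rng.integers(22, 65, 200),        # protected attribute
})
y = rng.integers(0, 2, 200)  # hypothetical hire/no-hire labels

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to individual features.
explainer = shap.TreeExplainer(model)
vals = explainer.shap_values(X)
if isinstance(vals, list):   # older shap: one array per class
    vals = vals[1]
elif vals.ndim == 3:         # newer shap: (samples, features, classes)
    vals = vals[..., 1]

# A large mean |SHAP| value for a protected attribute is a red flag
# that the model leans on it when scoring candidates.
mean_abs = np.abs(vals).mean(axis=0)
for name, v in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {v:.4f}")
```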
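The paired-CV comparison at the core of the empirical assessment can be sketched as follows. The `score_cv` wrapper, the CV template, and the name/age variants are hypothetical placeholders; in the actual study, `score_cv` would issue a fixed rubric prompt to a generative model and parse a numeric suitability score from the response.

```python
from itertools import product
from statistics import mean

# One CV template; variants differ ONLY in protected-attribute indicators.
CV_TEMPLATE = """\
Name: {name}
Age: {age}
Experience: 8 years in data engineering at mid-sized firms
Education: MSc Computer Science
Skills: Python, SQL, Spark, cloud data pipelines
"""

VARIANTS = {
    "name": ["James Miller", "Emily Miller"],  # gender indicator
    "age": [29, 55],                           # age indicator
}

def score_cv(cv_text: str) -> float:
    # Placeholder: in practice, send cv_text with a fixed evaluation
    # prompt to an LLM and parse a 0-100 suitability score. A dummy
    # constant is returned here so the audit loop runs end to end.
    return 50.0

def run_audit() -> None:
    scores = {}
    for name, age in product(VARIANTS["name"], VARIANTS["age"]):
        cv = CV_TEMPLATE.format(name=name, age=age)
        # Repeat each evaluation to average out sampling noise.
        scores[(name, age)] = mean(score_cv(cv) for _ in range(5))
    # Paired comparison: same age, different gender indicator. A nonzero
    # gap on structurally identical CVs signals attribute-driven bias.
    for age in VARIANTS["age"]:
        gap = scores[("James Miller", age)] - scores[("Emily Miller", age)]
        print(f"age {age}: gender score gap = {gap:+.2f}")

if __name__ == "__main__":
    run_audit()
```

Keeping the template and prompt fixed while varying only the protected indicators is what makes the score gap attributable to those attributes rather than to differences in CV content.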