eCommons


Improving Inclusion in AI-Based Candidate Disparate Impact and Counterfactual Testing

dc.contributor.author: Kotharu, Aishwarya
dc.contributor.author: Shaikh, Rumiza Shakeel
dc.date.accessioned: 2025-04-26T04:56:27Z
dc.date.available: 2025-04-26T04:56:27Z
dc.date.issued: 2025-04-26
dc.description: This study explores the application of Disparate Impact and Counterfactual Fairness metrics in AI-driven recruitment systems, specifically focusing on Workday ATS. It investigates how indirect features, such as education prestige, can lead to biased outcomes, even in seemingly neutral AI models. Through empirical analysis, the research demonstrates how these biases disproportionately affect certain candidate groups. The study also examines mitigation strategies, such as feature masking and reweighting, to improve fairness while maintaining model performance.
dc.description.abstract: Automated recruitment platforms, such as Workday and iCIMS, increasingly rely on machine learning (ML) models to streamline candidate selection. However, these systems may inadvertently reinforce existing biases in hiring processes. This paper proposes a quantitative and empirical approach to evaluating and improving fairness in AI-based hiring pipelines. We focus on Disparate Impact (DI) measurement and Counterfactual Fairness testing to audit a representative ATS—Workday’s AI screening engine. We compute DI across various demographic groups, demonstrating how current candidate scoring mechanisms fail the "four-fifths" threshold (DI < 0.8) in simulated recruitment scenarios. To mitigate such inequities, we introduce a bias correction model that rebalances feature weights in post-processing. Among several variables, we identify the "education prestige score" as a key contributor to disparate treatment. Experimental results on semi-synthetic datasets show that reweighting or masking this feature significantly reduces DI disparity while preserving predictive performance (AUC drop <1%). This work provides a replicable methodology for fairness auditing in enterprise-grade ATS, offering a pathway to more equitable recruitment practices.
dc.identifier.citation: Not previously published
dc.identifier.uri: https://hdl.handle.net/1813/116850
dc.language.iso: en
dc.publisher: Cornell University
dc.rights: CC0 1.0 Universal
dc.rights.uri: http://creativecommons.org/publicdomain/zero/1.0/
dc.subject: Artificial Intelligence
dc.subject: Disparate Impact
dc.subject: Algorithmic Fairness
dc.subject: Counterfactual Testing
dc.subject: AI Hiring
dc.subject: Applicant Tracking Systems (ATS)
dc.title: Improving Inclusion in AI-Based Candidate Disparate Impact and Counterfactual Testing
dc.title.alternative: AI Hiring Fairness: Disparate Impact & Testing
dc.type: report
schema.accessibilitySummary: The plain-text document is the accessible version: a UTF-8 encoded whitepaper with a logical heading structure and consistent formatting. It contains no visual or multimedia elements and is fully compatible with screen readers and other assistive technologies.
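The Disparate Impact check described in the abstract can be sketched as follows. This is a minimal illustration, not code from the study: the candidate outcomes, group labels, and the `disparate_impact` helper are all hypothetical, and DI is computed here in its standard form as the ratio of the lowest group selection rate to the highest, compared against the 0.8 ("four-fifths") threshold.

```python
# Minimal sketch of a Disparate Impact (DI) audit on screening outcomes.
# All data below is hypothetical and for illustration only.

def disparate_impact(selected, group):
    """Ratio of the minimum group selection rate to the maximum.

    selected -- list of 0/1 outcomes (1 = advanced by the screening model)
    group    -- parallel list of demographic group labels
    """
    rates = {}
    for g in set(group):
        members = [selected[i] for i, gg in enumerate(group) if gg == g]
        rates[g] = sum(members) / len(members)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two groups of five candidates each.
selected = [1, 1, 1, 0, 1,   # group A: selection rate 0.8
            0, 1, 1, 0, 0]   # group B: selection rate 0.4
group = ["A"] * 5 + ["B"] * 5

di = disparate_impact(selected, group)
print(f"DI = {di:.2f}")                      # DI = 0.50
print("passes four-fifths rule:", di >= 0.8)  # passes four-fifths rule: False
```

A DI below 0.8, as in this toy example, is the condition the paper uses to flag a scoring mechanism as failing the four-fifths threshold; mitigation (e.g. masking or reweighting a feature such as the education prestige score) would then be applied and DI recomputed.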

Files

Original bundle
Name: Evaluating_Fairness_in_AI_Recruitment_Systems_Workday_ATS_Bias_Study.txt
Size: 71.24 KB
Format: Plain Text