International Journal of Drug Delivery Technology
Volume 16, Issue 8s, 2026

Designing Fairness: A Systematic Review of Bias Detection and Mitigation in EHR-Based Artificial Intelligence

1 Vikas Singh, 2 Dr. Pradeep Tangade, 3 Dr. Ankita Jain

1Professor, Department of Public Health Dentistry, Teerthanker Mahaveer University, Teerthanker Mahaveer Dental College & Research Centre, Moradabad, UP, India

2Professor & Head, Department of Public Health Dentistry, Teerthanker Mahaveer University, Teerthanker Mahaveer Dental College & Research Centre, Moradabad, UP, India

3Professor, Department of Public Health Dentistry, Teerthanker Mahaveer University, Teerthanker Mahaveer Dental College & Research Centre, Moradabad, UP, India

Corresponding Author: drvikas7@gmail.com

Abstract

Objectives: This research explores methods for addressing various forms of bias in artificial intelligence (AI) models developed with electronic health record (EHR) data. The combination of AI and EHR data could transform healthcare, but bias in AI must be addressed to prevent the widening of healthcare inequalities.

Materials and Methods: We conducted a systematic review following established recommendations for reporting systematic reviews and meta-analyses. We searched IEEE, Web of Science (WoS), and PubMed for publications from January 1, 2010, to December 17, 2023. Each included paper was evaluated for its bias-assessment metrics, the principal biases it identified, and the methods it described for detecting and mitigating bias throughout AI model development.

Results: Twenty of the 450 retrieved publications met our inclusion criteria, together identifying six main categories of bias: temporal, measurement, confounding, implicit, algorithmic, and selection bias. None of the AI models had been deployed in real-world healthcare settings; most were developed for predictive purposes. Five studies used fairness metrics such as statistical parity, equal opportunity, and predictive equity to detect algorithmic and implicit biases. Fifteen studies proposed strategies for mitigating bias, targeting implicit and selection biases in particular. These strategies mainly involved data collection and preprocessing techniques such as resampling and reweighting, and they were evaluated using both performance and fairness metrics.
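As an illustration of one of the fairness metrics named above, the following sketch (not taken from any reviewed study; the function name and toy data are hypothetical) computes the statistical parity difference for a binary classifier's predictions. Statistical parity holds when the positive-prediction rate is the same across patient groups, i.e. a difference of 0.0.

```python
def statistical_parity_difference(preds, groups, privileged="A"):
    """Difference in positive-prediction rates between the privileged
    group and all other groups combined; 0.0 indicates parity.

    Illustrative only: group labels and threshold conventions are
    assumptions, not definitions from the reviewed studies.
    """
    priv = [p for p, g in zip(preds, groups) if g == privileged]
    unpriv = [p for p, g in zip(preds, groups) if g != privileged]
    return sum(priv) / len(priv) - sum(unpriv) / len(unpriv)

# Toy data: four predictions per group
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

Mitigation techniques such as reweighting act on this kind of gap by up- or down-weighting samples so that group-wise prediction rates move toward parity without discarding data.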

Discussion: This review surveys current methods for removing bias from AI models that use electronic health records, and it highlights the urgent need for standardised, thorough reporting of methodology as well as rigorous real-world testing and evaluation. These measures are crucial for assessing the usefulness of models and for developing ethical AI that guarantees fairness and integrity in healthcare.

Key words: AI, Bias, EHR

How to cite this article: Singh V, Tangade P, Jain A. Designing Fairness: A Systematic Review of Bias Detection and Mitigation in EHR-Based Artificial Intelligence. Int J Drug Deliv Technol. 2026;16(8s): 890-899; DOI: 10.25258/ijddt.16.8s.98

Source of support: Nil.

Conflict of interest: None