
Algorithmic Auditing: Identifying and Addressing Bias in Content Recommendation Platforms

Students & Supervisors

Student Authors
Tariqul Islam
Bachelor of Science in Computer Science & Engineering, FST
Tamanna Binte Waliur Joty
Bachelor of Science in Computer Science & Engineering, FST
Md. Shah Paran Jibon
Bachelor of Science in Computer Science & Engineering, FST
Md Ashiqur Rahman
Bachelor of Science in Computer Science & Engineering, FST
Supervisors
Mahfuza Khatun
Associate Professor, Faculty, FST

Abstract

Content recommendation systems significantly shape how billions of people access information, yet systemic biases along demographic, ideological, and economic dimensions threaten equitable access to content and opportunities. Purpose: This paper surveys algorithmic bias detection methodologies and mitigation strategies in prominent content platforms and evaluates their effectiveness in promoting fairness and accountability in recommender systems. Methodology: We conducted a comprehensive meta-study of the algorithmic-audit literature published between 2018 and 2025, covering 47 peer-reviewed articles and the regulatory regimes of 8 jurisdictions. The quantitative evaluation considered 8 bias types, 9 mitigation techniques, and 6 auditing approaches; chi-squared tests were used for statistical significance testing (α = 0.05). Results: Sock-puppet auditing was the most frequent methodological approach (n = 23, 48.9% of the experiments) and proved significantly effective at revealing demographic bias (p < 0.001, χ² = 15.7). Major platforms exhibited significant political bias: Twitter/X delivered 11.8% more politically aligned content to Republican-seeded accounts (p = 0.032), and YouTube's algorithm directed right-leaning personas toward radical/extremist material 2.3 times more often than other personas (p = 0.008). Pre-processing mitigation techniques achieved 67% bias reduction, compared with 42% for post-processing techniques (p = 0.019). Conclusion: Systematic algorithmic auditing reveals pervasive bias across platforms, and sock-puppet methods provide an efficient detection mechanism. Multi-stage mitigation schemes that combine data correction in pre-processing with fairness constraints in in-processing achieve the best trade-off between bias suppression and recommendation quality.
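The chi-squared analyses reported above follow a standard test of independence on audit count data. Below is a minimal, illustrative sketch (not the authors' code) of how such a test can be run in Python with scipy; the sock-puppet counts are hypothetical placeholders, not figures from the study.

    from scipy.stats import chi2_contingency

    # Contingency table from a hypothetical sock-puppet audit.
    # Rows: seeded account group; columns: recommended items that were
    # aligned vs. not aligned with the seeded leaning (made-up counts).
    observed = [
        [310, 190],  # puppets seeded with leaning A
        [245, 255],  # puppets seeded with leaning B
    ]

    # Test whether recommendation alignment is independent of the seed group.
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

    # The paper's significance threshold.
    alpha = 0.05
    print("biased delivery detected" if p < alpha else "no significant dependence")

A significant result here indicates only that recommendation alignment depends on the seeded group; quantifying the size of the disparity (e.g., the 11.8% delivery gap reported above) requires a separate effect-size estimate.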

Keywords

algorithmic auditing; recommendation systems; bias detection; fairness metrics; content platforms; algorithmic accountability

Publication Details

  • Type of Publication:
  • Conference Name: International Conference on Challenges and Trends in Arts and Social Sciences (ICCTASS 2025)
  • Date of Conference: 11/12/2025 - 11/12/2025
  • Venue: American International University–Bangladesh (AIUB)
  • Organizer: Faculty of Arts and Social Sciences (FASS)