Evaluating Algorithmic Fairness in Small Business Lending Models Across Urban and Rural Banking Markets

Authors

  • Mahesh Goyal, Data Engineer, Google LLC, USA

Keywords:

Algorithmic fairness, small business lending, rural banking, urban credit models, machine learning bias, financial inclusion

Abstract

This study investigates algorithmic fairness in small business lending models across urban and rural banking environments. As machine learning tools increasingly shape credit decisions, concerns over fairness, bias, and discrimination intensify. Disparities in data availability, socioeconomic indicators, and digital infrastructure between rural and urban settings pose significant challenges to building models that treat both markets equitably. The research reviews fairness metrics and bias mitigation strategies and proposes a framework for equitable model evaluation. Emphasis is placed on the intersection of geography and credit access, particularly for marginalized small enterprises.
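The fairness metrics the abstract refers to can be made concrete with a small sketch. The Python example below (illustrative only, not the authors' implementation) computes two widely used group-fairness measures, the demographic parity difference and the equal opportunity difference, for a hypothetical lending model whose approval decisions are compared across an assumed urban/rural market attribute; all data, group labels, and rates are synthetic assumptions.

    # Illustrative sketch only: standard group-fairness metrics for a
    # hypothetical small-business lending model, comparing urban vs. rural
    # applicants. All data, group labels, and rates below are assumptions.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000

    # Synthetic applicants: market segment, observed repayment, model approval.
    market = rng.choice(["urban", "rural"], size=n, p=[0.6, 0.4])
    y_true = rng.binomial(1, 0.7, size=n)                              # 1 = repaid
    y_pred = rng.binomial(1, np.where(market == "urban", 0.55, 0.45))  # 1 = approved

    def demographic_parity_difference(y_pred, group):
        """Approval-rate gap between urban and rural applicants."""
        rate = {g: y_pred[group == g].mean() for g in ("urban", "rural")}
        return rate["urban"] - rate["rural"]

    def equal_opportunity_difference(y_true, y_pred, group):
        """True-positive-rate gap: approval rates among applicants who repaid."""
        tpr = {}
        for g in ("urban", "rural"):
            repayers = (group == g) & (y_true == 1)
            tpr[g] = y_pred[repayers].mean()
        return tpr["urban"] - tpr["rural"]

    print("Demographic parity difference:", round(demographic_parity_difference(y_pred, market), 3))
    print("Equal opportunity difference: ", round(equal_opportunity_difference(y_true, y_pred, market), 3))

Values near zero indicate similar treatment of the two markets. Libraries such as Fairlearn and AIF360 provide maintained implementations of these and related metrics, along with bias mitigation techniques of the kind discussed in the study.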

Published

2025-07-26

How to Cite

Evaluating Algorithmic Fairness in Small Business Lending Models Across Urban and Rural Banking Markets. (2025). GLOBAL JOURNAL OF MULTIDISCIPLINARY RESEARCH AND DEVELOPMENT, 6(4), 1-5. https://gjmrd.com/index.php/GJMRD/article/view/GJMRD.6.4.001