DEVELOPING EXPLAINABLE STATISTICAL MODELS FOR ENHANCING INTERPRETABILITY AND TRUST IN MACHINE LEARNING APPLICATIONS

Authors

  • S. B. Vinay, The Velammal International School, Velammal Knowledge Park, Panchetti, Tamil Nadu, India

Keywords:

Explainable Machine Learning, Interpretability, Statistical Models, Trust in AI, Predictive Modeling

Abstract

The demand for explainable machine learning (ML) models is growing, driven by applications where interpretability and trust are critical. This paper explores the development of explainable statistical models to enhance interpretability without compromising accuracy. By integrating statistical methods and modern ML techniques, the study provides insights into improving transparency in predictive modeling. The paper includes a detailed review of relevant literature, proposes methodologies for model development, and evaluates their applicability across different domains. Graphs and tables illustrate the findings to ensure clarity and accessibility.
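To make the abstract's idea concrete, here is a minimal sketch (not taken from the paper) of an explainable statistical model: a logistic regression fit by plain gradient descent on an invented toy dataset, where the learned weights can be read directly as each feature's contribution to the log-odds. All data and variable names below are illustrative assumptions.

```python
import math

# Toy data (hypothetical): each row is [feature1, feature2], labels in y.
X = [[0.5, 1.0], [1.5, 0.2], [3.0, 0.1], [2.8, 0.3],
     [0.2, 2.0], [0.4, 1.8], [3.2, 0.5], [0.1, 1.5]]
y = [0, 1, 1, 1, 0, 0, 1, 0]

# Fit logistic regression by stochastic gradient descent.
# Unlike a black-box model, the parameters w and b remain inspectable.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        z = b + sum(wj * xj for wj, xj in zip(w, xi))
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
        err = p - yi
        b -= lr * err
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]

# The "explanation": each weight is the change in log-odds of class 1
# per unit increase in that feature.
print(f"w1={w[0]:+.2f}, w2={w[1]:+.2f}, b={b:+.2f}")
```

On this toy data, feature1 drives class 1 and feature2 drives class 0, so the fitted weights carry opposite signs; reading such coefficients (and their signs and magnitudes) is exactly the kind of transparency the statistical side of the approach offers.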

References

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?" Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems.

Kumari, B. (2024). Innovative Cloud Architectures: Revolutionizing Enterprise Operations Through AI Integration. International Journal for Multidisciplinary Research, 6(6), 1–9.

Nivedhaa, N. (2024). A comprehensive analysis of current trends in data security. International Journal of Cyber Security (IJCS), 2(1), 1–16.

Caruana, R., et al. (2015). Intelligible Models for Healthcare: Predicting Pneumonia Risk and Hospital 30-day Readmission. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Sankar Narayanan, S. (2010). Pattern Based Software Patent. International Journal of Computer Engineering and Technology (IJCET), 1(1), 8–17.

Kumari, B. (2024). Building scalable AI-driven MDM strategies with D365: A technical deep dive. International Journal of Research in Computer Applications and Information Technology, 7(2), 797–812.

Sankara Narayanan, S., & Ramakrishnan, M. (2017). Software as a Service: MRI Cloud Automated Brain MRI Segmentation and Quantification Web Services. International Journal of Computer Engineering & Technology, 8(2), 38–48.

Nivedhaa, N. (2024). Towards efficient data migration in cloud computing: A comparative analysis of methods and tools. International Journal of Artificial Intelligence and Cloud Computing (IJAICC), 2(1), 1–16.

Lipton, Z. C. (2016). The Mythos of Model Interpretability: In Machine Learning, the Concept of Interpretability Is Both Important and Slippery. arXiv preprint arXiv:1606.03490.

Sankar Narayanan, S. (2010). Intellectual Property Rights: Economy vs. Science & Technology. International Journal of Intellectual Property Rights (IJIPR), 1(1), 6–10.

Doshi-Velez, F., & Kim, B. (2017). Towards a Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608.

Kumari, B. (2024). Autonomous Data Healing: AI-Driven Solutions for Enterprise Data Integrity. International Journal of Computer Engineering and Technology, 15(6), 33–45.

Vasudevan, K. (2023). Applications of Artificial Intelligence in Power Electronics and Drives Systems: A Comprehensive Review. Journal of Power Electronics (JPE), 1(1), 1–14. doi: https://doi.org/10.17605/OSF.IO/68SQR

Mukesh, V. (2022). Cloud Computing Cybersecurity Enhanced by Machine Learning Techniques. Frontiers in Computer Science and Information Technology (FCSIT), 3(1), 1-19.

Molnar, C. (2019). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. CreateSpace Independent Publishing Platform.

Biecek, P., & Burzykowski, T. (2020). Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models. Chapman and Hall/CRC.

Kumari, B. (2024). Intelligent Data Governance Frameworks: A Technical Overview. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 10(6), 141–154.

Kannan, N. (2024). Exploring robustness and generalization in data science models through multi-fidelity simulations and transfer learning. International Journal of Data Scientist (IJDST), 1(2), 1–11.

Shrikumar, A., Greenside, P., & Kundaje, A. (2017). Learning Important Features Through Propagating Activation Differences. Proceedings of the 34th International Conference on Machine Learning.

Kumari, B. (2024). Enhancing Data Security in Knowledge Databases: A Novel Integration of Fast Huffman Encoding and Encryption Techniques. Technoarete Transactions on Advances in Computer Applications, 3(3), 1–11.

Tamilselvan, N. (2024). Blockchain-based digital rights management for enhanced content security in digital libraries. International Journal of Blockchain Technology (IJBT), 2(1), 1–8.

Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High-Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1(5), 206–215.

Guidotti, R., et al. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys (CSUR), 51(5), 1–42.

Govindaraaj, J. (2023). Analyzing the effectiveness of data security policies in legacy systems. International Journal of Cyber Security (IJCS), 1(1), 16–24.

Published

2025-03-05

How to Cite

DEVELOPING EXPLAINABLE STATISTICAL MODELS FOR ENHANCING INTERPRETABILITY AND TRUST IN MACHINE LEARNING APPLICATIONS. (2025). GLOBAL JOURNAL OF MULTIDISCIPLINARY RESEARCH AND DEVELOPMENT, 6(2), 1-6. https://gjmrd.com/index.php/GJMRD/article/view/GJMRD.6.2.001