Developing interpretable models and techniques for explainable AI in decision-making
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2023.14.4.39
Keywords: Explainable AI, interpretable AI models, Cybersecurity, Attack types, Decision-making, Botanical classification
Issue: Vol. 14 No. 04 (2023): The Scientific Temper
License
Copyright (c) 2023 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract

The rapid proliferation of artificial intelligence (AI) technologies across industries and decision-making processes has transformed the way complex problems and tasks are approached. AI systems have proven their prowess in areas such as healthcare, finance, and autonomous systems, revolutionizing how decisions are made. Nevertheless, this proliferation has raised critical concerns about the transparency, accountability, and fairness of these systems, as many state-of-the-art AI models resemble complex black boxes. Such models, particularly deep neural networks, encode non-linear relationships that are difficult for human users to decipher, raising concerns about bias, fairness, and the overall trustworthiness of AI-driven decisions. The urgency of this issue is underscored by the realization that AI should not merely be accurate; it should also be interpretable.

Explainable AI (XAI) has emerged as a vital field of research, emphasizing the development of models and techniques that render AI systems comprehensible and transparent in their decision-making. This paper investigates the relevance and significance of XAI across domains where understanding the rationale behind AI decisions is paramount. In healthcare, where AI assists in diagnosis and treatment, interpretability is crucial for clinicians to make informed decisions. In finance, applications such as credit scoring and investment analysis demand transparent AI to ensure fairness and accountability. In autonomous systems, transparency is indispensable for guaranteeing safety and compliance with regulations. Government agencies in areas such as law enforcement and social services likewise require interpretable AI to maintain ethical standards and accountability.

The paper also surveys the diverse research efforts in the XAI domain, ranging from model-specific interpretability methods to more general approaches for explaining complex AI models. Interpretable models such as decision trees and rule-based systems have gained attention for their inherent transparency, while integrating explanation layers into deep neural networks strives to balance accuracy with interpretability; a brief illustration of both strategies follows this abstract. The study emphasizes the significance of this burgeoning field in bridging the gap between AI's advanced capabilities and human users' need for comprehensible AI systems, and it contributes by exploring the design, development, and practical applications of interpretable AI models and techniques, with the ultimate goal of enhancing trust in and understanding of AI-driven decisions.
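To make the two strategies concrete, here is a minimal sketch. It is not from the paper: it assumes scikit-learn and its bundled Iris dataset as illustrative stand-ins, since the article names no specific library or data. It first fits a shallow decision tree whose entire decision logic can be printed as if/then rules (inherent transparency), then applies permutation importance as a model-agnostic, post-hoc explanation of a more opaque ensemble.

```python
# Minimal sketch, assuming scikit-learn and the Iris dataset as stand-ins;
# the paper itself does not specify a library or dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Inherently interpretable model: a shallow tree whose full decision
# logic prints as human-readable if/then threshold rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# Model-agnostic, post-hoc explanation: permutation importance treats the
# fitted forest as a black box and scores each feature by how much
# shuffling its values degrades predictive accuracy.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

The tree's printed rules can be audited line by line, which is the transparency the abstract attributes to interpretable models, whereas the importance scores explain the black-box model only at the level of feature influence; that gap is the accuracy-versus-interpretability trade-off the paper discusses.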
Similar Articles
- Afroz Alam, Krishna Kumar Rawat, Praveen Kumar Verma, Sonu Yadav, Bryodiversity of Eastern Ghats (India) , The Scientific Temper: Vol. 7 No. 1&2 (2016): THE SCIENTIFIC TEMPER
- Maj Neerja Masih, E.S. Charles, Study of Rhodotorula glutinis growth and lipid production using low cost substrates , The Scientific Temper: Vol. 7 No. 1&2 (2016): THE SCIENTIFIC TEMPER
- B. R. Jaipal, Population structure of Nilgai (Boselaphus tragocamelus) in the semi-arid region of the Thar Desert , The Scientific Temper: Vol. 10 No. 1&2 (2019): The Scientific Temper
- Naveen Kumar Sharma, Kapil Kumar, A review of Himalayan biodiversity with reference to Uttarakhand , The Scientific Temper: Vol. 10 No. 1&2 (2019): The Scientific Temper
- Anita Yadav, Neerja Kapoor, Shivji Malviya, Sandeep K. Malhotra, COVID-19 Pandemic and the Global Vaccine Strategy , The Scientific Temper: Vol. 11 No. 1&2 (2020): The Scientific Temper
- Gourav Kalra, Arun Kumar Gupta, Multi-response Optimization of Machining Parameters in Inconel 718 End Milling Process Through RSM-MOGA , The Scientific Temper: Vol. 13 No. 02 (2022): The Scientific Temper
- Saroj Bala, Rajiv Ranjan Dwivedi, The Problematics of Parenthood in the Shiva Trilogy by Amish , The Scientific Temper: Vol. 13 No. 02 (2022): The Scientific Temper
- Jyoti Kataria, Himanshi Rawat, Himani Tomar, Naveen Gaurav, Arun Kumar, Azo Dyes Degradation Approaches and Challenges: An Overview , The Scientific Temper: Vol. 13 No. 02 (2022): The Scientific Temper
- Sohini Bhattacharyya, Ajay Kumar Harit, Manoj Singh, Urvashi Sharma, Chaitramayee Pradhan, Occurrence of Antibiotic Resistance in Lotic Ecosystems , The Scientific Temper: Vol. 13 No. 02 (2022): The Scientific Temper
- Abhishek K Pandey, Amrita Sahu, Ajay K Harit, Manoj Singh, Nutritional composition of the wild variety of edible vegetables consumed by the tribal community of Raipur, Chhattisgarh, India , The Scientific Temper: Vol. 14 No. 01 (2023): The Scientific Temper
Most read articles by the same author(s)
- Pravin P. Adivarekar, Amarnath Prabhakaran A, Sukhwinder Sharma, Divya P, Muniyandy Elangovan, Ravi Rastogi, Automated machine learning and neural architecture optimization , The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper
- Balaji V, Purnendu Bikash Acharjee, Muniyandy Elangovan, Gauri Kalnoor, Ravi Rastogi, Vishnu Patidar, Developing a semantic framework for categorizing IoT agriculture sensor data: A machine learning and web semantics approach , The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper