EXPLAINABLE AI MODELS FOR SAFETY-CRITICAL ENGINEERING SYSTEMS

Authors

  • Jumanne M Author

DOI:

https://doi.org/10.12060/jet-ep-v25.i1-2

Keywords:

Explainable AI, safety-critical systems, interpretability, transparency, trustworthiness, model explainability, engineering systems

Abstract

The adoption of Artificial Intelligence (AI) across safety-critical engineering domains — including autonomous vehicles, aerospace control systems, industrial automation, and critical infrastructure monitoring — promises significant performance improvements. However, model opacity and the “black-box” nature of many AI and machine learning (ML) systems introduce risks of misinterpretation, misdiagnosis, and catastrophic failure, which can directly threaten human life, asset safety, and environmental integrity. Explainable AI (XAI) addresses this challenge by providing interpretable and transparent accounts of AI decision processes, which are crucial for validation, regulatory compliance, operator trust, and real-time operational accountability. This paper offers a comprehensive examination of XAI methods tailored to safety-critical engineering systems, comparing pre-hoc and post-hoc strategies, model-agnostic and model-specific techniques, and hybrid approaches that balance interpretability with predictive performance. We present a structured methodology for evaluating XAI models and report results from case studies involving autonomous driving, industrial robot control, and fault diagnosis. Findings suggest that XAI integration significantly improves diagnostic clarity and operator decision confidence, while challenges remain in real-time scalability and standardized evaluation metrics. The discussion synthesizes comparisons with existing literature and outlines future research directions, including formal verification integration and context-aware human-AI collaborative frameworks.
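To make the post-hoc, model-agnostic category concrete, the sketch below illustrates one such technique: permutation importance, which explains any black-box predictor by measuring how much its error grows when a single input feature is shuffled. The toy fault-diagnosis model, feature names, and data here are illustrative assumptions, not drawn from the paper's case studies.

```python
import random

def toy_fault_score(x):
    # Hypothetical black-box fault-diagnosis model: depends strongly on
    # feature 0, weakly on feature 1, and ignores feature 2 entirely.
    return 3.0 * x[0] + 0.5 * x[1]

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(predict, X, y, n_features, seed=0):
    # Post-hoc, model-agnostic: only calls predict(), never inspects it.
    rng = random.Random(seed)
    baseline = mse(y, [predict(x) for x in X])
    importances = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)  # break the feature's relationship to the target
        X_perm = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
        # Importance = increase in error caused by destroying feature j.
        importances.append(mse(y, [predict(x) for x in X_perm]) - baseline)
    return importances

rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [toy_fault_score(x) for x in X]
imp = permutation_importance(toy_fault_score, X, y, 3)
# Expect imp[0] >> imp[1], and imp[2] ~ 0 (the model ignores feature 2).
```

Such rankings give operators a quick sanity check (e.g., confirming that a diagnosis relies on physically meaningful sensors), though, as the paper notes, scalability to real-time settings remains a challenge.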

Published

2022-04-30

Section

Articles