Explainable AI (XAI)
A critical regulatory requirement when deploying AI systems is ensuring that stakeholders and users understand the model(s) they are using. Explainable AI (XAI) is a collection of techniques and processes that help people understand the reasoning behind a machine learning algorithm's output. XAI can help improve and debug models, meet regulatory requirements, and increase trust in AI models.
The four principles of XAI are transparency, interpretability, causality, and fairness. These principles help ensure that AI models are accountable, understandable, and free from harmful biases.
Some methods for XAI include:
Local Interpretable Model-Agnostic Explanations (LIME)
A technique that explains machine learning model predictions by perturbing the input data and observing how the model's output changes.

Counterfactual explanations
A "what-if" analysis that generates a new data point with a different outcome from a given instance. This helps explain why an AI model made a particular prediction.

SHapley Additive exPlanations (SHAP)
An algorithm that explains a prediction by mathematically computing how each feature contributed to it. SHAP values can be visualized to make a machine learning model's output more understandable.
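As a loose sketch of the idea behind SHAP (not the shap library's API), the snippet below computes exact Shapley values for a tiny hand-made linear model by averaging each feature's marginal contribution over all feature orderings. The model, feature names, and baseline values are illustrative assumptions, not from any real dataset:

```python
from itertools import permutations

# Hypothetical scoring model over three named features (illustrative only).
def model(features):
    return 0.5 * features["income"] - 2.0 * features["debt"] + 0.1 * features["age"]

def shapley_values(model, instance, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering. Features not yet 'revealed' keep baseline values."""
    names = list(instance)
    contrib = {n: 0.0 for n in names}
    orders = list(permutations(names))
    for order in orders:
        current = dict(baseline)       # start from the baseline instance
        prev = model(current)
        for name in order:
            current[name] = instance[name]  # reveal this feature's true value
            now = model(current)
            contrib[name] += now - prev     # marginal contribution in this order
            prev = now
    return {n: contrib[n] / len(orders) for n in names}

instance = {"income": 80.0, "debt": 10.0, "age": 40.0}
baseline = {"income": 50.0, "debt": 5.0, "age": 30.0}
phi = shapley_values(model, instance, baseline)
```

Because the toy model is linear, each Shapley value is just the coefficient times the feature's deviation from baseline, and the values sum to the difference between the model's output on the instance and on the baseline (the "additive" property the SHAP name refers to). Production use would rely on the shap library's approximations rather than this exponential enumeration.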
Decision Trees - A Machine Learning Model Approach to XAI
Decision trees are a versatile machine learning model used for both regression and classification problems. They are known for their flexibility, ease of interpretation, and potential for high accuracy and stability. Decision trees operate by splitting data based on specific variables to determine the most accurate path toward a prediction. This process is very much like a flow chart, where data is divided into smaller sections based on conditional control statements or thresholds. Below is a good example that aims to determine whether a product you are engaging with, or a product you intend to develop, will contain (or require) AI.
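The flow-chart style of decision-tree reasoning described above can be sketched as a few nested conditionals. The questions and field names below are hypothetical, loosely inspired by the "does this product involve AI?" idea rather than taken from the cited chart:

```python
# Hypothetical flow-chart walk: each conditional is one split node that
# routes a product description down a branch toward a yes/no leaf.
def requires_ai(product):
    """product: dict of booleans describing the product's behavior
    (illustrative schema, not a standard)."""
    if not product["learns_from_data"]:
        return False  # purely rule-based systems fall out at the first split
    if product["makes_predictions"]:
        return True   # learning + prediction reaches a 'yes' leaf
    return product["adapts_over_time"]  # final split on adaptive behavior

# Example walk through the tree: a learning, predictive product.
verdict = requires_ai({"learns_from_data": True,
                       "makes_predictions": True,
                       "adapts_over_time": False})
```

In a real workflow these splits and thresholds would be learned from data (e.g. with scikit-learn's DecisionTreeClassifier) rather than hand-written, but the interpretability benefit is the same: every prediction can be read off as a path of simple conditions.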
References:
"Explainable AI: Working, Techniques & Benefits" - apptunix.com/blog, May 2024
"Is it AI?" MIT algorithm flow chart - Karen Hao, 2018