As suggested by Reference (Langone et al., 2020), model explainability represents one of the main issues concerning the adoption of data-driven algorithms in industrial environments. More importantly, for applications in safety-critical domains, providing explanations to stakeholders of AI systems has become an ethical and regulatory requirement (Voigt and Von dem Bussche, 2017; Commission, 2020).

Since the seminal work in (Knorr and Ng, 1998), anomaly detection has been well studied and there exists a plethora of comprehensive surveys and reviews on it, including but not limited to References (Markou and Singh, 2003a, b; Agyemang et al., 2006; Patcha and Park, 2007; Chandola et al., 2009; Zimek et al., 2012; Aggarwal, 2015; Chalapathy and Chawla, 2019; Boukerche et al., 2020; Pang et al., 2021b). In contrast, we only found a handful of surveys (Sejr and Schneider-Kamp, 2021; Panjei et al., 2022; Yepmo et al., 2022) about the explainability of anomaly detection methods. However, after a thorough survey of academic publications on explainable anomaly detection, we found that existing surveys are either outdated, have missed some important work, or their proposed taxonomies are relatively coarse and therefore unable to characterize the increasingly rich set of explainable anomaly detection techniques available in the literature.

What is Explainable Anomaly Detection?

According to Reference (Doshi-Velez and Kim, 2017), interpretability or explainability is defined as the ability to explain or provide meaning to humans in understandable terms. Moreover, Reference (Arrieta et al., 2020) defines Explainable Artificial Intelligence (XAI) as follows: “Given an audience, an explainable Artificial Intelligence is one that produces details or reasons to make its functioning clear or easy to understand.” Further, Reference (Murdoch et al., 2019) defines interpretable or eXplainable Machine Learning (XML) as “the extraction of relevant knowledge from a machine learning model concerning relationships either contained in data or learned by the model”, where the knowledge is considered relevant if it provides insight into the problem faced by the target audience.

A commonly accepted definition by Reference (Hawkins, 1980) is that “an outlier is an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism”. As this definition is informal, each specific anomaly detection model has its own definition of an anomaly, either explicitly or implicitly. For example, KNN (Ramaswamy et al., 2000) defines objects with ‘far’ k-nearest neighbours as anomalies, LOF (Breunig et al., 2000) treats objects with a low local density as anomalies, and Isolation Forest (Liu et al., 2008) considers ‘easily isolated’ objects as anomalies. From this point of view, there is no universal definition of an anomaly.

In general, the Oracle-Definition is given based on domain knowledge, which is application-specific. For example, for a credit card fraud detection system, the end-users aim to detect fraudulent behaviour, which is defined as “obtaining services/goods and/or money by unethical means” and includes bankruptcy fraud, theft fraud, application fraud and behavioural fraud (Delamaire et al., 2009). Therefore, the Oracle-Definition is “behaviour that aims to obtain services/goods and/or money by unethical means”. However, a given credit card fraud system might only detect anomalous behaviours such as unprecedented high payments and/or payments at a never-before-seen location. Hence, the Detection-Definition is “unprecedented high payments and/or payments at a never-before-seen location”, which actually corresponds to theft fraud. Importantly, this Detection-Definition can differ from the Oracle-Definition, which may lead to problems: for example, an anomaly detector may miss relevant anomalies while detecting ‘anomalies’ that are uninteresting to end-users. Moreover, for an identified anomalous payment, the anomaly explanation method could generate the explanation “the payment is flagged as anomalous because it happened at midnight”, which follows from the Explanation-Definition. Depending on the technique used to explain an anomaly, the Detection-Definition and the Explanation-Definition can also differ, especially when the explanation approach does not reflect the decision-making process behind the anomaly detection model. Clearly, the Oracle-Definition, the Detection-Definition, and the Explanation-Definition can be different from each other.
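To make one of the implicit anomaly definitions mentioned above concrete, the following is a minimal sketch of the kNN-distance notion from Ramaswamy et al. (2000): an object's anomaly score is the distance to its k-th nearest neighbour, so objects that are ‘far’ from their neighbours score highest. The 1-D toy dataset and the choice of k are illustrative assumptions, not taken from any of the surveyed works.

```python
def knn_anomaly_scores(points, k=2):
    """Score each point by the distance to its k-th nearest neighbour
    (Ramaswamy et al., 2000): larger score = more anomalous."""
    scores = []
    for i, p in enumerate(points):
        # Distances from p to every other point, smallest first.
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])  # distance to the k-th nearest neighbour
    return scores

# Toy example: most 'payments' cluster around 10-12; 100 is isolated.
payments = [10, 11, 12, 10.5, 11.5, 100]
scores = knn_anomaly_scores(payments, k=2)
top = max(range(len(payments)), key=lambda i: scores[i])
print(top, scores[top])  # the isolated payment gets by far the highest score
```

LOF and Isolation Forest would rank this toy example similarly, but via local density and isolation depth respectively; the point of the sketch is that each method encodes its own Detection-Definition of what counts as anomalous.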