For this task, fraud experts need access to relevant and interpretable information for each investigated operation in order to justify the suspicion and the resulting action.
How is this done in practice? Worth asking around and thinking through.
Humans need "model interpretability" to understand these things; otherwise it is hard to reach a judgment.
Human tasks create the need for model interpretability, whether to treat each alert or to understand globally how fraudsters' behavior evolves.
Intrinsically interpretable models, such as ....., are characterized by their transparency and self-explainable structure. They are generally applied in use cases with legal or policy constraints (Zhuang et al., 2020), but they may not be accurate enough for tasks with high financial stakes, such as fraud detection. This explains why more accurate black-box models become appealing as soon as a post hoc interpretability method is applied to explain either how they work or their results.
Indeed, when the stakes are high you have to find ways to reduce risk, but it also depends on what the risk actually is in each application.
Post hoc interpretability has a major drawback, though: it does not allow fair comparison between models.
Among these methods, some, called post hoc specific, are tied to a particular type of model. ..... . The main disadvantage of the latter is that their use is restricted to a single type of model, which makes it complicated to compare the performance and explanations of several different models.
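One way to see why model-agnostic methods sidestep this limitation: a minimal, pure-Python sketch of permutation importance (the function name, toy model, and data here are illustrative assumptions, not from the text). It only needs a `predict` callable, so the exact same explanation can be computed for two different models and the results compared side by side.

```python
import random

def permutation_importance(predict, X, y, feature_idx, n_repeats=10, seed=0):
    """Model-agnostic importance: the drop in accuracy when one feature
    is shuffled. Works for any model exposed only through `predict`,
    so identical explanations can be computed across model types."""
    rng = random.Random(seed)
    base_acc = sum(predict(row) == label for row, label in zip(X, y)) / len(y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        acc = sum(predict(row) == label for row, label in zip(X_perm, y)) / len(y)
        drops.append(base_acc - acc)
    return sum(drops) / len(drops)

# Toy "model": flags an operation as fraud when the amount (feature 0) > 100.
model = lambda row: row[0] > 100
X = [[50, 1], [200, 0], [120, 1], [30, 0]]
y = [False, True, True, False]

print(permutation_importance(model, X, y, feature_idx=0))  # positive: feature 0 drives predictions
print(permutation_importance(model, X, y, feature_idx=1))  # exactly 0.0: the model never reads feature 1
```

Because nothing here is specific to the model's internals, the same call could be made against an XGBoost classifier and, say, a logistic regression, yielding directly comparable importance scores.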
First, anti-fraud software, developed by the publisher Bleckwen, targets instant cash transfer fraud, which is characterized by high operation frequencies and limited human involvement. The software is built on a black-box scoring model (XGBoost) that outputs a fraud probability score, complemented with a local interpretative overlay: every operation above a given optimal threshold is suspended and must be investigated.
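A minimal sketch of the score-and-threshold triage described above. The `fraud_score` function here is a hypothetical stand-in (not Bleckwen's actual model); the real system would instead use the predicted probability from a trained XGBoost classifier.

```python
from dataclasses import dataclass

def fraud_score(operation: dict) -> float:
    # Hypothetical stand-in scorer: larger transfers look riskier.
    # In the real pipeline this would be the XGBoost fraud probability.
    return min(1.0, operation["amount"] / 10_000)

@dataclass
class Decision:
    score: float
    suspended: bool

def triage(operation: dict, threshold: float = 0.5) -> Decision:
    """Every operation scoring above the threshold is suspended
    and routed to a human investigator; the rest pass through."""
    s = fraud_score(operation)
    return Decision(score=s, suspended=s > threshold)

print(triage({"amount": 8_000}))  # suspended for investigation
print(triage({"amount": 1_000}))  # passes through
```

The threshold is the single tuning knob of this pipeline; the text calls it "optimal", presumably chosen to balance investigation workload against missed fraud.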
We do not yet seem to have this notion of a "fraud probability score"; worth developing the details further.
Reading list for later:
Weerts H, Ipenburg W and Pechenizkiy M (2019) Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models. Available at https://arxiv.org/pdf/1907.03334.pdf