My three thoughts on interpretability: interpretable information, the compliance and risk trade-off, and fraud probability scores

2022/08/03
Image source: https://nam.edu/language-interpretation-and-translation-a-clarification-and-reference-checklist-in-service-of-health-literacy-and-cultural-respect/
This post shares three thoughts that came out of reading an article on interpretability.
When training on real data, the trade-off between interpretability and accuracy is a commonly observed phenomenon.
The research question is: does this trade-off still show up if we train on synthetic data instead?
Three thoughts follow:
  • Thought #1: The need for interpretability, so humans can interpret the information
  • Thought #2: Interpretable models and regulation; black-box models with post hoc explanations carry lower risk
  • Thought #3: Interpretability via a fraud probability score, so you can quickly drill into where the problem is

Thought #1: The need for interpretability, so humans can interpret the information

Fraud detection, as you would expect, surfaces suspicious items that then get investigated.
The investigation process needs "interpretability" of the content. That is, for each investigated operation there must be interpretable information to justify the suspicion and the follow-up action.
Fraud experts need for this task to have access to the right and interpretable information for each investigated operation in order to justify the suspicion and the resulting action.
How is this done in practice? Something to ask around about and think through.
Humans need "model interpretability" to make sense of these things; otherwise it is hard to make a judgment call.
human tasks create the need for model interpretability in order to treat each alert or understand globally the evolution of fraudsters’ behavior.
This is entirely right, but how to do it in practice involves many details.

Thought #2: Interpretable models and regulation; black-box models with post hoc explanations carry lower risk

Intrinsically interpretable models can satisfy policy constraints, but for fraud detection they are often not accurate enough, which creates higher financial risk. Post hoc interpretability methods are therefore the lower-risk approach.
Intrinsically interpretable models, such as ..... , are characterized by their transparency and by a self-explainable structure. They are generally applied for use cases with legal or policy constraints (Zhuang et al., 2020), but they may well be not accurate enough for tasks such as fraud detection, which have high financial stakes. This explains why more accurate black box models look appealing as soon as a post hoc interpretability method is applied to provide explanations on either how they work or on their results.
Indeed, when the risk is too large you have to find ways to reduce it. But it also depends on what the actual risk is in each application.
But post hoc interpretability, at least the model-specific kind, has a major drawback: it does not allow a fair comparison between models.
Among these methods, some, called post-hoc specific, are specific to a type of model. ..... . The main disadvantage of the latter is that their use is restricted to a single type of model and it is therefore complicated to compare performances and explanations of several different models.
I am not sure how post hoc interpretability is concretely carried out (a minimal sketch of one common approach follows at the end of this thought). That question probably has to be answered first, before thinking about whether synthetic data can really sidestep this problem.
In other words, if an interpretable model's performance can be pushed up with synthetic data, then you may not need a stronger model at all, and the risk stays low.
That makes sense too. With a linear model, improving the data may matter far more than changing the model. This is the appeal of the data-centric idea.
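To make the open question above a bit more concrete, here is a minimal sketch of one common, model-agnostic post hoc method: permutation importance. The paper does not say which post hoc method is actually used, so the choice of method, the feature names, and the toy data below are all my own assumptions for illustration.

```python
# Minimal sketch of model-agnostic post hoc interpretability via
# permutation importance. All data and feature names are made up.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical transaction features: amount, hour of day, account age (days).
X = np.column_stack([
    rng.exponential(100, n),
    rng.integers(0, 24, n),
    rng.exponential(365, n),
])
# Toy label: large amounts, late-night hours, and young accounts look "fraudulent".
logit = 0.01 * X[:, 0] - 0.2 * np.abs(X[:, 1] - 3) - 0.005 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)  # stand-in black box

# Post hoc question: how much does shuffling each feature hurt the score?
# The procedure only needs predictions, so it applies to any fitted model.
result = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["amount", "hour", "account_age_days"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because the procedure only needs predictions, the same recipe runs unchanged on a linear model or a black box, which is exactly the kind of fair cross-model comparison that post-hoc-specific methods cannot offer.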

Thought #3: Interpretability via a fraud probability score, so you can quickly drill into where the problem is

Interpretative overlay: based on some suspicious behavioral features, assign a fraud probability score; operations whose score is too high must be investigated.
First, an anti-fraud software, carried by the publisher Bleckwen, is developed for instant cash transfer fraud, characterized by high operation frequencies and limited human involvement. This software is based on the improvement of a black box scoring model (XGBoost), resulting in a fraud probability score, completed with a local interpretative overlay: all operations over a given optimal threshold are suspended and must be investigated.
We do not seem to have this notion of a "fraud probability score" at the moment; it is worth developing further and working out the details.
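As a note to self on what this could look like mechanically, here is a minimal sketch of the scoring-plus-threshold workflow from the quote, assuming the xgboost Python package is available. The features, the toy data, and the 0.7 threshold are all invented; the paper only says the score comes from an improved XGBoost model and that operations above an optimal threshold are suspended.

```python
# Minimal sketch: fraud probability score from an XGBoost classifier,
# with a threshold rule that suspends high-scoring operations.
# Data, features, and threshold are illustrative, not from the paper.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.exponential(100, n),   # amount
    rng.integers(0, 24, n),    # hour of day
    rng.exponential(365, n),   # account age in days
])
logit = 0.01 * X[:, 0] - 0.2 * np.abs(X[:, 1] - 3) - 0.005 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# Score incoming operations: each one gets a fraud probability.
incoming = X[:10]
scores = model.predict_proba(incoming)[:, 1]

# "All operations over a given optimal threshold are suspended and must be
# investigated." The threshold here is arbitrary; in practice it would be
# tuned against investigation capacity and financial risk.
THRESHOLD = 0.7
for i, score in enumerate(scores):
    action = "SUSPEND -> send to investigator" if score > THRESHOLD else "pass"
    print(f"operation {i}: fraud score {score:.2f} -> {action}")
```

The "local interpretative overlay" in the quote would sit on top of this: for each suspended operation, a per-case explanation (SHAP values are one common choice, though the paper does not name the method) tells the investigator which features pushed the score up.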

Reading list for later
  1. Weerts H, Ipenburg W and Pechenizkiy M (2019). Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models. Available at: https://arxiv.org/pdf/1907.03334.pdf