Towards XAI in the SOC – a user-centric study of explainable alerts with SHAP and LIME

Authors
Eriksson, Håkon Svee
Grov, Gudmund
Published
2023-01-26
Keywords
Machine learning
Permalink
http://hdl.handle.net/20.500.12242/3185
DOI
10.1109/BigData55660.2022.10020248
Collection
Articles
Description
2022 IEEE International Conference on Big Data. IEEE (Institute of Electrical and Electronics Engineers), 2023. ISBN 978-1-6654-8045-1
Abstract
Many studies of the adoption of machine learning (ML) in Security Operation Centres (SOCs) have pointed to a lack of transparency and explanation, and thus of trust, as a barrier to ML adoption, and have suggested eXplainable Artificial Intelligence (XAI) as a possible solution. However, there is a lack of studies addressing the degree to which XAI actually helps SOC analysts. Focusing on two XAI techniques, SHAP and LIME, we have interviewed several SOC analysts to understand how XAI can be used and adapted to explain ML-generated alerts. The results show that XAI can provide valuable insights for the analyst by highlighting features and information deemed important for a given alert. As far as we are aware, we are the first to conduct such a user study of XAI usage in a SOC, and this short paper provides our initial findings.
Index Terms
Interpretability, explainability, artificial intelligence, machine learning, security operation center, intrusion detection system, explainable artificial intelligence, user studies
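For readers unfamiliar with the two techniques, the sketch below shows how SHAP and LIME might be applied to explain a single alert from a classifier. It is a minimal illustration, not code from the paper: the dataset, model, and feature names are invented stand-ins for a SOC alert-classification pipeline, and it assumes the scikit-learn, shap, and lime packages are installed. Both tools produce per-feature attributions, the kind of highlighting the abstract refers to.

```python
# Minimal sketch (not from the paper): explaining one "alert" from a toy
# classifier with SHAP and LIME. All names and data here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["bytes_out", "duration", "dst_port_entropy", "failed_logins"]
X = rng.random((500, len(feature_names)))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # synthetic "malicious" label

model = RandomForestClassifier(random_state=0).fit(X, y)
alert = X[0]  # one ML-generated alert to explain

# SHAP: per-feature contributions to this alert's score
# (shape of the result varies slightly across shap versions).
sv = shap.TreeExplainer(model).shap_values(alert.reshape(1, -1))
print("SHAP values:", np.round(np.squeeze(sv), 3))

# LIME: fit a local linear surrogate around the same alert and
# report (feature condition, weight) pairs.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["benign", "malicious"]
)
lime_exp = lime_explainer.explain_instance(
    alert, model.predict_proba, num_features=4
)
print("LIME explanation:", lime_exp.as_list())
```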