
Improving the explain-any-concept by introducing nonlinearity to the trainable surrogate model

Title Improving the explain-any-concept by introducing nonlinearity to the trainable surrogate model
Author Ozer, Sedat; Zaval, Mounes
Publication Date 2024-01-01
Publisher IEEE
Subject SAM, Explain-Any-Concept, XAI, Explainability
Type Document
Language English
Digital Yes
Manuscript No
Library Özyeğin Üniversitesi
Accession Number 979-835038896-1
Record Number 5ae8b083-3567-49b7-b77b-e6fa4ab030b8
Location Computer Science
Date 2024-01-01
Abstract In the evolving field of Explainable AI (XAI), interpreting the decisions of deep neural networks (DNNs) in computer vision tasks is an important process. While pixel-based XAI methods focus on identifying significant pixels, existing concept-based XAI methods rely on pre-defined or human-annotated concepts. The recently proposed Segment Anything Model (SAM) marked a significant step forward by preparing automatic concept sets via comprehensive instance segmentation. Building upon this, the Explain Any Concept (EAC) model emerged as a flexible method for explaining DNN decisions. The EAC model uses a surrogate model with one trainable linear layer to simulate the target model. In this paper, by introducing an additional nonlinear layer to the original surrogate model, we show that we can improve the performance of the EAC model. We compare our proposed approach to the original EAC model and report improvements obtained on both the ImageNet and MS COCO datasets.
DOI 10.1109/SIU61531.2024.10600959
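The abstract describes replacing EAC's surrogate, a single trainable linear layer, with a version that adds a nonlinear layer. The sketch below is an illustrative reconstruction, not the authors' code: the hidden width, the ReLU choice, and all names are assumptions, since the abstract only states that a nonlinear layer is introduced.

```python
import numpy as np

rng = np.random.default_rng(0)


def linear_surrogate(x, W, b):
    # Original EAC surrogate: one trainable linear layer mapping a
    # binary concept-mask vector to the target model's class logits.
    return x @ W + b


def nonlinear_surrogate(x, W1, b1, W2, b2):
    # Proposed variant (sketch): a hidden layer with a ReLU
    # nonlinearity inserted before the output layer.
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU activation (assumed)
    return h @ W2 + b2


# Hypothetical sizes: 8 SAM-derived concepts, 16 hidden units, 10 classes.
n_concepts, n_hidden, n_classes = 8, 16, 10

# A concept mask: 1 keeps a segmented concept, 0 ablates it.
x = rng.integers(0, 2, size=(1, n_concepts)).astype(float)

W = rng.normal(size=(n_concepts, n_classes))
b = np.zeros(n_classes)
W1 = rng.normal(size=(n_concepts, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_classes))
b2 = np.zeros(n_classes)

print(linear_surrogate(x, W, b).shape)               # (1, 10)
print(nonlinear_surrogate(x, W1, b1, W2, b2).shape)  # (1, 10)
```

Both surrogates map a concept mask to per-class scores; the nonlinear version can fit interactions between concepts that a single linear layer cannot, which is the mechanism the paper credits for the reported improvements.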
