Concept whitening: A strategy to improve the interpretability of image recognition models


Over the past decade or so, deep neural networks have achieved remarkable results on a variety of tasks, including image recognition. Despite these successes, such networks are highly complex, which makes it difficult, and sometimes impossible, to interpret what they have learned or to trace the reasoning behind their predictions. This lack of interpretability limits how much users can trust and rely on deep neural networks.
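Although this excerpt does not walk through the method itself, the "whitening" in concept whitening refers to a whitening transform applied to a layer's activations: features are decorrelated and scaled to unit variance, after which an orthogonal rotation can align individual axes with human-understandable concepts. As an illustrative sketch only (the function name `zca_whiten` and the simulated data are my own, not from the article), the whitening step might look like this in NumPy:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Whiten a (n_samples, n_features) activation matrix.

    After whitening, features have zero mean, unit variance, and zero
    cross-correlation (identity covariance). This is only the whitening
    half of concept whitening; the full method additionally learns an
    orthogonal rotation aligning each whitened axis with a concept.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    # Eigendecomposition of the (symmetric) covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    # ZCA whitening matrix: W = U diag(1/sqrt(lambda + eps)) U^T
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W

rng = np.random.default_rng(0)
# Simulated correlated activations from a hypothetical layer
A = rng.normal(size=(1000, 2)) @ np.array([[2.0, 1.0], [0.0, 0.5]])
Z = zca_whiten(A)
# The whitened covariance is (approximately) the identity matrix
print(np.allclose(np.cov(Z, rowvar=False), np.eye(2), atol=1e-2))
```

In the published approach, a layer of this kind replaces batch normalization inside the network, so interpretability is built in during training rather than probed after the fact.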




