A neural network learns when it should not be trusted


Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid decision-making. But how do we know when their predictions are correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.
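The idea of a network knowing when it should not be trusted can be illustrated with a simple sketch. This is not the authors' method, just a generic uncertainty heuristic: train several models on resampled data, and treat their disagreement on a new input as a warning signal. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) + noise, observed only on [-2, 2].
x_train = rng.uniform(-2, 2, size=40)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=40)

def fit_poly(x, y, degree, seed):
    """Fit one polynomial model on a bootstrap resample of the data."""
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(x), size=len(x))
    return np.polyfit(x[idx], y[idx], degree)

# An "ensemble": several models, each trained on a different resample.
ensemble = [fit_poly(x_train, y_train, degree=5, seed=s) for s in range(10)]

def predict_with_uncertainty(x_query):
    """Return the ensemble's mean prediction and its disagreement (std)."""
    preds = np.array([np.polyval(coeffs, x_query) for coeffs in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

# One query inside the training range, one far outside it.
x_query = np.array([0.0, 4.0])
mean, std = predict_with_uncertainty(x_query)
# Disagreement is small near the data and large far from it,
# flagging the out-of-range prediction as untrustworthy.
```

The same principle underlies more sophisticated uncertainty-aware networks: low confidence on inputs unlike anything seen in training is exactly the signal a safety-critical system can use to defer to a human.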




