Novel techniques extract more accurate data from images degraded by environmental factors


Computer vision technology is increasingly used in areas such as automated surveillance systems, self-driving cars, facial recognition, healthcare and social distancing tools. Users need accurate and reliable visual information to fully harness the benefits of video analytics applications, but the quality of video data is often degraded by environmental factors such as rain, night-time conditions or crowds, where multiple people overlap in a scene.

Using computer vision and deep learning, a team of researchers led by Yale-NUS College Associate Professor of Science (Computer Science) Robby Tan, who is also from the National University of Singapore's (NUS) Faculty of Engineering, has developed novel approaches that address low-level vision problems in videos caused by rain and night-time conditions, and that improve the accuracy of 3D human pose estimation in videos.
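The article does not describe the team's actual architectures, so purely as an illustration of the general idea behind video deraining, the sketch below shows a common generic strategy: a small network predicts the rain-streak layer and subtracts it from the degraded frame. It assumes PyTorch; the class name ResidualDerainNet, the layer sizes and all parameters are hypothetical and are not the researchers' method.

```python
import torch
import torch.nn as nn

class ResidualDerainNet(nn.Module):
    """Toy residual deraining network: estimate the rain layer, subtract it from the input."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        # The network outputs an estimate of the rain streaks;
        # the restored frame is the rainy input minus that estimate.
        rain_residual = self.body(rainy)
        return rainy - rain_residual

# Example usage: restore a single 256x256 RGB frame (batch of 1).
model = ResidualDerainNet()
frame = torch.rand(1, 3, 256, 256)  # stand-in for a rainy video frame with values in [0, 1]
restored = model(frame)
print(restored.shape)  # torch.Size([1, 3, 256, 256])
```

In practice such models are trained on pairs of rainy and clean frames, and video-based variants additionally exploit information from neighbouring frames; this sketch only conveys the residual-prediction idea.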




