Humanitarian AI, PyTorch Models, and Saliency Maps

Journal Club

30-04-2020 • 27 mins

George's paper this week is Sanity Checks for Saliency Maps. This work takes stock of a group of techniques that produce local interpretability explanations and assesses their trustworthiness through two 'sanity checks'. From this analysis, Adebayo et al. demonstrate that a number of these tools are invariant to the model's weights and could therefore lead a human observer into confirmation bias. Kyle turns to humanitarian AI and asks: how can AI help in a humanitarian crisis? Last but not least, Lan introduces Captum, an extensive interpretability library for PyTorch models.
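
For readers who want to try the kind of saliency attribution discussed in the episode, below is a minimal sketch using Captum's Saliency method on a toy PyTorch model. The model, input shapes, and target class here are illustrative assumptions, not anything specified in the episode.

```python
# Minimal sketch: gradient-based saliency with Captum on a toy model.
# The model architecture and input are placeholders for illustration only.
import torch
import torch.nn as nn
from captum.attr import Saliency

# Toy classifier standing in for any trained PyTorch model.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

inputs = torch.randn(1, 10, requires_grad=True)

# Saliency computes the gradient of the target class score with respect
# to the input, yielding a per-feature importance map.
saliency = Saliency(model)
attributions = saliency.attribute(inputs, target=0)
print(attributions.shape)  # torch.Size([1, 10])
```

The paper's sanity checks would then ask whether these attributions change meaningfully when the model's weights are randomized; if they do not, the map is not actually explaining the trained model.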
