Virtual Reality Sexual Assault, AI Risks to Women & Bias

The Foil

07-03-2022 • 41 mins

On International Women’s Day we celebrate by speaking with Dr Catriona Wallace, a mother and a global leader in AI ethics. She sits on numerous boards and educates leaders around the world on mitigating unintended harms from AI.


Catriona discusses the recent emergence of metaverses: immersive virtual worlds where users can interact in new and creative ways. She relates a recent sexual assault incident in which a woman, Nina Jane Patel, was virtually assaulted within Horizon, a metaverse created by Meta, and another incident in which the owner of a virtual residence found that their virtual dwelling was being squatted in, with no clear recourse to justice.


We discuss the risks of bias in AI algorithms and how women have historically been undervalued by AI systems tasked with recommending job candidates for Amazon or estimating customer creditworthiness for Goldman Sachs and Apple. Catriona argues that this bias stems from inadequate representation of women in the data used to train AI systems and from the under-representation of women in the field of Data Science. She observes that 85 million jobs will be replaced by AI systems, and that 90% of these jobs are held by women and minorities.
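
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. The data is synthetic and the setup is illustrative, not a reconstruction of the Amazon or Goldman Sachs systems: both groups are equally skilled, but a proxy feature shaped by past (mostly male) hires scores women lower, and the model inherits that skew.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "hiring" data: 90% of the training examples come from men,
# 10% from women -- the under-representation Catriona describes.
n_men, n_women = 900, 100
is_woman = np.array([0] * n_men + [1] * n_women)

# Both groups are equally skilled, but a proxy feature (think: keyword
# overlap with past hires) scores women lower, because past hires were
# mostly men. Ground-truth hiring outcomes depend on skill alone.
skill = rng.normal(0.0, 1.0, n_men + n_women)
proxy = skill - 0.8 * is_woman + rng.normal(0.0, 0.3, n_men + n_women)
hired = (skill > 0).astype(int)

# The model never sees skill directly -- only the biased proxy.
model = LogisticRegression().fit(proxy.reshape(-1, 1), hired)
recommended = model.predict(proxy.reshape(-1, 1))

for flag, label in [(0, "men"), (1, "women")]:
    rate = recommended[is_woman == flag].mean()
    print(f"{label}: recommended at rate {rate:.2f}")
# Women are recommended far less often despite identical skill,
# purely because the training signal encodes past bias.
```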


Catriona argues that responsibility for AI-enhanced real-world decisions should remain with business owners, not the technical teams who develop the AI systems. She relates her experience as Executive Director of the Gradient Institute, training boards and executives who have very little understanding of AI.


Catriona describes how it is predominantly young men who are creating datasets, for example by manually labelling images, and how this is one way bias is introduced into AI systems. She talks about the Gradient Institute's work training Data Scientists to code ethically and teaching them about the tools available for assessing whether their work is having unintended consequences. She advocates for regular assessments of AI systems by external assessors to give Data Scientists feedback on how they can be more responsible.
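
As a sketch of what such assessment tooling can look like, here is a short Python example using Fairlearn, an open-source fairness-assessment library. Fairlearn is an illustrative choice of ours, not a tool named in the episode, and the audit inputs below are placeholders.

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(1)

# Placeholder audit inputs: a real assessment would use a deployed
# model's actual decisions and the corresponding demographic records.
y_true = rng.integers(0, 2, 500)              # actual outcomes
y_pred = rng.integers(0, 2, 500)              # model's recommendations
gender = rng.choice(["woman", "man"], 500)    # protected attribute

# Selection rate per group: the fraction of each group the model approves.
audit = MetricFrame(
    metrics=selection_rate,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(audit.by_group)      # approval rate for each gender
print(audit.difference())  # gap between groups (demographic parity gap)
```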


Catriona shares the recent release of Australia’s first Responsible AI Index by Fifth Quadrant, Ethical AI Advisory, and the Gradient Institute. The research found that only 8% of organisations had any type of Responsible AI maturity. Organisations can measure their own Responsible AI maturity using the Responsible AI Self-Assessment Tool (fifthquadrant.com.au).


Catriona observes that many of the entry-level, administrative, and customer service jobs that will be automated by AI systems in the coming years are typically held by women and minorities, and that Australia needs another 160,000 Data Scientists to keep pace with global industry.


www.seerdata.ai

www.thefoil.ai


