In the first episode of our Responsible Tech Series, in the lead-up to the Australian Federal Election, we speak with Edward Santow, Industry Professor for Responsible Technology at the University of Technology Sydney. Prior to his current role, Ed was the Australian Human Rights Commissioner. During his tenure, he led the world’s largest public consultation on human rights and technology and published a report with recommendations for the development of responsible tech.
In this episode, we talk with Ed about his early experiences working as a lawyer in community legal services, where he saw first-hand the impact of tech applications gone wrong in policing. We discuss the pivotal moment when public attitudes shifted from complacency to genuine concern about the responsible use of data and tech: the revelation that Cambridge Analytica had used personal data belonging to millions of Facebook users, collected without their consent, to provide analytical assistance to Donald Trump’s 2016 presidential campaign.
Ed outlines the three key vectors for responsible tech: law, training, and design. We explore regulation and legislation as they currently exist, and Ed’s view that, through enforcement of existing law, “80% of problems would go away.” Ed presents his recommendation that an impact assessment be conducted before AI is used for automated decision-making. We discuss the public’s future expectations and the challenge facing policymakers.
Ed highlights the application of AI in today’s business and public-sector contexts, noting that 85% of AI projects fail, and explains why this is the case. We discuss facial recognition technology and its risks, and the need to build data capabilities across society in the data and digital age.
https://profiles.uts.edu.au/Edward.Santow
seerdata.ai
Hosted on Acast. See acast.com/privacy for more information.