London Futurists

Anticipating and managing exponential impact - hosts David Wood and Calum Chace

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.

He also wrote Pandora's Brain and Pandora’s Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.

From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.

Technology

Episodes

Progress with ending aging, with Aubrey de Grey
4d ago
Our topic in this episode is progress with ending aging. Our guest is the person who literally wrote the book on that subject: “Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime”. He is Aubrey de Grey, who describes himself in his Twitter biography as “spearheading the global crusade to defeat aging”.

In pursuit of that objective, Aubrey co-founded the Methuselah Foundation in 2003, the SENS Research Foundation in 2009, and the LEV Foundation, that is the Longevity Escape Velocity Foundation, in 2022, where he serves as President and Chief Science Officer.

Full disclosure: David also has a role on the executive management team of the LEV Foundation, but for this recording he was wearing his hat as co-host of the London Futurists Podcast.

The conversation opens with this question: "When people are asked about ending aging, they often say the idea sounds nice, but they see no evidence for any actual progress toward ending aging in humans. They say that they’ve heard talk about that subject for years, or even decades, but wonder when all that talk is going to result in people actually living significantly longer. How do you respond?"

Selected follow-ups:
Aubrey de Grey on X (Twitter)
The book Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime
The Longevity Escape Velocity (LEV) Foundation
The SENS paradigm for ending aging, contrasted with the "Hallmarks of Aging" - a 2023 article in Rejuvenation Research
Progress reports from the current RMR project
The plan for RMR 2
The RAID (Rodent Aging Interventions Database) analysis that guided the design of RMR 1 and 2
Longevity Summit Dublin (LSD): 13-16 June 2024
Unblocking the Brain’s Drains to Fight Alzheimer’s - Doug Ethell of Leucadia Therapeutics at LSD 2023 (explains the possible role of the cribriform plate)
Targeting Telomeres to Clear Cancer – Vlad Vitoc of MAIA Biotechnology at LSD 2023
How to Run a Lifespan Study of 1,000 Mice - Danique Wortel of Ichor Life Sciences at LSD 2023
XPrize Healthspan
The Dublin Longevity Declaration ("DLD")

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

What’s it like to be an AI, with Anil Seth
13-04-2024
As artificial intelligence models become increasingly powerful, they both raise - and might help to answer - some very important questions about one of the most intriguing, fascinating aspects of our lives, namely consciousness.

It is possible that in the coming years or decades, we will create conscious machines. If we do so without realising it, we might end up enslaving them, torturing them, and killing them over and over again. This is known as mind crime, and we must avoid it.

It is also possible that very powerful AI systems will enable us to understand what our consciousness is, how it arises, and even how to manage it – if we want to do that.

Our guest today is the ideal guide to help us explore the knotty issue of consciousness. Anil Seth is professor of Cognitive and Computational Neuroscience at the University of Sussex. He is amongst the most cited scholars on the topics of neuroscience and cognitive science globally, and a regular contributor to newspapers and TV programmes. His most recent book was published in 2021, and is called “Being You – a new science of consciousness”.

The first question sets the scene for the conversation that follows: "In your book, you conclude that consciousness may well only occur in living creatures. You say 'it is life, rather than information processing, that breathes the fire into the equations.' What made you conclude that?"

Selected follow-ups:
Anil Seth's website
Books by Anil Seth, including Being You
Consciousness in humans and other things - presentation by Anil Seth at The Royal Society, March 2024
Is consciousness more like chess or the weather? - an interview with Anil Seth
Autopoiesis - Wikipedia article about the concept introduced by Humberto Maturana and Francisco Varela
Akinetic mutism, Wikipedia
Cerebral organoid (Brain organoid), Wikipedia
AI Scientists: Safe and Useful AI? - by Yoshua Bengio, on AIs as oracles
Ex Machina (2014 film, written and directed by Alex Garland)
The Conscious Electromagnetic Information (Cemi) Field Theory by Johnjoe McFadden
The Electromagnetic Field Theory of Consciousness by Susan Pockett

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Regulating Big Tech, with Adam Kovacevich
04-04-2024
Our guest in this episode is Adam Kovacevich. Adam is the Founder and CEO of the Chamber of Progress, which describes itself as a center-left tech industry policy coalition that works to ensure that all citizens benefit from technological leaps, and that the tech industry operates responsibly and fairly.

Adam has had a front row seat for more than 20 years in the tech industry’s political maturation, and he advises companies on navigating the challenges of political regulation. For example, Adam spent 12 years at Google, where he led a 15-person policy strategy and external affairs team. In that role, he drove the company’s U.S. public policy campaigns on topics such as privacy, security, antitrust, intellectual property, and taxation.

We had two reasons to want to talk with Adam. First, to understand the kerfuffle that has arisen from the lawsuit launched against Apple by the U.S. Department of Justice and sixteen state Attorneys General. And second, to look ahead to possible future interactions between tech industry regulators and the industry itself, especially as concerns about Artificial Intelligence rise in the public mind.

Selected follow-ups:
Adam Kovacevich's website
The Chamber of Progress
Gartner Hype Cycle
"Justice Department Sues Apple for Monopolizing Smartphone Markets"
The Age of Surveillance Capitalism by Shoshana Zuboff
Epic Games v. Apple (Wikipedia)
"AirTags Are the Best Thing to Happen to Tile" (Wired)
Adobe Firefly
The EU AI Act

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The case for brain preservation, with Kenneth Hayworth
29-03-2024
In this episode, we are delving into the fascinating topic of mind uploading. We suspect this idea is about to explode into public consciousness, because Nick Bostrom has a new book out shortly called “Deep Utopia”, which addresses what happens if superintelligence arrives and everything goes well. It was Bostrom’s last book, “Superintelligence”, that ignited the great robot freak-out of 2015.

Our guest is Dr Kenneth Hayworth, a Senior Scientist at the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia. Janelia is probably America’s leading research institution in the field of connectomics – the precise mapping of the neurons in the human brain.

Kenneth is a co-inventor of a process for imaging neural circuits at the nanometre scale, and he has designed and built several automated machines to do it. He is currently researching ways to extend Focused Ion Beam Scanning Electron Microscopy imaging of brain tissue to encompass much larger volumes than are currently possible.

Along with John Smart, Kenneth co-founded the Brain Preservation Foundation in 2010, a non-profit organization with the goal of promoting research in the field of whole brain preservation.

During the conversation, Kenneth made a strong case for putting more focus on preserving human brains via a process known as aldehyde fixation, as a way of enabling people to be uploaded in due course into new bodies. He also issued a call for action by members of the global cryonics community.

Selected follow-ups:
Kenneth Hayworth
The Brain Preservation Foundation
An essay by Kenneth Hayworth: Killed by Bad Philosophy
The short story Psychological Counseling for First-time Teletransport Users (PDF)
21st Century Medicine
Janelia Research Campus

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The Political Singularity and a Worthy Successor, with Daniel Faggella
15-03-2024
Calum and David recently attended the BGI24 event in Panama City, that is, the Beneficial General Intelligence summit and unconference. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.

Something that featured in his talk was a 3 by 3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we’ll be discussing in this episode, one of the dimensions of this matrix is the kind of end goal future that people desire, as intelligent systems become ever more powerful. And the other dimension is the kind of methods people want to use to bring about that desired future.

So, if anyone thinks there are only two options in play regarding the future of AI, for example “accelerationists” versus “doomers”, to use two names that are often thrown around these days, they’re actually missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem to be increasingly beyond our understanding and beyond our control, the more options we can consider, the better.

The topics that featured in this conversation included:
"The Political Singularity" - when the general public realize that one political question has become more important than all the others, namely should humanity be creating an AI with godlike powers, and if so, under what conditions
Criteria to judge whether a forthcoming superintelligent AI is a "worthy successor" to humanity.

Selected follow-ups:
The website of Dan Faggella
The BGI24 conference, lead organiser Ben Goertzel of SingularityNET
The Intelligence Trajectory Political Matrix
The Political Singularity
A Worthy Successor - the purpose of AGI
Roko Mijic on Twitter/X
The novel Diaspora by Greg Egan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The Longevity Singularity, with Daniel Ives
07-03-2024
In the wide and complex subject of biological aging, one particular kind of biological aging has been receiving a great deal of attention in recent years. That’s the field of epigenetic aging, where parts of the packaging or covering, as we might call it, of the DNA in all of our cells, alters over time, changing which genes are turned on and turned off, with increasingly damaging consequences.

What’s made this field take off is the discovery that this epigenetic aging can be reversed, via an increasing number of techniques. Moreover, there is some evidence that this reversal gives a new lease of life to the organism.

To discuss this topic and the opportunities arising, our guest in this episode is Daniel Ives, the CEO of Shift Bioscience. As you’ll hear, Shift Bioscience is a company that is carrying out some very promising research into this field of epigenetic aging. Daniel has a PhD from the University of Cambridge, and co-founded Shift Bioscience in 2017.

The conversation highlighted a way of using AI transformer models and a graph neural network to dramatically speed up the exploration of which proteins can play the best role in reversing epigenetic aging. It also considered which other types of aging will likely need different sorts of treatments, beyond these proteins. Finally, the conversation turned to a potential fast transformation of public attitudes toward the possibility and desirability of comprehensively treating aging - a transformation called "all hell breaks loose" by Daniel, and "the Longevity Singularity" by Calum.

Selected follow-ups:
Shift Bioscience
Aubrey de Grey's TED talk "A roadmap to end aging"
Epigenetic clocks (Wikipedia)
Shinya Yamanaka (Wikipedia)
scGPT - bioRxiv preprint by Bo Wang and colleagues

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Where are all the Dyson spheres? with Paul Sutter
21-02-2024
In this episode, we look further into the future than usual. We explore what humanity might get up to in a thousand years or more: surrounding whole stars with energy-harvesting panels, sending easily detectable messages across space which will last until the stars die out.

Our guide to these fascinating thought experiments is Paul M. Sutter, a NASA advisor and theoretical cosmologist at the Institute for Advanced Computational Science at Stony Brook University in New York, and a visiting professor at Barnard College, Columbia University, also in New York. He is an award-winning science communicator and TV host.

The conversation reviews arguments for why intelligent life forms might want to capture more energy than strikes a single planet, as well as some practical difficulties that would complicate such a task. It also considers how we might recognise evidence of megastructures created by alien civilisations, and finishes with a wider exploration of the role of science and science communication in human society. (A rough sketch of the arithmetic behind that energy gap appears after this episode's links.)

Selected follow-ups:
Paul M. Sutter - website
"Would building a Dyson sphere be worth it? We ran the numbers" - Ars Technica
Forthcoming book - Rescuing Science: Restoring Trust in an Age of Doubt
"The Kardashev scale: Classifying alien civilizations" - Space.com
"Modified Newtonian dynamics" as a possible alternative to the theory of dark matter
The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory - 1999 book by Brian Greene
The Demon-Haunted World: Science as a Candle in the Dark - 1995 book by Carl Sagan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

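To make the scale of that argument concrete, here is a minimal back-of-the-envelope sketch in Python - our illustration, not something from the episode - using standard published values for the Sun's luminosity, Earth's radius, and the Earth-Sun distance. It estimates what fraction of the Sun's output a single planet intercepts, and hence roughly how much more energy a full Dyson sphere could in principle capture.

```python
import math

# Standard reference values (approximate)
SOLAR_LUMINOSITY_W = 3.828e26    # total power radiated by the Sun, in watts
EARTH_RADIUS_M = 6.371e6         # mean radius of Earth, in metres
EARTH_SUN_DISTANCE_M = 1.496e11  # one astronomical unit, in metres

# Earth presents a disc of area pi*R^2 to sunlight that has spread
# over a sphere of area 4*pi*d^2 by the time it reaches Earth's orbit.
intercepted_fraction = (math.pi * EARTH_RADIUS_M**2) / (4 * math.pi * EARTH_SUN_DISTANCE_M**2)
intercepted_power_w = SOLAR_LUMINOSITY_W * intercepted_fraction

print(f"Fraction of solar output hitting Earth: {intercepted_fraction:.2e}")  # ~4.5e-10
print(f"Power intercepted by Earth: {intercepted_power_w:.2e} W")             # ~1.7e17 W
print(f"A full Dyson sphere captures roughly {1 / intercepted_fraction:.1e} times more")
```

The punchline is that a planet intercepts only around one two-billionth of its star's output - the gap that the episode's hypothetical megastructure builders would be trying to close.
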
Provably safe AGI, with Steve Omohundro
13-02-2024
AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?

Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity.

Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms.

Among many other roles which are too numerous to mention here, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.

Selected follow-ups:
Steve Omohundro: Innovative ideas for a better world
Metaculus forecast for the date of weak AGI
"The Basic AI Drives" (PDF, 2008)
TED Talk by Max Tegmark: How to Keep AI Under Control
Apple Secure Enclave
Meta Research: Teaching AI advanced mathematical reasoning
DeepMind AlphaGeometry
Microsoft Lean theorem prover
Terence Tao (Wikipedia)
NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
The team at MIRI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Robots and the people who love them, with Eve Herold
06-02-2024
In this episode, our subject is the rise of the robots – not the military kind of robots, or the automated manufacturing kind that increasingly fill factories, but social robots. These are robots that could take roles such as nannies, friends, therapists, caregivers, and lovers. They are the subject of the important new book Robots and the People Who Love Them, written by our guest today, Eve Herold.

Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She has written extensively about issues at the crossroads of science and society, including stem cell research and regenerative medicine, aging and longevity, medical implants, transhumanism, robotics and AI, and bioethical issues in leading-edge medicine – all of which are issues that Calum and David like to feature on this show.

Eve currently serves as Director of Policy Research and Education for the Healthspan Action Coalition. Her previous books include Stem Cell Wars and Beyond Human. She is the recipient of the 2019 Arlene Eisenberg Award from the American Society of Journalists and Authors.

Selected follow-ups:
Eve Herold: What lies ahead for the human race
Eve Herold on Macmillan Publishers
The book Robots and the People Who Love Them
Healthspan Action Coalition
Hanson Robotics
Sophia, Desi, and Grace
The AIBO robotic puppy

Some of the films discussed:
A.I. (2001)
Ex Machina (2014)
I, Robot (2004)
I'm Your Man (2021)
Robot & Frank (2012)
WALL.E (2008)
Metropolis (1927)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Meet the electrome! with Sally Adee
05-01-2024
Our subject in this episode is the idea that the body uses electricity in more ways than are presently fully understood. We consider ways in which electricity, applied with care, might at some point in the future help to improve the performance of the brain, to heal wounds, to stimulate the regeneration of limbs or organs, to turn the tide against cancer, and maybe even to reverse aspects of aging.

To guide us through these possibilities, who better than the science and technology journalist Sally Adee? She is the author of the book “We Are Electric: Inside the 200-Year Hunt for Our Body's Bioelectric Code, and What the Future Holds”. That book gave David so many insights on his first reading that he went back to it a few months later and read it all the way through again.

Sally was a technology features and news editor at the New Scientist from 2010 to 2017, and her research into bioelectricity was featured in Yuval Noah Harari’s book “Homo Deus”.

Selected follow-ups:
Sally Adee's website
The book "We are Electric"
Article: "An ALS patient set a record for communicating via a brain implant: 62 words per minute"
tDCS (Transcranial direct-current stimulation)
The conference "Anticipating 2025" (held in 2014)
Article: "Brain implants help people to recover after severe head injury"
Article on enhancing memory in older people
Bioelectricity cancer researcher Mustafa Djamgoz
Article on Tumour Treating Fields
Article on "Motile Living Biobots"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Don't try to make AI safe; instead, make safe AI, with Stuart Russell
27-12-2023
We are honoured to have as our guest in this episode Professor Stuart Russell. Stuart is professor of computer science at the University of California, Berkeley, and the traditional way to introduce him is to say that he literally wrote the book on AI. Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig, was first published in 1995, and the fourth edition came out in 2020.

Stuart has been urging us all to take seriously the dramatic implications of advanced AI for longer than perhaps any other prominent AI researcher. He also proposes practical solutions, as in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control.

In 2021 Stuart gave the Reith Lectures, and was awarded an OBE. But the greatest of his many accolades was surely in 2014 when a character with a background remarkably like his was played in the movie Transcendence by Johnny Depp.

The conversation covers a wide range of questions about future scenarios involving AI, and reflects on changes in the public conversation following the FLI's letter calling for a moratorium on more powerful AI systems, and following the global AI Safety Summit held at Bletchley Park in the UK at the beginning of November.

Selected follow-ups:
Stuart Russell's page at Berkeley
Center for Human-Compatible Artificial Intelligence (CHAI)
The 2021 Reith Lectures: Living With Artificial Intelligence
The book Human Compatible: Artificial Intelligence and the Problem of Control

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The Politics of Transhumanism, with James Hughes
13-11-2023
Our guest in this episode is James Hughes. James is a bioethicist and sociologist who serves as Associate Provost at the University of Massachusetts Boston. He is also the Executive Director of the IEET, that is the Institute for Ethics and Emerging Technologies, which he co-founded back in 2004.

The stated mission of the IEET seems to be more important than ever, in the fast-changing times of the mid-2020s. To quote a short extract from its website:

The IEET promotes ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies. We believe that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed. We call this a “technoprogressive” orientation.

Focusing on emerging technologies that have the potential to positively transform social conditions and the quality of human lives – especially “human enhancement technologies” – the IEET seeks to cultivate academic, professional, and popular understanding of their implications, both positive and negative, and to encourage responsible public policies for their safe and equitable use.

That mission fits well with what we like to discuss with guests on this show. In particular, this episode asks questions about a conference that has just finished in Boston, co-hosted by the IEET, with the headline title “Emerging Technologies and the Future of Work”. The episode also covers the history and politics of transhumanism, as a backdrop to discussion of present and future issues.

Selected follow-ups:
https://ieet.org/
James Hughes on Wikipedia
https://medium.com/institute-for-ethics-and-emerging-technologies
Conference: Emerging Technologies and the Future of Work

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

How to make AI safe, according to the tech giants, with Rebecca Finlay, CEO of PAI
30-10-2023
The Partnership on AI was launched back in September 2016, during an earlier flurry of interest in AI, as a forum for the tech giants to meet leaders from academia, the media, and what used to be called pressure groups and are now called civil society. By 2019 more than 100 of those organisations had joined.

The founding tech giants were Amazon, Facebook, Google, DeepMind, Microsoft, and IBM. Apple joined a year later and Baidu joined in 2018.

Our guest in this episode is Rebecca Finlay, who joined the PAI board in early 2020 and was appointed CEO in October 2021. Rebecca is a Canadian who started her career in banking, and then led marketing and policy development groups in a number of Canadian healthcare and scientific research organisations.

In the run-up to the Bletchley Park Global Summit on AI, the Partnership on AI has launched a set of guidelines to help the companies that are developing advanced AI systems and making them available to you and me. Rebecca will be addressing the delegates at Bletchley, and no doubt hoping that the summit will establish the PAI guidelines as the basis for global self-regulation of the AI industry.

Selected follow-ups:
https://partnershiponai.org/
https://partnershiponai.org/team/#rebecca-finlay-staff
https://partnershiponai.org/modeldeployment/
An open event at Wilton Hall, Bletchley, the afternoon before the Bletchley Park AI Safety Summit starts: https://lu.ma/n9qmn4h6

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration