London Futurists
Podcast

Anticipating and managing exponential impact - hosts David Wood and Calum Chace

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.

What's your p(Pause)? with Holly Elmore

Our guest in this episode is Holly Elmore, who is the Founder and Executive Director of PauseAI US. The website pauseai-us.org starts with this headline: "Our proposal is simple: Don't build powerful AI systems until we know how to keep them safe. Pause AI." But PauseAI isn't just a talking shop. They're probably best known for organising public protests. The UK group has demonstrated in Parliament Square in London, with Big Ben in the background, and also outside the offices of Google DeepMind. A group of 30 PauseAI protesters gathered outside the OpenAI headquarters in San Francisco. Other protests have taken place in New York, Portland, Ottawa, São Paulo, Berlin, Paris, Rome, Oslo, Stockholm, and Sydney, among other cities. Previously, Holly was a researcher at the think tank Rethink Priorities in the area of Wild Animal Welfare. And before that, she studied evolutionary biology in Harvard's Organismic and Evolutionary Biology department.

Selected follow-ups:
- Holly Elmore - Substack
- PauseAI US
- PauseAI - global site
- Wild Animal Suffering... and why it matters
- Hard problem of consciousness - Wikipedia
- The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik - by Michael Plant
- Leading Evolution Compassionately - Herbivorize Predators
- David Pearce (philosopher) - Wikipedia
- The AI industry is racing toward a precipice - Machine Intelligence Research Institute (MIRI)
- Nick Bostrom's new views regarding AI/AI safety - Reddit
- AI is poised to remake the world; Help us ensure it benefits all of us - Future of Life Institute
- On being wrong about AI - by Scott Aaronson, on his previous suggestion that it might take "a few thousand years" to reach superhuman AI
- California Institute for Machine Consciousness - organisation founded by Joscha Bach
- Pausing AI is the only safe approach to digital sentience - article by Holly Elmore
- Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers - book by Geoffrey Moore

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · Yesterday · 43:42

Real-life superheroes and troubled institutions, with Tom Ough

Popular movies sometimes feature leagues of superheroes who are ready to defend the Earth against catastrophe. In this episode, we're going to be discussing some real-life superheroes, as chronicled in the new book by our guest, Tom Ough. The book is entitled "The Anti-Catastrophe League: The Pioneers And Visionaries On A Quest To Save The World". Some of these heroes are already reasonably well known, but others were new to David, and, he suspects, to many of the book's readers. Tom is a London-based journalist. Earlier in his career he worked in newspapers, mostly for the Telegraph, where he was a staff feature-writer and commissioning editor. He is currently a senior editor at UnHerd, where he commissions essays and occasionally writes them. Perhaps one reason why he writes so well is that he has a BA in English Language and Literature from Oxford University, where he was a Casberd scholar.

Selected follow-ups:
- About Tom Ough
- The Anti-Catastrophe League - the book's webpage
- On novel methods of pandemic prevention
- What is effective altruism? (EA)
- Sam Bankman-Fried - Wikipedia (also covers FTX)
- Open Philanthropy
- Conscium
- Here Comes the Sun - book by Bill McKibben
- The 10 Best Beatles Songs (Based on Streams)
- Carrington Event - Wikipedia
- Mirror life - Wikipedia
- Future of Humanity Institute 2005-2024: final report - by Anders Sandberg, Oxford FHI
- Global Catastrophic Risks - FHI Conference, 2008
- Forethought
- Review of Nick Bostrom's Deep Utopia - by Calum
- DeepMind and OpenAI claim gold in International Mathematical Olympiad
- What the Heck is Hubble Tension?
- The Decade Ahead - by Leopold Aschenbrenner
- AI 2027
- Anglofuturism

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 1 month · 40:25

Safe superintelligence via a community of AIs and humans, with Craig Kaplan

Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon. Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street, and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).

Selected follow-ups:
- iQ Company
- Herbert A. Simon - Wikipedia
- Amara's Law and Its Place in the Future of Tech - Pohan Lin
- PredictWallStreet
- The Society of Mind - book by Marvin Minsky
- AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News
- Statement on AI Risk - Center for AI Safety
- I've Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor Jones
- Secrets of Software Quality: 40 Innovations from IBM - book by Craig Kaplan
- London Futurists Podcast episode featuring David Brin
- Reason in Human Affairs - book by Herbert Simon
- US and China will intervene to halt 'suicide race' of AGI - Max Tegmark
- If Anyone Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate Soares
- AGI-25 - conference in Reykjavik
- The First Global Brain Workshop - Brussels 2001
- Center for Integrated Cognition
- Paul S. Rosenbloom
- Tatiana Shavrina, Meta
- Henry Minsky launches AI startup inspired by father's MIT research

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 1 month · 41:15

How progress ends: the fate of nations, with Carl Benedikt Frey

Many people expect improvements in technology over the next few years, but fewer people are optimistic about improvements in the economy. Especially in Europe, there's a narrative that productivity has stalled, that the welfare state is over-stretched, and that the regions of the world where innovation will be rewarded are the US and China - although there are lots of disagreements about which of these two countries will gain the upper hand. To discuss these topics, our guest in this episode is Carl Benedikt Frey, the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute. Carl is also a Fellow at Mansfield College, University of Oxford, and is Director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School. Carl's new book has the ominous title, "How Progress Ends". The subtitle is "Technology, Innovation, and the Fate of Nations". A central premise of the book is that our ability to think clearly about the possibilities for progress and stagnation today is enhanced by looking backward at the rise and fall of nations around the globe over the past thousand years. The book contains fascinating analyses of how countries at various times made significant progress, and at other times stagnated. The book also considers what we might deduce about the possible futures of different economies worldwide.

Selected follow-ups:
- Professor Carl Benedikt Frey - Oxford Martin School
- How Progress Ends: Technology, Innovation, and the Fate of Nations - Princeton University Press
- Stop Acting Like This Is Normal - Ezra Klein ("Stop Funding Trump's Takeover")
- OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
- A Human Amateur Beat a Top Go-Playing AI Using a Simple Trick - Vice
- The future of employment: How susceptible are jobs to computerisation? - Carl Benedikt Frey and Michael A. Osborne
- Europe's Choice: Policies for Growth and Resilience - Alfred Kammer, IMF
- MIT Radiation Laboratory ("Rad Lab")

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 2 months · 37:08

Intellectual dark matter? A reputation trap? The case of cold fusion, with Jonah Messinger

Could the future see the emergence and adoption of a new field of engineering called nucleonics, in which the energy of nuclear fusion is accessed at relatively low temperatures, producing abundant clean safe energy? This kind of idea has been discussed since 1989, when the claims of cold fusion first received media attention. It is often assumed that the field quickly reached a dead-end, and that the only scientists who continue to study it are cranks. However, as we'll hear in this episode, there may be good reasons to keep an open mind about a number of anomalous but promising results. Our guest is Jonah Messinger, who is a Winton Scholar and Ph.D. student at the Cavendish Laboratory of Physics at the University of Cambridge. Jonah is also a Research Affiliate at MIT, a Senior Energy Analyst at the Breakthrough Institute, and previously he was a Visiting Scientist and ThinkSwiss Scholar at ETH Zürich. His work has appeared in research journals, on the John Oliver show, and in publications of Columbia University. He earned his Master's in Energy and Bachelor's in Physics from the University of Illinois at Urbana-Champaign, where he was named to its Senior 100 Honorary.

Selected follow-ups:
- Jonah Messinger (The Breakthrough Institute)
- nucleonics.org
- U.S. Department of Energy Announces $10 Million in Funding to Projects Studying Low-Energy Nuclear Reactions (ARPA-E)
- How Anomalous Science Breaks Through - by Jonah Messinger
- Wolfgang Pauli (Wikiquote)
- Cold fusion: A case study for scientific behavior (Understanding Science)
- Calculated fusion rates in isotopic hydrogen molecules - by SE Koonin & M Nauenberg
- Known mechanisms that increase nuclear fusion rates in the solid state - by Florian Metzler et al
- Introduction to superradiance (Cold Fusion Blog)
- Peter L. Hagelstein - Professor at MIT
- Risk and Scientific Reputation: Lessons from Cold Fusion - by Huw Price
- Katalin Karikó (Wikipedia)
- "Abundance" and Its Insights for Policymakers - by Hadley Brown
- Identifying intellectual dark matter - by Florian Metzler and Jonah Messinger

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 4 months · 40:11

AI agents, AI safety, and AI boycotts, with Peter Scott

This episode of London Futurists Podcast is a special joint production with the AI and You podcast, which is hosted by Peter Scott. It features a three-way discussion, between Peter, Calum, and David, on the future of AI, with particular focus on AI agents, AI safety, and AI boycotts. Peter Scott is a futurist, speaker, and technology expert helping people master technological disruption. After receiving a Master's degree in Computer Science from Cambridge University, he went to California to work for NASA's Jet Propulsion Laboratory. His weekly podcast, "Artificial Intelligence and You", tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution? Peter's second book, also called "Artificial Intelligence and You", was released in 2022. Peter works with schools to help them pivot their governance frameworks, curricula, and teaching methods to adapt to and leverage AI.

Selected follow-ups:
- Artificial Intelligence and You (podcast)
- Making Sense of AI - Peter's personal website
- Artificial Intelligence and You (book)
- AI agent verification - Conscium
- Preventing Zero-Click AI Threats: Insights from EchoLeak - Trend Micro
- Future Crimes - book by Marc Goodman
- How TikTok Serves Up Sex and Drug Videos to Minors - Washington Post
- COVID-19 vaccine misinformation and hesitancy - Wikipedia
- Cambridge Analytica - Wikipedia
- Invisible Rulers - book by Renée DiResta
- 2025 Northern Ireland riots (Ballymena) - Wikipedia
- Google DeepMind Slammed by Protesters Over Broken AI Safety Promise

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 4 months · 54:51

The remarkable potential of hydrogen cars, with Hugo Spowers

The guest in this episode is Hugo Spowers. Hugo has led an adventurous life. In the 1970s and 80s he was an active member of the Dangerous Sports Club, which invented bungee jumping, inspired by an initiation ceremony in Vanuatu. Hugo skied down a black run in St. Moritz in formal dress, seated at a grand piano, and he broke his back, neck and hips when he misjudged the length of one of his bungee ropes. Hugo is a petrol head, and has done more than his fair share of car racing. But if he'll excuse the pun, his driving passion was always the environment, and he is one of the world's most persistent and dedicated pioneers of hydrogen cars. He is co-founder and CEO of Riversimple, a 24-year-old pre-revenue startup which has developed five generations of research vehicles. Hydrogen cars are powered by electric motors using electricity generated by fuel cells. Fuel cells are electrolysis in reverse: you put in hydrogen and oxygen, and what you get out is electricity and water. There is a long-standing debate among energy experts about the role of hydrogen fuel cells in the energy mix, and Hugo is a persuasive advocate. Riversimple's cars carry modest-sized fuel cells complemented by supercapacitors, with motors for each of the four wheels. The cars are made of composites, not steel, because minimising weight is critical for fuel efficiency, pollution, and road safety. The cars are leased rather than sold, which enables a circular business model, involving higher initial investment per car, and no built-in obsolescence. The initial, market-entry cars are designed as local run-arounds for households with two cars, which means the fuelling network can be built out gradually. And Hugo also has strong opinions about company governance.
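As a minimal sketch of the "electrolysis in reverse" chemistry mentioned above, here are the standard half-reactions of a proton-exchange-membrane fuel cell (the general textbook chemistry, not a detail specific to Riversimple's cells):

```latex
% Hydrogen fuel cell: electrolysis run in reverse.
% Hydrogen and oxygen go in; electricity and water come out.
\begin{align*}
\text{Anode:}   &\quad \mathrm{2\,H_2 \;\rightarrow\; 4\,H^+ + 4\,e^-} \\
\text{Cathode:} &\quad \mathrm{O_2 + 4\,H^+ + 4\,e^- \;\rightarrow\; 2\,H_2O} \\
\text{Overall:} &\quad \mathrm{2\,H_2 + O_2 \;\rightarrow\; 2\,H_2O}
                   \;+\; \text{electrical energy} \;+\; \text{heat}
\end{align*}
```

The electrons released at the anode flow through the external circuit (driving the motors) before recombining at the cathode, which is why the only tailpipe emission is water.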
Selected follow-ups:
- Hugo Spowers - Wikipedia
- Riversimple

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 4 months · 44:33

The AI disconnect: understanding vs motivation, with Nate Soares

Our guest in this episode is Nate Soares, Executive Director of the Machine Intelligence Research Institute, or MIRI. MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed as MIRI. Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he's been a key figure in the AI safety community. In a blog post at the time he joined MIRI he observed: "I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty." MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we'll explore what drives that view, and whether there is any room for hope.

Selected follow-ups:
- Nate Soares - MIRI
- Yudkowsky and Soares Announce Major New Book: "If Anyone Builds It, Everyone Dies" - MIRI
- The Bayesian model of probabilistic reasoning
- During safety testing, o1 broke out of its VM - Reddit
- Leo Szilard - Physics World
- David Bowie - Five Years - Old Grey Whistle Test
- Amara's Law - IEEE
- Robert Oppenheimer calculation of p(doom)
- JD Vance commenting on AI-2027
- SolidGoldMagikarp - LessWrong
- ASML
- Chicago Pile-1 - Wikipedia
- Castle Bravo - Wikipedia

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 5 months · 50:18

Anticipating an Einstein moment in the understanding of consciousness, with Henry Shevlin

Our guest in this episode is Henry Shevlin. Henry is the Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also co-directs the Kinds of Intelligence program and oversees educational initiatives.  He researches the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence.  In his 2024 paper, “Consciousness, Machines, and Moral Status,” Henry examines the recent rapid advancements in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly, as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in and emotional attachment to AIs. Note: this episode is co-hosted by David and Will Millership, the CEO of a non-profit called Prism (Partnership for Research Into Sentient Machines). Prism is seeded by Conscium, a startup where both Calum and David are involved, and which, among other things, is researching the possibility and implications of machine consciousness. Will and Calum will be releasing a new Prism podcast focusing entirely on Conscious AI, and the first few episodes will be in collaboration with the London Futurists Podcast. 
Selected follow-ups:
- PRISM podcast
- Henry Shevlin - personal site
- Kinds of Intelligence - Leverhulme Centre for the Future of Intelligence
- Consciousness, Machines, and Moral Status - 2024 paper by Henry Shevlin
- Apply rich psychological terms in AI with care - by Henry Shevlin and Marta Halina
- What insects can tell us about the origins of consciousness - by Andrew Barron and Colin Klein
- Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - by Patrick Butlin, Robert Long, et al
- Association for the Scientific Study of Consciousness

Other researchers mentioned: Blake Lemoine, Thomas Nagel, Ned Block, Peter Senge, Galen Strawson, David Chalmers, David Benatar, Thomas Metzinger, Brian Tomasik, Murray Shanahan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 6 months · 42:20

The case for a conditional AI safety treaty, with Otto Barten

How can a binding international treaty be agreed and put into practice, when many parties are strongly tempted to break the rules of the agreement, for commercial or military advantage, and when cheating may be hard to detect? That's the dilemma we'll examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI. Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto, advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint which we'll be discussing today, "International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty". Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.

Selected follow-ups:
- Existential Risk Observatory
- There Is a Solution to AI's Existential Risk Problem - Time
- International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty - Otto Barten and colleagues
- The Precipice: Existential Risk and the Future of Humanity - book by Toby Ord
- Grand futures and existential risk - lecture by Anders Sandberg in London attended by Otto
- PauseAI
- StopAI
- Responsible Scaling Policies - METR
- Meta warns of 'worse' experience for European users - BBC News
- Accidental Nuclear War: a Timeline of Close Calls - FLI
- The Vulnerable World Hypothesis - Nick Bostrom
- Semiconductor Manufacturing Optics - Zeiss
- California Institute for Machine Consciousness
- Tipping point for large-scale social change? Just 25 percent - Penn Today

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 7 months · 38:12

Humanity's final four years? with James Norris

In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks. Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries. Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.

Selected follow-ups:
- James Norris website
- Upgrade your life & legacy - Upgradable
- The 7 Habits of Highly Effective People (Stephen Covey)
- Beneficial AI 2017 - Asilomar conference
- "...superintelligence in a few thousand days" - Sam Altman blog post
- Amara's Law - DevIQ
- The Probability of Nuclear War (JFK estimate)
- AI Designs Chemical Weapons - The Batch
- The Vulnerable World Hypothesis - Nick Bostrom
- We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
- Instrumental convergence - Wikipedia
- Neanderthal extinction - Wikipedia
- Matrioshka brain - Wikipedia
- Will there be a 'WW3' before 2050? - Manifold prediction market
- Existential Safety Action Pledge
- An Urgent Call for Global AI Governance - IAIGA petition
- Build your survival sanctuary

Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 7 months · 49:37

Human extinction: thinking the unthinkable, with Sean ÓhÉigeartaigh

Our subject in this episode may seem grim - it's the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high energy physics experiments causing a cataclysmic rupture in space and time. These scenarios aren't pleasant to contemplate, but there's a school of thought that urges us to take them seriously - to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Sean ÓhÉigeartaigh. Sean is the author of a recent summary article from Cambridge University Press that we'll be discussing, "Extinction of the human species: What could cause it and how likely is it to occur?" Sean is presently based in Cambridge where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.

Selected follow-ups:
- Seán Ó hÉigeartaigh - Leverhulme Centre profile
- Extinction of the human species - by Sean ÓhÉigeartaigh
- Herman Kahn - Wikipedia
- Moral.me - by Conscium
- Classifying global catastrophic risks - by Shahar Avin et al
- Defence in Depth Against Human Extinction - by Anders Sandberg et al
- The Precipice - book by Toby Ord
- Measuring AI Ability to Complete Long Tasks - by METR
- Cold Takes - blog by Holden Karnofsky
- What Comes After the Paris AI Summit? - article by Sean
- ARC-AGI - by François Chollet
- Henry Shevlin - Leverhulme Centre profile
- Eleos (includes Rosie Campbell and Robert Long)
- NeurIPS talk by David Chalmers
- Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
- The Unilateralist's Curse - by Nick Bostrom and Anders Sandberg

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 7 months · 42:27

The best of times and the worst of times, updated, with Ramez Naam

Our guest in this episode, Ramez Naam, is described on his website as "climate tech investor, clean energy advocate, and award-winning author". But that hardly starts to convey the range of deep knowledge that Ramez brings to a wide variety of fields. It was his 2013 book, "The Infinite Resource: The Power of Ideas on a Finite Planet", that first alerted David to the breadth of scope of his insight about future possibilities - both good possibilities and bad possibilities. He still vividly remembers its opening words, quoting Charles Dickens from "A Tale of Two Cities": Quote: "'It was the best of times; it was the worst of times' - the opening line of Charles Dickens's 1859 masterpiece applies equally well to our present era. We live in unprecedented wealth and comfort, with capabilities undreamt of in previous ages. We live in a world facing unprecedented global risks - risks to our continued prosperity, to our survival, and to the health of our planet itself. We might think of our current situation as 'A Tale of Two Earths'." End quote. Twelve years after the publication of "The Infinite Resource", it seems that the Earth has become even better, but also even worse. Where does this leave the power of ideas? Or do we need more than ideas, as ominous storm clouds continue to gather on the horizon?
Selected follow-ups: Ramez Naam - personal website The Infinite Resource: The Power of Ideas on a Finite Planet The Nexus Trilogy (Nexus Crux Apex) Jesse Jenkins (Princeton) Six Degrees: Our Future on a Hotter Planet - book by Mark Lynas 1991 eruption of Mount Pinatubo - Wikipedia We cool Earth, with reflective clouds - Make Sunsets Direct Air Capture (DAC) - Wikipedia Frontier: An advance market commitment to accelerate carbon removal Toward a Responsible Solar Geoengineering Research Program - by David Keith South Korea scales down plans for nuclear power Microsoft chooses infamous nuclear site for AI power Machines of Loving Grace: How AI Could Transform the World for the Better - Essay by Dario Amodei Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 8 months · 45:07

PAI at Paris: the global AI ecosystem evolves, with Rebecca Finlay

In this episode, our guest is Rebecca Finlay, the CEO at Partnership on AI (PAI). Rebecca previously joined us in Episode 62, back in October 2023, during the run-up to the Global AI Safety Summit in Bletchley Park in the UK. Times have moved on, and earlier this month, Rebecca and the Partnership on AI participated in the latest global summit in that same series, held this time in Paris. This summit, breaking with the previous naming, was called the AI Action Summit. We’ll be hearing from Rebecca how things have evolved since we last spoke – and what the future may hold. Prior to joining Partnership on AI, Rebecca founded the AI & Society program at global research organization CIFAR, one of the first international, multistakeholder initiatives on the impact of AI in society. Rebecca’s insights have been featured in books and media including The Financial Times, The Guardian, Politico, and Nature Machine Intelligence. She is a Fellow of the American Association for the Advancement of Science and sits on advisory bodies in Canada, France, and the U.S. Selected follow-ups: Partnership on AI Rebecca Finlay Our previous episode featuring Rebecca CIFAR (The Canadian Institute for Advanced Research) "It is more than time that we move from science fiction" - remarks by Anne Bouverot International AI Safety Report 2025 - report from expert panel chaired by Yoshua Bengio The Inaugural Conference of the International Association for Safe and Ethical AI (IASEAI) A.I. Pioneer Yoshua Bengio Proposes a Safe Alternative Amid Agentic A.I. Hype US and UK refuse to sign Paris summit declaration on ‘inclusive’ AI Current AI Collaborative event on AI accountability CERN for AI AI Summit Day 1: Harnessing AI for the Future of Work The Economic Singularity Why is machine consciousness important? 
(Conscium) Brain, Mind & Consciousness (CIFAR) Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 9 months · 38:51

AI agents: challenges ahead of mainstream adoption, with Tom Davenport

The most highly anticipated development in AI this year is probably the arrival of AI agents, also referred to as “agentic AI”. We are told that AI agents have the potential to reshape how individuals and organizations interact with technology. Our guest to help us explore this is Tom Davenport, Distinguished Professor in Information Technology and Management at Babson College, and a globally recognized thought leader in the areas of analytics, data science, and artificial intelligence. Tom has written, co-authored, or edited about twenty books, including "Competing on Analytics" and "The AI Advantage." He has worked extensively with leading organizations and has a unique perspective on the transformative impact of AI across industries. He has recently co-authored an article in the MIT Sloan Management Review, “Five Trends in AI and Data Science for 2025”, which included a section on AI agents – which is why we invited him to talk about the subject. Selected follow-ups: Tom Davenport - personal site Five Trends in AI and Data Science for 2025 - MIT Sloan Management Review Michael Martin Hammer - Wikipedia AI winter - Wikipedia AI is coming for the OnlyFans chat industry - Fortune How Gen AI and Analytical AI Differ — and When to Use Each - Harvard Business Review Truth Terminal - The AI Bot That Became a Crypto Millionaire - a16z Jim Simons - Wikipedia Why The "Godfather of AI" Now Fears His Own Creation - Curt Jaimungal interviews Geoffrey Hinton Attention Is All You Need - Google researchers Apple suspends error-strewn AI generated news alerts - BBC News Gen AI cuts costs by 30% - London Futurists Podcast episode featuring David Wakeling, partner at A&O Shearman The path to agentic automation is UiPath - UiPath Microsoft CEO Predicts: "AI Agents Will Replace ALL Software" - AI Insights Explorer NVIDIA CEO Jensen Huang Keynote at CES 2025 - Nvidia Pioneering Safe, Efficient AI - Conscium A New Survey Of Generative AI Shows Lots Of Work To Do - October 2023 article by Tom Davenport Gen AI: Too much spend, too little benefit? - Goldman Sachs Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 10 months · 33:29

Post-labour economics, with David Shapiro

In this episode, we return to a theme which is likely to become increasingly central to public discussion in the months and years ahead. To use a term coined by this podcast’s co-host Calum Chace, this theme is the Economic Singularity, namely the potential all-round displacement of humans from the workforce by ever more capable automation. That leads to the question: what are our options for managing society’s transition to increasing technological unemployment and underemployment? Our guest, who will be sharing his thinking on these questions, is the prolific writer and YouTuber David Shapiro. As well as keeping on top of fast-changing news about innovations in AI, David has been developing a set of ideas he calls post-labour economics – how an economy might continue to function even if humans can no longer gain financial rewards in direct return for their labour. Selected follow-ups: David Shapiro’s Substack David Shapiro's channel on YouTube Julia McCoy's channel on YouTube Next stop: Miami - Waymo Resource Based Economy Debt: The First 5,000 Years - book by David Graeber Broken Money: Why Our Financial System is Failing Us and How We Can Make it Better - book by Lyn Alden The Bitcoin Standard: The Decentralized Alternative to Central Banking - book by Saifedean Ammous Normalcy bias - Wikipedia Why Nations Fail: The Origins of Power, Prosperity, and Poverty - book by Daron Acemoğlu and James A. Robinson Principles for Dealing with the Changing World Order: Why Nations Succeed and Fail - book by Ray Dalio Vulture Capitalism: Corporate Crimes, Backdoor Bailouts, and the Death of Freedom - book by Grace Blakeley The Economic Singularity: Artificial Intelligence and Fully Automated Luxury Capitalism - book by Calum Chace Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 10 months · 42:49

Longevity activism at 82, 86, and beyond, with Kenneth Scott and Helga Sands

Our guests in this episode have been described as the world’s two oldest scientifically astute longevity activists. They are Kenneth Scott, aged 82, who is based in Florida, and Helga Sands, aged 86, who lives in London. David has met both of them several times at a number of longevity events, and they always impress him, not only with their vitality and good health, but also with the level of knowledge and intelligence they apply to the question of which treatments are the best, for them personally and for others, to help keep people young and vibrant. Selected follow-ups: Waiting For God - 1990s BBC Comedy Adelle Davis, Nutritionist Roger J. Williams, Biochemist The Importance of Maintaining a Low Omega-6/Omega-3 Ratio Life Extension Magazine California Age Management Institute Fibrinogen and aging Professor Angus Dalgleish, Nuffield Health About Aubrey de Grey speaking at the Royal Institution George Church, Geneticist James Kirkland, Mayo Clinic Daniel Munoz-Espin, Cambridge Nobel Prize for John Gurdon and Shinya Yamanaka VSELs and S.O.N.G. laser Xtend Optimal Health Follistatin gene therapy, Minicircle Exosomes vs Stem Cells Prevent and Reverse Heart Disease - book by Caldwell Esselstyn Jr Dasatinib and Quercetin (senolytics) We reverse atherosclerosis - Repair Biotechnologies Bioreactor-Grown Mitochondria - Mitrix Nobel Winner Shinya Yamanaka: Cell Therapy Is ‘Very Promising’ For Cancer, Parkinson's, More Death of the world's oldest man, 25th Nov 2024 Blueprint protocol - Bryan Johnson Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 11 months · 45:10

Models for society when humans have zero economic value, with Jeff LaPorte

Our guest in this episode is Jeff LaPorte, a software engineer, entrepreneur and investor based in Vancouver, who writes Road to Artificia, a newsletter about discovering the principles of post‑AI societies. Calum recently came across Jeff's article “Valuing Humans in the Age of Superintelligence: HumaneRank” and thought it had some good, original ideas, so we wanted to invite Jeff onto the podcast and explore them. Selected follow-ups: Jeff LaPorte personal business website Road to Artificia: A newsletter about discovering the principles of societies post‑AI Valuing Humans in the Age of Superintelligence: HumaneRank Ideas Lying Around - article by Cory Doctorow about a famous saying by Milton Friedman PageRank - Wikipedia Nosedive (Black Mirror episode) - IMDb The Economic Singularity - book by Calum Chace World Chess Championship 2024 - Wikipedia WALL·E (2008 movie) - IMDb A day in the life of Asimov, 2045 - short story by David Wood Why didn't electricity immediately change manufacturing? - by Tim Harford, BBC Responsible use of artificial intelligence in government - Government of Canada Bipartisan House Task Force Report on Artificial Intelligence - U.S. House of Representatives Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 11 months · 41:02

From ineffective altruism to effective altruism? with Stefan Schubert

Our subject in this episode is altruism – our human desire and instinct to assist each other, making some personal sacrifices along the way. More precisely, our subject is the possible future of altruism – a future in which our philanthropic activities – our charitable donations, and how we spend our discretionary time – could have a considerably greater impact than at present. The issue is that many of our present activities, which are intended to help others, aren’t particularly effective. That’s the judgement reached by our guest today, Stefan Schubert. Stefan is a researcher in philosophy and psychology, currently based in Stockholm, Sweden, and has previously held roles at the LSE and the University of Oxford. Stefan is the co-author of the recently published book “Effective Altruism and the Human Mind”. Selected follow-ups: Stefan Schubert - Effective Altruism Effective Altruism and the Human Mind: The Clash Between Impact and Intuition - Oxford University Press (open access) Centre for Effective Altruism Professor Nadira Faber - Uehiro Institute, Oxford What are the best charities to support in 2024? - Giving What We Can Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed - Time Virtues for Real-World Utilitarians - by Stefan Schubert & Lucius Caviola, Utilitarianism Deworming - Effective Altruism Forum What we know about Musk's cost-cutting mission - BBC article about DOGE What is your p(doom)? with Darren McKee Longtermism - Wikipedia Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 11 months · 34:12

The global energy transition: an optimistic assessment, with Amory Lovins

Our guest in this episode is Amory Lovins, a distinguished environmental scientist, and co-founder of RMI, which he co-founded in 1982 as Rocky Mountain Institute. It’s what he calls a “think, do, and scale tank”, with 700 people in 62 countries, and a budget of well over $100m a year. For over five decades, Amory has championed innovative approaches to energy systems, advocating for a world where energy services are delivered with least cost and least impact. He has advised all manner of governments, companies, and NGOs, and published 31 books and over 900 papers. It’s an over-used word, but in this case it is justified: Amory is a true thought leader in the global energy transition. Selected follow-ups: Inside Amory's Brain - RMI Get to know us - RMI Books by Amory B. Lovins - Goodreads Reinventing Fire - RMI Integrative Design: A Practice to Tackle Complex Challenges - Stanford d.school What is Integrative Design? - RMI Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Internet and technology · 11 months · 34:34