
Podcast
Ethical Machines
67 episodes
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
We Are All Responsible for AI, Part 2
Episode in
Ethical Machines
In the last episode, Brian Wong argued that there’s a “gap” between the harms caused by developing and using AI, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In Part 2, Brian explains how he thinks about what responsibility is and how it has implications for our social responsibilities.
58:35
We Are All Responsible for AI, Part 1
Episode in
Ethical Machines
We’re all connected to how AI is developed and used across the world. And that connection, argues my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geopolitical risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits”: ways in which it looks like no one is accountable for the harms, and no one compensates the people who are harmed. There’s a lot here, so buckle up!
01:04:23
Orchestrating Ethics
Episode in
Ethical Machines
One company builds the LLM. Another company uses that model for its own purposes. How do we know that the ethical standards of the first match those of the second? How does the second company know it’s using a technology commensurate with its own ethical standards? This is a conversation I had with David Danks, Professor of Philosophy and Data Science at UCSD, almost three years ago. But the conversation is just as pressing now as it was then. In fact, given the widespread adoption of AI built by a handful of companies, it’s even more important now that we get this right.
44:15
The Military is the Safest Place to Test AI
Episode in
Ethical Machines
How can one of the highest-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, now Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and notes that all of this happens against the backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger political issues, including China’s use of military AI.
45:39
Should We Make Digital Copies of People?
Episode in
Ethical Machines
Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions in your will about what can be done with your digital identity? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise longer-standing philosophical issues: Can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better, thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.
46:05
How Society Bears AI’s Costs
Episode in
Ethical Machines
AI is leading the economic charge. In fact, without the massive investments in AI, our economy would look a lot worse right now. But what social and political costs do we incur? My guest, Karen Yeung, a professor at Birmingham Law School and School of Computer Science, argues that investments in AI are consolidating power while disempowering the rest of society. Our individual autonomy and our collective cohesion are simultaneously eroding. We need to push back. But how? And on what grounds? To what extent is the problem our socio-economic system, our culture, or government (in)action? These questions and more in a particularly fun episode (for me, anyway).
40:12
How Should We Teach Ethics to Computer Science Majors?
Episode in
Ethical Machines
The engineering and data science students of today are tomorrow’s tech innovators. If we want them to develop ethically sound technology, they’d better have a good grip on what ethics is all about. But how should we teach them? The same way we teach ethics in philosophy? Or is something different needed, given the kinds of organizational forces they’ll find themselves subject to once they’re working? Steven Kelts, a lecturer in Princeton’s School of Public and International Affairs and in the Department of Computer Science, researches this subject and teaches those very students himself. We explore what his research and his experience show us about how we can best train our computer scientists to take the welfare of society into their minds and their work.
55:35
In Defense of Killer Robots
Episode in
Ethical Machines
Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed and/or blown up. Except… maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case that we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
50:47
Is AI Creating a Sadder Future?
Episode in
Ethical Machines
In August, I recorded a discussion with David Ryan Polgar, Founder of the nonprofit All Tech Is Human, in front of an audience of around 200 people. We talked about how AI-mediated experiences make us feel sadder, how the tech companies don’t really care about this, and how people can organize to push those companies to take our long-term well-being more seriously.
39:26
Season finale: A New Ethics for AI Ethics?
Episode in
Ethical Machines
Wendell Wallach, who has been in the AI ethics game longer than just about anyone and has several books to his name on the subject, talks about his dissatisfaction with talk of “value alignment,” why traditional moral theories are not helpful for doing AI ethics, and how we can do better.
42:22
Is AI a Person or a Thing… or Neither?
Episode in
Ethical Machines
It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons, and there seems to be good reason for doing so. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person, Thing, Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.
47:25
How Do You Control Unpredictable AI?
Episode in
Ethical Machines
LLMs behave in unpredictable ways. That’s a gift and a curse: their unpredictability both allows for their “creativity” and makes them hard to control (a bit like a real artist, actually). In this episode, we focus on the cyber risks of AI with Walter Haydock, a former national security policy advisor and the Founder of StackAware.
51:38
The AI Job Interviewer
Episode in
Ethical Machines
AI can stand between you and getting a job. That means that to make money and support yourself and your family, you may have to convince an AI that you’re the right person for the job. And yet AI can be biased and fail in all sorts of ways. This is a conversation with Hilke Schellmann, investigative journalist and author of “The Algorithm,” along with her colleague Mona Sloane, Assistant Professor of Data Science and Media Studies at the University of Virginia. We discuss Hilke’s book and all the ways things go sideways when people are looking for work in the AI era.
42:13
Accuracy Isn’t Enough
Episode in
Ethical Machines
We want accurate AI, right? As long as it’s accurate, we’re all good? My guest, Will Landecker, CEO of Accountable Algorithm, explains why accuracy is just one metric among many to aim for. In fact, we have to make tradeoffs across things like accuracy, relevance, and normative (including ethical) considerations in order to get a usable model. We also cover whether explainability is important, whether it’s even on the menu, and the risks of multi-agentic AI systems.
49:23
Beware of Autonomous Weapons
Episode in
Ethical Machines
Should we allow autonomous AI weapons systems? Who is accountable if things go sideways? And how is AI going to transform the future of military work? All this and more with my guest, Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies at the University of Oxford.
38:50
How Do We Construct Intelligence?
Episode in
Ethical Machines
The Silicon Valley titans talk a lot about the intelligence and superintelligence of AI… but what is intelligence, anyway? My guest, former philosophy professor and now a Director at Gartner, Philip Walsh, argues the Silicon Valley folks are fundamentally confused about what intelligence is. It’s not, he argues, like horsepower, which can be objectively measured. Instead, whether we ascribe intelligence to something is a matter of what we communally agree to ascribe intelligence to. More specifically, we have to collectively agree on the criteria for intelligence; only then does it make sense to say, “Yeah, this thing is intelligent.” But we don’t really have a settled collective agreement, and that’s why we sort of want to say “this is not intelligence” at the same time we say, “How is this thing so smart?!” I think this is a crucial discussion for anyone who wants to think deeply about what to make of our new quasi/proto/faux intelligent companions.
53:59
AI Needs Historians
Episode in
Ethical Machines
How can we solve AI’s problems if we don’t understand where they came from? Originally aired in season one.
33:38
We’re Not Ready for Agentic AI
Episode in
Ethical Machines
Tech companies are racing to build and sell agentic AI. The vision is one in which countless AI agents act on our behalf: searching the web, making transactions, interacting with other AI agents. But my guest Avijit Ghosh, Applied Policy Researcher at Hugging Face, explains why we’re not even close to having the appropriate safeguards in place. What are the massive gaps, and what would it take to close them? That’s the topic of our discussion.
AI Agent Framework: SmolAgents: https://huggingface.co/blog/smolagents
AI Agents Course: https://huggingface.co/learn/agents-course/en/unit0/introduction
Position paper: https://huggingface.co/papers/2502.02649
Op-ed: https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake/
53:34
How Algorithms Manipulate Us
Episode in
Ethical Machines
We’re told that algorithms on social media are manipulating us. But is that true? What is manipulation? Can an AI really do it? And is it necessarily a bad thing? These questions and more with philosopher Michael Klenk.
47:57
What Should We Do When AI Knows More Than Us?
Episode in
Ethical Machines
We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, and so on. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with their beliefs. But what if an AI “knows” more than us, such that it is an authority in the field in which we’re questioning it? Should we defer to the AI? Should we replace our beliefs with whatever it believes? On the one hand, hard pass! On the other, it does know better than us. What to do? That’s the issue that drives this conversation with my guest, Benjamin Lange, Research Assistant Professor in the Ethics of AI and ML at Ludwig Maximilian University of Munich.
48:24
You may also like
xHUB.AI
In the era of Artificial Intelligence, its application in any scenario is the biggest and most important debate for humanity and its future. On the xHUB.AI podcast, we talk about artificial intelligence and other cross-cutting sciences, and their application to different sectors and solutions, with the best speakers and specialists. Artificial Intelligence will change the world, and we want to tell you about it. Are you going to miss out?
Hablando Crypto
Interested in cryptocurrencies? So are we. We’re Óscar and Cristian. After more than five years tinkering with cryptocurrencies, we share our stories. We also talk about how we see the crypto world and where we think it’s headed.
TISKRA
A podcast about consumer technology and software. Strategic analysis of the Apple, Google, Microsoft, Tesla, and Amazon ecosystems, as well as entertainment products and their potential economic and social impact. Hosted by @JordiLlatzer.