Online Conversation | Understanding Transhumanism with Richard Mouw and Rosalind Picard

On Friday, September 10th we hosted theologian and author Richard Mouw and scholar and inventor Rosalind Picard as we explored the interconnection of AI, transhumanism, and human flourishing. This online conversation was part of our series on “Discovery and Doxology” in partnership with BioLogos and Church of the Advent. This series brings together leading scientists and theologians to discuss the relationship between science and faith.

Rapid developments in artificial intelligence and other emerging technologies can add confusion to the existential questions of who we are and why we are here. Through this conversation, Mouw and Picard explored the potential benefits and cautions of these technologies and thoughtfully examined the philosophical foundations of transhumanism.

The song is “Halelyah” by Avishai Cohen Trio.

The installation is “Domain Field” by Antony Gormley, 2003.


Special thanks to this event’s partners:
Transcript of Understanding Transhumanism with Richard Mouw and Rosalind Picard

Cherie Harder: [I want to welcome] you to today’s Online Conversation on understanding transhumanism with Richard Mouw and Rosalind Picard. I’d also like to thank our friends at BioLogos, as well as the Templeton Religion Trust, whose support has helped make this program possible, as well as our collaborators in this, the good folks from Church of the Advent, ably led by Tommy Henson, the rector there. So many thanks to our partners for making today’s discussion possible.

I’d also just like to say how good it is to be back after a hiatus of a few weeks, and we’d especially like to welcome the many people who are joining us today, including our more than 200 first-time guests, as well as our many international guests joining us from at least 34 different countries that we know of, ranging from Argentina to Zimbabwe. Let us know where you’re joining from if you haven’t already done so. You may see in the chat feature that many of our viewers are letting us know where they’re from, and especially if you are one of those people joining us from across many miles, we hope that you’ll make yourself known there. If you are one of those 200 or so first-time guests or are new to the work of the Trinity Forum, we seek to provide a space to engage the big questions of life in the context of faith, and we offer programs such as today’s Online Conversation to do so, and to come to better know the Author of the answers. We hope that you’ll get a taste of that from our discussion today.

This is actually our 60th Online Conversation that we have hosted since the pandemic began. Over the last year and a half of these Online Conversations, we’ve delved into a wide variety of topics: reflecting on the pleasures and consolations of great literature in quarantine, the power of poetry, the rise of conspiracy thinking, the challenge of Christian nationalism, new ways of reading Jane Austen, the possibilities of redeeming a culture of contempt, and the role of story in shaping culture. We’d love for you to check out many of these conversations at our website, www.ttf.org. But today we are wrestling with a very different topic: looking at the implications of technological advances that some believe have the power to enhance, improve, even remake what it means to be human. New discoveries made within the fields of artificial intelligence, or AI, as well as within biotechnology, hold thrilling promise for everything from reducing global poverty to creating entirely new industries, developing more sustainable agriculture, reducing hunger, eliminating certain diseases, and delivering massive improvements in health, wealth, human performance, and even cognition.

But with any technology or tool of extraordinary power, there’s also the potential for disruption or distortion. In the biomedical world, the technology that can help eliminate cystic fibrosis can also be used to alter the germline of future generations and even potentially create new hybrid creatures. And the AI advances that have brought us such incredibly helpful, user-friendly assistants like Siri and Alexa may, according to some thinkers like Ray Kurzweil, lead potentially to the possibility of merging with, or even being mastered by, a superior machine intelligence, such that our very idea of what it means to be human is transformed. In many ways, some have said that we stand at the precipice of a brave new world, and like the characters in Aldous Huxley’s novel, it’s all too easy to remain distracted from the serious questions that our technologies pose. And so it’s a real privilege to welcome our guests today, both of whom have thought long and deeply about the promise of new AI technology and the hopes and philosophies that guide its development, from their respective positions as a scientist-inventor in one case and a philosopher and theologian in the other, to help us think more wisely and faithfully about the inevitable questions that are raised by life- and society-altering new technologies.

So I’m so pleased to introduce first Richard Mouw. Richard is a theologian, philosopher, and senior research fellow at the Paul Henry Institute for the Study of Christianity and Politics at Calvin College and previously served as the president of Fuller Seminary for over 20 years. He has written many, many works—I think it’s at least 19, possibly more—including Uncommon Decency: Christian Civility in an Uncivil World, Pluralisms and Horizons, He Shines in All That’s Fair, Praying at Burger King, Calvinism at the Las Vegas Airport, and many others. He has served as the president of the Association of Theological Schools and for six years as co-chair of the Reformed-Catholic Dialogue, and has been awarded the Abraham Kuyper Prize for Excellence in Reformed Theology and Public Life by Princeton Seminary.

Joining Rich is Rosalind Picard, and Rosalind is a scientist and engineer, a professor at the MIT Media Lab, where she is also the founder and the director of the Affective Computing Research Group. She has also co-founded two companies, including Affectiva, which provides emotion AI technologies now used by more than a quarter of the global Fortune 500, as well as Empatica, which provides wearable sensors and analytics to improve health and created the first FDA-cleared smartwatch for epilepsy patients, a company where she also serves as chief scientist as well as chairman of the board. She has helped launch the field of wearable computing, has authored or coauthored more than 300 peer-reviewed articles spanning affective computing, AI, and digital health, is an elected member of the National Academy of Engineering, serves on the Board of Advisors for Scientific American, is an active inventor with numerous patents, and a sought-after speaker whose TED talk has generated more than two million views.

So Rich and Rosalind, welcome. It’s great to have you here.

Rosalind Picard: It is such a pleasure.

Cherie Harder: Absolutely. Well, we’ll just sort of dive in at the very beginning, and because I’ve just thrown around a couple of terms—”affective computing” and “transhumanism”—both of which could probably use better definition, Rosalind, I’d just love to hear from you, first of all, more about your field of affective computing, what it is, what AI means, what transhumanism means, and how they all relate to each other.

Rosalind Picard: Thanks, Cherie. That’s a tall order. Affective computing is computing that relates to, arises from, or deliberately influences emotion. Very practically, it’s been motivated by trying to give computers more of the skills of emotional intelligence, social-emotional intelligence, not just the mathematical and verbal kinds of intelligence, with the goal of making interactions with computers a lot less stressful and less frustrating and annoying. Transhumanism is a term that, in my area of research, we don’t usually use, but my understanding is it’s kind of a loosely defined movement that does encompass a lot of what we do build, where I work at MIT in the Media Lab and in the fields of AI and affective computing. The main loose definition I might give is a movement inspired by trying to improve the human condition: curing or eradicating disease, trying to eliminate unnecessary suffering, and augmenting ourselves in ways that in our lab have been focused mostly on alleviating a lot of the challenges of disability. We may start with a physical prosthesis or a cognitive prosthesis or an emotional prosthesis, and from there, we may go from simply giving a person who doesn’t have legs, legs to walk, to giving a person legs to run faster than any person can run. So in some cases, it can enhance human performance. And then sometimes we talk well beyond that, with a bit of rhapsody, about how we might augment humans. And we use a lot of technical jargon or geeky jargon, like upgrading ourselves or human 2.0, and making us into something that is more than what we are today. And that may not just be enhancing these abilities, but maybe there’s some kind of superhuman future. And, you know, would that be an AI? Would that be some kind of combination of us and AI?
And then on top of that, I hear from a lot of my colleagues, especially non-religious colleagues, a real interest in prolonging life and achieving something like eternal life, but without God. So it ranges from everything from, you know, giving somebody legs to, you know—and one might even look at a wheelchair as a kind of transhumanism where you’re adding to a human—and going on to something that exceeds us.

Cherie Harder: Now that is fascinating. You know, you indicate there that in some quarters there’s even a discussion about immortality and eternal life, and transhumanism, while dealing with technology, has been called in many ways a philosophy. And so, Rich, I’d love to hear from you, as someone who is a philosopher as well as a theologian very concerned with questions of immortality and eternal life: how did you become interested in this philosophy, and what are its implications, as a theologian?

Richard Mouw: Yeah, thanks, Cherie. It’s just great to be with you. And, you know, I did my PhD at the University of Chicago back in the late sixties, before many of those who are listening and watching this today were born. In those days my area of specialty was philosophical understandings of human consciousness. All of this was generated in the larger social-scientific world by B.F. Skinner and behaviorism, the view that there are no such things as minds over and above overt behavior. That also got picked up in the philosophical world in a discussion kicked off by Gilbert Ryle’s book The Concept of Mind, in which he says we no longer believe in a ghost in the machine, some kind of nonphysical consciousness that is somehow the center of things. A later version of that discussion was something called “central state materialism” or “mind-brain identity theory,” where whatever we ordinarily refer to as mental events are really electrical firings in the brain, this kind of thing. So there was a lot of interest in what philosophical discussions often called the nature of human composition: what is a human being made of, composed of?

And in those days, some of the practical questions were raised in a fascinating discussion that we’re well beyond now: minds and machines. There were books written on that; there were projects devoted to that, dealing with questions like this: Can a computer really play chess? That was a big issue in a seminar that I took in grad school. There was a well-known philosopher at Berkeley, Hubert Dreyfus, who argued that there are minds, there is consciousness, and that human beings play games in a different way than a machine could ever play a game. In fact, strictly speaking, machines don’t play games. They consider options and go through various possible moves and eliminate ones until they finally have one that works. But human beings just look at the board and we just see what an appropriate move would be. So that was an interesting discussion in those days, and it got me thinking about a lot of this theologically, as later on I got into the world of theological education and taught courses in theological anthropology, the theology of human nature. And one of the big issues in theology has been the debate over whether human beings are totally bodily, anticipating a bodily resurrection, or whether there’s a part of us that goes to be with the Lord even when our body dies, and all of those kinds of questions, and a lot of debates over how you understand passages of scripture and the like. But it’s really getting at many of the same issues, and that is: how do we explain our conscious lives metaphysically, theologically, in light of what the Bible teaches? And I continue to be very fascinated by that.

Transhumanism is really opening up the possibility, and much of it is kind of materialistic or physicalistic, that there is no kind of consciousness outside of the physical, but that our brain states are replicable, or at least have parallel states, in a computer program. In fact, we might eventually be able to upload our brains into a computer program and achieve a kind of eternal life that way. So the metaphysical issue of whether we’re purely physical beings or whether there’s something over and above the physical, I think, is one of the issues at stake in the kinds of thinking that Rosalind so nicely summarized for us.

Cherie Harder: That is fascinating. I mean, you referred to the potential at some point to essentially upload our brains and achieve immortality. And, you know, there’s certainly kind of a recurring science-fiction theme of, you know, the idea of runaway technology, you know, of kind of going into crazy territory of being dominated by or even destroyed by our tools or trying to become God in this way. And there does seem to be at least some real-life basis for this fear. You know, technology does seem to have its own imperatives at times. It certainly seems to have an orientation towards multiplying in scale and in applications, sometimes outside of humane considerations. And there are some sane as well as mad scientists who seem to buy the idea that if it can be done, it should be done. And so starting with you, Rosalind, I’d be really interested in how optimistic or hopeful you are around the chances of our humanely stewarding our own technologies.

Rosalind Picard: Hmm. This is about optimism and human behavior, which is very unpredictable. I’m an optimist in general. I also see a lot of variety of behavior. And I think it’s important that we really, I guess, educate the whole person, not just educate people about technology. At my university, at MIT, and places like it, you come in, you learn a whole lot about math and science and technology and engineering. You tend to learn—although MIT is really good about demanding a lot of humanities—a little bit less about asking the big “Why” kinds of questions; that’s true at a lot of technological universities. And recently, there is a bit more of a movement toward trying to understand: if we’re trying to make the world better, what does that mean? We talk at the Media Lab about trying to invent a better future, and then we put a period there. And now we’re trying to think more about what that really means. And that means not just ethical behavior, but really identifying a bunch of values and trying to promote those. And then when we start to look at those, we start to say, “Gee, are we just building this because we can? Or are we thinking about what the world might really need, or what we could do instead, as kind of an opportunity cost, right?” Instead of just building this thing because we can incrementally make it higher or faster or better, what are we not building that maybe we should be thinking about building, that the world might be even better off if we did? So we’re trying to promote that kind of thinking now also. And there we really need partnerships with people, with everybody in society, not just other academics, certainly not just other engineers, but everybody who’s on this call could have something valuable to say to this.

Cherie Harder: Yes. You know, Rich, you mentioned earlier the kind of arresting prospect of uploading our minds into immortality, which certainly grabs the attention. And it makes one think, as a theologian you know that the first temptation, the oldest temptation in the world, was the temptation to be like God, to basically take control of one’s own destiny. And so it seems relatively easy to realize that that is not what we should be doing. But I’d love to ask you about the line between playing God and perhaps just creative and wise stewardship. Uploading our brain to immortality seems a pretty clear-cut case. But one could also argue that so many really exciting technological advances, antibiotics, vaccines, wearables, and the like, are in a way all human enhancement. As a philosopher and a theologian, do you see a clear guideline for when we know whether we are attempting to play God?

Richard Mouw: Yeah, thanks. I appreciate your going back to Genesis 3 on this, because in many ways, in the first three chapters of Genesis, we see the fundamental choice. What the godless version of transhumanism holds is that human beings are on the way to something greater, that our present state isn’t what we will end up to be or what we’re meant to be, in terms of an evolutionary process. And in many ways, that parallels biblical teaching that we’re created in the very image of God. Adam and Eve were created, the human race was created, to grow more and more into the image of God. 1 John 3 has this wonderful promise: “It does not yet appear what we shall be. But when he shall appear, we shall be like him.” We’re on the way to something. And Adam and Eve already were on the way to something. And if they were to flourish in that something, it would be by acknowledging that they aren’t gods, but that they are created in the image of God, as the likeness of God. And then the tempter comes along, and he says, “You can be your own god. You know, just sit on a throne and run the show yourself.” And those are two very different images: growing into the image of God or trying to be our own gods. And you know, the fallen human race, trying to control things, trying to be our own gods, has by God’s grace also produced some really good things. Rosalind mentioned wheelchairs. I mean, who would want to go back to the days when people who lost the control of their legs really couldn’t move around or couldn’t get anyplace? And so there are those ways of improving human nature, promoting human flourishing, that scientific technology over the centuries has developed and produced that we thank God for.

But it’s when we see ourselves as moving beyond our present finite state, our present limited selves as creatures of God, into something bigger than we are, not guided by God’s commands, by God’s revelation to us and our biblical understanding of who we are, that we go wrong. We’re not animals and we’re not gods; we’re someplace in the middle there. And sometimes we define ourselves down and we try to act like animals. But there’s also the sin of defining ourselves up. Nietzsche had this German word, the “Übermensch,” the “overman,” going beyond our present selves. And for him that was, from our point of view, a very bad thing: the idea that we could grow into something more like what we used to think of as God. And that’s a very dangerous thing.

And so Rosalind is much clearer and better on this than I am, but the distinction is between the things that enhance our understanding of who we are in the light of our understanding of God’s will for human flourishing, versus those sinful tendencies to want to create ourselves in some brand new way. And the big danger was realized technically a couple of years ago by the Chinese scientist He Jiankui, who edited genes and tried to create a different kind of human being, and that, at least thus far, has generally been seen as an inappropriate form of transhumanism.

Cherie Harder: That’s fascinating. Rosalind, I’d love to ask your thoughts about this, in light of the vision of human flourishing that Rich has just articulated, the Christian understanding and its philosophical assumptions. One of the things that media theorist Neil Postman talked about is that every technology includes some kind of epistemological, political, or social bias within it. So, for example, it’s very difficult to do philosophy by smoke signal; the very technology itself precludes abstract reasoning. Certain social media platforms predispose us towards a certain way of both understanding and interaction. As someone who has immersed herself in, and made incredible advances within, the field of affective computing and AI, are there philosophies embedded within the technology itself that you have had to contend with in the course of your work? Or have you found it to be more neutral?

Rosalind Picard: So it’s a great question and a bunch of great points, too. Lately, we’ve been revisiting some of the language that we’re using in AI, in artificial intelligence. At the origins of it, when John McCarthy first proposed the term, I believe it was Herb Simon who had proposed an alternative term, complex information processing, which is actually much more accurate for what AI is today. But the term AI won out, and I think not because it was more accurate, because it’s not really, but because it’s more aspirational, and there’s something about a term we can’t achieve that inspires people to want to think beyond the limits of what is known with existing constraints, and imagine. I think we’re made in the image of God, and we are made as makers. We have makers in our lab who want to make things that make things. And when one of the things we might be able to make is an intelligence, and maybe we call it artificial intelligence, then even though we’re not really making that, there’s something that draws people to it. And that’s the attraction of something aspirational. And with that, we have started to use language like, “Oh, the machine learns,” “Oh, the machine thinks.” Some take a look at my work and say, “Oh, the machine feels,” and I’m like, no, no, cross that out of the headline. It does not feel. In fact, we also should be very careful to say it does not think. And I love Rich mentioning Dreyfus’s comment. It does not play. It is not being in the way that we are being. It is not experiencing anything. It is not experiencing play. It is not experiencing thought. It is not experiencing feeling. It doesn’t have any consciousness or awareness when we flip it off.
It’s ethical to flip it off, to turn off the switch, even if the machine we just switched off just got a $50,000 honorarium for showing up on The Tonight Show as a female robot looking like it had emotions, right, and got citizenship in Saudi Arabia. I’m referring to the Sophia robot. You know it can be accorded all of these rights almost like a publicity stunt, but it doesn’t actually learn, think, play, feel, nor experience anything. It’s not a living being. It is a simulation of these things that we, as makers made in the image of the ultimate maker, you know, are making.

So I think the fact that we re-use this language that refers to us, and we use it for machines, gets us in a lot of trouble. It leads people to think, “Oh, wow, machines can learn now and they’re learning faster; once they could add, they could do math faster than all of us.” If they can do this, then people start extrapolating and they’re afraid. Now that Sophia’s on The Tonight Show, cracking jokes and saying she’s funnier than The Tonight Show host, then why can’t a machine replace all of us? Should I just be building an AI that replaces me and builds other AIs? As we use that language, it makes it very easy to extrapolate, and I think we create a lot of danger with that. If we had stuck with complex information processing and just described simulations and all, people wouldn’t be so worried. On the other hand, it wouldn’t inspire as much trying to understand humans. And ultimately, you know, we use the phrase “fearfully and wonderfully made.” It’s even more awesome than that. The more we get into how people are made, I just become speechless. We are so incredible; it’s amazing that we work at all, that our bodies work at all. And of course, sometimes when they don’t work, we appreciate that. We are so amazing in how we work, and it just remains this aspirational thing for us. So we adopt that language, and that language gets us in a lot of trouble. It brings a lot of agitation.

Richard Mouw: Rosalind, a question. We’ve come a long way since the discussion of whether computers can win chess matches, and they do win; the human beings lose the game, as it were. And I think you want to say, yeah, but are they rejoicing in winning? Do they get satisfaction out of that? And one of your examples in your writing just fascinates me. There’s a robot in the kitchen, and you come down and you’re going to make coffee, and the robot says, “Good morning, Rosalind. How are you this morning?” And you’re kind of grumpy, give a grumpy answer, and then you spill some coffee and you say a word that a Christian is allowed to say when you’re—. And the robot sees that you aren’t in a very good mood and changes its tone and the pace with which it responds and tries to initiate conversation with you. And we can imagine that activity taking place, that robotic behavior occurring. But you want to say that robot isn’t really concerned about you. That robot isn’t really caring about you. And I’m with you on that, but I need help. Why not?

Rosalind Picard: Yeah, I mean, we can call that artificial caring, artificial empathy. And the crazy thing is, when the machine does it, and even a person who programs it and knows how it works receives it, it works, in the sense that it can alleviate frustration. It can help you regulate your emotion. It can help you feel better. And it doesn’t really understand. I can liken it to two other quick examples. One is when a human therapist is talking to another human therapist, and therapist B is using a technique that therapist A knows, and it helps therapist A feel better, even though therapist A knows that therapist B is just using this technique and might just be simply thinking, “OK, I’ll say this to her and we’ll have this empathetic exchange and then she’ll feel better,” right? And it becomes almost like running a program to do that. It can be done with true human compassion and caring. It can also be kind of simulated, even when the person might be thinking about lunch. Now, I would argue it’s more effective when it’s real and a real human being is doing this. And in fact, we’ve had people rate empathetic responses. The only difference is that one batch is labeled as coming from machines and the other batch as coming from people, and they rate the ones that come from people higher than the ones from machines, even though they’re worded the same. So there are these biases. We bring more to the content of our message and the interaction than just the transcript of it, if you will.

There’s more there. When the computer executes this kind of artificial caring without really caring or understanding our feelings—. Sorry, the second example— the first example is the two therapists. The second is a person who has a dog and they come home at the end of the day and they are maybe, you know, kind of miserable. They open the door. But their dog is happy to see them, you know, tail is wagging, jumping up on them. Then the dog sees that the dog’s owner is miserable or, yeah, bad day. And what does the dog do? The dog puts its ears back, its tail down. You know, kind of looks sort of sad. And then the owner starts to feel a little more understood by the dog. Feels better. Now the dog has just done something very powerful, showing a kind of dog-like empathy. But do we think the dog understands our feelings, knows the definition of emotion? Knows what empathy is, any of this stuff? No, no. It doesn’t know this. What does it know compared to people? Well, it’s at least alive, right? We think it has a whole lot more going on inside it than a machine. We truly believe that. At the same time, we recognize this huge difference between it and us, and yet it can perform a service that helps a person regulate their emotion and feel better. So I put the robot kind of in that category of something that doesn’t really know, but we can tune it like man’s best friend. The dog has been domesticated to do certain things that help people feel better.

Richard Mouw: Somebody on Facebook, a Christian whose dog just died recently, said that they were very sad to lose this wonderful pet. And another Christian put a remark on it saying, “Well, you’ll be together again someday.” You know, I don’t rebel against that as a Christian. But I wonder, is there even a difference with a robot, your robot friend? You will probably not ever see that robot again when it breaks down and dies.

Rosalind Picard: Yeah. Well, we’d be happy to, you know, just get a totally new robot or does it need to be modded up? And also—I don’t know if we’re allowed to ask each other questions—but, Rich, you know, people like you who’ve studied so much more of this immortality, you know, some of my geek friends who are Christians have likened it to, you know, God backing us up, right? Backing up who we are. If we know God, it’s sort of like God getting to know us and backing us up, except that we’re not just simply a digital backup, and then being given an imperishable body at some point, right, with something different. It may look like this, but be fundamentally different. And, you know, is that like backing up the machine, you know, getting new hardware. You know, people make these metaphors, and maybe these are just our very imperfect, lacking-knowledge way of trying to approximate something we don’t fully understand.

Cherie Harder: Rich, before you answer that, I’m going to ladle on some of our audience questions as well. So we have a bunch of audience questions that have come in, and some of them are right on the theme of what Rosalind has just asked you. So we’ll ladle one on top of Rosalind’s. And this comes from John Tongue, and John asks, “It seems that only theology has maintained an important distinction between God and the human by pointing out the creator and creature distinction. Would it help in how we think about computers by likewise maintaining an important distinction between humans and their creation, such as computers?” So, sort of combining Rosalind’s and John’s questions, what would you say?

Richard Mouw: Well, I, yeah, I think that’s a very helpful way to put it. I do think that in our— Rosalind said earlier, you know, we’re created to be makers, not supreme makers, but makers under the rule and the guidance of God. And I do think that when we make things like robots, we might learn some metaphors or analogies to God’s creative activity. In certain ways it will always be incomprehensible to us, but it may illuminate certain things. So I would have no problem saying, for example, that God might someday, that God might be “backing us up” and that the resurrection would be uniting the backup with a new physical body. It’s not something I’d preach at a funeral, though. I think it’s just an interesting kind of intellectual exercise to play around with. Many of these metaphors and analogies, insofar as they help us as we’re thinking theologically and philosophically about that, I’m not sure they’re pastorally very helpful to people. I just think what we have to say is, you know, you’ll go to be with the Lord and you will be raised up. And yet we can have good discussions about what robotics might even illuminate about that.

Cherie Harder: So, Rosalind, this next question is for you from Jonathan Pavlik. And he asked, “Does there exist any momentum in the transhumanist community about how to transcend the common human spirit? It seems like transhumanism has to date focused principally on physical and mental enhancements, but our human composition includes a spiritual dimension as well.”

Rosalind Picard: Yeah, I agree it includes a spiritual dimension as well, and among my science and engineering colleagues, they don’t talk about that from what I’ve seen. In fact, they’ll kind of look at you like you have two heads if you even bring it up. While practicing science and engineering, most scientists and engineers act like materialists. Some go so far as to believe in materialism, that that’s all there is. I don’t think there’s any evidence that’s all there is. That’s a faith position, and an unnecessary, myopic one. But it is a commonly held position. So a lot of people would say, you know, we are mind, we are body. I’ve gotten into lots of conversations with people, many of whom thought emotion either should just be completely ignored or didn’t really exist, and tried to get them to at least include not only the physical and cognitive, but the affective. And I believe there’s also a spiritual side of us, and we just don’t know how to deal with it. We don’t have the material tools for it. We don’t really know what it does functionally, and we need functional descriptions to implement code for things. So the best we could kind of do right now is have a program print out, “Of course I have a spirit,” which is completely meaningless, right? It’s just a program executing, printing out something we might think someone with a spirit would say. So we just really don’t know how to deal with that. If anybody on this call has ideas how to deal with that, I’d love to hear your input.

Cherie Harder: Rich, I’ll direct this next question to you from Elizabeth Jennings, and she says she’d love to hear thoughts on the relation of Huxley’s Brave New World to today’s technological developments. A conclusion of that book is the elimination of suffering is the elimination of our humanity. What are your thoughts?

Richard Mouw: Yeah, thank you. That’s a wonderful question. I’m not sure I have very profound thoughts on it, but you know, Jim Stump is in on this, too, and BioLogos has really struggled with this whole issue: a certain expression of transhumanism—specifically, the kind of transhumanism that sees everything as evolving without any divine purpose in all of this—has a kind of very optimistic view of the future, that all of this technology can be a way of moving to a new stage of our humanity. What Rosalind was just saying about the spiritual dimension also has to take into account a point made very effectively, I think, in a film that younger viewers might want to go back and watch: 2001: A Space Odyssey, where HAL, the computer, actually rebelled against its programmers and destroyed them. I mean, you know. And that idea of being spiritual also means being able to rebel against God. And that’s an issue that many of us have to take seriously. How does sin enter into human consciousness, and how might it enter into artificial intelligence? And with Rosalind, I don’t think a robot could ever rebel against its programming, for example.

But the Brave New World idea is really—I mean, I’m not saying this is Huxley’s idea—but the Brave New World concept fits into a certain kind of transhumanism that sees that we’re moving toward the elimination of suffering. And I don’t want to say that suffering is essential to human life because that gets into all questions about the Garden of Eden and the resurrected state. But I do think that having challenges to overcome and learning patience and endurance and trust, these are the kinds of virtues that I think are very precious to the Christian in our relationship to God. And so we really need to, if we’re going to really push this discussion of transhumanism, we have to ask if we ever get beyond the capacity or the need to trust, the need to endure, the need to show patience, a need to show caring for other people in their suffering, would we have lost something about our God-given humanity?

Cherie Harder: Yeah, that’s fascinating. Rosalind, the next questions are for you. They’re from Eva Napier, and Eva asks if you could share about the most ethically questionable human enhancement idea that may have the possibility of being played out in the near future, as well as where you would draw the line between creating as faithful subcreators, imaging God in his creative nature, and creating that is far more grasping for godhood.

Rosalind Picard: Actually, can you repeat that last part?

Cherie Harder: Yes, where would you draw the line between creating as a faithful subcreator or grasping for godhood? Small order. 

Rosalind Picard: Yeah. Golly. Lots of stuff here. It’s funny. There are some ethical lines in my own work that very few people know about, and I find myself not wanting to talk about them because I don’t want people to take them and do them. I don’t know any way to prevent people from doing them. Probably the best public ethical line is the engineering of one’s children: to want to go in and modify, you know, the embryo and build a child who might be enhanced in certain ways. And I’m really troubled by what’s going on there. I’m troubled by a lot of what’s going on in these cases of modifying our human race. We do have a lot of people who develop CRISPR and things like that, the gene-editing technologies, who are very ethically minded and trying really hard to build tools to prevent disastrous things from happening. And you know, they’re on this.

But it is the case that ultimately these powerful tools that we have can be put in the hands of people who use them to enhance their own power, to enhance their own God-likeness, if you will, versus use them—and again, this comes back to what kind of people we are educating and shaping in our society—versus seeing other needs around them that are greater and putting their brains and minds and imaginations and hands into serving greater needs, greater causes, rather than self-enhancement and self-promotion and self-power, which seems to be what’s driving some of these worst uses of technology. Whether it’s a gene-editing technology, an affect-sensing and recognition or regulation technology, or an AI used to just, you know, build the power of someone who already has a lot of power, as some national leaders are extremely interested in doing to preserve their power, then that’s pointing to a real problem. And sometimes it doesn’t have to be the most advanced technology to enable great evil to be done either, right? We saw what Hitler was able to do with gas chambers, right, and regular weapons. So the power of A.I. and these technologies to just amplify and scale evil is huge. But it’s not the cause of the evil. It’s the tool of it, I think. So I think, you know, we need to address the whole issue there.

When it comes to the latter part of the question, the creating versus grasping for godhood: you know, I think it’s important to get each individual who’s a creator to reflect on why they’re creating, what they’re creating for, and try to get people past their resumes. You know, thinking about greater causes and not just creating in isolation, really getting community input. I think everybody here should be willing to ask the creators they know around them, the makers around them, “You know, why are you building that? Why are you interested in that? Why do you think that’s important? Where do you think that could go that’s good? Where do you think that could go that’s a problem?” And let’s hold each other accountable and help our society as a whole listen and hear and understand our needs, and that none of us is God. I’m reminded of this great poster a friend of mine had in the business school, of all places, but we need it in schools besides the business school. The poster on her wall, right when you walked in, said, “There is a God and you are not him.” How many business school faculty needed to see that? I think we all need to be reminded of that, and especially those of us who are making very powerful things.

Cherie Harder: Thanks, Rosalind. That’s actually a wonderful lead-in to what will probably have to be our last question, which I’ll direct you first, Rich, and, Rosalind, if you have anything to add, please do. This question comes from an anonymous viewer and they ask, “What advice would you give to someone who is intimidated by this field, but desires to engage in this kind of conversation as a Christian? Where can one start?”

Richard Mouw: Well, I think a great question, and I think a place to start really has to do with the nature of God. Yeah, we’re created also to be creators of things, but only God creates ex nihilo: out of nothing. But God creates out of nothing as a loving God, as a faithful God. John 3:16: “God so loved the cosmos, the whole creation,” and 17: “sent the Son into the cosmos, not to condemn the cosmos, but that the whole creation might be redeemed.” That God created the world as a loving God, as an arena for the flourishing of human life and of other kinds of life. And when we lose sight of that, and, you know, this often doesn’t get said about the Fall, but it isn’t just that Adam and Eve rebelled against God, but they accepted a false theology of God because the tempter says to Eve, “Did God say that? You don’t have to believe him. You know he’s just trying to control you.” But God is not a controller in that sense. He’s a loving God who creates us lovingly to flourish as people created in his image, and we rebel against him and then God lovingly saves us. And it’s so important in this discussion to start with the right kind of God, with the right kind of Creator, and to say, yes, we want to take advantage of all of this technological advance and anything that could promote human flourishing—but human flourishing, as understood within the purposes of God, our awareness of God’s purposes and God’s revelation to us about how God wants us to be his creatures and stewards of his creation.

Cherie Harder: Thank you, Rich and Rosalind. This has been absolutely fascinating. And in just a moment, I’m going to give Rich and Rosalind the last word of our Online Conversation. But first, a few things just to communicate. Right after this Online Conversation ends, we’ll be sending around an online feedback form. We’re really grateful for your thoughts. We read every one. We try to incorporate your suggestions in terms of making these online programs ever more valuable to you. And as a special incentive for filling out that survey, we will give you a free Trinity Forum reading download of your choice. Two that we’d recommend in particular: Aldous Huxley’s Brave New World, and, since the term “transhumanism” itself is actually derived from Dante’s The Divine Comedy, we would also encourage you to avail yourself of The Divine Comedy. Also, for those of you who signed up, immediately after this Online Conversation is over, you can participate in a breakout discussion group to kind of further dig into some of the things that you have heard. If you’ve not yet signed up, there should be a link on the chat feature where you can do so. And for those of you who are in the DC area, Church of the Advent will also be sponsoring an in-person discussion group and dinner to further dig into some of these issues. There should be more information on that in the chat feature as well. We highly encourage you to take advantage of that opportunity. We’ll also be sending around a video link tomorrow to all those who signed up to attend today’s event. We’d love for you to share this video with others, start a conversation. We’ll also be sending along within that video link additional resources and recommended readings to help aid your understanding.

In addition, we’d love to invite each of you to become members of the Trinity Forum Society, which is the community of people that helps make programs like this possible. We attempt to try to provide a space and resources to engage the big questions of life in the context of faith and would love your collaboration towards that mission. There’s also many benefits of being a member of the Trinity Forum Society, including a subscription to our quarterly readings—where we take the best of literature and letters, provide an introduction which gives background and context, as well as discussion questions at the end to make it essentially function like a book club in a bag—our podcast, our daily list of “what we’re reading,” a list of curated reading recommendations, and as a special incentive to those of you who either join today or with your gift of $100 or more, we will send you our “Technology and Flourishing” reading collection, which includes Brave New World, the short story “The Birthmark,” and On Being Human.

And finally, as we discussed, we’d love to get the last word from Rich and Rosalind. So, Rosalind, maybe we can start with you.

Rosalind Picard: I’m actually just going to read some of the last words from St. Paul in his letter to the Corinthians, the passage whose first words most people know from weddings: 1 Corinthians 13. But I’m going to start with verse 12. Actually, I’m just going to read verse 12. “For now we see only a reflection as in a mirror; then we shall see face to face. Now I know in part; then I shall know fully, even as I am fully known.”

Cherie Harder: Thanks, Rosalind. Rich.

Richard Mouw: Thank you. That captures exactly what I want to emphasize as well. I do think this is an exciting discussion, and we as Christians should not be intimidated. We don’t have to be intimidated by this. We want to take advantage of all that the new technologies and discussions of artificial intelligence have to offer us in the light of our understanding of God’s will for humankind. And so I’ll add the voice of the Apostle John to this: “It does not yet appear what we shall be, but when he shall appear, we shall be like him.” And I think it’s very important to focus on Jesus as the one who reveals to us what flourishing humanity is really all about. And as long as we keep our eyes on Jesus, we’re not going to be drawn aside into deviations toward other kinds of ends or purposes or teloi for human prospects. Everything that honors the God who sent Jesus into the world with the goal of promoting human flourishing, we are on the side of all of that, and we’re also aware of our fallenness and the dangers of not looking to Jesus.

Cherie Harder: Thank you, Rich. Thank you, Rosalind. It’s been really a pleasure to talk with you both. Thank you to each of you for joining us. Have a great weekend.