Braden Allenby, Nicholas Agar, and I have squared off (hexagoned? how does one extend the analogy to a three-person debate?) for a discussion of transhumanism’s ethical implications. The conversation was one of the most polite and enlightening I have had in a long time. I really must thank Slate’s Future Tense for organizing it. Considering that Nick, Brad, and I disagree, it’s amazing how much consensus we found.
My investigation centers on this point made by Agar:
The argument against transhumanism is that limits and restrictions create meaning but are also liberating. Nick, you argue that those with extended life spans would become more fearful of death, because they would have so much more to lose. As a result, people would opt for safe but shallow digital experiences, leading to long, ultimately empty lives. You also argue that those who would be enhanced beyond human capacity would no longer be demonstrably human. Transhumans would be capable of experiences far beyond those of the average human, such that to describe those who are enhanced as human would be a misnomer. The threat there is that those who are human would feel pressure to no longer be human—and, in the end, society may no longer value humanness itself. In short, the loss of human biological limits is the loss of our humanity.
Allenby replies to my rather pro-transhumanist arguments with this rejoinder urging caution:
Two things, I think. First, technological change is far more rapid and pervasive than it has been in the past. We’ve always had technologies that restructured society, culture, economies, and psychology—the steam engine did, railroads did, cars did, airplanes did, and search engines that increasingly substitute for memory do. But depending on how you count, we have five foundational technologies now—nanotech, biotech, robotics, information and communication tech, and applied cognitive science—all of which are not only evolving in interesting and unpredictable ways; they are actually accelerating in their evolution. Moreover, they’re doing that against the backdrop of a world in which systems we’ve always framed as “natural”—the climate, the nitrogen and phosphorous cycles, biology and biodiversity, and others—are increasingly products of human intervention, intentional or not. We are terraforming everything, from our planet to one another … and it’s all connected, of course.
Critically for our purposes, the human being is more and more becoming a design space. Per our ape friend above, this is not new. But the rate of change and concomitant (and mainly still potential) psychological, cultural, and social dislocations are. So in some ways transhumanism is not, as we’d like to frame it, about us; it’s about reality as we know it becoming our design space—including, and especially, that part of reality that we have heretofore ring-fenced as “human.” What we are really seeing is the human equivalent of the Great Divergence, that period in history when economic development touched the lives of some cultures and led them to exponential economic growth, while others remained behind. I’m not saying this is a good thing. But it is where we’re headed. Already, people in developed countries live almost twice as long as others, similarly human, in some developing countries. That’s a pretty profound divergence, and we seem to accept that en passant.