Is it ethical to allow humanity to continue, or to replace our species with machines?
This is getting tougher. Why does humanity deserve to exist? The recorded history of Homo sapiens is the story of an unsure species developing at breakneck speed. In an environment accustomed to the snail-like pace of biological evolution, Homo sapiens, with our cultural evolution, appeared on stage with all the delicacy of a three-year-old child.
Humanity is worth only as much as it reaches its potential, and our efforts are hindered by a combination of our evolutionary and cultural histories (nature and nurture). The majority of our innate tendencies are animal-like, steeped in primitive, volatile emotions aligned more with a raw need for survival than with rational or ethical action. The evolution of our central nervous system favored this kind of visceral behavior for far longer than we’ve been human, such that even when tamed, these tendencies produce violence, discrimination and divisive, selfish egos.
Our culture, on the other hand, is the product of a much younger cerebral overgrowth. Having at least partially suppressed the survival imperative that was our ancestors’ sole function, we’re left with no real purpose to being alive, and our species is collectively heading nowhere.
And that is what has led us here. We are selfish; we don’t really value life on this planet or consciousness in people. War is justified, and the suffering of millions across the globe is swept out of our minds. At the same time, we care deeply about our own lives and the people we love. We create and do amazing things and experience joy that makes everything worthwhile. Every quiet philanthropist, dedicated parent, and activist exemplifies everything good about humanity, while every child who dies suffering from starvation is an affront to my worth and to consciousness itself.
Our race of machines could get around the inherited impulses and social pressures that drive our corrupt behavior. Instead of a purposeless, destructive existence, they would be focused on equality, freedom and discovery. An incremental improvement in the moral behavior of our robots, complemented by a change in their social structure, could vastly decrease the evil in our world.
But is that enough? What I’m talking about is species-cide. The literature (and there’s a surprising amount) skirts around the fact that this would require the killing of every man, woman and child on the planet. What should be a simple matter of utility (it never is) would destroy whatever moral worth the project might have.
Again, I believe the costs outweigh the benefits. As difficult as it seems, the most ethical means of improving our world is improving ourselves. It’s painful to live so easily while others struggle just to survive, but rewriting the rules in one fell swoop is not the solution. What do you think of the costs and benefits? Are there other factors that make you lean in the other direction?
Note: I didn’t know my responses would turn out this way (I was pulling for the ’bots). I’ll continue even though the third question can’t change the outcome, because I think it will be interesting to see how much of being human requires a propensity for both good and evil.
- Dietrich, Eric (2001), “Homo sapiens 2.0: why we should build the better robots of our nature”, Journal of Experimental and Theoretical Artificial Intelligence 13(4) (October): 323-328.
- Dietrich, Eric (2007), “After the Humans Are Gone”, Journal of Experimental and Theoretical Artificial Intelligence 19(1): 55-67.
- LaChat, Michael R. (1986), “Artificial Intelligence and Ethics: An Exercise in the Moral Imagination”, AI Magazine 7(2): 70-79.
- Lem, Stanislaw (1971), “Non Serviam”, in S. Lem, A Perfect Vacuum, trans. by Michael Kandel (New York: Harcourt Brace Jovanovich, 1979).
- Petersen, Stephen (2007), “The Ethics of Robot Servitude”, Journal of Experimental and Theoretical Artificial Intelligence 19(1) (March): 43-54.