The Ethics of AI: Part Two

Is it ethical to allow humanity to continue, or to replace our species with machines?

This is getting tougher. Why does humanity deserve to exist? The recorded history of Homo sapiens is the development of an unsure species at breakneck speed. In an environment more accustomed to the snail-like pace of biological evolution, Homo sapiens with our cultural evolution appeared on stage with all the delicacy of a three-year-old child.

Humanity is worth only as much as it reaches its potential, and our efforts are hindered by a combination of our evolutionary and cultural histories (nature and nurture). The majority of our innate tendencies are animal-like, steeped in primitive, volatile emotions aligned more with a raw need for survival than with rational or ethical action. The evolution of our central nervous system has favored this type of visceral behavior for much longer than we’ve been human, such that even when tamed, these tendencies produce violence, discrimination, and divisive, selfish egos.

The Pioneer plaque: what we thought an alien race would need to know.

Our culture, on the other hand, is the product of a much younger cerebral overgrowth. Having at least partially suppressed the survival imperative that was our ancestors’ sole function, we’re left with no real purpose to being alive, and our species is collectively heading nowhere.

And that is what has led us here. We are selfish; we don’t really value life on this planet or consciousness in people. War is justified, and the suffering of millions across the globe is swept out of our minds. At the same time, we care deeply about our lives and the people we love. We create and do amazing things and experience joy that makes everything worthwhile. Every quiet philanthropist, dedicated parent, and activist exemplifies everything good about humanity, while every child who dies suffering from starvation is an affront to my worth and to consciousness itself.

Our race of machines could get around the inherited impulses and social pressures that force our corrupt behavior. Instead of a purposeless, destructive existence, they would be focused on equality, freedom and discovery. An incremental improvement in the moral behavior of our robots complemented by a change in their social structure could vastly decrease the evil in our world.

But is that enough? What I’m talking about is species-cide. The literature (and there’s a surprising amount of it) skirts around the fact that this would require killing every man, woman, and child on the planet. What should be a simple matter of utility (it never is) would destroy whatever moral worth the project might have.

Again, I believe the costs outweigh the benefits. As difficult as it seems, the most ethical means of improving our world is improving ourselves. It’s painful to live so easily while others struggle just to survive, but rewriting the rules in one fell swoop is not the solution. What do you think of the costs and benefits? Are there other factors that make you lean in the other direction?

Note: I didn’t know my responses would turn out this way (I was pulling for the ‘bots). I’ll continue even though the third question can’t change the outcome because I think it’ll be interesting to see how much of being human requires a propensity for both good and evil.

  1. Dietrich, Eric (2001), “Homo sapiens 2.0: why we should build the better robots of our nature”, Journal of Experimental and Theoretical Artificial Intelligence 13(4) (October): 323-328.
  2. Dietrich, Eric (2007), “After the Humans Are Gone”, Journal of Experimental and Theoretical Artificial Intelligence 19(1): 55-67.
  3. LaChat, Michael R. (1986), “Artificial Intelligence and Ethics: An Exercise in the Moral Imagination”, AI Magazine 7(2): 70-79.
  4. Lem, Stanislaw (1971), “Non Serviam”, in S. Lem, A Perfect Vacuum, trans. by Michael Kandel (New York: Harcourt Brace Jovanovich, 1979).
  5. Petersen, Stephen (2007), “The Ethics of Robot Servitude”, Journal of Experimental and Theoretical Artificial Intelligence 19(1) (March): 43-54.
  1. Great series, I love these ethical experiments.

    I do have one point to contend. If, as you say, it is unethical to make robots, you take away one of the more promising fields for understanding consciousness. We can’t really know how far we’ve gotten when we make artificial intelligence, so you’d probably have to stop research early to be sure.

    Tough questions you’re tackling here; just had this thought.

    W. Hardy

    Apr 19, 04:54 PM #

  2. W. Hardy:

    That’s a very good point. Trying to replicate the features of consciousness is probably the best way of understanding it. But as we get closer to creating intelligence (and understanding consciousness), we get closer to experimenting with a living being.

    I hate to say we should put an embargo on research, particularly in a field so promising. The alternative, however, is also undesirable, and while the effects of the former are rather generalized, stumbling upon a conscious being in the course of research would harm a specific individual.

    Again, good point; I’m really not sure which is worse.

    Thame

    Apr 19, 10:52 PM #

  3. I agree that we shouldn’t create consciousness merely as a tool to clean up our mess.

    But how about for its own sake?

    Why do we have children? Using your reasoning, we should never have children.

    Think the analogy through before dismissing it. It is not really an analogy – it is precisely the same issue.

    Alex Houseman

    May 2, 12:07 AM #

  4. Alex Houseman: Why do we have children? Using your reasoning, we should never have children.

    Think the analogy through before dismissing it. It is not really an analogy – it is precisely the same issue.

    Absolutely, that’s a great point.

    Here’s where I stand. I addressed part of your question in part one:

    Is consciousness created within a synthetic machine significantly different from the same through natural means? From a theologically neutral standpoint, it seems not. Consciousness is consciousness, and the ethical implications of its creation are not tied to what it’s in.

    However, that is only if we have a fully developed “blueprint” for creating artificial intelligence. What is more likely is that AI research would progress slowly toward its goal of personally intelligent machines. As researchers near that goal, they would begin creating, revising, and destroying primitive forms of consciousness until they were eventually experimenting with the conscious equivalents of primates and even humans.

    So, while I do think creating artificial and “natural” intelligence are equivalent in their ethical implications, it’s the experimental process of reaching personally intelligent machines (and the risk of harming conscious beings) that I believe we should avoid.

    Thame

    May 2, 11:38 AM #

  5. Things to consider:

    • The probability that our species, like all others, will eventually become extinct is almost 100 percent. The only question is when and how, not if.
    • That doesn’t mean that the accumulated “knowledge” of the human version of DNA won’t live on. But it’s a long shot. What is human is only an expression of DNA in a given niche.
    • It is inconceivable that humans could create AI without replicating our own worst (and best) faculties. Why are there computer and internet viruses?
    • The Cherokee myth is a good tale for individual moral decisions, but in the aggregate both wolves are getting fed, always. http://www.firstpeople.us/FP-Html-Legends/TwoWolves-Cherokee.html

    john

    May 7, 05:39 PM #

  6. Hi John, thanks for the comment. I loved that Cherokee tale; it’s really cool when everything we’re talking about can be summed up in a great story.

    The probability that our species, like all others, will eventually become extinct is almost 100 percent. The only question is when and how, not if.

    I agree; knowing the inevitability of our eventual extinction isn’t very encouraging. But it isn’t as much about beating those impossible odds as it is about extending the length and “quality” of our stay.

    That doesn’t mean that the accumulated “knowledge” of the human version of DNA won’t live on. But it’s a long shot. What is human is only an expression of DNA in a given niche.

    Why bother “extending our stay”? Because, to me, the important thing about being human (the critical part of our accumulated genetic knowledge) is our consciousness (and brain). Being human is just an incidental implementation of this other, much cooler thing (consciousness), and it would be worse than careless to let it blink out after all the time that went into producing and perfecting it.

    The legacy of our DNA is that it contains a blueprint for producing consciousness; what it is to be human is distinct from the genome of Homo sapiens.

    It is inconceivable that humans could create AI without replicating our own worst (and best) faculties. Why are there computer and internet viruses?

    I think part three will help a bit with this because it specifically explores good and evil and their role in making us who we are. But as a quick example, imagine if I created a very close artificial replica of myself whose brain was adjusted slightly to reduce the influence of the structures contributing to my immoral behavior. My good doppelgänger wouldn’t be as influenced by social pressures, he wouldn’t spend his money frivolously when it would be better donated, etc.

    I don’t think we have to go all the way from us to perfectly moral robots; an incremental improvement, however, seems conceivable at least.

    Thanks for the excellent comment!

    Thame

    May 8, 09:55 AM #

    Solving the problem by carrying out the worst form of what we consider evil (killing everyone because some humans allow others to die needlessly) seems ironically problematic. I wonder if we ought to study instead our response to fear and loss. We as a species have only had a few thousand years to deal with a social scale on an order of magnitude unprecedented in primate history. No other species must coordinate social hierarchy and complexity by the millions and billions (no, ants and wasps and termites don’t have to deal with multinational corporate negotiations, elections, hundreds of languages and cultures, monetary systems, economic models, contract law, nuclear warheads).

    But just as our major advantage is learning rather than instinct or claws or horns or speed, becoming aware of how we operate will be a start. We hate losing progress, we become habituated to the advances of our lifestyle, and when we have anxiety we seek familiarity, control, and elimination of perceived threat. Right off, there are probably a dozen places to intervene. The issue is the coordination required: if only some of us back off from our goals, others may surge forward to take our place and no legitimate progress is made (global warming, for instance). Robots to me seem to play no part in this, precisely because their calculations are either based on our own calculations, or else do not take into account the subjectivity of an actual human life.

    Carolyn

    Jul 2, 09:25 AM #
