The Ethics of AI: Part Three

Is it ethical (or possible) to constrain intelligent life?

This part of the argument concerns what we think it means to be human, and whether building those criteria into an AI, and adjusting them, affects what it is capable of.

A critical part of being human is having freedom: the freedom to choose one of many possible actions, some we would deem moral and others immoral. It is these choices, combined with environmental factors, that produce an individual. The existence of the individual is of critical importance because it is this variety of conditions and decisions that creates all of us: the paradigm-shifting geniuses, the mass murderers, and the utterly mundane.

[Image: Fritz Kahn’s “Man as Industrial Palace”]

One problem that arises immediately when discussing moral AI is whether that freedom, and the individuality it produces, remains. Perhaps latent in the term “robot” itself is the notion that constraining an intelligent machine so that it is incapable of acting immorally would restrict its freedom. A machine operating that way could not comprehend the gravity of making an immoral decision, and it would be difficult to differentiate it from other instances that (necessarily) operate in the same way.

However, this is not how I would expect to develop moral artificial intelligence. Rather than creating preprogrammed “moral drones” that are unconsciously restricted from acting immorally, I would create an AI that was aware of the full range of possible decisions but always (or at least usually) acted morally. The distinction is that our machine would want to act morally. By removing whatever evolutionary propensities for immoral behavior we carry, we could expect our machines to “think clearly”: to not only recognize the proper choice but to seek it willingly, since its consequences, however delayed, would be judged desirable. The moral machines needn’t be perfect either (that may well be impossible); even an incremental improvement would be worthwhile.
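To make that distinction concrete, here is a minimal sketch in Python. Everything in it (the action names, payoffs, and the aversion weight) is a hypothetical illustration rather than a real moral taxonomy: the “moral drone” has its immoral options filtered out before it ever deliberates, while the moral agent represents every option but is constituted to prefer the moral ones.

```python
# Hypothetical toy example: action names, payoffs, and the aversion
# weight are illustrative assumptions only.
ACTIONS = {
    "help_stranger":   {"moral": True,  "payoff": 1.0},
    "ignore_stranger": {"moral": True,  "payoff": 2.0},
    "rob_stranger":    {"moral": False, "payoff": 5.0},
}

def drone_choice(actions):
    """A 'moral drone': immoral options are removed before deliberation,
    so it never represents them, let alone rejects them."""
    permitted = {name: a for name, a in actions.items() if a["moral"]}
    return max(permitted, key=lambda name: permitted[name]["payoff"])

def moral_agent_choice(actions, aversion=10.0):
    """A moral agent: it weighs every option, but its utility function
    discounts immoral payoffs so heavily that it wants the moral one.
    The option set is untouched; only the desire is shaped."""
    def utility(name):
        action = actions[name]
        return action["payoff"] - (0.0 if action["moral"] else aversion)
    return max(actions, key=utility)

print(drone_choice(ACTIONS))        # ignore_stranger (robbery never seen)
print(moral_agent_choice(ACTIONS))  # ignore_stranger (robbery weighed, rejected)
```

Both pick the same action, but only the second does so freely: the immoral option was represented, evaluated, and rejected.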

I believe this type of moral AI would preserve individuality because it produces moral behavior not by forcing a particular decision, but by ensuring that moral behavior is always desired. It is not difficult to imagine a nearby possible universe whose inhabitants, through whatever tweaks of nature and nurture, evolved to emphasize equality and unity. Their slightly altered nervous systems would imperfectly prefer moral behavior, with corresponding changes to their emergent social structure (one perhaps untainted by discriminatory or violent tendencies). Easier still, imagine the person you wish you could be. My good doppelgänger, for example, is not as easily swayed by social pressure, and he does not spend his money frivolously when it would be better donated; while this makes him a different individual, he is still an individual.

Freedom is a necessary part of being human because it allows for individual decisions toward good or evil. But what about evil itself: is it, too, a necessary human component? Can we know what it means to be good (and make the necessary individuating choices) if there is no contrasting evil?

If we abolish even the conscious propensity for evil, we don’t necessarily lose the ability to recognize good. I don’t have to kill someone, for example, to learn that killing is wrong. And if evil must exist in some form, even a memory of it would suffice as a deterrent. Being human and being Homo sapiens are not the same thing; evil is a necessary component only of the latter.

It is obvious that free will alone does not constitute individuality; something valuable happens over the course of a conscious being’s life that transforms a cloned instance into an individual. Preserving this process in our AI would be essential to ensuring the same unique development that leads to both genius and the mundane (having hopefully eliminated the profane). The most reliable method for conserving these features is to do it the Homo sapiens way: instances should be unique and plastic. That is, every AI should be created with random variation on a general blueprint and should remain highly flexible, allowing for individual talents (and weaknesses) along with an ability to learn and develop over time.
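As a toy sketch of “unique and plastic” (again in Python, with trait names, variation, and plasticity values that are purely my own assumptions), each instance below is drawn with random variation from a shared blueprint and then reshaped by its experiences:

```python
import random

# A shared "general blueprint"; the traits and baselines are invented
# for illustration.
BLUEPRINT = {"curiosity": 0.5, "empathy": 0.8, "patience": 0.5}

def clamp(x):
    return min(1.0, max(0.0, x))

class Agent:
    def __init__(self, blueprint, variation=0.15):
        # Unique: every trait is randomly perturbed at creation, so no
        # two instances start identical despite the common design.
        self.traits = {name: clamp(base + random.gauss(0, variation))
                       for name, base in blueprint.items()}

    def experience(self, trait, outcome, plasticity=0.1):
        # Plastic: traits drift with lived outcomes, so similar starts
        # can still diverge into distinct individuals.
        self.traits[trait] = clamp(self.traits[trait] + plasticity * outcome)

a, b = Agent(BLUEPRINT), Agent(BLUEPRINT)
a.experience("curiosity", +1.0)  # a rewarding exploration
b.experience("curiosity", -1.0)  # a discouraging one
print(a.traits)
print(b.traits)  # same blueprint, two individuals
```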

How would we accomplish something like algorithmically improving the moral behavior of a machine? This is obviously speculative, but it is possible that amplifying the activity of mirror neurons could lead to more moral behavior. Mirror neurons, as their name suggests, reflect perceived behavior as neural activity in the perceiver; they are thought to be involved in language learning and perhaps empathy (Bråten, 2007). When you wince at the sight of someone in pain, for example, it is believed that your mirror neurons are firing a similarly uncomfortable pattern, possibly creating a need to help assuage their pain (and ultimately your own). A sufficiently strong mirror response would essentially implant the Golden Rule: any individual’s suffering would be distributed across the “species” and would produce a widespread effort to reduce it.
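As a hedged toy model of that proposal (the functions, costs, and gain values are illustrative assumptions, not neuroscience), one can treat the mirror response as a gain that reflects others’ pain back into the agent’s own felt suffering; turn the gain up and relieving another’s pain becomes the agent’s own preferred choice:

```python
def felt_suffering(own_pain, others_pain, mirror_gain):
    # Perceived pain in others is mirrored back as the agent's own,
    # scaled by mirror_gain.
    return own_pain + mirror_gain * sum(others_pain)

def best_action(others_pain, mirror_gain, help_cost=1.0, relief=0.5):
    # Option A: ignore the sufferers; costs nothing, their pain persists.
    ignore = felt_suffering(0.0, others_pain, mirror_gain)
    # Option B: help at a personal cost, reducing everyone else's pain.
    helped = [p * relief for p in others_pain]
    help_ = felt_suffering(help_cost, helped, mirror_gain)
    return "help" if help_ < ignore else "ignore"

pains = [3.0, 2.0]
print(best_action(pains, mirror_gain=0.1))  # weak mirroring  -> ignore
print(best_action(pains, mirror_gain=1.0))  # strong mirroring -> help
```

Amplifying the gain effectively distributes suffering across the group: the Golden Rule implemented as a parameter rather than a commandment.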

In this case, it seems that it is not only possible, but ethically advisable to create moral AI. We sacrifice nothing of what it means to be an individual, and can ensure the moral treatment of individuals in society.

Conclusion

This is where we stand: creating a race of artificially intelligent machines is not ethically permissible since using them as a means to an end (which violates Kant’s categorical imperative) does not afford them the respect they deserve as conscious beings. While it is at least theoretically possible to create AI that behaves more morally than us, the cost of the actual implementation of the project (the species-cide of humanity) is too high to justify. It seems there’s no easy way out.

  1. Bråten, Stein. (2007). On being moved: from mirror neurons to empathy. Philadelphia: John Benjamins Publishing Company.
