The Ethics of AI: Part One

Is it ethical to create consciousness?

In this discussion, I will assume that we can be assured these beings are “personally” intelligent (i.e., just like us). As we see in nature, there is no abrupt line at which we can begin to call a being conscious, but at some distant point along this gradient the conclusion becomes undeniable. I am assuming the AI is farther along this gradient than we are and that its consciousness is assured (save for “mere” philosophical skepticism).

Is consciousness created within a synthetic machine significantly different from consciousness created through natural means? From a theologically neutral standpoint, it seems not. Consciousness is consciousness, and the ethical implications of its creation are not tied to the substrate in which it arises. Further, the definition of “natural” means is changing rapidly. Today, a prospective parent can create a child from eggs and sperm selected for various features of their donors, fertilized in vitro and gestated by a surrogate mother.

The current task, however, requires the creation of a race of conscious beings for a specific purpose (cleaning up our mess). This raises a different concern, as it would involve using conscious beings merely as means and not as ends in themselves. This is a clear violation of Kant’s categorical imperative (the humanity formulation), perhaps one of the simplest and least controversial universal moral rules.

This, I believe, is the most significant reason to abandon the project altogether. The generation of our own consciousness is “excusable” in virtue of its randomness. We, on the other hand, would be using (worse yet, creating) beings as a means to our end. Even so, it is not entirely clear that our discussion should end there. For one, it could be argued that we are creating intrinsically more moral beings whose actions could not help but produce the outcome we desire. Also, were it not logically impossible, we might imagine obtaining the consent of such beings before their creation. Would an intelligent being who is better than us not agree to the opportunity to reduce the evil in our world?

Ultimately, we cannot have access to our creations’ experiences. Without being able to ensure their quality of life, we run the risk of “playing god” and losing: of creating beings that would rather not have been created.

  1. I haven’t really chewed this over enough to make too meaty a statement, but I wonder if the framing isn’t crippled by something similar to the anthropic principle. That is, why is the creation of progeny under the influence of gene-coded reproductive behavior substantially different from creating progeny, or a simulacrum thereof, by conscious choice? That’s a messy framing of my own: if, for instance, I think consciousness is an emergent property or family of properties of the brain, and the brain is the result of gene-coded development, then our conscious choices owe genetics their ultimate pedigree just as much as our instinct for procreation does.

    Not a criticism; just lobbing something into the fray

    Daniel Black

    Apr 13, 09:45 PM #

  2. Daniel Black: That is, why is the creation of progeny under the influence of gene-coded reproductive behavior substantially different from creating progeny, or a simulacrum thereof, by conscious choice? […] If, for instance, I think consciousness is an emergent property or family of properties of the brain, and the brain is the result of gene-coded development, then our conscious choices owe genetics their ultimate pedigree just as much as our instinct for procreation does.

    I think I understand what you mean, but let me know if this doesn’t answer your question. I don’t think the implications of creating artificial intelligence are at all different from those of natural procreation. But this is because I (perhaps wrongly) assume genetically programmed reproductive behaviors are, if not altogether weak, “controllable” in us. We may have a natural drive to procreate, but it’s possible for a couple to plan when they have a child. When this couple makes the choice to create life, whether it is their biological child or an AI child makes no difference (and we can be critical of this choice).

    I don’t think there’s as big an issue with creating single instances of AI the way we would create our children (and the same genetic impulses would apply). The problem comes when I decide to be god to a bunch of them and set them on a task.

    Thame

    Apr 14, 10:29 AM #

  3. Excellent point (that I’m finally coming back ’round to read). I’m really intrigued by a point you slid into your piece that I hadn’t really assimilated: your notion of asking permission before creating the AI. Perhaps, though, we could approximate this idea.

    Consider that the “I” in “AI” doesn’t require installation into a particular chassis, but rather is an algorithm that arguably can be stored in plaintext (at least its seed). Even allowing for the seeding and development of an AI, such that it endows itself (I suppose we’d say “emergently,” for whatever value that word has) with characteristics that don’t deterministically arise from our plaintext code, the AI needn’t occupy a particularly useful chassis, just RAM somewhere. I’m imagining a Dickian “well of souls” where we allow variously coded AI constructs to develop, and then recruit/conscript some for the types of projects you’re talking about here. While they can’t have consented to their creation, they can consent to their implementation.

    Daniel Black

    Jul 8, 01:54 PM #

  4. I think that’s a good middle ground. I don’t know what it’s like to exist (if even that much can be said) in AI limbo, but being able to consent to specific “jobs” would solve the most difficult problem of use.

    Thame

    Jul 31, 04:53 PM #

  5. What the CompSci professor said in his interview with you is fascinating. I am unclear how one gets around the problem of programming a computer to become aware without it just following more orders. Is there a need for free will and subjectivity in order for there to be a fully functioning consciousness? For instance, I care what happens to me because it affects my well-being, in which I am deeply invested. Does it really make sense to program into a computer a sense of self-preservation? At some point, we feel compelled to create a ‘kill switch’ to prevent the AI from prioritizing its own existence above our own (‘do no harm to humans’). We like control, and the AI is far more likely to be developed because it serves us rather than merely out of curiosity or altruism.

    A separate but important matter is whether we would or could create enough complexity that its own subjectivity would arise from its circuitry/artificial neurons, enough that it could rid itself of those controls we create. This is similar to what you state you believe true human consciousness to be.

    Further, if we were to write the program, or choose what the intelligence is exposed to as it learns, we are still the limited creatures providing the input on which its supposedly superior moral intelligence would be based. How would we (or it) know that it is being trained properly or has attained sufficient abilities? We are the ones who supply the nuance and the judgment and the (perhaps oversimplistic) directives.

    Finally, the Brown Principle (I just made that up). We may show it many examples of what we consider to be compassion or whatever we deem most worthy, but we’re choosing those examples across time and culture and circumstance. Who is to say that we know how to program an AI to ‘learn’ something beyond averages, to produce a rainbow or a kaleidoscope rather than just come up with ‘brown’?

    Carolyn

    Jul 2, 04:55 AM #
