Philosophy

When the Enterprise Got a Soul

Author: Greg Koukl. Published on 04/24/2013.

Greg challenges a misinterpretation of what it means to have a soul that occurred in an episode of “Star Trek: The Next Generation.”

I don’t really watch Star Trek, and to be honest with you, I’m not quite sure what the big appeal is. I guess you either get it or you don’t.

Now, our friend Frank Beckwith is much more committed to the whole process. He knows all the stories and works some of them into his own talks. It’s interesting—as I hear some of the comments made and some of the plots developed—that issues are touched upon that border on being profound. That is, the issue is profound, even though how they treat it may not be.

One issue I think is profound is the question of consciousness, and what kind of value consciousness has. Apparently in one particular episode of “Star Trek: The Next Generation,” the Enterprise (the machine that flies the crew) begins to develop consciousness; it’s developing a mind. And so there are reflections on board. Consciousness is viewed in this context as an emergent property, a property that emerges from a certain organization of matter.

If you’ve read some of the recent treatments of brain function and neurophysiology, of course, this is the point of view. A year ago July, Time Magazine, in an examination of what consciousness is, said that the closest it could come to an answer is that consciousness is an emergent property.

It’s kind of like when you put a lot of H2O molecules together, you get water. You put your hand in a bunch of H2O and you feel something; what you feel is wetness. Wetness is a property of water. It’s a property that manifests itself when hydrogen and oxygen are bonded together in a certain configuration and grouped together en masse. And so according to this view, consciousness would be a result—a mysterious result, but a result—of the organization of matter.

Now, I put it that way because I want you to notice that, when we view consciousness as an emergent property—which, as I understand it, is the most popular way of looking at it—we’re looking at the whole picture from the bottom up. What I mean is, on the simplest level—the bottom—we have molecules and atoms and subatomic particles moving about and connecting in such a way as to manifest certain qualities or properties on a higher level. So whatever is manifested on the top is a result of the configuration of atomic particles on the bottom. You put all these particles together and it results in these properties on top. We look at the visible properties and ask what causes them, and we go back down and look at molecular structures and details and atoms and nuclear particles and that kind of thing.

The main point in that little analysis is for you to understand that properties that are emergent are effects and not causes. Very important. It isn’t that wetness causes the water or does anything to the molecules. It’s the molecules themselves configuring in a certain way that produce, as a result or effect, the property of wetness.

So the idea goes, this is the way consciousness appears in human beings. This approach, by the way, treats human beings as principally physical things. We are a physical body, a brain and central nervous system, and that’s what we are. As Time Magazine put it, there’s no “you” inside there, inside that body directing the process, causing anything to happen; no immaterial homunculus that sits on the throne of your life, as it were, and directs the processes. No, that’s just a leftover from religious superstition, because science has never found such a thing. There’s no place in the brain for it to fit. Those are almost their exact words. Therefore, science has concluded that such a thing doesn’t exist.

That’s a foolish way of thinking. I’ve dealt with it in the past and I’m not going to approach it again right now.

But let’s just take that point of view at face value, that the physical organism functioning in a certain way produces a property called consciousness. This is what happened in the “Star Trek” episode.

The problem, of course, is that those on the Enterprise have a prime directive, and that is they are to respect all sentient life, all conscious beings. Now, what happens when the ship that carries them about—and is a tool for them to use in their endeavors and goals and purposes—turns out to develop sentience itself?

All of a sudden they’re not dealing with an inanimate machine which they can simply use, but with something like a person, which can’t be used in the same way. So they confront an ethical dilemma, which is the dilemma of this particular story.

I want to tell you why this thinking doesn’t work. What happens when the Enterprise gets a soul? How do you treat it? That’s the ethical problem they dealt with in this episode. The particular thing they were concerned with was that, in this case, they felt they needed to respect the autonomy of this consciousness and also respect, as a result, the choices it made. Apparently, they had to back off the controls. The machine which developed a soul had some goals of its own and had the Enterprise crew warping all over the universe, gathering up odds and ends so it could create a different life form, which eventually left the ship along with the ship’s consciousness.

And that was that. It was back to a machine again. Solved that problem. It would be hard to have to fly a ship around in deep space when it had a mind of its own which you couldn’t violate.

This raises a couple of questions for me. The first question is about their point of view that the ship has a consciousness which they now must respect, and not violate its autonomy. They must be respectful of its own choices. My first question is, why? What moral obligation compels us to respect the autonomy of all sentient creatures?

In fact, there are other sentient, conscious life forms on the ship—the crew for example—and, I imagine, a bunch of other things, maybe some ship rats or some cockroaches or something like that. I imagine there are other sentient creatures there. Does the prime directive apply to them, too? Do we not squash the cockroach or control the rats because, after all, the rat did desire to get into the food stores? Does it have an autonomy that has to be respected?

That’s a minor issue, really, but it did pop up. But what about the humans on that ship? Don’t they have purposes and desires as well? And why is it that the ship’s desires must now trump the humans’ desires? This is a problem with a prime directive that says you have to respect the autonomy of all sentient creatures. What happens when the desires of sentient creatures—their autonomous goals and objectives—are in conflict? How do you adjudicate between those conflicts?

Why consciousness deserves rights at all is another thing. Where do rights come from? It’s almost as if, just because there’s this fact of consciousness, all kinds of privileges and entitlements accrue to it. Why do privileges and entitlements accrue to consciousness? Why does it have rights?

This is somewhat in keeping with the present mentality that you can pluck rights out of thin air. People do that. I have a right to do this; I have a right to do that. I have an inalienable right to breast-feed a baby in public, many women say. I have a right to be a homosexual. Animals have a right to live.

My very first question to anyone who makes this statement is, what is a right? You’ll usually get silence when you ask such a question. Because these people who are fabricating rights out of thin air—rights that result in obligations on your part, by the way—have never given any thought to what a right is in the first place and what justifies their claim to having a particular right. Rights are just claims to things. They’re a claim to something that justice gives you.

Now, you can’t have justice without a law. And so you can’t have rights without higher laws that establish this entitlement. But if there is no God, it’s hard to imagine how one might have a higher law. Why can’t I just make up a right? I have a right to receive half of your income for Stand To Reason. Now, how is my rights claim any less legitimate than whatever rights claim you happen to be making?

So that’s another problem. How do you justify the rights? It sure isn’t enough to just manufacture them, because if we can do that, I’ll manufacture all kinds of rights of which I am the beneficiary, and the obligation is upon you to fulfill my entitlements.

But there’s another even greater problem that has more practical application to the mind/body problem: the problem of the soul. The understanding about consciousness in the case of the Enterprise was that its consciousness arose from the material structure of the thing as an emergent property. Remember our earlier discussion? If that’s the case, then consciousness is on the top, not on the bottom. Therefore it’s an effect, not a cause. Consciousness doesn’t cause anything; it’s simply the effect of something. Wetness doesn’t change the water; it’s the property that results as an effect when hydrogen and oxygen come together.

On this view of things, the human physical body operates in such a way as to produce, as the effect, consciousness. And in the case of “Star Trek” here in this episode, the Enterprise was just one of those physical things that produced consciousness. Now, if it’s producing consciousness, then consciousness is the effect. And if it’s the effect, then it doesn’t cause anything. If it doesn’t cause anything, it can’t decide anything. If it can’t decide anything, it can’t choose anything. If it can’t choose anything, it doesn’t make any sense to say that it has a right to autonomous choices.

Here’s the problem with this way of looking at things, the problem with what might be called property dualism: the idea that some physical things have two different types of properties, physical and mental. Notice that this point of view doesn’t leave room for a soul that is a thing in itself with mental properties; on this view, mental properties are merely effects of the physical thing itself.

The problem is that, on this view, mental properties are the results of physical processes. They never cause anything. Now, the problem with that is, it seems that mental states do cause things. It seems that we do make choices. It seems that we do think about things and adjudicate between good ideas and bad ideas and, therefore, exercise rationality.

You see, this bottom-up approach (which treats consciousness merely as a property) results in determinism, meaning that physical systems drive everything and produce consciousness. But all physical systems ultimately are determined.

You say: Here’s a feeling I’m having. What caused that feeling? Well, this configuration of electromagnetic impulses (or whatever) going through your system. C-fibers firing. And you say: What caused that? Well, some kind of chemistry in the body. Well, what caused that? Some kind of physical stimulus to the end of the finger or something like that, which ends up in the feeling of pain, but it’s some other thing that caused that.

And you see you can keep going back further and further and further, and there’s always going to be some prior cause, because all purely physical systems are always the result of some prior necessary and sufficient physical condition that causes the subsequent condition.

Physical systems are always deterministic. This is why science works. We look at purely physical systems and we know that we can set up the same physical conditions and get the same result, because the circumstances determine the result.

But what does that make human beings then, if we’re viewed simply as physical systems? If our consciousness is only an emergent property—it’s only a result—then it’s on the top. It doesn’t affect anything. It’s just riding along.

And if it doesn’t affect anything, then I don’t choose. What appear to be choices are merely the results of chemical reactions that happen on their own in a deterministic way in my physical body.

And if I don’t choose anything, not really, then it doesn’t make any sense to talk about my rights to autonomy as if I had choices. I’m not capable of choosing at all.

And indeed, if we’re not capable of choosing at all, then we’re not capable of choosing a good idea over a bad idea. Which means there is no rationality, because rationality means that we adjudicate between ideas and choose the one that makes the most sense, given the evidence. But if there’s no choosing, then there’s no rationality either.

So what this really does is make us all into just machines. Indeed, we start as machines, we end as machines. The only difference is, this machine has this property floating around on the surface of it that doesn’t cause anything. And if we’re machines, then choice makes no sense...then morality makes no sense...then rationality makes no sense.

And so, if the Enterprise got itself a soul, and that soul is just consciousness as an emergent property, then who cares what it wants? Indeed, even its wants are determined by its machinery.