Can Computers Become Conscious?
Debby Urken

At first glance, computers of the 1990s seem remarkably human. They can speak, play chess, serve coffee, make cars, mix chemicals; computers can even search the surface of Mars and operate on humans. But anyone who has ever interacted with a machine will admit that while these supercomputers are surely impressive, they lack those qualities which make humans unique.

Computers most often have trouble with tasks that involve common sense. Imagine, for instance, a robot trying to cross a busy street. Assume it can sense all of the environmental stimuli that humans can and that it has been programmed to avoid moving objects of significant size, so that it does not get hit by cars. The robot might successfully avoid the traffic, but what if there is a different sort of obstacle, such as a construction area, which gets in the robot's way and causes it to stumble and fall?

The problem is that the computer cannot construct an orderly picture of the world from all the sensory input it receives. It has no way of knowing how to respond differently to different sorts of stimuli and therefore cannot "understand".
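A minimal sketch of that failure mode, using a hypothetical rule-based avoider (the rule, objects, and thresholds are invented for illustration): the program encodes only the rule it was given, so anything the rule does not mention is simply invisible to it.

```python
# Illustrative sketch (not from the article): a robot whose only crossing rule is
# "avoid moving objects of significant size". The rule handles cars, but a
# stationary construction barrier never triggers it, so the robot stumbles into it.

from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str
    size: float    # metres across
    speed: float   # metres per second

def should_avoid(obstacle: Obstacle) -> bool:
    """The robot's literal, programmed rule: avoid large *moving* objects."""
    return obstacle.size > 1.0 and obstacle.speed > 0.5

street = [
    Obstacle("car", size=4.0, speed=12.0),                   # large and moving: avoided
    Obstacle("construction barrier", size=2.0, speed=0.0),   # large but static: ignored
]

for ob in street:
    print(ob.kind, "->", "avoid" if should_avoid(ob) else "ignore")
```

The barrier is ignored not because the robot misjudges it, but because nothing in its rule connects a large, stationary object in its path with danger; that connection is exactly the common-sense picture of the world it lacks.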

As a result, machines remain hopelessly unaware. Unlike humans, they have no consciousness. Today, many people wonder how human-like the computers of the future will become. Can computers ever become fully intelligent, or are they destined to remain mere number crunchers?

Deep Blue

Take, for instance, the case of Deep Blue, an IBM supercomputer. Last spring, Deep Blue's defeat of world chess champion Garry Kasparov raised many questions about the nature of the computer's intelligence. Although Deep Blue's skills were surely impressive, most agreed that its computational powers did not demonstrate human intelligence. Even though the algorithms Deep Blue used in the match resembled Kasparov's thought processes, Deep Blue clearly lacked the self-awareness that Kasparov possessed.

Some experts have suggested, however, that computers like Deep Blue might eventually become self-aware with the addition of new kinds of programs. George Johnson, journalist for The New York Times and author of several books about the mind, describes a potential program that might simulate one way humans identify and respond to anxiety. In Johnson's design, an anxiety meter installed in a computer like Deep Blue could make readings based on how far its opponent is ahead. If the reading on the meter was high, indicating that the computer was losing, the meter might suggest that the computer think about the immediate situation. Conversely, if the meter reading was low, the computer could take the time to ponder future moves.
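A minimal sketch of Johnson's suggestion, assuming a hypothetical chess engine in which the meter reads how far the opponent is ahead in material and adjusts how the engine spends its thinking (the scale, thresholds, and search depths here are invented for illustration, not Johnson's actual design):

```python
# Hypothetical "anxiety meter" for a chess engine, illustrating Johnson's idea.
# The scoring scale, thresholds, and depths below are illustrative assumptions.

def anxiety_level(own_material: int, opponent_material: int) -> float:
    """Return a 0..1 reading: higher means the opponent is further ahead."""
    deficit = opponent_material - own_material
    return max(0.0, min(1.0, deficit / 10.0))   # scale: roughly ten pawns of material

def choose_search_depth(own_material: int, opponent_material: int) -> int:
    """High anxiety -> concentrate on the immediate situation (shallow, tactical
    search); low anxiety -> take the time to ponder future moves (deeper search)."""
    reading = anxiety_level(own_material, opponent_material)
    if reading > 0.5:      # losing badly: focus on immediate threats
        return 2
    elif reading > 0.2:    # slightly behind: moderate lookahead
        return 4
    else:                  # comfortable: plan further ahead
        return 8

# Example: the engine is three pawns' worth of material behind.
print(choose_search_depth(own_material=36, opponent_material=39))   # -> 4
```

Nothing in this mapping feels tense, of course; it merely converts one number into another, which is the limitation raised next.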

This meter might simulate the process by which a person's stomach secretes acid when he gets tense during a game; the acid directs the nerves to relay the message to the brain that he should concentrate his attention more on the immediate match. In this way, Johnson suggests, a computer could become aware. It is doubtful, however, that this simple mechanism, even if extended to encompass the full gamut of environmental stimuli a computer might encounter, could successfully emulate human consciousness. Even if its meters could interact and form a unified picture of the world, the computer would not act out of understanding but would only behave in an automated way. For example, a computer that receives signals indicating a low anxiety level and a high satisfaction level might mechanically "smile," but it can never truly feel happy, since it still lacks understanding.

If such a system of meters proves inadequate, what essential element needs to be added to the mechanics of information processing in order to create a sense of awareness? This question represents the fundamental problem underlying the relationship between the mind and the brain.

"Consciousness" remains a vague term since scientists have not agreed on exactly what consciousness is. People often refer to the "mind" as the intangible entity that provides people with consciousness. Presumably, the mind allows us to filter out unimportant sensory input, concentrate on important messages, and associate related thoughts.

When we smell a rose, for example, the mind enables us to consider all of the significant associations which the scent conjures up, but filters out the other unimportant environmental stimuli so that we are left with a coherent "thought."

From where does this magical mind arise? Many folk traditions maintain that the mind comes from mystical spirits. However, from a biological perspective, the mind must emerge from the complex network of neurons that comprise the nervous system. How can such an abstract entity be reduced to a physical basis?

This "mind-body problem," or problem of "reductionism," has been the subject of much debate ever since René Descartes, 17th century French philosopher and mathematician, hypothesized how a mechanical body and a rational mind could interact. According to one of Descartes' models, visual information is perceived when nerves detect stimuli and relay a message to the brain. The visual signals from the two eyes are inverted and focused into a single image on the brain's pineal gland. At this specific point in the brain, the mind and body then interact, and the mind creates a complete visual and mental representation of the world.

Although Descartes' ideas explained how environmental stimuli are delivered to the brain, they did not satisfactorily explain how the mind and brain interact. Recent suggestions of how the physical brain creates consciousness draw upon modern linguistic and philosophical theory.

One hypothesis, proposed by Rutgers University philosopher Jerry Fodor, suggests that the nervous system constructs beliefs, feelings, and desires directly from the "raw data" it collects. Fodor posits that each fundamental brain process is assigned a specific symbol; taken together, these discrete symbols constitute a "language of thought" out of which our thoughts are built. John Searle, a philosopher at the University of California at Berkeley, argues that language by itself cannot create the complex beliefs, feelings, and desires that form our consciousness. Symbolism alone, he argues, contains no meaning. Consequently, while computers may be able to use a symbolism similar to human language, they will never be able to grasp any form of understanding, since meaning is not an intrinsic part of symbolic language.

In order to illustrate his point, Searle proposed the following situation: suppose you are locked alone in a room with a stack of papers containing Chinese writing (which, unknown to you, the experimenter calls a "script"). Assume that you don't know any Chinese; in fact, you can't distinguish Chinese characters from senseless squiggles. You are given a second stack of Chinese writing, called a "story," along with instructions that explain in English how to correlate the Chinese characters in the first stack with the characters in the second stack. You perform the task of correlating the two languages until you master it.

Subsequently you are given a third set of Chinese papers, or "questions," with instructions that allow you to correlate characters from the third set with characters in the first two sets and that tell you how to write down appropriate Chinese symbols in response. You are also given similar questions in English, which you comprehend well and answer in English. Suppose your answers to the Chinese questions are indistinguishable from those of native Chinese speakers, just as your answers to the English questions are indistinguishable from those of other native English speakers. Can you say that you understand Chinese just as you understand English?

It seems obvious that you cannot. "In the Chinese case, unlike the English case," Searle argues, "I produce the answers by manipulating uninterpreted formal symbols." In essence, when you produced the Chinese answers, you imitated a computer process of solving a problem. Though you had all the tools a Chinese speaker has when he uses Chinese, you still could not comprehend the language as a Chinese speaker does.
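A toy sketch of the scenario, assuming an invented "rule book" that pairs question strings with answer strings: the program produces fluent-looking replies purely by matching uninterpreted symbols, and nothing in it stands for what any symbol means.

```python
# Toy "Chinese Room": the rule book is invented for illustration. The program
# matches the incoming string of symbols against stored ones and copies out the
# paired reply; at no point does it represent the meaning of any character.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",     # "How is the weather today?" -> "The weather is fine today."
}

def chinese_room(question: str) -> str:
    """Follow the instructions: find the matching squiggles, copy out the paired reply."""
    return RULE_BOOK.get(question, "对不起，我不明白。")   # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))   # a fluent answer, with zero understanding
```

To an outside observer the replies are indistinguishable from a speaker's, yet the lookup is all there is; adding more rules only makes the table longer, which is Searle's point that manipulating symbols, however elaborately, is not the same as understanding them.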

Comprehension, the whole, is certainly greater than a collection of characters, the sum of its parts. Searle concludes that consciousness, which involves the existence of beliefs, desires, and intentions, must be a biological phenomenon that computers will never be able to mimic.

Computers of the Future

From this perspective, no matter how fancy computer programming becomes in the future, computers never will be able to gain consciousness. Computers like Deep Blue will forever remain mere information processors.

Some who disagree with this conclusion insist that since the formal operations computers use to process information resemble the mechanical processes of our brains, computers must eventually be able to "understand" as much as humans do. They believe artificial intelligence, the modeling of human cognitive processes using computers as tools, can help people investigate the inner workings of the mind.

A second alternative is that an unknown causal link exists, such as Fodor's language of thought, which when discovered will provide the missing connection between symbols and complex thoughts. When computer programs incorporate this link, computers should presumably gain consciousness and be able to understand.

Still, after one considers the "Chinese Room" narrative and the several unsuccessful attempts at conceiving of a computer with a true sense of understanding, Searle's position, that consciousness is a unique biological property, seems most likely. This hypothesis gains even more credence when we realize that consciousness is not unique to humans but rather lies on a continuum: complex animals have higher levels of awareness and possess more elaborate nervous systems than simpler organisms. Thus it seems that consciousness has evolved over time and that some fundamental brain operation must exist which can account for this evolution of awareness in living things.

A satisfactory explanation of the origin of mind, therefore, must wait until more advanced biological techniques allow us to probe deeper into the basic functions of the nervous system. In the future more efficient programming may create computers that resemble humans. However, computers will never truly gain consciousness, as the properties of mind are most likely unique to the mysterious complexity of our brains.

— www.englib.cornell.com

