The Ecphorizer
An Approach to Artificial Intelligence
George Towner
Issue #70 (September 1987)
One of the great conundrums of our time — at least in the computer business — concerns "artificial intelligence." Fifteen years ago the main AI question seemed to be "How do we program a computer to be intelligent?" Nowadays the more relevant questions are "What
is artificial intelligence?" and "How can we tell when a machine is truly intelligent?"
Defining Artificial Intelligence. From the outset of AI research, the traditional answer to the second question — how to tell if a machine is truly intelligent — was to apply the "Turing test." You asked the machine questions, and concluded that it was intelligent if you could not tell from the answers whether it was the machine replying or a concealed human operator. Modern "expert systems," however, come very close to passing the Turing test in limited areas. For example, when you present a set of symptoms to a computer running a medical diagnosis program it can be hard to tell whether the responses are coming from a machine or a doctor. If we added all the expert programs together, the result might be a machine that could come up with a pretty good reply to practically any question. But we would not call it "intelligent."
It seems to me that our instinctive acceptance of the Turing test to decide the presence of artificial intelligence implies an answer to the question "What is artificial intelligence?" If our gut feeling about the Turing test truly identifies AI, then AI is simply the ability of a machine to reproduce our own pattern of responses to symbols. But by "reproduce" we don't mean that the machine regurgitates a response previously fed into it; we mean that it fashions the response from simpler and more general resources.
The Design of an Intelligent Machine. What, then, are these "simpler and more general resources" out of which a machine might create intelligent responses? In the early days of AI research, many people assumed that if you just gave a machine the right kind of start, and told it when it was right and wrong, it would build up its own intelligence. This was the idea behind "learning programs" and self-programming languages such as LISP. But nothing much has ever come from these efforts; no machine has ever taught itself to respond in a humanlike way to an entirely new input. No matter how cleverly the machine converses, you can always trace its output back to specific information inserted by its programmer.
I believe that learning programs never worked because they were based on the idea that there is a single, universally valid set of "intelligent responses" to a given set of stimuli. The assumption was that a properly programmed machine would "work toward" these universal responses, figuring them out one by one as it went. But I believe that what we call "human intelligence" comprises a very specific set of responses, filled with evolved tendencies and special viewpoints. Seen this way, intelligence is as peculiar to us as our metabolic pathways and enzyme systems. Thus it is no easier for a machine to calculate our intelligent responses than it is for it to calculate how many ribs we have.
Is there no hope, then, for artificial intelligence? I think that true AI is possible, but that it will require a great deal of work to achieve. What we must do is emulate in a machine the whole range of evolved human responsive mechanisms, and then turn the machine loose, like a baby, to build its own set of responses by absorbing human knowledge through these mechanisms. The result would reproduce human intelligence only as accurately as we had programmed into our machine the special adaptations of the human organism.
At this stage I cannot specify in detail what such programming would ultimately look like. However, I believe I can suggest an approach for developing it. My approach is somewhat different from the directions that presently prevail in AI research. Its central characteristic is the need for three different kinds of computer program, each designed and written in a different style, that would run simultaneously on the intelligent machine.
Three Kinds of Programming. The origin of my approach to AI can be found in Paul MacLean's book A Triune Concept of the Brain and Behavior (1973); it was later elaborated in Carl Sagan's The Dragons of Eden (1977). MacLean distinguishes three layers of tissue in the human brain that evolved quite separately, at different times and in response to different environmental challenges. The lowest layer, which he calls the "reptilian complex," organizes our more elaborate dealings with the physical world, above the level of mere reflexes. The next layer, the limbic system, generates our emotional, conative behavior. The top layer, the neocortex, performs the tasks of abstraction and reasoning. It was the emergence of neocortex, scarcely yesterday in the span of living evolution, that produced what we call intelligence in human beings. However, the neocortex does not operate in isolation; it works with the other layers to produce intelligent responses.
I/O Programming. The lowest layer of brain tissue — MacLean's "reptilian complex" — cooperates with our physiological reflexes to translate between events in our environment and events in our brains. By virtue of the activities of this layer of tissue, for example, my view of an object in space becomes translated into some structure of electrochemical events in my brain; conversely, my decision to reach for the object becomes translated from a structure of brain events into nerve impulses to my arm and hand muscles. The details of this mechanism in human beings are apparently extremely complex, but we don't need to understand them for present purposes. The significance of the reptilian complex for AI is that it comprises a dynamic and relatively independent system that mediates between physical reality and our internal symbolic patterns. In computer terms, it is a programmed input-output (or "I/O") system.
Present-day computers have rudimentary I/O systems, in the form of "drivers" or "channels." Their purpose is to mediate between the internal workings of the computer and such things as keyboards and display consoles. But they are relatively stupid, and almost never independently programmed. They communicate by means of "data structures" created by the computer programmer, which are intended to reflect external states and events as internal patterns of zeros and ones. The data structure for an integer, for example, might consist of eight or sixteen memory cells that store the digits of a binary number. Under this scheme, when we press the "5" key on a computer keyboard it becomes translated into the internal bit pattern 00000101. When the machine wants to give us an answer of 5 it presents the same bit pattern to a driver, which causes a monitor screen or typewriter to fashion the numeral "5."
This sort of translation works very well for numbers, which is why machine computations often appear to be more intelligent than human figuring. For instance, if the machine wants to multiply 5 by 2 it just shifts the 5-pattern one cell to the left; the result is 00001010, the bit pattern for "10." There is something "natural" about converting numbers into binary strings, in the sense that the things we usually do with numbers emerge as simple binary manipulations.
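To make the point concrete, here is a minimal sketch in present-day Python (a notation chosen purely for illustration) of the doubling-by-shifting operation just described:

    n = 5
    print(f"{n:08b}")       # 00000101, the eight-cell bit pattern for 5
    print(f"{n << 1:08b}")  # 00001010, one shift left doubles it to 10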
But suppose we want to tell the machine something about cats and dogs? In a typical present-day I/O scheme, a computer would refer to a living feline by encoding the English word "cat" by means of a standard letter-to-binary table. The result would come out something like "110001111000011110100." Similarly, its reference to a dog would become coded into "110010011011111100111." Now, however, there are no simple and natural manipulations available to express the events that happen to cats and dogs. The machine's references to these entities are monolithic bit patterns of arbitrary structure. The statement that the 110010011011111100111 is chasing the 110001111000011110100 is as incomprehensible to the machine as it is to the ordinary human reader.
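For the curious, those bit strings are simply the 7-bit character codes of the letters run together. A short Python sketch (again, illustrative only) reproduces them and shows how little structure they carry:

    def encode(word):
        # concatenate the 7-bit ASCII code of each letter
        return "".join(f"{ord(ch):07b}" for ch in word)

    print(encode("cat"))   # 110001111000011110100
    print(encode("dog"))   # 110010011011111100111
    # Nothing in either pattern tells the machine that cats and dogs
    # are both four-legged animals, or that one may chase the other.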
Several attacks have been made on this problem, most notably with Marvin Minsky's concept of "frames." But what is ultimately required, I believe, is a separately programmed I/O system. At the beginning such a system would interpret inputs according to preset algorithms, equivalent to our built-in sensory pathways. But it would also be capable of creating new data structures, thereby dynamically translating new inputs into new bit patterns. In so doing, its objective would be to translate simple relations among the inputs into simple relations among the data structures. It would try to avoid the situation where the internal representations of "cat" and "dog," for example, become mutually incomprehensible patterns. The I/O program would run independently of the rest of the machine, as a dynamic "front end."
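What such a front end might actually look like is an open question; the following Python fragment is only my own toy illustration (the class and feature names are assumptions, not a worked-out design), in which related inputs receive visibly related internal structures rather than opaque bit strings:

    class IOFrontEnd:
        """A toy dynamic front end: it builds feature structures for
        new inputs, so that related things get related representations."""
        def __init__(self):
            self.structures = {}

        def perceive(self, symbol, features):
            # translate an input into a structured pattern, merging any
            # features already known about the symbol
            self.structures.setdefault(symbol, {}).update(features)

    io = IOFrontEnd()
    io.perceive("cat", {"animate": True, "legs": 4, "kind": "feline"})
    io.perceive("dog", {"animate": True, "legs": 4, "kind": "canine"})
    # "cat" and "dog" now share internal structure that the rest of
    # the machine can discover and exploit.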
Analytic Programming. I/O programming of the sort I have just described would be something new in software design. Another new kind of program required in an AI machine would be "analytic"; it would perform functions similar to those of the neocortex in human beings. It would sort through the data structures created by the I/O software, looking for analogies and inferences. It would create and initialize new data structures, representing abstractions. Such abstractions, created entirely by the machine, would be as accessible to the rest of the AI software as normal data inputs.
Like the I/O program, the analytic program would run independently. Its programmed objective would be to create links among the I/O data structures on purely logical grounds. That is, it would manipulate the structures by examining their formal properties, not their "meanings" in the outside world. It, too, would have some preset algorithms, equivalent to our rules of logic. It might also recognize certain distinctive structures from the outset, thereby reflecting the innate tendencies of human thought such as those of Chomsky's "generative grammar." But as it ran, the analytic program would create new structures that were derived from the existing structures solely by reasoning, abstraction, and analogy.
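Purely as another toy illustration (the method is mine, not a worked-out design), an analytic pass might form an abstraction from whatever formal properties the I/O structures happen to share:

    def abstract(structures):
        # intersect the feature sets of all known structures; the
        # common residue is a machine-made abstraction
        feature_sets = [set(s.items()) for s in structures.values()]
        return dict(set.intersection(*feature_sets)) if feature_sets else {}

    structures = {
        "cat": {"animate": True, "legs": 4, "kind": "feline"},
        "dog": {"animate": True, "legs": 4, "kind": "canine"},
    }
    print(abstract(structures))   # {'animate': True, 'legs': 4} -- in
                                  # effect, the concept "quadruped"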
Conative Programming. What is presently called "computer programming" would, under my approach, become the final third of the total AI software. It would be the part that caused the machine to process input data and generate outputs. The conative program would control the "behavior" of the intelligent machine.
I don't think that an intelligent machine would need "drives" — motivations equivalent to human hunger, for example. The conative program would be always running (like the other two), so its choice would only be between alternate modes of action, not between action and inaction. To establish a link with the computer operator, however, the AI machine's conative program would have to recognize requests for output and give them a high priority in its branching decisions. Thus the intelligent machine would become a responsive engine, not just a builder of data structures.
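How those branching decisions might be weighted is again speculative; the sketch below (my own construction, not a specification) simply shows a loop that is never idle but lets an operator request jump the queue:

    import heapq

    OUTPUT_REQUEST, BACKGROUND_WORK = 0, 1   # lower value = higher priority
    tasks = []
    heapq.heappush(tasks, (BACKGROUND_WORK, "link new data structures"))
    heapq.heappush(tasks, (OUTPUT_REQUEST, "answer the operator"))

    while tasks:                       # always something to act on
        _, task = heapq.heappop(tasks)
        print("acting on:", task)      # the operator request is served first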
The AI Machine as a Whole. What I have described here is a computer with three programs running in it simultaneously. Their medium of contact and intercommunication would be a large area of memory capable of containing bit patterns. The I/O program would constantly translate data from outside the machine into data structures filled with these bit patterns. As the I/O program ran, relations among the data structures would more and more closely mirror relations among the data. Meanwhile, the analytic program would constantly sift through the data structures, abstracting their logical properties. From these abstractions it would create new data structures. Finally, the conative program would respond to operator requests (which would simply be inputs of a specific type) by selecting data structures (abstract ones created by the analytic program and/or concrete ones created by the I/O program), concatenating them, and loading them into the output port of the I/O system.
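In present-day terms the arrangement might be caricatured as three concurrent routines sharing one memory. The fragment below is only such a caricature (the threads and names are my own assumptions), run once each to keep the demonstration orderly rather than forever, as the real machine would run:

    import threading

    shared = {}   # the common pool of data structures

    def io_program():            # translates outside data inward
        shared["cat"] = {"animate": True, "legs": 4}

    def analytic_program():      # abstracts over what I/O has deposited
        if "cat" in shared:
            shared["animal"] = {"animate": True}

    def conative_program():      # answers an operator request
        print("cat ->", shared.get("cat"), "| animal ->", shared.get("animal"))

    for fn in (io_program, analytic_program, conative_program):
        t = threading.Thread(target=fn)
        t.start()
        t.join()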
Running the AI machine would consist of presenting it with an enormous amount of input about the human world, giving it time to "think" about this material (letting the I/O and analytic programs work on it for a while) and then asking it questions. We would probably need to present the same input repeatedly, because the machine's "understanding" would improve with time. Ultimately, if the machine's replies to our questions passed the Turing test we would have to say that it was intelligent, because at least some of its pronouncements could not be deduced from its built-in programming or from the knowledge we had fed to it.
Developing AI Software. I realize that my description of the software an AI machine needs is quite vague. This article presents only a general approach to its development. The key is to recognize that an intelligent data processor, if it is to mimic the human brain, needs to do three things simultaneously. It must interpret the external world, it must create its own abstractions, and it must act on requests from its operator. Present computer programs are written to serve only the third function. Moreover, because most present computers run only one program at a time, their response to any given input is completely predictable — a characteristic that we instinctively feel is incompatible with true intelligence. The machine I envision here would change its responses as it grew older and wiser.
When they are finally developed, I/O and analytic programming techniques are likely to be different in principle from present-day conative programming. In particular, they would probably not be centered around the "conditional branch" instruction, which forms the cornerstone of current programming languages. At this stage I have only fragmentary ideas about how I/O and analytic programs might be written. But then forty years ago nobody had any idea what present-day software would be like. If my approach is valid, then the conundrum of artificial intelligence translates into a problem in software design — a difficult problem, to be sure, but one that I believe is solvable.
Contributor Profile
George Towner
George Towner was born in Reno and grew up near Berkeley. As a teenager he began making gangster movies using an old 8mm camera, one of which featured a car being pushed over a cliff off State Highway 1. He has started and sold two successful technology firms, and currently works for Apple Computer, where he is the most senior in age. He lives with his wife in Sunnyvale. They have two daughters and a son.