The Ecphorizer
Randomness and Artificial Intelligence
George Towner

 

After a bout of intense work that consumed the Christmas season, I decided to take a busman's holiday and have some fun with my computer. The inspiration was The Policeman's Beard is Half Constructed, a book that Geri Younggren showed me at Apple Computer. This book purports to be the first literary work written entirely by a machine.

I had assumed that computer-generated prose would necessarily be trite and simplistic. But this work was filled with deep mutterings about the meaning of life as well as off-the-wall comments on such subjects as lettuce and lamb chops. A short description in Scientific American confirmed that RACTER, the program that wrote it, had been evolved over several years into a rat's nest of parsing algorithms, inflective tables, and recursive functions. Complex program, complex output.

But this set me thinking. Alan Turing once suggested that we could determine whether or not a machine was intelligent by hooking it up to a telex circuit. If a human operator conversing over the circuit could not tell whether he was talking to a machine or to another person, then we would have to call the machine "intelligent." I have always suspected that such an "interactive" test is too subjective, since a clever examiner could make a machine sound human (or a human being sound mechanical) by choosing suitable questions. A better test would be to ask the machine to create essays on various subjects, then judge them on their content. By this standard, was RACTER approaching artificial intelligence?

The answer is no. The rock on which the quest for artificial intelligence always breaks up was also defined by Turing; it is the "Turing machine." The operation of every computer, no matter how complex its hardware or software, can be equated to a sequence of operations performed by a Turing machine. And such a machine is stubbornly deterministic: its next act is always predestined by its state at a given moment. You know just by looking at it that a Turing machine can never become intelligent.

"But wait!" say the AI researchers: we use random functions and fuzzy logic; we interrupt the Turing machine's lockstep. Yet do they interrupt it in a constructive way? Random input to a computer program is no different in form from any normal data input. From a computer's "viewpoint," random numbers are indistinguishable from numbers typed by an operator at a keyboard. They are equally "meaningless" in contrast to the "meaningful" numbers (addresses, pointer values, intermediate products, etc.) that the computer generates internally. But from a human viewpoint, random numbers are less meaningful than operator inputs. Hence introducing them into a program in order to free it from Turing machine determinism is a movement away from intelligence.

So I sat down at my Apple III, my fingers moving idly o'er the keys, and came up with a program I call PROSEWRITER. Its algorithm is simple. It operates on a file of language matrices that represent English sentences and parts of sentences. Whatever it finds in lower case letters goes directly to the output; whatever is in capitals is a "classifier" that points to a group of other matrices. It picks a matrix at random from the indicated group and inserts it into the output. Thus, for example,

let's eat a NOUN      might become      let's eat a banana


Note that this arrangement is automatically recursive, since among the NOUN matrices might be "ADJ NOUN," a classifier that tells it to insert an adjective and go find another noun. There are other features in PROSEWRITER that provide hierarchies of classification, label certain classifiers to provide continuity of subject matter, and simplify classification by providing functions such as automatic pluralization. But the basic idea, which I hereby label the "Towner machine," is that of an internally referential data base operated on by a Turing machine.
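The expansion loop of such a "Towner machine" fits in a dozen lines. What follows is not the Pascal original but a minimal modern sketch in Python; the matrix file and its entries are invented for illustration, assuming only the behavior described above: lower-case text is copied verbatim, an all-capitals token is a classifier, and a matrix is chosen at random from the classifier's group.

```python
import random

# A toy matrix file. Lower-case text goes to the output verbatim;
# an ALL-CAPS token is a classifier naming a group of matrices.
# Note "ADJ NOUN" inside the NOUN group: it makes expansion recursive.
MATRICES = {
    "SENTENCE": ["let's eat a NOUN", "the NOUN ponders the meaning of life"],
    "NOUN": ["banana", "lamb chop", "ADJ NOUN"],
    "ADJ": ["ripe", "philosophical", "half-constructed"],
}

def expand(matrix: str, rng: random.Random) -> str:
    """Expand one matrix: classifiers recurse, everything else is literal."""
    words = []
    for token in matrix.split():
        if token.isupper() and token in MATRICES:
            # The random generator mediates between the instruction
            # program (this loop) and the memory program (MATRICES).
            words.append(expand(rng.choice(MATRICES[token]), rng))
        else:
            words.append(token)
    return " ".join(words)

rng = random.Random()
print(expand("SENTENCE", rng))
```

Because "ADJ NOUN" reinvokes the NOUN classifier with probability less than one, the recursion terminates with probability 1; richer data bases need only extend the dictionary, never the loop.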

With such a machine, the quest for artificial intelligence (now defined as intelligent prose output) no longer involves what is usually called "programming." The Pascal part is done; the Turing machine is defined. But as you refine the matrices and their classifications, the output becomes more and more human-sounding.

Thus arises a new kind of programming, which I call "memory programming" (as opposed to "instruction programming"). It inverts what we think of as normal program structure. Where the depth of instruction programming lies in its ability to select actions based on the content of data ("branching," in computer terms), the depth of memory programming lies in its ability to select data based on the actions to be performed with it ("classification"). In my work with the PROSEWRITER data base so far, I am doing memory programming by hand, using Roget as a guide. But doing it by machine is not far off.
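The inversion can be made concrete with a toy contrast. The function names and the tiny word table below are mine, not the article's; they are meant only to show where the "depth" of each style lives.

```python
import random

# Instruction programming: the behavior lives in the code, which
# branches on the content of the data.
def exclaim_branching(mood: str) -> str:
    if mood == "happy":
        return "What a fine day!"
    elif mood == "gloomy":
        return "Alas."
    else:
        return "Hmm."

# Memory programming: the behavior lives in the data base; the code
# merely classifies and selects. Refining the table improves the
# output without touching a line of the selection code.
EXCLAMATIONS = {
    "happy": ["What a fine day!", "Splendid!"],
    "gloomy": ["Alas.", "Woe is the lettuce."],
}

def exclaim_classified(mood: str, rng: random.Random) -> str:
    return rng.choice(EXCLAMATIONS.get(mood, ["Hmm."]))
```

In the first function, every new mood means new code; in the second, it means a new table entry, which is the kind of by-hand "memory programming" described above.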

The role of randomness is now both crucial and constructive. In PROSEWRITER, the random number generator mediates between the instruction program and the memory program. It is the means by which the instruction program's request to access memory is linked to a particular element of the memory program. However, it does not introduce new data into either structure. Neither program is "aware" of the fact that they are linked randomly.

This use of randomness is cognate, on the machine level, with its use by life itself. There are three major areas where life could not operate without random functions: evolution, learning, and human consciousness. In the case of evolution, random mutation is the engine that drives adaptation and speciation. With learning, randomness supplies the "trial" part of trial-and-error. In the case of human consciousness, it supports imagination. In all three instances, random functions mediate between a dynamic process (survival of the fittest, trial-and-error, problem-solving) and a changing data base (the gene pool, learned behavior, individual experience).

Thus I envision the "key" to artificial intelligence residing in the interaction between two (or more) programs of fundamentally different natures. Their random intercommunications give them the freedom to achieve unpredictable (and hence possibly novel, creative) results. The fact that life itself also operates this way is a good omen.

There is a lot more to the interrelations between intelligence and other living processes than this article can cover. For my ideas on the subject, read my book The Architecture of Knowledge (University Press of America, 4720 Boston Way, Lanham, MD 20706; $10).

In the interstices of earning a living, I hope to pursue the concept of memory programming with a view to improving the quality of PROSEWRITER's output. At the same time, I hope to learn more about this new kind of software. Perhaps Editor Amyx will accept, from time to time, examples of PROSEWRITER's outpourings for publication in The Ecphorizer. Don't expect too much too soon. After all, it is only a poor, struggling program trying to break into the mainframes. When its compositions become indistinguishable from human prose, I will start selling them under my own name and retire. 






About
George Towner

Yet again are our pages cluttered by the cuckoo ideas of GEORGE TOWNER. Angry letters of complaint from subscribers are of no avail — they just get lost on the Editor's desk.

You can read about George's latest book here!
