biblio-excerptise:   a book unexamined is not worth having

Mind Children: The Future of Robot and Human Intelligence

Hans Moravec

Moravec, Hans;

Mind Children: The Future of Robot and Human Intelligence

Harvard University Press, 1988, 214 pages

ISBN 0674576160, 9780674576162

topics: |  ai | robotics | brain | evolution


	Predicts that human-level robots will be built shortly.
	Moravec is an unabashed high priest of the Strong AI
	standpoint.  See also his extensive comments on [Penrose].

	The excellent short movie on mammalian evolution in the
	mammal room of the American Museum of Natural History in NYC
	characterized evolution in stages: a backbone gives a
	different way of propelling oneself (showed a fish); then
	tetrapods find an even better way (a sea creature). To this
	one can extend Moravec's arguments (two-legged walking, the
	elephant's trunk, etc.). I would really like to read his
	bio/brain sections, which are probably the more
	"surprise"-containing parts. The predictions are probably
	off, but maybe not by much.  Minsky had predicted a machine
	chess champion by 1980 - off by what looks like 20 years...
	Even if Moravec's timing is off - say it is 2200 - even then,
	the prospects are mindboggling enough. - AM

---

A rich discussion on computers, viruses, artificial intelligence,
robots, and artificial life.  In many places I found a discussion of
his dreams rather than reality, in particular his prediction that
human-like robots will be built within the next 50 years, i.e. by
2038 A.D.  We are nowhere close to that.

The book rightly points out that traditional AI researchers spent
their efforts on reasoning, whereas the major problems are in
perception and mobility.

Discussion on repetitive behavior (pp.46-47):

	"Infinite patience would be an asset in a training session,
	but it could be exasperating in a robot in the field. In the
	cup fetching program I described earlier, you may have noted
	that if the robot finds the door closed and is unable to open
	it, it simply stands there and repeats "knock knock" without
	letup until someone opens the door for it. A robot that often
	behaved this way - and many present day robots do - would do
	poorly in human company. Interestingly it is possible to trick
	insects into such mindless repetition. Some wasps provide
	food for their hatching eggs by paralyzing caterpillars and
	depositing them in an underground burrow. The wasp normally
	digs a burrow and seals its entrance, then leaves to hunt for
	the caterpillar. Returning with a victim, she drops it outside
	the burrow, reopens the entrance and then drags it in. If
	however, an experimenter moves the caterpillar a short
	distance away while the wasp is busy at the opening, she
	retrieves her prey and again goes through the motions of
	opening the already open burrow. If while she is doing this,
	the experimenter moves the caterpillar away again, she repeats
	the whole performance. This cycle can apparently be repeated
	indefinitely, until either the wasp or the experimenter drops
	from exhaustion.

	A robot could be protected from such a fate by a module that
	detects repetitious behavior and generates a weak pain signal
	on each repetition. In the example, the door knocking would
	gradually become inhibited, freeing the robot for other
	pending tasks or inactivity.  The robot will have acquired the
	ability to become bored."

However, this will also kill desirable repetition. If the job of a
door-knocking robot is to wake up a person sleeping inside, it is
better that the robot keeps knocking until the person wakes up -
doesn't an alarm clock keep ringing until we switch it off?
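Moravec's boredom mechanism, as quoted above, can be sketched in a few lines. This is my own minimal illustration, not code from the book: the class name, the per-repeat "pain" increment, and the threshold are all invented for the example.

```python
class BoredomMonitor:
    """Sketch of Moravec's anti-repetition module: each repeat of the
    same action raises an inhibition level ("weak pain signal"), and
    the action is abandoned once inhibition crosses a threshold."""

    def __init__(self, threshold=3.0, pain_per_repeat=1.0):
        self.counts = {}          # how many times each action has run
        self.threshold = threshold
        self.pain = pain_per_repeat

    def should_continue(self, action):
        self.counts[action] = self.counts.get(action, 0) + 1
        # Inhibition grows with every repetition after the first.
        inhibition = (self.counts[action] - 1) * self.pain
        return inhibition < self.threshold

monitor = BoredomMonitor(threshold=3.0)
attempts = 0
while monitor.should_continue("knock on door"):
    attempts += 1   # the robot knocks again
print(attempts)     # the robot gives up after 3 knocks
```

Note that the alarm-clock objection above maps directly onto the threshold parameter: a task where repetition is the whole point would simply get a very high (or infinite) threshold.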

I found the discussion on information theory interesting and am
quoting it below:

	"Suppose a child's story begins with the words: Here's my
	cat. It has fur. It has claws.... Pretty boring, right?
	Imagine now another story that starts with: Here's my cat. It
	wears a hat. It totes a gun...  Better. The second story
	seems more interesting and informative because its later
	statements are less likely - cats usually have fur and claws,
	but they rarely wear hats and carry guns.  In 1948 Claude
	Shannon of MIT formalized such observations in a mathematical
	system that came to be known as information theory.  One of
	its key ideas is that the information content of a message
	goes up as its likelihood, as measured by the recipient,
	decreases (mathematically, as the negative logarithm of the
	probability). A series of messages has maximum information
	content when it is maximally surprising."

(Then I think the information content of an infinite cyclic behavior
must be zero. - Mali's comments)
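Shannon's measure, and Mali's point about repetition, can be checked with a few lines of arithmetic. The probabilities assigned to the two cat statements below are my own illustrative guesses, not figures from the book:

```python
import math

def information_bits(probability):
    # Shannon information content of a message, in bits: -log2(p),
    # computed as log2(1/p). Rarer messages carry more bits.
    return math.log2(1.0 / probability)

# Illustrative probabilities for the two cat stories:
fur = information_bits(0.9)     # "It has fur."  - expected, few bits
hat = information_bits(0.001)   # "It wears a hat." - surprising, ~10 bits
assert hat > fur

# A perfectly predictable message (p = 1), like the next cycle of an
# endless repetition, carries zero information - Mali's point above.
print(information_bits(1.0))    # prints 0.0
```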

Moravec's measure of effective computation works the same way.
Each instruction executed by a machine is like a message. The more
predictable its sequence of instructions, the less useful work a
machine is doing. For instance, a program that causes a computer to
simply add one to a memory location once every millionth of a second
is doing almost nothing of consequence. The contents of the memory
location at any time in the future are known in advance.  But even the
best programs are limited in how much "surprise" they can introduce
into a computation at every step. Each instruction can specify only a
finite number of different possible operations and choose from a
finite number of memory locations, each itself containing only a
finite number of possibilities. These sources of surprise can be
combined using the formulas of information theory to express the
maximum information content of a single computer instruction.  - Amol
D. Mali (6 August 1995)
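The combination of finite sources of surprise described above can be worked through as a back-of-envelope calculation. The machine parameters here (8-bit opcodes, 16-bit addresses, byte-valued cells) are illustrative assumptions of mine, not figures from Moravec:

```python
import math

# Bound on the "surprise" a single instruction can introduce:
# each independent source of choice contributes log2(count) bits.
opcodes = 256       # an 8-bit opcode: 256 possible operations
addresses = 2**16   # 16-bit addressing: 65536 memory locations
cell_values = 2**8  # each memory cell holds one of 256 byte values

# Independent choices multiply, so their bit-counts add:
max_bits = (math.log2(opcodes)
            + math.log2(addresses)
            + math.log2(cell_values))
print(max_bits)  # 32.0 - at most 32 bits of information per instruction
```

An instruction stream that is fully predictable contributes far less than this bound, which is exactly why Moravec's measure rates such a machine as doing little effective computation.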


amitabha mukerjee (mukerjee [at] gmail.com) 17 Feb 2009