“Why are you producing so few red blood cells today?”

A while ago, it was thought that the trick to making a machine play chess well was to extend how far down the branching tree of possible moves it could examine. No matter how far such programs can look ahead, though, skilled human chess players can confidently confound (or at least match) most of today's chess programs.

Why?

I found a fascinating account of this puzzler involving AI and human thinking in (where else?) GEB. Apparently, the reason has been known since the 1940s. If you’ve ever played chess- or play regularly, but with skill befitting a two-year-old- you’ll come to appreciate it immensely.

Chess novices and chess masters perceive a chess situation in completely different terms. The results of the Dutch psychologist Adriaan de Groot’s study (from the 1940s) imply that chess masters perceive the distribution of pieces in chunks.

There is a higher-level description of the board than the straightforward “white pawn on K5, black rook on Q6” type of description, and the master somehow produces such a mental image of the board. This was shown by the high speed with which a master could reproduce an actual position taken from a game, compared with the novice’s plodding reconstruction of the position, after both of them had five-second glances at the board. Highly revealing was the fact that masters’ mistakes involved placing whole groups of pieces in the wrong place, which left the game strategically almost the same, but to a novice’s eyes, not at all the same. The clincher was to do the same experiment but with pieces randomly assigned to the squares on the board, instead of copied from actual games. The masters were found to be simply no better than the novices in reconstructing such random boards.

The conclusion is that in normal chess play, certain types of situation recur- certain patterns- and it is to these high-level patterns that the master is sensitive. He thinks on a different level from the novice; his set of concepts is different. Nearly everyone is surprised to find out that in actual play, a master rarely looks ahead any further than a novice does- and moreover, a master usually examines only a handful of possible moves! The trick is that his mode of perceiving the board is like a filter: he literally does not see bad moves when he looks at a chess situation- any more than chess amateurs see illegal moves when they look at one. Anyone who has played even a little chess has organized his perception so that diagonal rook-moves, forward captures by pawns, and so forth, are never brought to mind. Similarly, master-level players have built up higher levels of organization in the way they see the board; consequently, to them, bad moves are as unlikely to come to mind as illegal moves are to most people. This might be called implicit pruning of the giant branching tree of possibilities. By contrast, explicit pruning would involve thinking of a move, and after a superficial examination, deciding not to pursue it any further.
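To put the two kinds of pruning in programming terms- a rough sketch of my own, not something from GEB- here is a minimal Python game-tree search. The GameState interface and its methods (candidate_moves, looks_promising, apply, evaluate, is_terminal) are hypothetical placeholders standing in for a real chess engine.

    # A minimal sketch of a depth-limited game-tree search, contrasting the
    # two kinds of pruning described above. Every name here is hypothetical.

    def search(state, depth):
        """Return the best achievable score for the player to move."""
        if depth == 0 or state.is_terminal():
            return state.evaluate()  # static score from the mover's viewpoint

        best = float("-inf")
        # Implicit pruning: a master-like move generator proposes only a
        # handful of plausible candidates; bad moves never enter the tree.
        for move in state.candidate_moves():
            # Explicit pruning: the move *is* considered, but a quick,
            # superficial check rules it out before any deeper examination.
            if not state.looks_promising(move):
                continue
            best = max(best, -search(state.apply(move), depth - 1))
        return best

A brute-force engine of the kind described at the top of the post would instead loop over every legal move and rely on sheer depth; the master's trick is that the candidate list is tiny to begin with.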

If you pause to think about this, it comes across as an utterly spellbinding revelation. Like the proverbial frog in the well, we find it very hard to digest mental models and levels of perception above the ones we are used to- but they’re there, as the excerpt explains.

The “chunking into levels” is a predominant theme in all complex systems we see*, at least in the way we seek to understand and analyze them- from computer systems (hardwired code -> machine language -> assembly language -> interpreters and compilers) to DNA (specifying each nucleotide atom by atom -> describing codons with symbols for nucleotides -> macromolecules -> cells), and even human thinking.

The last bit requires a little exposition, but first, we note the analogy between the nightmare of writing complex, useful computer code in machine language and the terror of reading a virus’s DNA atom by atom. In both cases, we would miss out on the higher-level structures that give computer programs and viral DNA their attributes- complex systems possess meaning on multiple levels.
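As a small, concrete illustration of a program’s multiple levels of meaning- my own example, not GEB’s- the one-line Python function below is the chunked, high-level description we normally read, while dis.dis prints the stream of bytecode instructions the interpreter actually executes underneath it: the “machine language level” of the same computation.

    # The same computation seen at two levels of description.
    import dis

    def average(values):
        """High-level 'chunk': one readable line of intent."""
        return sum(values) / len(values)

    print(average([1, 2, 3, 4]))   # 2.5 - the level we usually think at
    dis.dis(average)               # the bytecode level underneath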

The multi-level description extends to virtually every complex phenomenon. Weather systems, for instance, possess “hardware” (the earth’s atmosphere), which has certain properties hardwired into it (the “hardwired code”) in the form of the laws that flitting air molecules obey, and “software”, which is the weather itself. Looking at the motions of individual molecules is akin to reading a huge, complicated program on the machine language level. We chunk the higher-level patterns into storms and clouds, pressures and winds- large-scale coherent trends that emerge from the motion of an astronomical number of molecules.

As for multi-level human thinking, it is illuminating to first appreciate that a higher-level perception of a system does not necessarily imply an understanding of the lower levels too. One does not need to know machine language to write complex computer programs, nor is one required to be aware of individual molecular trajectories to describe or predict the weather.** In fact, a higher level of a system may itself not be “aware” of the levels it is composed of, such as AI programs that are ignorant of the operating system they run on. The higher-level descriptions are “sealed off” from the levels below them, although there is some “leakage” between the hierarchical levels of science. (This is necessarily a good thing- otherwise, people could not obtain an approximate understanding of other people without first figuring out how quarks interact.)

The title of the post is another excerpt from GEB, derived as a somewhat whimsical analogy:

The idea that “you” know all about “yourself” is so familiar from interaction with people that it is natural to extend it to the computer- after all, AI programs are intelligent enough that they can “talk” to you in English!  Asking an AI program (which is compiled code) about the underlying operating system is not unlike asking a person “Why are you producing so few red blood cells today?” People do not know about that level- the “operating system level”- of their bodies.

This post owes its existence entirely to GEB, the Big Book of Big Ideas.

*Where a “system” is merely something consisting of “parts”.

**The trade-off in accepting higher-level models of a system is a loss of predictive accuracy. This is fairly evident, irrespective of whether we’re talking about the behaviour of weather, basketballs or people!
