Thursday, September 13, 2012

The Chinese Room


John Searle’s thought experiment addresses the question of whether a machine can be programmed to literally “understand” a concept (he calls this position strong AI) or whether the best a machine can do is simulate “understanding” (weak AI). The experiment involves a closed room: Chinese characters are passed in through a slot, and a man inside uses a procedure (a program, written in English or memorized) to generate a response without knowing any Chinese. The man doesn’t actually understand Chinese, yet to an outside observer he appears to. Searle argues that the same holds for computers: the apparent “understanding” demonstrated by an AI is, at best, a simulation.
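To make the setup concrete, here is a minimal sketch of the room as pure symbol manipulation. The phrases and rule table are hypothetical stand-ins I chose for illustration, not anything from Searle’s paper; the point is only that the lookup proceeds with no semantics attached.

```python
# A toy sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" below is a hypothetical stand-in: input symbols map
# to output symbols, and the operator matches shapes without attaching
# any meaning to them.
RULE_BOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗？": "会。",     # "do you speak Chinese?" -> "yes."
}

def chinese_room(message: str) -> str:
    """Produce a reply by table lookup alone; no understanding involved."""
    return RULE_BOOK.get(message, "请再说一遍。")  # fallback: "please say that again."

# To the outside observer, the room appears to "speak Chinese".
print(chinese_room("你会说中文吗？"))  # prints: 会。
```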

Searle responds to several criticisms in his paper. The reply I found most compelling (probably due to my psychology minor and its emphasis on cognitive processes) was the brain simulator reply. It argues that the experiment should be redesigned so that the man manipulates valves mapped to the synapses in a Chinese speaker’s brain; the Chinese speaker outside would then receive a reply in Chinese without the man (or the valves) understanding the language. I liked this reply because, on first examination, I agreed with it. After reading Searle’s response, however, I see its flaw: conceding that you must understand (and reproduce) the brain in order to produce a mind undercuts the whole project of strong AI, which aims to run the “software” of the mind on hardware other than the brain. As a psychologist I would argue that understanding the brain is in fact necessary to understand the mind, as evidenced by the many advances psychology has made by examining the brain’s structure; as a computer scientist, though, I recognize that admitting this nullifies strong AI as Searle defines it. The brain simulator reply depends on already understanding the brain, which makes it an odd and counterproductive argument in the context of AI.
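For a sense of what the valve version amounts to, here is a toy sketch in the same spirit: the operator mechanically updates threshold units (“valves”) according to a wiring diagram. The network, weights, and threshold are invented for illustration and are nothing like a real brain’s scale.

```python
from typing import Dict, List, Tuple

# Hypothetical wiring diagram: each "valve" (a crude threshold neuron)
# opens when the weighted activity of its inputs crosses a threshold.
WIRING: Dict[str, List[Tuple[str, float]]] = {
    "valve_3": [("valve_1", 0.6), ("valve_2", 0.6)],
}
THRESHOLD = 1.0

def step(state: Dict[str, bool]) -> Dict[str, bool]:
    """Advance the network one tick by mechanically applying the rules.
    The operator turning the valves needs no idea what the pattern means."""
    new_state = dict(state)
    for valve, inputs in WIRING.items():
        drive = sum(weight for src, weight in inputs if state.get(src, False))
        new_state[valve] = drive >= THRESHOLD
    return new_state

# An encoded input (say, a Chinese question) activates valves 1 and 2;
# the output valve opens, yet neither operator nor plumbing understands Chinese.
print(step({"valve_1": True, "valve_2": True}))
```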

I really enjoyed these readings. I typically find thought experiments irrelevant, and perhaps this one is as well, but I can appreciate the question Searle was trying to address, and it’s an important one. As for my opinion, I side with Searle: we have yet to understand the physical basis of the “mind,” and until we do, all we can hope for is a poor facsimile, a simulation, of its functions. At least for now, “understanding” is reserved for the realm of the living.
