Thursday, September 13, 2012

Reading: Chinese Room

The Chinese Room thought experiment is far more a question of philosophy of mind than of computer science. Can a computer truly understand language? By extension, can a computer really understand anything? What is understanding at all? Searle argues vehemently that a computer can never truly understand anything, and to be perfectly frank, his writing makes him seem fanatical about his position and horribly indignant that others might disagree with it. While he makes a good point, he comes across like a stubborn child shouting "NUH-UHHHH" when argued with. His support for his position is occasionally shaky, and his arguments are sometimes self-contradictory.

Personally, I agree that a computer, then and now, cannot truly understand anything; it is given 1's and 0's and spits out the expected response. In this sense, a computer is little more than a parrot, repeating what it is told (this is, of course, not to say that an actual parrot lacks consciousness---I use "parrot" in the figure-of-speech sense). Technology for conscious computers may be developed in the future (it has, of course, been the subject of a huge number of science fiction stories, from Battlestar Galactica to the Terminator series), but until we can truly create a self-aware being, I think no computer will be able to comprehend anything beyond 1's and 0's. My opinion rests on the assumption that understanding requires consciousness, however, and that assumption is never explicitly stated in Searle's article---many of the replies to his theory seem to be based on similarly fuzzy definitions of understanding.
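The "parrot" point can be sketched in a few lines of code: a program that gets through a crude exchange by pure table lookup, with no internal state that could plausibly count as understanding. The rule book and phrases below are my own invention for illustration, not anything taken from Searle's paper.

```python
# A hypothetical "Chinese Room" responder: it matches incoming symbols
# against a rule book and emits the scripted reply. Nothing here could
# be called understanding---it is pure symbol shuffling.

RULE_BOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你懂中文吗？": "当然懂。",   # "do you understand Chinese?" -> "of course."
}

def chinese_room(symbols: str) -> str:
    """Return the scripted reply for a string of symbols, if the rule
    book has one; otherwise fall back to a stock phrase."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "please say that again."

print(chinese_room("你懂中文吗？"))
```

From the outside, the room appears to answer "of course" when asked whether it understands Chinese; on the inside, there is only a dictionary lookup. That gap between behavior and comprehension is exactly the intuition Searle is pumping.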

One such reply is the "other minds" reply: it states, simply, that we can never directly know whether other minds understand things the way we do---we attribute understanding to other people based on their behavior, so why not extend the same courtesy to a machine? This makes perfect sense, but the paper makes it seem as though Searle and the Yale responder are operating on entirely different definitions of understanding. Searle counters that he can, in fact, assume other people understand things because they have cognitive states, while a computer does not. His writing makes it seem like he brushes the idea off quickly and without a second thought.

This all raises important questions, though: why is he so angry, and why does he care? Is he concerned that the machines are rising and threatening to take over? Is he just looking for something to argue about? This paper is famous for good reason and poses a good question, but I feel the question could have been asked and evaluated evenhandedly from both sides before a tirade was launched. Searle comes across as a man throwing a hissy fit instead of having an intelligent discussion.

Overall, I tend to agree with Searle's position, but his writing makes me want to disagree just to see (or read, I suppose) his reaction. I must add a disclaimer here: I don't know anything about Searle beyond what I've read in this paper. For all I know, he could be an extremely rational, mild-mannered person who is easy to get along with, so if I have offended anyone with my opinion of his writing, I do apologize.
