In case you would like either an introduction or a refresher on Searle’s Chinese Room argument, that link to the Stanford Encyclopedia of Philosophy may be of interest.
That’s what this cartoon, recently posted on Daily Nous, seems to be about:


Okay, is it funny to a general audience? Is it funny to an expert? Is it a good response to the Chinese Room Argument? Is trying to ridicule Searle what finished off Derrida?
P.S. It’s natural for comics fans to try illuminating a philosophical point by consulting Existential Comics. But the only archive entry I can find there for Searle does not touch on the Chinese Room Argument at all.
I found it amusing, and I think it gets to the heart of the Chinese Room Argument. The searlie in the room understands no Chinese. He has a set of rules for taking in a Chinese message and producing a meaningful Chinese output message. But he has no idea what either message says.
That’s the exact issue that used to confound AI researchers. No computer understands English natively, so how can it produce “meaning”? Our chatbots use rules for translating English input to meaningful English output. But AI cheats in a way – it remembers large amounts of actual English conversation and its rules pick and choose between the words and sentences that are already in its memory. Written out, what it remembers would fill several large warehouses.
The key to success is developing rules to winnow through that data hoard efficiently.
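A minimal sketch of that winnowing idea (my own toy example, not how any real chatbot is built): score the stored exchanges by word overlap with the incoming message and answer with the best match.

    # Toy retrieval-style "chatbot": pick a reply by winnowing a stored corpus
    # of remembered exchanges for the prompt that best overlaps the new input.
    corpus = [
        ("how are you", "fine thanks and you"),
        ("what is your name", "people call me the room"),
        ("is it raining", "take an umbrella just in case"),
    ]

    def reply(message):
        words = set(message.lower().split())
        # Score each remembered exchange by how many words it shares with the input.
        scored = [(len(words & set(prompt.split())), answer)
                  for prompt, answer in corpus]
        best_score, best_answer = max(scored)
        return best_answer if best_score else "tell me more"

    print(reply("How are you today?"))   # -> fine thanks and you

Real systems winnow far more cleverly, but the shape of the trick is the same: the rules select from remembered language rather than conjuring meaning from nothing.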
I don’t understand why the monkeys insist that the searlie understands Chinese. The instruction book understands Chinese. But the searlie is acting as nothing more than an I/O device.
@ mitch – Searle is featured prominently in Existential Comics #332, in which his reactions to the film “The Matrix” are discussed, and he (and his Chinese Room hypothesis) are also mentioned in an article about “How to Study Philosophy as an Amateur” (see “Full Courses”, near the end of the list).
P.S. There’s a Wikipedia article that explains what “Sea Monkeys” really are, but it’s impossible to really understand the reference without seeing one of the old comic book ads:
P.P.S. I remember hearing a report (either NPR or BBC, I forget which) about a man who worked as a radio announcer for cricket matches in India, calling play-by-play in English, despite not being able to speak (or understand) normal English at all. He had had so much exposure to cricket broadcasts that he had absorbed a complete catalog of the correct phrase to say for each respective “event” on the field. Unfortunately, since it was a radio report, they were unable to present a live (visual) comparison to verify how correct his descriptions were.
Kilby, thanks for that link to EC 332! That one indeed features Searle prominently, and his name is even in the title. So why does their own archive index by philosopher list only number 277? I suppose there is a tech-oversight kind of reason, not something of the deep-thought variety.
This is mostly Greek to me, but the commentary to EC 332 reminds me of a question, the answer to which I have sought for years: what precisely is the meaning of the expression “one of the only”?
Nice point, Ooten! I suppose that here, as it often is, it is the result of an impulse to hedge (“the only, or anyhow one of a very few”) but with the words mixed together.
Well, the CRP is bogus, but I don’t get how this comic shows that. It is bogus because the original CRP, using just the manual, is not even equivalent to a Turing machine, so even if it could not translate Chinese, that would prove nothing – understanding is clearly complex enough to require at least the equivalent of a Turing machine.
If the room just blindly translates Chinese, we have that already. If it is supposed to understand Chinese, just ask a question about a message from 50 or 100 or 1,000 messages ago. The manual is read-only, so it won’t remember. It sure as heck won’t pass a Turing test.
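A toy illustration of that point (my own sketch, not anything from Searle’s paper): a room driven only by a fixed, read-only manual can handle the current message, but a question about an earlier one falls flat, because nothing carries over between exchanges.

    # Read-only manual: a fixed lookup table with no state, so nothing from
    # earlier messages survives to the next one.
    MANUAL = {
        "what is two plus two": "four",
        "what colour is the sky": "blue",
    }

    def stateless_room(message):
        # Every exchange starts from scratch with the same fixed manual.
        return MANUAL.get(message, "i do not understand")

    print(stateless_room("what is two plus two"))       # four
    print(stateless_room("what did i ask you before"))  # i do not understand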
I’ve seen revisions to it including state and basically learning, but that’s just begging the question. The point of the original problem was that it is obvious the room doesn’t understand Chinese – not so obvious if you make the room more complex.
Oh! BTW, and sorry I didn’t mention it in the post, the survey is set up so that you can answer with up to two choices, thus making it like two independent questions (is it funny, is it a good argument), if you treat it that way. Also, when you look at the results, they are in the same order as in the question — sort of correcting for my mistake of not making the wording of choices 3 and 4 differ sooner, so that the shortened text in the results display could be distinguished.
There are something like 10^40 legal positions in chess. Possible messages in Chinese? Fuggedaboutit!
Searle’s manual couldn’t possibly work.
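A back-of-the-envelope version of that objection, with purely illustrative numbers of my own: even a modest character set and short messages yield more possible inputs than any physical manual could enumerate.

    import math

    # Illustrative assumption: about 3,000 common characters, messages up to 20 characters long.
    chars, length = 3000, 20
    print(f"roughly 10^{int(math.log10(chars ** length))} possible messages")  # about 10^69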
Mark H. (1): Excellent explanation.
That “one of the only” phrase has always irritated me. If a set of things comprises the only such widgets in existence, then of course the widget in question is one of that set. Thus the phrase is equivalent to simply “one”. I am one of the only men in this universe.
The first description I ever read of the Chinese Room said that the man had baskets in which to store slips of paper with notes written on them. Without those baskets, as scottfrombayside noted, the room is at best a finite state machine, and all a finite state machine can do is recognize regular expressions. Told “Be prepared to answer questions about the following chapter from Dickens: ‘Marley was dead: to begin with….’” and then asked, after reading the chapter, “Was Marley dead to begin with?”, the room can’t answer without having remembered the fact somehow. But with the baskets as an associative memory, answering the question is possible.
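A sketch of the difference the baskets make, under the same toy assumptions as the earlier sketches: give the room a writable store that it fills while reading, and the later question becomes answerable.

    baskets = {}  # the writable "baskets": subject -> remembered sentence

    def read_text(text):
        # Stash a crude fact for every sentence, keyed on its first word.
        for sentence in text.split("."):
            words = sentence.strip().split()
            if words:
                baskets[words[0].lower()] = sentence.strip()

    def ask(question):
        # Look up any remembered sentence whose subject appears in the question.
        for key, remembered in baskets.items():
            if key in question.lower():
                return remembered
        return "i do not remember"

    read_text("Marley was dead, to begin with. Scrooge signed the register.")
    print(ask("Was Marley dead to begin with?"))  # Marley was dead, to begin with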
There is also the issue that somehow high school students can read the first chapter and answer the question afterwards, presumably with material brains. (Arguing that intelligence is only possible through spiritual means — i.e. people have souls that operate in some non-material way — doesn’t really answer the question; the spiritual realm appears to have the same mathematical constraints as the material realm, and one plus one equals two there as here.)
Turing’s paper “Computing Machinery and Intelligence” is worth reading all the way through. He brings up and answers many objections to the question of whether a machine can think; for instance the theological objection that only human beings have souls, and the argument that digital computers handle only discrete quantities while the human nervous system is continuous.
The biggest problem is understanding what it means to claim that a machine can think. Determining that it has or does not have a soul isn’t helpful. Showing that it does or does not resemble the human nervous system isn’t helpful. The only test that seems like it would work is a phenomenological test: the most we can claim is that it appears to think, and we attempt to verify this claim by asking it questions and judging whether the answers appear to require thought. The Imitation Game requires three players: a judge to ask the questions and analyze the answers; an actual thinking human trying to convince the judge that he or she is the actual thinking human; and a machine trying to convince the judge that it is the actual thinking human. The judge is doing his or her best to sniff out which one is the machine.
Earlier attempts at artificial intelligence tried to amass vast quantities of facts in the form of three-part statements like “Dickens full-name Charles Dickens”, “Dickens is-a writer”, “Marley is-a character”, “Dickens created Marley-the-character”, and so on. Success was limited.
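Those three-part facts are subject-relation-object triples; a toy version (hypothetical, not any particular system) shows both how they are queried and why such a store only knows what someone bothered to encode.

    facts = [
        ("Dickens", "full-name", "Charles Dickens"),
        ("Dickens", "is-a", "writer"),
        ("Marley", "is-a", "character"),
        ("Dickens", "created", "Marley"),
    ]

    def query(subject=None, relation=None, obj=None):
        # Return every stored triple matching whichever parts were specified.
        return [f for f in facts
                if (subject is None or f[0] == subject)
                and (relation is None or f[1] == relation)
                and (obj is None or f[2] == obj)]

    print(query(relation="is-a"))    # everything with an is-a link
    print(query(subject="Dickens"))  # everything the store "knows" about Dickens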
More recent chatbots amass vast quantities of writing of all kinds. We do that too. If I yell out “Marco!” you will yell out “Polo!” I just tried that with ChatGPT and it did the same thing. I said “The stars at night, they shine so bright” and it said “Deep in the heart of Texas!” I expected no less of a machine, so to me this does not definitively pass the Turing Test, although I believe that as with the Halting Problem, nothing will ever pass the Turing Test; either a given setup fails or else it has not been shown to fail yet. But if it has not failed yet, we can say “Well, it’s passing so far.”
I believe we can assume that the entity behind ChatGPT does not understand English any better than the man in the Chinese Room understands Chinese. I see no reason why the Chinese Room could not do as well as ChatGPT in answering questions.
P.S. Given that ChatGPT does not add to its store of data as it runs — adding new data is only done when a new iteration of ChatGPT is put together — it is like the Chinese Room WITHOUT the baskets of notes. It has only the vast unchanging operating manual. And with that, it does what we see it do.
Some of you might be familiar with the “Berserker” stories by science fiction writer Fred Saberhagen. The first one, “Without a Thought”, revolved around convincing the killing machine that there was an intelligent being at the controls of a spaceship even though its mind-scrambling ray should have rendered the pilot ineffectual.
This was done by training the pilot’s animal helper, which could learn and follow simple tasks, to play a simplified checkers game by selecting markers from boxes or something, and discarding ones that led to poorer results, so it seemed as though it was learning the game. It’s been a while since I read it, so the details are hazy.
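From that hazy description, the scheme sounds like the old matchbox-learning trick (my reading, not necessarily Saberhagen’s exact mechanism): keep a box of candidate moves for each position, draw one at random, and discard markers that led to a loss so bad moves die out.

    import random

    boxes = {}  # one box of candidate moves (markers) per game position

    def choose(position, legal_moves):
        # Draw a marker from this position's box; start it full of all legal moves.
        box = boxes.setdefault(position, list(legal_moves))
        return random.choice(box) if box else random.choice(legal_moves)

    def punish(history):
        # After a lost game, discard the markers that were drawn, so losing
        # moves gradually disappear from their boxes.
        for position, move in history:
            if move in boxes.get(position, []):
                boxes[position].remove(move)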
I have never understood the argument that considers only one part of the system when analyzing the system as a whole. The idea that you could create a deterministic set of rules which responded to all possible inputs in a real-world scenario is, of course, ridiculous, but we are presuming an arbitrarily large data store, an arbitrarily large rule set, and unlimited time to respond to inputs – essentially, a room existing outside of time and space.
You haven’t just created a room that speaks Chinese. With a more complete data store and rule set, you’ve all but created God. Or, perhaps, whatever being created the data store is an omniscient god. Regardless of whether the room has intentionality, the creation of the room does. Certainly, the room doesn’t have free will, but if a person doesn’t believe in free will and believes that we humans only react deterministically to input, that’s hardly an argument against its sentience. Allow the room’s data store and rule set to include and track its own internal responses, ones that produce no response to the outside world, and it has an internal thought process.
At some point, you bump up against whatever we understand humans to be.
If humans are deterministic and have no free will, and are also sentient, then there is no reason the room wouldn’t be sentient. Of course, those are big “if”s, at least the first one. I’m moderately sure I’m sentient.
A National Lampoon parody of the Sea Monkeys ads
Re: “parodies” – It’s a bit long and off-topic, but I just re-discovered a fantastic parody of Poe’s “Raven”, written by Gene Weingarten, of “Barney & Clyde” and “Style Invitational” fame:
The Dirty Parrot, by Gene Weingarten (2012)
Late one midnight in my garret,
as I sat with my pet parrot
Who was munching on a carrot
on her perch beside the door,
Suddenly there came a “crap”-ing,
And a #&@!ing and a @+#!ing,
With such plucking and such clucking,
Words that gentlemen abhor,
Words like these, and nothing more.
“Stop your screeching,” I said, scowling,
“Blast you, fowl, who is fouling
English with your fetid yowling.”
This I said, and said no more.
Still she cussed, as it did please her,
Causing me to reach and seize her
And entomb her in the freezer,
Punishment both swift and sure.
This I did, and did no more.
But in seconds I relented,
of this violence I repented.
My reaction was demented,
this I knew and knew for sure
So I opened up the door,
resolv’d to punish her no more.
Though spared torment,
she stopped cursing!
Started pleasantly conversing,
suddenly dispersing
Language fit for parlor,
language fit for church and pew.
Solemnly I then inquired,
what, dear birdie, has inspired
Such a sweet and soft and
wondrous sort of change in you?
She had no answer, just a query,
just a sad and eerie query
Which she asked in such a quaver
that it touched me through and through:
“Kindly tell me,” quoth the parrot,
“what did that poor chicken do?”