The rebuttal to the rebuttal to the rebuttal
Searle's formulation of the problem neither states nor
implies that the cards with the instructions can't have
rules for rewriting themselves. To take a slightly different tack,
the claim that "to modify the instructions you need to understand
them" is absurd on its face: it's like claiming that in order to
mutate, an organism would have to know what all of its genes do.
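To make the point concrete, here's a toy sketch (the card format and
the nonsense symbols are my own invention, not anything in Searle's
paper): one card's only job is to blindly rewrite another card, and no
step in the process requires understanding what either card is
ultimately for.

    # Purely illustrative: a tiny "card catalogue" in which one card's
    # instruction mechanically rewrites another card. The format is made
    # up for this sketch; nothing here is from Searle's thought experiment.
    cards = {
        "card_1": ("reply", "squiggle"),
        "card_2": ("rewrite", "card_1", ("reply", "squoggle")),
    }

    def run(card_id):
        card = cards[card_id]
        if card[0] == "reply":
            # Produce an output symbol, whatever it happens to be.
            print("output symbol:", card[1])
        elif card[0] == "rewrite":
            # Overwrite another card exactly as instructed, with no notion
            # of what that card means or is for.
            _, target, new_card = card
            cards[target] = new_card

    run("card_1")   # output symbol: squiggle
    run("card_2")   # card_2 silently rewrites card_1
    run("card_1")   # output symbol: squoggle  (the instructions have changed)

The rewriting card no more "understands" the card it edits than a
point mutation understands the gene it lands in.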
To claim that learning is the essence of what makes us sentient
is to claim that people who have stopped learning have stopped
thinking: if I'm a grandmaster in chess playing against an unrated
amateur, I'm learning nothing from the game, but does that mean I'm
not thinking?
The argument as to whether reading, writing, and translating are
more difficult assumes that you already know how to do them: if you
don't, perhaps they're much simpler than you give them credit for.
So that point is lousy.
The final point is the most absurd of all: neural networks exist
precisely to rewrite their own programs, and they seem to display
learning in certain situations. The point also presupposes that the
symbols manipulated by a hypothetical program deal primarily with the
intended output of the program: imagine that only one card in a
million actually involved speaking Chinese, and that the rest were
formal symbols for manipulating and modifying the other cards. You
need not inject any new learning into the system at that point.
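If "rewriting your own program" sounds mysterious, a single artificial
neuron makes it mundane. The sketch below is mine, not anything from
Searle or his critics; the AND-learning task, the learning rate, and
the epoch count are arbitrary choices for illustration. The same blind
update rule is applied over and over, the "instructions" (the weights)
end up modified, and nothing in the loop understands what AND means.

    # Illustrative only: a one-neuron perceptron learns the logical AND
    # function by mechanically adjusting its own weights. The task and
    # all constants are arbitrary choices, not from the original debate.
    import random

    def step(x):
        return 1 if x > 0 else 0

    # Training data for AND: (inputs, desired output)
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = random.uniform(-1, 1)
    rate = 0.1

    for _ in range(1000):                 # repeat the same blind rule
        for (x1, x2), target in examples:
            out = step(weights[0] * x1 + weights[1] * x2 + bias)
            error = target - out
            # The "instructions" (the weights) are modified by a rule that
            # has no idea what AND means; it only nudges numbers around.
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error

    for (x1, x2), target in examples:
        got = step(weights[0] * x1 + weights[1] * x2 + bias)
        print((x1, x2), "->", got, "(expected", target, ")")

Whether you call the result "learning" is the philosophical question;
the mechanics, at least, are nothing but symbol shuffling.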
It's also helpful to remember that, when he wrote this, Searle also
claimed that a computer would never be able to compete with the best
chess players in the world, and would certainly never defeat them.
His argument boiled down to the belief that to play chess you need
to think strategically, not just tactically, and to make plans and
then execute them. Well, he was wrong about that, apparently.
My own take on this is that if enough complexity develops from simple
elements, à la John Conway's Game of Life, the program might be able
to think. But it wouldn't know that it was thinking, because the
symbols would be doing the thinking.
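For anyone who hasn't played with it, the Game of Life really is that
simple. The sketch below (the unbounded-grid representation and the
glider starting pattern are just convenient choices of mine) is the
entire rule set: count live neighbours, apply two conditions, repeat.
The richness comes from iteration, not from any cleverness in the rule.

    # Bare-bones Conway's Game of Life: every cell obeys the same tiny
    # rule, yet gliders, oscillators, and even universal computation
    # emerge. Grid representation and starting pattern are arbitrary.
    from collections import Counter

    def step(live):
        """Advance one generation. `live` is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth: exactly 3 neighbours. Survival: a live cell with 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider: five cells that "walk" diagonally across the grid forever.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(4):
        print("generation", generation, ":", sorted(cells))
        cells = step(cells)

None of the cells knows it is part of a glider, which is roughly the
sense in which the symbols, not the program, would be doing the
thinking.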