[Serious Phil] Rejecting the Hypothesis of Phenomenal Information
SWMirsky at aol.com
Tue Jun 26 09:37:10 CDT 2012
--- In Phil-Sci-Mind at yahoogroups.com, "iro3isdx" <Philscimind at ...> wrote:
> --- In Phil-Sci-Mind at yahoogroups.com, "SWM" <Philscimind@> wrote:
> > responding to message 2759
> > SWM:
> > The issue, it seems to me, is to say what UNDERLIES our experience
> > of being subjects (of having all the various experiences that
> > THAT entails). And so the issue is: Are syntactic (as in purely
> > mechanical, mindless) operations the foundation of experiencing
> > (including experiencing instances of understanding)?
> It seems to me that it is a mistake to identify syntactic with
> mechanical. Perhaps mechanistic, but not mechanical. Well, okay, I
> suppose that is a minor point.
"Syntax" has been a controversial term ever since Searle introduced it in this context, I think. Certainly it has been controversial in these list discussions. Sometimes it is taken to mean just "abstract," at other times "mindless" (I have also used the term "rote," adding further controversy, by the way). Searle has said that computers (or, sometimes, programs) are syntax, but at other points he has recanted that and said that computers/programs aren't/don't have "syntax" any more than they have semantics, because both are in the minds of the users/programmers.
Personally I think the whole "syntax" business is a big muddle, but I have tended to use the term when others do, although, as we have seen, that doesn't always mean we mean the same thing by it.
As to being "mechanical", I will also agree that THAT term often conjures up other pictures which would seem to have nothing to do with programs running on computers (which is why it is often better to speak of programs as "rote", despite the furies THAT term has tended to unleash). On the other hand, Searle, in his later argument, claims that programs running on computers are just brute physical events without meaning (either of the content variety, like semantics, or the formal variety, like syntax!). In THAT sense, "mechanical" does seem a reasonable term to capture THAT meaning. But Searle's later argument is worse than his earlier one, in my view, so perhaps "mechanical" is more misleading than helpful in this context.
> A more serious problem, is that syntactic operations don't come for
> free. Syntactic operations are intentional operations. An explanation
> based on syntactic operations is implicitly based on assumptions about
Yes, this is Searle's later argument: that syntax, no less than semantics, is in the minds of the beholders, as it were. But my view is that this is a poorer argument against the possibility of computational minds than the CRA, since brains and computers, in this sense, are equally brute physical phenomena. That they may work in a certain pre-planned way (naturally occurring in the case of brains, man-directed in the case of computers) is a different matter. A person's brain does what it does regardless of what some other observing person reads into it or intends for it. Nor does intention matter, in this sense, in the grander scheme of things, since there's no reason to think evolution is intended in any sense akin to our own.
> The world is full of signals. But signals are not syntax. A digital
> camera does not work by catching photons (as signals) and putting them
> in a database. Rather, the camera was designed to generate syntactic
> elements. And that designing of the camera involves intentional
And brains form intentions, though that does not imply that they are themselves intended via the intentions of another, does it?
> If we want to use IP methods to interact with the world, then:
> Step 1: Syntactify the world; Step 2: Apply IP to the resulting syntax;
> Somehow step 1 is completely ignored in these discussions. It is just
> taken for granted.
I don't think that is the case in the area of AI research at all. Researchers are always looking for ways to make inputs significant to the machine entity receiving them via various processing rules. The question, though, is when (at what stage) those rules in operation (the processing) produce awareness of what is being signified. That is the challenge of AI.
> My objection to computationalism is that it does
> not deal with step 1, which I see as the most important part. Without
> step 1, we would be solipsists or idealists.
I'm not convinced it doesn't deal with step 1 at all. If you recall, Minsky offered a number of ways, on Eray's AI philosophy list, by which his machine entity would syntactify the world. Although I am no AI researcher, I have also proposed a way in which it could be done: the production of layered, interactive representational mappings of external and internal environments, as well as mappings of mappings, all designed to produce an entity with pictures of its world and itself, pictures which it is equipped to interrelate via associative processing that gives meaning to things in much the same way as we recognize meanings.
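To make the "layered mappings" proposal a little more concrete, here is a purely illustrative sketch in code. It is not anything Minsky or I actually specified; every class, method, and symbol name below is hypothetical, invented just to caricature the idea that "meaning" could accrue from associative links within and between representational maps.

```python
# Illustrative sketch only: layered representational mappings,
# mappings of mappings, and associative linking. All names are
# hypothetical, not drawn from any actual AI system.

class Layer:
    """One representational map: symbols standing for features of an
    environment (external or internal), linked associatively."""
    def __init__(self, name):
        self.name = name
        self.symbols = {}   # symbol -> the feature it represents
        self.links = {}     # symbol -> set of associated symbols

    def represent(self, symbol, feature):
        self.symbols[symbol] = feature
        self.links.setdefault(symbol, set())

    def associate(self, a, b):
        # Associative processing: "meaning" is just the web of links.
        self.links[a].add(b)
        self.links[b].add(a)

class Entity:
    """An entity holding a map of its world, a map of itself, and a
    meta-map whose symbols stand for the other maps (a mapping of
    mappings)."""
    def __init__(self):
        self.world = Layer("world")   # external environment
        self.body = Layer("self")     # internal states
        self.meta = Layer("meta")     # map of the maps themselves

    def recognize(self, symbol):
        """Return what a symbol represents plus its associations,
        searching each layer in turn."""
        for layer in (self.world, self.body, self.meta):
            if symbol in layer.symbols:
                return layer.symbols[symbol], sorted(layer.links[symbol])
        return None

e = Entity()
e.world.represent("red-patch", "photon pattern at sensor")
e.world.represent("apple", "round red object")
e.world.associate("red-patch", "apple")
e.meta.represent("world-map", e.world)  # a mapping of a mapping

feature, assoc = e.recognize("apple")
# feature -> "round red object"; assoc -> ["red-patch"]
```

The point of the sketch is only structural: recognition here is nothing over and above lookup plus association, which is exactly the place where the philosophical question bites — at what stage, if any, would such processing amount to awareness of what is signified?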
> In short, if you want to study the mind, you have to investigate step 1
> (how we syntactify the world).