[Serious Phil] Presupposing Experiential Subsystems
SWMirsky at aol.com
Thu May 31 15:02:29 CDT 2012
For some reason, responses are not showing up with alacrity on the list today. As I have some things to attend to, I don't want to hang around here all day waiting for my most recent post's appearance, so I'll take another crack at it and put it up again:
--- In Phil-Sci-Mind at yahoogroups.com, "larry_tapper" <Philscimind at ...> wrote:
> PDJ > > In my last reply I omitted to note an interesting omission. In MB&P, Searle notes three Replies to the CRA. None of them is directly a complaint that the CR is underspecified. Why would no one have given him that response if it is so obvious?
> > >
> > >
> PDJ> How about answering the question, Stu? "Why would no one have given him that response if it is so obvious?"
> > >
> SWM> "Because philosophy is very complex and subtle, and even smart people take a while sorting all the issues out at times. There is a very strong intuition on Searle's side. However, I agree with Dennett that the system reply is the right one, but the early versions of it that I've seen missed the point that the CR qua system fails on the basis of the specs Searle gives it."
> IIRC you have the facts quite wrong here.
Do I? Well, it was PDJ who asserted that the equivalent of the Bicycle Reply (underspecking) was NOT considered earlier on, not yours truly, though I know you would dearly love to have it that I am wrong on the facts. As you should recall, I returned to philosophy only in recent years, and my exposure to Searle's CRA dates from circa 2001 or thereabouts. I had a lot of catching up to do, which I have readily confessed (many, many times in these discussions).
Moreover, my response to PDJ above points out that the underspecking reply is not quite so new as PDJ made out, since Dennett made it at least by the beginning of the second decade after the CRA was initially presented.
> As I recall, what you call the "underspecking" response was a quite commonplace response to Searle's original statement of the CRA, from day one. I am quite sure, for example, that we can find Hofstadter making this point back in the 70s when I was following these things more closely. Hofstadter pointed out that a system that could pass the Turing Test would have to be far more complex than the mere squiggle-matcher Searle apparently described in his early presentation. This is not a terribly subtle point, of the sort that would take decades to notice.
I NEVER CLAIMED IT DID. I only claimed that it took me a while to see it after my initial exposure to Searle. Your favorite Searle defender, PDJ, on the other hand, is the fellow who wrote:
"In MB&P, Searle notes three Replies to the CRA. None of them is directly a complaint that the CR is underspecified. Why would no one have given him that response if it is so obvious?"
Perhaps you got a little confused?
> PB's much vaunted (by you) Bicycle Reply says absolutely nothing that Hofstadter didn't say 35 years ago.
Again, bubby, I never claimed PB's comment broke new ground, just that it went to the heart of the matter in my view, i.e., it was a nice pithy way of putting it.
What's got your back up these days? Oh, right, you don't like my posting methods or style or something to do with me personally and you carry this antipathy to such a degree that it tends to confuse you when you read anything to do with me. Sorry, it must have slipped my mind for a moment!
> So why didn't Searle list underspecking as one of the major early objections to the CRA? The simple answer IMO is that it *wasn't* a
> major objection to the CRA.
Oh, you mean Dennett's take on this doesn't count as "major"? Somehow I had got the impression that he was one of the heavyweight thinkers in this field and that he carried at least as much credibility and gravitas as Searle!
Well, here's another possibility: Searle didn't take it on because he didn't see the implications. Another: he did, but realized there wasn't a good response to it and so passed it by. Both are at least as reasonable as your supposition that it wasn't a "major" objection.
You know, this post of yours, attacking me for something your pal PDJ actually said, reminds me of another time, not so long ago on Analytic, when you asserted that my denial of PDJ's claim that I was "complaining about Frege" (a claim you had wrongly imputed to Joe) amounted to deliberate dishonesty. Your reasoning, once you'd gotten straight who said what, was that PDJ didn't really mean I was "complaining about Frege" when he asserted that I was doing so, but only that I wasn't accepting his argument that "referent" could only be used, in discussions on Analytic, in the sense Frege gives it, and that citing ordinary English usages for what I meant was dishonest on my part.
And you purport to wonder why I have come to believe that you think you have some kind of score to settle with me? Try not to let your personal antipathies cloud your judgment this time, eh?
> It was just a most likely justified complaint about a misleading aspect of Searle's presentation. As I argued in my previous post, if the original CRA was underspecked, that is only, at worst, because Searle attempted to pull a rhetorical fast one, exaggerating the simplicity of AI software designs to sway a wider audience.
In later iterations he was still pulling it then, to be sure. We have even, back on Analytic, linked to fairly recent on-line lectures by him in which he was still making the CRA in the same terms. I guess he kind of likes how it sounds.
> But we all know perfectly well that if we add complexity to the CR software as described, Searle would (and did) still make essentially the same argument, based on the intrinsic nature of software. In that case, why make such a fuss about underspecking?
Because it's a very good candidate for the reason why the CR doesn't do what it pretends to do, i.e., explain consciousness as a system-level phenomenon. The problem turns out to be the scope of the system in use, not the hardware- and software-driven processes that implement it.
> The only point of it is apparently to chastise Searle for rhetorical trickery in his original presentation --- the real issues are not being touched.
On the contrary, the "real issue" is precisely that, i.e., how should we think about consciousness? Is it grounded in physical phenomena (and so physically explainable), or is it a stand-alone something that co-exists with physical phenomena? The former answer is leaner and sufficient to explain consciousness qua the Dennettian model. The latter is more complicated and, finally, doesn't even explain consciousness, although it obliges us to change how we explain the world.
> I am not taking sides here in the matter Dennett v. Searle. I agree with you, of course, that how we define 'consciousness' is part of the crux of the matter, as well as how it could be an emergent property of a system with insensate parts. All I am saying is that the underspecking argument, as presented by you, is rather pointless, and that the problem has been compounded by your dodgy presentation of the "upspecking" that would need to be done.
Well you're entitled to your view on this. My claim about "upspecking" has been spelled out in some detail. It's not about the nature of programming but about the uses the programming is put to. I have never claimed we need more than a UTM, if Dennett is right, contra PDJ's (and your) assertions to the contrary.
> Your pal Bill Modlin understood all this perfectly, and in my opinion he completely nailed it in the Analytic post, addressed to you, which I quoted earlier:
> "What I pointed out was that Searle has explained, and apparently has
> intended from the beginning, that his CR "rules" are to include any sort of complexity that anyone wishes to add. We do not need a "more complex system"; as he says, we are to assume it is already there, that we are free to use any specification of our choice.
> "He says that no system of "mere computations", no matter how complex, will ever come to understand what it is doing. He feels that computation is the wrong sort of thing to produce understanding, that there is "something missing" that cannot be supplied by any elaboration of the computational scheme.
And I have never disputed that that IS Searle's position. What I have said is that, in taking it, Searle misses the point because he commits himself to the view that consciousness is irreducible in the case of computation, while he does not stick with that view in the case of brains. So he is in self-contradiction here.
> "Your arguments have been mostly about adding different more powerful
> computations to what you interpret as his intended implementation,
As I once told Bill privately, that is not what my arguments have been about. In this, I believe, he was misled by the arguments made against my position by some, like PDJ, which appear to counter THAT position. But that isn't my position. It is not "more powerful computations" that I say are missing but "more processes doing more things in a certain way", i.e., a more powerful SYSTEM. I do not make any claim which suggests qualitative differences between types of computations. I claim that, if consciousness IS a system-level phenomenon (where a system is defined as an array of processes doing certain things in a certain way), then an array of processes that don't do those things in that way is going to be inadequate (the bicycle, not the airplane).
> but since his expressed intent was to allow all that you say is needed, and his assertion is that this does not change the argument, then you are not addressing his argument.
> "Worse, you still do not seem to have fully appreciated that most of
> your suggested "improvements" would not make a real difference. You confuse hardware with algorithms, and still seem to feel that somehow there are things that are possible with a complex parallel physical mechanism that are not possible with a simple sequential one. Which is false, and makes most of your arguments pointless.
As I've said before, I think Bill misunderstood my position, supposing that my point about the advantages of a massively parallel processing platform implied a claim that this qualitatively altered the nature of the computation going on in it. Although this was one of PDJ's long term refrains against my view in the old days, in fact I never said any such thing. I have said that if consciousness can be explained as a system level feature, then we need to look to the scope of the system in order to produce it, not to the constituent elements that implement the system.
> "I think Searle is wrong, for reasons that have nothing to do with complexity or speed....
Yes. Bill has his criticisms of Searle, and he is hardly alone. Note that Bill was interested in producing understanding via an algorithm. That is, he was arguing that if you could program in the right instructions, the machine would understand. But he was not interested in consciousness per se, in what we have sometimes called here "experience". As I mention in a nearby post, there is a distinction to be made between an activity that could be described as understanding and the experience of understanding something. It's the latter that I have been addressing, although the former has some bearing on it, of course.
> [various criticisms of Searle follow]
> "...If anyone cared we could probably elaborate more ways that he is wrong, but I have become tired of this discussion and don't feel like bothering."
Thanks for sharing again Larry. Perhaps you can convince Bill to resume active posting to Analytic now that you have finally cleaned out the detritus like me! Maybe you all can finally address his issues instead of spending the bulk of your time justifying your on-line behavior toward me to him? I'm sure he'd appreciate the change of pace.