The Symbol Grounding Problem



How will we know that we have the right computer program? It will have to be able to pass the Turing Test (TT) (Turing 1950). That means it will have to be capable of corresponding with any human being as a pen-pal, for a lifetime, without ever being in any way distinguishable from a real human pen-pal. It was in order to show that computationalism is incorrect that Searle (1980) formulated his celebrated "Chinese Room Argument," in which he pointed out that if the Turing Test were conducted in Chinese, then he himself, Searle (who does not understand Chinese), could execute the very same program that the computer was executing, without knowing what any of the words he was manipulating meant.

So if there's no meaning going on inside Searle's head when he is implementing the program, then there's no meaning going on inside the computer when it is the one implementing the program either, computation being implementation-independent.

How does Searle know that there is no meaning going on in his head when he is executing the TT-passing program? Exactly the same way he knows whether there is or is not meaning going on inside his head under any other conditions: He understands the words of English, whereas the Chinese symbols that he is manipulating according to the program's rules mean nothing whatsoever to him (and there is no one else in his head for them to mean anything to).

The symbols that are coming in, being rulefully manipulated, and then being sent out by any implementation of the TT-passing computer program, whether Searle or a computer, are like the ungrounded words on a page, not the grounded words in a head.

Note that in pointing out that the Chinese words would be meaningless to him under those conditions, Searle has appealed to consciousness. Otherwise one could argue that there would be meaning going on in Searle's head under those conditions, but that Searle himself would simply not be conscious of it.

That is called the "Systems Reply" to Searle's Chinese Room Argument, and Searle rightly rejects the Systems Reply as being merely a reiteration, in the face of negative evidence, of the very thesis (computationalism) that is on trial in his thought-experiment: Are words in a running computation like the ungrounded words on a page, meaningless without the mediation of brains, or are they like the grounded words in brains?

And Searle is reminding us that under these conditions (the Chinese TT), the words in his head would not be consciously meaningful; hence they would still be as ungrounded as the inert words on a page.

So if Searle is right that (1) both the words on a page and those in any running computer program (including a TT-passing computer program) are meaningless in and of themselves, and hence that (2) whatever it is that the brain is doing to generate meaning, it can't be just implementation-independent computation, then what is the brain doing to generate meaning (Harnad 2001a)? To answer this question we have to formulate the symbol grounding problem itself (Harnad 1990). First we have to define "symbol": A symbol is any object that is part of a symbol system.

A symbol system is a set of symbols and syntactic rules for manipulating them on the basis of their shapes (not their meanings). The symbols are systematically interpretable as having meanings and referents, but their shape is arbitrary in relation to their meanings and the shape of their referents. A numeral is as good an example as any: numerals (e.g., "1", "2", "3") bear no resemblance in shape to the quantities they can be interpreted as denoting. It is critical to understand the property that the symbol-manipulation rules are based on shape rather than meaning (the symbols are treated as primitive and undefined, insofar as the rules are concerned), yet the symbols and their ruleful combinations are all meaningfully interpretable.
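This shape-based manipulability can be illustrated with a minimal sketch (my own illustration, not from the original text): a toy symbol system of unary "stroke" numerals, in which the rule for "+" simply deletes the "+" token and concatenates the strokes. The rule never consults what the strokes mean, yet we can systematically interpret its output as addition.

```python
# A toy formal symbol system: unary numerals manipulated purely by shape.
# The rewrite rule treats "|" and "+" as primitive, undefined marks.

def rewrite_add(expr: str) -> str:
    """Apply the shape-based rule 'X+Y' -> 'XY' to stroke strings."""
    left, right = expr.split("+")
    # Only the shapes of the symbols are checked, never their meanings.
    assert set(left) <= {"|"} and set(right) <= {"|"}, "only stroke symbols allowed"
    return left + right  # pure concatenation of shapes

result = rewrite_add("|||+||")
print(result)       # |||||
print(len(result))  # 5
```

The interpretation "3 + 2 = 5" is systematically available to us, but nothing in the rule itself makes contact with quantities: the "5" is in our heads, not in the symbol system.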

It should be evident in the case of formal arithmetic that, although the symbols make sense, that sense is in our heads and not in the symbol system.

The numerals in a running desk calculator are as meaningless as the numerals on a page of hand-calculations.

Only in our minds do they take on meaning (Harnad 1994). Systematic interpretability is a remarkable property, but it is not the same thing as meaning, which is a property of certain things going on in our heads. Another symbol system is natural language (Fodor 1975). On paper, or in a computer, language too is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words.

But in the brain, meaningless strings of squiggles become meaningful thoughts. I am not going to be able to say what had to be added in the brain to make symbols meaningful, but I will suggest one property, and point to a second.

One property that the symbols on static paper or even in a dynamic computer lack, but that symbols in a brain possess, is the capacity to pick out their referents. This is what we were discussing earlier, and it is what the hitherto undefined term "grounding" refers to. To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities -- the capacity to interact autonomously with that world of objects, events, actions, properties and states that its symbols are systematically interpretable (by us) as referring to.

It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations. The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents, rather than only indirectly, through the mediating minds of external interpreters like us.

Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to (see entry for Categorical Perception).

To categorize is to do the right thing with the right kind of thing. The categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the nonmembers. These feature-detectors must either be inborn or learned. The description or definition of a new category, however, can only convey the category and ground its name if the words in the definition are themselves already grounded category names.
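As a minimal sketch of what a learned feature-detector might look like (my own illustration, assuming toy numeric "sensorimotor features" rather than any model from the text), a simple perceptron can learn weights that reliably separate category members from nonmembers:

```python
# A learned feature detector: a perceptron trained on labeled examples,
# each a pair of feature values with a member/nonmember label.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((f1, f2), label) pairs, label 1=member, 0=nonmember."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (f1, f2), label in samples:
            pred = 1 if w1 * f1 + w2 * f2 + b > 0 else 0
            err = label - pred                 # 0 when already correct
            w1 += lr * err * f1                # nudge weights toward the
            w2 += lr * err * f2                # features of misclassified items
            b  += lr * err
    return lambda f1, f2: w1 * f1 + w2 * f2 + b > 0

# Members have high values on feature 1; nonmembers do not.
data = [((0.9, 0.2), 1), ((0.8, 0.7), 1), ((0.1, 0.6), 0), ((0.2, 0.1), 0)]
is_member = train_perceptron(data)
print(is_member(0.85, 0.4))  # True
print(is_member(0.15, 0.4))  # False
```

The trained detector "does the right thing with the right kind of thing" for new inputs, having extracted the feature that distinguishes members from nonmembers; an inborn detector would simply be the same function with its weights fixed in advance.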

So ultimately grounding has to be sensorimotor, to avoid infinite regress (Harnad 2005). But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home to feel feelings or mean meanings (Harnad 1995).
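The regress argument can be sketched in code (again my own hypothetical illustration, with stand-in "detectors" in place of real sensorimotor capacities): a new name can be grounded by a definition, but only if every defining word is already grounded, bottoming out in direct detectors.

```python
# Directly grounded names: stand-ins for sensorimotor feature detectors
# operating on toy "percepts" (dicts of feature readings).
grounded = {
    "striped": lambda percept: percept["stripes"] > 0.5,
    "horse":   lambda percept: percept["horse_shape"] > 0.5,
}

def define(name, *defining_words):
    """Ground a new name as the conjunction of already-grounded names."""
    # Raises KeyError if any defining word is ungrounded: the chain of
    # definitions must bottom out in the sensorimotor detectors above.
    detectors = [grounded[w] for w in defining_words]
    grounded[name] = lambda percept: all(d(percept) for d in detectors)

# "zebra" = "horse" + "striped": grounded without direct experience of zebras,
# because its defining words are themselves grounded.
define("zebra", "horse", "striped")
print(grounded["zebra"]({"stripes": 0.9, "horse_shape": 0.8}))  # True
print(grounded["zebra"]({"stripes": 0.1, "horse_shape": 0.8}))  # False
```

If the base dictionary were empty, no definition could ever succeed: that is the infinite regress that direct sensorimotor grounding is needed to stop.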

And that's the second property, consciousness, toward which I wish merely to point, rather than to suggest what its underlying mechanism and causal role might be. The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science.

But the problem of explaining how consciousness can play an independent role in doing so is probably insoluble, except on pain of telekinetic dualism. Perhaps symbol grounding (i.e., the robotic capacity to pass the TT) is enough to ensure that conscious meaning is present; perhaps not. But in either case, there is no way we can hope to be any the wiser -- and that is Turing's methodological point (Harnad 2001b, 2003, 2006).

References

Cangelosi, A. & Harnad, S. (2001) The adaptive advantage of symbolic theft over sensorimotor toil: Grounding language in perceptual categories. Evolution of Communication 4(1): 117-142.

Cangelosi, A., Greco, A. & Harnad, S. (2000) From robotic toil to symbolic theft: grounding transfer from entry-level to higher-level categories. Connection Science 12(2): 143-162.

Frege, G. (1892) On sense and reference.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1994) Computation is just interpretable symbol manipulation; cognition isn't. Minds and Machines 4: 379-390. (Special Issue on "What Is Computation")

Harnad, S. (1995) Why and how we are not zombies. Journal of Consciousness Studies 1: 164-167.
