On Intentionality, Symbol Grounding and Computationalism

Guide : Prof. Amitabha Mukerjee


Homework : 1

Group-A

Abhijit Sharang 10007
Deepak Pathak 10222
Diksha Gupta 11255
Ganesh Pitchiah Y9213
Nitica Sakharwade 10459

Presentation Slides :

Click here to download the pdf.
Click here to view prezi online.


Discussion :



The problem under consideration is the relevance of symbol grounding to the processing of natural language by computers. The discussion covered seven papers related to the problem; each is summarised below.

1. Saussure, 1916 :

This paper brings to light the idea that language is not merely the vocal naming of objects. It states that the linguistic sign of any object or action is closely associated with a psychological concept of the same. Every linguistic sign is a two-sided psychological entity: a link between the "signifier", the sound image, and the "signified", the concept to which it points. It thereby enriches the notion of language by arguing that language is not just a matter of syntax.

2. Peirce, 1932 :

Peirce talks about the Sign, which is a representation of an object or idea of the real world; its effect upon the observer who interprets it is called the interpretant.
                    Object(O) -> Sign(S) -> Interpretant(I)
He further divides signs into Icon (for example, a pencil stroke representing the idea of a line), Index (for example, a bullet hole in a wall) and Symbol (for example, speech). Icons may represent abstract ideas or real objects, such as letters. An Index would lose its character as a sign if its object were removed (no shot, no bullet hole), but not if there were no interpretant: the hole is there whether or not anyone attributes it to a shot. It involves associations and spatial connections; in 'The fire is 10km away from here', the word 'here' is an index, and it directs attention by blind compulsion. Some degree of indexicality is necessary for understanding.
Finally, a Symbol would lose the character which makes it a sign if there were no interpretant: the meaning of a sentence, for instance, exists only insofar as it is interpreted.

3. Searle, 1990 :

In his paper, Searle defends his criticism of Strong AI, the view according to which manipulation of formal symbols is a necessary and sufficient condition for cognitive processes. Searle argues that it is not a sufficient condition: mere symbolic manipulation does not amount to a cognitive process. He goes on to state that however closely a symbolic manipulator might simulate a cognitive process, it will never duplicate it. His postulate is that brains are not mere computational entities but active symbol grounders; in this way, brains cause minds.

4. Harnad, 1990 :

Starting from Searle's Chinese Room Argument (CRA), Harnad uses the problem of learning Chinese from a Chinese/Chinese dictionary alone to show that grounding is important. He proposes a possible mechanism using two kinds of non-symbolic representations: (1) iconic and (2) categorical. Category-member relationships are then modelled as symbolic representations. He examines how the existing approaches, symbolic AI and connectionism, can complement each other's strengths and weaknesses, and finally proposes a hybrid solution in which connectionist neural nets provide the grounding (iconic and categorical), so that an "intrinsically" dedicated symbol system emerges instead of a parasitic one.

5. Cangelosi; Greco; Harnad, 2002 :

This paper builds upon Harnad's previous work on the Symbol Grounding Problem and his proposed hybrid solution. Using neural networks, it demonstrates Harnad's hypothesis that symbol grounding can be computationally modelled with a bottom-up connectionist approach. It describes a computational model that grounds some basic symbols (categorization by training) and then demonstrates grounding transfer, supporting Deacon's view of a hierarchical referencing system. The paper also discusses "symbolic theft" and claims, on the basis of an experiment, that it is more adaptive (i.e. it improves categorical learning) than "sensorimotor toil".
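The contrast between toil and theft can be sketched computationally. The toy below is only an illustration of the idea, not the paper's actual model: it substitutes a simple nearest-prototype categorizer for the paper's neural networks, and all category names ("horse", "striped", "zebra") and feature values are invented for the example. Two basic symbols are grounded directly in simulated sensory data ("toil"); a third symbol is then acquired purely by symbolic combination of the grounded ones ("theft"), with no direct sensory training.

```python
# Illustrative sketch of sensorimotor toil vs. symbolic theft.
# Assumption: a nearest-prototype categorizer stands in for the
# paper's neural networks; categories and features are invented.
import random

random.seed(0)

# Each stimulus has two "sensorimotor" features: (horse-shape, stripedness).
def horse_sample():       # horse-shaped; stripedness irrelevant
    return [random.uniform(0.8, 1.0), random.uniform(0.0, 1.0)]

def not_horse_sample():
    return [random.uniform(0.0, 0.2), random.uniform(0.0, 1.0)]

def striped_sample():     # striped; shape irrelevant
    return [random.uniform(0.0, 1.0), random.uniform(0.8, 1.0)]

def not_striped_sample():
    return [random.uniform(0.0, 1.0), random.uniform(0.0, 0.2)]

def mean(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(2)]

def ground(positives, negatives):
    """'Sensorimotor toil': learn a detector from labelled stimuli."""
    return mean(positives), mean(negatives)

def detect(detector, x):
    # Categorize by which prototype (positive or negative) is closer.
    pos, neg = detector
    d_pos = sum((x[i] - pos[i]) ** 2 for i in range(2))
    d_neg = sum((x[i] - neg[i]) ** 2 for i in range(2))
    return d_pos < d_neg

# Ground the two basic symbols directly in (simulated) sensory data.
horse_det = ground([horse_sample() for _ in range(20)],
                   [not_horse_sample() for _ in range(20)])
striped_det = ground([striped_sample() for _ in range(20)],
                     [not_striped_sample() for _ in range(20)])

# 'Symbolic theft': "zebra" is never trained on stimuli; it is defined
# purely symbolically as "horse AND striped", inheriting the grounding
# of its constituent symbols -- the grounding transfer of the paper.
def is_zebra(x):
    return detect(horse_det, x) and detect(striped_det, x)
```

The point of the sketch is that `is_zebra` correctly classifies striped-horse-shaped stimuli it has never seen, because its meaning is borrowed from symbols that were grounded the hard way.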

6. Dietrich , 1990 :

This paper contests Searle's definition of intentionality. According to Dietrich, if computationalism and its consequences are assumed to be true, then both humans and machines possess intentionality. The argument is that computationalism describes every process as a series of states produced by sub-functions; the interaction of these functions necessitates the interpretation of the states they produce, which brings intentionality to the system. The paper also clarifies the distinction between computationalism and computerism, the latter being mistakenly used to launch attacks on the former. Having described the properties of computationalism, the paper concludes that the distinction between humans and computers, if any, might be made on the basis of consciousness.

7. Kelley, 2003 :

This paper is set in the debate between symbolic and sub-symbolic representations for modelling human cognition. Drawing on biological evidence, it argues for an integrated architecture: the biological perspective shows that both are part of the same intellectual continuum, with sub-symbolic representations at the lower end and symbolic representations at the higher end. In the human context, the examples of reflex actions and language processing are discussed.
The paper also cites other current connectionist approaches to modelling complex cognitive phenomena, for instance the use of synchronous firing of related neural networks to represent conceptual relationships. Finally, it remains open to the possibility that a connectionist system of sufficient complexity could model the whole of human cognition.

Overall Discussion:

Searle attacks the capability of any form of symbolic manipulation to model cognitive processes. However, we feel that symbols can be divided into two categories: implementation-level symbols, like the binary 0 and 1, which are simply syntactic entities, and referential symbols, which are grounded in the mind and hence carry the semantics that the brain attaches to real-world objects. Since such symbol grounding is, seemingly, the crucial difference between the human brain and a machine, we are of the opinion that if a similar grounding of symbols were accomplished in computers, we might be able to achieve human cognitive abilities in machines. Harnad has demonstrated as much by proposing a hybrid model of symbolic AI and connectionism, which is strengthened by the work of Kelley (2003), who draws an analogy between symbolic AI and language processing in the brain, and between connectionism and reflex actions from the lower part of the central nervous system.
At the same time, we also concur with Dietrich's arguments and his view of intentionality. He explains the presence of intentionality in machines by taking into account the different definitions of intentionality proposed by different authors over the years. According to computationalism, a cognitive process can be modelled with the help of functions, which can further be analysed as interacting sub-functions. This interaction necessarily requires the machine to interpret its own symbols, thus affirming the presence of intentionality in machines.
Harnad's paper was published during the same period as Dietrich's. His interpretation of intentionality in terms of symbol grounding, and the hybrid solution he proposes in this regard, seems more appealing to us.
We, however, do not take a stand on the issue of free will. It might be the case that free will does not exist in humans either, which would render moot the question of whether humans are superior to computers on the basis of free will. This leaves the open-ended question of what consciousness is.

Minutes of Group Meetings :

Discussion 1
Discussion 2
Discussion 3




References :

Additional :
  • [Kelley, Troy D., 2003] "Symbolic and Sub-Symbolic Representations in Computational Models of Human Cognition: What Can Be Learned from Biology?" Theory & Psychology 13.6 (2003): 847-860.

  • [Dietrich,E. 1990] "Computationalism" Social Epistemology 4: 135-54

Original :
  • [Angelo Cangelosi, Alberto Greco and Stevan Harnad , 2002] "Symbol Grounding and the Symbolic Theft Hypothesis"; In Cangelosi A & Parisi D. Simulating the Evolution of Language. London: Springer

  • [Harnad S. , 1990]; "The Symbol Grounding Problem"; Physica D 42: 335-346.

  • [Saussure, Ferdinand de, 1916]; "Nature of the Linguistic Sign" ; In Charles Bally & Albert Sechehaye, Cours de linguistique generale, McGraw Hill Education.

  • [Peirce C. S., 1932] ; "The icon, index, and symbol" ; Collected Papers of Charles Sanders Peirce, 2, 156-173.

  • [John R. Searle , 1990]; "Is the Brain's Mind a Computer Program?" ; In Scientific American, pp. 26--31.

  • [John R. Searle, 1984]; "Minds, Brains and Science", (Book)