What Is Symbol Grounding?

Meg Kramer

Symbol grounding is the connection of symbols, such as written or spoken words, with the objects, ideas, or events to which they refer. The symbol grounding problem asks how words come to be associated with their meanings and, by extension, how consciousness is related to the understanding of symbolic meaning. Because it touches on these questions of meaning and consciousness, the symbol grounding problem is often discussed in the context of artificial intelligence (AI).

The study of symbols, as well as the processes by which they acquire meaning and are interpreted, is known as semiotics. Within this field of study, a branch called syntactics deals with the properties and governing rules of symbolic systems, as in the formal properties of language. The symbol grounding problem is understood within the framework of this discipline, which includes semiosis, the process that allows an intelligence to understand the world through signs.


The symbol grounding problem was first defined in 1990 by Stevan Harnad of Princeton University. In short, it asks how the meanings of the symbols within a system, such as a formal language, can be made intrinsic to the system itself. Although the operator of the system might understand the symbols' meanings, the system itself does not.

Harnad refers to John Searle’s classic "Chinese room" thought experiment to illustrate his point. In this experiment, Searle, who has no knowledge of the Chinese language, is given a set of rules that allow him to correctly respond, in written Chinese, to questions that are posed to him, also in written Chinese. An observer outside of the room might come to the conclusion that Searle understands Chinese very well, but Searle is able to manipulate Chinese symbols without ever understanding the meaning of either the questions or the answers.

According to Harnad, the experiment is analogous to the situation of an AI. A computer might be able to produce the correct answers to external prompts, but it is acting on its programming, like Searle following the rules that he was given in the thought experiment. The AI is able to manipulate symbols that have meaning to an outside observer, but it has no semantic understanding of the symbols. The AI therefore cannot be said to possess consciousness, because it does not actually interpret the symbols or understand what they refer to. It does not achieve semiosis.
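To make the distinction concrete, here is a minimal sketch in Python. It is not taken from Harnad or Searle; the rule table and phrases are invented for illustration. The program, like Searle in the room, pairs input symbols with output symbols purely by rule, so it can produce fluent-looking replies while the symbols remain entirely ungrounded for it.

```python
# Illustrative only: a "Chinese room" as a lookup table. The entries
# below are invented examples, and the program treats them as opaque
# tokens; nothing here connects a symbol to the object, idea, or
# event it refers to.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "今天天气如何？": "今天天气很好。",   # "How is the weather?" -> "The weather is fine."
}


def respond(symbols: str) -> str:
    """Return the rule book's output for the given input symbols.

    The lookup is pure symbol manipulation: no step grounds the
    symbols in meaning, so nothing here "understands" either the
    question or the answer.
    """
    return RULE_BOOK.get(symbols, "对不起。")  # fallback: "Sorry."


if __name__ == "__main__":
    print(respond("你好吗？"))  # a fluent reply, produced without comprehension
```

An outside observer reading the program's replies might credit it with understanding Chinese, but the code makes clear that only string matching has occurred; this gap between correct symbol manipulation and grounded meaning is exactly what the symbol grounding problem names.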
