Hi Dr. Huemer, you said that to understand a word is to have formed the right dispositions to apply the word in approximately the same circumstances in which people in one's speech community apply it. Wouldn't this suggest that AI language models understand the meaning of words? After all, ChatGPT has a strong disposition to use words in the same circumstances in which people in the speech community apply them. Yet from what I've heard you say about the Chinese room thought experiment, AI language models don't have genuine linguistic understanding. So how would you resolve this tension?
How do your views on the resolution of the Sorites Paradox fit into this scheme? Would you say that heapness is a concept with fuzzy boundaries, or that the word “heap” ambiguously denotes many precise concepts?
Different people in our speech community have slightly different concepts; in addition, each of those concepts is itself vague, in that it has non-extreme degrees of satisfaction in a range of cases. The exact satisfaction profile cannot be reproduced using other concepts, which makes the concept undefinable.
Michael writes in the conclusion of chapter 3 of Paradox Lost on the Sorites Paradox: "Mental representations are not simply satisfied or unsatisfied by the world; they have degrees of satisfaction. A mental state’s profile of degrees of satisfaction, as a function of the state of the world, may lack natural joints. To assign a propositional content to such a representation is to give at best an approximate characterization of its meaning; strictly speaking, no particular proposition captures the meaning of the mental state. Language, in turn, inherits its vagueness from thought: because many thoughts lack propositional contents, many sentences lack propositional contents. They are not meaningless; they merely have a kind of meaning that fails to single out a definite proposition. Thoughts and statements without propositional content cannot be true or false in the strictest sense of “true” and “false”. They can, however, be satisfied to a high degree, and in most contexts, this suffices to make them appropriate. In most contexts, we may even call them “true”, using a loose notion of truth. But when we encounter sorites arguments, we must insist on the distinction between strict truth and mere approximate truth. The sorites argument exploits the gap between the two to derive an absurdity, applying the logical laws for strict truths to mere approximate truths."
Huemer, Michael. Paradox Lost: Logical Solutions to Ten Puzzles of Philosophy (p. 136). Springer International Publishing. Kindle Edition.
Hence, people are constantly talking past each other. When we use the same word, we are only very approximately thinking the same idea.
What is the origin of the Jacques-Louis David conglomeration of images?