Discussion about this post

Chad:

Hi Dr. Huemer, you said that to understand a word is to have formed the right dispositions to apply the word in approximately the same circumstances in which people in one's speech community apply it. Wouldn't this suggest that AI language models understand the meaning of words? After all, ChatGPT has a strong disposition to use words in the same circumstances in which people in the speech community apply them. Yet from what I've heard you say about the Chinese room thought experiment, AI language models don't have genuine linguistic understanding. So how would you resolve this tension?

technosentience:

How do your views on the resolution of the Sorites Paradox fit into this scheme? Would you say that heapness is a concept with fuzzy boundaries, or that the word “heap” ambiguously denotes many precise concepts?

