I think that grammatical categories can be used to derive meaning. But the problem, as I see it, is that strict categorical interpretation will not provide the kinds of information that an AGI program will need. For example, a sarcastic remark means just the opposite of what is literally said. For another example, we sometimes wonder whether a prejudice lies behind a remark. On the other hand, the assumption that someone makes critical remarks because he is motivated by prejudice, if made without evidence, can lead to paranoia. Still, I think this kind of analysis can give us more insight into how 'understanding' is formed than trying to base our theories on strict mathematical methods, which have worked especially well for technology that produces effects in real, measurable space.
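To make the sarcasm point concrete, here is a minimal sketch (every name and word list in it is a hypothetical illustration, not a real NLP model): a strict categorical reading assigns a remark the literal polarity of its words, but a sarcasm cue should flip that reading to its opposite.

def interpret(remark: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral) for a remark."""
    LITERAL_POLARITY = {"great": +1, "brilliant": +1, "terrible": -1}
    SARCASM_CUES = {"oh sure", "yeah right", "nice going"}
    text = remark.lower()
    # Step 1: strict categorical reading -- sum the literal word polarities.
    polarity = sum(v for w, v in LITERAL_POLARITY.items() if w in text)
    # Step 2: a sarcasm cue means the intended meaning is the opposite of
    # the literal one, so invert the sign.
    if any(cue in text for cue in SARCASM_CUES):
        polarity = -polarity
    return polarity

print(interpret("That was a brilliant plan."))           # +1 (literal)
print(interpret("Oh sure, that was a brilliant plan."))  # -1 (sarcasm flips it)

The point of the toy is that the categorical machinery alone gets the second sentence exactly wrong; the extra contextual test is what strict categorical interpretation lacks.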
How do I decide that someone has made a remark out of prejudice? First, did he explicitly make a prejudicial remark? Did his statement have any content-value other than a personal criticism? Could his remark be attributed to projection or scapegoating? Using analytical methods that correspond to these kinds of questions, we can at least begin to assemble some actual evidence that might support the theory that there was at least a trace of prejudice behind the comment. On the other hand, we can examine whether the remark had any content-value that is at all understandable. Did it express a value that the speaker had presented before and which makes some sense? If the remark was a personal criticism, was it a criticism that would be applicable to anyone? I believe this kind of analytical projection is the best bet for developing AGI thinking.

However, something is missing. A computer program could, hypothetically, analyze data and find strong categorical elements in its input. But can it use semantic insight to derive, on its own, the meaning of a statement or the meaningful possibilities of a situation? It may not be obvious, but why not? Why not just start with very simple projections of conjectural relations? I think it would work if it weren't bogged down by complexity.
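The questions above can be read as an evidence checklist, and a rough sketch of that reading is below. All the field names, tests, and the threshold are my own illustrative assumptions; the one structural idea taken from the argument is that several independent tests must converge before the prejudice hypothesis is entertained, which guards against the evidence-free leap that leads to paranoia.

from dataclasses import dataclass

@dataclass
class Remark:
    text: str
    explicitly_prejudicial: bool        # did it name a group disparagingly?
    has_content_value: bool             # any substance beyond personal criticism?
    fits_projection: bool               # attributable to projection/scapegoating?
    criticism_applies_to_anyone: bool   # would the criticism apply to anyone?

def prejudice_evidence(r: Remark) -> float:
    """Score in [0, 1]: the fraction of evidence tests the remark fails."""
    tests = [
        r.explicitly_prejudicial,           # direct evidence
        not r.has_content_value,            # no understandable substance
        r.fits_projection,                  # projection/scapegoating pattern
        not r.criticism_applies_to_anyone,  # targeted rather than general
    ]
    return sum(tests) / len(tests)

r = Remark("...", explicitly_prejudicial=False, has_content_value=True,
           fits_projection=False, criticism_applies_to_anyone=True)
# Demand converging evidence before drawing the conclusion.
verdict = "prejudice plausible" if prejudice_evidence(r) >= 0.75 else "insufficient evidence"
print(verdict)

A program like this only scores features that something else has already extracted; it does not generate the conjectural relations itself, which is exactly the missing piece the second paragraph points at.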
