Mike Tintner wrote:
I have REPEATEDLY said I am talking about defining general problem classes, rather than setting narrow-AI specialised problems. (Show me, by the way, where there has been discussion of general problem classes - I'd be very interested.)
Could you define the sense in which "general problem classes" are not like "narrow AI problems"? The reason I ask is that your last attempt to clarify what you meant was a question along the lines of "How does your system represent 'goals', 'move', 'obstacle', 'path', etc.?" - but that is clearly no longer talk about a problem class.
Where I am going with this question: I think that if you added some of the things you left out of that list (How does your system represent entities, relationships, operations, its own self, intentions, negatives, etc.? How does it learn new concepts by abstraction, analogy, generalization, etc.? What is the role of logical reasoning vs. imagistic reasoning vs. guesswork in its processing?), you would reach a stage where there was nothing that looked like a "problem to be solved" in there at all. In fact, it would look like the kind of work that, as far as I can tell, is the target of your critique.
So I am confused.
Inevitably, in short posts, there are going to be misunderstandings. I suggest you check whether you have properly understood me, and ask questions rather than jumping to dismiss me.
Questions are always good, and I have made the same comment to other people, on occasion.
I can't help noticing, though, that much of what you have said has begun with something that looks suspiciously like a jump to dismiss what others on this list are doing...?
Don't get the wrong idea: I am not dismissing you. I have some sympathy with *some* interpretations of what you have been saying. But perhaps your critiques could do with some fine-tuning, because if they are knocking both me and Goertzel sideways in the same stroke, then something weird is going on ;-).
Your conclusion, for example, that I was "sadly mistaken" - and that what I was saying about how the brain makes sense of language and information generally has all been said before by Kosslyn & co. - is nonsense. What I was talking about can be classified under the heading of "psychosemiotics" - the study of how the hierarchy of human sign systems reflects a parallel hierarchy in the human brain's information processing. That field doesn't exist yet - it's virgin territory. And the whole related area of embodied cognition in cognitive psychology is also still in its infancy.
You were discussing the relative merits of language and images in mental representations. That is THE defining statement of the Kosslyn/Pylyshyn debate. Can you clarify why you say that it is "nonsense" to point to that literature?
Your definition of "psychosemiotics" leaves me puzzled. What is the difference between the "hierarchy of human sign systems" (perhaps you mean language and its associated sign systems?) and a "parallel hierarchy in the human brain's information processing"? That sounds like the definition of psycholinguistics, a mature field that studies the relationship between external language use and internal language processing and representation...? This would not be virgin territory. Or did you mean something else?
You mention "embodied cognition". I know what this means in the context of AI, but the nearest correlate I can think of in cognitive psychology is the topic of how internal representations relate to sensory-motor systems. That is surely not something in its infancy: it is a very large chunk of what cognitive psychology is all about. Maybe you could clarify what you meant by embodied cognition?
Richard Loosemore.

----- This list is sponsored by AGIRI: http://www.agiri.org/email