Jim: Only one good question stands out in my mind. Even if the text program 
knew something about cats, would it be able to infer that cats pounce if the 
necessary information was not in the program? The trouble with this as a 
criticism is that the issue is valid for all cases of mentation. Does anyone 
who participates in this group know whether a mountain lion purrs? Does a 
mountain lion meow? Does a kangaroo make some kind of vocalization? Most of 
you do not know the answers to these questions offhand.

This is, whatever the intention, specious. We can successfully infer a vast 
amount about objects – as the tests of divergent thinking demonstrate – new 
inferences that clearly do not derive from some frame or net of words. We can 
infer that cats and mountain lions jump, snarl, stare, roll over, lie down, 
stroke with their whiskers, rub with their heads, nudge with their paws, put 
one foot on, put two feet on, put three feet on, rub with their hindquarters, 
rub with their noses, shake their heads up and down or round and round, whip 
with their tails ... etc., etc., on and on. The fact that we may well make the 
odd false new inference is neither here nor there. The wonder is that we can 
make any. We are more or less infinitely generative about the possible actions 
of any given object. All AI programs, all algorithms, all text-based programs, 
OTOH, have zero generativity. That is the unsolved problem of AGI.

You and others here – [possibly everyone, because I’m beginning to wonder 
whether there is anyone here who isn’t basically a diehard GOFAI-er] – are 
claiming that it is possible from a frame of words – let’s say:

CATS – EAT – JUMP – BITE – etc.

to make new inferences as humans do – inferences that are not simply logical, 
and that transcend those that current programs make.

There’s not a chance in hell. All the above inferences were a) 
visually/imaginatively and b) bodily derived (as you might understand, if you 
were not appallingly ignorant of embodied cognitive science). Try it for 
yourself – infer some more about cats in your head – and see how it’s done.

The challenge for everyone here – not just you but Aaron, Ben et al. – is to 
show how a verbal/symbol network can generate new kinds of inference. And 
there is no excuse for evasiveness – all you need do is take something simple 
like CAT and BALL – or even just BOX and BALL – produce a small network of 
words for them – maybe twenty or so – and give us just an idea of how that set 
of words has even the slightest chance of generating new inferences.
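To make the challenge concrete, here is a minimal sketch of what such a word network might look like. Every node and relation below is hypothetical, invented only for illustration – this is not any particular AGI system. The point it demonstrates is that traversal over a hand-coded network can only recombine the predicates it was given: inheritance over is-a links yields nothing that was not typed in.

```python
# Toy semantic network for BOX and BALL: nodes are words, edges are
# hand-coded relations. All names and relations here are hypothetical,
# chosen only to illustrate the structure of the argument.
network = {
    "BALL": {"is_a": ["toy"], "can": ["roll", "bounce"], "shape": ["round"]},
    "BOX": {"is_a": ["container"], "can": ["hold", "stack"], "shape": ["cubic"]},
    "toy": {"can": ["be_played_with"]},
    "container": {"can": ["be_opened"]},
}

def infer(word, net):
    """Collect every 'can' predicate reachable from `word` by following
    is_a links -- the classic inheritance inference over a frame network."""
    facts = set(net.get(word, {}).get("can", []))
    for parent in net.get(word, {}).get("is_a", []):
        facts |= infer(parent, net)
    return facts

print(sorted(infer("BALL", network)))
print(sorted(infer("BOX", network)))
```

Run it and every predicate in the output is one that was already encoded somewhere in the network; nothing like "a ball can be put inside a box" ever appears unless someone writes that relation in by hand. That is the gap the challenge is pointing at.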

It isn’t just you – it is the whole of AGI that has been evasive here. Ben and 
others could have saved themselves years of their lives by attempting some 
modest proof of concept here.

You, Jim, haven’t been listening to what people have been telling you – they 
simply don’t understand what you’re saying – because it’s non-specific. No one 
can get a handle on it. Here is your chance to be specific.

Before you make more excuses, as you will – I should make it clear that I know 
you *can’t* be specific about your ideas. You’re 
“specific-example-challenged”. But maybe someone else would like to take up 
the challenge. This is a very good example of the central challenge of AGI.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
