Mike, you are wrong, and we have had these kinds of discussions too often.
Computer programs can make inferences. Now there might be a question of
whether the program 'understands' that it is making an inference but that would
be based on whether a computer program can be made to 'understand' anything.
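To be concrete about what I mean, here is a minimal sketch of a program making
an inference. It is nothing but a toy forward-chaining rule over a couple of
made-up facts (the facts and the rule are mine, purely for illustration), but it
does derive a statement that was never entered directly:

    # Toy forward chaining: derive facts that were never stated directly.
    # The facts and the single rule are made up for illustration only.
    facts = {
        ("cat", "is_a", "mammal"),
        ("mammal", "has", "fur"),
    }

    def infer(known):
        """Rule: if X is_a Y and Y has Z, then X has Z. Repeat until nothing new."""
        derived = set(known)
        while True:
            new = {
                (x, "has", z)
                for (x, r1, y) in derived if r1 == "is_a"
                for (y2, r2, z) in derived if r2 == "has" and y2 == y
            }
            if new <= derived:
                return derived - known
            derived |= new

    print(infer(facts))   # {('cat', 'has', 'fur')} -- derived, not typed in

Whether that counts as 'understanding' is a separate question, but it is an
inference in any ordinary sense of the word.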
The question that was interesting to me was whether or not a text-based AGI
that knew a lot about cats would make the inference that cats can pounce if
that information had not been presented directly to it. It is very unlikely,
because 'pounce' is a word that is so tightly tied to cat-like behavior that the
program would have little chance of arriving at it on its own. If
some young woman described herself as pouncing on some sale item, I would think
that she was projecting her proprioceptive feelings (and desire to be
perceived) as cat-like and even predatory about sales. So it would be very
likely that an actual AGI program that knew a lot about cats and knew that cats
pounce on prey would be capable of understanding the use of the metaphor after
a few brief exposures (and a few mistaken presumptions about what happens to the
"prey"). But it is less likely that the inference would go the other
way if the program did not know that cats pounce to start with because the word
pounce is strongly related to cat-like behavior. I am disappointed that your
follow-up comments were not stronger. Your example did make me think. And I see
now that your resentments are not coming from a serious lack of self-control but
because your point of view is in total opposition to the predominant views found
in this group. There are certain AI discussions that I am not interested in
because I have already made up my mind about certain ideas. Computers can make
inferences.

Jim Bromer

From: [email protected]
To: [email protected]
Subject: Re: [agi] Re: Summary of My Current Theory For an AGI Program.
Date: Fri, 19 Apr 2013 16:25:11 +0100
Jim: Only one good question stands out in my mind. Even if the text program knew
something about cats, would it be able to infer that cats pounce if the necessary
information was not in the program? The trouble with this as a criticism is that
the issue is valid for all cases of mentation. Does anyone who participates in
this group know if a mountain lion purrs? Does a mountain lion meow? Does a
kangaroo make some kind of vocalization? Most of you do not know the answers to
these questions offhand.
This is, whatever the intention, specious. We can successfully infer a vast
amount about objects – as the tests of divergent thinking demonstrate – new
inferences that clearly do not derive from some frame or net of words. We can
infer that cats and mountain lions jump, snarl, stare, roll over, lie down,
stroke with their whiskers, rub with their heads, nudge with their paws, put one
foot on, put two feet on, put three feet on, rub with their hindquarters, rub
with their noses, shake their heads up and down or round and round, whip with
their tails ... etc., etc., on and on.
The fact that we may well make the odd false new inference is neither here nor
there. The wonder is that we can make any. We are more or less infinitely
generative about the possible actions of any given object. All AI programs, all
algos, all text-based progs, OTOH, have zero generativity. That is the unsolved
problem of AGI.
You and others here [possibly everyone, because I'm beginning to wonder whether
there is anyone here who isn't basically a diehard GOFAI-er] are claiming that it
is possible from a frame of words – let's say:

CATS – EAT – JUMP – BITE – etc.

to make new inferences as humans do. Inferences that are not simply logical and
that transcend those that current progs make.
There's not a chance in hell. All the above inferences were a) visually/imaginatively
and b) bodily derived (as you might understand, if you were not appallingly
ignorant about embodied cog sci). Try it for yourself – infer some more about
cats in your head – and see how it's done.
The challenge for everyone here – not just you but Aaron, Ben et al – is to show
how a verbal/symbol network can generate new kinds of inference. And there is no
excuse for evasiveness – all you need do is take something simple like CAT and
BALL – or even just BOX and BALL – produce a small network of words for them –
maybe twenty or so – and give us just an idea of how that set of words has even
the slightest chance of generating new inferences.
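To be concrete about what I am asking for, here is roughly the sort of thing I
mean, a toy network I have just made up (every word and link in it is my own
choice), together with the only mechanical operation such a net supports,
chaining the stored links:

    # A small hand-built word network for CAT and BALL -- roughly twenty
    # associations, every one of them invented here purely for illustration.
    network = {
        "cat":   ["eat", "jump", "bite", "chase", "sleep", "fur", "claw"],
        "ball":  ["roll", "bounce", "round", "throw", "catch"],
        "chase": ["ball", "mouse"],
        "throw": ["ball"],
        "jump":  ["high"],
        "bite":  ["mouse"],
    }

    def reachable(start, depth=2):
        """Every word you can get to from `start` by following the stored links."""
        frontier, seen = {start}, {start}
        for _ in range(depth):
            frontier = {w for f in frontier
                        for w in network.get(f, []) if w not in seen}
            seen |= frontier
        return seen - {start}

    print(reachable("cat"))   # nothing comes out that was not typed in above

The output is just the closure of my hand-entered associations. Show me how
anything like 'a cat can nudge a ball with its paw' could ever come out of a
structure like that.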
It isn't just you – it is the whole of AGI that has been evasive here. Ben and
others could have saved themselves years of life by attempting some modest proof
of concept here.
You, Jim, haven't been listening to what people have been telling you – they
simply don't understand what you're saying – because it's non-specific. No one
can get a handle on it. Here is your chance to be specific.
Before you make more excuses, as you will – I should make it clear that I know
you *can't* be specific about your ideas. You're *specific-example-challenged*.
But maybe someone else would like to take up the challenge. This is a very good
example of the central challenge of AGI.