Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-15 Thread Richard Loosemore

Mike Tintner wrote:


Sounds a little confusing. Sounds like you plan to evolve a system 
through testing thousands of candidate mechanisms. So one way or 
another you too are taking a view - even if it's an evolutionary, "I'm 
not taking a view" view - on, and making a lot of assumptions about


-how systems evolve
-the known architecture of human cognition.


No, I think that because of the paucity of information I gave, you have 
misunderstood slightly.


Everything I mentioned was in the context of an extremely detailed 
framework that tries to include all of the knowledge we have so far 
gleaned by studying human cognition using the methods of cognitive science.


So I am not making assumptions about the architecture of human cognition; 
I am using every scrap of experimental data I can.  You can say that 
this is still assuming that the framework is correct, but that is 
nothing compared to the usual assumptions made in AI, where the 
programmer just picks up a grab bag of assorted ideas that are floating 
around in the literature (none of them part of a coherent theory of 
cognition) and starts hacking.


And just because I talk of thousands of candidate mechanisms, that does 
not mean that there is evolution involved:  it just means that even with 
a complete framework for human cognition to start from there are still 
so many questions about the low-level to high-level linkage that a vast 
number of mechanisms have to be explored.



about which science has extremely patchy and confused knowledge. I don't 
see how any system-builder can avoid taking a view of some kind on such 
matters, yet you seem to be criticising Ben for so doing.


Ben does not start from a complete framework for human cognition, nor 
does he feel compelled to stick close to the human model, and my 
criticisms (at least in this instance) are not really about whether or 
not he has such a framework, but about a problem that I can see on his 
horizon.



I was hoping that you also had some view on how a system's symbols 
should be grounded, especially since you mention Harnad, who does make 
vague gestures towards the brain's levels of grounding. But you don't 
indicate any such view.


On the contrary, I explained exactly how they would be grounded:  if the 
system is allowed to build its own symbols *without* me also inserting 
ungrounded (i.e. interpreted, programmer-constructed) symbols and 
messing the system up by forcing it to use both sorts of symbols, then 
ipso facto it is grounded.


It is easy to build a grounded system.  The trick is to make it both 
grounded and intelligent at the same time.  I have one strategy for 
ensuring that it turns out intelligent, and Ben has another; my 
problem with Ben's strategy is that I believe his attempt to ensure that 
the system is intelligent ends up compromising the groundedness of the 
system.



Sounds like you too, pace MW, are hoping for a number of miracles - IOW 
creative ideas - to emerge, and make your system work.


I don't understand where I implied this.  You have to remember that I am 
doing this within a particular strategy (outlined in my CSP paper). 
When you see me exploring 'thousands' of candidate mechanisms to see how 
one parameter plays a role, this is not waiting for a miracle, it is a 
vital part of the strategy.  A strategy that, I claim, is the only 
viable one.




Anyway, you have to give Ben credit for putting a lot of his stuff and 
principles out there and on the line. I think anyone who wants to mount a 
full-scale assault on him (and why not?) should be prepared to reciprocate.


Nice try, but there are limits to what I can do to expose the details. 
I have not yet worked out how much I should release and how much to 
withhold (I confess, I nearly decided to go completely public a month or 
so ago, but then changed my mind after seeing the dismally poor response 
that even one of the ideas provoked).  Maybe in the near future I will 
write a summary account.


In the meantime, yes, it is a little unfair of me to criticise other 
projects.  But not that unfair.  When a scientist sees a big problem 
with a theory, do you suppose they wait until they have a completely 
worked out alternative before discussing the fact that there is a 
problem with the theory that other people may be praising?  That is not 
the way of science.



Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Linas Vepstas wrote:

On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
Suppose that in some significant part of Novamente there is a 
representation system that uses probability or likelihood numbers to 
encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75) 
is supposed to express the idea that the statement [I like cats] is in 
some sense 75% true.


Either way, we have a problem:  a fact like [I like cats](p=0.75) is 
ungrounded because we have to interpret it.  Does it mean that I like 
cats 75% of the time?  That I like 75% of all cats?  75% of each cat? 
Are the cats that I like always the same ones, or is the chance of an 
individual cat being liked by me something that changes?  Does it mean 
that I like all cats, but only 75% as much as I like my human family, 
which I like(p=1.0)?  And so on and so on.


Eh?

You are standing at the proverbial office water cooler, and Aneesh 
says "Wen likes cats." On your drive home, your mind races ... does this 
mean that Wen is a cat fancier?  You were planning on taking Wen out 
on a date, and this tidbit of information could be useful ... 

when you try to build the entire grounding mechanism(s) you are forced 
to become explicit about what these numbers mean, during the process of 
building a grounding system that you can trust to be doing its job:  you 
cannot create a mechanism that you *know* is constructing sensible p 
numbers and facts during all of its development *unless* you finally 
bite the bullet and say what the p numbers really mean, in fully cashed 
out terms.


But as a human, asking Wen out on a date, I don't really know what 
"Wen likes cats" ever really meant. It neither prevents me from talking 
to Wen, nor from telling my best buddy that "...well, I know, for
instance, that she likes cats..."  

Lack of grounding is what makes humour funny; you can do a whole 
Pygmalion / Seinfeld episode on "she likes cats."


No:  the real concept of lack of grounding is nothing so simple as the 
way you are using the word grounding.


Lack of grounding makes an AGI fall flat on its face and not work.

I can't summarize the grounding literature in one post.  (Though, heck, 
I have actually tried to do that in the past:  didn't do any good).




Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Hi,




 No:  the real concept of lack of grounding is nothing so simple as the
 way you are using the word grounding.

 Lack of grounding makes an AGI fall flat on its face and not work.

 I can't summarize the grounding literature in one post.  (Though, heck,
 I have actually tried to do that in the past:  didn't do any good).



FYI, I have read the symbol-grounding literature (or a lot of it), and
generally
found it disappointingly lacking in useful content... though I do agree with
the basic point that non-linguistic grounding is extremely helpful for
effective
manipulation of linguistic entities...

-- Ben G


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Benjamin Goertzel wrote:



On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:



Ben,

Unfortunately what you say below is tangential to my point, which is
what happens when you reach the stage where you cannot allow any more
vagueness or subjective interpretation of the qualifiers, because you
have to force the system to do its own grounding, and hence its own
interpretation.



I don't see why you talk about forcing the system to do its own 
grounding --

the probabilities in the system are grounded in the first place, as they
are calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guesses will fulfill its goals.  Its goals are ultimately 
grounded in in-built feeling-evaluation routines, measuring stuff like 
amount of novelty observed, amount of food in system etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact "Command 'wiggle ear' was sent
at time-stamp 54".  These perceptions and actions are the root of the
probabilities the system calculated, and need no further grounding.

 


What you gave below was a sketch of some more elaborate 'qualifier'
mechanisms.  But I described the process of generating more and more
elaborate qualifier mechanisms in the body of the essay, and said why
this process was of no help in resolving the issue.


So, if a system can achieve its goals based on choosing procedures that
it thinks are likely to achieve its goals, based on the knowledge it 
gathered
via its perceived experience -- why do you think it has a problem? 


I don't really understand your point, I guess.  I thought I did -- I thought
your point was that precisely specifying the nature of a conditional 
probability is a rat's nest of complexity.  And my response was basically that in
Novamente we don't need to do that, because we define conditional 
probabilities based on the system's own knowledge-base, i.e.

Inheritance A B <.8>

means

If A and B were reasoned about a lot, then A would (as measured by a 
weighted average) have 80% of the relationships that B does

But apparently you were making some other point, which I did not grok, 
sorry...


Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you 
seemed to be assuming in your post.


You are, in essence, using one of the trivial versions of what symbol 
grounding is all about.


The complaint is not "your symbols are not connected to experience." 
Everyone and their mother has an AI system that could be connected to 
real world input.  The simple act of connecting to the real world is NOT 
the core problem.


If you have an AGI system in which the system itself is allowed to do 
all the work of building AND interpreting all of its symbols, I don't 
have any issues with it.


Where I do have an issue is with a system which is supposed to be doing 
the above experiential pickup, and where the symbols are ALSO supposed 
to be interpretable by human programmers who are looking at things like 
probability values attached to facts.  When a programmer looks at a 
situation like


 ContextLink <.7,.8>
  home
  InheritanceLink Bob_Yifu friend

... and then follows this with a comment like:

 which suggests that Bob is less friendly at home than
 in general.

... they have interpreted the meaning of that statement using their 
human knowledge.


So here I am, looking at this situation, and I see:

   AGI system interpretation (implicit in system use of it)
   Human programmer interpretation

and I ask myself which one of these is the real interpretation?

It matters, because they do not necessarily match up.  The human 
programmer's interpretation has a massive impact on the system because 
all the inference and other mechanisms are built around the assumption 
that the probabilities mean a certain set of things.  You manipulate 
those p values, and your manipulations are based on assumptions about 
what they mean.


But if the system is allowed to pick up its own knowledge from the 
environment, the implicit meaning of those p values will not 
necessarily match the human interpretation.  As I say, the meaning is 
then implicit in the way the system *uses* those p values (and other stuff).


It is a nontrivial question to ask whether the implicit system 
interpretation does indeed match the human interpretation built into the 
inference 

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Benjamin Goertzel wrote:

Hi,
 




No:  the real concept of lack of grounding is nothing so simple as the
way you are using the word grounding.

Lack of grounding makes an AGI fall flat on its face and not work.

I can't summarize the grounding literature in one post.  (Though, heck,
I have actually tried to do that in the past:  didn't do any good).



FYI, I have read the symbol-grounding literature (or a lot of it), and 
generally
found it disappointingly lacking in useful content... though I do agree 
with
the basic point that non-linguistic grounding is extremely helpful for 
effective

manipulation of linguistic entities...


Ben,

As you will recall, Harnad himself got frustrated with the many people 
who took the term symbol grounding and trivialized or distorted it in 
various ways.  One of the reasons the grounding literature is such a 
waste of time (and you are right:  it is) is that so many people talked 
so much nonsense about it.


As far as I am concerned, your use of it is one of those trivial senses 
that Harnad complained of.  (Essentially, if the system uses world input 
IN ANY WAY during the building of its symbols, then the system is grounded).


The effort I put into that essay yesterday will have been completely 
wasted if your plan is to stick to that interpretation and not discuss 
the deeper issue that I raised.


I really have no energy for pursuing yet another discussion about symbol 
grounding.


Sorry:  don't mean to blow you off, but you and I both have better 
things to do, and I foresee a big waste of time ahead if we pursue it.



So let's just drop it?



Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Richard,



 So here I am, looking at this situation, and I see:

    AGI system interpretation (implicit in system use of it)
    Human programmer interpretation

 and I ask myself which one of these is the real interpretation?

 It matters, because they do not necessarily match up.


That is true, but in some cases they may approximate each other well...

In others, not...

This happens to be a pretty simple case, so the odds of a good
approximation seem high.



  The human
 programmer's interpretation has a massive impact on the system because
 all the inference and other mechanisms are built around the assumption
 that the probabilities mean a certain set of things.  You manipulate
 those p values, and your manipulations are based on assumptions about
 what they mean.



Well, the PLN inference engine's treatment of

ContextLink
home
InheritanceLink Bob_Yifu friend

is in no way tied to whether the system's implicit interpretation of the
ideas of 'home' or 'friend' is humanly natural, or humanly comprehensible.

The same inference rules will be applied to cases like

ContextLink
Node_66655
InheritanceLink Bob_Yifu Node_544

where the concepts involved have no humanly-comprehensible label.

It is true that the interpretations of ContextLink and InheritanceLink are
fixed by the wiring of the system, in a general way (but what kinds of properties
are referred to by them may vary in a way dynamically determined by the
system).


 In order to completely ground the system, you need to let the system
 build its own symbols, yes, but that is only half the story:  if you
 still have a large component of the system that follows a
 programmer-imposed interpretation of things like probability values
 attached to facts, you have TWO sets of symbol-using mechanisms going
 on, and the system is not properly grounded (it is using both grounded
 and ungrounded symbols within one mechanism).



I don't think the system needs to learn its own probabilistic reasoning
rules in order to be an AGI.  This, to me, is too much like requiring that
a brain needs to learn its own methods for modulating the conductances of
the bundles of synapses linking between the neurons in cell assembly A and
cell assembly B.

I don't see a problem with the AGI system having hard-wired probabilistic
inference rules, and hard-wired interpretations of probabilistic link types.
But the interpretation of any **particular** probabilistic relationship
inside the system is relative to the concepts and the empirical and
conceptual relationships that the system has learned.

You may think that the brain learns its own uncertain inference rules based
on a lower-level infrastructure that operates in terms entirely unconnected
from ideas like uncertainty and inference.  I think this is wrong.  I think
the brain's uncertain inference rules are the result, on the cell assembly
level, of Hebbian learning and related effects on the neuron/synapse level.
So I think the brain's basic uncertain inference rules are wired-in, just as
Novamente's are, though of course using a radically different infrastructure.

Ultimately an AGI system needs to learn its own reasoning rules and
radically modify and improve itself, if it's going to become strongly
superhuman!  But that is not where we need to start...

-- Ben


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Mike Tintner

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed some 
vital posts - I have yet to get the slightest inkling of how you yourself 
propose to do this. 





Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
On Nov 14, 2007 1:36 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 RL:In order to completely ground the system, you need to let the system
 build its own symbols



Correct.  Novamente is designed to be able to build its own symbols.

What is built in are mechanisms for building symbols, and for
probabilistically interrelating symbols once created...

ben g



 V. much agree with your whole argument. But -  I may well have missed
 some
 vital posts - I have yet to get the slightest inkling of how you yourself
 propose to do this.





Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:
 The complaint is not "your symbols are not connected to experience."
 Everyone and their mother has an AI system that could be connected to
 real world input.  The simple act of connecting to the real world is
 NOT the core problem.

Are we sure? How much of the real world are we able to get into our AGI 
models anyway? Bandwidth is limited, much more limited than in humans 
and other animals. In fact, it might be the equivalent to worm tech.

To do the calculations would I just have to check out how many neurons 
are in a worm, how many sensory neurons, and make rough information-
theoretic estimates of the minimum and maximum amounts of 
information processing that the worm's sensorium could be doing?

- Bryan


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Bryan Bishop wrote:

On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:

The complaint is not "your symbols are not connected to experience."
Everyone and their mother has an AI system that could be connected to
real world input.  The simple act of connecting to the real world is
NOT the core problem.


Are we sure? How much of the real world are we able to get into our AGI 
models anyway? Bandwidth is limited, much more limited than in humans 
and other animals. In fact, it might be the equivalent to worm tech.


To do the calculations would I just have to check out how many neurons 
are in a worm, how many sensory neurons, and make rough information-
theoretic estimates of the minimum and maximum amounts of 
information processing that the worm's sensorium could be doing?


I'm not quite sure where this is at ... but the context of this 
particular discussion is the notion of 'symbol grounding' raised by 
Stevan Harnad.  I am essentially talking about how to solve the problem 
he described, and what exactly the problem was.  Hence there is a lot of 
background behind this one, which, if you don't know it, might make it 
confusing.



Richard Loosemore




Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Russell Wallace
On Nov 14, 2007 11:58 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
 Are we sure? How much of the real world are we able to get into our AGI
 models anyway? Bandwidth is limited, much more limited than in humans
 and other animals. In fact, it might be the equivalent to worm tech.

 To do the calculations would I just have to check out how many neurons
 are in a worm, how many sensory neurons, and make rough information-
 theoretic estimates of the minimum and maximum amounts of
 information processing that the worm's sensorium could be doing?

Pretty much.

Let's take as our reference computer system a bog standard video
camera connected to a high-end PC, which can do something (video
compression, object recognition or whatever) with the input in real
time.

On the worm side, consider the model organism Caenorhabditis elegans,
which has a few hundred neurons.

It turns out that the computer has much more bandwidth. Then again, 
while intelligence, unlike bandwidth, isn't a scalar quantity even to a 
first approximation, to the extent they are comparable our best 
computer systems do seem to be considerably smarter than C. elegans.
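
A rough back-of-envelope version of that comparison, in Python (every
figure below is an assumed round number for illustration, not a
measurement):

# Reference system: a VGA camera feeding a PC.
camera_bits_per_sec = 640 * 480 * 24 * 30      # ~221 Mbit/s of raw video

# C. elegans: ~300 neurons; assume ~80 sensory, each crudely treated as
# carrying ~100 one-bit events per second.
worm_bits_per_sec = 80 * 100 * 1               # ~8 kbit/s

print(camera_bits_per_sec / 1e6, "Mbit/s for the camera")
print(worm_bits_per_sec / 1e3, "kbit/s for the worm")
print(round(camera_bits_per_sec / worm_bits_per_sec), "x more raw bandwidth")

On those assumed numbers the camera-plus-PC has a few tens of thousands of
times the raw input bandwidth of the worm, which is the sense in which the
computer wins on bandwidth.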

If we move up to something like a mouse, then the mouse has
intelligence we can't replicate, and also has much more bandwidth than
the computer system. Insects are somewhere in between, enough so that
the comparison (both bandwidth and intelligence) doesn't produce an
obvious answer; it's therefore considered not unreasonable to say
present-day computers are in the ballpark of insect-smart.

Of course that doesn't mean if we took today's software and connected
it to mouse-bandwidth hardware it would become mouse-smart, but
hopefully it means when we have that hardware we'll be able to use it
to develop software that matches some of the things mice can do.

(And it's still my opinion that by accepting - embracing - slowness on
existing hardware we can work on the software at the same time as the
hardware guys are working on their end, parallel rather than serial
development.)



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Mike Tintner


Sounds a little confusing. Sounds like you plan to evolve a system through 
testing thousands of candidate mechanisms. So one way or another you too 
are taking a view - even if it's an evolutionary, "I'm not taking a view" 
view - on, and making a lot of assumptions about


-how systems evolve
-the known architecture of human cognition.

about which science has extremely patchy and confused knowledge. I don't see 
how any system-builder can avoid taking a view of some kind on such matters, 
yet you seem to be criticising Ben for so doing.


I was hoping that you also had some view on how a system's symbols should 
be grounded, especially since you mention Harnad, who does make vague 
gestures towards the brain's levels of grounding. But you don't indicate any 
such view.


Sounds like you too, pace MW, are hoping for a number of miracles - IOW 
creative ideas - to emerge, and make your system work.


Anyway, you have to give Ben credit for putting a lot of his stuff and 
principles out there and on the line. I think anyone who wants to mount a 
full-scale assault on him (and why not?) should be prepared to reciprocate.








-

RL:

Mike Tintner wrote:

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed 
some vital posts - I have yet to get the slightest inkling of how you 
yourself propose to do this.


Well, for the purposes of the present discussion I do not need to say how, 
only to say that there is a difference between two different research 
strategies for finding out what the mechanism is that does this.


One strategy (the one that I claim has serious problems) is where you try 
to have your cake and eat it too:  let the system build its own symbols, 
with attached parameters that 'mean' whatever they end up meaning after 
the symbols have been built, BUT then at the same time insist that some of 
the parameters really do 'mean' things like probabilities or likelihood or 
confidence values.  If the programmer does anything at all to include 
mechanisms that rely on these meanings (these interpretations of what the 
parameters signify) then the programmer has second-guessed what the system 
itself was going to use those things for, and you have a conflict between 
the two.


My strategy is to keep my hands off, not do anything to strictly interpret 
those parameters, and experimentally observe the properties of systems 
that seem loosely consistent with the known architecture of human 
cognition.


I have a parameter, for instance, that seems to be a "happiness" or 
"consistency" parameter attached to a knowledge-atom.  But beyond roughly 
characterising it as such, I do not insert any mechanisms that (implicitly 
or explicitly) lock the system into such an interpretation. Instead, I have 
a wide variety of different candidate mechanisms that use that parameter, 
and I look at the overall properties of systems that use these different 
candidate mechanisms.  I let the system use the parameter according to the 
dictates of whatever mechanism is in place, but then I just explore the 
consequences (the high level behavior of the system).


In this way I do not get a conflict between what I think the parameter 
'ought' to mean and what the system is implicitly taking it to 'mean' by 
its use of the parameter.


I could start talking about all the different candidate mechanisms, but 
there are thousands of them (at least thousands of candidates that I go so 
far as to test:  they are generated in a semi-automatic way, so there are 
an unlimited number of potential candidates).




Richard Loosemore












Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Mike Tintner wrote:

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed 
some vital posts - I have yet to get the slightest inkling of how you 
yourself propose to do this.


Well, for the purposes of the present discussion I do not need to say 
how, only to say that there is a difference between two different 
research strategies for finding out what the mechanism is that does this.


One strategy (the one that I claim has serious problems) is where you 
try to have your cake and eat it too:  let the system build its own 
symbols, with attached parameters that 'mean' whatever they end up 
meaning after the symbols have been built, BUT then at the same time 
insist that some of the parameters really do 'mean' things like 
probabilities or likelihood or confidence values.  If the programmer 
does anything at all to include mechanisms that rely on these meanings 
(these interpretations of what the parameters signify) then the 
programmer has second-guessed what the system itself was going to use 
those things for, and you have a conflict between the two.


My strategy is to keep my hands off, not do anything to strictly 
interpret those parameters, and experimentally observe the properties of 
systems that seem loosely consistent with the known architecture of 
human cognition.


I have a parameter, for instance, that seems to be a "happiness" or 
"consistency" parameter attached to a knowledge-atom.  But beyond 
roughly characterising it as such, I do not insert any mechanisms that 
(implicitly or explicitly) lock the system into such an interpretation. 
Instead, I have a wide variety of different candidate mechanisms that 
use that parameter, and I look at the overall properties of systems that 
use these different candidate mechanisms.  I let the system use the 
parameter according to the dictates of whatever mechanism is in place, 
but then I just explore the consequences (the high level behavior of the 
system).


In this way I do not get a conflict between what I think the parameter 
'ought' to mean and what the system is implicitly taking it to 'mean' by 
its use of the parameter.


I could start talking about all the different candidate mechanisms, but 
there are thousands of them (at least thousands of candidates that I go 
so far as to test:  they are generated in a semi-automatic way, so there 
are an unlimited number of potential candidates).
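
A toy sketch of that style of exploration, in Python (every mechanism,
parameter and number here is invented purely to show the shape of the
procedure; it is not any of the actual candidate mechanisms):

import itertools, random

def make_candidate(update_rule, decay, mixing):
    # Semi-automatically generate one candidate mechanism that uses the
    # un-interpreted per-atom parameter in some particular way.
    def mechanism(own_value, neighbour_values):
        if update_rule == "average":
            target = sum(neighbour_values) / len(neighbour_values)
        else:                               # "max"
            target = max(neighbour_values)
        return (1 - decay) * ((1 - mixing) * own_value + mixing * target)
    return mechanism

# Cross a few design choices to get a family of candidate mechanisms.
candidates = [make_candidate(rule, decay, mixing)
              for rule, decay, mixing in itertools.product(
                  ["average", "max"], [0.0, 0.01], [0.1, 0.5])]

def high_level_behaviour(mechanism, steps=50, atoms=100):
    # Stand-in for running a whole system and summarising what it does.
    rng = random.Random(0)
    values = [rng.random() for _ in range(atoms)]
    for _ in range(steps):
        values = [mechanism(v, values) for v in values]
    return sum(values) / len(values)

# The experiment: run every candidate and look at the resulting behaviour,
# without fixing in advance what the parameter is supposed to 'mean'.
for i, mech in enumerate(candidates):
    print(i, round(high_level_behaviour(mech), 3))

The point of the sketch is only the loop at the bottom: many mechanically 
generated candidates, each run and observed, with no prior commitment to 
what the parameter 'means'.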




Richard Loosemore



Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore

Mark Waser wrote:
I'm going to try to put some words into Richard's mouth here since 
I'm curious to see how close I am . . . . (while radically changing the 
words).
 
I think that Richard is not arguing about the possibility of 
Novamente-type solutions as much as he is arguing about the 
predictability of *very* flexible Novamente-type solutions as they grow 
larger and more complex (and the difficulty in getting it to not 
instantaneously crash-and-burn).  Indeed, I have heard a very faint 
shadow of Richard's concerns in your statements about the tuning 
problems that you had with BioMind.


This is true, but not precise enough to capture the true nature of my worry.

Let me focus on one aspect of the problem.  My goal here is to describe 
in a little detail how the Complex Systems Problem actually bites in a 
particular case.


Suppose that in some significant part of Novamente there is a 
representation system that uses probability or likelihood numbers to 
encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75) 
is supposed to express the idea that the statement [I like cats] is in 
some sense 75% true.


[Quick qualifier:  I know that this oversimplifies the real situation in 
Novamente, but I need to do this simplification in order to get my point 
across, and I am pretty sure this will not affect my argument, so bear 
with me].


We all know that this p value is not quite a "probability" or 
"likelihood" or "confidence factor".  It plays a very ambiguous role in 
the system, because on the one hand we want it to be very much like a 
probability in the sense that we want to do calculations with it:  we 
NEED a calculus of such values in order to combine facts in the system 
to make inferences.  But we also do not want to lock ourselves into a 
particular interpretation of what it means, because we know full well 
that we do not really have a clear semantics for these numbers.


Either way, we have a problem:  a fact like [I like cats](p=0.75) is 
ungrounded because we have to interpret it.  Does it mean that I like 
cats 75% of the time?  That I like 75% of all cats?  75% of each cat? 
Are the cats that I like always the same ones, or is the chance of an 
individual cat being liked by me something that changes?  Does it mean 
that I like all cats, but only 75% as much as I like my human family, 
which I like(p=1.0)?  And so on and so on.
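
To make the underdetermination concrete, here is a throwaway Python 
sketch of three entirely different mechanisms, each of which cashes out 
as the same surface number p=0.75 (the functions and data are invented 
for illustration only):

import random

# Reading 1: I like cats on 75% of encounters, decided afresh each time.
def likes_this_encounter(rng=random.Random(0)):
    return rng.random() < 0.75

# Reading 2: I like a fixed subset containing 75% of all cats, always the
# same ones.
LIKED_CATS = set(range(75))          # 75 particular cats out of 100
def likes_this_cat(cat_id):
    return cat_id in LIKED_CATS

# Reading 3: I like every cat, but only 75% as much as my human family,
# which I like(p=1.0).
def liking_intensity(cat_id):
    return 0.75

# Observed from outside, each mechanism is consistent with the bare number
# p=0.75; the number alone does not say which of them the system 'means'.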


Digging down to the root of this problem (and this is the point where I 
am skipping from baby stuff to hard core AI) we want these numbers to be 
semantically compositional and interpretable, but in order to make sure 
they are grounded, the system itself is going to have to build them and 
interpret them without our help ... and it is not clear that this 
grounding can be completely implemented.  Why is it not clear?  Because 
when you try to build the entire grounding mechanism(s) you are forced 
to become explicit about what these numbers mean, during the process of 
building a grounding system that you can trust to be doing its job:  you 
cannot create a mechanism that you *know* is constructing sensible p 
numbers and facts during all of its development *unless* you finally 
bite the bullet and say what the p numbers really mean, in fully cashed 
out terms.


[Suppose you did not do this.  Suppose you built the grounding mechanism 
but remained ambiguous about the meaning of the p numbers.  What would 
the resulting system be computing?  From end to end it would be building 
facts with p numbers, but you the human observer would still be imposing 
an interpretation on the facts.  And if you are still doing anything to 
interpret, it cannot be grounded].


Now, as far as I understand it, the standard approach to this conundrum 
is that researchers (in Novamente and elsewhere) do indeed make an 
attempt to disambiguate the p numbers, but they do it by developing more 
sophisticated logical systems.  First, perhaps, error-value bands of p 
values instead of sharp values.  And temporal logic mechanisms to deal 
with time.  Perhaps clusters of p and q and r and s values, each with 
some slightly different zones of applicability.  More generally, people 
try to give structure to the qualifiers that are appended to the facts: 
[I like cats](qualifier=value) instead of [I like cats](p=0.75).


The question is, does this process of refinement have an end?  Does it 
really lead to a situation where the qualifier is disambiguated and the 
semantics is clear enough to build a trustworthy grounding system?  Is 
there a closed-form solution to the problem of building a logic that 
disambiguates the qualifiers?


Here is what I think will happen if this process is continued.  In order 
to make the semantics unambiguous enough to let the system ground its 
own knowledge without the interpretation of p values, researchers will 
develop more and more sophisticated logics (with more and more 
structured replacements for that simple p value), until they are 

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
Richard,

The idea of the PLN semantics underlying Novamente's probabilistic
truth values is that we can have **both**

-- simple probabilistic truth values without highly specific interpretation

-- more complex, logically refined truth values, when this level of
precision is necessary

To make the discussion more concrete, I'll use a specific example
to do with virtual animals in Second Life.  Our first version of the
virtual pets won't use PLN in this sort of way, it'll be focused on MOSES
evolutionary learning; but, this is planned for the second version and
is within the scope of what Novamente can feasibly be expected to
do with modest effort.

Consider an avatar identified as Bob_Yifu

And, consider the concept of friend, which is a ConceptNode

-- associated to the WordNode friend via a learned ReferenceLink
-- defined operationally via a number of links such as

ImplicationLink
    AND
        InheritanceLink X friend
        EvaluationLink near (I, X)
    Pleasure

(this one just says that being near a friend confers pleasure.  Other
links about friendship may contain knowledge such as that friends
often give one food, friends help one find things, etc.)

 The concept of friend may be learned, via mining of the animal's
experience-base --
basically, this is a matter of learning that there are certain predicates
whose SatisfyingSets (the set of Atoms that fulfill the predicate)
have significant intersection, and creating a ConceptNode to denote
that intersection.
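
Schematically, in Python (a toy sketch with invented predicates and
objects, not the actual Novamente code):

# Toy sketch of concept formation from overlapping SatisfyingSets.
satisfying_sets = {
    "gives_me_food":        {"Bob_Yifu", "Wen", "Owner_Avatar"},
    "is_often_near_me":     {"Bob_Yifu", "Wen", "Tree_17"},
    "helps_me_find_things": {"Bob_Yifu", "Wen"},
}

# If several predicates' SatisfyingSets intersect significantly, create a
# new ConceptNode denoting (roughly) that intersection.
members = set.intersection(*satisfying_sets.values())
if len(members) >= 2:                        # crude significance test
    new_concept = {"node": "ConceptNode_candidate", "members": members}
    # A ReferenceLink to the WordNode "friend" could be learned later.
    print(new_concept)                       # members: {'Bob_Yifu', 'Wen'}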

Then, once the concept of friend has been formed, more links pertaining
to it may be learned via mining the experience base and via inference rules.

Then, we may find that

InheritanceLink Bob_Yifu friend <.9,1>

(where the <.9,1> is an interval probability, interpreted according to
the indefinite probabilities framework) and this link mixes intensional
and extensional inheritance, and thus is only useful for heuristic
reasoning (which however is a very important kind).

What this link means is basically that Bob_Yifu's node in the memory
has a lot of the same links as the friend node -- or rather, that it
**would**, if all its links were allowed to exist rather than being
pruned to save memory.  So, note that the semantics are actually
tied to the mind itself.

Or we can make more specialized logical constructs if we really
want to, denoting stuff like

-- at certain times Bob_Yifu is a friend
-- Bob displays some characteristics of friendship very strongly,
and others not at all
-- etc.

We can also do crude, heuristic contextualization like

ContextLink <.7,.8>
 home
 InheritanceLink Bob_Yifu friend

which suggests that Bob is less friendly at home than
in general.
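
The intuition behind that kind of contextual strength can be caricatured
in a few lines of Python (invented counts, and ignoring the interval /
indefinite-probability machinery entirely):

# Caricature: estimate a contextual inheritance strength from raw counts.
observations = [                     # (context, behaved like a 'friend'?)
    ("home", True), ("home", False), ("home", True), ("home", False),
    ("park", True), ("park", True), ("park", True), ("park", False),
]

def strength(context=None):
    hits = [ok for ctx, ok in observations if context is None or ctx == context]
    return sum(hits) / len(hits)

print(strength())          # 0.625 in general
print(strength("home"))    # 0.5 in the 'home' context, lower than in
                           # general, which is the kind of fact the
                           # ContextLink above records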

Again this doesn't capture all the subtleties of Bob's friendship in
relation to being at home -- and one could do so if one wanted to, but it
would require introducing a larger complex of nodes and links, which is
not always the most appropriate thing to do.

The PLN inference rules are designed to give heuristically
correct conclusions based on heuristically interpreted links;
or more precise conclusions based on more precisely interpreted
links.

Finally, the semantics of PLN relationships is explicitly an
**experiential** semantics.  (One of the early chapters in the PLN
book, to appear via Springer next year, is titled Experiential
Semantics.)  So, all node and link truth values in PLN are
intended to be settable and adjustable via experience, rather than
via programming or importation from databases or something like
that.

Now, the above example is of course a quite simple one.
Discussing a more complex example would go beyond the scope
of what I'm willing to do in an email conversation, but the mechanisms
I've described are not limited to such simple examples.

I am aware that identifying Bob_Yifu as a coherent, distinct entity is a
problem faced by humans and robots, and eliminated via the simplicity of
the SL environment.  However, there is detailed discussion in the
(proprietary) NM book of how these same mechanisms may be used to do
object recognition and classification, as well.

You may of course argue that these mechanisms won't scale up
to large knowledge bases and rich experience streams.  I believe that
they will, and have arguments but not rigorous proofs that they will.

-- Ben G



On Nov 13, 2007 12:34 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Mark Waser wrote:
  I'm going to try to put some words into Richard's mouth here since
  I'm curious to see how close I am . . . . (while radically changing the
  words).
 
  I think that Richard is not arguing about the possibility of
  Novamente-type solutions as much as he is arguing about the
  predictability of *very* flexible Novamente-type solutions as they grow
  larger and more complex (and the difficulty in getting it to not
  instantaneously crash-and-burn).  Indeed, I have heard a very faint
  shadow of Richard's concerns in your statements about the tuning
  problems that you had 

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore

Mike Tintner wrote:

RL:Suppose that in some significant part of Novamente there is a
representation system that uses probability or likelihood numbers to
encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75)
is supposed to express the idea that the statement [I like cats] is in
some sense 75% true.

This essay seems to be a v.g. demonstration of why the human system 
almost certainly does not use numbers, or anything like them, as stores of 
value - but raw, crude emotions.  How much do you like cats [or 
marshmallow ice cream]? Miaow//[or yummy] [those being an expression 
of internal nervous and muscular impulses] And black cats [or 
strawberry marshmallow] ? Miaow-miaoww![or yummy yummy] . It's crude 
but it's practical.


It is all a question of what role the numbers play.  Conventional AI 
wants them at the surface, and transparently interpretable.


I am not saying that there are no numbers, but only that they are below 
the surface, and not directly interpretable.  That might or might not 
gibe with what you are saying ... although I would not go so far as to 
put it in the way you do.




Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore


Ben,

Unfortunately what you say below is tangential to my point, which is 
what happens when you reach the stage where you cannot allow any more 
vagueness or subjective interpretation of the qualifiers, because you 
have to force the system to do its own grounding, and hence its own 
interpretation.


What you gave below was a sketch of some more elaborate 'qualifier' 
mechanisms.  But I described the process of generating more and more 
elaborate qualifier mechanisms in the body of the essay, and said why 
this process was of no help in resolving the issue.




Richard Loosemore





Benjamin Goertzel wrote:


Richard,

The idea of the PLN semantics underlying Novamente's probabilistic
truth values is that we can have **both**

-- simple probabilistic truth values without highly specific interpretation

-- more complex, logically refined truth values, when this level of
precision is necessary

To make the discussion more concrete, I'll use a specific example
to do with virtual animals in Second Life.  Our first version of the
virtual pets won't use PLN in this sort of way, it'll be focused on MOSES
evolutionary learning; but, this is planned for the second version and
is within the scope of what Novamente can feasibly be expected to
do with modest effort.

Consider an avatar identified as Bob_Yifu

And, consider the concept of friend, which is a ConceptNode

-- associated to the WordNode friend via a learned ReferenceLink
-- defined operationally via a number of links such as

ImplicationLink
    AND
        InheritanceLink X friend
        EvaluationLink near (I, X)
    Pleasure

(this one just says that being near a friend confers pleasure.  Other
links about friendship may contain knowledge such as that friends
often give one food, friends help one find things, etc.)

 The concept of friend may be learned, via mining of the animal's 
experience-base --

basically, this is a matter of learning that there are certain predicates
whose SatisfyingSets (the set of Atoms that fulfill the predicate)
have significant intersection, and creating a ConceptNode to denote
that intersection. 


Then, once the concept of friend has been formed, more links pertaining
to it may be learned via mining the experience base and via inference rules.

Then, we may find that

InheritanceLink Bob_Yifu friend <.9,1>

(where the <.9,1> is an interval probability, interpreted according to
the indefinite probabilities framework) and this link mixes intensional
and extensional inheritance, and thus is only useful for heuristic
reasoning (which however is a very important kind).

What this link means is basically that Bob_Yifu's node in the memory
has a lot of the same links as the friend node -- or rather, that it
**would**, if all its links were allowed to exist rather than being
pruned to save memory.  So, note that the semantics are actually
tied to the mind itself.

Or we can make more specialized logical constructs if we really
want to, denoting stuff like

-- at certain times Bob_Yifu is a friend
-- Bob displays some characteristics of friendship very strongly,
and others not at all
-- etc.

We can also do crude, heuristic contextualization like

ContextLink <.7,.8>
 home
 InheritanceLink Bob_Yifu friend

which suggests that Bob is less friendly at home than
in general.

Again this doesn't capture all the subtleties of Bob's friendship in
relation to being at home -- and one could do so if one wanted to, but
it would require introducing a larger complex of nodes and links, which is
not always the most appropriate thing to do.

The PLN inference rules are designed to give heuristically
correct conclusions based on heuristically interpreted links;
or more precise conclusions based on more precisely interpreted
links. 


Finally, the semantics of PLN relationships is explicitly an
**experiential** semantics.  (One of the early chapters in the PLN
book, to appear via Springer next year, is titled Experiential
Semantics.)  So, all node and link truth values in PLN are
intended to be settable and adjustable via experience, rather than
via programming or importation from databases or something like
that.

Now, the above example is of course a quite simple one.
Discussing a more complex example would go beyond the scope
of what I'm willing to do in an email conversation, but the mechanisms
I've described are not limited to such simple examples.

I am aware that identifying Bob_Yifu as a coherent, distinct entity is a 
problem faced by humans and robots, and eliminated via the simplicity of 
the SL environment.  However, there is detailed discussion in the 
(proprietary) NM book of how these same mechanisms may be used to do 
object recognition and classification, as well.

You may of course argue that these mechanisms won't scale up
to large knowledge bases and rich experience streams.  I believe that
they will, and have arguments but not rigorous proofs that they will.

-- Ben G



On Nov 13, 2007 12:34 PM, Richard Loosemore 

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


 Ben,

 Unfortunately what you say below is tangential to my point, which is
 what happens when you reach the stage where you cannot allow any more
 vagueness or subjective interpretation of the qualifiers, because you
 have to force the system to do its own grounding, and hence its own
 interpretation.



I don't see why you talk about forcing the system to do its own grounding
--
the probabilities in the system are grounded in the first place, as they
are calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guesses will fulfill its goals.  Its goals are ultimately
grounded in in-built feeling-evaluation routines, measuring stuff like
amount of novelty observed, amount of food in system etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact "Command 'wiggle ear' was sent
at time-stamp 54".  These perceptions and actions are the root of the
probabilities the system calculated, and need no further grounding.



 What you gave below was a sketch of some more elaborate 'qualifier'
 mechanisms.  But I described the process of generating more and more
 elaborate qualifier mechanisms in the body of the essay, and said why
 this process was of no help in resolving the issue.


So, if a system can achieve its goals based on choosing procedures that
it thinks are likely to achieve its goals, based on the knowledge it
gathered
via its perceived experience -- why do you think it has a problem?

I don't really understand your point, I guess.  I thought I did -- I thought
your point was that precisely specifying the nature of a conditional
probability is a rat's nest of complexity.  And my response was basically that in
Novamente we don't need to do that, because we define conditional
probabilities
based on the system's own knowledge-base, i.e.

Inheritance A B <.8>

means

If A and B were reasoned about a lot, then A would (as measured by a
weighted average) have 80% of the relationships that B does
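
As a toy illustration of that reading, in Python (the links and weights
are invented; this is not the PLN truth-value formula):

# 'A has (as a weighted average) 80% of the relationships that B does.'
links_of = {
    "B": {"purrs": 1.0, "has_fur": 1.0, "chases_mice": 1.0,
          "sleeps_a_lot": 1.0, "hunts_birds": 1.0},
    "A": {"purrs": 1.0, "has_fur": 1.0, "sleeps_a_lot": 1.0,
          "hunts_birds": 1.0, "barks": 1.0},
}

def inheritance_strength(a, b):
    b_links = links_of[b]
    shared = sum(w for rel, w in b_links.items() if rel in links_of[a])
    return shared / sum(b_links.values())

print(inheritance_strength("A", "B"))   # 0.8, i.e. roughly Inheritance A B <.8>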

But apparently you were making some other point, which I did not grok,
sorry...

Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you
seemed
to be assuming in your post.

Ben


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Linas Vepstas
On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
 
 Suppose that in some significant part of Novamente there is a 
 representation system that uses probability or likelihood numbers to 
 encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75) 
 is supposed to express the idea that the statement [I like cats] is in 
 some sense 75% true.
 
 Either way, we have a problem:  a fact like [I like cats](p=0.75) is 
 ungrounded because we have to interpret it.  Does it mean that I like 
 cats 75% of the time?  That I like 75% of all cats?  75% of each cat? 
 Are the cats that I like always the same ones, or is the chance of an 
 individual cat being liked by me something that changes?  Does it mean 
 that I like all cats, but only 75% as much as I like my human family, 
 which I like(p=1.0)?  And so on and so on.

Eh?

You are standing at the proverbial office water cooler, and Aneesh 
says "Wen likes cats." On your drive home, your mind races ... does this
mean that Wen is a cat fancier?  You were planning on taking Wen out
on a date, and this tidbit of information could be useful ... 

 when you try to build the entire grounding mechanism(s) you are forced 
 to become explicit about what these numbers mean, during the process of 
 building a grounding system that you can trust to be doing its job:  you 
 cannot create a mechanism that you *know* is constructing sensible p 
 numbers and facts during all of its development *unless* you finally 
 bite the bullet and say what the p numbers really mean, in fully cashed 
 out terms.

But as a human, asking Wen out on a date, I don't really know what 
"Wen likes cats" ever really meant. It neither prevents me from talking 
to Wen, nor from telling my best buddy that "...well, I know, for
instance, that she likes cats..."  

Lack of grounding is what makes humour funny; you can do a whole 
Pygmalion / Seinfeld episode on "she likes cats."

--linas 



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel


 But as a human, asking Wen out on a date, I don't really know what
 "Wen likes cats" ever really meant. It neither prevents me from talking
 to Wen, nor from telling my best buddy that "...well, I know, for
 instance, that she likes cats..."


yes, exactly...

The NLP statement "Wen likes cats" is vague in the same way as the
Novamente or NARS relationship

EvaluationLink
    likes
    ListLink
        Wen
        cats

is vague ... The vagueness passes straight from NLP into the internal KR,
which is how it should be.

And that same vagueness may be there if the relationship is learned via
inference based on experience, rather than acquired by natural language.

I.e., if the above relationship is inferred, it may just mean that

 {the relationship between Wen and cats} shares many relationships with
other person/object relationships that have been categorized as 'liking'
before

In this case, the system can figure out that Wen likes cats without ever
actually making explicit what this means.  All it knows is that, whatever it
means, it's the same thing that was meant in other circumstances where
'liking' was used as a label.

So, vagueness can not only be important into an AI system from natural
language,
but also propagated around the AI system via inference.

This is NOT one of the trickier things about building probabilistic AGI,
it's really
kind of elementary...

-- Ben G


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel



 So, vagueness can not only be important


imported, I meant


 into an AI system from natural language,
 but also propagated around the AI system via inference.

 This is NOT one of the trickier things about building probabilistic AGI,
 it's really
 kind of elementary...

 -- Ben G



