Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
On Feb 17, 2008 9:42 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 So far I've been using resolution-based FOL, so there's only 1 inference
 rule and this is not a big issue.  If you're using nonstandard inference
 rules, perhaps even approximate ones, I can see that this distinction is
 important.

Resolution-based FOL on a huge KB is intractable.

Pei



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
All of these rules have exceptions or implicit conditions. If you
treat them as default rules, you run into the multiple-extension
problem, which has no domain-independent solution in binary logic ---
read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
details.

Pei

On Feb 17, 2008 10:04 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 Yesterday I didn't give a clear explanation of what I mean by rules, so
 here is a better try:

 1.  If I see a turkey inside the microwave, I immediately draw the
 conclusion that it's NOT empty.
 2.  However, if I see some ketchup on the inside walls of the microwave, I'd
 say it's dirty but it's empty.
 3.  If I see the rotating plate inside the microwave, I'd still say it's
 empty 'cause the plate is part of the microwave.
 etc etc

 So the AGI may have a rule that sounds like:
 if X is an object inside the microwave, and X satisfies some criteria,
 then the microwave is NOT empty.

 But it would be a very dumb AGI if it had this rule specifically for
 microwave ovens, and then some other rules for washing machines, bottles,
 bookshelves, and other containers.  It would be necessary for the AGI to
 have a general rule for emptiness for all containers.  So I'd say a washing
 machine with a sock inside is not empty, but if it's just some lint then
 it's empty.
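
A rough Python sketch of the sort of general rule described above (not YKY's
actual formalism; the kb interface with contents(), part_of() and negligible()
is hypothetical and stands in for whatever predicates the KB provides):

def is_empty(container, kb):
    """A container is empty unless it holds some object that is neither a
    part of the container itself (the rotating plate) nor a negligible
    residue (lint, a smear of ketchup)."""
    for obj in kb.contents(container):        # everything located inside
        if kb.part_of(obj, container):
            continue                          # the plate does not count
        if kb.negligible(obj, container):
            continue                          # ketchup: dirty, but still empty
        return False                          # a turkey or a sock counts
    return True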

 Such a general rule for emptiness is certainly not available on the net, at
 least not explicitly expressed.  One solution is to manually encode them
 (perhaps with some machine assistance), which is the approach of Cyc.
 Another solution is to induce them from existing texts on the web -- Ben's
 suggestion.

 If given a large enough corpus and a long enough learning period, Ben's
 solution may work.  The key issue is how to speed up the inductive learning
 of rules.



 YKY
  




Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Bob Mottram
On 18/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 Well, the idea is to ask lots of people to contribute to the KB, and pay
 them with virtual credits.  (I expect such people to have a little knowledge
 in logic or Prolog, so they can enter complex rules.  Also, they can be
 assisted by inductive learning algorithms.)  The income of the KB will be
 given back to them.  I'll take a bit of administrative fees =)



In principle this sounds OK, but it is almost exactly the same as the
Mindpixel business model.  Once an element of payment is involved (usually
with some kind of share in future profits), participants tend to expect that
they're going to be able to realise that value within a relatively short
time, like a few years.  Inevitably, when expectations aren't met, things get
sticky.



Re: [agi] Primal Sketching

2008-02-18 Thread Vladimir Nesov
On Feb 18, 2008 1:37 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 In a closed loop system what you have is a
 synchronisation between data streams.  In part the brain is trying to
 find the best model that it can and superimpose that onto the
 available data (hence the perception of lines which don't really
 exist), and in part the low level data helps to create and maintain
 the higher level models.


I think this is a very important perspective on what the mind does. It
supports multiple *processes* that interact with each other,
communicating available data to tune each other, with some of these
processes driven by perception and some of them driving action. Inference
is simply a process that is initiated by communicating 'premises' to
it and that is then able to communicate a 'conclusion'. Processes are
learned so that they correspond well to the statistics of their I/O, so
when some data is missing they can fill it in (in retrospect,
'predict'). These processes are a simple resource that can be plugged
in to improve the prediction (pattern-completion) performed by the rest of
the system.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mark Waser

All of these rules have exceptions or implicit conditions. If you
treat them as default rules, you run into the multiple-extension
problem, which has no domain-independent solution in binary logic ---
read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
details.


Pei,

   Do you have a PDF version?  Thanks!



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
I just put one at http://nars.wang.googlepages.com/wang.reference_classes.pdf

On Feb 18, 2008 9:01 AM, Mark Waser [EMAIL PROTECTED] wrote:
  All of these rules have exceptions or implicit conditions. If you
  treat them as default rules, you run into the multiple-extension
  problem, which has no domain-independent solution in binary logic ---
  read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
  details.

 Pei,

 Do you have a PDF version?  Thanks!






Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mike Tintner
I believe I offered the beginning of a v. useful way to conceive of this 
whole area in an earlier post.


The key concept is inventory of the world.

First of all, what is actually being talked about here is only a 
VERBAL/SYMBOLIC KB.


One of the grand illusions of a literate culture is that words/symbols 
refer to everything. The reality is that we have a v. limited verbal 
inventory of the world. Words do not describe most parts of your body, for 
example, only certain key divisions. Check over your hand for a start and 
see how many bits you can name - minute bit by bit.  When it comes to the 
movements of objects, our vocabulary is breathtakingly limited.


In fact, our verbal/symbolic inventory of the world (as provided for by our 
existing cultural vocabulary - for all its millions of words) is, I suggest, 
only a tiny fraction of our COMMON SENSE KB/ inventory of the world - i.e. 
that knowledge we hold purely in sensory image form - and indeed in 
common-sense form (since as Tye points out, we never actually 
experience/operate one sense in isolation - even though we have the 
intellectual illusion that we do).


When we learn to respect the extent of our true common sense knowledge of 
the world as distinct from our formal, verbal knowledge of the world, we 
will realise another major reason why Cyc-like projects are doomed. They 
have nothing to do with common sense. Of course they will never be able to 
work out, pace Minsky, whether you can whistle and eat at the same time, or 
whether you can push or pull an object with a string. This is true common 
sense knowledge.






Re: [agi] would anyone want to use a commonsense KB?.. p.s.

2008-02-18 Thread Mike Tintner
I should add to the idea of our common sense knowledge inventory of the 
world - because my talk of objects and movements may make it all sound v. 
physical and external. That common sense inventory also includes a vast 
amount of non-verbal knowledge, paradoxically, about how we think and 
communicate with and understand others. The paradoxical part is that a lot of 
this will be common sense about how we use words themselves. Hence it is that 
experts have immense difficulties describing how they think about problems. 
They don't have the words for how they use their words. 





Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread YKY (Yan King Yin)
On 2/18/08, Mike Tintner [EMAIL PROTECTED] wrote:
 I believe I offered the beginning of a v. useful way to conceive of this
 whole area in an earlier post.

 The key concept is inventory of the world.

 First of all, what is actually being talked about here is only a
 VERBAL/SYMBOLIC KB.

 One of the grand illusions of a literate culture is that words/symbols
 refer to everything. The reality is that we have a v. limited verbal
 inventory of the world. Words do not describe most parts of your body, for
 example, only certain key divisions. Check over your hand for a start and
 see how many bits you can name - minute bit by bit.  When it comes to the
 movements of objects, our vocabulary is breathtakingly limited.

 In fact, our verbal/symbolic inventory of the world (as provided for by our
 existing cultural vocabulary - for all its millions of words) is, I suggest,
 only a tiny fraction of our COMMON SENSE KB/ inventory of the world - i.e.
 that knowledge we hold purely in sensory image form - and indeed in
 common-sense form (since as Tye points out, we never actually
 experience/operate one sense in isolation - even though we have the
 intellectual illusion that we do).

 When we learn to respect the extent of our true common sense knowledge of
 the world as distinct from our formal, verbal knowledge of the world, we
 will realise another major reason why Cyc-like projects are doomed. They
 have nothing to do with common sense. Of course they will never be able to
 work out, pace Minsky, whether you can whistle and eat at the same time, or
 whether you can push or pull an object with a string. This is true common
 sense knowledge.

I can give labels to every tiny sub-section of my hand, thus increasing the
resolution of the symbolic description.  If I give labels to each
very small visual feature of my hand, then the distinction between visual
representation and symbolic representation disappears.  Therefore, I think
symbolic KBs like Cyc's are not doomed -- the symbolic KB can merge with
perceptual grounding along a continuum.

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mike Tintner
This raises another v. interesting dimension of KBs and why they are limited. 
The social dimension. You might, purely for argument's sake, be able to name a 
vast number of unnamed parts of the world. But you would then have to secure 
social agreement for them to become practically useful. Not realistic - if you 
were, say, to add even scores, let alone thousands, of names for each bit of your 
hand.

And even when there is a set of agreed words - and this is a problem that 
absolutely plagues all of us on this board - there may still not be an agreed 
terminology. For example, we are having massive problems as a community, along 
with our society, with what words like intelligence, AGI, symbol, image, 
image schema, etc. mean. We may agree broadly on the words that are 
relevant in a given area, but we have no agreed terminology as to which of 
those words should be used when, and what they mean. And actually, now that I 
think of it, the more carefully intellectuals define their words, the MORE 
disagreements and misunderstandings you often get. Words like free and 
determined for philosophers and scientists (and all of us here) are absolute 
minefields.


  I can give labels to every tiny sub-section of my hand, thus increasing the 
resolution of the symbolic description.  If I give labels to each very small 
visual feature of my hand, then the distinction between visual representation 
and symbolic representation disappears.  Therefore, I think symbolic KBs like 
Cyc's are not doomed -- the symbolic KB can merge with perceptual grounding 
along a continuum.

  YKY




Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Stephen Reed
Pei: Resolution-based FOL on a huge KB is intractable.


Agreed.  

However, Cycorp spent a great deal of programming effort (i.e. many man-years) 
finding deep inference paths for common queries.  The strategies were:
- prune the rule set according to the context
- substitute procedural code for modus ponens in common query paths (e.g. 
  isa-links inferred via graph traversal)
- structure the inference engine as a nested set of iterators, so that easy 
  answers are returned immediately and harder-to-find answers trickle out later
- establish a battery of inference engine controls (e.g. time bounds, speed 
  vs. completeness - whether to employ expensive inference strategies for 
  greater coverage of answers) and have the inference engine automatically 
  apply the optimal control configuration for queries
- determine rule utility via machine learning and apply prioritized inference 
  modules within the given time constraints
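
A minimal sketch of the nested-iterators idea, assuming a hypothetical kb
object with facts, traverse_isa() and backward_chain(): each strategy is a
Python generator, cheaper strategies are drained first, and the caller can
stop consuming answers at any time bound.

def lookup_answers(query, kb):
    # cheapest: direct fact lookup
    yield from kb.facts.get(query, [])

def graph_answers(query, kb):
    # moderate: e.g. isa-links found by graph traversal instead of modus ponens
    yield from kb.traverse_isa(query)

def deep_answers(query, kb):
    # expensive: full backward chaining over the rule set
    yield from kb.backward_chain(query)

def answers(query, kb,
            strategies=(lookup_answers, graph_answers, deep_answers)):
    seen = set()
    for strategy in strategies:          # ordered cheapest-first
        for ans in strategy(query, kb):
            if ans not in seen:          # suppress duplicates across strategies
                seen.add(ans)
                yield ans                # caller may stop at any time bound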
My last in-house talk at Cycorp, in the summer of 2006, described a notion of 
mine that Cyc's deductive inference engine behaves as an interpreter, and that 
for a certain set of queries, a dramatic speed improvement (e.g. four orders of 
magnitude) could be achieved by compiling the query, and possibly preprocessing 
incoming facts to suit expected queries.   The queries that interested me were 
those embedded in an intelligent application, and which could be viewed as a 
query template with parameters.  The compilation process I described would 
explore the parameter space with programmer-chosen query examples.  Then the 
resulting proof trees would be compiled into executable code - avoiding 
entirely the time consuming candidate rule search and their application when 
the query executes.  My notion for Cyc's deductive inference engine 
optimization is analogous to SQL query optimization technology.
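
A toy illustration of the interpreter-versus-compiler point (a sketch, not
Cyc's actual machinery): a parameterised query template over made-up genls
(subclass) links is compiled into a specialised function that walks the graph
directly, so no candidate-rule search happens when the query executes.

from collections import deque

def compile_genls_query(kb_genls):
    """Return a function answering 'does X generalise to Y?' by plain
    graph search -- the 'compiled' form of a transitivity proof."""
    def query(x, y):
        frontier, seen = deque([x]), {x}
        while frontier:
            node = frontier.popleft()
            if node == y:
                return True
            for parent in kb_genls.get(node, ()):
                if parent not in seen:
                    seen.add(parent)
                    frontier.append(parent)
        return False
    return query

# usage: is_a = compile_genls_query({"Dog": ["Mammal"], "Mammal": ["Animal"]})
#        is_a("Dog", "Animal")  -> True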

I expect to use this technique in the Texai project at the point when I need a 
deductive inference engine.
 
-Steve


Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, February 18, 2008 6:17:59 AM
Subject: Re: [agi] would anyone want to use a commonsense KB?

 On Feb 17, 2008 9:42 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

  So far I've been using resolution-based FOL, so there's only 1 inference
  rule and this is not a big issue.  If you're using nonstandard inference
  rules, perhaps even approximate ones, I can see that this distinction is
  important.

Resolution-based FOL on a huge KB is intractable.

Pei








  




Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Harshad RJ wrote:



On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


Harshad RJ wrote:
  I read the conversation from the start and believe that Matt's
  argument is correct.

Did you mean to send this only to me?  It looks as though you mean it
for the list.  I will send this reply back to you personally, but let me
know if you prefer it to be copied to the AGI list.


Richard, thanks for replying. I did want to send it to the list.. and 
your email address (as it turns out) was listed on the forum for 
replying to the list.






  There is a difference between intelligence and motive which Richard
  seems to be ignoring. A brilliant instance of intelligence could
still
  be subservient to a malicious or ignorant motive, and I think that is
  the crux of Matt's argument.

With respect, I was not at all ignoring this point:  this is a
misunderstanding that occurs very frequently, and I thought that I
covered it on this occasion (my apologies if I forgot to do so. I
have had to combat this point on so many previous occasions that I may
have overlooked yet another repeat).

The crucial words are ... could still be subservient to a malicious or
ignorant motive.

The implication behind these words is that, somehow, the motive of
this intelligence could arise after the intelligence, as a completely
independent thing over which we had no control.  We are so used to this
pattern in the human case (we can make babies, but we cannot stop the
babies from growing up to be dictators, if that is the way they happen
to go) that we assume it must hold for an AGI as well.

This implication is just plain wrong.  



I don't believe so, though your next statement..
 


If you build an artificial
intelligence, you MUST choose how it is motivated before you can even
switch it on. 



... might be true. Yes, a motivation of some form could be coded into 
the system, but the paucity of expression at the level at which it is 
coded may still allow unintended motivations to emerge.


Say, for example, the motivation is coded in a form similar to current 
biological systems. The AGI system is motivated to keep itself happy, 
and it is happy when it has sufficient electrical energy at its disposal 
AND when the pheromones from nearby humans are all screaming positive.


It is easy to see how this kind of motivation could cause unintended 
results. The AGI system could do dramatic things like taking over a 
nuclear power station and manufacturing its own pheromone supply from a 
chemical plant. Or it could do more subtle things like manipulating 
government policies to ensure that the above happens!


Even allowing for a higher level of coding for motivation, like Asimov's 
robot rules (#1: Thou shalt not harm any human), it is very 
easy for the system to get out of hand, since such codings are ambiguous. 
Should stem cell research be allowed, for example? It might harm some 
embryos but help a larger number of adults. Should prostitution be 
legalised? It might harm the human gene pool in some vague way, or 
might even harm some specific individuals, but it also allows the 
victims themselves to earn some money and survive longer.


So, yes, motivation might be coded, but an AGI system would eventually 
need to have the *capability* to deduce its own motivation, and that 
emergent motivation could be malicious/ignorant.


I quote the rest of the message, only for the benefit of the list. 
Otherwise, my case rests here.





Stepping back for a moment, I think the problem that tends to occur in 
discussions of AGI motivation is that the technical aspects get 
overlooked when we go looking for nightmare scenarios.  What this means, 
for me, is that when I reply to a suggestion such as the one you give 
above, my response is not That kind of AGI, and AGI behavioral problem, 
is completely unimaginable, but instead what I have to say is That 
kind of AGI would not actually BE an AGI at all, because, for technical 
reasons, you would never be able to get such a thing to be intelligent 
in the first place.


There is a subtle difference between these two, but what I find is that 
most people mistakenly believe that I am making the first kind of 
response instead of the second.


So, to deal with your suggestion in detail.

When I say that some kind of motivation MUST be built into the system, I 
am pretty much uttering a truism:  an AGI without any kind of 
motivational system is like a swimmer with no muscles.  It has to be 
driven to do something, so no drives mean no activity.


Putting that to one side, then, what you propose is an AGI with an 
extremely simple motivational system:  seek electricity and high human 
pheromonal output.


I don't suggest that this is unimaginable (it is!), but what I suggest 
is that you implicitly assume a lot of stuff that, almost certainly, 
will never happen.


You 

Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
Steve,

I also agree with what you said, and what Cyc uses is no longer pure
resolution-based FOL.

A purely resolution-based inference engine is mathematically elegant,
but completely impractical, because after all the knowledge is
transformed into the clause form required by resolution, most of the
semantic information in the knowledge structure is gone, and the
result is equivalent to the original knowledge in truth value only.
It is hard to control the direction of the inference without semantic
information.

Pei
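
A small propositional illustration of this point, with made-up literals: the
clause form of an implication is truth-functionally equivalent to it, but it
licenses resolution in either direction equally, so the intended direction of
use is no longer recorded anywhere.

def unit_resolve(clause, literal):
    """Resolve a unit literal against a clause: if the clause contains the
    complementary literal, drop it and return the remaining literals."""
    def complement(lit):
        return lit[1] if lit[0] == "not" else ("not", lit)
    if complement(literal) in clause:
        return [l for l in clause if l != complement(literal)]
    return None

clause = [("not", "Raven_a"), "Black_a"]        # ~Raven(a) v Black(a)
print(unit_resolve(clause, "Raven_a"))          # ['Black_a']            (forward use)
print(unit_resolve(clause, ("not", "Black_a"))) # [('not', 'Raven_a')]   (contrapositive)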

On Feb 18, 2008 11:13 AM, Stephen Reed [EMAIL PROTECTED] wrote:

 Pei: Resolution-based FOL on a huge KB is intractable.

 Agreed.

 However, Cycorp spent a great deal of programming effort (i.e. many
 man-years) finding deep inference paths for common queries.  The strategies
 were:

 prune the rule set according to the context
 substitute procedural code for modus ponens in common query paths (e.g.
 isa-links inferred via graph traversal)
 structure the inference engine as a nested set of iterators so that easy
 answers are returned immediately, and harder-to-find answers trickle out
 later.
 establish a battery of inference engine controls (e.g. time bounds, speed
 vs. completeness - whether to employ expensive inference strategies for
 greater coverage of answers) and have the inference engine automatically
 apply the optimal control configuration for queries
 determine rule utility via machine learning and apply prioritized inference
 modules within the given time constraints
 My last in-house talk at Cycorp, in the summer of 2006, described a notion
 of mine that Cyc's deductive inference engine behaves as an interpreter, and
 that for a certain set of queries, a dramatic speed improvement (e.g. four
 orders of magnitude) could be achieved by compiling the query, and possibly
 preprocessing incoming facts to suit expected queries.   The queries that
 interested me were those embedded in an intelligent application, and which
 could be viewed as a query template with parameters.  The compilation
 process I described would explore the parameter space with programmer-chosen
 query examples.  Then the resulting proof trees would be compiled into
 executable code - avoiding entirely the time consuming candidate rule search
 and their application when the query executes.  My notion for Cyc's
 deductive inference engine optimization is analogous to SQL query
 optimization technology.

 I expect to use this technique in the Texai project at the point when I need
 a deductive inference engine.

 -Steve

 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860






Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Matt Mahoney

--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 On 2/18/08, Matt Mahoney [EMAIL PROTECTED] wrote:
  Heh... I think you could give away read-only access and charge people to
  update it.  Information has negative value, you know.
 
 Well, the idea is to ask lots of people to contribute to the KB, and pay
 them with virtual credits.  (I expect such people to have a little knowledge
 in logic or Prolog, so they can enter complex rules.  Also, they can be
 assisted by inductive learning algorithms.)  The income of the KB will be
 given back to them.  I'll take a bit of administrative fees =)

Why would this approach succeed where Cyc failed?  Cyc paid people to build
the knowledge base.  Then, when they couldn't sell it, they tried giving it
away.  Still, nobody used it.

For an AGI to be useful, people have to be able to communicate with it in
natural language.  It is easy to manipulate formulas like if P then Q.  It
is much harder to explain how this knowledge is represented and learned in a
language model.  Cyc did not solve this problem, and we see the result.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Stephen Reed
Pei, 

Another issue with a KB inference engine as contrasted with a FOL theorem 
prover is that the former seeks answers to queries, and the latter often seeks 
to disprove the negation of the theorem by finding a contradiction.   Cycorp 
therefore could not reuse much of the research from the automatic theorem 
proving community.   And on the other hand the database community commonly did 
not investigate deep inference.

As the Semantic Web community continues to develop new deductive inference 
engines tuned to inference (i.e. query answering) over large RDF KBs, I expect 
to see open-source forward-chaining and backward-chaining inference engines 
that can be optimized in the same way that I described for Cyc. 
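
A minimal sketch of such a forward-chaining engine over RDF-style
(subject, predicate, object) triples; the single hard-coded rule (transitivity
of rdfs:subClassOf) is only for illustration, since a real engine would read
its rules from the KB.

def forward_chain(triples):
    """Saturate the triple store: keep applying the rule
    (A subClassOf B) & (B subClassOf C) => (A subClassOf C)
    until no new triples are produced."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        sub = [(s, o) for (s, p, o) in triples if p == "rdfs:subClassOf"]
        for (a, b) in sub:
            for (b2, c) in sub:
                if b == b2:
                    t = (a, "rdfs:subClassOf", c)
                    if t not in triples:
                        triples.add(t)
                        changed = True
    return triples

# usage:
# forward_chain({("Dog", "rdfs:subClassOf", "Mammal"),
#                ("Mammal", "rdfs:subClassOf", "Animal")})
# now also contains ("Dog", "rdfs:subClassOf", "Animal")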
 
-Steve


Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, February 18, 2008 10:47:43 AM
Subject: Re: [agi] would anyone want to use a commonsense KB?

 Steve,

I also agree with what you said, and what Cyc uses is no longer pure
resolution-based FOL.

A purely resolution-based inference engine is mathematically elegant,
but completely impractical, because after all the knowledge is
transformed into the clause form required by resolution, most of the
semantic information in the knowledge structure is gone, and the
result is equivalent to the original knowledge in truth value only.
It is hard to control the direction of the inference without semantic
information.

Pei


Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 My argument was (at the beginning of the debate with Matt, I believe)
 that, for a variety of reasons, the first AGI will be built with
 peaceful motivations.  Seems hard to believe, but for various technical
 reasons I think we can make a very powerful case that this is exactly
 what will happen.  After that, every other AGI will be the same way
 (again, there is an argument behind that).  Furthermore, there will not
 be any evolutionary pressures going on, so we will not find that (say)
 the first few million AGIs are built with perfect motivations, and then
 some rogue ones start to develop.

In the context of a distributed AGI, like the one I propose at
http://www.mattmahoney.net/agi.html this scenario would require the first AGI
to take the form of a worm.  It may indeed be peaceful if it depends on human
cooperation to survive and spread, as opposed to exploiting a security flaw. 
So it seems a positive outcome depends on solving the security problem.  If a
worm is smart enough to debug software and discover vulnerabilities faster
than humans can (with millions of copies working in parallel), the problem
becomes more difficult.  (And this *is* an evolutionary process).  I guess I
don't share Richard's optimism.

I suppose a safer approach would be centralized, like most of the projects of
people on this list.  But I don't see how these systems could compete with the
vastly greater resources (human and computer) already available on the
internet.  A distributed system with, say, Novamente and Google as two of its
millions of peers is certainly going to be more intelligent than either system
alone.

You may wonder why I would design a dangerous system.  First, I am not
building it.  (I am busy with other projects).  But I believe that for
practical reasons something like this will eventually be built anyway, and we
need to study the design to make it safer.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Vladimir Nesov
On Feb 18, 2008 7:41 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 In other words you cannot have your cake and eat it too:  you cannot
 assume that this hypothetical AGI is (a) completely able to build its
 own understanding of the world, right up to the human level and beyond,
 while also being (b) driven by an extremely dumb motivation system that
 makes the AGI seek only a couple of simple goals.


Great summary, Richard. You should probably write it up. This position
that there is a very difficult problem of friendly AGI and a much
simpler problem of idiotic AGI that still somehow poses a threat is
too easily accepted.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Matt Mahoney wrote:

On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations.  Seems hard to believe, but for various technical
reasons I think we can make a very powerful case that this is exactly
what will happen.  After that, every other AGI will be the same way
(again, there is an argument behind that).  Furthermore, there will not
be any evolutionary pressures going on, so we will not find that (say)
the first few million AGIs are built with perfect motivations, and then
some rogue ones start to develop.


In the context of a distributed AGI, like the one I propose at
http://www.mattmahoney.net/agi.html this scenario would require the first AGI
to take the form of a worm.


That scenario is deeply implausible - and you can only continue to 
advertise it because you ignore all of the arguments I and others have 
given, on many occasions, concerning the implausibility of that scenario.


You repeat this line of black propaganda on every occasion you can, but 
on the other hand you refuse to directly address the many, many reasons 
why that black propaganda is nonsense.


Why?




Richard Loosemore








Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
On Feb 18, 2008 12:37 PM, Stephen Reed [EMAIL PROTECTED] wrote:

 Pei,

 Another issue with a KB inference engine as contrasted with a FOL theorem
 prover is that the former seeks answers to queries, and the latter often
 seeks to disprove the negation of the theorem by finding a contradiction.
 Cycorp therefore could not reuse much of the research from the automatic
 theorem proving community.   And on the other hand the database community
 commonly did not investigate deep inference.

The automatic theorem proving community does that because resolution
by itself is not complete, while resolution-refutation is complete.

Pei
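
A toy propositional sketch of query answering by refutation, with made-up
clauses: the negated query is added to the clause set and resolution is run to
saturation, succeeding when the empty clause is derived.

from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def refutes(clauses, query):
    """True if the clauses entail the query, shown by deriving the empty
    clause from the clauses plus the negated query."""
    clauses = set(clauses) | {frozenset([negate(query)])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True          # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False                 # saturated without a contradiction
        clauses |= new

# usage: refutes({frozenset(["P"]), frozenset(["~P", "Q"])}, "Q")  -> True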

 As the Semantic Web community continues to develop new deductive inference
 engines tuned to inference (i.e. query answering) over large RDF KBs, I
 expect to see open-source forward-chaining and backward-chaining inference
 engines that can be optimized in the same way that I described for Cyc.

 -Steve

 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860




Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Harshad RJ
On Feb 18, 2008 10:11 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


 You assume that the system does not go through a learning phase
 (childhood) during which it acquires its knowledge by itself.  Why do
 you assume this?  Because an AGI that was motivated only to seek
 electricity and pheromones is going to be as curious, as active, as
 knowledge seeking, as exploratory (etc etc etc) as a moth that has been
 preprogrammed to go towards bright lights.  It will never learn anything
 by itself because you left out the [curiosity] motivation (and a lot
 else besides!).


I think your reply points back to the confusion between intelligence and
motivation. Curiosity would be a property of intelligence and not of
motivation. After all, you need a motivation to be curious. Moreover, the
curiosity would be guided by the kind of motivation. A benevolent motive
would drive the curiosity to seek benevolent solutions, like, say, solar
power, while a malevolent motive could drive it to seek destructive ones.

I see motivation as a much more basic property of intelligence. It needs to
answer why, not what or how.


 But when we try to get an AGI to have the kind of structured behavior
 necessary to learn by itself, we discover . what?  That you cannot
 have that kind of structured exploratory behavior without also having an
 extremely sophisticated motivation system.


So, in the sense that I mentioned above, why do you say/imply that a
pheromone- (or neurotransmitter-) based motivation is not sophisticated
enough? And, without getting your hands messy with chemistry, how do you
propose to explain your emotions to a non-human intelligence? How would
you distinguish construction from destruction, or chaos from order, or explain
why two people being able to eat a square meal is somehow better than 2 million
reading Dilbert comics?


In other words you cannot have your cake and eat it too:  you cannot
 assume that this hypothetical AGI is (a) completely able to build its
 own understanding of the world, right up to the human level and beyond,
 while also being (b) driven by an extremely dumb motivation system that
 makes the AGI seek only a couple of simple goals.


In fact, I do think (a) and (b) are together possible, and they best describe how
human brains work. Our motivation system is extremely dumb: reproduction!
And it is expressed with nothing more than a feedback loop using
neurotransmitters.



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Bob Mottram
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
  ... might be true. Yes, a motivation of some form could be coded into
  the system, but the paucity of expression at the level at which it is
  coded may still allow unintended motivations to emerge.


It seems that in the AGI arena much emphasis is put on designing goal
systems.  But in nature behavior is not always driven explicitly by
goals.  A lot of behavior I suspect is just drift, and understanding
this requires you to examine the dynamics of the system.  For example
if I'm talking on the phone and doodling with a pen this doesn't
necessarily imply that I explicitly have instantiated a goal of draw
doodle.  Likewise within populations changes in the gene pool do not
necessarily mean that explicit selection forces are at work.

My supposition is that the same dynamics seen in natural systems will
also apply to AGIs, since these are all examples of complex dynamical
systems.



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Bob Mottram wrote:

On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

... might be true. Yes, a motivation of some form could be coded into
the system, but the paucity of expression at the level at which it is
coded may still allow unintended motivations to emerge.



It seems that in the AGI arena much emphasis is put on designing goal
systems.  But in nature behavior is not always driven explicitly by
goals.  A lot of behavior I suspect is just drift, and understanding
this requires you to examine the dynamics of the system.  For example
if I'm talking on the phone and doodling with a pen this doesn't
necessarily imply that I explicitly have instantiated a goal of draw
doodle.  Likewise within populations changes in the gene pool do not
necessarily mean that explicit selection forces are at work.

My supposition is that the same dynamics seen in natural systems will
also apply to AGIs, since these are all examples of complex dynamical
systems.


Oops: the above quote was attached to my name in error.  I believe 
Harshad wrote that, not I.



But regarding your observation, Bob: I have previously advocated a 
distinction between diffuse motivation systems and goal-stack 
systems.  As you say, most AI systems simply assume that what controls 
the AI is a goal stack.


I will write up this distinction on a web page shortly.



Richard Loosemore



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
  My argument was (at the beginning of the debate with Matt, I believe)
  that, for a variety of reasons, the first AGI will be built with
  peaceful motivations.  Seems hard to believe, but for various technical
  reasons I think we can make a very powerful case that this is exactly
  what will happen.  After that, every other AGI will be the same way
  (again, there is an argument behind that).  Furthermore, there will not
  be any evolutionary pressures going on, so we will not find that (say)
  the first few million AGIs are built with perfect motivations, and then
  some rogue ones start to develop.
  
  In the context of a distributed AGI, like the one I propose at
  http://www.mattmahoney.net/agi.html this scenario would require the first
 AGI
  to take the form of a worm.
 
 That scenario is deeply implausible - and you can only continue to 
 advertise it because you ignore all of the arguments I and others have 
 given, on many occasions, concerning the implausibility of that scenario.
 
 You repeat this line of black propaganda on every occasion you can, but 
 on the other hand you refuse to directly address the many, many reasons 
 why that black propaganda is nonsense.
 
 Why?

Perhaps worm is the wrong word.  Unlike today's computer worms, it would be
intelligent, it would evolve, and it would not necessarily be controlled by or
serve the interests of its creator.  Whether or not it is malicious would
depend on the definitions of good and bad, which depend on who you ask.  A
posthuman might say the question is meaningless.

If I understand your proposal, it is:
1. The first AGI to achieve recursive self improvement (RSI) will be friendly.
2. Friendly is hard to define, but because the AGI is intelligent, it would
know what we mean and get it right.
3. The goal system is robust because it is described by a very large number of
soft constraints.
4. The AGI would not change the motivations or goals of its offspring because
it would not want to.
5. The first AGI to achieve RSI will improve its intelligence so fast that all
competing systems will be left far behind.  (Thus, a worm).
6. RSI is deterministic.

My main point of disagreement is 6.  Increasing intelligence requires
increasing algorithmic complexity.  We know that a machine cannot output a
description of another machine with complexity greater than its own.  Therefore
reproduction is probabilistic and experimental, and RSI is evolutionary.  Goal
reproduction can be very close but not exact.  (Although the AGI won't want to
change the goals, it will be unable to reproduce them exactly because goals
are not independent of the rest of the system).  Because RSI is very fast,
goals can change very fast.  The only stable goals in evolution are those that
improve fitness and reproduction, e.g. efficiency and acquisition of computing
resources.

Which part of my interpretation or my argument do you disagree with?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations.  Seems hard to believe, but for various technical
reasons I think we can make a very powerful case that this is exactly
what will happen.  After that, every other AGI will be the same way
(again, there is an argument behind that).  Furthermore, there will not
be any evolutionary pressures going on, so we will not find that (say)
the first few million AGIs are built with perfect motivations, and then
some rogue ones start to develop.

In the context of a distributed AGI, like the one I propose at
http://www.mattmahoney.net/agi.html this scenario would require the first

AGI

to take the form of a worm.
That scenario is deeply implausible - and you can only continue to 
advertise it because you ignore all of the arguments I and others have 
given, on many occasions, concerning the implausibility of that scenario.


You repeat this line of black propaganda on every occasion you can, but 
on the other hand you refuse to directly address the many, many reasons 
why that black propaganda is nonsense.


Why?


Perhaps worm is the wrong word.  Unlike today's computer worms, it would be
intelligent, it would evolve, and it would not necessarily be controlled by or
serve the interests of its creator.  Whether or not it is malicious would
depend on the definitions of good and bad, which depend on who you ask.  A
posthuman might say the question is meaningless.


So far, this just repeats the same nonsense:  your scenario is based on 
unsupported assumptions.





If I understand your proposal, it is:
1. The first AGI to achieve recursive self improvement (RSI) will be friendly.


For a variety of converging reasons, yes.



2. Friendly is hard to define, but because the AGI is intelligent, it would
know what we mean and get it right.


No, not correct.  Friendly is not hard to define if you build the AGI 
with a full-fledged motivation system of the diffuse sort I have 
advocated before.  To put it in a nutshell, the AGI can be made to have 
a primary motivation that involves empathy with the human species as a 
whole, and what this does in practice is that the AGI would stay locked in 
sync with the general desires of the human race.


The question of knowing what we mean by 'friendly' is not relevant, 
because this kind of knowing is explicit declarative knowledge.




3. The goal system is robust because it is described by a very large number of
soft constraints.


Correct.  The motivation system, to be precise, depends for its 
stability on a large number of interconnections, so trying to divert it 
from its main motivation would be like unscrambling an egg.
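
To make the "large number of soft constraints" point concrete, here is a
minimal sketch in Python/NumPy (an illustration of the general principle
only, not Richard's actual design): a state stored as a Hopfield-style
attractor is pulled back to itself after a sizeable perturbation, because
its stability comes from many weak, mutually reinforcing couplings rather
than from any single rule that could be edited in isolation.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    pattern = rng.choice([-1, 1], size=n)  # the stored "motivation" state
    W = np.outer(pattern, pattern) / n     # Hebbian weights = soft constraints
    np.fill_diagonal(W, 0)

    state = pattern.copy()
    state[:20] *= -1                       # perturb 10% of the units

    for _ in range(10):                    # synchronous relaxation steps
        state = np.sign(W @ state)
        state[state == 0] = 1              # break ties toward +1

    print("recovered original state:", bool(np.array_equal(state, pattern)))

Running this prints True: the perturbed state relaxes back to the stored
pattern, a toy version of the stability-through-redundancy claim.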




4. The AGI would not change the motivations or goals of its offspring because
it would not want to.


Exactly.  It would not just not change them, it would take active steps 
to ensure that any other AGI would have exactly the same safeguards in 
its system that it (the mother) would have.




5. The first AGI to achieve RSI will improve its intelligence so fast that all
competing systems will be left far behind.  (Thus, a worm).


No, not thus a worm.  It will simply be an AGI.  The concept of a 
computer worm is so far removed from this AGI that it is misleading to 
recruit the term.



6. RSI is deterministic.


Not correct.

The factors that make a collection of free-floating atoms (in a 
zero-gravity environment) tend to coalesce into a sphere are not 
deterministic in any relevant sense of the term.  A sphere forms 
because a RELAXATION of all the factors involved ends up in the same 
shape every time.


If you mean any other sense of "deterministic", then you must clarify.



My main point of disagreement is 6.  Increasing intelligence requires
increasing algorithmic complexity.  We know that a machine cannot output a
description of another machine with greater complexity.  Therefore
reproduction is probabilistic and experimental, and RSI is evolutionary.  Goal
reproduction can be very close but not exact.  (Although the AGI won't want to
change the goals, it will be unable to reproduce them exactly because goals
are not independent of the rest of the system).  Because RSI is very fast,
goals can change very fast.  The only stable goals in evolution are those that
improve fitness and reproduction, e.g. efficiency and acquisition of computing
resources.

Which part of my interpretation or my argument do you disagree with?


The last paragraph!  To my mind, this is a wild, free-wheeling 
non sequitur that ignores all the parameters laid down in the preceding 
paragraphs:



Increasing intelligence requires increasing algorithmic complexity.

If its motivation system is built the way that I describe it, this is of 
no relevance.



We know 

Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Richard Loosemore

Harshad RJ wrote:


On Feb 18, 2008 10:11 PM, Richard Loosemore [EMAIL PROTECTED] wrote:



You assume that the system does not go through a learning phase
(childhood) during which it acquires its knowledge by itself.  Why do
you assume this?  Because an AGI that was motivated only to seek
electricity and pheromones is going to be as curious, as active, as
knowledge seeking, as exploratory (etc etc etc) as a moth that has been
preprogrammed to go towards bright lights.  It will never learn anything 
by itself because you left out the [curiosity] motivation (and a lot
else besides!).  



I think your reply points back to the confusion between intelligence and 
motivation. Curiosity would be a property of intelligence and not 
motivation. After all, you need a motivation to be curious. Moreover, 
the curiosity would be guided by the kind of motivation. A benevolent 
motive would drive the curiosity to seek benevolent solutions, like say 
solar power, while a malevolent motive could drive it to seek 
destructive ones.


No confusion, really.  I do understand that curiosity is a difficult 
case that lies on the borderline, but what I am talking about is 
systematic exploration-behavior, or playing.  The kind of activity that 
children and curious adults engage in when they deliberately try to find 
something out *because* they feel a curiosity urge (so to speak).


What I think you are referring to is just the understanding-mechanisms 
that enable the intelligence part of the mind to solve problems or 
generally find things out.  Let's call this intelligence-mechanism a 
[Finding-Out] activity, whereas the type of thing children do best is 
[Curiosity], which is a motivation mode that they get into.


Then, using that terminology on your above paragraph:

"After all, you need a motivation to be curious" translates into "You 
need a motivation of some sort to engage in [Finding-Out]."  For 
example, before you try to figure out where a particular link is located 
on a web page, you need the (general) motivation that is pushing you to 
do this, as well as the (specific) goal that drives you to find that 
particular link.


"Moreover, the curiosity would be guided by the kind of motivation" 
translates into "The [Finding-Out] activity would be guided by the 
background motivation."  This is what I have just said.


"A benevolent motive would drive the curiosity to seek benevolent 
solutions, like say solar power, while a malevolent motive could drive 
it to seek destructive ones."  This translates into "A benevolent 
motivation (and this really is a motivation, in my terminology) would 
drive the [Finding-Out] mechanisms to seek benevolent solutions, like 
say solar power, while a malevolent motivation (again, I would agree 
that this is a motivation) could drive the [Finding-Out] mechanisms to 
seek destructive ones."


What this all amounts to is that the thing I referred to as curiosity 
really is a motivation, because a creature that has an unstructured, 
background desire (a motivation) to find out about the world will 
acquire a lot of background knowledge and become smart.




I see motivation as a much more basic property of intelligence. It needs 
to answer "why", not "what" or "how".
 


But when we try to get an AGI to have the kind of structured behavior
necessary to learn by itself, we discover ... what?  That you cannot
have that kind of structured exploratory behavior without also having an
extremely sophisticated motivation system.


So, in the sense that I mentioned above, why do you say/imply that a 
pheromone-based (or neurotransmitter-based) motivation is not sophisticated 
enough? And, without getting your hands messy with chemistry, how do you 
propose to explain your emotions to a non-human intelligence? How 
would you distinguish construction from destruction, chaos from order, 
or explain why two people being able to eat a square meal is somehow 
better than 2 million people reading Dilbert comics?


I frankly don't know if I understand the question.

We already have creatures that seek nothing but chemical signals: 
amoebae do this.


Imagine a human baby that did nothing but try to sniff out breast milk: 
 it would never develop because it would never do any of the other 
things, like playing.  It would just sit there and try to sniff for the 
stuff it wanted.





In other words you cannot have your cake and eat it too:  you cannot
assume that this hypothetical AGI is (a) completely able to build its
own understanding of the world, right up to the human level and beyond,
while also being (b) driven by an extremely dumb motivation system that
makes the AGI seek only a couple of simple goals.


In fact, I do think (a) and (b) are together possible and they best describe 
how human brains work. Our motivation system is extremely dumb: 
reproduction! And it is expressed with nothing more than a feedback 
loop using neurotransmitters.



Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-02-18 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  Perhaps worm is the wrong word.  Unlike today's computer worms, it would
 be
  intelligent, it would evolve, and it would not necessarily be controlled
 by or
  serve the interests of its creator.  Whether or not it is malicious would
  depend on the definitions of good and bad, which depend on who you
 ask.  A
  posthuman might say the question is meaningless.
 
 So far, this just repeats the same nonsense:  your scenario is based on 
 unsupported assumptions.

OK, let me use the term "mass extinction."  The first AGI that implements RSI
is so successful that it kills off all its competition.

 The question of knowing what we mean by 'friendly' is not relevant, 
 because this kind of knowing is explicit declarative knowledge.

I can accept that an AGI can have empathy toward humans, although no two
people will agree exactly on what this means.

  6. RSI is deterministic.
 
 Not correct.

This is the only point where we disagree, and my whole argument depends on it.

 The factors that make a collection of free-floating atoms, in a 
 zero-gravity environment) tend to coalesce into a sphere are not 
 deterministic in any relevant sense of the term.  A sphere forms 
 because a RELAXATION of all the factors involved ends up in the same 
 shape every time.
 
 If you mean any other sense of deterministic then you must clarify.

I mean in the sense that if RSI were deterministic, then a parent AGI could
predict a child's behavior in any given situation.  If the parent knew as much
as the child, or had the capacity to know as much as the child could know,
then what is the point of RSI?


  Which part of my interpretation or my argument do you disagree with?
 
 Increasing intelligence requires increasing algorithmic complexity.
 
 If its motivation system is built the way that I describe it, this is of 
 no relevance.

Instead of the fuzzy term "intelligence", let me say "amount of knowledge",
which most people would agree is correlated with intelligence.  Behavior
depends not just on goals but also on what you know.  A child AGI may have
empathy toward humans just like its parent, but may have a slightly different
idea of what it means to be human.

 We know that a machine cannot output a description of another machine 
 with greater complexity.
 
 When would it ever need to do such a thing?  This factoid, plucked from 
 computational theory, is not about description in the normal 
scientific and engineering sense; it is about containing a complete copy 
 of the larger system inside the smaller.  I, a mere human, can 
 describe the sun and its dynamics quite well, even though the sun is a 
 system far larger and more complex than myself.  In particular, I can 
 give you some beyond-reasonable-doubt arguments to show that the sun 
 will retain its spherical shape for as long as it is on the Main 
 Sequence, without *ever* changing its shape to resemble Mickey Mouse. 
 Its shape is stable in exactly the same way that an AGI motivation 
 system would be stable, in spite of the fact that I cannot describe 
this large system in the strict, computational sense in which some 
 systems describe other systems.

Your model of the sun does not include the position of every atom.  It has
less algorithmic complexity than your brain.  Why is your argument relevant?



-- Matt Mahoney, [EMAIL PROTECTED]



[agi] Installing MindForth in a robot

2008-02-18 Thread A. T. Murray
Only robots above a certain level of sophistication may receive 
a mind-implant via MindForth. The computerized robot needs to have 
an operating system that will support Forth and sufficient memory 
to hold both the AI program code and a reasonably large knowledge 
base (KB) of experience. A Forth program is so portable from one 
version of Forth to another that robot manufacturers, vendors and 
users should not think of Mind.Forth as restricted to Win32Forth 
for implementation and operation, but as a candidate for upgrading 
to a 64-bit Forth running on a 64-bit system, thereby possessing a 
practically unlimited memory space. The Forth variant iForth is 
supposedly on its way to becoming a 64-bit Forth. People getting 
into Forth AI for the first time, with the option of adopting 
64-bit technology from the very start, should do so with the 
realization that it will be an extremely long time before any 
further upgrade is made to 128-bit or higher technology. It is 
more likely that AI will go down into quantum technology before 
going up to 128-bit technology. So embrace and extend 64-bit AI. 
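
For scale, a rough calculation (my own illustration, not anything taken
from Mind.Forth itself) of what moving from a 32-bit cell, as in
Win32Forth, to a 64-bit cell buys in addressable memory:

    # Address-space sizes implied by the cell width of the host Forth.
    for bits in (32, 64):
        addressable = 2 ** bits            # distinct byte addresses
        print(f"{bits}-bit: {addressable:,} bytes "
              f"(about {addressable / 2**30:,.0f} GiB)")
    # 32-bit: 4,294,967,296 bytes (about 4 GiB)
    # 64-bit: 18,446,744,073,709,551,616 bytes (about 17,179,869,184 GiB)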

ATM
--
http://mentifex.virtualentity.com/mind4th.html
http://mentifex.virtualentity.com/m4thuser.html
