Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread BillK
On 11/2/07, Eliezer S. Yudkowsky wrote:
 I didn't ask whether it's possible.  I'm quite aware that it's
 possible.  I'm asking if this is what you want for yourself.  Not what
 you think that you ought to logically want, but what you really want.

 Is this what you lived for?  Is this the most that Jiri Jelinek wants
 to be, wants to aspire to?  Forget, for the moment, what you think is
 possible - if you could have anything you wanted, is this the end you
 would wish for yourself, more than anything else?



Well, almost.
Absolute Power over others and being worshipped as a God would be neat as well.

Getting a dog is probably the nearest most humans can get to this.

BillK



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:


Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?


That's a very personal question, don't you think?

Even the parts I'm willing to answer have long answers.  It doesn't 
involve my turning into a black box with no outputs, though.  Nor 
ceasing to act, nor ceasing to plan, nor ceasing to steer my own 
future through my own understanding of it.  Nor being kept as a pet. 
I'd sooner be transported into a randomly selected anime.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] NLP + reasoning?

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 12:06:05PM -0400, Jiri Jelinek wrote:
 On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  Natural language is a fundamental part of the knowledge
 base, not something you can add on later.
 
 I disagree. You can start with a KB that contains concepts retrieved
 from a well-structured non-NL input format only, get the thinking
 algorithms working, and then (possibly much later) let the system
 focus on NL analysis/understanding or build some
 NL-to-the_structured_format translation tools.

Bing. Yes, exactly.

I'm taking an experimental, rather than theoretical, approach: build
the minimum amount needed to make something work, then determine what
the next roadblock is, and fix that. And then iterate.  At this point,
for me, sentence parsing is not a roadblock. Nor is the conversion of
parser output to Narsese (the reasoning engine's internal language).
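
A minimal sketch of the kind of parser-output-to-Narsese conversion step
mentioned above (illustrative only: the toy parse structure, the function
name, and the exact Narsese rendering are assumptions, not lillybot's or
NARS's actual code):

# Sketch: turn a toy dependency-style parse into a Narsese-like
# inheritance statement.  The parse format is invented for illustration.
def parse_to_narsese(parse):
    # parse is assumed to look like:
    # {"subj": "aluminum", "verb": "is", "obj": "metal"}
    if parse.get("verb") == "is":
        # render as an inheritance-style statement, e.g. "<aluminum --> metal>."
        return "<%s --> %s>." % (parse["subj"], parse["obj"])
    raise ValueError("no hand-coded rule for this parse pattern yet")

print(parse_to_narsese({"subj": "aluminum", "verb": "is", "obj": "metal"}))
# prints: <aluminum --> metal>.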

--linas



Re: [agi] NLP + reasoning?

2007-11-02 Thread Jiri Jelinek
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Natural language is a fundamental part of the knowledge
base, not something you can add on later.

I disagree. You can start with a KB that contains concepts retrieved
from a well-structured non-NL input format only, get the thinking
algorithms working, and then (possibly much later) let the system
focus on NL analysis/understanding or build some
NL-to-the_structured_format translation tools.
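
As a purely illustrative sketch of what such a well-structured, non-NL
input format could look like (the field names and values below are
assumptions, not a format specified anywhere in this thread):

# Hypothetical structured assertions fed directly into the KB, bypassing NL.
assertions = [
    {"subject": "aluminum", "relation": "is_a",    "object": "metal",        "confidence": 0.98},
    {"subject": "metal",    "relation": "is_a",    "object": "material",     "confidence": 0.99},
    {"subject": "aluminum", "relation": "density", "object": "2.70 g/cm^3",  "confidence": 0.95},
]

# The thinking algorithms can be developed against this clean input first;
# an NL-to-structured-format translator can be bolted on much later.
for a in assertions:
    print("%s %s %s (conf %.2f)" % (a["subject"], a["relation"], a["object"], a["confidence"]))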

Regards,
Jiri Jelinek



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 12:41:16PM -0400, Jiri Jelinek wrote:
 On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 if you could have anything you wanted, is this the end you
 would wish for yourself, more than anything else?
 
 Yes. But don't forget I would also have AGI continuously looking into
 how to improve my (/our) way of perceiving the pleasure-like stuff.

This is a bizarre line of reasoning. One way that my AGI might improve
my perception of pleasure is to make me dumber -- electroshock me --
so that I find Gilligan's Island reruns incredibly pleasurable. Or,
I dunno, find that heroin addiction is a great way to live.

Or help me with fugue states: what is the sound of one hand clapping?
Feed me Zen koans till my head explodes.

But it might also decide that I should be smarter, so that I have a more
acute sense and discernment of pleasure. Make me smarter about roses,
so that I can enjoy my rose garden in a more refined way. And after I'm
smarter, perhaps I'll have a whole new idea of what pleasure is,
and what it takes to make me happy.

Personally, I'd opt for this last possibility.

--linas



Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 11:27:08AM +0300, Vladimir Nesov wrote:
 Linas,
 
 Yes, you probably can code all the patterns you need. But it's only
 the tip of the iceberg: the problem is that beyond those 1M rules there
 are also thousands more being constantly generated, assessed and
 discarded. Knowledge formation happens all the time and adapts those
 1M rules to a gazillion real-world situations. You can consider those
 additional rules 'inference', but then again, if you can do your
 inference that well, you can do without the 1M hand-coded rules,
 allowing the system to learn them from the ground up. If your inference
 is not good enough, it's not clear how many rules you'd need to code in
 manually; it may be 10^6 or 10^12, or 10^30, because you'd also need
 to code _potential_ rules which are not normally stored in the human
 brain, but generated on the fly.

Yes, agreed. Right now, I'm looking at all of the code as disposable
scaffolding, as something that might allow enough interaction to make
human-like conversation bearable.  That scaffolding should enable 
some real work.

My current impression is that OpenCyc's 10^6 assertions make it vaguely
comparable to a 1st grader, at least conversationally: it can make
simple deductions, write short essays of facts, and learn new things,
but it can go astray easily.

It does not yet learn about new sentence types, and can't yet guess at
new parses.  It certainly doesn't have spunk or initiative!

Inference is tricky. Even simple things use alarmingly large amounts
of CPU time.

 I plan to support recent context through a combination of stable
 activation patterns (which is analogous to constantly reciting a
 certain phrase, only on a lower level) and temporary induction (roughly,
 co-occurrence of concepts in the near past leads to them activating each
 other in the present, and similarly there are temporary concepts being
 formed all the time, of which only those that get repeatedly used in
 their short lifetime are retained for longer and longer).

Yes, of course.  Easy to say.  lillybot remembers recent assertions,
and can reason from them.  However, I'm currently hard-coding all
reasoning in a case-by-case, ad-hoc manner.  I haven't done enough
of these yet to see what the general pattern might be.
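
A bare-bones illustration of the sort of ad-hoc, case-by-case handling of
recent assertions described above (a sketch only; this is not lillybot's
real code, and every name below is invented):

# Trivial conversational-state store: keep recent assertions and answer a
# follow-up question from the most recent matching one.
recent_assertions = []   # list of (subject, predicate, object), newest last

def remember(subj, pred, obj):
    recent_assertions.append((subj, pred, obj))

def answer_what_is(subj):
    # ad-hoc rule: scan recent assertions, newest first
    for s, p, o in reversed(recent_assertions):
        if s == subj and p == "is":
            return "%s is %s" % (s, o)
    return "I don't know what %s is." % subj

remember("lincoln", "is", "a county")
print(answer_what_is("lincoln"))   # -> lincoln is a county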

--linas



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 01:19:19AM -0400, Jiri Jelinek wrote:
 Or do we know anything better?

I sure do. But ask me again, when I'm smarter, and have had more time to
think about the question.

--linas



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
I'm asking if this is what you want for yourself.

Then you could read just the first word from my previous response: YES

if you could have anything you wanted, is this the end you
would wish for yourself, more than anything else?

Yes. But don't forget I would also have AGI continuously looking into
how to improve my (/our) way of perceiving the pleasure-like stuff.

And because I'm influenced by my mirror neurons and care about others,
expect my monster robot-savior to eventually break through your door,
grab you, and plug you into the pleasure grid. ;-)

Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?

Regards,
Jiri Jelinek



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:

On Nov 2, 2007 4:54 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:


You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But the intelligent part of the goal system
is a much more 'complex' thing and can also act as a goal in itself.
You can say that AGIs will be able to maximize satisfaction of the
intelligent part too,


Could you please provide one specific example of a human goal which
isn't feeling-based?


Saving your daughter's life.  Most mothers would rather save their
daughter's life than merely feel that they saved their daughter's life.
As proof of this, mothers sometimes sacrifice their lives to save
their daughters and never get to feel the result.  Yes, this is
rational, for there is no truth that destroys it.  And before you
claim all those mothers were theists, there was an atheist police
officer, signed up for cryonics, who ran into the World Trade Center
and died on September 11th.  As Tyrone Pow once observed, for an
atheist to sacrifice their life is a very profound gesture.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Vladimir Nesov
Jiri,

You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But the intelligent part of the goal system
is a much more 'complex' thing and can also act as a goal in itself.
You can say that AGIs will be able to maximize satisfaction of the
intelligent part too, as they are 'vastly more intelligent', but now
it's turned into a general 'they do what we want', which is essentially
what Friendly AI is by definition (ignoring specifics about what 'what
we want' actually means).


On 11/2/07, Jiri Jelinek [EMAIL PROTECTED] wrote:
  Is this really what you *want*?
  Out of all the infinite possibilities, this is the world in which you
  would most want to live?

 Yes, great feelings only (for as many people as possible), with the
 engine being continuously improved by an AGI which would also take care
 of all related tasks, including safety issues etc. The quality of our
 life is in feelings. Or do we know anything better? We do what we do
 for feelings, and we alter them very indirectly. We could optimize and
 get the greatest experiences allowed by the current design by direct
 altering/stimulation (changes would be required so we could take it
 non-stop). Whatever you enjoy, it's not really the thing you are
 doing. It's the triggered feeling, which can be obtained and
 intensified more directly. We don't know exactly how those great
 feelings (/qualia) work, but there are a number of chemicals and brain
 regions known to play key roles.

 Regards,
 Jiri Jelinek


 On Nov 2, 2007 12:54 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
  Jiri Jelinek wrote:
  
   Let's go to an extreme: Imagine being an immortal idiot. No matter
   what you do and how hard you try, the others will always be so much
   better at everything that you will eventually become totally
   discouraged, or even afraid to touch anything, because it would just
   always demonstrate your relative stupidity (/limitations) in some way.
   What a life. Suddenly, there is this amazing pleasure machine as a new
   god-like style of living for poor creatures like you. What do you do?
 
  Jiri,
 
  Is this really what you *want*?
 
  Out of all the infinite possibilities, this is the world in which you
  would most want to live?
 
  --
  Eliezer S. Yudkowsky  http://singinst.org/
  Research Fellow, Singularity Institute for Artificial Intelligence
 
 




-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Vladimir Nesov
Linas,

Yes, you probably can code all the patterns you need. But it's only
the tip of the iceberg: the problem is that beyond those 1M rules there
are also thousands more being constantly generated, assessed and
discarded. Knowledge formation happens all the time and adapts those
1M rules to a gazillion real-world situations. You can consider those
additional rules 'inference', but then again, if you can do your
inference that well, you can do without the 1M hand-coded rules,
allowing the system to learn them from the ground up. If your inference
is not good enough, it's not clear how many rules you'd need to code in
manually; it may be 10^6 or 10^12, or 10^30, because you'd also need
to code _potential_ rules which are not normally stored in the human
brain, but generated on the fly.

I plan to support recent context through a combination of stable
activation patterns (which is analogous to constantly reciting a
certain phrase, only on a lower level) and temporary induction (roughly,
co-occurrence of concepts in the near past leads to them activating each
other in the present, and similarly there are temporary concepts being
formed all the time, of which only those that get repeatedly used in
their short lifetime are retained for longer and longer).
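
As a rough illustrative sketch of the temporary-induction idea above (not
an implementation from this thread; the decay constant, threshold, and
data structures are arbitrary assumptions):

# Sketch of temporary induction: concepts that co-occur recently get a
# temporary link that strengthens with repeated use and decays otherwise.
from collections import defaultdict

temp_links = defaultdict(float)   # (concept_a, concept_b) -> strength
DECAY = 0.9                       # applied once per observation step
FORGET_BELOW = 0.05               # short-lived links are dropped here

def observe(concepts):
    # decay existing temporary links, forgetting the weakest ones
    for key in list(temp_links):
        temp_links[key] *= DECAY
        if temp_links[key] < FORGET_BELOW:
            del temp_links[key]
    # strengthen links between every pair of concepts seen together
    for a in concepts:
        for b in concepts:
            if a < b:
                temp_links[(a, b)] += 1.0

observe(["rose", "garden"])
observe(["rose", "pleasure"])
print(dict(temp_links))   # "rose" now weakly activates both "garden" and "pleasure"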


On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
 On Thu, Nov 01, 2007 at 02:58:07PM -0700, Matt Mahoney wrote:
  --- Linas Vepstas [EMAIL PROTECTED] wrote:
 
   Thus, I find that my interests are now turning to representing
   conversational state. How does novamente deal with it? What
   about Pei Wang's NARS? It seems that NARS is a reasoning system;
   great; but what is holding me back right now is not an ability
   to reason per-se, but the ability to maintain a conversational
   state.
 
  If the matrix is sparse it can be compressed using singular value
  decomposition (SVD).

 [...]

  This is not a model you can tack onto a structured knowledge base.

 Why not? Both NARS and Novamente have real-number-valued associative
 deduction abilities. I see no reason why simple matrix or neural net
 algorithms couldn't be layered on top of them.

  Your approach has been tried
  hundreds of times.

 Yes, I figured as much. I haven't yet seen a cogent explanation of
 why folks gave up. For SHRDLU, sure... compute power was limited.
 There's discussion about grounding, and folks wander off into the weeds.

  There is a great temptation to insert knowledge directly,
  but the result is always the same.  Natural language is a complicated beast.
  You cannot hand-code all the language rules.  After 23 years of developing
  the Cyc database, Doug Lenat guesses it is between 0.1% and 10% finished.

 And he hasn't stopped trying. We also have WordNet, and assorted ontology
 projects.

 How many English words are there? About 250K, but this hasn't stopped
 classical dictionary authors, nor WordNet, nor Lenat.

 How many sentence parse patterns are there? 10K? 100K? 1M? It's not
 infinite, even though it can feel that way sometimes. Just because
 you personally don't feel like trying to hand-build an association
 matrix between sentence parse patterns and a semantic 'current topic
 of conversation' dataset doesn't mean it's unbuildable.

 I didn't claim that the approach I described is good; it's not,
 and I can see its limitations already. I did claim that it's practical,
 and after half a dozen weekends of coding, I have a demo. I'm trying to
 understand just how far the approach can be pushed. I get the impression
 that it hasn't been pushed very far at all before people give up.

 --linas





-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



[agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-02 Thread Edward W. Porter
Can humans keep superintelligences under control -- can
superintelligence-augmented humans compete

Richard Loosemore (RL) wrote the following on Fri 11/2/2007 11:15 AM,
in response to a post by Matt Mahoney.

My comments are preceded by ED

RL This is the worst possible summary of the situation, because
instead of dealing with each issue as if there were many possibilities,
it pretends that there is only one possible outcome to each issue.

In this respect it is as bad as (or worse than) all the science fiction
nonsense that has distorted AI since before AI even existed.

ED Your above statement is quite a put-down.  Is it justified?  Yes,
Matt doesn't describe all the multiple possibilities, but are the
possibilities he describes really worse than science fiction nonsense?

I personally think Matt has raised some real and important issues.  We
have had go-rounds on this list before about similar subjects, such as
under the thread “Religion-free technical content.”  When we did, a
significant number of the people on this list seemed to agree with the
notion that superhuman-level AGI poses a threat to mankind.  I think even
Eliezer Yudkowsky, who has spent a lot of time on this issue, does not yet
consider it solved.  That is one of the reasons so many of us believe in
the need for some sort of transhumanist transformation, so that our
descendants (whatever they may be) have a chance to continue surviving in
the presence of such superintelligences.

RL Example 1:  "...humans cannot predict -- and therefore cannot
control -- machines that are vastly smarter."  According to some
interpretations of how AI systems will be built, this is simply not true
at all.  If AI systems are built with motivation systems that are stable,
then we could predict that they will remain synchronized with the goals
of the human race until the end of history.  This does not mean that we
could predict them in the sense of knowing everything they would say and
do before they do it, but it would mean that we could know what their
goals and values were - and this would be the only important sense of
the word "predict".

ED I thought it was far from clear that one can create motivational
systems that are stable in an intelligent, learning, human- or
superhuman-level AGI in a complex, changing world.  Yes, I think you can
make a Roomba have a stable motivation system, and limited AGIs could
have them as well, but that does not mean you could make a
superhuman-level AGI -- learning and acting with considerable freedom in
our complex world -- have one.

First, it is difficult to define goals in ways that cover all the
situations in which they might have to be applied.  Situations may come
up in which it is not at all clear what should be done to pursue the
goal.  This is particularly true for goals such as "be friendly to
people."


Second, most systems of any complexity will have conflicting goals, or be
subject to situations in which applications of the same goal to different
parts of reality can conflict.  For example, should a human-friendly
AGI kill a terrorist before he detonates a suicide bomb in a crowded
market?  If it could have, should it have shot down one or both of the
airplanes that hit the World Trade Center on 9/11 as they were close to
hitting the buildings?  In an overcrowded world, should it kill half of
humanity to make more room and better lives for the billions of
remaining humans, since those left living would be around to appreciate
the improved living conditions and those that were killed wouldn't know
the difference?

Third, an intelligence of any generality and power has to be able to
modify its goals, such as by creating subgoals and by interpreting how
goals apply in different situations, and these powers conflict with
stability.

I have read your paper on complexity, which you cited, I think, when a
somewhat similar discussion arose before.  I came away with the feeling
that your paper did not even come close to answering the above questions.
I can understand how, through complexity and grounding, a goal could be
defined in ways that would be much more meaningful than if defined in
just words as naked symbols.  I also understand how you could hardwire
certain emotional responses to certain aspects of reality and make
achieving or avoiding such emotional responses top-level goals, or the
top-level goal.  But humans have similar features in their goal
structure, and still occasionally act in ways that contradict instinctual
goals and such emotionally built-in biases.  And it is not clear to what
extent such hardware-encoded biases can deal with all of the issues of
possible conflicts and different possible interpretations that are bound
to arise in a complex world.  The world is likely to be more complex than
any grounding and/or biasing system you are going to put into these
machines to ensure they stay friendly to people.

So that is why it is essential for humans to stay in the loop during the
transhuman transformation, for 

Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 09:01:42AM -0700, Charles D Hixson wrote:
 To me this point seems only partially valid.  1M hand-coded rules seems
 excessive, but there should be some number (100? 1000?) of hand-coded
 rules (not unchangeable!) that it can start from.  An absolute minimum
 would seem to be everything in 'Fun with Dick and Jane' through 'My
 Little White House'.  That's probably not sufficient, but you need to
 at least cover those patterns.  Most (though not all) of the later
 patterns are, or can be, built out of the earlier ones via miscellaneous
 forms of composition and elision.  This gives context within which other
 patterns can be learned.

 Note that this is very much simpler than starting your learning
 from a clean slate.

Yes, exactly. A clean slate is a very hard place to start.  And so,
yes, this is my current philosophy: build enough scaffolding to be
able to pump some yet-to-be-determined more general mechanism.

--linas



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
Linas, BillK

It might currently be hard for association-based human minds to accept,
but things like roses, power-over-others, and being worshiped or loved
are just a waste of time: indirect feeling triggers (assuming a
nearly-unlimited ability to optimize).

Regards,
Jiri Jelinek

On Nov 2, 2007 12:56 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
 On Fri, Nov 02, 2007 at 12:41:16PM -0400, Jiri Jelinek wrote:
  On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
  if you could have anything you wanted, is this the end you
  would wish for yourself, more than anything else?
 
  Yes. But don't forget I would also have AGI continuously looking into
  how to improve my (/our) way of perceiving the pleasure-like stuff.

 This is a bizarre line of reasoning. One way that my AGI might improve
 my perception of pleasure is to make me dumber -- electroshock me --
 so that I find Gilligan's Island reruns incredibly pleasurable. Or,
 I dunno, find that heroin addiction is a great way to live.

 Or help me with fugue states: what is the sound of one hand clapping?
 Feed me Zen koans till my head explodes.

 But it might also decide that I should be smarter, so that I have a more
 acute sense and discernment of pleasure. Make me smarter about roses,
 so that I can enjoy my rose garden in a more refined way. And after I'm
 smarter, perhaps I'll have a whole new idea of what pleasure is,
 and what it takes to make me happy.

 Personally, I'd opt for this last possibility.

 --linas





Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
 But the learning problem isn't changed by it. And if you solve the
 learning problem, you don't need any scaffolding.

But you won't know how to solve the learning problem until you try.

--linas



Re: [agi] popularizing & injecting sense of urgency

2007-11-02 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Example 4:  "Each successive generation gets smarter, faster, and less
 dependent on human cooperation."  Absolutely not true.  If humans take
 advantage of the ability to enhance their own intelligence up to the
 same level as the AGI systems, the amount of dependence between the
 two groups will stay exactly the same, for the simple reason that there
 will not be a sensible distinction between the two groups.

So your answer to my question "do you become the godlike intelligence
that replaces the human race?" is yes?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
 On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
  On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
   But learning problem isn't changed by it. And if you solve the
   learning problem, you don't need any scaffolding.
 
  But you won't know how to solve the learning problem until you try.
 
 Until you try to solve the learning problem, that is. How can
 scaffolding-building help in solving it?

My scaffolding learns. It remembers assertions you make, and it will
parrot them back. It checks to see if the assertions you make fit into
its belief network before it actually commits them to memory.

It can be told things like "aluminum is a mass noun", and then it will
start using "aluminum" instead of "the aluminum" or "an aluminum" in
future sentences.

Sure, I hard-coded the part where mass nouns don't require an article;
that's part of the scaffolding.  But that's temporary. That's because
the thing isn't yet smart enough to understand what the sentence
"mass nouns don't require an article" means.
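
A toy sketch of the behaviour described above -- a hard-coded article rule
plus a learned mass-noun tag (illustrative only; this is not the actual
lillybot code, and every name here is invented):

# The "mass noun" tag is learned from an assertion; the article rule
# itself is hard-coded scaffolding.
mass_nouns = set()

def learn(assertion):
    # very narrow, hard-coded pattern: "<word> is a mass noun"
    words = assertion.lower().split()
    if words[1:] == ["is", "a", "mass", "noun"]:
        mass_nouns.add(words[0])

def with_article(noun):
    # hard-coded rule: mass nouns take no article
    return noun if noun in mass_nouns else "the " + noun

learn("aluminum is a mass noun")
print("I bought %s." % with_article("aluminum"))   # I bought aluminum.
print("I bought %s." % with_article("hammer"))     # I bought the hammer.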

--linas



Re: [agi] NLP + reasoning?

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 12:56:14PM -0700, Matt Mahoney wrote:
 --- Jiri Jelinek [EMAIL PROTECTED] wrote:
  On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
   Natural language is a fundamental part of the knowledge
  base, not something you can add on later.
  
  I disagree. You can start with a KB that contains concepts retrieved
  from a well structured non-NL input format only, get the thinking
  algorithms working and then (possibly much later) let the system to
  focus on NL analysis/understanding or build some
  NL-to-the_structured_format translation tools.
 
 Well, good luck with that.  Are you aware of how many thousands of times this
 approach has been tried?  You are wading into a swamp.  Progress will be rapid
 at first.

Yes, and in the first email I wrote that started this thread, I stated,
more or less: yes, I am aware that many have tried, and that it's a
swamp, and can anyone elucidate why?  And, so far, no one has been able
to answer that question, even as they firmly assert that surely it is a
swamp. Nor has anyone attempted to posit any mechanisms that avoid that
swamp, other than thought bubbles that state things like "starting from
a clean slate, my system will be magic".

--linas



Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Vladimir Nesov
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
 On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
  On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
   On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
But learning problem isn't changed by it. And if you solve the
learning problem, you don't need any scaffolding.
  
   But you won't know how to solve the learning problem until you try.
 
  Until you try to solve the learning problem. How scaffolding-building
  can help in solving it?

 My scaffolding learns. It remembers assertions you make, and it will
 parrot them back. It checks to see if the assertions you make fit into
 its belief network before it actually commits them to memory.

 It can be told things like "aluminum is a mass noun", and then it will
 start using "aluminum" instead of "the aluminum" or "an aluminum" in
 future sentences.

 Sure, I hard-coded the part where mass nouns don't require an article;
 that's part of the scaffolding.  But that's temporary. That's because
 the thing isn't yet smart enough to understand what the sentence
 "mass nouns don't require an article" means.

What I meant is to extract the learning and term the rest 'scaffolding'.
In this case, what the system actually learns is a tagging of terms
('aluminum') with other terms ('is-a-mass-noun'), and this tagging is
provided directly. So it only learns one term-term mapping, which is
coded in explicitly through the textual interface (the scaffolding) when
you enter phrases like "aluminum is a mass noun". It's hardly a
perceptible step in prototyping the learning dynamics.

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Linas Vepstas
On Sat, Nov 03, 2007 at 12:06:48AM +0300, Vladimir Nesov wrote:
 On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
  On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
   On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
 But learning problem isn't changed by it. And if you solve the
 learning problem, you don't need any scaffolding.
   
But you won't know how to solve the learning problem until you try.
  
   Until you try to solve the learning problem. How scaffolding-building
   can help in solving it?
 
  My scaffolding learns. It remembers assertions you make, and it will
  parrot them back. It checks to see if the assertions you make fits into
  its beleif network before it actually commits them to memory.
 
  It can be told things like aluminum is a mass noun, and then will start
  using aluminum instead of the aluminum or an aluminum in future
  sentences.
 
  Sure, I hard-coded the part where mass nouns don't require an article,
  that's part of the scaffolding.  But that's temporary. That's because
  the thing isn't yet smart enough to understand what the sentence
  mass nouns don't require an article means.
 
 What I meant is to extract the learning and term the rest 'scaffolding'.
 In this case, what the system actually learns is a tagging of terms
 ('aluminum') with other terms ('is-a-mass-noun'), and this tagging is
 provided directly. So it only learns one term-term mapping, which is
 coded in explicitly through the textual interface (the scaffolding) when
 you enter phrases like "aluminum is a mass noun". It's hardly a
 perceptible step in prototyping the learning dynamics.

1) I did not claim to be doing fundamental or groundbreaking AI
   research. In fact, I claimed the opposite: that this has been done
   before, and I know that many folks have abandoned this approach.
   I am interested in finding out what the roadblocks were.

2) I recently posed the system the question "what is lincoln?", and it
   turns out that OpenCyc knows about 15 or 20 Lincoln counties
   scattered around the United States. So, instead of having the
   thing rattle off all 20 counties, I want it to deduce what they all
   have in common, and then respond with "lincoln might be one of many
   different counties". I think that this kind of deduction will
   take a few hours to implement: pattern-match to find common ancestors.

So, after asserting "aluminum is a mass noun", it might plausibly deduce
"most minerals are mass nouns" -- one could call this data mining.
This would use the same algorithm as deducing that many of the things
called "lincoln" are counties.

I want to know how far down this path one can go, and how far anyone has
gone. I can see that it might not be a good path, but I don't see any
alternatives at the moment.
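
A minimal sketch of the common-ancestor deduction described in (2)
(illustrative only; the toy is-a hierarchy and function names are
assumptions, not OpenCyc's actual data or API):

# Given several senses of "lincoln", walk a toy is-a hierarchy and report
# the most specific type they all share.
isa = {
    "Lincoln County, Nevada": "county",
    "Lincoln County, Maine":  "county",
    "Lincoln County, Kansas": "county",
    "county": "administrative region",
    "administrative region": "thing",
}

def ancestors(term):
    chain = [term]
    while term in isa:
        term = isa[term]
        chain.append(term)
    return chain

def common_ancestor(terms):
    # intersect the ancestor chains, keeping the order of the first chain
    shared = set(ancestors(terms[0]))
    for t in terms[1:]:
        shared &= set(ancestors(t))
    for a in ancestors(terms[0]):
        if a in shared:
            return a          # most specific shared ancestor
    return None

senses = [t for t in isa if t.startswith("Lincoln")]
print("lincoln might be one of many different instances of '%s'"
      % common_ancestor(senses))   # -> ... instances of 'county'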

--linas



Re: [agi] NLP + reasoning?

2007-11-02 Thread Matt Mahoney
--- Linas Vepstas [EMAIL PROTECTED] wrote:

 On Fri, Nov 02, 2007 at 12:56:14PM -0700, Matt Mahoney wrote:
  --- Jiri Jelinek [EMAIL PROTECTED] wrote:
   On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
   base, not something you can add on later.
   
   I disagree. You can start with a KB that contains concepts retrieved
   from a well structured non-NL input format only, get the thinking
   algorithms working and then (possibly much later) let the system to
   focus on NL analysis/understanding or build some
   NL-to-the_structured_format translation tools.
  
  Well, good luck with that.  Are you aware of how many thousands of times
 this
  approach has been tried?  You are wading into a swamp.  Progress will be
 rapid
  at first.
 
 Yes, and in the first email I wrote that started this thread, I stated,
 more or less: yes, I am aware that many have tried, and that it's a
 swamp, and can anyone elucidate why?  And, so far, no one has been able
 to answer that question, even as they firmly assert that surely it is a
 swamp. Nor has anyone attempted to posit any mechanisms that avoid that
 swamp, other than thought bubbles that state things like "starting from
 a clean slate, my system will be magic".

Actually my research is trying to answer this question.  In 1999 I looked at
language model size vs. compression and found data consistent with Turing's
and Landauer's estimates of 10^9 bits.  This is also about the compressed size
of the Cyc database.  http://cs.fit.edu/~mmahoney/dissertation/

But then I started looking at CPU and memory requirements, which turn out to
be much larger.  Why does the human brain need 10^15 synapses?  When you plot
text compression ratio on the speed-memory surface, it is still very steep,
especially on the memory axis. 
http://cs.fit.edu/~mmahoney/compression/text.html

Unfortunately the data is still far from clear.  The best programs still model
semantics crudely and grammar not at all.  From the data I would guess that an
ungrounded language model could run on a 1000-CPU cluster, plus or minus
a couple of orders of magnitude.  The fact that Google hasn't solved the
problem with a 10^6-CPU cluster does not make me hopeful.
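
Restating the figures above as a back-of-envelope comparison (simple
arithmetic on the numbers cited, nothing more):

\[
\frac{10^{9}\ \text{bits}}{8} = 1.25 \times 10^{8}\ \text{bytes} \approx 125\ \text{MB (compressed language model)},
\qquad
\frac{10^{15}\ \text{synapses}}{10^{9}\ \text{bits}} = 10^{6}\ \text{synapses per bit}.
\]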


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] NLP + reasoning?

2007-11-02 Thread Linas Vepstas
On Sat, Nov 03, 2007 at 12:15:29AM +0300, Vladimir Nesov wrote:
 I personally don't see how this appearance-building is going to help,
 so the question for me is not 'why can't it succeed?', but 'why do it
 at all?'.

Because absolutely no one has proposed anything better?  

--linas



Re: [agi] NLP + reasoning + conversational state?

2007-11-02 Thread Vladimir Nesov
Linas,

I mainly tried to show that you are in fact not moving your system
forward learning-wise by attaching a chatbot facade to it. That "my
scaffolding learns" is an overstatement in this context.

You should probably move in the direction of NARS; it seems
fundamental enough to be near the mark. I repeatedly bump into
constructions which have a sort-of counterpart in NARS, but somehow I
didn't see them that way the first time I read the papers that describe
them, and instead arrived at them by a bumpy, roundabout road. At least
I'm more confident now that these constructions are not just drawn
randomly from the design space.

On 11/3/07, Linas Vepstas [EMAIL PROTECTED] wrote:
 On Sat, Nov 03, 2007 at 12:06:48AM +0300, Vladimir Nesov wrote:
  On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
   On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
 On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
  But learning problem isn't changed by it. And if you solve the
  learning problem, you don't need any scaffolding.

 But you won't know how to solve the learning problem until you try.
   
Until you try to solve the learning problem. How scaffolding-building
can help in solving it?
  
   My scaffolding learns. It remembers assertions you make, and it will
   parrot them back. It checks to see if the assertions you make fits into
   its beleif network before it actually commits them to memory.
  
   It can be told things like aluminum is a mass noun, and then will start
   using aluminum instead of the aluminum or an aluminum in future
   sentences.
  
   Sure, I hard-coded the part where mass nouns don't require an article,
   that's part of the scaffolding.  But that's temporary. That's because
   the thing isn't yet smart enough to understand what the sentence
   mass nouns don't require an article means.
 
  What I meant is to extract learning and term the rest 'scaffolding'.
  In this case, what system actually learns is tagging of terms
  ('aluminum') with other terms ('is-a-mass-noun'), and this tagging is
  provided directly. So it only learns one term-term mapping, which is
  coded in explicitly through textual interface (scaffolding) when you
  enter phrases like aluminum is a mass noun. It's hardly a
  perceptible step in learning dynamics prototyping.

 1) I did not claim to be doing fundamental or groundbreaking AI
    research. In fact, I claimed the opposite: that this has been done
    before, and I know that many folks have abandoned this approach.
    I am interested in finding out what the roadblocks were.

 2) I recently posed the system the question "what is lincoln?", and it
    turns out that OpenCyc knows about 15 or 20 Lincoln counties
    scattered around the United States. So, instead of having the
    thing rattle off all 20 counties, I want it to deduce what they all
    have in common, and then respond with "lincoln might be one of many
    different counties". I think that this kind of deduction will
    take a few hours to implement: pattern-match to find common ancestors.

 So, after asserting "aluminum is a mass noun", it might plausibly deduce
 "most minerals are mass nouns" -- one could call this data mining.
 This would use the same algorithm as deducing that many of the things
 called "lincoln" are counties.

 I want to know how far down this path one can go, and how far anyone has
 gone. I can see that it might not be a good path, but I don't see any
 alternatives at the moment.

 --linas




-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] NLP + reasoning?

2007-11-02 Thread Matt Mahoney
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
 Although it is possible to fully integrate NL into AGI, such an endeavor
 may not be the highest priority at this moment.  It can give the AGI better
 linguistic abilities, such as understanding human-made texts or speeches,
 even poetry, but I think there're higher priorities than this (eg, learning
 how to do math, how to program, etc).

Computers are already pretty good at math.  But I think before they can write
or debug programs, they will need natural language so you can tell them what
to write.  Otherwise, all you have is a compiler.  At least human programmers
learn to speak before they can write code.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
On Nov 2, 2007 2:35 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  Could you please provide one specific example of a human goal which
  isn't feeling-based?

 It depends on what you mean by 'based' and 'goal'. Does any choice
 qualify as a goal? For example, if I choose to write a certain word in
 this e-mail, does the choice to write it form a goal of writing it?
 I can't track the source of this goal; it happens subconsciously.

A choice to take a particular action generates a sub-goal (which might be
deep in the sub-goal chain). If you go up, asking "why?" at each
level, you eventually reach the feeling level where goals (not just
sub-goals) come from. In short, I'm writing these words because
I have reason to believe that the discussion can in some way support
my and/or someone else's AGI R and/or D. I want to support it because I
believe AGI can significantly help us avoid pain and get more
pleasure - which is basically what drives us [by design]. So when we
are 100% done, there will be no pain and extreme pleasure. Of
course I'm simplifying a bit, but what are the key objections?

 Saying just 'Friendly AI' seems to be
 sufficient to specify a goal for human researchers, but not enough to
 actually build one.

Just build an AGI that follows the given rules.

Regards,
Jiri Jelinek
