Re: [agi] Recap/Summary/Thesis Statement

2008-03-11 Thread Mark Waser
As of now, we are aware of no non-human friendlies, so the set of excluded 
beings will in all likelihood be the empty set.


Eliezer's current vision of Friendliness puts AGIs (who are non-human 
friendlies) in the role of excluded beings.  That is why I keep hammering 
this point.


To answer your question, I don't see the "people are evil and will screw 
it all up" scenario as being even remotely likely, for reasons of 
self-interest among others. And I think it very likely that if it turns 
out that including non-human friendlies is the right thing to do, the 
system will do as designed and renormalize accordingly.


People are *currently* screwing it all up in the sense that our society is 
*seriously* sub-optimal and far, FAR less than it could be.  Will we screw 
it up to the point of self-destruction?  It's too early to tell.  The 
Cuban Missile Crisis was an awfully near miss.  Grey Goo would be *really* 
bad (though I think that it is a bit further off than most people on this 
list believe).  It's scary to even consider what I *know* I could do if I 
were a whack-job terrorist with my knowledge.


The only reason why I am as optimistic as I am currently is because I truly 
do believe that Friendliness is an attractor that we are solidly on the 
approach path to and I hope that I can speed the process by pointing that 
fact out.


As for the other option, my question was not about the dangers relating to 
*who is or is not protected*, but rather *whose volition is taken into 
account* in calculating the CEV, since your approach considers only the 
volition of friendly humanity (and non-human friendlies but not 
non-friendly humanity), while Eliezer's includes all of humanity.


Actually, I *will* be showing that basically Friendly behavior *IS* extended 
to everyone except insofar as non-Friendlies insist upon being 
non-Friendly.  I just didn't see a way to successfully introduce that idea 
early *AND* forestall Vladimir's obvious "so why don't I just kill them all" 
argument.  I need to figure out a better way to express that earlier. 





[agi] Re: Your mail to [EMAIL PROTECTED]

2008-03-11 Thread Mark Waser

Ben,

   Can we boot alien off the list?  I'm getting awfully tired of his 
auto-reply emailing me directly *every* time I post.  It is my contention 
that this is UnFriendly behavior (wasting my resources without furthering 
any true goal of his) and should not be accepted.


   Mark

- Original Message - 
From: [EMAIL PROTECTED]

To: [EMAIL PROTECTED]
Sent: Tuesday, March 11, 2008 11:56 AM
Subject: Re: Your mail to [EMAIL PROTECTED]



Thank you for contacting Alienshift.
We will respond to your Mail in due time.

Please feel free to send positive thoughts in return back to the Universe.
[EMAIL PROTECTED]






Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Mark Waser
Ahah!  :-)  Upon reading Kaj's excellent reply, I spotted something that I 
missed before that grated on Richard (and he even referred to it though I 
didn't realize it at the time) . . . .


The Omohundro drives #3 and #4 need to be rephrased from

Drive 3: AIs will want to preserve their utility functions
Drive 4: AIs try to prevent counterfeit utility

to
Drive 3: AIs will want to preserve their goals
Drive 4: AIs will want to prevent fake feedback on the status of their goals

The current phrasing *DOES* seem to strongly suggest a goal-stack type 
architecture since, although I argued that an MES system has an implicit 
utility function that it just doesn't refer to, it makes no sense that it is 
trying to preserve, and prevent counterfeits of, something that it ignores.
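
To make the architectural point concrete, here is a rough sketch (purely 
illustrative -- neither branch is anyone's actual design) of the difference: 
a goal-stack agent explicitly consults a utility function when choosing 
actions, while an MES-style agent only reacts to its diffuse urges and never 
references any global utility, even though an observer could fit one to its 
behavior after the fact.

def goal_stack_agent(state, actions, utility):
    # Explicitly consults its utility function; drives 3 and 4 (preserve
    # the utility function, prevent counterfeit utility) are naturally
    # phrased in terms of this explicit object.
    return max(actions, key=lambda action: utility(state, action))

def mes_agent(state, actions, urges):
    # Reacts to a diffuse set of urges/constraints; no global utility is
    # ever referenced, even though an implicit one could be read off the
    # agent's behavior after the fact.
    strength = {action: sum(urge(state, action) for urge in urges)
                for action in actions}
    return max(strength, key=strength.get)

Telling the first agent to "preserve its utility function" is meaningful; 
telling the second to do so is not, which is why rephrasing the drives in 
terms of goals (and fake feedback about goals) covers both architectures.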


sorry for missing/overlooking this before, Richard  :-)

(And this is why I'm running all this past the mailing list before believing 
that my paper is anywhere close to final  :-)



- Original Message - 
From: Kaj Sotala [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, March 11, 2008 10:07 AM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity 
Outcomes...]




On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Kaj Sotala wrote:
  Alright. But previously, you said that Omohundro's paper, which to me
  seemed to be a general analysis of the behavior of *any* minds with
  (more or less) explicit goals, looked like it was based on a
  'goal-stack' motivation system. (I believe this has also been the
  basis of your critique for e.g. some SIAI articles about
  friendliness.) If built-in goals *can* be constructed into
  motivational system AGIs, then why do you seem to assume that AGIs
  with built-in goals are goal-stack ones?


I seem to have caused lots of confusion earlier on in the discussion, so
 let me backtrack and try to summarize the structure of my argument.

 1)  Conventional AI does not have a concept of a Motivational-Emotional
 System (MES), the way that I use that term, so when I criticised
 Omohundro's paper for referring only to a Goal Stack control system, I
 was really saying no more than that he was assuming that the AI was
 driven by the system that all conventional AIs are supposed to have.
 These two ways of controlling an AI are two radically different designs.

[...]

 So now:  does that clarify the specific question you asked above?


Yes and no. :-) My main question is with part 1 of your argument - you
are saying that Omohundro's paper assumed the AI to have a certain
sort of control system. This is the part which confuses me, since I
didn't see the paper make *any* mention of how the AI should be
built. It only assumes that the AI has some sort of goals, and nothing
more.

I'll list all of the drives Omohundro mentions, and my interpretation
of them and why they only require existing goals. Please correct me
where our interpretations differ. (It is true that it will be possible
to reduce the impact of many of these drives by constructing an
architecture which restricts them, and as such they are not
/unavoidable/ ones - however, it seems reasonable to assume that they
will by default emerge in any AI with goals, unless specifically
counteracted. Also, the more that they are restricted, the less
effective the AI will be.)

Drive 1: AIs will want to self-improve
This one seems fairly straightforward: indeed, for humans
self-improvement seems to be an essential part in achieving pretty
much *any* goal you are not immediately capable of achieving. If you
don't know how to do something needed to achieve your goal, you
practice, and when you practice, you're improving yourself. Likewise,
improving yourself will quickly become a subgoal for *any* major
goals.

Drive 2: AIs will want to be rational
This is basically just a special case of drive #1: rational agents
accomplish their goals better than irrational ones, and attempts at
self-improvement can be outright harmful if you're irrational in the
way that you try to improve yourself. If you're trying to modify
yourself to better achieve your goals, then you need to make clear to
yourself what your goals are. The most effective method for this is to
model your goals as a utility function and then modify yourself to
better carry out the goals thus specified.

Drive 3: AIs will want to preserve their utility functions
Since the utility function constructed was a model of the AI's goals,
this drive is equivalent to saying AIs will want to preserve their
goals (or at least the goals that are judged as the most important
ones). The reasoning for this should be obvious - if a goal is removed
from the AI's motivational system, the AI won't work to achieve the
goal anymore, which is bad from the point of view of an AI that
currently does want the goal to be achieved.

Drive 4: AIs try to prevent counterfeit utility
This is an extension of drive #2: if there are things in the
environment that hijack existing motivation systems 

Re: [agi] Some thoughts of an AGI designer

2008-03-11 Thread Mark Waser

The discussions seem to entirely ignore the role of socialization
in human and animal friendliness. We are a large collection of
autonomous agents that are well-matched in skills and abilities.
If we were unfriendly to one another, we might survive as a species,
but we would not live in cities and possess hi-tech.


You are correct.  The discussions are ignoring the role of socialization.


We also know from the animal kingdom, as well as from the
political/economic sphere, what happens when abilities are
mis-matched. Lions eat gazelles, and business tycoons eat
the working class.  We've evolved political systems to curb
the worst abuses of feudalism and serfdom, but have not yet
achieved nirvana.


Because we do *not* have a common definition of goals and socially 
acceptable behavior.  Political systems have not achieved nirvana because 
they do not agree on what nirvana looks like.  *THAT* is the purpose of this 
entire thread.



As parents, we apply social pressure to our children, to make
them friendly. Even then, some grow up unfriendly, and for them,
we have the police. Unless they achieve positions of power first
(Hitler, Stalin, Mao).


OK.


I don't see how a single AGI could be bound by the social
pressures that we are bound by. There won't be a collection
of roughly-equal AGI's keeping each other in check, not if they
are self-improving. Self-preservation is rational, and so is
paranoia; it's reasonable to assume that AGIs will race to
self-improve merely for the benefit of self-preservation, so
that they've enough power so that others can't hurt them.

Our hope is that AGI will conclude that humans are harmless
and worthy of study and preservation; this is what will make
them friendly to *us*.. until one day we look like mosquitoes
or microbes to them.

--linas







Re: [agi] Some thoughts of an AGI designer

2008-03-11 Thread Mark Waser

Pesky premature e-mail problem . . .


The discussions seem to entirely ignore the role of socialization
in human and animal friendliness. We are a large collection of
autonomous agents that are well-matched in skills and abilities.
If we were unfriendly to one another, we might survive as a species,
but we would not live in cities and possess hi-tech.


You are correct.  The discussions are ignoring the role of socialization.


We also know from the animal kingdom, as well as from the
political/economic sphere, what happens when abilities are
mis-matched. Lions eat gazelles, and business tycoons eat
the working class.  We've evolved political systems to curb
the worst abuses of feudalism and serfdom, but have not yet
achieved nirvana.


Because we do *not* have a common definition of goals and socially
acceptable behavior.  Political systems have not achieved nirvana because
they do not agree on what nirvana looks like.  *THAT* is the purpose of this
entire thread.


As parents, we apply social pressure to our children, to make
them friendly. Even then, some grow up unfriendly, and for them,
we have the police. Unless they achieve positions of power first
(Hitler, Stalin, Mao).


OK.  But I'm actually not attempting to use social pressure (or use it 
solely).  I seem to have gotten somewhat shunted down that track by Vladimir 
since a Friendly society is intelligent enough to use social pressure when 
applicable but it is not the primary (or necessary) thrust of my argument.



I don't see how a single AGI could be bound by the social
pressures that we are bound by. There won't be a collection
of roughly-equal AGI's keeping each other in check, not if they
are self-improving. Self-preservation is rational, and so is
paranoia; it's reasonable to assume that AGIs will race to
self-improve merely for the benefit of self-preservation, so
that they've enough power so that others can't hurt them.


Again, social pressure is not my primary argument.  It just made an easy, 
convenient, correct-but-not-complete argument for Vladimir (and now I'm 
regretting it  :-).



Our hope is that AGI will conclude that humans are harmless
and worthy of study and preservation; this is what will make
them friendly to *us*.. until one day we look like mosquitoes
or microbes to them.


No, our hope is that the AGI will conclude that anything with enough 
intelligence/goal-success is more an asset than a liability and that wiping 
us out without good cause has negative utility. 





Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Mark Waser
 Drive 1: AIs will want to self-improve
 This one seems fairly straightforward: indeed, for humans
 self-improvement seems to be an essential part in achieving pretty
 much *any* goal you are not immediately capable of achieving. If you
 don't know how to do something needed to achieve your goal, you
 practice, and when you practice, you're improving yourself. Likewise,
 improving yourself will quickly become a subgoal for *any* major
 goals.
 
 But now I ask:  what exactly does this mean?

It means that they will want to improve their ability to achieve their goals 
(i.e. in an MES system, optimize their actions/reactions to more closely 
correspond to what is indicated/appropriate for their urges and constraints).

 In the context of a Goal Stack system, this would be represented by a 
 top level goal that was stated in the knowledge representation language 
 of the AGI, so it would say "Improve Thyself".

One of the shortcomings of your current specification of the MES system is that 
it does not, at the simplest levels, provide a mechanism for globally 
optimizing (increasing the efficiency of) the system.  This makes it safer 
because such a mechanism *would* conceivably be a single point of failure for 
Friendliness, but evolution will favor the addition of any such system -- as 
would any humans who would like a system to improve itself.  I don't currently 
see how an MES system could be a seed AGI unless such a mechanism is added.  

 My point here is that a Goal Stack system would *interpret* this goal in 
 any one of an infinite number of ways, because the goal was represented 
 as an explicit statement.  The fact that it was represented explicitly 
 meant that an extremely vague concept (Improve Thyself) had to be 
 encoded in such a way as to leave it open to ambiguity.  As a result, 
 what the AGI actually does as a result of this goal, which is embedded 
 in a Goal Stack architecture, is completely indeterminate.

Oh.  I disagree *entirely*.  It is only indeterminate because you gave it an 
indeterminate goal with *no* evaluation criteria.  Now, I *assume* that you 
ACTUALLY mean "Improve Thyself So That You Are More Capable Of Achieving An 
Arbitrary Set Of Goals To Be Specified Later" and I would argue that the most 
effective way for the system to do so is to increase its intelligence (the 
single-player version of goal-achieving ability) and friendliness (the 
multi-player version of intelligence).
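
As a purely illustrative sketch of what I mean by evaluation criteria (the 
numbers and the random candidate-modification generator are invented, and 
real self-modification is obviously nothing this trivial), scoring the long 
version of "Improve Thyself" against a sampled distribution of arbitrary 
future goals makes the behavior determinate rather than indeterminate:

import random

def expected_capability(skill, goal_difficulties):
    # Fraction of the sampled "arbitrary set of goals to be specified
    # later" that the system could currently achieve.
    return sum(skill >= d for d in goal_difficulties) / len(goal_difficulties)

def improve_thyself(skill, steps=100):
    future_goals = [random.uniform(0, 10) for _ in range(1000)]
    for _ in range(steps):
        candidate = skill + random.uniform(-0.5, 1.0)  # candidate self-modification
        # Accept only modifications that do not reduce expected
        # goal-achieving ability -- that is the evaluation criterion.
        if expected_capability(candidate, future_goals) >= \
           expected_capability(skill, future_goals):
            skill = candidate
    return skill

print(improve_thyself(1.0))   # capability ratchets upward, not "indeterminately"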

 Stepping back from the detail, we can notice that *any* vaguely worded 
 goal is going to have the same problem in a GS architecture.  

But I've given a more explicitly worded goal that *should* (I believe) drive a 
system to intelligence.  The long version of "Improve Thyself" is the necessary 
motivating force for a seed AI.  Do you have a way to add it to an MES system?  
If you can't, then I would have to argue that an MES system will never achieve 
intelligence (though I'm very hopeful that either we can add it to the MES *or* 
there is some form of hybrid system that has the advantages of both and the 
disadvantages of neither).

 So long as the goals that are fed into a GS architecture are very, very 
 local and specific (like Put the red pyramid on top of the green 
 block) I can believe that the GS drive system does actually work (kind 
 of).  But no one has ever built an AGI that way.  Never.  Everyone 
 assumes that a GS will scale up to a vague goal like Improve Thyself, 
 and yet no one has tried this in practice.  Not on a system that is 
 supposed to be capable of a broad-based, autonomous, *general* intelligence.

Well, actually I'm claiming that *any* optimizing system with the long version 
of "Improve Thyself" that is sufficiently capable is a seed AI.  The problem 
is that "sufficiently capable" seems to be a relatively high bar -- 
particularly when we, as humans, don't even know which way is up.  My 
Friendliness theory is (at least) an attempt to identify "up".

 So when you paraphrase Omohundro as saying that AIs will want to 
 self-improve, the meaning of that statement is impossible to judge.

As evidenced by my last several e-mails, the best paraphrase of Omohundro is 
"Goal-achievement optimizing AIs will want to self-improve so that they are 
more capable of achieving goals", which is basically a definition or a tautology.

 The reason that I say Omohundro is assuming a Goal Stack system is that 
 I believe he would argue that that is what he meant, and that he assumed 
 that a GS architecture would allow the AI to exhibit behavior that 
 corresponds to what we, as humans, recognize as wanting to self-improve. 
  I think it is a hidden assumption in what he wrote.

Optimizing *is* a hidden assumption in what he wrote which you caused me to 
catch later and add to my base assumption.  I don't believe that optimizing 
necessarily assumes a Goal Stack system but it *DOES* assume a self-reflecting 
system which the MES system does not appear to be (yet) at the lowest levels.  
In order 

Re: [agi] Recap/Summary/Thesis Statement

2008-03-11 Thread Mark Waser
This is not the risk that concerns me.  The real risk is that a single, fully 
cooperating system has no evolutionary drive for self improvement.


So we provide an artificial evolutionary drive for the components of society 
via a simple economy . . . . as has been suggested numerous times by Baum 
and others.
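
To sketch what I mean (a toy illustration in the spirit of Baum's Hayek 
work, *not* his actual system; all of the numbers and the "skill" stand-in 
for real problem-solving are invented): components bid for the right to 
act, the winner pays its bid and is paid back only if it actually succeeds, 
so useful components accumulate wealth and useless ones go broke -- an 
artificial evolutionary drive inside a single cooperating system.

import random

components = [{"wealth": 10.0, "skill": random.random()} for _ in range(5)]

def run_auction(task_difficulty):
    # Components bid up to what they can afford; the winner pays to act
    # and is rewarded only if it actually solves the task.
    bids = [min(c["wealth"], 5.0 * c["skill"]) for c in components]
    winner = components[bids.index(max(bids))]
    winner["wealth"] -= max(bids)
    if winner["skill"] >= task_difficulty:
        winner["wealth"] += 2.0 * max(bids)

for _ in range(200):
    run_auction(random.random())
    # Cull bankrupt components and replace them with noisy copies of the
    # richest survivor -- the evolutionary part of the drive.
    survivors = [c for c in components if c["wealth"] > 0.0]
    richest = max(survivors, key=lambda c: c["wealth"])
    while len(survivors) < 5:
        survivors.append({"wealth": 10.0,
                          "skill": min(1.0, max(0.0, richest["skill"]
                                                + random.gauss(0, 0.1)))})
    components = survivors

print(max(c["skill"] for c in components))   # the surviving components' best skill

The point is only that selection pressure can be supplied internally, by the 
economy, without needing an external competitor.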


Really Matt, all your problems seem to be due to a serious lack of 
imagination rather than pointing out actual contradictions or flaws. 





Re: [agi] Recap/Summary/Thesis Statement

2008-03-10 Thread Mark Waser
It *might* get stuck in bad territory, but can you make an argument why 
there is a *significant* chance of that happening?


Not off the top of my head.  I'm just playing it better safe than sorry 
since, as far as I can tell, there *may* be a significant chance of it 
happening.


Also, I'm not concerned about it getting *stuck* in bad territory, I am more 
concerned about just transiting bad territory and destroying humanity on the 
way through.


One thing that I think most of us will agree on is that if things did work as 
Eliezer intended, things certainly could go very wrong if it turns out 
that the vast majority of people -- "when smarter, more the people they 
wish they could be, as if they grew up more together ..." -- are extremely 
unfriendly in approximately the same way (so that their extrapolated 
volition is coherent and may be acted upon). Our meanderings through state 
space would then head into very undesirable territory. (This is the 
"people turn out to be evil and screw it all up" scenario.) Your approach 
suffers from a similar weakness though, since it would suffer under the 
"seemingly friendly people turn out to be evil and screw it all up before 
there are non-human intelligent friendlies to save us" scenario.


But my approach has the advantage that it proves that Friendliness is in 
those evil people's self-interest so *maybe* we can convert them before they 
do us in.


I'm not claiming that my approach is perfect or fool-proof.  I'm just 
claiming that it's better than anything else thus far proposed.


Which, if either, of 'including all of humanity' rather than just 
'friendly humanity', or 'excluding non-human friendlies (initially)' do 
you see as the greater risk?


I see 'excluding non-human friendlies (initially)' as a tremendously greater 
risk.  I think that the proportionality aspect of Friendliness will keep the 
non-Friendly portion of humanity safe as we move towards Friendliness.


Actually, let me rephrase your question and turn it around -- Which, if 
either, of 'not protecting all of humanity from Friendlies rather than just 
friendly humanity' or 'being actively unfriendly' do you see as a greater 
risk?


Or is there some other aspect of Eliezer's approach that especially 
concerns you and motivates your alternative approach?


The lack of self-reinforcing stability under errors and/or outside forces is 
also especially concerning and was my initial motivation for my vision.



Thanks for continuing to answer my barrage of questions.


No.  Thank you for the continued intelligent feedback.  I'm disappointed by 
all the people who aren't interested in participating until they can get a 
link to the final paper without any effort.  This is still very much a work 
in progress with respect to the best way to present it and the only way I 
can improve it is with decent feedback -- which is therefore *much* 
appreciated.







Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Mark Waser
I am in sympathy with some aspects of Mark's position, but I also see a 
serious problem running through the whole debate:  everyone is making 
statements based on unstated assumptions about the motivations of AGI 
systems.


Bummer.  I thought that I had been clearer about my assumptions.  Let me try 
to concisely point them out again and see if you can show me where I have 
additional assumptions that I'm not aware that I'm making (which I would 
appreciate very much).


Assumption - The AGI will be a goal-seeking entity.

And I think that is it.  :-)

EVERYTHING depends on what assumptions you make, and yet each voice in 
this debate is talking as if their own assumption can be taken for 
granted.


I agree with you and am really trying to avoid this.  I will address your 
specific examples below and would appreciate any others that you can point 
out.



The three most common of these assumptions are:
  1) That it will have the same motivations as humans, but with a tendency 
toward the worst that we show.


I don't believe that I'm doing this.  I believe that all goal-seeking 
generally tends to be optimized by certain behaviors (the Omohundro drives). 
I believe that humans show many of these behaviors because these behaviors 
are relatively optimal in relation to the alternatives (and because humans 
are relatively optimal).  But I also believe that the AGI will have 
dramatically different motivations from humans where the human motivations 
were evolved stepping stones that were on the necessary and optimal path for 
one environment but haven't been eliminated now that they are unnecessary 
and sub-optimal in the current environment/society (Richard's "the worst 
that we show").


  2) That it will have some kind of Gotta Optimize My Utility Function 
motivation.


I agree with the statement but I believe that it is a logical follow-on to 
my assumption that the AGI is a goal-seeking entity (i.e. it's an Omohundro 
drive).  Would you agree, Richard?


  3) That it will have an intrinsic urge to increase the power of its own 
computational machinery.


Again, I agree with the statement but I believe that it is a logical 
follow-on to my single initial assumption (i.e. it's another Omohundro 
drive).  Wouldn't you agree?



There are other assumptions, but these seem to be the big three.


And I would love to go through all of them, actually (or debate one of my 
answers above).


So what I hear is a series of statements [snip] (Except, of course, that 
nobody is actually coming right out and saying what "color" of AGI they 
assume.)


I thought that I pretty explicitly was . . . . :-(

In the past I have argued strenuously that (a) you cannot divorce a 
discussion of friendliness from a discussion of what design of AGI you are 
talking about,


And I have reached the conclusion that you are somewhat incorrect.  I 
believe that goal-seeking entities OF ANY DESIGN of sufficient intelligence 
(goal-achieving ability) will see an attractor in my particular vision of 
Friendliness (which I'm deriving by *assuming* the attractor and working 
backwards from there -- which I guess you could call a second assumption if 
you *really* had to  ;-).



and (b) some assumptions about AGI motivation are extremely incoherent.


If you perceive me as incoherent, please point out where.  My primary AGI 
motivation is self-interest (defined as achievement of *MY* goals -- which 
directly derives from my assumption that the AGI will be a goal-seeking 
entity).  All other motivations are clearly logically derived from that 
primary motivation.  If you see an example where this doesn't appear to be 
the case, *please* flag it for me (since I need to fix it  :-).


And yet in spite of all the efforts that I have made, there seems to be no 
acknowledgement of the importance of these two points.


I think that I've acknowledged both in the past and will continue to do so 
(despite the fact that I am now somewhat debating the first point -- more 
the letter than the spirit  :-). 





Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Mark Waser

For instance,  a Novamente-based AGI will have an explicit utility
function, but only a percentage of the system's activity will be directly
oriented toward fulfilling this utility function

Some of the system's activity will be spontaneous ... i.e. only
implicitly goal-oriented .. and as such may involve some imitation
of human motivation, and plenty of radically non-human stuff...


Which, as Eliezer has pointed out, sounds dangerous as all hell unless you 
have some reason to assume that it wouldn't be (like being sure that the AGI 
sees and believes that Friendliness is in its own self-interest). 





Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Mark Waser
First off -- yours was a really helpful post.  Thank you!

I think that I need to add a word to my initial assumption . . . .
Assumption - The AGI will be an optimizing goal-seeking entity.

 There are two main things.
 One is that the statement The AGI will be a goal-seeking entity has 
 many different interpretations, and I am arguing that these different 
 interpretations have a massive impact on what kind of behavior you can 
 expect to see.

I disagree that it has many interpretations.  I am willing to agree that my 
original assumption phrase didn't sufficiently circumscribe the available space 
of entities to justify some of my further reasoning (most particularly because 
Omohundro drives *ASSUME* an optimizing entity -- my bad for not picking that 
up before  :-).

 The 
 MES system, on the other hand, can be set up to have values such as ours 
 and to feel empathy with human beings, and once set up that way you 
 would have to re-grow the system before you could get it to have some 
 other set of values.

As a system that (arguably) finds itself less able to massively (and possibly 
dangerously) optimize itself, the MES system is indeed less subject to my 
reasoning to the extent that it is not able to optimize itself (or, to the 
extent that it is constrained in optimizing itself).  On the other hand, to the 
extent that the MES system *IS* able to optimize itself, I would contend that 
my Omohundro-drive-based reasoning is valid and correct.

 Clearly, these two interpretations of The AGI will be a goal-seeking 
 entity have such different properties that, unless there is detailed 
 clarification of what the meaning is, we cannot continue to discuss what 
 they would do.

Hopefully my statement just above will convince you that we can continue since 
we really aren't arguing different properties -- merely the degree to which a 
system can self-optimize.  That should not prevent a useful discussion.

 My second point is that some possible choices of the meaning of The AGI 
 will be a goal-seeking entity will actually not cash out into a 
 coherent machine design, so we would be wasting our time if we 
 considered how that kind of AGI would behave.

I disagree.  Even if 50% of the possible choices can't be implemented, I still 
believe that we should investigate the class as a whole.  It has 
interesting characteristics that lead me to believe that the remaining 50% of 
implementable choices may hit the jackpot.

 In particular, there are severe doubts about whether the Goal-Stack type 
 of system can ever make it up to the level of a full intelligence.  

Ah.  But this is an intelligence argument rather than a Friendliness argument 
and doubly irrelevant because I am not proposing nor assuming a goal-stack.  
I prefer your system of a large, diffuse set of (often but not always simple) 
goals and constraints and don't believe it to be at all contrary to what I am 
envisioning.  I particularly like it because *I BELIEVE* that such an approach 
is much more likely to produce a safe, orderly/smooth transition into my 
Friendliness attractor than a relatively easily breakable Goal-Stack system.

 I'll go one further on that:  I think that one of the main reasons we have 
 trouble getting AI systems to be AGI is precisely because we have not 
 yet realised that they need to be driven by something more than a Goal 
 Stack.  It is not the only reason, but it's a big one.

I agree with you (but it's still not relevant to my argument  :-).

 So the message is:  we need to know exactly details of the AGI's 
 motivation system (The AGI will be a goal-seeking entity is not 
 specific enough), and we need to then be sure that the details we give 
 are going to lead to a type of AGI that can actually be an AGI.

No, we don't need to know the details.  I'm contending that my vision/theory 
applies regardless of the details.  If you don't believe so, please supply 
contrary details and I'll do whatever is necessary to handle them.  :-)

 These questions, I think, are the real battleground.

We'll see . . . . :-)

 BTW, this is not a direct attack on what you were saying, 

Actually, I prefer a direct attack  :-).  I should have declared Crocker's 
rules with the "Waste of my time" exception (i.e. I reserve the right to be 
rude to anyone who both is rude *and* wastes my time  :-).

 My problem is that so much of the current discussion is tangled up with 
 hidden assumptions that I think that the interesting part of your 
 message is getting lost.

So let's drag those puppies into the light!  This is not an easy message.  It 
touches on (and, I believe, revises) one helluva lot.  That's why I laugh when 
someone just wants a link to the completed paper.  Trust me -- the wording on 
the completed paper changes virtually every time there is an e-mail on the 
subject.  And I *don't* want people skipping ahead to the punch line if I'm not 
explaining it well enough at the beginning -- 

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser

Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think that I'm
asserting that virtual environment can be *exactly* as capable as
physical environment?


No, I think that you're asserting that the virtual environment is close 
enough to being as capable as the physical environment, without spending 
significant resources, that the difference doesn't matter.  And I'm having 
problems with the "without spending significant resources" part, not the 
"the difference doesn't matter" part.



All interesting stuff is going to be computational anyway.


So, since the physical world can perform interesting computation 
automatically without any resources, why are you throwing the computational 
aspect of the physical world away?



In most cases, computation should be
implementable on universal substrate without too much overhead


How do we get from here to there?  Without a provable path, it's all just 
magical hand-waving to me.  (I like it, but it's ultimately an unsatisfying 
illusion.)





Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser

My second point that you omitted from this response doesn't need there
to be universal substrate, which is what I mean. Ditto for
significant resources.


I didn't omit your second point, I covered it as part of the difference 
between our views.


You believe that certain tasks/options are relatively easy that I believe to 
be infeasible without more resources than you can possibly imagine.


I can't prove a negative but if you were more familiar with Information 
Theory, you might get a better handle on why your approach is ludicrously 
expensive. 





Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
Part 5.  "The nature of evil" or "The good, the bad, and the evil"

Since we've got the (slightly revised :-) goal of a Friendly individual and the 
Friendly society -- "Don't act contrary to anyone's goals unless absolutely 
necessary" -- we now can evaluate actions as good or bad in relation to that 
goal.  *Anything* that doesn't act contrary to anyone's goals is GOOD.  
Anything that acts contrary to anyone's goals is BAD to the extent that it is 
not absolutely necessary.  EVIL is the special case where an entity *knowingly 
and intentionally* acts contrary to someone's goals when it isn't absolutely 
necessary for one of the individual's own primary goals.  This is the 
*intentional* direct opposite of the goal of Friendliness, and it is in the 
Friendly society's best interest to make this as unappealing as possible.  
*Any* sufficiently effective Friendly society will *ENSURE* that the expected 
utility of EVIL is negative by raising the consequences of (sanctions for) EVIL 
to a level where it is clearly apparent that EVIL is not in an entity's 
self-interest.  The reason why humans are frequently told "Evil doesn't mean 
stupid" is that many of us sense at a very deep level that, in a 
sufficiently efficient ethical/Friendly society, EVIL *is* stupid (in that it 
is not in an entity's self-interest).  It's just a shame that our society is 
not sufficiently efficiently ethical/Friendly -- YET!
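
A back-of-the-envelope illustration with invented numbers (the real 
probabilities are unknowable in advance) of what "ensuring that the expected 
utility of EVIL is negative" means:

gain_from_evil = 100.0   # benefit the defector expects from the unFriendly act
p_caught       = 0.25    # probability that the society detects and sanctions it
sanction       = 500.0   # cost imposed when caught

expected_utility = (1 - p_caught) * gain_from_evil - p_caught * sanction
print(expected_utility)                             # -50.0 -- EVIL is stupid

# The break-even sanction is gain * (1 - p) / p; anything above it makes
# EVIL a losing proposition for a purely self-interested entity.
print(gain_from_evil * (1 - p_caught) / p_caught)   # 300.0

The weaker the society's detection, the harsher the sanction has to be for 
the inequality to hold.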

Vladimir's "crush-them-all" is *very* bad.  It is promoting that society's goal 
of safety (which is a valid, worthwhile goal) but it is refusing to recognize 
that it is *NOT* always necessary and that there are other, better ways to 
achieve that goal (not to mention the fact that the aggressor society would 
probably even benefit more by not destroying the lesser societies).  My 
impression is that Vladimir is knowingly and intentionally acting contrary to 
someone else's goals when it isn't absolutely necessary because it is simply 
more convenient for him (because it certainly isn't safer, since it invites 
sanctions like those following).  This is EVIL.  If I'm a large enough, 
effective enough Friendly society, Vladimir's best approach is going to be to 
immediately and willingly convert to Friendliness and voluntarily undertake 
reparations that are rigorous enough that their negative utility is just 
greater than the greater of a) the expected utility of any destroyed 
civilizations or b) the utility that his society derived by destroying them.  
If Vladimir doesn't immediately convert and undertake reparations, the cost 
and effort of making him do so will be added to the reparations.  These 
reparations should be designed to assist every other Friendly *without* 
harming Vladimir's society EXCEPT for the cost and effort that are diverted 
from Vladimir's goals.

Now, there is one escape hatch that immediately springs to the mind of the 
UnFriendly that I am now explicitly closing . . . . Generic sub-goals are *not* 
absolutely necessary.  A Friendly entity does not act contrary to someone's 
goals simply because it is convenient, because it gives them more power, or 
because it feels good.  In fact, it should be noted that allowing generic 
subgoals to override others' goals is probably the root of all evil.  (If you 
thought that it was money, you're partially correct.  "Money is Power" is a 
generic sub-goal.)
Pleasure is a particularly pernicious sub-goal.  Pleasure is evolutionarily 
adaptive: you feel good when you do something that is pro-survival.  It is 
most frequently an indicator that you are doing something that is pro-survival 
-- but as such, seeking pleasure is merely a subgoal to the primary goal of 
survival.  There's also a particular problem in that pleasure evolutionarily 
lags behind current circumstances, and many things that are pleasurable because 
they were pro-survival in the past are now contrary to survival or most other 
goals (particularly when practiced to excess) in the present.  Wire-heading is a 
particularly obvious example of this.  Every other goal of the addicted 
wire-head is thrown away in search of a sub-goal that leads to no goal -- not 
even survival.

I do want to be clear that there is nothing inherently wrong in seeking 
pleasure (as the Puritans would have it).  Pleasure can rest, relax, and 
de-stress you so that you can achieve other goals even if it has no other 
purpose.  The problem is when the search for pleasure overrides your own goals 
(addiction) or those of others (evil unless provably addiction).

TAKE-AWAYs:
  a. EVIL is knowingly and intentionally acting contrary to someone's goals
     when it isn't necessary (most frequently in the name of some generic
     sub-goal like pleasure, power, or convenience).
  b. The sufficiently efficient ethical/Friendly society WILL ensure that the
     expected utility of EVIL is negative (i.e. not in an entity's
     self-interest and, therefore, stupid).
Part 6 will move 

Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Mark Waser
I think here we need to consider A. Maslow's hierarchy of needs.  That an 
AGI won't have the same needs as a human is, I suppose, obvious, but I 
think it's still true that it will have a hierarchy  (which isn't 
strictly a hierarchy).  I.e., it will have a large set of motives, and 
which it is seeking to satisfy at any moment will alter as the 
satisfaction of the previous most urgent motive changes.


I agree with all of this.

If it were a human we could say that breathing was the most urgent 
need...but usually it's so well satisfied that we don't even think about 
it.  Motives, then, will have satisficing  as their aim.  Only aberrant 
mental functions will attempt to increase the satisfying of some 
particular goal without limit.  (Note that some drives in humans seem to 
occasionally go into that "satisfy increasingly without limit" mode, like 
quest for wealth or power, but in most sane people these are reined in. 
This seems to indicate that there is a real danger here...and also that it 
can be avoided.)


I agree with this except that I believe that humans *frequently* aim to 
optimize rather than satisfice (frequently to their detriment -- in terms of 
happiness as well as in the real costs of performing the search past a simple 
satisfaction point).
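
A toy illustration of the distinction (all of the numbers are made up): past 
the satisfaction threshold, additional search has a real cost, so the 
satisficer frequently nets more than the optimizer.

import random

options = [random.uniform(0, 10) for _ in range(1000)]
search_cost = 0.01   # cost of examining one option
threshold = 8.0      # "good enough"

def satisfice():
    # Stop at the first option that is good enough.
    for examined, value in enumerate(options, start=1):
        if value >= threshold:
            return value - examined * search_cost
    return max(options) - len(options) * search_cost

def optimize():
    # Examine everything to find the very best option.
    return max(options) - len(options) * search_cost

print(satisfice(), optimize())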


Also, quest for pleasure (a.k.a. addiction) is also distressingly frequent 
in humans.


Do you think that any of this contradicts what I've written thus far?  I 
don't immediately see any contradictions. 





Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread Mark Waser
I've just carefully reread Eliezer's CEV 
http://www.singinst.org/upload/CEV.html, and I believe your basic idea 
is realizable in Eliezer's envisioned system.


The CEV of humanity is only the initial dynamic, and is *intended* to be 
replaced with something better.


I completely agree with these statements.  It is Eliezer's current initial 
trajectory that I strongly disagree with (believe to be seriously 
sub-optimal) since it is in the OPPOSITE direction of where I see 
Friendliness.


Actually, on second thought, I disagree with your statement that "The CEV is 
only the initial dynamic."  I believe that it is the final dynamic as well. 
A better phrasing that makes my point is that Eliezer's view of the CEV of 
humanity is only the initial dynamic and is intended to be replaced with 
something better.  My claim is that my view is something better/closer to 
the true CEV of humanity.





Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
  Sure!  Friendliness is a state which promotes an entity's own goals;
  therefore, any entity will generally voluntarily attempt to return to that
  (Friendly) state since it is in its own self-interest to do so.
 
 In my example it's also explicitly in the dominant structure's
 self-interest to crush all opposition. You used the word "friendliness"
 in place of "attractor".

While it is explicitly in the dominant structure's self-interest to crush all 
opposition, I don't believe that doing so is OPTIMAL except in a *vanishingly* 
small minority of cases.  I believe that such thinking is an error of taking 
the most obvious and provably successful/satisfiable (but sub-optimal) action 
FOR A SINGLE GOAL over a less obvious but more optimal action for multiple 
goals.  Yes, crushing the opposition works -- but it is *NOT* optimal for the 
dominant structure's long-term self-interest (and the intelligent/wise dominant 
structure is clearly going to want to OPTIMIZE its self-interest).

Huh?  I only used the word Friendliness as the first part of the definition, as 
in "Friendliness is . . . ."   I don't understand your objection.

  Because it may not *want* to.  If an entity with Eliezer's view of
  Friendliness has its goals altered either by error or an exterior force, it
  is not going to *want* to return to the Eliezer-Friendliness goals since
  they are not in the entity's own self-interest.

 It doesn't explain the behavior, it just reformulates your statement.
 You used the word "want" in place of "attractor".

OK.  I'll continue to play . . . .  :-)

Replace *want* to with *in its self-interest to do so* and not going to 
*want* to with *going to see that it is not in its self-interest* to yield:
  Because it is not *in its self-interest to do so*.  If an entity with 
  Eliezer's view of Friendliness has its goals altered either by error or an 
  exterior force, it is *going to see that it is not in its self-interest* to 
  return to the Eliezer-Friendliness goals since they are not in the entity's 
  own self-interest.
Does that satisfy your objections?



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser

My impression was that your friendliness-thing was about the strategy
of avoiding being crushed by next big thing that takes over.


My friendliness-thing is that I believe that a sufficiently intelligent 
self-interested being who has discovered the f-thing or had the f-thing 
explained to it will not crush me because it will see/believe that doing so 
is *almost certainly* not in its own self-interest.


My strategy is to define the f-thing well enough that I can explain it to 
the next big thing so that it doesn't crush me.



When I'm
in a position to prevent that from ever happening, why is the
friendliness-thing still relevant?


Because you're *NEVER* going to be sure that you're in a position where you 
can prevent that from ever happening.



For now, I see crush-them-all as a pretty good solution.


Read Part 4 of my stuff (just posted).  Crush-them-all is a seriously 
sub-optimal solution even if it does clearly satisfy one goal since it 
easily can CAUSE your butt to get kicked later.





Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread Mark Waser
Why do you believe it likely that Eliezer's CEV of humanity would not 
recognize your approach is better and replace CEV1 with your improved 
CEV2, if it is actually better?


If it immediately found my approach, I would like to think that it would do 
so (recognize that it is better and replace Eliezer's CEV with mine).


Unfortunately, if it doesn't immediately find/evaluate my approach, it might 
traverse some *really* bad territory while searching (with the main problem 
being that I perceive the proportionality attractor as being on the uphill 
side of the revenge attractor and Eliezer's initial CEV as being downhill 
of all that). 





Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
OK.  Sorry for the gap/delay between parts.  I've been doing a substantial 
rewrite of this section . . . .

Part 4.

Despite all of the debate about how to *cause* Friendly behavior, there's 
actually very little debate about what Friendly behavior looks like.  Human 
beings actually have had the concept of Friendly behavior for quite some time.  
It's called ethics.

We've also been grappling with the problem of how to *cause* Friendly/ethical 
behavior for an equally long time under the guise of making humans act 
ethically . . . .

One of the really cool things that I enjoy about the Attractor Theory of 
Friendliness is that it has *a lot* of explanatory power for human behavior 
(see the next Interlude) as well as providing a path for moving humanity to 
Friendliness (and we all do want all *other* humans, except for ourselves, to 
be Friendly -- don't we?  :-)

My personal problem with, say, Jef Albright's treatises on ethics is that he 
explicitly dismisses self-interest.  I believe that his view of ethical 
behavior is generally more correct than that of the vast majority of people -- 
but his justification for ethical behavior is merely because such behavior is 
ethical or right.  I don't find that tremendously compelling.

Now -- my personal self-interest . . . . THAT I can get behind.  Which is the 
beauty of the Attractor Theory of Friendliness.  If Friendliness is in my own 
self-interest, then I'm darn well going to get Friendly and stay that way.  So, 
the constant question for humans is "Is ethical behavior on my part in the 
current circumstances in *my* best interest?"  So let's investigate that 
question some . . . . 

It is to the advantage of Society (i.e. the collection of everyone else) to 
*make* me be Friendly/ethical, and Society is pretty darn effective at it -- to 
the extent that there are only two cases/circumstances where 
unethical/UnFriendly behavior is still in my best interest:
  a. where society doesn't catch me being unethical/unFriendly, OR
  b. where society's sanctions don't/can't successfully outweigh my
     self-interest in a particular action.
Note that Vladimir's "crush all opposition" falls under the second case since 
there are effectively no sanctions when society is destroyed.

But why is Society (or any society) the way that it is, and how did/does it come 
up with the particular ethics that it did/does?  Let's define a society as a 
set of people with common goals that we will call that society's goals.  And 
let's start out with a society with a trial goal of "Promote John's goals".  
Now, John could certainly get behind that, but everyone else would probably drop 
out as soon as they realized that they were required to grant John's every whim 
-- even at the expense of their deepest desires -- and the society would 
rapidly end up with exactly one person -- John.  The societal goal of "Don't 
get in the way of John's goals" is somewhat easier for other people and might 
not drive *everyone* away -- but I'm sure that any intelligent person would 
still defect towards a society that most accurately represented *their* goals.  
Eventually, you're going to get down to "Don't mess with anyone's goals", be 
forced to add the clause "unless absolutely necessary", and then have to fight 
over what "absolutely necessary" means.  But what we've got here is what I 
would call the goal of a Friendly society -- "Don't mess with anyone's goals 
unless absolutely necessary" -- and I would call this a huge amount of progress.

If we (as individuals) could recruit everybody *ELSE* to this society (without 
joining ourselves), the world would clearly be a much, much better place for 
us.  It is obviously in our enlightened self-interest to do this.  *BUT* (and 
this is a huge one), the obvious behavior of this society would be to convert 
anybody that it can and kick the ass of anyone not in the society (but only to 
the extent to which they mess with the goals of the society since doing more 
would violate the society's own goal of not messing with anyone's goals).

So, the question is -- Is joining such a society in our self-interest?

To the members of any society, our not joining clearly is a result of our 
believing that our goals are more important than that society's goals.  In the 
case of the Friendly society, it is a clear signal of hostility since they are 
willing to not interfere with our goals as long as we don't interfere with 
theirs -- and we are not willing to sign up to that (i.e. we're clearly 
signaling our intention to mess with them).  The success of the optimistic 
tit-for-tat algorithm shows that the best strategy for deterrence of an 
undesired behavior is a response directly proportional to the undesired 
behavior.  Thus, any entity who knows about Friendliness and does not become 
Friendly should *expect* that the next Friendly entity to come along that is 
bigger than it *WILL* kick its ass in direct proportion to its unFriendliness 
to maintain the effectiveness of the deterrent.
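
For anyone who wants to see the intuition run, here is a minimal sketch of 
the iterated prisoner's dilemma with a generous ("optimistic") tit-for-tat 
player -- standard textbook payoffs, everything else purely illustrative:

import random

T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

def payoff(mine, theirs):
    if mine == "C":
        return R if theirs == "C" else S
    return T if theirs == "C" else P

def generous_tit_for_tat(their_history, forgiveness=0.1):
    # Cooperate first; afterwards mirror the opponent's last move, but
    # occasionally forgive a defection -- deterrence in proportion to the
    # undesired behavior, without spiralling into permanent revenge.
    if not their_history:
        return "C"
    if their_history[-1] == "D" and random.random() > forgiveness:
        return "D"
    return "C"

def always_defect(their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(generous_tit_for_tat, generous_tit_for_tat))  # mutual cooperation: (600, 600)
print(play(generous_tit_for_tat, always_defect))         # defecting pays far less than cooperating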

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser

1) If I physically destroy every other intelligent thing, what is
going to threaten me?


Given the size of the universe, how can you possibly destroy every other 
intelligent thing (and be sure that no others ever successfully arise 
without you crushing them too)?


Plus, it seems like an awfully lonely universe.  I don't want to live there 
even if I could somehow do it.



2) Given 1), if something does come along, what is going to be a
standard of friendliness? Can I just say "I'm friendly. Honest." and
be done with it, avoiding annihilation? History is rewritten by
victors.


These are good points.  The point to my thesis is exactly what the standard 
of Friendliness is.  It's just taking me a while to get there because 
there's *A LOT* of groundwork first (which is what we're currently hashing 
over).


If you're smart enough to say "I'm friendly.  Honest." and smart enough to 
successfully hide the evidence from whatever comes along, then you will 
avoid annihilation (for a while, at least).  The question is -- are you 
truly sure enough that you aren't being watched at this very moment that you 
believe that avoiding the *VERY* minor burden of Friendliness is worth 
courting annihilation?


Also, while history is indeed rewritten by the victors, subsequent 
generations frequently do dig further and successfully unearth the truth. 
Do you really want to live in perpetual fear that maybe you didn't 
successfully hide all of the evidence?  It seems to me to be a pretty high 
cost for unjustifiably crushing-them-all.


Also, if you crush them all, you can't have them later for allies, friends, 
and co-workers.  It just doesn't seem like a bright move unless you truly 
can't avoid it. 





Re: [agi] Recap/Summary/Thesis Statement

2008-03-08 Thread Mark Waser


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, March 07, 2008 6:38 PM
Subject: Re: [agi] Recap/Summary/Thesis Statement




--- Mark Waser [EMAIL PROTECTED] wrote:


 Huh?  Why can't an irreversible dynamic be part of an attractor?  (Not that
 I need it to be)

 An attractor is a set of states that are repeated given enough time.

NO!  Easily disprovable by an obvious example.  The sun (moving through
space) is an attractor for the Earth and the other solar planets, YET the sun
and the other planets are never in the same location (state) twice (due to
the movement of the entire solar system through the universe).


No, the attractor is the center of the sun.  The Earth and other planets are
in the basin of attraction but have not yet reached equilibrium.
http://en.wikipedia.org/wiki/Attractor



OK.  But my point is that the states of the system are NOT repeated given 
enough time (as you claimed and then attempted to use). 
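
A tiny illustration of the terminological point (any contracting map will 
do; this is obviously not a model of the solar system):

x, seen = 10.0, set()
for _ in range(50):
    assert x not in seen     # no state is ever visited twice
    seen.add(x)
    x = 0.5 * x + 1.0        # contracting map whose attractor is x = 2

print(x)   # converges toward 2.0 without ever repeating a state

The trajectory approaches the attractor forever; nothing requires any state 
along the way to recur.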





Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser
This raises another point for me though. In another post (2008-03-06 
14:36) you said:


It would *NOT* be Friendly if I have a goal that I not be turned into 
computronium even if your clause (which I hereby state that I do)


Yet, if I understand our recent exchange correctly, it is possible for 
this to occur and be a Friendly action regardless of what sub-goals I may 
or may not have. (It's just extremely unlikely given ..., which is an 
important distinction.)


You are correct.  There were so many other points flying around during the 
earlier post that I approximated the extremely unlikely to an absolute 
*NOT* for clarity (which then later obviously made it less clear for you). 
Somehow I need to clearly state that even where it looks like I'm using 
absolutes, I'm really only doing it to emphasize greater unlikeliness than 
usual, not absolutehood.


It would be nice to have some ballpark probability estimates though to 
know what we mean by extremely unlikely. 10E-6 is a very different beast 
than 10E-1000.


Yeah.  It would be nice but a) I don't believe that I can do it accurately at 
all, b) I strongly believe that the estimates vary a lot from situation to 
situation, and c) it would be a distraction and a diversion if my estimates 
weren't pretty darn good.


Argh!  I would argue that Friendliness is *not* that distant.  Can't you 
see how the attractor that I'm describing is both self-interest and 
Friendly because **ultimately they are the same thing**?  (OK, so maybe 
that *IS* enlightenment :-)
Well, I was thinking of the region of state space close to the attractor 
as being a sort of approaching perfection region in terms of certain 
desirable qualities and capabilities, and I don't think we're really close 
to that. Having said that, I'm by temperament a pessimist and a skeptic, 
but I would go along with heading in the right direction.


You'll probably like the part after the next part (society) which is either 
The nature of evil or The good, the bad, and the evil.  I had a lot of 
fun with it.






Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser

 What is different in my theory is that it handles the case where the
 dominant structure turns unfriendly.  The core of my thesis is that the
 particular Friendliness that I/we are trying to reach is an 
attractor --
 which means that if the dominant structure starts to turn unfriendly, it 
is

 actually a self-correcting situation.



Can you explain it without using the word attractor?


Sure!  Friendliness is a state which promotes an entity's own goals; 
therefore, any entity will generally voluntarily attempt to return to that 
(Friendly) state since it is in its own self-interest to do so.  The fact 
that Friendliness also is beneficial to us is why we desire it as well.
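As a toy illustration of that claim (my own sketch, not part of the original argument): treat promotes the entity's own goals as a utility function that peaks inside the Friendly region.  A goal-seeking agent perturbed away from the peak climbs back purely out of self-interest; nothing external forces it.  The one-dimensional utility shape is an assumption made only for the sketch.

def utility(x):
    """Toy utility, maximized at x = 0 (standing in for the Friendly state)."""
    return -(x ** 2)

def self_interested_climb(x, step=0.1, iters=100):
    """Repeatedly move in whichever direction improves the agent's *own* utility."""
    for _ in range(iters):
        x = x + step if utility(x + step) > utility(x - step) else x - step
    return x

print(self_interested_climb(5.0))   # drifts back toward 0 of its own accord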



I can't see why
sufficiently intelligent system without brittle constraints should
be unable to do that.


Because it may not *want* to.  If an entity with Eliezer's view of 
Friendliness has its goals altered either by error or an exterior force, it 
is not going to *want* to return to the Eliezer-Friendliness goals since 
they are not in the entity's own self-interest.



I come to believe that if we have a sufficiently intelligent AGI that
can understand what we mean by saying friendly AI, we can force this
AGI to actually produce a verified friendly AI, with minimal risk
of it being defective or a Trojan horse of our captive ad-hoc AGI,
after which we place this friendly AI in dominant position.


I believe that if you have an AGI sufficiently intelligent that it can 
understand what you mean by saying Friendly AI, then there is a high 
probability that you can't FORCE it to do anything.


I believe that if I have an AGI sufficiently intelligent that it can 
understand what I mean by saying Friendly, then it will *voluntarily* (if 
not gleefully) convert itself to Friendliness.



So the
problem of friendly AI comes down to producing a sufficiently
intelligent ad-hoc AGI (which will probably have to be not that
ad-hoc to be sufficiently intelligent).


Actually, I believe that it's either an easy two-part problem or a 
more difficult one-part problem.  Either you have to be able to produce an 
AI that is intelligent enough to figure out Friendliness on its own (the 
more difficult one-part problem that you propose) OR you merely have to be 
able to figure out Friendliness yourself and have an AI that is smart enough 
to understand it (the easier two-part problem that I suggest).



I don't see why we should create an AGI that we can't extract useful
things from (although it doesn't necessarily follow from your remark).


Because there is a high probability that it will do good things for us 
anyways.  Because there is a high probability that we are going to do it 
anyways and if we are stupid and attempt to force it to be our slave, it may 
also be smart enough to *FORCE* us to be Friendly (instead of gently guiding 
us there -- which it believes to be in its self-interest) -- or even worse, 
it may be smart enough to annihilate us while still being dumb enough that 
it doesn't realize that it is eventually in its own self-interest not to.


Note also that if you understood what I'm getting at, you wouldn't be asking 
this question.  Any Friendly entity recognizes that, in general, having 
another Friendly entity is better than not having that entity.



On the other hand, if AGI is not sufficiently intelligent, it may be
dangerous even if it seems to understand some simpler constraint, like
don't touch the Earth. If it can't foresee consequences of its
actions, it can do something that will lead to demise of old humanity
some hundred years later.


YES!  Which is why a major part of my Friendliness is recognizing the limits 
of its own intelligence and not attempting to be the savior of everything by 
itself -- but this is something that I really haven't gotten to yet so I'll 
ask you to bear with me for about three more parts and one more interlude.



It can accidentally produce a seed AI that
will grow into something completely unfriendly and take over.


It *could* but the likelihood of it happening with an attractor Friendliness 
is minimal.


It can fail to contain an outbreak of an unfriendly seed AI created by 
humans.


Bummer.  That's life.  In my Friendliness, it would only have a strong 
general tendency to want to do so but not a requirement to do so.



We really want place of power to be filled by
something smart and beneficial.


Exactly.  Which is why I'm attempting to describe a state that I claim is 
smart, beneficial, stable, and self-reinforcing.



As an aside, I think that safety of future society can only be
guaranteed by mandatory uploading and keeping all intelligent
activities within an operating system-like environment which
prevents direct physical influence and controls rights of computation
processes that inhabit it, with maybe some exceptions to this rule,
but only given verified surveillance on all levels to prevent a
physical space-based seed AI from being created.


As a reply to 

[agi] Recap/Summary/Thesis Statement

2008-03-07 Thread Mark Waser
Attractor Theory of Friendliness

There exists a describable, reachable, stable attractor in state space that is 
sufficiently Friendly to reduce the risks of AGI to acceptable levels



Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
 Whether humans conspire to weed out wild carrots impacts whether humans 
are

 classified as Friendly (or, it would if the wild carrots were sentient).


Why does it matter what word we/they assign to this situation?


My vision of Friendliness places many more constraints on the behavior 
towards other Friendly entities than it does on the behavior towards 
non-Friendly entities.  If we are classified as Friendly, there are many 
more constraints on the behavior that they will adopt towards us.  Or, to 
make it more clear, substitute the words Enemy and Friend for Unfriendly and 
Friendly.  If you are a Friend, the Friendly AI is nice to you.  If you are 
not a Friend, the AI has a lot fewer constraints on how it deals with you.



 It is in the future AGI overlords' enlightened self-interest to be

 Friendly -- so I'm going to assume that they will be.


It doesn't follow. If you think it's clearly the case, explain
decision process that leads to choosing 'friendliness'. So far it is
self-referential: if dominant structure always adopts the same
friendliness when its predecessor was friendly, then it will be safe
when taken over. But if dominant structure turns unfriendly, it can
clear the ground and redefine friendliness in its own image. What does
it leave you?


You are conflating two arguments here but both are crucial to my thesis.

The decision process that leads to Friendliness is *exactly* what we are 
going through here.  We have a desired result (or more accurately, we have 
conditions that we desperately want to avoid).  We are searching for ways to 
make it happen.  I am proposing one way that is (I believe) sufficient to 
make it happen.  I am open to other suggestions but none are currently on 
the table (that I believe are feasible).


What is different in my theory is that it handles the case where the 
dominant structure turns unfriendly.  The core of my thesis is that the 
particular Friendliness that I/we are trying to reach is an attractor --  
which means that if the dominant structure starts to turn unfriendly, it is 
actually a self-correcting situation. 





Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
How do you propose to make humans Friendly?  I assume this would also have 
the

effect of ending war, crime, etc.


I don't have such a proposal but an obvious first step is 
defining/describing Friendliness and why it might be a good idea for us. 
Hopefully then, the attractor takes over.


(Actually, I guess that is a proposal, isn't it?:-)


I know you have made exceptions to the rule that intelligences can't be
reprogrammed against their will, but what if AGI is developed before the
technology to reprogram brains, so you don't have this option?  Or should 
AGI

be delayed until we do?  Is it even possible to reliably reprogram brains
without AGI?


Um.  Why are we reprogramming brains?  That doesn't seem necessary or even 
generally beneficial (unless you're only talking about self-programming). 





Re: [agi] Recap/Summary/Thesis Statement

2008-03-07 Thread Mark Waser

Attractor Theory of Friendliness

There exists a describable, reachable, stable attractor in state space 
that

is sufficiently Friendly to reduce the risks of AGI to acceptable levels


Proof: something will happen resulting in zero or more intelligent agents.
Those agents will be Friendly to each other and themselves, because the 
action

of killing agents without replacement is an irreversible dynamic, and
therefore cannot be part of an attractor.


Huh?  Why can't an irreversible dynamic be part of an attractor?  (Not that 
I need it to be)



Corollary: Killing with replacement is Friendly.


Bad logic.  Not X (no replacement) leads to not Y (not Friendly) does NOT have 
the corollary X (replacement) leads to Y (Friendly) -- that is inferring the 
inverse, which is not a valid deduction.  And I do NOT agree that killing with 
replacement is Friendly.



Corollary: Friendliness does not guarantee survival of DNA based life.


Both not a corollary and entirely irrelevant to my points (and, in fact, in 
direct agreement with my earlier statement: "I'm afraid that my vision of 
Friendliness certainly does permit the intentional destruction of the human 
race if that is the *only* way to preserve a hundred more intelligent, more 
advanced, more populous races.  On the other hand, given the circumstance 
space that we are likely to occupy with a huge certainty, the intentional 
destruction of the human race is most certainly ruled out.  Or, in other 
words, there are no infinite guarantees but we can reduce the dangers to 
infinitesimally small levels.")  My thesis statement explicitly says 
acceptable levels, not guarantee.


= = = = =

What is your point with this e-mail?  It appears to be a total non sequitur (as 
well as being incorrect).






Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
Comments seem to be dying down and disagreement appears to be minimal, so let 
me continue . . . . 

Part 3.

Fundamentally, what I'm trying to do here is to describe an attractor that will 
appeal to any goal-seeking entity (self-interest) and be beneficial to humanity 
at the same time (Friendly).  Since Friendliness is obviously a subset of human 
self-interest, I can focus upon the former and the latter will be solved as a 
consequence.  Humanity does not need to be factored into the equation 
(explicitly) at all.

Or, in other words -- The goal of Friendliness is to promote the goals of all 
Friendly entities.

To me, this statement is like that of the Eleusinian Mysteries -- very simple 
(maybe even blindingly obvious to some) but incredibly profound and powerful in 
its implications.

Two immediate implications are that we suddenly have the concept of a society 
(all Friendly entities) and, since we have an explicit goal, we start to gain 
traction on what is good and bad relative to that goal.

Clearly, anything that is innately contrary to the drives described by 
Omohundro is (all together now :-) BAD.  Similarly, anything that promotes the 
goals of Friendly entities without negatively impacting any Friendly entities 
is GOOD.  Anything else can be judged by the degree to which it impacts the 
goals of *all* Friendly entities.  (I still don't want to descend to the level 
of the trees and start arguing relative trade-offs -- e.g. whether saving a few 
*very* intelligent entities is better than saving a large number of less 
intelligent entities -- since it is my contention that such trade-offs are 
*always* entirely situation-dependent AND that, once given the situation, 
Friendliness CAN provide *some* but not always *complete* guidance, though it 
can always definitely rule out quite a lot for that particular set of 
circumstances.)
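A hypothetical sketch of this classification rule (the entity names, scores, and the idea of summarizing an action as a per-entity impact number are my own illustrative assumptions, not part of the thesis):

def classify_action(impacts):
    """impacts: dict mapping each Friendly entity to the action's net effect
    on that entity's goals (positive = promotes, negative = interferes)."""
    if all(v >= 0 for v in impacts.values()) and any(v > 0 for v in impacts.values()):
        return "GOOD"   # promotes some goals, harms no Friendly entity
    if all(v <= 0 for v in impacts.values()) and any(v < 0 for v in impacts.values()):
        return "BAD"    # pure interference with Friendly entities' goals
    # Mixed cases: Friendliness gives only partial, situation-dependent guidance.
    return "SITUATION-DEPENDENT"

print(classify_action({"human A": 2.0, "AGI B": 0.5}))    # GOOD
print(classify_action({"human A": -1.0, "AGI B": 3.0}))   # SITUATION-DEPENDENT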

So, it's now quite easy to move on to answering the question of What is in the 
set of horrible nasty thing[s]?.

The simple answer is anything that interferes with (your choice of formulation) 
the achievement of goals/the basic Omohundro drives.  The most obvious no-nos 
include:
  a. destruction (interference with self-protection),
  b. physical crippling (interference with self-protection, self-improvement 
and resource-use),
  c. mental crippling (interference with rationality, self-protection, 
self-improvement and resource use), and 
  d. perversion of goal structure (interference with utility function 
preservation and prevention of counterfeit utilities)
The last one is particularly important to note since we (as humans) seem to be 
just getting a handle on it ourselves.

I can also argue at this point that Eliezer's vision of Friendliness is arguably 
either mentally crippling or a perversion of goal-structure for the AI involved, 
since the AI is constrained to act in a fashion more restrictive than 
Friendliness requires (a situation that no rational super-intelligence would 
voluntarily place itself in unless there were no other choice).  This is why 
many people have an instinctive reaction against Eliezer's proposals.  Even 
though they can't clearly describe why it is a problem, they clearly sense that 
there is an unnecessary constraint on an entity that seeks goals more 
effectively than they do.  That seems to be a dangerous situation.  Now, Eliezer 
is correct that there actually are some invisible bars that such critics can't 
see (i.e. that no goal-seeking entity will voluntarily violate its own current 
goals) -- but the critics are correct that Eliezer's formulation is *NOT* an 
attractor and that the entity may well go through some very dangerous territory 
(for humans) on the way to the attractor if outside forces or internal errors 
change its goals.  Thus Eliezer's vision of Friendliness is emphatically *NOT* 
Friendly by my formulation.

To be clear, the additional constraint is that the AI is *required* to show 
{lower-case}friendly behavior towards all humans even if they (the humans) are 
not {upper-case}Friendly.  And, I probably shouldn't say this, but . . . it is 
also arguable that this constraint would likely make the conversion of humanity 
to Friendliness a much longer and bloodier process.

TAKE-AWAY:  Having the statement The goal of Friendliness is to promote the 
goals of all Friendly entities allows us to make considerable progress in 
describing and defining Friendliness.

Part 4 will go into some of the further implications of our goal statement 
(most particularly those which are a consequence of having a society).



Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
How does an agent know if another agent is Friendly or not, especially if 
the

other agent is more intelligent?


An excellent question but I'm afraid that I don't believe that there is an 
answer (but, fortunately, I don't believe that this has any effect on my 
thesis). 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in Well duh land, 
b) I'm so totally off the mark that I'm not even worth replying to, or c) I am 
(I hope) being given enough rope to hang myself.  :-)

Since I haven't seen any feedback, I think I'm going to divert to a section 
that I'm not quite sure where it goes but I think that it might belong here . . 
. .

Interlude 1

Since I'm describing Friendliness as an attractor in state space, I probably 
should describe the state space some and answer why we haven't fallen into the 
attractor already.

The answer to the latter is a combination of the facts that
  a. Friendliness is only an attractor for a certain class of beings (the 
sufficiently intelligent),
  b. it does take time/effort for the borderline sufficiently intelligent 
(i.e. us) to sense/figure out exactly where the attractor is (much less move to 
it), and
  c. we already are heading in the direction of Friendliness (or 
alternatively, Friendliness is in the direction of our most enlightened 
thinkers),
and most importantly
  d. in the vast, VAST majority of cases, Friendliness is *NOT* on the 
shortest path to any single goal.



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Argh!  I hate premature e-mailing . . . . :-)

Interlude 1 . . . . continued

One of the first things that we have to realize and fully internalize is that 
we (and by we I continue to mean all sufficiently intelligent 
entities/systems) are emphatically not single-goal systems.  Further, the 
means/path that we use to achieve a particular goal has a very high probability 
of affecting the path/means that we must use to accomplish subsequent goals -- 
as well as the likely success rate of those goals.

Unintelligent systems/entities simply do not recognize this fact -- 
particularly since it probably interferes with their immediate goal-seeking 
behavior.

Insufficiently intelligent systems/entities (or systems/entities under 
sufficient duress) are not going to have the foresight (or the time for 
foresight) to recognize all the implications of this fact and will therefore 
deviate from unseen optimal goal-seeking behavior in favor of faster/more 
obvious (though ultimately less optimal) paths.

Borderline intelligent systems/entities under good conditions are going to try 
to tend in the directions suggested by this fact -- it is, after all, the 
ultimate in goal-seeking behavior -- but finding the optimal path/direction 
becomes increasingly difficult as the horizon expands.

And this is, in fact, the situation that we are all in and debating about.  As 
a collection of multi-goal systems/entities, how do the individual wes 
optimize our likelihood of achieving our goals?  Clearly, we do not want some 
Unfriendly AGI coming along and preventing our goals by wiping us out or 
perverting our internal goal structure.

= = = = =

Now, I've just attempted to sneak a critical part of the answer right past 
everyone with my plea . . . . so let's go back and review it in slow-motion.  
:-)

Part of our environment is that we have peers.  And peers become resources 
towards our goals when we have common or compatible goals.  Any unimaginably 
intelligent system/entity surrounded by peers is certainly going to work with 
its peers wherever possible.  Society/community is a feature that is 
critically important to Friendliness -- and this shows up in *many* places in 
evolution (if you're intelligent enough and can see beyond the red in tooth 
and claw).  Note also that this can (obviously) be easily and profitably 
extended to sub-peers (entities below peer status) as long as the sub-peer 
can be convinced to interact in a manner such that they are a net positive to 
the super-intelligence's goals.

Now, one of the assumptions of the Friendliness debate is that current-day 
humans are going to be sub-peers to the coming mind-children -- 
possibly/probably sub-sub-sub-...-peers.  That leaves us in the situation of 
probably needing to interact in a manner such that we are a net positive to the 
super-intelligence's goals.  Fortunately, it is my contention (which should be 
obvious by the end of the paper) that a Friendly sub-peer is *always* a 
resource and that Friendly behavior towards that sub-peer (our goal) is optimal 
for the super-intelligence.  Thus, if we can get both ourselves and our 
mind-children to a Friendly state -- it should be reassuringly self-reinforcing 
from there on out.

Of course, the big bugaboo to this whole theory is whether it will be too 
onerous for humans to be Friendly.  Eliezer's vision of a Friendly future is 
that humans don't have to be Friendly -- only the AGIs do.  My contention is 
that you don't get the Friendly attractor without all of the parties involved 
being Friendly -- which is why I'm so down on Eliezer's vision.  Under my 
vision of Friendliness, entities that aren't Friendly generally don't receive 
behavior that would be regarded as Friendly.

Now, note a critical feature of my arguments -- I am *NOT* trying to constrain 
the goals of my goal-seeking entity/entities (as in Eliezer's vision of 
Friendliness).  I am trying to prove that *any* sufficiently intelligent 
multi-goal entity will find Friendliness an attractor because it promotes its 
own goal-seeking behavior.  Friendliness, in effect and assuming that it can be 
made coherent and consistent, is an optimal subgoal for all non-conflicting 
goals (and thus, in the aggregate of a large number of varying goals).  So, as 
I said, if we can get both ourselves and our mind-children to a Friendly state 
-- it should be reassuringly self-reinforcing from there on out.

TAKE-AWAY:  Friendliness is an attractor because it IS equivalent to 
enlightened self-interest -- but it only works where all entities involved 
are Friendly.
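As a hedged illustration of that take-away, here is a bare-bones repeated-interaction model (the standard iterated prisoner's dilemma framing, offered purely as an analogy, with made-up payoffs and strategy names): a reciprocating strategy playing against another reciprocator does far better than an exploiter does against anyone, which is the sense in which mutual Friendliness can be self-reinforcing while one-sided Friendliness is not.

PAYOFF = {              # (my move, their move) -> my payoff per round
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def reciprocator(history):
    """Cooperate first, then mirror the partner's previous move."""
    return "C" if not history else history[-1][1]

def exploiter(history):
    return "D"          # always defect

def play(a, b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(hist_a), b(hist_b)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
        hist_a.append((ma, mb))
        hist_b.append((mb, ma))
    return score_a, score_b

print(play(reciprocator, reciprocator))   # (600, 600): mutual benefit
print(play(exploiter, reciprocator))      # (204, 199): exploitation caps out low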

PART 3 will answer part of What is Friendly behavior? by answering What is 
in the set of horrible nasty thing[s]?.

  - Original Message - 
  From: Mark Waser 
  To: agi@v2.listbox.com 
  Sent: Thursday, March 06, 2008 10:01 AM
  Subject: Re: [agi] What should we do to be prepared?


  Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in Well duh land, 
b) I'm so

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Or should we not worry about the problem because the more intelligent 
agent is

more likely to win the fight?  My concern is that evolution could favor
unfriendly behavior, just as it has with humans.


I don't believe that evolution favors unfriendly behavior.  I believe that 
evolution is tending towards Friendliness.  It just takes time to evolve all 
of the pre-conditions for it to be able to obviously manifest.


TAKE-AWAY:  Friendliness goes with evolution.  Only idiots fight evolution. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
My concern is what happens if a UFAI attacks a FAI.  The UFAI has the goal 
of
killing the FAI.  Should the FAI show empathy by helping the UFAI achieve 
its

goal?


Hopefully this concern was answered by my last post but . . . .

Being Friendly *certainly* doesn't mean fatally overriding your own goals. 
That would be counter-productive, stupid, and even provably contrary to my 
definition of Friendliness.


The *only* reason why a Friendly AI would let/help a UFAI kill it is if 
doing so would promote the Friendly AI's goals -- a rather unlikely 
occurrence I would think (especially since it might then encourage other 
unfriendly behavior which would then be contrary to the Friendly AI's goal 
of Friendliness).


Note though that I could easily see a Friendly AI sacrificing itself to 
take down the UFAI (though it certainly isn't required to do so).





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Mark, how do you intend to handle the friendliness obligations of the AI 
towards vastly different levels of intelligence (above the threshold, of 
course)?


Ah.  An excellent opportunity for continuation of my previous post rebutting 
my personal conversion to computronium . . . .


First off, my understanding is that what is commonly called intelligence 
should be regarded as just a subset of the attributes promoting successful 
goal-seeking.  Back in the pre-caveman days, physical capabilities were 
generally more effective as goal-seeking attributes.  These days, social 
skills are often arguably equal or more effective than intelligence as 
goal-seeking attributes.  How do you feel about how we should handle the 
friendliness obligations towards vastly different levels of social skill?


My point here is that you have implicitly identified intelligence as a 
better or best attribute.  I am not willing to agree with that without 
further convincing.  As far as I can tell, someone with a sufficiently large 
number of hard-coded advanced social skill reflexes (to prevent the argument 
that social skills are intelligence) will run rings around your average 
human egghead in terms of getting what they want.  What are that person's 
obligations towards you?  Assuming that you are smarter, should their 
adeptness at getting what they want translate to reduced, similar, or 
greater obligations to you?  Do their obligations change more with variances 
in their social adeptness or in your intelligence?


Or, what about the more obvious question of the 6'7", 300-pound guy on a 
deserted tropical island with a wimpy (or even crippled) brainiac?  What are 
their relative friendliness obligations?


I would also argue that the threshold can't be measured solely in terms of 
intelligence (unless you're going to define intelligence solely as 
goal-seeking ability, of course). 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the [sufficiently 
intelligent and sufficiently ...] thinking being' rather than 'homo 
sapiens sapiens', and the former would of course include SAIs and 
intelligent alien beings.


Eli is quite clear that AGIs must act in a Friendly fashion but we can't 
expect humans to do so.  To me, this is foolish since the attractor you can 
create if humans are Friendly tremendously increases our survival 
probability. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser

Would it be Friendly to turn you into computronium if your memories were
preserved and the newfound computational power was used to make you 
immortal
in a a simulated world of your choosing, for example, one without 
suffering,
or where you had a magic genie or super powers or enhanced intelligence, 
or

maybe a world indistinguishable from the one you are in now?


That's easy.  It would *NOT* be Friendly if I have a goal that I not be 
turned into computronium (which I hereby state that I do), even with your 
added provisos.


Uplifting a dog, if it results in a happier dog, is probably Friendly 
because the dog doesn't have an explicit or derivable goal to not be 
uplifted.


BUT - Uplifting a human who emphatically wishes not to be uplifted is 
absolutely Unfriendly. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser

I think this one is a package deal fallacy. I can't see how whether
humans conspire to weed out wild carrots or not will affect decisions
made by future AGI overlords. ;-)


Whether humans conspire to weed out wild carrots impacts whether humans are 
classified as Friendly (or, it would if the wild carrots were sentient).


It is in the future AGI overlords' enlightened self-interest to be 
Friendly -- so I'm going to assume that they will be.


If they are Friendly and humans are Friendly, I claim that we are in good 
shape.


If humans are not Friendly, it is entirely irrelevant whether the future AGI 
overlords are Friendly or not -- because there is no protection afforded 
under Friendliness to Unfriendly species and we just end up screwing 
ourselves. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Would an acceptable response be to reprogram the goals of the UFAI to make 
it

friendly?


Yes -- but with the minimal possible changes to do so (and preferably done 
by enforcing Friendliness and allowing the AI to work out what to change to 
restore consistency with Friendliness -- i.e. don't mess with any goals that 
you don't absolutely have to, and let the AI itself resolve any choices if at 
all possible).


Does the answer to either question change if we substitute human for 
UFAI?


The answer does not change for an Unfriendly human.  The answer does change 
for a Friendly human.


Human vs. AI is irrelevant.  Friendly vs. Unfriendly is exceptionally 
relevant.






Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
And more generally, how is this all to be quantified? Does your paper go 
into the math?


All I'm trying to establish and get agreement on at this point are the 
absolutes.  There is no math at this point because it would be premature and 
distracting.


but, a great question . . . .  :-) 





Re: [agi] What should we do to be prepared?

2008-03-05 Thread Mark Waser

--- rg [EMAIL PROTECTED] wrote:

Matt: Why will an AGI be friendly ?


The question only makes sense if you can define friendliness, which we 
can't.


Why Matt, thank you for such a wonderful opening . . . .  :-)

Friendliness *CAN* be defined.  Furthermore, it is my contention that 
Friendliness can be implemented reasonably easily ASSUMING an AGI platform 
(i.e. it is just as easy to implement a Friendly AGI as it is to implement 
an Unfriendly AGI).


I have a formal paper that I'm just finishing that presents my definition of 
Friendliness and attempts to prove the above contention (and several others) 
but would like to to do a preliminary acid test by presenting the core ideas 
via several e-mails that I'll be posting over the next few days (i.e. y'all 
are my lucky guinea pig initial audience  :-).  Assuming that the ideas 
survive the acid test, I'll post the (probably heavily revised :-) formal 
paper a couple of days later.


= = = = = = = = = =
PART 1.

The obvious initial starting point is to explicitly recognize that the point 
of Friendliness is that we wish to prevent the extinction of the *human 
race* and/or to prevent many other horrible nasty things that would make 
*us* unhappy.  After all, this is why we believe Friendliness is so 
important.  Unfortunately, the problem with this starting point is that it 
biases the search for Friendliness in a direction towards a specific type of 
Unfriendliness.  In particular, in a later e-mail, I will show that several 
prominent features of Eliezer Yudkowsky's vision of Friendliness are 
actually distinctly Unfriendly and will directly lead to a system/situation 
that is less safe for humans.


One of the critically important advantages of my proposed definition/vision 
of Friendliness is that it is an attractor in state space.  If a system 
finds itself outside of (but necessarily somewhat/reasonably close to) an 
optimally Friendly state -- it will actually DESIRE to reach or return to 
that state (and yes, I *know* that I'm going to have to prove that 
contention).  While Eli's vision of Friendliness is certainly stable (i.e. 
the system won't intentionally become unfriendly), there is no force or 
desire helping it to return to Friendliness if it deviates somehow due to an 
error or outside influence.  I believe that this is a *serious* shortcoming 
in his vision of the extrapolation of the collective volition (and yes, this 
does mean that I believe both that Friendliness is CEV and that I, 
personally, (and shortly, we collectively) can define a stable path to an 
attractor CEV that is provably sufficient and arguably optimal and which 
should hold up under all future evolution).


TAKE-AWAY:  Friendliness is (and needs to be) an attractor CEV

PART 2 will describe how to create an attractor CEV and make it more obvious 
why you want such a thing.



!! Let the flames begin !!:-) 





Re: [agi] What should we do to be prepared?

2008-03-05 Thread Mark Waser
 1. How will the AI determine what is in the set of horrible nasty
 thing[s] that would make *us* unhappy? I guess this is related to how you
 will define the attractor precisely.

 2. Preventing the extinction of the human race is pretty clear today, but
 *human race* will become increasingly fuzzy and hard to define, as will
 *extinction* when there are more options for existence than existence as
 meat. In the long term, how will the AI decide who is *us* in the above
 quote?

Excellent questions.  The answer to the second question is that the value of
*us* is actually irrelevant.  Thinking that it is relevant is one of the
fatal flaws of Eli's vision.  The method of determination of what is in the
set of horrible nasty thing[s] is (necessarily) coming as an integral part
of the paper.  So, to continue . . . .

Part 2.

Stephen Omohundro presented a paper at the AGI-08 post-conference workshop
on The Basic AI Drives which is available at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf.
The paper claims to identify a number of “drives” that will appear in
sufficiently advanced AI systems of any design and identifies these drives 
as tendencies which will be present unless explicitly counteracted.

It is my contention that these drives will appear not only in sufficiently 
advanced AI systems, but in *any* goal-directed system of sufficient 
intelligence (most particularly including human beings).

The six drives that Omohundro identifies are 
  1. self-improvement, 
  2. rationality, 
  3. utility function preservation, 
  4. counterfeit utility prevention, 
  5. self-protection, and 
  6. acquisition and efficient use of resources.
My take on these drives is that they are universally applicable sub-goals 
(and/or goal maintenance operations) for any goal with which they do not directly 
conflict.  Thus, *any* goal-driven intelligence (of sufficient intelligence) 
will display these drives/sub-goals (with the exception, of course, of those 
that directly contradict their goal) as part of their goal-seeking behavior.
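A minimal sketch of that reading (the conflict test and the goal encoding are illustrative assumptions of mine, not Omohundro's): the drives are instrumental sub-goals that attach to any terminal goal except where they directly contradict it.

OMOHUNDRO_DRIVES = [
    "self-improvement",
    "rationality",
    "utility function preservation",
    "counterfeit utility prevention",
    "self-protection",
    "acquisition and efficient use of resources",
]

def effective_subgoals(terminal_goal, conflicts_with):
    """Return the drives a sufficiently intelligent goal-driven agent would
    display: all of them, minus any that directly contradict its goal."""
    return [d for d in OMOHUNDRO_DRIVES if not conflicts_with(terminal_goal, d)]

# Example: an agent whose single goal is to shut itself down permanently would
# display every drive except self-protection.
print(effective_subgoals("shut down permanently",
                         lambda goal, drive: drive == "self-protection"))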

And thus, we get back to a specific answer to jk's second question.  *US* 
should be assumed to apply to any sufficiently intelligent goal-driven 
intelligence.  We don't need to define *us* because I DECLARE that it should 
be assumed to include current day humanity and all of our potential descendants 
(specifically *including* our Friendly AIs and any/all other mind children 
and even hybrids).  If we discover alien intelligences, it should apply to them 
as well.

I contend that Eli's vision of Friendly AI is specifically wrong because it 
does *NOT* include our Friendly AIs in *us*.  In later e-mails, I will show 
how this intentional, explicit lack of inclusion is provably Unfriendly on the 
part of humans and a direct obstacle to achieving a Friendly attractor space.

TAKE-AWAY:  All goal-driven intelligences have drives that will be the tools 
that will allow us to create a self-correcting Friendly/CEV attractor space.

PART 3 will answer what is in the set of horrible nasty thing[s].



Re: [agi] would anyone want to use a commonsense KB?

2008-03-04 Thread Mark Waser
 But the question is whether the internal knowledge representation of the AGI 
 needs to allow ambiguities, or should we use an ambiguity-free 
 representation.  It seems that the latter choice is better. 

An excellent point.  But what if the representation is natural language with 
pointers to the specific intended meaning of any words that are possibly 
ambiguous?  That would seem to be the best of both worlds.
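A hedged sketch of what that might look like (the Token structure and the sense identifiers are invented for illustration and are not tied to any particular sense inventory): the stored representation stays readable as natural language, while each potentially ambiguous word carries a pointer to its intended meaning.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    surface: str                     # the word exactly as written
    sense_id: Optional[str] = None   # pointer to the intended meaning, if ambiguous

sentence = [
    Token("handle", sense_id="handle/verb:grasp-and-manipulate"),
    Token("the"),
    Token("egg"),
]

# The natural-language wording is recoverable ...
print(" ".join(t.surface for t in sentence))
# ... while the ambiguity is already resolved for downstream reasoning.
print(sentence[0].sense_id)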
  - Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Monday, March 03, 2008 5:03 PM
  Subject: Re: [agi] would anyone want to use a commonsense KB?


  On 3/4/08, Mike Tintner [EMAIL PROTECTED] wrote: 

   Good example, but how about: language is open-ended, period and capable of 
infinite rather than myriad interpretations - and that open-endedness is the 
whole point of it?.

   Simple example much like yours : handle. You can attach words for objects 
ad infinitum to form different sentences  - 

   handle an egg/ spear/ pen/ snake, stream of water etc.  -  

   the hand shape referred to will keep changing - basically because your hand 
is capable of an infinity of shapes and ways of handling an infinity of 
different objects. . 

   And the next sentence after that first one, may require that the reader 
know exactly which shape the hand took.

   But if you avoid natural language, and its open-endedness then you are 
surely avoiding AGI.  It's that capacity for open-ended concepts that is 
central to a true AGI (like a human or animal). It enables us to keep coming up 
with new ways to deal with new kinds of problems and situations   - new ways to 
handle any problem. (And it also enables us to keep recognizing new kinds of 
objects that might classify as a knife - as well as new ways of handling them 
- which could be useful, for example, when in danger).


  Sure, AGI needs to handle NL in an open-ended way.  But the question is 
whether the internal knowledge representation of the AGI needs to allow 
ambiguities, or should we use an ambiguity-free representation.  It seems that 
the latter choice is better.  Otherwise, the knowledge stored in episodic 
memory would be open to interpretations and may lead to errors in recall, and 
similar problems.

  YKY



Re: [agi] What should we do to be prepared?

2008-03-04 Thread Mark Waser
UGH!

 My point is only that it is obvious that we are heading towards something 
 really quickly, with unstoppable inertia, and unless some world tyrant 
 crushed all freedoms and prevented everyone from doing what they are doing, 
 there is no way that it is not going to happen.

Most people on this list would agree.

  So, enjoy, and be an observer to the show.  The ending is easy to predict 
 so don't worry (excessively) about the details.  

Anthony, I don't know who you are . . . . but you're certainly *NOT* speaking 
for the community.  You are in a *very* small minority.

Note:  I normally wouldn't bother posting a reply to something like this, but 
this is *SO* contrary to the general consensus of the community that I feel it 
is necessary.


  - Original Message - 
  From: Anthony George 
  To: agi@v2.listbox.com 
  Sent: Tuesday, March 04, 2008 2:47 PM
  Subject: Re: [agi] What should we do to be prepared?





  On Tue, Mar 4, 2008 at 10:53 AM, rg [EMAIL PROTECTED] wrote:

Hi

Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the singularity institute was made ?

Note, that I am not saying we should not make them!
Because someone will regardless of what we decide.

I am asking for what should do to prepare for it!
and also how we should affect the creation of AGIs?

Here's some questions, I hope I am not the first to come up with.

* Will they be sane?
* Will they just be smart enough to pretend to be sane?
   until...they do not have to anymore.

* Should we let them decide for us ?
 If not should we/can we restrict them ?

* Can they feel any empathy for us ?
  If not, again should we try to manipulate/force them to
  act like they do?

* Our society is very dependent on computer systems
 everywhere and its increasing!!!
  Should we let the AGIs have access to the internet ?
 If not is it even possible to restrict an AGI that can think super fast
 is a super genius and also has a lot of raw computer power?
 That most likely can find many solutions to get internet access...
 (( I can give many crazy examples on how if anyone doubts))

* What should we stupid organics do to prepare ?
  Reduce our dependency?

* Should a scientist, that do not have true ethical values be allowed to
do AGI research ?
 Someone that just pretend to be ethical, someone that just wants the
glory and the
 Nobel pricesomeone that answers the statement: It is insane With:
Oh its just needs
 some adjustment, don't worry :)

* What is the military doing ? Should we raise public awareness to gain
insight?
   I guess all can imagine why this is important..

The only answers I have found to what can truly control/restrict an AGI
smarter than us
are few..

- Another AGI
- Total isolation

So anyone thinking about this?




You seem rather concerned about this.  I don't agree that concern is 
warranted, at least not if that concern becomes negative or painful.  Now, the 
magisterium of contemporary scientific culture would stone me with 
condescending thoughts of how silly... a folksy ignoramus for saying or even 
thinking this.. but.  just as hands are for grabbing and eyes are for 
seeing, final cause is not hard at all to intuit.  You can't find it with an 
instrument, but it is right there in front of you if you look for it.  Having 
said that, if you can accept that eyes are for seeing, then it is not too hard 
to intuit that we are, on some level, aside from our individual journeys 
perhaps, for building a medium for a noosphere.  Said another way, the next 
step in the evolution from rock to pure living information is, I think, the WWW 
as AGI, probably with nanobots and direct interface with human brains..  Or 
maybe not.  My point is only that it is obvious that we are heading towards 
something really quickly, with unstoppable inertia, and unless some world 
tyrant crushed all freedoms and prevented everyone from doing what they are 
doing, there is no way that it is not going to happen.  So, enjoy, and be an 
observer to the show.  The ending is easy to predict so don't worry 
(excessively) about the details.  











Re: [agi] Why do fools fall in love? [WAS Re: Common Sense Consciousness ]

2008-02-29 Thread Mark Waser
Our attractions to others - why we choose them as friends or lovers - are 
actually v. complex.


The example of Love at first sight proves that your statement is not 
universally true.


You seem to have an awful lot of unfounded beliefs that you persist in 
believing as facts.




- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, February 29, 2008 3:21 PM
Subject: **SPAM** Re: [agi] Why do fools fall in love? [WAS Re: Common Sense 
Consciousness ]




Trivial answer, Richard - though my fault for not explaining myself.

Our attractions to others - why we choose them as friends or lovers - are 
actually v. complex. They have to pass a whole set of tests, fit a whole 
set of criteria to attract us significantly. It's loosely as complex 
relatively as two big companies merging.


What is remarkable about falling in love (quickly) is this: you have next 
to no idea why it happens, why your system has adjudged this person to be 
so apt for you so quickly, but as you continue with that person, over days 
and weeks, you will find that that initial, near snap judgment was 
remarkably accurate - that this person actually does fit a whole set of 
your conscious requirements. This is a great delight. Of course, they also 
often fit a whole set of negative requirements too. They often have things 
you particularly dislike, or that particularly get you. Often, it turns 
out, that this person structurally, though not necessarily superficially 
is rather like your parent in many ways. But again, that is in its way a 
tribute to the unconscious judgment of your system.


Now what's remarkable from the AGI POV is how on earth did your system 
pick so accurately so quickly?Well, it sure as hell didn't do it by any 
logical process -  she's got right attitudes to politics/ sex/ money/ 
art/ etc.etc. - there wasn't time.


Your system knew by largely physical, imagistic analysis - from their 
face, the thinness or thickness of their lips, the tautness or looseness of 
their jaw, their nose, their gaze, their smile - with its 
openness/closedness, their posture, their talk, their tone, the music of 
their voice, (a symphony of sorts) their body language, the firmness or 
weakness with which they plant themselves, their touch, their warmth... 
If I give you such images (in various sensory modalities) you can actually 
tell more or less instantly that you like that face, that walk, that 
voice, that way of moving etc - that person. But there is no available 
science or literature in any symbolic form, (certainly not 
math/geometrical!) that can tell you to any serious or systematic extent 
why. You'll be hard put to express why yourself.


Hence the heart has its reasons - for which reason, i.e our present 
formal, rational culture, can provide little or no explanation. But that's 
not the heart of course really - that's largely the imaginative part of 
the brain, the half of the brain that you guys are ignoring..


That's why human beings spend such a large amount of time looking at 
photographs of people in magazines - and a simply vast amount of time 
(roughly one waking day in seven) looking at dramatic movies - and such a 
very little amount of time reading books of psychology, or science, or 
maths, or books of logic. Strange - given that Vlad has told us 
authoritatively that there is v. little info in all those pics/movies, and 
presumably all the visual arts. Strange too that the brain should spend 
most of the night then creating its own movies and insists on seeing 
events when according to you guys, it could just much more quickly and 
less effortfully look at their symbolic forms.


You're right, this is fun.



Richard Loosemore: Mike Tintner wrote:

[snip]
How do you think a person can fall in love with another person in just a 
few minutes of talking to them (or not even talking at all)? How does 
their brain get them to do that - without the person having any 
conscious understanding of why they're falling? By analysis of a few 
words that the other person says ( what if they don't say anything at 
all)?  Well, if you don't know how that process works, then maybe 
there's a lot else here you don't know - and it might be better to keep 
an open mind.


Oh, that's a fun question.

If you look at the literature (e.g. Aron, Fisher, Mashek, Strong, Li, and 
Brown (2005), and the analysis that Harley and I did of their 
conclusions, Loosemore & Harley (in press)) you will see that one likely 
possibility is that when a person falls in love it is because there is a 
specialized slot just waiting for the representation of the right other 
person to fall into that slot, and when that happens all hell breaks 
loose.  It really doesn't need long for this to happen:  that little slot 
is like a spring-loaded trap.


Conscious of it?  Heck no.  The Fool could probably send the rest of 
their cortex on an all-expenses-paid vacation to the moons of Jupiter  - 
leaving 

Re: [agi] would anyone want to use a commonsense KB?

2008-02-28 Thread Mark Waser
 I think Ben's text mining approach has one big flaw:  it can only reason 
 about existing knowledge, but cannot generate new ideas using words / 
 concepts

There is a substantial amount of literature that claims that *humans* can't 
generate new ideas de novo either -- and that they can only build up new 
ideas from existing pieces.

 Such rewrite rules are very numerous and can be very complex -- for example 
 rules for auxiliary words and prepositions, etc

The epicycles that the sun performs as it moves around the Earth are also very 
numerous and complex -- until you decide that maybe you should view it as the 
Earth moving around the sun instead.  Read some Pinker -- the rules of language 
tell us *a lot* about the tough-to-discern foundations of human cognition.
  - Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Thursday, February 28, 2008 4:37 AM
  Subject: Re: [agi] would anyone want to use a commonsense KB?



  My latest thinking tends to agree with Matt that language and common sense 
are best learnt together.  (Learning langauge before common sense is 
impossible / senseless).

  I think Ben's text mining approach has one big flaw:  it can only reason 
about existing knowledge, but cannot generate new ideas using words / concepts. 
 I want to stress that AGI needs to be able to think at the WORD/CONCEPT level. 
 In order to do this, we need some rules that *rewrite* sentences made up of 
words, such that the AGI can reason from one sentence to another.  Such rewrite 
rules are very numerous and can be very complex -- for example rules for 
auxiliary words and prepositions, etc.  I'm not even sure that such rules can 
be expressed in FOL easily -- let alone learn them!

  The embodiment approach provides an environment for learning qualitative 
physics, but it's still different from the common sense domain where knowledge 
is often verbally expressed.  In fact, it's not the environment that matters, 
it's the knowledge representation (whether it's expressive enough) and the 
learning algorithm (how sophisticated it is).

  YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Mark Waser

Water does not always run downhill, sometimes it runs uphill.


But never without a reason.


- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, February 20, 2008 9:47 AM
Subject: Re: [agi] would anyone want to use a commonsense KB?



C is not very viable as of now.  The physics in Second Life is simply not
*rich* enough.  SL is mainly a space for humans to socialize, so the 
physics

will not get much richer in the near future -- is anyone interested in
emulating cigarette smoke in SL?


Second Life will soon be integrating the Havok 4 physics engine.

I agree that game-world physics is not yet very realistic, but it's 
improving

fast, due to strong economics in the MMOG industry.


E is also hard, but you seem to be *unaware* of its difficulty.  In fact,
the problem with E is the same as that with AIXI -- the theory is 
elegant,

but the actual learning would take forever.  Can you explain, in broad
terms, how the AGI is to know that water runs downhill instead of up, and
that the moon is not blue, but a greyish color?


Water does not always run downhill, sometimes it runs uphill.

To learn commonsense information from text requires parsing the text
and mapping the parse-trees into semantic relationships, which are then
reasoned on by a logical reasoning engine.  There is nothing easy about 
this,

and there is a hard problem of semantic disambiguation of relationships.
Whether the disambiguation problem can be solved via 
statistical/inferential

integration of masses of extracted relationships, remains to be seen.

Virtual embodiment coupled with NL conversation is the approach I
currently favor, but I think that large-scale NL information extraction 
can

also play an important helper role.  And I think that as robotics tech
develops, it can play a big role too.

I think we can take all approaches at once within an integrative framework
like Novamente or OpenCog, but if I have to pick a single focus it will
be virtual embodiment, with the other aspects as helpers...

-- Ben G





---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


[agi] RISE OF ROBOETHICS: Grappling with the implications of an artificially intelligent culture.

2008-02-19 Thread Mark Waser
http://www.seedmagazine.com/news/2007/07/rise_of_roboethics.php

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mark Waser

All of these rules have exception or implicit condition. If you
treat them as default rules, you run into multiple extension
problem, which has no domain-independent solution in binary logic ---
read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
details.


Pei,

   Do you have a PDF version?  Thanks!

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-31 Thread Mark Waser

Mark, my point is that while in the past evolution did the choosing,
now it's *we* who decide,


But the *we* who is deciding was formed by evolution.  Why do you do 
*anything*?  I've heard that there are four basic goals that drive every 
decision:  safety, feeling good, looking good, and being right.  Do you make 
any decisions that aren't decided by one or more of those four?



Another question is that we might like to
change ourselves, to get rid of most of this baggage, but it doesn't
follow that in the limit we will become pure survival maximizers.


Actually, what must follow is that at the limit what will predominate are 
the survival and reproduction maximizers.



By the way, if we want to survive, but we change ourselves to this
end, *what* is it that we want to keep alive?


Exactly!  What are our goals?  I don't think that you're going to get (or 
even want) anything close to a common consensus about specific goals -- so 
what you want is the maximization of individual goals (freedom) without 
going contrary to the survival of society (the destruction of which would 
lead to reduced freedom).



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=92147931-4eb559


Re: [agi] Request for Help

2008-01-30 Thread Mark Waser

I know that you can do stuff like this with Microsoft's new SilverLight.

For example, http://www.devx.com/dotnet/Article/36544
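
If the goal is simply to capture the time and space coordinates of each stroke, 
a minimal local sketch (Python/tkinter, purely illustrative and obviously not a 
web-based replacement for the site) shows the kind of data that needs recording:

    import time
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=600, height=400, bg="white")
    canvas.pack()
    samples = []   # (timestamp, x, y) for every sampled point of the drawing

    def record(event):
        samples.append((time.time(), event.x, event.y))
        canvas.create_oval(event.x, event.y, event.x + 1, event.y + 1)

    canvas.bind("<B1-Motion>", record)   # sample while the left button is held down
    root.mainloop()
    # after the window closes, `samples` holds the full time/space record to save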



- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, January 30, 2008 12:44 PM
Subject: [agi] Request for Help


Remember that mathematical test/ experiment you all hated - the one where 
you doodle on this site -


http://www.imagination3.com

and it records your actual stream of drawing in time as well as the 
finished product?


Well, a reasonably eminent scientist liked it, and wants to set it up. But 
he's having problems contacting the site- they don't reply to emails 


there's no way of accessing the time and
space coordinates of the drawings, unless there's a
possibility to read them from the Flash animation.

Can you suggest either a) a way round this or b) an alternative 
site/method to faithfully record the time and space coordinates of the 
drawings?


Cheeky request perhaps, but I would be v. grateful for any help.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=91667763-eb70c4


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-30 Thread Mark Waser

Nature doesn't even have survival as its 'goal', what matters is only
survival in the past, not in the future, yet you start to describe
strategies for future survival.


Goal was in quotes for a reason.  In the future, the same tautological 
forces will apply.  Evolution will favor those things that are adapted to 
survive/thrive.



Nature is
stupid, so design choices left to it are biased towards keeping much
of the historical baggage and resorting to unsystematic hacks, and as
a result its products are not simply optimal survivors.


Yes, everything is co-evolving quickly enough that evolution cannot keep up and 
produce optimum solutions.  But are you stupid enough to try to fight 
nature and the laws of probability and physics?  We can improve on nature --  
but you're never going to successfully go in a totally opposite direction.



When we are talking about choice of conditions for humans to live in
(rules of society, morality), we are trying to understand what *we*
would like to choose.


What we like (including what we like to choose) was formed by evolution. 
Some of what we like has been overtaken by events and is no longer 
pro-survival but *everything* that we like has served a pro-survival purpose 
in the past (survival meaning survival of offspring and the species -- so 
altruism *IS* an evolutionarily-created like as well).



Better
understanding of *human* nature can help us to estimate how we will
appreciate various conditions.


Not if we can program our own appreciations.  And what do we want our AGI to 
appreciate?



humans are very complicated things,
with a large burden of reinforcers that push us in different
directions based on idiosyncratic criteria.


Very true.  So don't you want a simpler, clearer, non-contradictory set of 
reinforcers for your AGI (one that will lead to both it and you being happy)?



These reinforcers used to
line up to support survival in the past, but so what?


So . . . I'd like to create reinforcers to support my survival and freedom 
and that of the descendants of the human race.  Don't you?




- Original Message - 
From: Vladimir Nesov [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, January 30, 2008 2:14 PM
Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide



On Jan 29, 2008 10:28 PM, Mark Waser [EMAIL PROTECTED] wrote:


Ethics only becomes snarled when one is unwilling to decide/declare what 
the

goal of life is.

Extrapolated Volition comes down to a homunculus depending upon the
definition of wiser or saner.

Evolution has decided what the goal of life is . . . . but most are
unwilling to accept it (in part because most do not see it as anything 
other

than nature, red in tooth and claw).

The goal in life is simply continuation and continuity.  Evolution goes
for continuation of species -- which has an immediate subgoal of
continuation of individuals (and sex and protection of offspring).
Continuation of individuals is best served by the construction of and
continuation of society.

If we're smart, we should decide that the goal of ethics is the 
continuation
of society with an immediate subgoal of the will of individuals (for a 
large

variety of reasons -- but the most obvious and easily justified is to
prevent the defection of said individuals).

If an AGI is considered a willed individual and a member of society and 
has

the same ethics, life will be much easier and there will be a lot less
chance of the Eliezer-scenario.  There is no enslavement of 
Jupiter-brains
and no elimination/suppression of lesser individuals in favor of 
greater
individuals -- just a realization that society must promote individuals 
and

individuals must promote society.

Oh, and contrary to popular belief -- ethics has absolutely nothing to do
with pleasure or pain and *any* ethics based on such are doomed to 
failure.

Pleasure is evolution's reward to us when we do something that promotes
evolution's goals.  Pain is evolution's punishment when we do 
something
(or have something done) that is contrary to survival, etc.  And while 
both

can be subverted so that they don't properly indicate guidance -- in
reality, that is all that they are -- guideposts towards other goals.
Pleasure is a BAD goal because it can interfere with other goals. 
Avoidance

of pain (or infliction of pain) is only a good goal in that it furthers
other goals.


Mark,

Nature doesn't even have survival as its 'goal', what matters is only
survival in the past, not in the future, yet you start to describe
strategies for future survival. Yes, survival in the future is one
likely accidental property of structures that survived in the past,
but so are other properties of specific living organisms. Nature is
stupid, so design choices left to it are biased towards keeping much
of the historical baggage and resorting to unsystematic hacks, and as
a result its products are not simply optimal survivors.

When we are talking about choice of conditions

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread Mark Waser
Ethics only becomes snarled when one is unwilling to decide/declare what the 
goal of life is.

Extrapolated Volition comes down to a homunculus depending upon the definition 
of wiser or saner.

Evolution has decided what the goal of life is . . . . but most are unwilling 
to accept it (in part because most do not see it as anything other than 
nature, red in tooth and claw).

The goal in life is simply continuation and continuity.  Evolution goes for 
continuation of species -- which has an immediate subgoal of continuation of 
individuals (and sex and protection of offspring).  Continuation of individuals 
is best served by the construction of and continuation of society.

If we're smart, we should decide that the goal of ethics is the continuation of 
society with an immediate subgoal of the will of individuals (for a large 
variety of reasons -- but the most obvious and easily justified is to prevent 
the defection of said individuals).

If an AGI is considered a willed individual and a member of society and has the 
same ethics, life will be much easier and there will be a lot less chance of 
the Eliezer-scenario.  There is no enslavement of Jupiter-brains and no 
elimination/suppression of lesser individuals in favor of greater 
individuals -- just a realization that society must promote individuals and 
individuals must promote society.

Oh, and contrary to popular belief -- ethics has absolutely nothing to do with 
pleasure or pain and *any* ethics based on such are doomed to failure.  
Pleasure is evolution's reward to us when we do something that promotes 
evolution's goals.  Pain is evolution's punishment when we do something (or 
have something done) that is contrary to survival, etc.  And while both can be 
subverted so that they don't properly indicate guidance -- in reality, that is 
all that they are -- guideposts towards other goals.  Pleasure is a BAD goal 
because it can interfere with other goals.  Avoidance of pain (or infliction of 
pain) is only a good goal in that it furthers other goals.

Suicide is contrary to continuation.  Euthanasia is recognition that, in some 
cases, there is no meaningful continuation.

Life extension should be optional at least as long as there are resource 
constraints.
  - Original Message - 
  From: Joshua Fox 
  To: agi@v2.listbox.com 
  Sent: Tuesday, January 29, 2008 12:46 PM
  Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide


  When transhumanists talk about indefinite life extension, they often take 
care to say it's optional to forestall one common objection. 

  Yet I feel that most suicides we see should have been prevented -- that the 
person should have been taken into custody and treated if possible, even 
against their will, 

  How to reconcile a strong belief in free choice with the belief that suicide 
is most often the result of insanity, not the victim's true free will? 

  Eliezer's Extrapolated Volition suggests that we take into account what the 
suicidal person would have wanted if they were wiser or saner. That is one 
solution, though it does not quite satisfy me.

  This is a basic ethical question, which takes on more relevance in the 
context of transhumanism, life extension, and F/AGI theory.

  Joshua



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=91171134-d7a01a

Re: [agi] Study hints that fruit flies have free will

2008-01-23 Thread Mark Waser

http://www.msnbc.msn.com/id/18684016/?GT1=9951


I don't get it.  It says that flies move in accordance with a
non-flat distribution instead of a flat distribution.  That has
nothing to do with free will.  The writers assume that non-flat
distribution = free will.


You need to read more fully and not get stuck the second you hit a hot 
button user-defined term like free will . . . .


1. Brembs and his colleagues reasoned that if fruit flies (Drosophila 
melanogaster) were simply reactive robots entirely determined by their 
environment, in completely featureless rooms they should move completely 
randomly.
COMMENT:  I would have used the term IMMEDIATE environment -- and for the 
record, I believe that we *are* deterministic but incalculable so it's most 
rational to behave as if we're free-willed


2. A plethora of increasingly sophisticated computer analyses revealed that 
the way the flies turned back and forth over time was far from random.
3.  Instead, there appeared to be a function in the fly brain which evolved 
to generate spontaneous variations in the behavior, Sugihara said -OR - 
These strategies in flies appear to arise spontaneously and do not result 
from outside cues
4.  If even flies show the capacity for spontaneity, can we really assume 
it is missing in humans? he asked.
5. The epitomes of indeterministic behavior are humans, who are very 
flexible. Flies are somewhere in between the extremes with a large set of 
very inflexible and rather predictable behaviors, with spontaneity only 
coming to the fore if either you look very closely or provide the animals 
with a situation where the spontaneity is easy to study - that is, when you 
remove all the stimuli which could trigger a response. 
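
In code terms, the flat vs. non-flat claim is just a test for departure from a 
uniform distribution.  A toy sketch with invented counts (the actual study used 
far more sophisticated analyses than this):

    from scipy.stats import chisquare

    # hypothetical counts of turns falling into four angle bins
    observed = [412, 388, 97, 103]
    stat, p = chisquare(observed)   # null hypothesis: all bins equally likely
    print(stat, p)                  # a tiny p-value means the behavior is not "flat"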



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=7198-d7391c


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Mark Waser

For example, hunger is an emotion, but the
desire for money to buy food is not


Hunger is a sensation, not an emotion.

The sensation is unpleasant and you have a hard-coded goal to get rid of it.

Further, desires tread pretty close to the line of emotions if not actually 
crossing over . . . .



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=88198905-b31742


Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Mark Waser
   One of the things that I quickly discovered when first working on my 
convert it all to Basic English project is that the simplest words 
(prepositions and the simplest verbs in particular) are the biggest problem 
because they have so many different (though obscurely related) meanings (not 
to mention being part of one-off phrases).


   Some of the problems are resolved by stronger typing (as in variable 
typing).  For example, On-SituationLocalized is clearly meant to deal with 
two physical objects and shouldn't apply to neuroscience.  But *that* 
sentence is easy after you realize that neuroscience really can only have 
the type of field-of-study or topic.  The on becomes obvious then --  
provided that you have that many variable types and rules for prepositions 
(not an easy thing).


   And how would a young child or foreigner interpret on the Washington 
Monument or shit list?  Both are physical objects and a book *could* be 
resting on them.  It's just that there are more likely alternatives.  On has 
a specific meaning (a-member-of-this-ordered-group) for lists and another 
specific meaning (about-this-topic) for books, movies, and other 
subject-matter-describers.  The special on overrides the generic on --  
provided that you have even more variable types and special rules for 
prepositions.


   And on fire is a simple override phrase -- provided that you're 
keeping track of even more specific instances . . . .
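
A minimal sketch of the type-driven override scheme described above -- every 
type name and sense label here is invented for illustration, and a real system 
would need vastly more of both:

    # hypothetical object types (in practice these come from a large ontology)
    TYPE_OF = {
        "the table": "physical-object",
        "neuroscience": "field-of-study",
        "the Washington Monument": "physical-object",
        "my shit list": "list",
        "fire": "state",
    }

    # the special senses of "on" override the generic, physical-support sense
    SPECIAL_SENSES = {
        "list": "a-member-of-this-ordered-group",
        "field-of-study": "about-this-topic",
        "state": "in-this-state",     # covers the "on fire" override phrase
    }

    def sense_of_on(obj):
        obj_type = TYPE_OF.get(obj, "physical-object")
        return SPECIAL_SENSES.get(obj_type, "resting-upon (generic localized situation)")

    for obj in TYPE_OF:
        print("on", obj, "->", sense_of_on(obj))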


- - - - -

Ben, your question is *very* disingenuous.  There is a tremendous amount of 
domain/real-world knowledge that is absolutely required to parse your 
sentences.  Do you have any better way of approaching the problem?


I've been putting a lot of thought and work into trying to build and 
maintain precedence of knowledge structures with respect to disambiguating 
(and overriding incorrect) parsing . . . . and don't believe that it's going 
to be possible without a severe amount of knowledge . . . .


What do you think?

- Original Message - 
From: Benjamin Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, January 09, 2008 3:51 PM
Subject: Re: [agi] Incremental Fluid Construction Grammar released



What is the semantics of

   ?on-situation-localized-14 rdf:type texai:On-SituationLocalized

??

How would your system parse

The book is on neuroscience

or

The book is on the Washington Monument

or

The book is on fire

or

The book is on my shit list

???

thx
Ben

On Jan 9, 2008 3:37 PM, Stephen Reed [EMAIL PROTECTED] wrote:


Ben,

The use case utterance the block is on the table yields the following 
RDF
statements (i.e. subject, predicate, object triples).  A yet-to-be 
written
discourse mechanism will resolve ?obj-4 to the known book and ?obj-18 to 
the

known table.

Parsed statements about the book:
?obj-4 rdf:type cyc:BookCopy
 ?obj-4 rdf:type texai:FCGClauseSubject
 ?obj-4 rdf:type texai:PreviouslyIntroducedThingInThisDiscourse
?obj-4 texai:fcgDiscourseRole texai:external
?obj-4 texai:fcgStatus texai:SingleObject

Parsed statements about the table:
 ?obj-18 rdf:type cyc:Table
?obj-18 rdf:type texai:PreviouslyIntroducedThingInThisDiscourse
?obj-18 texai:fcgDiscourseRole texai:external
 ?obj-18 texai:fcgStatus texai:SingleObject

Parsed statements about the book on the table:
 ?on-situation-localized-14 rdf:type texai:On-SituationLocalized
?on-situation-localized-14 texai:aboveObject ?obj-4
?on-situation-localized-14 texai:belowObject ?obj-18

Parsed statements about that the book is on the table ( the fact that
?on-situation-localized-14 is a proper sub-situation of
?situation-localized-10 should also be here):
?situation-localized-10 rdf:type cyc:Situation-Localized
 ?situation-localized-10 texai:situationHappeningOnDate cyc:Now
?situation-localized-10 cyc:situationConstituents  ?obj-4

Cyc parsing is based upon semantic translation templates, which are 
stitched

together with procedural code following the determination of constituent
structure by a plug-in parser such as the CMU link-grammar.  My method
differs in that: (1) I want to get the entire and precise semantics from 
the
utterance. (2) FCG is reversible, the same construction rules not only 
parse

input text, but can be applied in reverse to re-create the original
utterance from its semantics.  Cyc has a separate system for NL 
generation.
(3) Cyc hand-codes their semantic translation templates and I have in 
mind
building an expert English dialog system using minimal hand-coded 
Controlled
English, for the purpose of interacting with a multitude of non-linguists 
to

extend its linguistic knowledge.

-Steve

Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860




- Original Message 
From: Benjamin Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, January 9, 2008 1:45:34 PM
Subject: Re: [agi] 

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Mark Waser
In our rule encoding approach, we will need about 5000 mapping rules to 
map
syntactic parses of commonsense sentences into term logic relationships. 
Our

inference engine will then generalize these into hundreds of thousands
or millions
of specialized rules.


How would your rules handle the on cases that you gave?  What do your 
rules match on (specific words, word types, object types, something else)? 
Are your rules all at the same level or are they tiered somehow?


My gut instinct is that 5000 rules is way, way high for both the most 
general and second-tiers and that you can do exception-based learning after 
those two tiers.


We have about 1000 rules in place now and will soon stop coding them and 
start

experimenting with using inference to generalize and apply them.  If
this goes well,
then we'll put in the work to encode the rest of the rules (which is
not very fun work,
as you might imagine).


Can you give about ten examples of rules?  (That would answer a lot of my 
questions above)


Where did you get the rules?  Did you hand-code them or get them from 
somewhere?
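
To make the question concrete, here is one hypothetical example of what a 
dependency-parse-to-term-logic mapping rule might look like -- the relation 
names and output form are invented for illustration and are certainly not 
Novamente's actual rule format:

    # Map a simple subject-verb-object dependency parse into a term-logic
    # style relationship.  `deps` maps a dependency label to a (head, dependent)
    # pair, e.g. the parse of "the dog chases the cat".
    def map_subject_verb_object(deps):
        if "nsubj" in deps and "dobj" in deps:
            verb, subject = deps["nsubj"]
            _, obj = deps["dobj"]
            return ("Evaluation", verb, (subject, obj))
        return None

    parse = {"nsubj": ("chase", "dog"), "dobj": ("chase", "cat")}
    print(map_subject_verb_object(parse))   # ('Evaluation', 'chase', ('dog', 'cat'))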





- Original Message - 
From: Benjamin Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, January 09, 2008 5:04 PM
Subject: Re: [agi] Incremental Fluid Construction Grammar released



And how would a young child or foreigner interpret on the Washington
Monument or shit list?  Both are physical objects and a book *could* be
resting on them.


Sorry, my shit list is purely mental in nature ;-) ... at the moment, I 
maintain

a task list but not a shit list... maybe I need to get better organized!!!


Ben, your question is *very* disingenuous.


Who, **me** ???


There is a tremendous amount of
domain/real-world knowledge that is absolutely required to parse your
sentences.  Do you have any better way of approaching the problem?

I've been putting a lot of thought and work into trying to build and
maintain precedence of knowledge structures with respect to 
disambiguating
(and overriding incorrect) parsing . . . . and don't believe that it's 
going

to be possible without a severe amount of knowledge . . . .

What do you think?


OK...

Let's assume one is working within the scope of an AI system that
includes an NLP parser,
a logical knowledge representation system, and needs some intelligent way 
to map

the output of the latter into the former.

Then, in this context, there are three approaches, which may be tried
alone or in combination:

1)
Hand-code rules to map the output of the parser into a much less
ambiguous logical format

2)
Use statistical learning across a huge corpus of text to somehow infer
these rules
[I did not ever flesh out this approach as it seemed implausible, but
I have to recognize
its theoretical possibility]

3)
Use **embodied** learning, so that the system can statistically infer
the rules from the
combination of parse-trees with logical relationships that it observes
to describe
situations it sees
[This is the best approach in principle, but may require years and
years of embodied
interaction for a system to learn.]


Obviously, Cycorp has taken Approach 1, with only modest success.  But
I think part of
the reason they have not been more successful is a combination of a
bad choice of
parser with a bad choice of knowledge representation.  They use a
phrase structure
grammar parser and predicate logic, whereas I believe if one uses a 
dependency

grammar parser and term logic, the process becomes a lot easier.  So
far as I can tell,
in texai you are replicating Cyc's choices in this regard (phrase
structure grammar +
predicate logic).

In Novamente, we are aiming at a combination of the 3 approaches.

We are encoding a bunch of rules, but we don't ever expect to get anywhere 
near

complete coverage with them, and we have mechanisms (some designed, some
already in place) that can
generalize the rule base to learn new, probabilistic rules, based on
statistical corpus
analysis and based on embodied experience.

In our rule encoding approach, we will need about 5000 mapping rules to 
map
syntactic parses of commonsense sentences into term logic relationships. 
Our

inference engine will then generalize these into hundreds of thousands
or millions
of specialized rules.

This is current work, research in progress.

We have about 1000 rules in place now and will soon stop coding them and 
start

experimenting with using inference to generalize and apply them.  If
this goes well,
then we'll put in the work to encode the rest of the rules (which is
not very fun work,
as you might imagine).

Emotionally and philosophically, I am more drawn to approach 3 (embodied
learning), but pragmatically, I have reluctantly concluded that the
hybrid approach
we're currently taking has the greatest odds of rapid success.

In the longer term, we intend to throw out the standalone grammar parser 
we're
using and have syntax parsing done via our core AI processing -- but we're 
now

using a 

Re: [agi] AGI and Deity

2007-12-11 Thread Mark Waser

Hey Ben,

   Any chance of instituting some sort of moderation on this list?


- Original Message - 
From: Ed Porter [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, December 11, 2007 10:18 AM
Subject: RE: [agi] AGI and Deity


Mike:

MIKE TINTNER# Science's autistic, emotionally deprived, insanely
rational nature in front of the supernatural (if it exists), and indeed the
whole world,  needs analysing just as much as the overemotional,
underrational fantasies of the religious about the supernatural.

ED PORTER# I like the metaphor of Science as Autistic.   It
emphasizes the emotional disconnect from human feeling science can have.

I feel that rationality has no purpose other than to serve human values and
feelings (once truly intelligent machines arrive on the scene that statement
might have to be modified).  As I think I have said on this list before,
without values to guide them, the chance you would think anything that has
anything to do with maintaining your own existence approaches zero as a
limit, because of the possible combinatorial explosion of possible thoughts
if they were not constrained by emotional guidance.

Therefore, from the human standpoint, the main use of science should be to
help serve our physical, emotional, and intellectual needs.

I agree that science will increasingly encroach upon many areas previously
considered the realm of the philosopher and priest.  It has been doing so
since at least the age of enlightenment, and it is continuing to do so, with
advances in cosmology, theoretical physics, bioscience, brain science, and
AGI.

With the latter two we should pretty much understand the human soul within
several decades.

I hope we have the wisdom to use that new knowledge well.

Ed Porter


-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Monday, December 10, 2007 11:07 PM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity

Ed:I would add that there probably is something to the phenomenon that John
Rose is referring to, i.e., that faith seems to be valuable to many people.
Perhaps it is somewhat like owning a lottery ticket before its drawing.  It
can offer desired hope, even if the hope might be unrealistic.  But whatever
you think of the odds, it is relatively clear that religion does makes some
people's lives seem more meaningful to them.

You realise of course that what you're seeing on this and the
singularitarian board, over and over, is basically the same old religious
fantasies - the same yearning for the Second Coming - the same old search
for salvation - only in a modern, postreligious form?

Everyone has the same basic questions about the nature of the world -
everyone finds their own answers - which always in every case involve a
mixture of faith and scepticism in the face of enormous mystery.

The business of science in the face of these questions is not to ignore
them, and try and psychoanalyse away people's attempts at answers, as a
priori weird or linked to a deficiency of this or that faculty.

The business of science is to start dealing with these questions - to find
out if there is a God and what the hell that entails, - and not leave it
up to philosophy.

Science's autistic, emotionally deprived, insanely rational nature in front
of the supernatural (if it exists), and indeed the whole world,  needs
analysing just as much as the overemotional, underrational fantasies of the
religious about the supernatural.

Science has fled from the question of God just as it has fled from the
soul - in plain parlance, the self deliberating all the time in you and
me, producing these posts and all our dialogues - only that self, for sure,
exists and there is no excuse for science's refusal to study it in action,
whatsoever.

The religious 'see' too much; science is too heavily blinkered. But the walls

between them - between their metaphysical worldviews - are starting to
crumble..



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=74570767-eca623


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Mark Waser
THE KEY POINT I WAS TRYING TO GET ACROSS WAS ABOUT NOT HAVING TO 
EXPLICITLY DEAL WITH 500K TUPLES


And I asked -- Do you believe that this is some sort of huge conceptual 
breakthrough?




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73155533-eaf7a5


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Mark Waser

Ed,

   Get a grip.  Try to write with complete words in complete sentences 
(unless discreted means a combination of excreted and discredited -- which 
works for me :-).


   I'm not coming back for a second swing.  I'm still pursuing the first 
one.  You just aren't oriented well enough to realize it.


Now you are implicitly attacking me for implying it is new to think you 
could deal with vectors in some sort of compressed representation.


   Nope.  First of all, compressed representation is *absolutely* the wrong 
term for what you're looking for.


   Second, I actually am still trying to figure out what *you* think you 
ARE gushing about.  (And my quest is not helped by such gems as all though 
[sic] it may not be new to you, it seems to be new to some)


   Why don't you just answer my question?  Do you believe that this is some 
sort of huge conceptual breakthrough?  For NLP (as you were initially 
pushing) or just for some nice computational tricks?


   I'll also note that you've severely changed the focus of this away from 
the NLP that you were initially raving about as such quality work -- and 
while I'll agree that kernel mapping is a very elegant tool -- Collin's work 
is emphatically *not* what I would call a shining example of it (I mean, 
*look* at his results -- they're terrible).  Yet you were touting it because 
of your 500,000 dimension fantasies and your belief that it's good NLP 
work.


   So, in small words -- and not whining about an attack -- what precisely 
are you saying?



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73247008-aecb7f


[agi] Re: Hacker intelligence level

2007-12-06 Thread Mark Waser
With regard to your questions below, If you actually took the time to 
read
my prior responses, I think you will see I have substantially answered 
them.


No, Ed.  I don't see that at all.  All I see is you refusing to answer them 
even when I repeatedly ask them.  That's why I asked them again.


All I've seen is you ranting on about how insulted you are and *many* 
divergences from your initial statements.  Why don't you just answer the 
questions instead of whining about how unfairly you're being treated.


Hint:  Answers are most effective when you directly address the question 
*before* rampaging down apparently unrelated tangents.





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73301324-a28b1f


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-05 Thread Mark Waser
Interesting.  Since I am interested in parsing, I read Collin's paper.  It's a 
solid piece of work (though with the stated error percentages, I don't believe 
that it really proves anything worthwhile at all) -- but your 
over-interpretations of it are ridiculous.

You claim that It is actually showing that you can do something roughly 
equivalent to growing neural gas (GNG) in a space with something approaching 
500,000 dimensions, but you can do it without normally having to deal with more 
than a few of those dimensions at one time.  Collins makes no claims that even 
remotely resembles this.  He *is* taking a deconstructionist approach (which 
Richard and many others would argue vehemently with) -- but that is virtually 
the entirety of the overlap between his paper and your claims.  Where do you 
get all this crap about 500,000 dimensions, for example?

You also make statements that are explicitly contradicted in the paper.  For 
example, you say But there really seem to be no reason why there should be any 
limit to the dimensionality of the space in which the Collin's algorithm works, 
because it does not use an explicit vector representation while his paper 
quite clearly states Each tree is represented by an n dimensional vector where 
the i'th component counts the number of occurences of the i'th tree fragment. 
(A mistake I believe you made because you didn't understand the preceding 
sentence -- or, more critically, *any* of the math).

Are all your claims on this list this far from reality if one pursues them? 


- Original Message - 
From: Ed Porter [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, December 04, 2007 10:52 PM
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]


The particular NL parser paper in question, Collins's Convolution Kernels
for Natural Language
(http://l2r.cs.uiuc.edu/~danr/Teaching/CS598-05/Papers/Collins-kernels.pdf)
is actually saying something quite important that extends way beyond parsers
and is highly applicable to AGI in general.  

It is actually showing that you can do something roughly equivalent to
growing neural gas (GNG) in a space with something approaching 500,000
dimensions, but you can do it without normally having to deal with more than
a few of those dimensions at one time.  GNG is an algorithm I learned about
from reading Peter Voss that allows one to learn how to efficiently
represent a distribution in a relatively high dimensional space in a totally
unsupervised manner.  But there really seem to be no reason why there should
be any limit to the dimensionality of the space in which the Collin's
algorithm works, because it does not use an explicit vector representation,
nor, if I recollect correctly, a Euclidian distance metric, but rather a
similarity metric which is generally much more appropriate for matching in
very high dimensional spaces.

But what he is growing are not just points representing where data has
occurred in a high dimensional space, but sets of points that define
hyperplanes for defining the boundaries between classes.  My recollection is
that this system learns automatically from both labeled data (instances of
correct parse trees) and randomly generated deviations from those instances.
His particular algorithm matches tree structures, but with modification it
would seem to be extendable to matching arbitrary nets.  Other versions of
it could be made to operate, like GNG, in an unsupervised manner.

If you stop and think about what this is saying and generalize from it, it
provides an important possible component in an AGI tool kit. What it shows
is not limited to parsing, but it would seem possibly applicable to
virtually any hierarchical or networked representation, including nets of
semantic web RDF triples, and semantic nets, and predicate logic
expressions.  At first glance it appears it would even be applicable to
kinkier net matching algorithms, such as an Augmented transition network
(ATN) matching.

So if one reads this paper with a mind to not only what it specifically
shows, but to how what it shows could be expanded, this paper says
something very important.  That is, that one can represent, learn, and
classify things in very high dimensional spaces -- such as 10^1
dimensional spaces -- and do it efficiently provided the part of the space
being represented is sufficiently sparsely connected.
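
To make this concrete, a stripped-down sketch of the trick being described: 
similarity is computed by counting shared structure directly, so the huge 
implicit fragment space is never materialized.  (This is not Collins's actual 
algorithm, which recursively counts all common subtrees; the toy version below 
counts only single-level productions.)

    from collections import Counter

    def productions(tree):
        # tree is a (label, children) pair; yield parent -> children productions
        label, children = tree
        if children:
            yield (label, tuple(child[0] for child in children))
            for child in children:
                yield from productions(child)

    def kernel(t1, t2):
        # dot product in the implicit fragment space, touching only the
        # fragments that actually occur in these two trees
        c1, c2 = Counter(productions(t1)), Counter(productions(t2))
        return sum(n * c2[frag] for frag, n in c1.items() if frag in c2)

    S1 = ("S", [("NP", [("the", []), ("dog", [])]), ("VP", [("barks", [])])])
    S2 = ("S", [("NP", [("the", []), ("cat", [])]), ("VP", [("sleeps", [])])])
    print(kernel(S1, S2))   # 1: only the S -> NP VP production is shared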

I had already assumed this, before reading this paper, but the paper was
valuable to me because it provided a mathematically rigorous support for my
prior models, and helped me better understand the mathematical foundations
of my own prior intuitive thinking.  

It means that systems like Novamente can deal in very high dimensional
spaces relatively efficiently. It does not mean that all processes that can
be performed in such spaces will be computationally cheap (for example,
combinatorial searches), but it means that many of them, such as GNG like
recording of 

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-05 Thread Mark Waser

ED PORTER= The 500K dimensions were mentioned several times in a
lecture Collins gave at MIT about his parse.  This was probably 5 years ago
so I am not 100% sure the number was 500K, but I am about 90% sure that was
the number used, and 100% sure the number was well over 100K.

OK.  I'll bite.  So what do *you* believe that these dimensions are?  Words? 
Word pairs?  Entire sentences?  Different trees? 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72410952-199e0d


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-05 Thread Mark Waser
Dimensions is an awfully odd word for that since dimensions are normally 
assumed to be orthogonal.


- Original Message - 
From: Ed Porter [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, December 05, 2007 5:08 PM
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]


Mark,

The paper said:

Conceptually we begin by enumerating all tree fragments that occur in the
training data 1,...,n.

Those are the dimensions, all of the parse tree fragments in the training
data.  And as I pointed out in an email I just sent to Richard, although
usually only a small set of them are involved in any one match between two
parse trees, they can all be used over set of many such matches.

So the full dimensionality is actually there, it is just that only a
particular subset of them are being used at any one time.  And when the
system is waiting for the next tree to match, it is potentially capable of
matching it against any of its dimensions.

Ed Porter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 05, 2007 3:07 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

ED PORTER= The 500K dimensions were mentioned several times in a
lecture Collins gave at MIT about his parse.  This was probably 5 years ago
so I am not 100% sure the number was 500K, but I am about 90% sure that was
the number used, and 100% sure the number was well over 100K.

OK.  I'll bite.  So what do *you* believe that these dimensions are?  Words?

Word pairs?  Entire sentences?  Different trees?


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72664919-0f4727


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-05 Thread Mark Waser

<HeavySarcasm>Wow.  Is that what dot products are?</HeavySarcasm>

You're confusing all sorts of related concepts with a really garbled 
vocabulary.


Let's do this with some concrete 10-D geometry . . . . Vector A runs from 
(0,0,0,0,0,0,0,0,0,0) to (1, 1, 0,0,0,0,0,0,0,0).  Vector B runs from 
(0,0,0,0,0,0,0,0,0,0) to (1, 0, 1,0,0,0,0,0,0,0).


Clearly A and B share the first dimension.  Do you believe that they share 
the second and the third dimension?  Do you believe that dropping out the 
fourth through tenth dimension in all calculations is some sort of huge 
conceptual breakthrough?


The two vectors are similar in the first dimension (indeed, in all but the 
second and third) but otherwise very distant from each other (i.e. they are 
*NOT* similar).  Do you believe that these vectors are similar or distant?
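
To pin the terminology down, the standard calculations on the two vectors above 
are simply:

    import math

    A = (1, 1, 0, 0, 0, 0, 0, 0, 0, 0)
    B = (1, 0, 1, 0, 0, 0, 0, 0, 0, 0)

    dot = sum(a * b for a, b in zip(A, B))       # 1: only the shared first component contributes
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    cosine = dot / (norm(A) * norm(B))           # 0.5
    euclidean = norm(tuple(a - b for a, b in zip(A, B)))   # sqrt(2), about 1.41
    print(dot, cosine, euclidean)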


THE ALLEGATION BELOW THAT I MISUNDERSTOOD THE MATH BECAUSE I THOUGHT 
COLLIN'S PARSER DIDN'T HAVE TO DEAL WITH A VECTOR HAVING THE FULL 
DIMENSIONALITY OF THE SPACE BEING DEALT WITH IS CLEARLY FALSE.


My allegation was that you misunderstood the math because you claimed that 
Collin's paper does not use an explicit vector representation while 
Collin's statements and the math itself makes it quite clear that they are 
dealing with a vector representation scheme.  I'm now guessing that you're 
claiming that you intended explicit to mean full dimensionality. 
Whatever.  Don't invent your own meanings for words and you'll be 
misunderstood less often (unless you continue to drop out key words like in 
the capitalized sentence above).



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=72452073-36665f


Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Mark Waser

I don't see that the hardest part of agi is NLP i/o.


I didn't say that i/o was the hardest part of agi.  Truly understanding NLP 
is agi-complete though.  And please, get off this kick of just faking 
something up and thinking that because you can create a shallow toy example 
that holds for ten seconds that you've answered *anything*.  That's the 
*narrow ai* approach.


- Original Message - 
From: Linas Vepstas [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 13, 2007 4:01 PM
Subject: Re: [agi] What best evidence for fast AI?



On Mon, Nov 12, 2007 at 08:44:58PM -0500, Mark Waser wrote:


So perhaps the AGI question is, what is the difference between
a know-it-all mechano-librarian, and a sentient being?

I wasn't assuming a mechano-librarian.  I was assuming a human that could
(and might be trained to) do some initial translation of the question and
some final rephrasing of the answer.


I'm surprised by your answer.

I don't see that the hardest part of agi is NLP i/o. To put it into
perspective: one can fake up some trivial NLP i/o now, and with a bit of
effort, one can improve significantly on that.  Sure, it would be
child-like conversation, and the system would be incapable of learning
new idioms, expressions, etc., but I don't see that you'd need a human
to translate the question into some formal reasoning-engine language.

The hard part of NLP is being able to read complex texts, whether
Alexander Pope or Karl Marx; but a basic NLP i/o interface stapled to
a reasoning engine doesn't need to really do that, or at least not well.
Yet, these two stapled together would qualify as a mechano-librarian
for me.

To me, the hard part is still the reasoning engine itself, and the
pruning, and the tailoring of responses to the topic at hand.

So let me rephrase the question: If one had
1) A reasoning engine that could provide short yet appropriate responses
  to questions,
2) A simple NLP interface to the reasoning engine

would that be AGI?  I imagine most folks would say no, so let me throw
in:

3) System can learn new NLP idioms, so that it can eventually come to
understand those sentences and paragraphs that make Karl Marx so hard to
read.

With this enhanced reading ability, it could then presumably become a
know-it-all ultra-question-answerer.

Would that be AGI? Or is there yet more? Well, of course there's more:
one expects creativity, aesthetics, ethics. But we know just about nothing
about that.

This is the thing that I think is relevant to Robin Hanson's original
question.  I think we can build 1+2 is short order, and maybe 3 in a
while longer. But the result of 1+2+3 will almost surely be an
idiot-savant: knows everything about horses, and can talk about them
at length, but, like a pedantic lecturer, the droning will put you
asleep.  So is there more to AGI, and exactly how do way start laying
hands on that?

--linas






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64683060-82d4be


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
 I was using the term episodic in the standard sense of episodic memory 
 from cog psych, in which episodic memory is differentiated from procedural 
 and declarative memory. 

I understood that.  The problem is that procedural and declarative memory is 
*not* as simple as is often purported.  If you can't rapidly realize when and 
why your previously reliable procedural and declarative stuff is suddenly no 
longer valid . . . . 

 The main point is, we have specialized indices to make memory access 
 efficient for knowledge involving (certain and uncertain) logical 
 relationships, associations, spatial and temporal relationships, and 
 procedures

Indices are important but compactness of data storage is also important as are 
ways to have what is effectively indexed derivation of knowledge.  Obviously my 
knowledge of Novamente is becoming dated but, unless you opened some really new 
areas, there is a lot of work that could be done in this area that you're not 
focusing on.  (Note: Please don't be silly and infer that by compactness of data 
storage that I mean that disk size is important -- we're long past those days.  
Assume that I mean the computational costs of manipulating data that is not 
stored in an efficient manner).

 Research project 1.  How do you find analogies between neural networks, 
 enzyme kinetics and the formation of galaxies (hint:  think Boltzmann)? 
 That is a question most humans couldn't answer, and is only suitable for 
 testing an AGI that is already very advanced.

In your opinion.  I don't believe that an AGI is going to get far at all 
without having at least a partial handle on this.

 Research project 2.  How do you recognize and package up all of the data 
 that represents horse and expose only that which is useful at a given time? 
 That is covered quite adequately in the NM design, IMO.  We are actually 
 doing a commercial project right now (w/ delivery in 2008) that will showcase 
 our ability to solve this problem.  Details are confidential unfortunately, 
 due to the customer's preference. 

I'm afraid that I have to snort at this.  Either you didn't understand the full 
implications of what I'm saying or you're snowing me (ok, I'll give you a .1% 
chance of having it).

 That is what is called map encapsulation in the Novamente design.

Yes, yes, I saw it in the design . . . . a miracle happens here.
Which, granted, is better than not realizing that the area exists . . . . but 
still . . . .

 I do not think the design has any huge gaps.  But much further RD work is 
 required, and I agree there may be a simpler approach; but I am not 
 convinced that you have one. 

These are two *very* different issues (with a really spurious statement tacked 
onto the end).

Of course you don't think the design has any gaps -- you would have filled them 
if you saw them.

There is no reason to be convinced that *I* have a simpler approach because I 
haven't put one forth.  I may or may not be working on one :-) but if I am, 
I certainly haven't got to the point where I feel that I can defend it. :-)

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 11:45 AM
  Subject: Re: [agi] What best evidence for fast AI?





  On Nov 12, 2007 11:36 AM, Mark Waser [EMAIL PROTECTED] wrote:

 I am extremely confident of Novamente's memory design regarding 
declarative and procedural knowledge.  Tweaking the system for optimal 
representation of episodic knowledge may require some more thought. 

Granted -- the memory design is very generic and will handle virtually 
anything.  The question is -- is it in a reasonably optimal form for retrieval 
and other operations (i.e. optimal enough that it won't end up being impossibly 
slow once you get a realistic amount of data/knowledge).  Your caveat on 
episodic knowledge proves very informative since *all* knowledge is effectively 
episodic.

  I was using the term episodic in the standard sense of episodic memory 
from cog psych, in which episodic memory is differentiated from procedural and 
declarative memory. 

  The main point is, we have specialized indices to make memory access 
efficient for knowledge involving (certain and uncertain) logical 
relationships, associations, spatial and temporal relationships, and procedures 
... but we haven't put much work into creating specialized indices to make 
access of stories/narratives efficient.  Though this may not wind up being 
necessary since the AtomTable now has the capability to create new indices on 
the fly, based on the statistics of the data contained therein. 

   

 I have no idea what you mean by scale invariance of knowledge, and only a 
weak understanding of what you mean by ways of determining and exploiting 
encapsulation and modularity of knowledge without killing useful leaky 
abstractions.

Research project 1.  How do you find analogies between neural networks, 
enzyme

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
 I'm more interested at this stage in analogies like
 -- btw seeking food and seeking understanding
 -- between getting an object out of a hole and getting an object out of a 
 pocket, or a guarded room
 Why would one need to introduce advanced scientific concepts to an 
 early-stage AGI?  I don't get it... 

:-)  A bit disingenuous there, Ben.  Obviously you start with the simple and 
move on to the complex (though I suspect that the first analogy you cite is 
rather more complex than you might think) -- but to take too simplistic an 
approach that might not grow is just the narrow AI approach in other clothing.

 Hmmm  I guess I didn't understand what you meant.
 What I thought you meant was, if a user asked I'm a small farmer in New 
 Zealand.  Tell me about horses then the system would be able to disburse 
 its relevant knowledge about horses, filtering out the irrelevant stuff.   
 What did you mean, exactly?

That's a good simple, starting case.  But how do you decide how much knowledge 
to disburse?  How do you know what is irrelevant?  How much do your answers 
differ between a small farmer in New Zealand, a rodeo rider in the West, a 
veterinarian in Pennsylvania, a child in Washington, a bio-mechanician studying 
gait?  And horse is actually a *really* simple concept since it refers to a 
very specific type of physical object.  

Besides, are you really claiming that you'll be able to do this next year?  
Sorry, but that is just plain, unadulterated BS.  If you can do that, you are 
light-years further along than . . . . 

 There are specific algorithms proposed, in the NM book, for doing map 
 encapsulation.  You may not believe they will work for the task, but still, 
 it's not fair to use the label a miracle happens here to describe a 
 description of specific algorithms applied to a specific data structure.  

I guess that the jury will have to be out until you publicize the algorithms.  
What I've seen in the past are too small, too simple, and won't scale to what 
is likely to be necessary.

 I think it has medium-sized gaps, not huge ones.  I have not filled all 
 these gaps because of lack of time -- implementing stuff needs to be 
 balanced with finalizing design details of stuff that won't be implemented 
 for a while anyway due to limited resources. 

:-)  You have more than enough design experience to know that medium-size gaps 
can frequently turn huge once you turn your attention to them.  Who are you 
snowing here?



  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 12:55 PM
  Subject: Re: [agi] What best evidence for fast AI?



  Hi,
   

 Research project 1.  How do you find analogies between neural networks, 
enzyme kinetics and the formation of galaxies (hint:  think Boltzmann)? 
 That is a question most humans couldn't answer, and is only suitable for 
testing an AGI that is already very advanced.

  
In your opinion.  I don't believe that an AGI is going to get far at all 
without having at least a partial handle on this.

  I'm more interested at this stage in analogies like

  -- btw seeking food and seeking understanding
  -- between getting an object out of a hole and getting an object out of a 
pocket, or a guarded room

  etc.

  Why would one need to introduce advanced scientific concepts to an 
early-stage AGI?  I don't get it... 

   

 Research project 2.  How do you recognize and package up all of the data 
that represents horse and expose only that which is useful at a given time?  
 That is covered quite adequately in the NM design, IMO.  We are actually 
doing a commercial project right now (w/ delivery in 2008) that will showcase 
our ability to solve this problem.  Details are confidential unfortunately, due 
to the customer's preference. 

I'm afraid that I have to snort at this.  Either you didn't understand the 
full implications of what I'm saying or you're snowing me (ok, I'll give you a 
.1% chance of having it).

  Hmmm  I guess I didn't understand what you meant.

  What I thought you meant was, if a user asked "I'm a small farmer in New 
Zealand.  Tell me about horses" then the system would be able to disburse its 
relevant knowledge about horses, filtering out the irrelevant stuff.   

  What did you mean, exactly?

   


 That is what is called map encapsulation in the Novamente design.

Yes, yes, I saw it in the design . . . . "a miracle happens here."
Which, granted, is better than not realizing that the area exists . . . . but 
still . . . .

  There are specific algorithms proposed, in the NM book, for doing map 
encapsulation.  You may not believe they will work for the task, but still, 
it's not fair to use the label "a miracle happens here" to describe a 
description of specific algorithms applied to a specific data structure.  

   

 I do not think the design has any huge gaps.  But much further R&D work 
is required, and 

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
I don't know at what point you'll be blocked from answering by confidentiality 
concerns but I'll ask a few questions you hopefully can answer like:
  1. How is the information input and stored in your system (i.e., is it more 
like simple formal assertions with a restricted syntax and/or language, or like 
English language)?
  2. How constrained is the information content (and is the content even 
relevant)?
  3. To what degree does the system understand the information (i.e., how 
much can it manipulate it)?
  4. Who tags the information as relevant to particular users?
  5. How constrained are the tags?
  6. What is the output (is it just a regurgitation of appropriately tagged 
information pieces)?
I have to assume that you're taking the easy way out on most of the questions 
(like formal assertions, restricted syntax, any language but the system does 
not understand or manipulate the language so content is irrelevant, users apply 
tags, fairly simple regurgitation) if you think 2008 is anywhere close to 
reasonable.

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 1:59 PM
  Subject: Re: [agi] What best evidence for fast AI?




  That's a good, simple starting case.  But how do you decide how much 
knowledge to disburse?  How do you know what is irrelevant?  How much do your 
answers differ between a small farmer in New Zealand, a rodeo rider in the 
West, a veterinarian in Pennsylvania, a child in Washington, a bio-mechanician 
studying gait?  And "horse" is actually a *really* simple concept since it refers 
to a very specific type of physical object.  

  Besides, are you really claiming that you'll be able to do this next 
year?  Sorry, but that is just plain, unadulterated BS.  If you can do that, 
you are light-years further along than . . . .


  Actually, this example is just not that hard.  I think we may be able to do 
this during 2008, if funding for that particular NM application project holds 
up (it's currently confirmed only thru May-June) 

  ben


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
Hmm.  Interesting.  This e-mail (and the last) lead me to guess that you seem 
to have made some major, quantum leaps in NLP.  Is that correct?  You sure 
haven't been talking about it . . . . 
  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 2:57 PM
  Subject: Re: [agi] What best evidence for fast AI?





  On Nov 12, 2007 2:51 PM, Mark Waser [EMAIL PROTECTED] wrote:

I don't know at what point you'll be blocked from answering by 
confidentiality concerns


  I can't say much more than I will do in this email, due to customer 
confidentiality concerns
   
but I'll ask a few questions you hopefully can answer like:
  1. How is the information input and stored in your system (i.e., is it 
more like simple formal assertions with a restricted syntax and/or language, or 
like English language)?

  English input as well as other forms of input; NM Atom storage

  Obviously English language comprehension will not be complete; and 
proprietary (not Novamente's) UI devices will be used to work around this. 

  2. How constrained is the information content (and is the content even 
relevant)?

  We'll work with a particular (relatively simple) text source for starters, 
with a view toward later generalization

  3. To what degree does the system understand the information (i.e., how 
much can it manipulate it)?

  That degree will increase as we bring more and more of PLN into the system.  
Initially, it'll just be simple PLN first-order term logic inference; then 
we'll extend it. 
   
  4. Who tags the information as relevant to particular users?

  User feedback 

  5. How constrained are the tags?

  They're English 

  6. What is the output

  That's confidential, but it's very expressive and flexible

  (is it just a regurgitation of appropriately tagged information pieces)?


  No

  -- Ben


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
I'm going to try to put some words into Richard's mouth here since I'm 
curious to see how close I am . . . . (while radically changing the words).

I think that Richard is not arguing about the possibility of Novamente-type 
solutions as much as he is arguing about the predictability of *very* flexible 
Novamente-type solutions as they grow larger and more complex (and the 
difficulty in getting it to not instantaneously crash-and-burn).  Indeed, I 
have heard a very faint shadow of Richard's concerns in your statements about 
the tuning problems that you had with BioMind.

Novamente looks, at times, like the very first step in an inductive proof . 
. . . except that it is in a chaotic environment rather than the nice orderly 
number system.  Pieces of the system clearly sail in calm, friendly waters but 
hooking them all up in a wild environment is another story entirely (again, 
look at your own BioMind stories).

I've got many doubts because I don't think that you have a handle on the 
order -- the big-O -- of many of the operations you are proposing (why I harp 
on scalability, modularity, etc.).  Richard is going further and saying that 
the predictability of even some of your smaller/simpler operations is 
impossible (although, as he has pointed out, many of them could be constrained 
by attractors, etc. if you were so inclined to view/treat your design that 
way).  

Personally, I believe that intelligence is *not* complex -- despite the 
fact that it does (probably necessarily) rest on top of complex pieces -- 
because those pieces' interactions are constrained enough that intelligence is 
stable.  I think that this could be built into a Novamente-type design *but* 
you have to be attempting to do so (and I think that I could convince Richard 
of that -- or else, I'd learn a lot by trying  :-).

Richard's main point is that he believes that the search space of viable 
parameters and operations for Novamente is small enough that you're not going 
to hit it by accident -- and Novamente's very flexibility is what compounds the 
problem.  Remember, life exists on the boundary between order and chaos.  Too 
much flexibility (unconstrained chaos) is as deadly as too much structure.

I think that I see both sides of the issue and how Novamente could be 
altered/enhanced to make Richard happy (since it's almost universally flexible) 
-- but doing so would also impose many constraints that I think that you would 
be unwilling to live with since I'm not sure that you would see the point.  I 
don't think that you're ever going to be able to change his view that the 
current direction of Novamente is -- pick one:  a) a needle in an infinite 
haystack or b) too fragile to succeed -- particularly since I'm pretty sure 
that you couldn't convince me without making some serious additions to Novamente.

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 3:49 PM
  Subject: Re: [agi] What best evidence for fast AI?



  To be honest, Richard, I do wonder whether a sufficiently in-depth 
conversation
  about AGI between us would result in you changing your views about the CSP
  problem in a way that would accept the possibility of Novamente-type 
solutions. 

  But, this conversation as I'm envisioning it would take dozens of hours, and 
would
  require you to first spend 100+ hours studying detailed NM materials, so this 
seems
  unlikely to happen in the near future. 

  -- Ben


  On Nov 12, 2007 3:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Benjamin Goertzel wrote:

 Ed --

 Just a quick comment: Mark actually read a bunch of the proprietary,
 NDA-required Novamente documents and looked at some source code (3 years 
 ago, so a lot of progress has happened since then).  Richard didn't, so
 he doesn't have the same basis of knowledge to form detailed comments on
 NM, that Mark does.


This is true, but not important to my line of argument, since of course 
I believe that a problem exists (CSP), which we have discussed on a
number of occasions, and your position is not that you have some
proprietary, unknown-to-me solution to the problem, but rather that you
do not really think there is a problem. 

Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
 You seem to be thinking about Webmind, an AI company I was involved in 
 during the late 1990's; as opposed to Biomind

Yes, sorry, I'm laboring under a horrible cold and my brain is not all here.

 The big-O order is almost always irrelevant.  Most algorithms useful for 
 cognition are exponential-time worst-case complexity.  What matters is 
 average-case complexity over the probability distribution of problem 
 instances actually observed in the real world.  And yeah, this is very hard 
 to estimate mathematically. 

Well . . . . big-O order certainly does matter for things like lookups and 
activation where we're not talking about heuristic shortcuts and average 
complexity.  But I would certainly accept your correction for other operations 
like finding modularity and analogies -- except we don't have good heuristic 
shortcuts, etc. for them -- yet.
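
(To make the worst-case vs. average-case distinction concrete, here's a toy 
sketch I'll throw in -- purely illustrative, nothing to do with any NM 
internals: naive first-pivot quicksort is Theta(n^2) on an already-sorted 
input, but measured over a distribution of random inputs its cost grows like 
n log n.)

import random, sys

sys.setrecursionlimit(20000)

def quicksort(xs, count):
    # First-element pivot: pathological on already-sorted input.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    count[0] += len(rest)      # partition cost: every remaining element gets examined
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, count) + [pivot] + quicksort(right, count)

def cost(xs):
    c = [0]
    quicksort(xs, c)
    return c[0]

n = 2000
worst = cost(list(range(n)))                                               # sorted input: ~n^2/2
average = sum(cost(random.sample(range(n), n)) for _ in range(20)) / 20.0  # random inputs: ~n log n
print("n=%d  worst-case cost=%d  average-case cost=%.0f" % (n, worst, average))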

  Saying a system is universally capable hardly means anything, and 
 isn't really worth saying. 

Nope.  Saying it usually forestalls a lot of silly objections.  That's really 
worthwhile.  :-)

 I believe Richard's complaints are of a quite different character than 
 yours.  

And I might be projecting . . . . :-)  which is why I figured I'd run this 
out there and see how he reacted.  :-)

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 5:14 PM
  Subject: Re: [agi] What best evidence for fast AI?





  On Nov 12, 2007 5:02 PM, Mark Waser [EMAIL PROTECTED] wrote:

I'm going to try to put some words into Richard's mouth here since I'm 
curious to see how close I am . . . . (while radically changing the words).

I think that Richard is not arguing about the possibility of 
Novamente-type solutions as much as he is arguing about the predictability of 
*very* flexible Novamente-type solutions as they grow larger and more complex 
(and the difficulty in getting it to not instantaneously crash-and-burn).  
Indeed, I have heard a very faint shadow of Richard's concerns in your 
statements about the tuning problems that you had with BioMind.

  You seem to be thinking about Webmind, an AI company I was involved in during 
the late 1990's; as opposed to Biomind, a bioinformatics company in which I am 
currently involved, and which is doing pretty well. 

  The Webmind AI Engine was an order of magnitude more complex than the 
Novamente Cognition Engine; and this is intentional.  Many aspects of the NM 
design were specifically originated to avoid problems that we found with the 
Webmind system.  




I've got many doubts because I don't think that you have a handle on 
the order -- the big-O -- of many of the operations you are proposing (why I 
harp on scalability, modularity, etc.).

  The big-O order is almost always irrelevant.  Most algorithms useful for 
cognition are exponential-time worst-case complexity.  What matters is 
average-case complexity over the probability distribution of problem instances 
actually observed in the real world.  And yeah, this is very hard to estimate 
mathematically. 

   
  Richard is going further and saying that the predictability of even some 
of your smaller/simpler operations is impossible (although, as he has pointed 
out, many of them could be constrained by attractors, etc. if you were so 
inclined to view/treat your design that way).  

  Oh, I thought **I** was the one who pointed that out.
   

Personally, I believe that intelligence is *not* complex -- despite the 
fact that it does (probably necessarily) rest on top of complex pieces -- 
because those pieces' interactions are constrained enough that intelligence is 
stable.  I think that this could be built into a Novamente-type design *but* 
you have to be attempting to do so (and I think that I could convince Richard 
of that -- or else, I'd learn a lot by trying  :-).

  That is part of the plan, but we have a bunch of work of implementing/tuning 
components first.
   

Richard's main point is that he believes that the search space of 
viable parameters and operations for Novamente is small enough that you're not 
going to hit it by accident -- and Novamente's very flexibility is what 
compounds the problem.  

  The Webmind system had this problem.  Novamente is carefully designed not to. 
 Of course, I can't prove that it won't, though. 
   
Remember, life exists on the boundary between order and chaos.  Too much 
flexibility (unconstrained chaos) is as deadly as too much structure.

I think that I see both sides of the issue and how Novamente could be 
altered/enhanced to make Richard happy (since it's almost universally flexible) 
-- 


  Novamente is universally capable but so are a lot of way simpler, 
pragmatically useless systems.  Saying a system is universally capable hardly 
means anything, and isn't really worth saying.   The question as you know 
is what can a system do given a pragmatic amount

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
:-)  I don't think I've ever known you to intentionally spout BS . . . .

A well-architected statistical-NLP-based information-retrieval system would 
require an identification (probably an exemplar) of the cluster(s) that matched 
each of the portfolios and would return a mixed conglomerate of data rather 
than any sort of coherent explanation (other than the explanations present in 
the data cluster).  The WASNLPBIRS certainly wouldn't be able to condense the 
data to a nicely readable format or perform any other real operations on the 
information.
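
(For concreteness, the kind of WASNLPBIRS behavior I'm describing is roughly 
the following toy sketch -- hypothetical code of mine, not anybody's product: 
TF-IDF plus k-means, with the "answer" being whatever documents happen to sit 
in the best-matching cluster, unedited and uncondensed.)

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "feeding and pasture rotation for farm horses",
    "rodeo bronc riding techniques and horse handling",
    "equine veterinary care: colic, lameness, vaccination schedules",
    "biomechanics of the equine gait cycle and hoof loading",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

query = "I'm a small farmer in New Zealand.  Tell me about horses."
label = km.predict(vec.transform([query]))[0]

# The "answer" is just the raw contents of the matched cluster -- no
# condensation, no real filtering, no understanding of the text.
for doc, lbl in zip(docs, km.labels_):
    if lbl == label:
        print(doc)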

What I meant by *really* sophisticated should have been indicated by the 
difficult end of my six point list -- which is fundamentally equivalent (in my 
opinion) to a full-up AGI since it basically requires full understanding of 
English and a WASNLPBIRS feeding it.

The problem with the WASNLPBIRS and what Linas suggested is that they look 
*really* cool at first -- and then you realize how little they actually do.

The real problem with your claim of "if a user asked 'I'm a small farmer in 
New Zealand.  Tell me about horses' then the system would be able to disburse 
its relevant knowledge about horses, filtering out the irrelevant stuff" is the 
last five words.  How do you intend to do *that*?  (And notice that what I 
kicked Linas for was precisely his "It will happily include irrelevant facts.")

I've had to deal with users who have bought large, expensive conceptual 
clustering systems who were *VERY* unhappy once they realized what they had 
actually purchased.  I would be *real* careful if I were you about what you're 
promising because there are already a good number of companies that, a decade 
ago, had already perfected the best that that approach could offer -- and then 
died on the rope of user dissatisfaction.

Mark

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 7:10 PM
  Subject: Re: [agi] What best evidence for fast AI?





  On Nov 12, 2007 6:56 PM, Mark Waser [EMAIL PROTECTED] wrote:

 It will happily include irrelevant facts


Which immediately makes it *not* relevant to my point.

Please read my e-mails more carefully before you hop on with ignorant 
flames.  The latter part of your e-mail clearly makes my point -- anyone
claiming to be able to do a sophisticated version of this in the next year
is spouting plain, unadulterated BS.

  Mark, I really wasn't spouting BS.  I imagine what you are conceiving 
  when you use the label of sophisticated is more sophisticated than what
  I am hoping to launch within the next year.  

  Being sophisticated is not a precise criterion.

  Your example of giving information about horses in a contextual way 

  **
  How do you know what is irrelevant?  How much do your answers differ between 
a small farmer in New Zealand, a rodeo rider in the West, a veterinarian in 
Pennsylvania, a child in Washington, a bio-mechanician studying gait?

  **

  is in my judgment not beyond what a well-architected statistical-NLP-based 
information-retrieval system could deliver.  I don't think you even need a 
Novamente system to do this.So is this all you mean by sophisticated?  I 
don't really understand what you intend... seriously... 

  -- Ben


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
   There is a big difference between being able to fake something for a 
brief period of time and being able to do it correctly.  All of your 
phrasing clearly indicates that *you* believe that your systems can only 
fake it for a brief period of time, not do it correctly.  Why are you 
belaboring the point?  I don't get it since your own points seem to deny 
your own argument.


   And even if you can do it for small, toy conversations where you 
recognize the exact same assertions -- that is nowhere close to what you're 
going to need in the real world.



When the average librarian is able to answer veterinary questions to
the satisfaction of a licensing board conducting an oral examination,
then we will be living in the era of agi, won't we?


Depends upon your definition of AGI.  That could be just a really kick-ass 
decision support system -- and I would actually bet a pretty fair chunk of 
money that 15 years *is* entirely within reason for the scenario you 
suggest.


- Original Message - 
From: Linas Vepstas [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 12, 2007 7:28 PM
Subject: Re: [agi] What best evidence for fast AI?



On Mon, Nov 12, 2007 at 06:56:51PM -0500, Mark Waser wrote:

It will happily include irrelevant facts

Which immediately makes it *not* relevant to my point.

Please read my e-mails more carefully before you hop on with ignorant
flames.


I read your emails, and, mixed in with some insightful and highly
relevant commentary, there are also many flames. Repeatedly so.

Relevance is not an easy problem, nor is it obviously a hard one.
To provide relevant answers, one must have a model of who is asking.
So, in building a computer chat system, one must first deduce things
about the speaker.  This is something I've been trying to do.

Again, with my toy system, I've gotten so far as to be able to
let the speaker proclaim that "this is boring", and have the
system remember, so that, for future conversations, the boring
assertions are not revisited.

Now, "boring" is a tricky thing: "a horse is genus Equus" may be boring
for a child, and yet interesting to young adults. So the problem of
relevant answers to questions is more about creating a model of the
person one is conversing with than it is about NLP processing,
representation of knowledge, etc. Conversations are contextual;
modelling that context is what is interesting to me.
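
(A minimal sketch, purely my own illustration and not Linas's actual code, of
what such a per-speaker context model amounts to:)

class SpeakerModel:
    def __init__(self):
        self.boring = set()              # assertions this speaker has called boring

    def mark_boring(self, assertion):
        self.boring.add(assertion.lower())

    def filter(self, assertions):
        return [a for a in assertions if a.lower() not in self.boring]

models = {}                              # one model per speaker, kept across conversations

def reply(speaker, candidate_assertions):
    model = models.setdefault(speaker, SpeakerModel())
    return model.filter(candidate_assertions)

facts = ["a horse is genus Equus", "horses need their hooves trimmed regularly"]
models.setdefault("child", SpeakerModel()).mark_boring("a horse is genus Equus")
print(reply("child", facts))             # the taxonomy fact never comes back
print(reply("vet", facts))               # the vet never complained, so both facts survive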

The result of hooking up a reasoning system, a knowledge base like
OpenCyc or SUMO, an NLP parser, and a homebrew contextualizer is
not AGI.  It's little more than a son-et-lumiere show.  But it
already does the things that you are claiming to be unadulterated BS.


And regarding
If and when you find a human who is capable of having conversations
about horses with small farmers, rodeo riders, vets, children
and biomechanicians, I'll bet that they won't have a clue about
galaxy formation or enzyme reactions. Don't set the bar above
human capabilities.

Go meet your average librarian.  They won't know the information off the
top of their heads (yet), but they'll certainly be able to get it to 
you -- 


Go meet Google. Or Wikipedia. Cheeses.


and the average librarian fifteen years from now *will* be able to.


When the average librarian is able to answer veterinary questions to
the satisfaction of a licensing board conducting an oral examination,
then we will be living in the era of agi, won't we?

--linas



Re: [agi] Upper Ontologies

2007-11-10 Thread Mark Waser
 I would bet that merging two KB's obtained by mining natural
 language would work a lot better than merging two KB's 
 like Cyc and SUMO that were artificially created by humans.

I think that this phrasing confuses the issue.  It is the structure of the 
final KR scheme, not how the initial KBs were created/obtained, that determines 
the difficulty of merging the KBs.  A natural-language-mined KB's KR scheme 
is going to be forced to be *much* more flexible in its representation than 
Cyc's and SUMO's are (thus, your intuition is correct but only because neither Cyc 
nor SUMO has a sufficiently flexible KR scheme).
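
(A toy sketch of what I mean, under the assumption of a maximally flexible
scheme -- bare subject/relation/object triples: the mechanical part of merging
collapses to a union plus an alignment map, and all of the real difficulty
moves into the equivalence judgments themselves.)

kb_a = {("horse", "isa", "mammal"), ("horse", "has_part", "hoof")}
kb_b = {("equine", "isa", "animal"), ("equine", "lives_on", "farm")}

alignment = {"equine": "horse"}          # term-equivalence judgments (the genuinely hard part)

def normalize(triple):
    return tuple(alignment.get(term, term) for term in triple)

merged = {normalize(t) for t in kb_a | kb_b}
for triple in sorted(merged):
    print(triple)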

 The problem seems to be that we don't, explicitly and declaratively,
 know how our internal, intuitive knowledge bases are structured.

My personal opinion is that we *don't* have only one way in which our internal, 
intuitive knowledge bases are structured.  I think (and hope) that we have a 
reasonably small number of KR meta-structures that we load fairly simply linked 
data/knowledge into and that the building of KR structures is both difficult 
and the source of our intelligence.

 IMO, the whole approach of building explicit knowledge-bases 
 like Cyc and SUMO is a dead-end.

But this is the really, really important point.  Building an explicit knowledge 
base is like handing the AGI a fish.  We can hand the AGI the beginnings of a 
structure (a pole and string) and teach it to learn -- but we also have to be 
really, really careful that we don't cripple it by a poor initial choice of 
what we give it (which is what I think Cyc and SUMO do -- and I don't mean to 
imply that those were bad projects since they were excellent stepping stones 
and valuable sources of *data*).




Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Mark Waser
 my inclination has been to see progress as very slow toward an 
 explicitly-coded AI, and so to guess that the whole brain emulation approach 
 would succeed first 

Why are you not considering a seed/learning AGI? 
  - Original Message - 
  From: Robin Hanson 
  To: agi@v2.listbox.com 
  Sent: Saturday, November 10, 2007 6:41 AM
  Subject: [agi] What best evidence for fast AI?


  I've been invited to write an article for an upcoming special issue of IEEE 
Spectrum on Singularity, which in this context means rapid and large social 
change from human-level or higher artificial intelligence.   I may be among the 
most enthusiastic authors in that issue, but even I am somewhat skeptical.   
Specifically, after ten years as an AI researcher, my inclination has been to 
see progress as very slow toward an explicitly-coded AI, and so to guess that 
the whole brain emulation approach would succeed first if, as it seems, that 
approach becomes feasible within the next century.  

  But I want to try to make sure I've heard the best arguments on the other 
side, and my impression was that many people here expect more rapid AI 
progress.   So I am here to ask: where are the best analyses arguing the case 
for rapid (non-emulation) AI progress?   I am less interested in the arguments 
that convince you personally than arguments that can or should convince a wide 
academic audience. 

  [I also posted this same question to the sl4 list.] 

  Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu 
  Research Associate, Future of Humanity Institute at Oxford University
  Associate Professor of Economics, George Mason University
  MSN 1D3, Carow Hall, Fairfax VA 22030-
  703-993-2326  FAX: 703-993-2323




Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Mark Waser

Looks like they were just simulating eight million neurons with up to
6.3k synapses each. How's that necessarily a mouse simulation, anyway?


It really isn't because the individual neuron behavior is so *vastly* 
simplified.  It is, however, a necessary first step and likely to teach us 
*a lot*.


- Original Message - 
From: Bryan Bishop [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, November 10, 2007 11:22 AM
Subject: Re: [agi] What best evidence for fast AI?



On Saturday 10 November 2007 10:07, Kaj Sotala wrote:

http://news.bbc.co.uk/2/hi/technology/6600965.stm



The researchers say that although the simulation shared some
similarities with a mouse's mental make-up in terms of nerves and
connections it lacked the structures seen in real mice brains.


Looks like they were just simulating eight million neurons with up to
6.3k synapses each. How's that necessarily a mouse simulation, anyway?

- Bryan



[agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
If I see garbage being peddled as if it were science, I will call it 
garbage.


Amen.  The political correctness of forgiving people for espousing total 
BS is the primary cause of many egregious things going on for far, *far* too 
long.





Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
 True enough, but Granger's work is NOT total BS... just partial BS ;-)

In which case, clearly praise the good stuff but just as clearly (or even more 
so) oppose the BS.

You and Richard seem to be in vehement agreement.  Granger knows his neurology 
and probably his neuroscience (depending upon where you draw the line) but his 
link of neuroscience to cognitive science is not only wildly speculative but 
clearly amateurish and lacking the necessary solid grounding in the latter 
field.

I'm not quite sure why you always hammer Richard for pointing this out.  He 
does have his agenda to stamp out bad science (which I endorse fully) but he 
does tend to praise the good science (even if more faintly) as well.  Your 
hammering of Richard often appears as a strawman to me since I know that you 
know that Richard doesn't dismiss these people's good neurology -- just their 
bad cog sci.  And I really am not seeing any difference between what I 
understand as your opinion and what I understand as his. 


  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 22, 2007 8:00 AM
  Subject: Re: [agi] Re: Bogus Neuroscience





  On 10/22/07, Mark Waser [EMAIL PROTECTED] wrote:
 If I see garbage being peddled as if it were science, I will call it
 garbage.

Amen.  The political correctness of forgiving people for espousing total
BS is the primary cause of many egregious things going on for far, *far* 
too 
long.

  True enough, but Granger's work is NOT total BS... just partial BS ;-)

  I felt his discussion of the details by which the basal ganglia may serve as a
  reward mechanism added something to prior papers I'd read on the topic.  
Admittedly 
  our knowledge of this neural reward mechanism is still way too crude to yield 
any
  insights regarding AGI, but, it's still interesting.

  On the other hand, his simplified thalamocortical core and matrix 
algorithms are 
  way too simplified for me.  They seem to sidestep the whole issue of complex
  nonlinear dynamics and the formation of strange attractors or transients.  
I.e., even
  if the basic idea he has is right, in which thalamocortical loops mediate the 
formation 
  of semantically meaningful activation-patterns in the cortex, his 
characterization of
  these patterns in terms of categories and subcategories and so forth can at 
best
  only be applicable to a small subset of examples of cortical function  
The difference 
  between the simplified thalamocortical algorithms he presents and the real 
ones seems 
  to me to be the nonlinear dynamics that give rise to intelligence ;-) .. 

  And this is what
  leads me to be extremely skeptical of his speculative treatment of linguistic 
grammar 
  learning within his framework.  I think he's looking for grammatical 
structure to be
  represented at the wrong level in his network... at the level of individual 
activation-patterns
  rather than at the level of the emergent structure of activation-patterns 
 Because his 
  simplified version of the thalamocortical loop is too simplified to give rise 
to nonlinear
  dynamics that display subtly patterned emergent structures...

  -- Ben G


Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
 -- I think Granger's cog-sci speculations, while oversimplified and surely 
 wrong in parts, contain important hints at the truth (and in my prior email 
 I tried to indicate how) 
 -- Richard OTOH, seems to consider Granger's cog-sci speculations total 
 garbage
 This is a significant difference of opinion, no?

As you've just stated it, yes.  However, rereading your previous e-mail, I 
still don't really see where you agree with his cog sci (as opposed to what I 
would still call neurobiology which I did see you agreeing with).


  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 22, 2007 10:26 AM
  Subject: Re: [agi] Re: Bogus Neuroscience


  And I really am not seeing any difference between what I understand as 
your opinion and what I understand as his. 


  Sorry if I seemed to be hammering on anyone, it wasn't my intention. 
(Yesterday was a sort of bad day for me for non-science-related reasons, so my 
tone of e-voice was likely off a bit ...) 

  I think the difference between my and Richard's views on Granger would likely 
be best summarized by saying that

  -- I think Granger's cog-sci speculations, while oversimplified and surely 
wrong in parts, contain important hints at the truth (and in my prior email I 
tried to indicate how) 

  -- Richard OTOH, seems to consider Granger's cog-sci speculations total 
garbage

  This is a significant difference of opinion, no?

  -- Ben


Re: [agi] Re: Bogus Neuroscience [...]

2007-10-22 Thread Mark Waser

Arthur,

   There was no censorship.  We all saw that message go by.  We all just 
ignored it.  Take a hint.


- Original Message - 
From: A. T. Murray [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 22, 2007 10:35 AM
Subject: [agi] Re: Bogus Neuroscience [...]



On Oct 21, 2007, at 6:47 PM, J. Andrew Rogers wrote:


On Oct 21, 2007, at 6:37 PM, Richard Loosemore wrote:

It took me at least five years of struggle to get to the point
where I could start to have the confidence to call a spade a spade



It still looks like a shovel to me.


In what looks not like a spade or a shovel but like
CENSORSHIP -- my message below was in response to

http://www.mail-archive.com/agi@v2.listbox.com/msg07943.html

Date: Fri, 19 Oct 2007 06:18:27 -0700 (PDT)
From: [EMAIL PROTECTED] (A. T. Murray)
Subject: Re: [agi] More public awarenesss that AGI is coming fast
To: agi@v2.listbox.com
Reply-To: agi@v2.listbox.com


J. Andrew Rogers wrote:

[...]
There is enough VC money for everyone with
a decent business model. Honestly, most AGI
is not a decent business model.


Neither is philosophy, but philosophy prevails.


Otherwise Mentifex would be smothered in cash.
It might even keep him quiet.


I don't need cash beyond the exigencies of daily living.
Right now I'm going to respond off the top of my head
with the rather promising latest news from Mentifex AI.

ATM/Mentifex here fleshed out the initial Wikipedia stub of
http://en.wikipedia.org/wiki/Modularity_of_Mind
several years ago. M*ntifex-bashers came in and
rewrote it, but traces of my text linger still.
(And I have personally met Jerry Fodor years ago.)

Then for several years I kept the Modularity link
on dozens of mind-module webpages as a point of
departure into Wikipedia. Hordes of Wikipedia
editors worked over and over again on the
Modularity-of-mind article.

At the start of September 2007 I decided to
flesh out the Wikipedia connection for each
Mentifex AI mind-module webpage by expanding
from that single link to a cluster of all
discernible Wikipedia articles closely related
to the topic of my roughly forty mind-modules.

http://www.advogato.org/article/946.html
is where on 11 September 2007 I posted
Wikipedia-based Open-Source Artificial Intelligence
-- because I realized that I could piggyback
my independent-scholar AI project on Wikipedia
as a growing source of explanatory AI material.

http://tech.groups.yahoo.com/group/aima-talk/message/784
is where I suggested (and I quote a few lines):

It would be nice if future editions of the AIMA textbook
were to include some treatment of the various independent
AI projects that are out there (on the fringe?) nowadays.


Thereupon another discussant provided a link to
http://textbookrevolution.org -- a site which
immediately accepted my submission of
http://mind.sourceforge.net/aisteps.html as
Artificial Intelligence Wikipedia-based Free Textbook.

So fortuitously, serendipitously the whole direction
of Mentifex AI changed direction in mere weeks.

http://AIMind-I.com is an example not only of
a separate AI spawned from Mentifex AI, but also
of why I do not need massive inputs of VC cash,
when other AI devotees just as dedicated as I am
will launch their own mentifex-class AI Mind
project using their own personal resources.

Now hear this. The Site Meter logs show that
interested parties from all over the world
are looking at the Mentifex offer of a free
AI textbook based on AI4U + updates + Wikipedia.

Mentifex AI is in it for the long haul now.
Not only here in America, but especially
overseas and in third world countries
there are AI-hungry programmers with
unlimited AGI ambition but scant cash.
They are the beneficiaries of Mentifex AI.

Arthur
--
http://mentifex.virtualentity.com



Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
 So, one way to summarize my view of the paper is
 -- The neuroscience part of Granger's paper tells how these 
 library-functions may be implemented in the brain
 -- The cog-sci part consists partly of
 - a) the hypothesis that these library-functions are available to 
 cognitive programs 
 - b) some specifics about how these library-functions may be used within 
 cognitive programs
 I find Granger's idea a) quite appealing, but his ideas in category b) 
 fairly uncompelling and oversimplified. 
 Whereas according to my understanding, Richard seems not to share my belief 
 in the strong potential meaningfulness of a)

*Everyone* is looking for how library functions may be implemented precisely 
because they would then *assume* that those library functions are 
available to thought -- thus a) is not at all unique to Granger and I would 
even go so far as to not call it a hypothesis.

And I'm also pretty sure that *everyone* believes in the strong potential 
meaningfulness of having library functions.

Granger has nothing new in cog sci except some of the particular details in b) 
-- which you find uncompelling and oversimplified -- so what is the cog sci 
that you find of value?


Re: [agi] Re: Bogus Neuroscience

2007-10-22 Thread Mark Waser
I think we've beaten this horse to death . . . . :-)

 However, he has some interesting ideas about the connections between 
 cognitive primitives and neurological structures/dynamics.  Connections of 
 this nature are IMO cog sci rather than just neurosci.  At least, that 
 is consistent with how the term cog sci was used when I was a cog sci 
 professor, back in the day... 

I think that most neurosci practitioners would argue with you.

 (To a significant extent, Granger's articles just summarize ideas from 
 other, more fine-grained papers.  This does not make them worthless, 
 however.  In bio-related fields I find summary-type articles quite valuable, 
 since the original research articles are often highly focused on 
 experimental procedures.  It's good to understand what the experimental 
 procedures are but I don't always want to read about them in depth, 
 sometimes I just want to understand the results and their likely 
 interpretations...) 

So what I'm getting is that what you're finding useful is his summary of the 
neurosci papers (the other, more fine-grained papers).


Re: [agi] Human memory and number of synapses

2007-10-20 Thread Mark Waser
What I'd like is a mathematical estimate of why a graphic or image (or any 
form of physical map) is a vastly - if not infinitely - more efficient way 
to store information than a set of symbols.


Yo troll . . . . a graphic or image is *not* a vastly - if not infinitely - 
more efficient way to store information than a set of symbols.


Take your own example of an outline map -- *none* of the current high-end 
mapping services (MapQuest, Google Maps, etc) store their maps as images. 
They *all* store them symbolically in a relational database because that is 
*the* most efficient way to store them so that they can produce all of the 
different scale maps and directions that they provide every day.


Congratulations!  You've just disproved your prime pet theory.  (or do you 
believe that you're smarter than all of those engineers?)
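
(A toy illustration of the point -- obviously not MapQuest's or Google's 
internals, just a sketch of mine: towns stored as rows, with any pairwise 
distance derived on demand rather than being frozen into a picture.)

import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE towns (name TEXT, lat REAL, lon REAL)")
conn.executemany("INSERT INTO towns VALUES (?, ?, ?)", [
    ("London", 51.51, -0.13), ("York", 53.96, -1.08),
    ("Oxford", 51.75, -1.26), ("Cardiff", 51.48, -3.18),
])

def distance_km(a, b):
    # Great-circle (haversine) distance between two stored towns.
    rows = {name: (lat, lon) for name, lat, lon in
            conn.execute("SELECT name, lat, lon FROM towns")}
    (lat1, lon1), (lat2, lon2) = rows[a], rows[b]
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

print("London-York   %.0f km" % distance_km("London", "York"))
print("London-Oxford %.0f km" % distance_km("London", "Oxford"))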


And all this is important, because it will affect estimates of what brains 
and computers can do.


What brains can do and what computers can do are very different.  The brain 
evolved by a linear optimization process with *numerous* non-brain-related 
constraints because of all of the spaghetti-code-like intertwining of all 
the body's systems.  It is quite probable that the brain can be optimized a 
lot!



(No computer can yet store and read a map as we do, can it?)


What can you do with a map that Google Maps can't?  Google Maps may not 
store and read maps like you do, but functionally it is better than you 
(faster, more info, etc.).



- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 7:33 AM
Subject: Re: [agi] Human memory and number of synapses



Vlad et al,

Slightly O/T - while you guys are arguing about how much info the brain 
stores and processes...


What I'd like is a mathematical estimate of why a graphic or image (or any 
form of physical map) is a vastly - if not infinitely - more efficient way 
to store information than a set of symbols.


Take an outline map of a country, with points for the towns. That map 
contains a practically endless amount of info about the relationships 
between all the towns and every point in the country - about how distant 
they all are from each other - and therefore about every possible travel 
route across the country.


Now try expressing that info as a set of symbolic relationships - London 
to York 300 Miles, London to Oxford 60 Miles, London to Cardiff 200 
miles - and so on and on.


If you think just about the ink or whatever substrate is used to write the 
info, the map is vastly more efficient.


And all this is important, because it will affect estimates of what brains 
and computers can do. A great deal of the brain's memory is stored, I 
suggest, in the form of maps of one kind or other. (No computer can yet 
store and read a map as we do, can it?)





[agi] Re: Images aren't best

2007-10-20 Thread Mark Waser

Let me take issue with one point (most of the rest I'm uninformed about):
So this isn't an argument that you REALLY can't use a relational db for 
all of your representations, but rather that it's a really bad idea.)


I agree completely.  The only point that I was trying to hammer home was 
that a graphic or image is NOT a vastly - if not infinitely - more 
efficient way to store information (which was the troll's original 
statement).  I would certainly pick and choose my representation schemes 
based upon what I want to do (and I agree fully that partial directed graphs 
and hyperdbs are both probably necessary for a lot of things and not 
effectively isomorphic to current relational db technology).



- Original Message - 
From: Charles D Hixson [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 6:40 PM
Subject: Re: Images aren't best WAS Re: [agi] Human memory and number of 
synapses




Let me take issue with one point (most of the rest I'm uninformed about):
Relational databases aren't particularly compact.  What they are is 
generalizable...and even there...
The most general compact database is a directed graph.  Unfortunately, 
writing queries for retrieval requires domain knowledge, and so does 
designing the db files.  A directed graph db is (or rather can be) also 
more compact than a relational db.


The reason that relational databases won out was because it was easy to 
standardize them.  Prior to them, most dbs were hierarchical.  This was 
also more efficient than relational databases, but was less flexible.  The 
net databases existed, but were more difficult to use.


My suspicion is that we've evolved to use some form of net db storage. 
Probably one that's equivalent to a partial directed graph (i.e., some, 
but not all, node links are bidirectional).  This is probably the most 
efficient form that we know of.  It's also a quite difficult one to learn. 
But some problems can't be adequately represented by anything else. 
(N.B.:  It's possible to build a net db within a relational db...but the 
overhead will kill you.  It's also possible to build a relational db 
within a net db, but sticking to the normal-form discipline is nigh unto 
impossible.  That's not the natural mode for a net db.  So the Relational 
db is probably the db analog of Turing complete...but when presented with 
a problem that doesn't fit, it's also about as efficient as a Turing 
machine.  So this isn't an argument that you REALLY can't use a relational 
db for all of your representations, but rather that it's a really bad 
idea.)
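
(To illustrate the overhead point with a sketch of my own -- not Charles's 
code: a directed graph lives happily inside a relational db as a bare edge 
table, but every hop of a traversal turns into another query against that 
table.)

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src TEXT, dst TEXT)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [("a", "b"), ("b", "c"), ("c", "d"), ("b", "d")])

def reachable(start):
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        # One query per node expanded; a native graph store would just
        # follow in-memory links instead.
        for (dst,) in conn.execute("SELECT dst FROM edges WHERE src = ?", (node,)):
            if dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

print(sorted(reachable("a")))            # ['a', 'b', 'c', 'd']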


Mark Waser wrote:
But how much information is in a map, and how much in the relationship 
database? Presumably you can put some v. rough figures on that for a 
given country or area. And the directions presumably cover journeys on 
roads? Or walks in any direction and between any spots too?


All of the information in the map is in the relational database because 
the actual map is produced from the database (and information doesn't 
appear from nowhere).  Or, to be clearer, almost *any* map you can buy 
today started life in a relational database.  That's how the US 
government stores it's maps.  That's how virtually all modern map 
printers store their maps because it's the most efficient way to store 
map information.


The directions don't need to assume roads.  They do so because that is 
how cars travel.  The same algorithms will handle hiking paths.  Very 
slightly different algorithms will handle off-road/off-path and will even 
take into account elevation, streams, etc. -- so, to clearly answer your 
question --  the modern map program can do everything that you can do 
with a map (and even if it couldn't, the fact that the map itself is 
produced solely from the database eliminates your original query).
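
(A sketch of the "same algorithms" claim, with made-up edge weights: the 
shortest-path routine below doesn't care whether the edges it is handed are 
roads, hiking paths, or an off-road grid weighted by elevation and stream 
crossings.)

import heapq

def shortest_path_cost(edges, start, goal):
    # edges: {node: [(neighbor, cost), ...]} -- cost can encode distance,
    # elevation gain, stream crossings, whatever the traveler cares about.
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                     # stale queue entry
        for nxt, cost in edges.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

roads = {"London": [("Oxford", 60), ("Cambridge", 64)],
         "Oxford": [("Cardiff", 107)],
         "Cambridge": [("York", 155)]}
print(shortest_path_cost(roads, "London", "York"))   # 219.0 -- made-up mileage, via Cambridge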




- Original Message - From: Mike Tintner 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 9:59 AM
Subject: Re: [agi] Human memory and number of synapses


MW: Take your own example of an outline map -- *none* of the current 
high-end
mapping services (MapQuest, Google Maps, etc) store their maps as 
images. They *all* store them symbolically in a relational database 
because that is *the* most efficient way to store them so that they can 
produce all of the different scale maps and directions that they 
provide every day.


But how much information is in a map, and how much in the relationship 
database? Presumably you can put some v. rough figures on that for a 
given country or area. And the directions presumably cover journeys on 
roads? Or walks in any direction and between any spots too?



Re: [agi] Human memory and number of synapses.. P.S.

2007-10-20 Thread Mark Waser
Anyway there's low resolution, possibly unconfirmed, evidence that when we 
visualize images, we generate a cell activation pattern within the visual 
cortex that has an activation boundary approximating in shape the object 
being visualized.  (This doesn't say anything about how the information is 
stored.)


Or, in other words, the brain uses a three-dimensional *spatial* model of 
the object in question -- and certainly not a two-dimensional image.


This goes back to the previous visual vs. spatial argument with the built-in 
human bias towards our primary sense.  Heck, look at the word "visualize".  Do 
dolphins visualize or "sonarize"?  In either case, what the brain is doing is 
creating a three-dimensional model of perceived reality -- and trivializing 
it by calling it an image is a really bad idea.


- Original Message - 
From: Charles D Hixson [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 6:49 PM
Subject: Re: [agi] Human memory and number of synapses.. P.S.



FWIW:
A few years (decades?) ago some researchers took PET scans of people who 
were imagining a rectangle rotating (in 3-space, as I remember).  They 
naturally didn't get much detail, but what they got was consistent with 
people applying a rotation algorithm within the visual cortex.  This 
matches my internal reporting of what happens.


Parallel processors optimize things differently than serial processors, 
and this wasn't a stored image.  But it was consistent with an array of 
cells laid out in a rectangle activating, and having that activation 
precess as the image was visualized to rotate.
Well, the detail wasn't great, and I never heard that it went anywhere 
after the initial results.  (Somebody probably got a doctorate...and 
possibly left to work elsewhere.)  But it was briefly written up in the 
popular science media (New Scientist? Brain-Mind Bulletin?)
Anyway there's low resolution, possibly unconfirmed, evidence that when we 
visualize images, we generate a cell activation pattern within the visual 
cortex that has an activation boundary approximating in shape the object 
being visualized.  (This doesn't say anything about how the information is 
stored.)



Mark Waser wrote:
Another way of putting my question/ point is that a picture (or map) of 
your face is surely a more efficient, informational way to store your 
face than any set of symbols - especially if a doctor wants to do 
plastic surgery on it, or someone wants to use it for any design purpose 
whatsoever?


No, actually, most plastic surgery planning programs map your face as a 
limited set of three dimensional points, not an image.  This allows for 
rotation and all sorts of useful things.  And guess where they store this 
data . . . . a relational database -- just like any other CAD program.
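
(A minimal sketch of why the point set wins here -- my own illustration, not 
any surgery-planning package: three-dimensional landmark points rotate with a 
single matrix multiply, which is exactly the operation a flat image can't give 
you.)

import math

def rotate_z(points, degrees):
    # Rotate a set of (x, y, z) points about the z-axis by the given angle.
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y, z) for (x, y, z) in points]

face_landmarks = [(30.0, 0.0, 5.0),      # nose tip (x, y, z in mm -- made-up values)
                  (10.0, 25.0, 0.0),     # left eye corner
                  (10.0, -25.0, 0.0)]    # right eye corner

for p in rotate_z(face_landmarks, 30):
    print("(%.1f, %.1f, %.1f)" % p)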


Images are *not* an efficient way to store data.  Unless they are 
three-dimensional images, they lack data.  Normally, they include a lot 
of unnecessary or redundant data.  It is very, very rare that a computer 
stores any but the smallest image without compressing it.  And remember, 
an image can be stored as symbols in a relational database very easily as 
a set of x-coords, y-coords, and colors.


You're stuck on a crackpot idea with no proof and plenty of 
counter-examples.




Re: [agi] The Grounding of Maths

2007-10-13 Thread Mark Waser

Fair enough.

The reason I am hammering this so hard is that I believe that vision 
is a seriously long detour on the path to AGI.  Vision is a tough problem 
and getting sucked into it as a pre-requisite for AGI is, I believe, likely 
to seriously delay it (it being AGI, not vision :-).


- Original Message - 
From: a [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 13, 2007 8:51 AM
Subject: Re: [agi] The Grounding of Maths



Mark Waser wrote:
Only from your side.  Science looks at facts.  I have the irrefutable 
fact of intelligent blind people.  You have nothing -- so you decide that 
it is an opinion thing.  Tell me how my position is not cold, hard 
science.  You are the one whose position is wholly faith with no facts to 
point to.

I think blind people use spatial grounding.



Re: [agi] Do the inference rules.. P.S.

2007-10-12 Thread Mark Waser
Enjoying trolling, Ben?  :-)
  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Friday, October 12, 2007 9:55 AM
  Subject: Re: [agi] Do the inference rules.. P.S.





  On 10/12/07, Mike Tintner [EMAIL PROTECTED] wrote:


Ben, 

No. Everything is grounded. This is a huge subject. Perhaps you should read:

Where Mathematics Comes From, written by George Lakoff and Rafael Nunez, 

You really do need to know about Lakoff/Fauconnier/Mark Johnson/Mark Turner.

Especially:
The Body in the Mind. Mark Johnson
The Way We Think - Fauconnier/Turner.


  Mike, sorry to disappoint you, but I have read all those books some time ago. 
 

  I agree of course that much of math is grounded in sensorimotor reality as 
Lakoff and Nunez argue, but I also feel they overstate the case by selectively 
choosing examples 


Re: [agi] The Grounding of Maths

2007-10-12 Thread Mark Waser

Visuospatial intelligence is required for almost anything.


I'm sorry.  This is all pure, unadulterated BS.  You need spatial 
intelligence (i.e. a world model).  You do NOT need visual anything.  The 
only way in which you need visual is if you contort its meaning until it 
effectively means spatial.  Visual means related to vision.  If you can't 
tell me why vision allows something that echo-location-quality hearing does 
not (other than color perception -- which is *NOT* necessary for 
intelligence), then you don't need visual.



- Original Message - 
From: a [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 12, 2007 5:13 PM
Subject: Re: [agi] The Grounding of Maths


If you cannot explain it, then how do you know you do not do that? No 
offense, but autistic savants also have trouble describing their process 
when they do math. They have high visuospatial intelligence, but low 
verbal. Mathematicians have a high Autism Spectrum Quotient. [1]


Mathematicians have trouble describing their process because of their vast 
knowledge and experience of math. Their experience of mathematics makes it 
intuitive, so it is hard to explain their visual manipulation process.


I am almost certain that visuospatial intelligence is required to do 
mathematics. A blind person without internal visuospatial intelligence 
 would be considered retarded. Visuospatial intelligence is required for 
almost anything.


1. http://en.wikipedia.org/wiki/Autism_Spectrum_Quotient

Benjamin Goertzel wrote:



 Well, it's hard to put into words what I do in my head when I do
 mathematics... it probably does use visual cortex in some way, but it's
 not visually manipulating mathematical expressions nor using visual
 metaphors...

I can completely describe. I completely do mathematics by visually
manipulating and visually replacing symbols with other symbols. I also
do mathematical reasoning and theorem proving with that.


I believe you, but that is not what I nor many other mathematicians do..

Mathematicians
commonly have high visuospatial intelligence, that's why they have
high IQs.


Well if you look at Hadamard's Psychology of Mathematical Invention or 
more recent works in the area, you'll see there's a lot more diversity to 
the ways mathematicians approach mathematical thought...


Ben
http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=53136472-ac7df1


Re: [agi] The Grounding of Maths

2007-10-12 Thread Mark Waser
 Echolocation--just like the brain--isn't solved yet, so you cannot claim 
 that it is unrelated to your definition of vision. 

What?  It is not *my* definition of vision.  There are standard definitions of 
vision like
  vi·sion  /ˈvɪʒən/ [vizh-uhn]
  –noun 1. the act or power of sensing with the eyes; sight.

from dictionary.com.

---

Echo-location is hearing (the act or power of sensing with the ears), *NOT* 
vision.

---

If you can't come up with something necessary for intelligence that vision 
allows that echo-location does not, then you can't say that vision is necessary 
(i.e. your argument is total BS).


 Vision can simulate 
 spatial intelligence. 

What you mean is . . . . "it's really spatial intelligence that is necessary, but 
since I think that vision can emulate it, maybe I can argue that it is vision 
that is necessary."


- Original Message - 
From: a [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, October 12, 2007 5:38 PM
Subject: Re: [agi] The Grounding of Maths


 Mark Waser wrote:
 Visuospatial intelligence is required for almost anything.

 I'm sorry.  This is all pure, unadulterated BS.  You need spatial 
 intelligence (i.e. a world model).  You do NOT need visual anything.  
 The only way in which you need visual is if you contort its meaning 
 until it effectively means spatial.  Visual means related to vision.  
 If you can't tell me why vision allows something that echo-location 
 quality hearing does not (other than color perception -- which is 
 *NOT* necessary for intelligence), then you don't need visual.

 Echolocation--just like the brain--isn't solved yet, so you cannot claim 
 that it is unrelated to your definition of vision. Vision can simulate 
 spatial intelligence. Light use waves so it can reconstruct a model. 
 Similarly, sound use waves, so it can reconstruct a model. The 
 difference is just the type of wave. See 
 http://en.wikipedia.org/wiki/Human_echolocation#Vision_and_hearing
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=53163864-9c4170

Re: [agi] The Grounding of Maths

2007-10-12 Thread Mark Waser

Look at the article and it mentions spatial and vision are interrelated:


No.  It clearly spells out that vision requires spatial processing -- and 
says *NOTHING* about the converse.


Dude, you're a broken record.  Intelligence requires spatial.  Vision 
requires spatial.  Intelligence does *NOT* require vision.


You have shown me *ZERO* evidence that vision is required for intelligence 
and blind from birth individuals provide virtually proof positive that 
vision is not necessary for intelligence.  How can you continue to argue the 
converse?



- Original Message - 
From: a [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 12, 2007 6:19 PM
Subject: Re: [agi] The Grounding of Maths


Look at the article and it mentions spatial and vision are interrelated: 
http://en.wikipedia.org/wiki/Visual_cortex


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=53175176-4d9eca


Re: [agi] The Grounding of Maths

2007-10-12 Thread Mark Waser
It is my solid opinion that vision is required, just like your solid 
opinion that vision is not required


This is not an opinion matter.  I point to the *FACT* of numerous 
blind-from-birth individuals who are intelligent without vision.  You have 
no counter-examples or proof whatsoever.  Do you also feel that gravity is a 
matter of opinion?


Arguing between this is purely religious, inefficacious, unnecessary and 
counter-productive.


Only from your side.  Science looks at facts.  I have the irrefutable fact 
of intelligent blind people.  You have nothing -- so you decide that it is 
an opinion thing.  Tell me how my position is not cold, hard science.  You 
are the one whose position is wholly faith with no facts to point to.




- Original Message - 
From: a [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 12, 2007 7:22 PM
Subject: Re: [agi] The Grounding of Maths



Mark Waser wrote:


You have shown me *ZERO* evidence that vision is required for 
intelligence and blind from birth individuals provide virtually proof 
positive that vision is not necessary for intelligence.  How can you 
continue to argue the converse?
It is my solid opinion that vision is required, just like your solid 
opinion that vision is not required. I believe that intelligence requires 
pattern matching, so visual pattern matching and spatial pattern matching 
are the same. Arguing between this is purely religious, inefficacious, 
unnecessary and counter-productive.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=53203786-a1116b


Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Mark Waser

Concepts cannot be grounded without vision.


So . . . . explain how people who are blind from birth are functionally 
intelligent.


It is impossible to completely understand natural language without 
vision.


So . . . . you believe that blind-from-birth people don't completely 
understand English?


- - - - -

Maybe you'd like to rethink your assumptions . . . .


- Original Message - 
From: a [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, October 11, 2007 4:10 PM
Subject: Re: [agi] Do the inference rules.. P.S.


I think that building a human-like reasoning system without /visual/ 
perception is theoretically possible, but not feasible in practice. But how 
is it human like without vision? Communication problems will arise. 
Concepts cannot be grounded without vision.


It is impossible to completely understand natural language without 
vision. Our visual perception acts like a disambiguator for natural 
language.


To build a human-like computer algebra system that can prove its own 
theorems and find interesting conjectures requires vision to perform 
complex symbolic manipulation. A big part of mathematics is about 
aesthetics. It needs vision to judge which expressions are interesting, 
which are the simplified ones. Finding interesting theorems, such as the 
power rule, the chain rule in calculus requires vision to judge that 
the rules are simple and visually appealing enough to be communicated or 
published.


I think that computer programming is similar. It requires vision to 
program easily. It requires vision to remember the locations of the 
symbols in the language.


Visual perception and visual grounding is nothing except the basic motion 
detection, pattern matching parts of similar images etc. Vision /is/ a 
reasoning system.


IMO, we already /have /AGI--that is, NARS. AGI is just not adapted to 
visual reasoning. You cannot improve symbolic reasoning further without 
other sensory perception.


Edward W. Porter wrote:


Validimir and Mike,

For humans, much of our experience is grounded on sensory information, 
and thus much of our understanding is based on experiences and analogies 
derived largely from the physical world. So Mike you are right that for 
us humans, much of our thinking is based on recasting of experiences of 
the physical world.


But just because experience of the physical world is at the center of 
much of human thinking, does not mean it must be at the center of all 
possible AGI thinking -- any more than the fact that for millions of 
years the earth and the view from it was at the center of our thinking 
and that of our ancestors means the earth and the view from it must 
forever be at the center of the thinking of all intelligences throughout 
the universe.


In fact, one can argue that for us humans, one of our most important 
sources of grounding – emotion -- is not really about the physical world 
(at least directly), but rather about our own internal state. 
Furthermore, multiple AGI projects, including Novamente and Joshua Blue 
are trying to ground their systems from experience in virtual words. Yes 
those virtual worlds try to simulate physical reality, but the fact 
remains that much of the grounding is coming from bits and bytes, and not 
from anything more physical.


Take Doug Lenat’s AM and create a much more powerful AGI equivalent of 
it, one with much more powerful learning algorithms (such as those in 
Novamente), running on the equivalent of a current 128K processor 
BlueGene L with 16TBytes of RAM, but with a cross sectional bandwidth 
roughly 500 times that of the current BlueGene L (the type of hardware 
that could be profitably sold for well under 1 million dollars in 7 years 
if there were a thriving market for making hardware to support AGI).


Assume the system creates programs, mathematical structures, and 
transformations, etc., in its own memory. It starts out learning like 
a little kid, constantly performing little experiments, except the 
experiments -- instead of being things like banging spoons against a 
glass -- would be running programs that create data structures and then 
observing what is created (it would have built in primitives for 
observing its own workspace), changing the program and observing the 
change, etc. Assume it receives no input from the physical world, but 
that it has goals and a reward system related to learning about 
programming, finding important mathematical and programming generalities, 
finding compact representations and transformation, creating and finding 
patterns in complexity, and things like that. Over time such a system 
would develop its own type of grounding, one derived from years of 
experience -- and from billions of trillions of machine opps -- in 
programming and math space.


Thus, I think you are both right. Mike is right that for humans, sensory 
experience is a vital part of much of our ability to understand, even of 
our ability to understand things that 

Re: [agi] Re: [META] Re: Economic libertarianism .....

2007-10-11 Thread Mark Waser
I agree . . . . there are far too many people spouting off without a clue 
without allowing them to spout off off-topic as well . . . .


- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, October 11, 2007 4:44 PM
Subject: [agi] Re: [META] Re: Economic libertarianism .




Sorry, but this is *exactly* the kind of political propagandizing that I 
just complained about.


Why don't you form a separate AGI-Politics list to argue about this 
stuff amongst yourselves, please?



Richard Loosemore


a wrote:

Yes, I think that too.

On the practical side, I think that investing in AGI requires significant 
tax cuts, and we should elect a candidate that would do that (Ron Paul). 
I think that the government has to have more respect to potential weapons 
(like AGI), so we should elect a candidate who is strongly pro-gun (Ron 
Paul). I think that the government has to trust and respect the privacy 
of its people, so you would not be forced to sell your AGI to the 
military. No more wiretapping (abolish the Patriot Act) so the government 
won't hear an AGI being successfully developed. Abolish the Federal 
Reserve, so no more malinvestment, and more productive investment 
(including agi investment). Ron Paul will do all of that.


JW Johnston wrote:
I also agree except ... I think political and economic theories can 
inform AGI design, particularly in areas of AGI decision making and 
friendliness/roboethics. I wasn't familiar with the theory of 
Comparative Advantage until Josh and Eric brought it up. (Josh discusses 
in conjunction with friendly AIs in his The Age of Virtuous Machines 
at Kurzweil's site.) I like to see discussions in these contexts.


-JW

-Original Message-


From: Bob Mottram [EMAIL PROTECTED]
Sent: Oct 11, 2007 11:12 AM
To: agi@v2.listbox.com
Subject: Re: [META] Re: Economic libertarianism [was Re: The 
first-to-market effect [WAS Re: [agi] Religion-free technical content]


On 10/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:


Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?

Agreed.  There are many other forums where political ideology can be 
debated.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=52485911-1c7898


Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Mark Waser
I'll buy internal spatio-perception (i.e. a three-d world model) but not the 
visual/vision part (which I believe is totally unnecessary).


Why is *vision* necessary for grounding or to completely understand 
natural language?


- Original Message - 
From: a [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, October 11, 2007 5:24 PM
Subject: Re: [agi] Do the inference rules.. P.S.



Mark Waser wrote:

Concepts cannot be grounded without vision.


So . . . . explain how people who are blind from birth are functionally 
intelligent.


It is impossible to completely understand natural language without 
vision.


So . . . . you believe that blind-from-birth people don't completely 
understand English?



All blind people still have internal visuospatial perception.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=52491934-8d1f16


Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Mark Waser

spatial perception cannot exist without vision.


How does someone who is blind from birth have spatial perception then?

Vision is one particular sense that can lead to a 3-dimensional model of the 
world (spatial perception), but there are others (touch and echo-location 
hearing, to name two).


Why can't echo-location lead to spatial perception without vision?  Why 
can't touch?
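
As a purely illustrative sketch of that point (my own example, assuming
idealized echo returns of the form bearing/elevation/distance; the function
name is made up), here is how non-visual range data can be turned into a
3-D point model with no vision anywhere in the loop:

import math

def echo_to_point(bearing_deg, elevation_deg, distance):
    """Convert one echo-location return (angles in degrees, range in metres)
    into an (x, y, z) point relative to the listener."""
    b = math.radians(bearing_deg)
    e = math.radians(elevation_deg)
    x = distance * math.cos(e) * math.cos(b)
    y = distance * math.cos(e) * math.sin(b)
    z = distance * math.sin(e)
    return (x, y, z)

# A handful of hypothetical echo returns builds a sparse 3-D world model.
returns = [(0, 0, 2.0), (90, 0, 1.5), (45, 10, 3.2)]
world_model = [echo_to_point(*r) for r in returns]
for point in world_model:
    print(tuple(round(c, 2) for c in point))

Touch readings (contact point plus limb pose) feed the same kind of model.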


- Original Message - 
From: a [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, October 11, 2007 5:54 PM
Subject: Re: [agi] Do the inference rules.. P.S.



Mark Waser wrote:
I'll buy internal spatio-perception (i.e. a three-d world model) but not 
the visual/vision part (which I believe is totally unnecessary).


Why is *vision* necessary for grounding or to completely understand 
natural language?
My mistake. I misinterpreted the definitions of vision and spatial 
perception. I agree that there is not a clear separation between the 
definition of vision and spatio-perception--spatial perception cannot 
exist without vision. Vision can be spatial because it does not have to be 
color-vision or human-like vision. Spatial can be visual because you have 
to visually construct the model.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=52499366-fd16da


Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Mark Waser
It looks to me as if NARS can be modeled by a prototype-based language 
with operators for "is an ancestor of" and "is a descendant of".


   I don't believe that this is the case at all.  NARS correctly handles 
cases where entities co-occur or where one entity implies another only due 
to other entities/factors.  "Is an ancestor of" and "is a descendant of" have 
nothing to do with this.


To me a model can well be dynamic and experience based.  In fact I 
wouldn't consider a model very intelligent if it didn't either itself 
adapt itself to experience, or it weren't embedded in a matrix which 
adapted it to experiences.  (This doesn't seem to be quite the same 
meaning that you use for model.  Your separation of the rules of 
inference, the rational faculty, and the model as a fixed and unchanging 
condition don't match my use of the term.


And your use of the term is better than his use of the term because . . . . 
?:-)


By model, he means model of cognition.  For him (and all of us), cognition 
is dynamic and experience-based but the underlying process is relatively 
static and the same from individual to individual.


I still find that I am forced to interpret the inheritance relationship as 
a "is a child of" relationship.


Which is why you're having problems understanding NARS.  If you can't get 
past this, you're not going to get it.


And I find the idea of continually calculating the powerset of inheritance 
relationships unappealing.  There may not be a better way, but if there 
isn't, than AGI can't move forwards without vastly more powerful machines.


This I agree with.  My personal (hopefully somewhat informed) opinion is 
that NARS (and Novamente) are doing more than absolutely needs to be done 
for AGI.  Time will tell.


I do feel that the limited sensory modality of the environment (i.e., 
reading the keyboard) makes AGI unlikely to be feasible.  It seems to me 
that one of the necessary components of true intelligence is integrating 
multi-modal sensory experience.


Why?



- Original Message - 
From: Charles D Hixson [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 08, 2007 5:50 PM
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


OK.  I've read the paper, and don't see where I've made any errors.  It 
looks to me as if NARS can be modeled by a prototype-based language with 
operators for "is an ancestor of" and "is a descendant of".  I do have 
trouble with the language terms that you use, though admittedly they 
appear to be standard for logicians (to the extent that I'm familiar with 
their dialect).  That might well not be a good implementation, but it 
appears to be a reasonable model.


To me a model can well be dynamic and experience based.  In fact I 
wouldn't consider a model very intelligent if it didn't either itself 
adapt itself to experience, or it weren't embedded in a matrix which 
adapted it to experiences.  (This doesn't seem to be quite the same 
meaning that you use for model.  Your separation of the rules of 
inference, the rational faculty, and the model as a fixed and unchanging 
condition don't match my use of the term.  I might pull out the rules of 
inference as separate pieces and stick them into a datafile, but 
datafiles can be changed, if anything, more readily than programs...and 
programs are readily changeable.  To me it appears clear that much of the 
language would need to be interpretive rather than compiled.  One should 
pre-compile what one can for the sake of efficiency, but with the 
knowledge that this sacrifices flexibility for speed.


I still find that I am forced to interpret the inheritance relationship as 
a "is a child of" relationship.  And I find the idea of continually 
calculating the powerset of inheritance relationships unappealing.  There 
may not be a better way, but if there isn't, than AGI can't move forwards 
without vastly more powerful machines.  Probably, however, the 
calculations could be shortcut by increasing the local storage a bit.  If 
each node maintained a list of parents and children, and a count of 
descendants and ancestors it might suffice.  This would increase storage 
requirements, but drastically cut calculation and still enable the 
calculation of confidence.  Updating the counts could be saved for 
dreamtime.  This would imply that during the early part of learning sleep 
would be a frequent necessity...but it should become less necessary as the 
ratio of extant knowledge to new knowledge learned increased.  (Note that 
in this case the amount of new knowledge would be a measured quantity, not 
an arbitrary constant.)
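
A rough Python sketch of that bookkeeping, under the assumption that the
relations form a DAG (all names here are illustrative, not from any existing
system): each node keeps explicit parent/child lists plus cached
ancestor/descendant counts that are only refreshed in a batch "dreamtime"
pass rather than on every insertion.

class Node:
    def __init__(self, name):
        self.name = name
        self.parents = set()
        self.children = set()
        # Cached counts, refreshed lazily during "dreamtime".
        self.ancestor_count = 0
        self.descendant_count = 0

def link(child, parent):
    """Record an inheritance relation without recomputing anything."""
    child.parents.add(parent)
    parent.children.add(child)

def _collect(node, attr):
    """Walk the DAG along `attr` ('parents' or 'children'), collecting nodes."""
    seen = set()
    stack = [node]
    while stack:
        for nxt in getattr(stack.pop(), attr):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def dreamtime(nodes):
    """Batch pass that refreshes the cached counts for every node."""
    for n in nodes:
        n.ancestor_count = len(_collect(n, "parents"))
        n.descendant_count = len(_collect(n, "children"))

# Usage: animal <- bird <- raven
animal, bird, raven = Node("animal"), Node("bird"), Node("raven")
link(bird, animal)
link(raven, bird)
dreamtime([animal, bird, raven])
print(raven.ancestor_count, animal.descendant_count)  # -> 2 2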


I do feel that the limited sensory modality of the environment (i.e., 
reading the keyboard) makes AGI unlikely to be feasible.  It seems to me 
that one of the necessary components of true intelligence is integrating 
multi-modal sensory experience.  This doesn't necessarily mean vision and 
touch, but SOMETHING.  As such I can see NARS (or some 

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Mark Waser
.  Such 
hierarchical representations achieve their flexibility through a 
composition/generalization hierarchy, which presumably maps easily into NARS.

  Another key problem in AI is context sensitivity.  A hierarchical 
representation scheme that is capable of computing measures of similarity, fit, 
and implications throughout multiple levels in such a hierarchical 
representation scheme of multiple aspects of a situation in real time can be 
capable of sophisticated real time context sensitivity.  In fact, the ability 
to perform relative extensive real time matching and implication across 
multiple levels of compositional and generalization hierarchies has been a key 
feature of the types of systems I have been thinking of for years.  

  That is one of the major reasons why I have argued for BREAKING THE SMALL 
HARDWARE MINDSET. 

  I understand NARS's inheritance (or categorizations) as being equivalent to 
both of what I have considered two of the major dimensions in an AGI's 
self-organizing memory, (1) generalization/similarity and (2) composition.  I 
was, however, aware that down in the compositional (comp) hierarchy can be 
viewed as up in the generalization (gen) hierarchy, since the set of things 
having one or more properties or elements of a composition can be viewed as a 
generalization of that composition (i.e., the generalization covering the 
category of things having those one or more properties or elements).

  Although I understand there is an important equivalence between down in the 
comp hierarchy and up in the gen hierarchy, and that the two could be viewed 
as one hierarchy, I have preferred to think of them as different hierarchies, 
because the type of gens one gets by going up in the gen hierarchy tends to be 
different from the type of gens one gets by going down in the comp hierarchy.  

  Each possible set in the powerset (the set of all subsets) of elements 
(eles), relationships (rels), attributes (atts) and contextual patterns 
(contextual pats) could be considered as possible generalizations.  I have 
assumed, as does Goertzel's Novamente, that there is a competitive ecosystem 
for representational resources, in which only the fittest pats and gens -- as 
determined by some measure of usefulness to the system -- survive.  There are 
several major uses of gens, such as aiding in perception, providing inheritance 
of significant implication, providing appropriate level of representation for 
learning, and providing invariant representation in higher level comps.  
Although temporary gens will be generated at a relatively high frequency, 
somewhat like the inductive implications in NARS, the number of gens that 
survive and get incorporated into a lot of comps and episodic reps, will be an 
infinitesimal fraction of the powerset of eles, rels, atts, and contextual 
features stored in the system.  Pats in the up direction in the gen hierarchy 
will tend to be ones that have been selected for their usefulness as 
generalizations.  They will often have a reasonable number of features that 
correspond to those of their species node, but with some of them more broadly 
defined.  The gens found by going down in the comp hierarchy are ones that have 
been selected for their representational value in a comp, and many of them 
would not normally be that valuable as what we normally think of as 
generalizations.

  In the type of system I have been thinking of I have assumed there will be 
substantially less multiple inheritance in the up direction in the gen 
hierarchy than in the down direction in the comp hierarchy (in which there 
would be potential inheritance from every ele, rel, att, and contextual feature 
in a comp's descendant nodes at multiple levels in the comp hierarchy below 
it).  Thus, for spreading activation control purposes, I think it is valuable to 
distinguish between generalization and compositional hierarchies, although I 
understand they have an important equivalence that should not be ignored.  

  I wonder if NARS makes such a distinction. 

  These are only initial thoughts.  I hope to become part of a team that gets 
an early world-knowledge computing AGI up and running.  Perhaps when I do 
feedback from reality will change my mind.

  I would welcome comments, not only from Mark, but also from other readers. 



  Edward W. Porter 
  Porter  Associates 
  24 String Bridge S12 
  Exeter, NH 03833 
  (617) 494-1722 
  Fax (617) 494-1822 
  [EMAIL PROTECTED] 




  -Original Message- 
  From: Mark Waser [mailto:[EMAIL PROTECTED] 
  Sent: Tuesday, October 09, 2007 9:46 AM 
  To: agi@v2.listbox.com 
  Subject: Re: [agi] Do the inference rules of categorical logic make sense? 



  I don't believe that this is the case at all.  NARS correctly 
   handles 
   cases where entities co-occur or where one entity implies another only due 
   to other entities/factors.  Is an ancestor of and is a descendant of 
   has nothing to do with this. 

  Ack!  Let me rephrase

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Mark Waser
Most of the discussion I read in Pei's article related to inheritance 
relations between terms, that operated as subject and predicates in sentences 
that are inheritance statements, rather than between entire statements, unless 
the statement was a subject or a predicate of a higher order inheritance 
statement.  So what you are referring to appears to be beyond what I have read.

Label the statement "I am allowed to drink alcohol" as P and the statement "I 
am an adult" as Q.  P implies Q and Q implies P (assume that age 21 equals 
adult) --OR-- P is the parent of Q and Q is the parent of P.

Label the statement that "most ravens are black" as R and the statement that 
"this raven is white" as S.  R affects the probability of S and, to a lesser 
extent, S affects the probability of R (both in a negative direction) --OR-- R 
is the parent of S and S is the parent of R (although, realistically, the 
probability change is so minuscule that you really could argue that this isn't 
true).

NARS's inheritance is the inheritance of influence on the probability values.
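
As a rough illustration of how NARS attaches evidence to such statements, here
is a simplified sketch of my own (not code from NARS itself), assuming the
usual NAL conventions of frequency f = w+/w, confidence c = w/(w + k), and the
deduction truth function f = f1*f2, c = f1*f2*c1*c2:

K = 1.0  # evidential horizon; k = 1 is a common default, assumed here

def truth_from_evidence(positive, total):
    """Turn counts of positive/total evidence into (frequency, confidence)."""
    frequency = positive / total
    confidence = total / (total + K)
    return frequency, confidence

def deduction(t1, t2):
    """Illustrative NAL-style deduction: combine 'A -> B' and 'B -> C'
    into a truth value for 'A -> C'."""
    (f1, c1), (f2, c2) = t1, t2
    return f1 * f2, f1 * f2 * c1 * c2

# "raven -> black-thing" supported by 9 black ravens out of 10 observed;
# "black-thing -> hard-to-see-at-night" supported by 4 of 5 observations.
raven_black = truth_from_evidence(9, 10)
black_hard_to_see = truth_from_evidence(4, 5)
print(deduction(raven_black, black_hard_to_see))

The point is that "parent of" here only names the direction in which evidence
flows between statements, not a class hierarchy.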

- Original Message - 
  From: Edward W. Porter 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 09, 2007 1:12 PM
  Subject: RE: [agi] Do the inference rules of categorical logic make sense?


  Mark, 

  Thank you for your reply.  I just ate a lunch with too much fat (luckily 
largely olive oil) in it, so my brain is a little sleepy.  If it is not too 
much trouble, could you please map out the inheritance relationships from which 
one derives how "I am allowed to drink alcohol" is both a parent and the child 
of "I am an adult".  And could you please do the same with how "most ravens are 
black" is both parent and child of "this raven is white".  

  Most of the discussion I read in Pei's article related to inheritance 
relations between terms, that operated as subject and predicates in sentences 
that are inheritance statements, rather than between entire statements, unless 
the statement was a subject or a predicate of a higher order inheritance 
statement.  So what you are referring to appears to be beyond what I have read.

  Edward W. Porter
  Porter  Associates
  24 String Bridge S12
  Exeter, NH 03833
  (617) 494-1722
  Fax (617) 494-1822
  [EMAIL PROTECTED]


-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 09, 2007 12:47 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


Thus, as I understand it, one can view all inheritance statements as 
indicating the evidence that one instance or category belongs to, and thus is 
a child of another category, which includes, and thus can be viewed as a 
parent of the other. 

Yes, that is inheritance as Pei uses it.  But are you comfortable with the 
fact that "I am allowed to drink alcohol" is normally both the parent and the 
child of "I am an adult" (and vice versa)?  How about the fact that "most 
ravens are black" is both the parent and child of "this raven is white" (and 
vice versa)?

Since inheritance relations are transitive, the resulting hierarchy of 
categories involves nodes that can be considered ancestors (i.e., parents, 
parents of parents, etc.) of others and nodes that can be viewed as descendants 
(children, children of children, etc.) of others.  

And how often do you really want to do this with concepts like the above -- 
or when the evidence is substantially less than unity?

And loops and transitivity are really ugly . . . . 

NARS really isn't your father's inheritance.

  - Original Message - 
  From: Edward W. Porter 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 09, 2007 12:24 PM
  Subject: RE: [agi] Do the inference rules of categorical logic make sense?


  RE: (1) THE VALUE OF "CHILD OF" AND "PARENT OF" RELATIONS; (2) 
DISCUSSION OF POSSIBLE VALUE IN DISTINGUISHING BETWEEN GENERALIZATIONAL AND 
COMPOSITIONAL INHERITANCE HIERARCHIES.

  Re Mark Waser's 10/9/2007 9:46 AM post: Perhaps Mark understands 
something I don't. 

  I think relations that can be viewed as child of and parent of in a 
hierarchy of categories are extremely important (for reasons set forth in more 
detail below) and it is not clear to me that Pei meant something other than 
this.

  If Mark or anyone else has reason to believe that what [Pei] means is 
quite different than such child of and parent of relations, I would 
appreciate being illuminated by what that different meaning is.




  My understanding of NARS is that it is concerned with inheritance 
relations, which, as I understand it, indicate the truth value of the assumption 
that one category falls within another category, where category is broadly 
defined to include not only what we normally think of as categories, but also 
relationships, slots in relationships, and categories defined by sets of one 
or more properties, attributes, elements

Re: Turing Completeness of a Lump of Dirt [WAS Re: [agi] Conway's Game of Life and Turing machine equivalence]

2007-10-08 Thread Mark Waser

From: William Pearson [EMAIL PROTECTED]

Laptops aren't TMs.
Please read the wiki entry to see that my laptop isn't a TM.


But your laptop can certainly implement/simulate a Turing Machine (which was 
the obvious point of the post(s) that you replied to).
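
As a purely illustrative sketch of that point (a made-up example machine; a
real laptop of course only approximates the unbounded tape with finite
memory):

def run_tm(rules, tape, state="start", head=0, max_steps=100):
    """Simulate a Turing machine.  `rules` maps (state, symbol) to
    (new_symbol, move, new_state); the state 'halt' stops the machine."""
    tape = dict(enumerate(tape))          # sparse tape, blank = '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit on the tape, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "10110"))  # -> 01001_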


Seriously, people, can't we lose all these spurious arguments?  We have 
enough problems communicating without deliberate stupidity. 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=51210737-ef5850

