I didn't suggest modeling the non-rational part of it; I was just responding
to the other implication that we needed the non-rational part to model AGI
as human. I believe there is very little non-rationality.

> This is where Ben and I are sort of having a debate. I agree with him that
> the brain may well be using the larger number since it is massively
> parallel and therefore can. I think that we differ on whether or not the
> larger number is required for AGI (Me = No, Ben = Yes) -- which reminds
> me . . .
>
> Hey Ben, if the larger number IS required for AGI, how do you intend to do
> this in a computationally feasible way in a non-massively-parallel system?

Hmm, for all the toy simple cases I can think of, a small number of features
is required, or at least there is a decreasing weight for any features after
the first few, such that they are non-crucial to the actual decision.
The only way there could be a large number may be when there are a large
number of features that all have a more equal weighting.
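
To make the two regimes concrete, here is a minimal sketch (a toy of my own,
with made-up weights -- not anyone's actual system) of a linear
weighted-feature decision. With rapidly decaying weights the first few
features settle the outcome; with near-equal weights no small subset
dominates:

# Toy sketch: a decision as a weighted sum of binary features.
def decide(features, weights, threshold):
    score = sum(w * f for w, f in zip(weights, features))
    return score > threshold

features = [1, 0, 1, 1, 1, 1]

# Decaying weights: the tail (0.01 + 0.005 + 0.001) can never move the
# score enough to change the outcome, so only the first few matter.
decaying = [1.0, 0.5, 0.25, 0.01, 0.005, 0.001]
print(decide(features, decaying, threshold=1.0))   # True, decided early

# Near-equal weights: every feature shifts the score by the same 0.2,
# so any one of them can flip a close decision -- the "larger number" case.
uniform = [0.2] * 6
print(decide(features, uniform, threshold=0.9))    # 1.0 > 0.9, barely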

Can you think of any sample cases like this that I can (reasonably) consider,
that would require more or fewer features?
Do these cases decompose into smaller cases that would have fewer features?
And what are some of the smaller-number cases that you use when considering
your side?

James Ratcliff


Mark Waser <[EMAIL PROTECTED]> wrote:

> Now about building a rational vs non-rational AGI, how would you go about
> modeling a non-rational part of it? Short of a random number generator?
  
Why would you want to build a non-rational AGI? It seems like a *really*
bad idea. I think I'm missing your point here.
  
> For the most part we Do want a rational AGI, and it DOES need to explain
> itself. One of the first tasks of AGI will be to replace all of the
> current expert systems in fields like medicine.
  
Yep. That's my argument, and you expand it well.
  
> Now for some tasks it will not be able to do this, or not within a small
> amount of data and explanations. The degree to which it can generalize
> this information will reflect its usefulness and possibly its intelligence.

Yep. You're saying exactly what I'm thinking.
  
> For many decisions I believe a small feature set is required, with the
> larger possible features being so lowly weighted as to not have much
> impact.
  
This is where Ben and I are sort of having a debate. I agree with him that
the brain may well be using the larger number since it is massively
parallel and therefore can. I think that we differ on whether or not the
larger number is required for AGI (Me = No, Ben = Yes) -- which reminds
me . . .
  
Hey Ben, if the larger number IS required for AGI, how do you intend to do
this in a computationally feasible way in a non-massively-parallel system?
 

 


----- Original Message -----
From: James Ratcliff
To: [email protected]
Sent: Tuesday, December 05, 2006 11:17 AM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
   



BillK <[EMAIL PROTECTED]> wrote:

On 12/4/06, Mark Waser wrote:
>
> Explaining our actions is the reflective part of our minds evaluating the
> reflexive part of our mind. The reflexive part of our minds, though,
> operates analogously to a machine running on compiled code, with the
> compilation of code being largely *not* under the control of our conscious
> mind (though some degree of this *can* be changed by our conscious minds).
> The more we can correctly interpret and affect/program the reflexive part
> of our mind with the reflective part, the more intelligent we are. And,
> translating this back to the machine realm circles back to my initial
> point: the better the machine can explain its reasoning and use its
> explanation to improve its future actions, the more intelligent the
> machine is (or, in reverse, no explanation = no intelligence).
>

Your reasoning is getting surreal.

As Ben tried to explain to you, 'explaining our actions' is our
consciousness dreaming up excuses for what we want to do anyway. Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually, there might be something in that!)

You seem to have a real difficulty in admitting that humans behave
irrationally a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc., and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.....

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

An AGI will have to cope with this mess. Basing an AGI on iron logic
and 'rationality' alone will lead to what we call 'inhuman'
ruthlessness.


BillK

You just rationalized the reasons for human choice in your above argument
yourself :}
MOST humans act rationally MOST of the time. They may not make 'good'
decisions, but they are rational ones: if you decide to sleep with your
best friend's wife, you do so because you are attracted to her and you
want her, and you rationalize that you will probably not get caught. You
have stated the reasons, and you move ahead with that plan.
Vague stuff you can't rationalize easily is why you like the appearance
of someone's face, or why you like this flavor of ice cream. Those are
hard to rationalize, but much of our behaviour is easier.
Now about building a rational vs non-rational AGI: how would you go about
modeling a non-rational part of it? Short of a random number generator?

For the most part we Do want a rational AGI, and it DOES need to explain
itself. One of the first tasks of AGI will be to replace all of the
current expert systems in fields like medicine.
For these it is not merely good enough to say (as a Doctor AGI), "I think
he has this cancer, and you should treat him with this strange procedure."
There must be an accounting that it can present to other doctors and say:
yes, I noticed a correlation between these factors that led me to believe
this, with this certainty. An early AI must also prove its merit by
explaining what it is doing, to build up a level of trust.
Further, it is important in another fashion, in that we can turn around
and use these smart AIs to further train other doctors or specialists
with the AGI's explanations.
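
As a toy illustration of that kind of accounting (entirely hypothetical
weights and factor names -- not any real medical system), a simple linear
classifier can report which factors drove its conclusion and with what
certainty:

import math

# Hypothetical learned weights: how strongly each observed factor
# counts as evidence for the diagnosis (negative = evidence against).
WEIGHTS = {"marker_A_elevated": 2.1, "family_history": 0.8,
           "age_over_60": 0.4, "non_smoker": -1.2}

def diagnose(observed):
    """Return (certainty, explanation) for a set of observed factors."""
    score = sum(WEIGHTS[f] for f in observed if f in WEIGHTS)
    certainty = 1.0 / (1.0 + math.exp(-score))  # logistic squashing
    ranked = sorted(((WEIGHTS[f], f) for f in observed if f in WEIGHTS),
                    reverse=True)
    explanation = ["%s (weight %+.1f)" % (f, w) for w, f in ranked]
    return certainty, explanation

p, why = diagnose(["marker_A_elevated", "family_history", "age_over_60"])
print("certainty %.0f%%, because: %s" % (p * 100, "; ".join(why)))
# certainty 96%, because: marker_A_elevated (weight +2.1); ...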

Now for some tasks it will not be able to do this, or not within a small
amount of data and explanations. The degree to which it can generalize
this information will reflect its usefulness and possibly its intelligence.

In the Halo experiment for the Chemistry AP, the systems were graded not
only on correct answers but also on their explanations of how they got to
those answers. Some of the explanations were short, concise, and well
reasoned; some of them, though, went down to a very basic level of detail
and lasted for a couple of pages.

If you are flying to Austin and ask an AGI to plan your route, and it
chooses a dodgy-sounding airline that you have never heard of, mainly
because it was cheap or for some other reason, you definitely want to
know why it chose that, and to tell it not to weight that feature as
highly (a toy sketch of such re-weighting follows below).
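
A minimal sketch of that kind of feedback (hypothetical feature names and
weights, purely illustrative):

def score(option, weights):
    # Weighted sum of the option's feature values.
    return sum(weights[k] * v for k, v in option["features"].items())

weights = {"cheapness": 0.8, "reliability": 0.3, "schedule_fit": 0.4}

options = [
    {"name": "DodgyAir",
     "features": {"cheapness": 0.9, "reliability": 0.2, "schedule_fit": 0.7}},
    {"name": "KnownCarrier",
     "features": {"cheapness": 0.4, "reliability": 0.9, "schedule_fit": 0.8}},
]

best = max(options, key=lambda o: score(o, weights))
print(best["name"])  # DodgyAir: cheapness dominates the score

# The user's correction: "don't weight that feature as highly."
weights["cheapness"] = 0.2

best = max(options, key=lambda o: score(o, weights))
print(best["name"])  # KnownCarrier: reliability now wins out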
For many decisions I believe a small feature set is required, with the
larger possible features being so lowly weighted as to not have much impact.

James Ratcliff





_______________________________________
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 