Stan, 

You wrote: "It isn't the modeling device that limits the 'level' of
intelligence, but rather what can be effectively modeled.  'Effectively'
meaning what can be used in a real time 'judgment' system."

The type of AGIs I have been talking about will be able to use their much
more complete and complex set of recorded memories and models in an
appropriately dynamic manner to provide exactly the type of "real time
'judgment' system" that you say determines a system's level of
intelligence.  Thus, they will be able to effectively model more things, in
more detail, with more nuance, and with greater speed than humans.

Ed Porter

-----Original Message-----
From: Stan Nilsen [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 20, 2007 12:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity

Ed,

I agree that machines will be faster and may have something equivalent 
to the trillions of synapses in the human brain.

It isn't the modeling device that limits the "level" of intelligence, 
but rather what can be effectively modeled.  "Effectively" meaning what 
can be used in a real time "judgment" system.

Probability is the best we can do for many parts of the model.  This may 
give us decent models but leave us short of "super" intelligence.

Deeper thinking - that means considering more options doesn't it?  If 
so, does extra thinking provide benefit if the evaluation system is only 
at level X?

Yes, "faster" is better than slower, unless you don't have all the 
information yet.  A premature answer could be a jump to conclusion that 
we regret in the near future.  Again, knowing when to act is part of 
being intelligent.  Future intelligences may value high speed response 
because it is measurable - it's harder to measure the quality of the 
performance.  This could be problematic for AI's.

Beliefs also operate in the models.  I can imagine an intelligent 
machine choosing not to trust humans.  Is this intelligent?

Stan

Ed Porter wrote:
> Stan 
> Your web page's major argument against strong AGI seems to be the
> following:
> 
>       "Limits to Intelligence 
>       ...
>       Formal Case 
>       ...because intelligence is the process of making choices, and
> choices are a function of models. Models will not be perfect. Both man and
> machine (at least the assumed future machines) can have intricate and
> elaborate models, there is little reason to believe machine models will be
> superior to human. "
> 
> Your statement "there is little reason to believe machine models will be
> superior to human" seems to be the crux of your formal case, and it
> appears unsupported.
> 
> Within a decade or so, machines could be built at prices many institutions
> could afford that could store many more models, and more complex models,
> than the human brain.  In addition, such computers could do the type of
> processing the human brain does at a faster speed, enabling them to think
> much faster and at a deeper level.  Just as human brains are generally
> more intelligent than those of lower primates because they are bigger and
> can support more, and thus presumably more complex, memories and models
> than lower primates, future AGIs can be bigger and thus support more, and
> more complex, memories and models than us, and thus would be similarly
> likely to be more intelligent than us.  And this is not even taking into
> account that their computational processes could be many times faster.
> 
> To be fair, I only had time to skim your web site.  Perhaps I am missing
> something, but it seems your case against strong AGI does not address the
> obvious argument for the possibility of strong AGI I have made above.
> 
> Ed Porter
> 
> 
> -----Original Message-----
> From: Stan Nilsen [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, December 19, 2007 8:55 AM
> To: agi@v2.listbox.com
> Subject: Re: [agi] AGI and Deity
> 
> Greetings Ed,
> 
> I have planted my website.  Although I don't believe AI will be that 
> strong, like other opinions, mine is not rigorously supported.
> 
> The essence - AI will be similar to Human Intelligence due to the 
> relationship of intelligence to an accurate (and effective) model of the 
> world.  There are many model areas where "accurate" doesn't compute.
> 
> 
> Stan
> http://www.footnotestrongai.com
> 
> 
> 
> 
> Ed Porter wrote:
>> Stan,
>>
>> Thanks for speaking up.
>>
>> I look forward to seeing if you can actually provide any strong arguments
>> for the fact that strong AI will probably not be strong.
>>
>> Ed Porter
>>
>> -----Original Message-----
>> From: Stan Nilsen [mailto:[EMAIL PROTECTED] 
>> Sent: Monday, December 10, 2007 5:49 PM
>> To: agi@v2.listbox.com
>> Subject: Re: [agi] AGI and Deity
>>
>> Lest a future AGI scan these communications in developing its attitude 
>> about God, for the record there are believers on this list. I am one of 
>> them.
>>
>> I'm not pushing my faith, but from this side, the alternatives are not 
>> that impressive either.  Creation by chance, by random fluctuations of 
>> strings that only exist in 12 or 13 imaginary dimensions etc. is not 
>> very brilliant or conclusive.  Even the sacred "evolution" takes a self 
>> replicator to begin the process - if only the nanotechnologists had one 
>> of those simple things...
>>
>> I'm not offended by the discussion, just want to say hi!
>>
>> Hope to have my website up by end of this week.  The thrust of the 
>> website is that STRONG AI might not be that strong.  And, BTW I have 
>> notes about a write up on "Will a Strong AI pray?"
>> I've enjoyed the education I'm getting here.  Only been a few weeks, but 
>> informative.
>>
>> Stan Nilsen
>> ps Lee Strobel in "The Case for Faith" addresses issues from the 
>> believer's point of view in an entertaining way.
>>
>>
>> Ed Porter wrote:
>>> Charles, 
>>>
>>> I agree very much with the first paragraph of your post below, and
>>> generally with much of the rest of what it says.
>>>
>>> I would add that there probably is something to the phenomenon that John
>>> Rose is referring to, i.e., that faith seems to be valuable to many
>>> people.  Perhaps it is somewhat like owning a lottery ticket before its
>>> drawing.  It can offer desired hope, even if the hope might be
>>> unrealistic.  But whatever you think of the odds, it is relatively clear
>>> that religion does make some people's lives seem more meaningful to them.
>>>
>>> Ed Porter 
> 
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
> 
