According to my mathematical formalization of the intelligence
concept, intelligence is defined independently of any specific goal,
so that when an organism intelligently seeks self-destruction or
other self-defeating goals, it is still being intelligent relative to
those goals.   However, it may have less TOTAL intelligence over time
than a comparable system whose intelligence is relative to
self-perpetuating or self-actualizing goals, because it will not
survive as long ;p

http://agi-conf.org/2010/wp-content/uploads/2009/06/paper_14.pdf
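One way to see the point about TOTAL intelligence over time is with a toy model (this is just an illustration, not the formalization in the linked paper): treat an agent as having some per-step goal-relative competence, and total intelligence as that competence accumulated over the agent's lifespan. The function name and the numbers below are hypothetical.

```python
# Toy model (not from the linked paper): "total intelligence over time"
# as per-step goal-relative competence summed over an agent's lifespan.

def total_intelligence(per_step_competence: float, lifespan: int) -> float:
    """Goal-relative competence accumulated over the agent's lifetime."""
    return per_step_competence * lifespan

# Two agents equally competent relative to their respective goals,
# but the self-destructive one removes itself from the game early.
self_destructive = total_intelligence(per_step_competence=1.0, lifespan=10)
self_perpetuating = total_intelligence(per_step_competence=1.0, lifespan=1000)

assert self_destructive < self_perpetuating
```

Equal competence per step, but the shorter-lived agent accumulates less over time, which is the sense in which a self-defeating goal yields less total intelligence.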

-- Ben G

On Wed, Nov 12, 2014 at 10:55 PM, Stanley Nilsen <[email protected]> wrote:
> Just for the fun of it...
>
> What if,
>
> Intelligence is an attribute of a "unit" that correlates with an assessment
> of the unit's mechanisms being used in the process of generating "higher
> quality" choices.
>
> (one could also say that stupidity is an attribute assessing the mechanisms
> that generate poor choices.)
>
> As such, a unit would have both intelligence and stupidity.
>   (Example: an ordinarily intelligent person in love or anger...)
>
>
>
>
> On 11/12/2014 07:37 PM, Ben Goertzel via AGI wrote:
>>>
>>> And, the answer to that, I think from your perspective, is that we are
>>> still looking for as yet undiscovered equations of intelligence, which
>>> would not wholesale copy 3 dimensional math, but would be something
>>> new, like a 0-dimensional math (which I've read about but don't
>>> understand).
>>>
>> Yes, I think that in the future there will be nice mathematical equations
>> for the structure and dynamics of intelligent systems...
>>
>> These will not solve all problems about intelligence immediately, of
>> course,
>> just as knowing the Navier-Stokes equation doesn't make the whole of fluid
>> dynamics trivial ... it just gives a solid basis for fluid dynamics
>> work...
>>
>> However, just as the Wright Brothers built a plane without a good
>> aerodynamic
>> theory, it may also be possible to build an AGI prior to the existence
>> of a solid
>> mathematical theory of AGI...
>>
>> I don't think it's useful to describe, say, the math of computer
>> algorithms as
>> "zero dimensional."   It's simply math about structures for which
>> dimensionality
>> is not a relevant concept....
>>
>> -- Ben
>>
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/9320387-ea529a81
>> Modify Your Subscription:
>> https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man." -- George Bernard Shaw

