Charles,

You are right to call me out on this, as I really don't have much
justification for rejecting that view beyond "I don't like it, it's
not elegant".

But, I don't like it! It's not elegant!

About the connotations of "engineer"... more specifically, I should
say that this prevents us from making one universal normative
mathematical model of intelligence, since our logic cannot describe
itself. Instead, we would be doomed to make a series of more and more
general models (AIXI being the first and most narrow), all of which
fall short of human logic.
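
(A toy illustration of the obstacle, in the diagonalization style — my own sketch, not anything from the AIXI literature: for any total "predictor" of boolean-valued programs, self-reference lets us build a program the predictor mispredicts, which is the same reason a logic cannot fully describe itself.)

```python
# Toy diagonalization sketch: given any claimed total predictor of
# boolean-valued, zero-argument programs, construct a program that
# the predictor gets wrong.

def make_diagonal(predictor):
    """Return a program that does the opposite of what predictor says it will do."""
    def diagonal():
        # Ask the predictor what diagonal() will return, then return the opposite.
        return not predictor(diagonal)
    return diagonal

# Example "predictor" (a stand-in): it always guesses True.
always_true = lambda prog: True

d = make_diagonal(always_true)
assert always_true(d) != d()  # the predictor is wrong about d
```

The same construction defeats any predictor that halts on its own diagonal program, which is why the tower of ever-more-general models has no top rung.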

Worse, the implication is that this is not because human logic sits at
some sort of maximum; human intelligence would be just another rung in
the ladder from the perspective of some mathematically more powerful
alien species, or human mutant.

--Abram

On Tue, Oct 21, 2008 at 3:29 PM, Charles Hixson
<[EMAIL PROTECTED]> wrote:
> Abram Demski wrote:
>>
>> Ben,
>> ...
>> One reasonable way of avoiding the "humans are magic" explanation of
>> this (or "humans use quantum gravity computing", etc) is to say that,
>> OK, humans really are an approximation of an ideal intelligence
>> obeying those assumptions. Therefore, we cannot understand the math
>> needed to define our own intelligence. Therefore, we can't engineer
>> human-level AGI. I don't like this conclusion! I want a different way
>> out.
>>
>> I'm not sure the "guru" explanation is enough... who was the Guru for
>> Humankind?
>>
>> Thanks,
>>
>> --Abram
>>
>>
>
> You may not like "Therefore, we cannot understand the math needed to define
> our own intelligence.", but I'm rather convinced that it's correct.  OTOH, I
> don't think that it follows from this that humans can't build a better than
> human-level AGI.  (I didn't say "engineer", because I'm not certain what
> connotations you put on that term.)  This does, however, imply that people
> won't be able to understand the "better than human-level AGI".  They may
> well, however, understand parts of it, probably large parts.  And they may
> well be able to predict with fair certitude how it would react in numerous
> situations.  Just not in numerous other situations.
>
> Care, then, must be taken in the design so that we can predict favorable
> motivations behind its actions in the important-to-us areas.  Even this is
> probably impossible in detail, but then it doesn't *need* to be understood
> in detail.  If you can predict that it will make better choices than we can,
> that its motives are benevolent, and that it has a good understanding
> of our desires...that should suffice.  And I think we'll be able to do
> considerably better than that.
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

