Ben,

This is not what I meant at all! I am not trying to make an argument
from any sort of "intuitive feeling" of "absolute free will" in that
paragraph (or, well, ever).

That paragraph was referring to Tarski's undefinability theorem.

Quoting the context directly before the paragraph in question:

"I am suggesting a broader problem that will apply to a wide
class of formulations of idealized intelligence such as AIXI: if their
internal logic obeys a particular set of assumptions, it will become
prone to Tarski's Undefinability Theorem. Therefore, we humans will be
able to point out a particular class of concepts that it cannot reason
about; specifically, the very concepts used in describing the ideal
intelligence in the first place."
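
For reference, the theorem I have in mind says, roughly: for any
interpreted formal language L expressive enough to represent its own
syntax (via Gödel coding), no formula of L defines truth for L. In
symbols (my paraphrase, with \mathcal{M} the intended model of L):

    \neg\exists\, T(x) \text{ in } L \;\;
        \forall \varphi \in \mathrm{Sent}(L):\quad
        \mathcal{M} \models T(\ulcorner \varphi \urcorner)
        \leftrightarrow \varphi

In particular, arithmetic truth is not definable in first-order
arithmetic, and the same diagonal argument applies to the stronger
languages I mention below.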

To re-explain: We might construct generalizations of AIXI that use a
broader range of models. Specifically, it seems reasonable to try
models expressed in extensions of first-order arithmetic, such as
second-order arithmetic (analysis), ZF set theory, and so on. (Models
expressed in first-order logic could, of course, be considered
equivalent to Turing-machine models, i.e. the current AIXI.)
Description length then becomes description-length-in-language-X (I
sketch what I mean just after the objection below). But any such
extension is doomed to a simple objection:

(1) We humans understand the semantics of formal system X.
(2) The undefinability theorem shows that formal system X cannot
understand its own semantics.
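
To make the setup concrete, here is roughly the kind of prior I have
in mind for such a generalized AIXI. This is only a sketch in my own
notation: \ell_X(\nu) is the length of the shortest definition of an
environment \nu in language X, and \nu(x) is the probability \nu
assigns to the observation history x:

    \xi_X(x) \;=\; \sum_{\nu\ \text{definable in}\ X}
        2^{-\ell_X(\nu)}\, \nu(x)

AIXI itself is the special case where the environments are the ones
definable by Turing machines and \ell is program length. Premises (1)
and (2) then apply to whichever language X we pick.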

That is what needs an explanation. One explanation we can all agree
is wrong: "Humans are magic; we're better than any formal system."
One that Charles Hixson is OK with, but that I dislike: "Humans
approximate a generalization of AIXI satisfying the above assumptions;
therefore the logic we use is some extension of arithmetic, but we are
incapable of defining that logic, thanks to the undefinability
theorem."

--Abram

On Tue, Oct 21, 2008 at 9:21 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> I am completely unable to understand what this paragraph is supposed
> to mean:
>
> ***
> One reasonable way of avoiding the "humans are magic" explanation of
> this (or "humans use quantum gravity computing", etc) is to say that,
> OK, humans really are an approximation of an ideal intelligence
> obeying those assumptions. Therefore, we cannot understand the math
> needed to define our own intelligence. Therefore, we can't engineer
> human-level AGI. I don't like this conclusion! I want a different way
> out.
> ***
>
> Explanation of WHAT?  Of your intuitive feeling that you are
> uncomputable, that you have no limits in what you can do?
>
> Why is this intuitive feeling any more worthwhile than some people's
> intuitive feeling that they have some kind of absolute free will not
> allowed by classical physics or quantum theory??
>
> Personally, my view is as follows.  Science does not need to
> intuitively explain all aspects of our experience: what it has to do
> is make predictions about finite sets of finite-precision
> observations, based on previously collected finite sets of
> finite-precision observations.
>
> It is not impossible that we are unable to engineer intelligence,
> even though we are intelligent.  However, your intuitive feeling of
> awesome supercomputable powers seems an extremely weak argument in
> favor of this inability.
>
> You have not convinced me that you can do anything a computer can't do.
>
> And, using language or math, you never will -- because any finite
> set of symbols you can utter could also be uttered by some
> computational system.
>
> -- Ben G
>
>
>
>
> On Tue, Oct 21, 2008 at 9:13 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>>
>> Charles,
>>
>> You are right to call me out on this, as I really don't have much
>> justification for rejecting that view beyond "I don't like it, it's
>> not elegant".
>>
>> But, I don't like it! It's not elegant!
>>
>> About the connotations of "engineer"... more specifically, I should
>> say that this prevents us from making one universal normative
>> mathematical model of intelligence, since our logic cannot describe
>> itself. Instead, we would be doomed to make a series of more and more
>> general models (AIXI being the first and most narrow), all of which
>> fall short of human logic.
>>
>> Worse, the implication is that this is not because human logic sits at
>> some sort of maximum; human intelligence would be just another rung in
>> the ladder from the perspective of some mathematically more powerful
>> alien species, or human mutant.
>>
>> --Abram
>>
>> On Tue, Oct 21, 2008 at 3:29 PM, Charles Hixson
>> <[EMAIL PROTECTED]> wrote:
>> > Abram Demski wrote:
>> >>
>> >> Ben,
>> >> ...
>> >> One reasonable way of avoiding the "humans are magic" explanation of
>> >> this (or "humans use quantum gravity computing", etc) is to say that,
>> >> OK, humans really are an approximation of an ideal intelligence
>> >> obeying those assumptions. Therefore, we cannot understand the math
>> >> needed to define our own intelligence. Therefore, we can't engineer
>> >> human-level AGI. I don't like this conclusion! I want a different way
>> >> out.
>> >>
>> >> I'm not sure the "guru" explanation is enough... who was the Guru for
>> >> Humankind?
>> >>
>> >> Thanks,
>> >>
>> >> --Abram
>> >>
>> >>
>> >
>> > You may not like "Therefore, we cannot understand the math needed
>> > to define our own intelligence," but I'm rather convinced that it's
>> > correct.  OTOH, I don't think that it follows from this that humans
>> > can't build a better than human-level AGI.  (I didn't say
>> > "engineer", because I'm not certain what connotations you put on
>> > that term.)  This does, however, imply that people won't be able to
>> > understand the "better than human-level AGI".  They may well,
>> > however, understand parts of it, probably large parts.  And they
>> > may well be able to predict with fair certitude how it would react
>> > in numerous situations.  Just not in numerous other situations.
>> >
>> > Care, then, must be used in the design so that we can predict
>> > favorable motivations behind its actions in important-to-us areas.
>> > Even this is probably impossible in detail, but then it doesn't
>> > *need* to be understood in detail.  If you can predict that it will
>> > make better choices than we can, and that its motives are
>> > benevolent, and that it has a good understanding of our
>> > desires... that should suffice.  And I think we'll be able to do
>> > considerably better than that.
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "Nothing will ever be attempted if all possible objections must be first
> overcome" - Dr Samuel Johnson
>
>
> ________________________________
> agi | Archives | Modify Your Subscription


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com

Reply via email to