> I'm not sure that I'm responding to your intended meaning, but: all
> computers are in reality finite-state machines, including the brain
> (granted we don't think the real-number calculations on the cellular
> level are fundamental to intelligence). However, the finite state
> machines we call PCs are so large that it is convenient to pretend
> they have infinite memory; and when we do this, we get a machine that
> is equivalent in power to a Turing machine. But a Turing machine has
> an infinite tape, so it cannot really exist (the real computer
> eventually runs out of memory). Similarly, I'm arguing that the human
> brain is so large in particular ways that it is convenient to treat it
> as an even more powerful machine (perhaps an infinite-time Turing
> machine), despite the fact that such a machine cannot exist (we only
> have a finite amount of time to think). Thus a "spurious infinity" is
> not so spurious.

Abram,

Thanks for responding.  You know, I might be in a bit over my head with
some of the terminology in your paper, so I apologize in advance, but
just to clarify: "spurious infinity," according to Hegel, is the sleight
of hand that happens when quantity transitions surreptitiously into a
quality.  At some point, counting up, we are simply not talking about
any number at all, but about a quality of being REALLY SUPER BIG, as we
make a kind of leap.

According to him, when we talk about infinity we are talking about some
idea of a huge number (in this case, of calculations) and, to use a
phrase he liked, "imaginary being."  So, since I am kind of a Hegelian
of sorts, when I scanned the paper it looked like it argued that it is
not possible to compute something I had already become convinced was
imaginary anyway.  That would be true if you bought into Hegel's
definition of infinity, and I realize there aren't a lot of Hegelians
around.  But tomorrow I will read further.

Mike


>
> On Mon, Jun 16, 2008 at 9:19 PM, Mike Archbold <[EMAIL PROTECTED]> wrote:
>>> I previously posted here claiming that the human mind (and therefore
>>> an ideal AGI) entertains uncomputable models, counter to the
>>> AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
>>> Anyway, I hope I'm not being too annoying if I try to argue the point
>>> once again. This paper also argues the point:
>>>
>>> http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf
>>>
>>
>> It looks like the paper hinges on:
>> "None of this prior work takes account of Gödel's intuition,
>> repeatedly communicated to Hao Wang, that human minds "converge to
>> infinity" in their power, and for this reason surpass the reach of
>> ordinary Turing machines."
>>
>> The thing to watch out for here is what Hegel described as the
>> "spurious infinity," which is just the imagination thinking up some
>> really big imaginary quantity; no matter how big, you can always
>> envision "+1," but the result is always just another imaginary big
>> number, to which you can add another "+1"... The point being that
>> infinity is an idealistic quality, not a computable numeric quantity
>> at all, i.e., not numerical; we are talking about thought as such.
>>
>> I didn't read the whole paper, but the point I wanted to make was
>> that Hegel takes up the issue of infinity in his Science of Logic,
>> which I think is a good ontology in general because it engages a lot
>> of the issues AI struggles with, like the ideal nature of quality and
>> quantity, and also infinity.
>>
>> Mike Archbold
>> Seattle
>>
>>
>>> The paper includes a study of the uncomputable "busy beaver" function
>>> up to x=6. The authors claim that their success at computing busy
>>> beaver strongly suggests that humans can hypercompute.
>>>
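[For concreteness, the smallest case of the busy beaver function is easy
to check by direct simulation: the standard 2-state, 2-symbol champion
machine halts after 6 steps, leaving 4 ones on the tape. A minimal
sketch, not taken from the paper; the simulator and its names are mine:]

```python
def run_tm(transitions, max_steps):
    """Simulate a 2-symbol Turing machine started on a blank (all-0) tape.
    `transitions` maps (state, symbol) -> (write, move, next_state), with
    move in {-1, +1} and next_state 'H' meaning halt.  Returns
    (steps_taken, number_of_ones) if it halts within max_steps, else None
    (which, in the spirit of the thread, is only a default assumption)."""
    tape = {}              # sparse tape: position -> symbol, default 0
    pos, state = 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = transitions[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == 'H':
            return step, sum(tape.values())
    return None            # no halt observed within the budget

# The known 2-state, 2-symbol busy beaver champion.
bb2 = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),
}
```

Running `run_tm(bb2, 100)` gives (6, 4): six steps, four ones, matching
the standard values for the 2-state case.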
>>> I believe the authors take this to imply that AGI cannot succeed on
>>> current hardware; I am not suggesting this. Instead, I offer a fairly
>>> concrete way to make deductions using a restricted class of
>>> uncomputable models, as an illustration of the idea (and as weak
>>> evidence that the general case can be embodied on computers).
>>>
>>> The method is essentially nonmonotonic logic. Computable predicates
>>> can be represented in any normal way (first-order logic, lambda
>>> calculus, a standard programming language...). Computably enumerable
>>> predicates (such as the halting problem) are represented by a default
>>> assumption of "false", plus the computable method of enumerating true
>>> cases. To reason about such a predicate, the system allocates however
>>> much time it can spare to trying to prove a case true; if at the end
>>> of that time it has not found a proof by the enumeration method, it
>>> considers it false. (Of course it could come back later and try
>>> harder, too.) Co-enumerable predicates are similarly assumed true
>>> until a counterexample is found.
>>>
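[The time-budgeted, default-false scheme described above can be sketched
in a few lines. A toy enumerable predicate stands in for the halting
problem; all names are illustrative, not the actual system:]

```python
def decide_ce(enumerate_true_cases, query, budget):
    """Time-bounded, default-false decision for a computably enumerable
    predicate.  `enumerate_true_cases` is a zero-argument function
    returning a generator of the predicate's true cases; we spend at most
    `budget` enumeration steps looking for `query`.  If it turns up, the
    answer True is certain; otherwise we fall back on the default
    "False", which a later call with a larger budget may revise."""
    for i, case in enumerate(enumerate_true_cases()):
        if i >= budget:
            break           # out of time: keep the default assumption
        if case == query:
            return True     # proved true by enumeration
    return False            # default, possibly revised later

def squares():
    """Toy enumerable predicate: 'n is a perfect square'."""
    i = 0
    while True:
        yield i * i
        i += 1
```

With a budget of 10, `decide_ce(squares, 144, 10)` wrongly answers
False (144 is only the 13th square enumerated); raising the budget to 20
revises the verdict to True, which is exactly the "come back later and
try harder" behavior described above.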
>>> Similar methodology can extend the class of uncomputables we can
>>> handle somewhat farther. Consider the predicate "all Turing machines
>>> of class N halt", where N is a computably enumerable class. Neither
>>> the true cases nor the false cases of this predicate are computably
>>> enumerable. Nonetheless, we can characterize the predicate by
>>> assuming it is true until a counterexample is "found": a Turing
>>> machine that doesn't seem to halt when run as long as we can afford
>>> to run it. If our best efforts (within time constraints) fail to find
>>> such a machine, then we stick with the default assumption "true". (A
>>> simplistic nonmonotonic logic can't quite handle this: at any stage
>>> of the search, we would have many Turing machines still at their
>>> default status of "nonhalting", which would make the predicate seem
>>> always-false; we need to only admit assumptions that have been
>>> "hardened" by trying to disprove them for some amount of time.)
>>>
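[The "hardened assumption" idea can be sketched the same way: the claim
"all machines of class N halt" is held by default, and a machine only
counts as a counterexample after we have actually run it for the current
budget. Toy machines and illustrative names, not a real TM interface:]

```python
def make_machine(halt_at):
    """Toy 'machine': halts after halt_at steps, or never if halt_at is
    None.  Calling run(budget) runs it for up to `budget` steps and
    reports whether it was observed to halt."""
    def run(budget):
        return halt_at is not None and halt_at <= budget
    return run

def all_halt(machines, budget):
    """Default-true decision for "all of these machines halt".  Each
    machine is run for `budget` steps (the hardening effort); one that
    fails to halt in that time counts as an apparent counterexample.
    Verdicts are revisable: a larger budget may restore the default.
    `machines` is the finite prefix of class N enumerated so far."""
    for run in machines:
        if not run(budget):
            return False    # apparent counterexample found
    return True             # default assumption survives the effort

fast = [make_machine(3), make_machine(7)]
slow = fast + [make_machine(50)]
```

Here `all_halt(slow, 10)` is False (the third machine looks like a
counterexample at that budget), while `all_halt(slow, 100)` revises the
verdict to True; a genuinely non-halting machine, `make_machine(None)`,
is a counterexample at every budget.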
>>> This may sound "so easy that a Turing machine could do it". And it
>>> is, for set cutoffs. But the point is that an AGI that only
>>> considered computable models, such as AIXI, would never converge to
>>> the correct model in a world that contained anything uncomputable,
>>> whereas it seems a human could. (AIXI would find Turing machines that
>>> were ever-closer to the right model, but could never see that there
>>> was a simple pattern behind these ever-larger machines.)
>>>
>>> I hope that this makes my claim sound less extreme. Uncomputable
>>> models of the world are not really so hard to reason about, if we're
>>> comfortable with a logic that makes probably-true conclusions rather
>>> than definitely-true ones.
>>>
>>>
>>> -------------------------------------------
>>> agi
>>> Archives: http://www.listbox.com/member/archive/303/=now
>>> RSS Feed: http://www.listbox.com/member/archive/rss/303/
>>> Modify Your Subscription:
>>> http://www.listbox.com/member/?&;
>>> Powered by Listbox: http://www.listbox.com
>>>