> Mike A.:
>
> Well, if you're convinced that infinity and the uncomputable are
> imaginary things, then you've got a self-consistent view that I can't
> directly argue against. But are you really willing to say that
> seemingly understandable notions such as the problem of deciding
> whether a given Turing machine will eventually halt are nonsense,
> simply because we would need infinite time to verify that one doesn't
> halt?
>

Abrahm,

I don't think I really disagree with you, and as I said, I need to go
back and read the whole paper; it looks interesting.  I will make a
comment that is not a counterargument but rather something to think
about.

By "imaginary" I mean something that cannot really be pointed at.

I was thinking about this: suppose we took a rule base -- a finite set
of rules -- and let's say we start with that; clearly finite.  We can
identify each rule and literally point to it on the screen (think of
Cyc).  We have a QUANTITY of rules.

Then suppose we have determined that in order to simulate human thought
we must make the rule base infinite, because our
mathematical/philosophical investigations have shown that the infinite
is necessary to achieve AGI -- say, infinite time and an infinite
quantity of rules.

At that point the finite quantity of rules would disappear, and we would
cross over strictly into the imaginary idea of a really huge number of
rules and a huge imaginary time.  That is, we are no longer speaking of
a QUANTITY of rules and time -- we have surreptitiously shifted from
discussing quantities to qualities.

Whereas before we were talking about a finite set of rules, now we fold
our hands and say, "well, it's an infinite set of rules and time now."
We have shifted from a finite quantity that could be examined to an
infinity-quality that cannot be examined and is wholly imaginary, yet we
use that as a premise in a proof.

When we make this transition, it seems to me that the shift is so
radical that it is impossible to justify making the step, because, as I
mentioned, it involves a surreptitious shift from quantity to quality.

Incidentally, Hegel held that the "true infinite" (as opposed to the
spurious infinite, which is the unwarranted transition from quantity to
quality) was human thought.  I've been working through a book by David
Carlson, a law professor, which makes clear some of Hegel's very obscure
writing.

Mike


> Ben J.:
>
> "Step 3 requires human society to invent new concepts and techniques,
> and to thereby perform hypercomputation. I don't think that a
> computable nonmonotonic logic really solves this problem."
>
> I agree that nonmonotonic logic is not enough, not nearly. The point
> is just that since there are computable approximations of
> hypercomputers, it is not unreasonable to allow an AGI to reason about
> uncomputable objects.
>
> "My own interpretation of the work is that an individual person is no more
> powerful than a Turing machine (though, this point isn't discussed in the
> paper), but that society as a whole is capable of hypercomputation because
> we can keep drawing upon more resources to solve a problem: we build
> machines, we reproduce, we interact with and record our thoughts in the
> environment. Effectively, society as a whole becomes somewhat like a Zeus
> machine - faster and more complex with each moment."
>
> Something like this is mentioned in the paper as objection #4. But
> personally, I'd respond as follows: if a society of AGIs can
> hypercompute, then why not a single AGI with a society-of-mind style
> architecture? It is difficult to distinguish between a closely-linked
> society and a loosely-knit individual, where AI is concerned. So I
> argue that if a society can (and should) hypercompute, there is no
> reason to suspect that an individual can't (or shouldn't).
>
> On Mon, Jun 16, 2008 at 11:37 PM, Mike Archbold <[EMAIL PROTECTED]>
> wrote:
>>> I'm not sure that I'm responding to your intended meaning, but: all
>>> computers are in reality finite-state machines, including the brain
>>> (granted we don't think the real-number calculations on the cellular
>>> level are fundamental to intelligence). However, the finite state
>>> machines we call PCs are so large that it is convenient to pretend
>>> they have infinite memory; and when we do this, we get a machine that
>>> is equivalent in power to a Turing machine. But a Turing machine has
>>> an infinite tape, so it cannot really exist (the real computer
>>> eventually runs out of memory). Similarly, I'm arguing that the human
>>> brain is so large in particular ways that it is convenient to treat it
>>> as an even more powerful machine (perhaps an infinite-time Turing
>>> machine), despite the fact that such a machine cannot exist (we only
>>> have a finite amount of time to think). Thus a "spurious infinity" is
>>> not so spurious.
>>
>> Abrahm,
>>
>> Thanks for responding.  You know, I might be a bit in over my head
>> with some of the terminology in your paper, so I apologize in advance;
>> but just to clarify: "spurious infinity," according to Hegel, is the
>> sleight of hand that happens when quantity transitions surreptitiously
>> into a quality.  At some point counting up, we are simply not talking
>> about any number at all, but about a quality of being REALLY SUPER BIG,
>> as we make a kind of leap.
>>
>> According to him, when we talk about infinity we are talking about
>> some idea of a huge number (in this case, of calculations) and, to use
>> a phrase he liked, an "imaginary being."  Since I am kind of a
>> Hegelian, when I scanned the paper it looked like it argued that it is
>> not possible to compute something I had become convinced was imaginary
>> anyway.  That would be true if you bought into Hegel's definition of
>> infinity, and I realize there aren't a lot of Hegelians around.  But
>> tomorrow I will read further.
>>
>> Mike
>>
>>
>>>
>>> On Mon, Jun 16, 2008 at 9:19 PM, Mike Archbold <[EMAIL PROTECTED]>
>>> wrote:
>>>>> I previously posted here claiming that the human mind (and
>>>>> therefore an ideal AGI) entertains uncomputable models, counter to
>>>>> the AIXI/Solomonoff model. There was little enthusiasm about this
>>>>> idea. :)  Anyway, I hope I'm not being too annoying if I try to
>>>>> argue the point once again. This paper also argues the point:
>>>>>
>>>>> http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf
>>>>>
>>>>
>>>> It looks like the paper hinges on:
>>>> "None of this prior work takes account of Gödel intuition,
>>>> repeatedly communicated to Hao Wang, that human minds "converge to
>>>> infinity" in their power, and for this reason surpass the reach of
>>>> ordinary Turing machines."
>>>>
>>>> The thing to watch out for here is what Hegel described as the
>>>> "spurious infinity," which is just the imagination thinking of some
>>>> imaginary quantity, really big; but no matter how big, you can
>>>> always envision "+1", and the result is always just another
>>>> imaginary big number, to which you can add another "+1"... the point
>>>> being that infinity is an idealistic quality, not a computable
>>>> numeric quantity at all -- i.e., not numerical; we are talking about
>>>> thought as such.
>>>>
>>>> I didn't read the whole paper, but the point I wanted to make was
>>>> that Hegel takes up the issue of infinity in his Science of Logic,
>>>> which I think is a good ontology in general because it addresses a
>>>> lot of issues AI struggles with, like the ideal nature of quality
>>>> and quantity, and also infinity.
>>>>
>>>> Mike Archbold
>>>> Seattle
>>>>
>>>>
>>>>> The paper includes a study of the uncomputable "busy beaver"
>>>>> function up to x=6. The authors claim that their success at
>>>>> computing busy beaver strongly suggests that humans can
>>>>> hypercompute.
>>>>>
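For concreteness, here is what a brute-force attack on the busy beaver
step function looks like for the trivial case n=2. This is my own
illustration, not the authors' method; the machine encoding and names
are mine. The step cap is the crux: in general it makes the result only
a lower bound, which is exactly why the function is uncomputable.

```python
from itertools import product

def steps_to_halt(table, n_states, cap):
    """Run a 2-symbol TM on a blank tape; return the number of
    transitions before it enters the halt state (index n_states),
    or None if it hasn't halted within cap steps."""
    state, pos, tape = 0, 0, {}
    for step in range(cap):
        if state == n_states:
            return step
        # Each table entry is (symbol_to_write, move, next_state).
        tape[pos], move, state = table[(state, tape.get(pos, 0))]
        pos += move
    return None

def busy_beaver_steps(n_states, cap):
    """Brute-force the busy beaver step function S(n): enumerate every
    n-state, 2-symbol machine, run each for up to cap steps, and take
    the max over those observed to halt."""
    keys = [(s, r) for s in range(n_states) for r in (0, 1)]
    moves = [(w, m, nxt) for w in (0, 1) for m in (-1, 1)
             for nxt in range(n_states + 1)]   # n_states acts as halt
    best = 0
    for combo in product(moves, repeat=len(keys)):
        steps = steps_to_halt(dict(zip(keys, combo)), n_states, cap)
        if steps is not None:
            best = max(best, steps)
    return best

print(busy_beaver_steps(2, cap=20))  # 6
```

For n=2 a small cap suffices and the known value S(2) = 6 is recovered;
for larger n no cap can be proven sufficient without already knowing
S(n), which is where human ingenuity (per the paper) has to come in.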
>>>>> I believe the authors take this to imply that AGI cannot succeed
>>>>> on current hardware; I am not suggesting this. Instead, I offer a
>>>>> fairly concrete way to make deductions using a restricted class of
>>>>> uncomputable models, as an illustration of the idea (and as weak
>>>>> evidence that the general case can be embodied on computers).
>>>>>
>>>>> The method is essentially nonmonotonic logic. Computable
>>>>> predicates can be represented in any normal way (first-order
>>>>> logic, lambda calculus, a standard programming language...).
>>>>> Computably enumerable predicates (such as the halting problem) are
>>>>> represented by a default assumption of "false," plus the
>>>>> computable method of enumerating true cases. To reason about such
>>>>> a predicate, the system allocates however much time it can spare
>>>>> to trying to prove a case true; if at the end of that time it has
>>>>> not found a proof by the enumeration method, it considers it false.
>>>>> (Of course it could come back later and try harder, too.)
>>>>> Co-enumerable predicates are similarly assumed true until a
>>>>> counterexample is found.
>>>>>
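The default-false treatment of the halting predicate described above
can be written down directly. This is a minimal sketch under a toy
Turing machine encoding of my own (not from the paper); `budget` stands
in for however much time the system can spare.

```python
def run_tm(tm, max_steps):
    """Run a TM encoded as {(state, symbol): (state, symbol, move)}
    on a blank tape, starting in state 'start'.  Returns True if it
    enters the 'halt' state within max_steps transitions, else False."""
    state, pos, tape = "start", 0, {}
    for _ in range(max_steps):
        if state == "halt":
            return True
        state, tape[pos], move = tm[(state, tape.get(pos, 0))]
        pos += move
    return state == "halt"

def believes_halts(tm, budget):
    """Computably enumerable predicate 'tm halts': the default
    assumption is False; it flips to True only if the enumeration
    method (here, direct simulation) proves a true case within budget.
    The False answer stays revisable -- a later pass can try harder."""
    return run_tm(tm, budget)

halter = {("start", 0): ("halt", 1, 1)}    # halts after one step
looper = {("start", 0): ("start", 0, 1)}   # runs right forever
print(believes_halts(halter, budget=100))  # True: a proof was found
print(believes_halts(looper, budget=100))  # False: the default stands
```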
>>>>> Similar methodology can extend the class of uncomputables we can
>>>>> handle somewhat farther. Consider the predicate "all Turing
>>>>> machines of class N halt," where N is a computably enumerable
>>>>> class. Neither the true cases nor the false cases of this
>>>>> predicate are computably enumerable. Nonetheless, we can
>>>>> characterize the predicate by assuming it is true until a
>>>>> counterexample is "found": a Turing machine that doesn't seem to
>>>>> halt when run as long as we can afford to run it. If our best
>>>>> efforts (within time constraints) fail to find such a machine,
>>>>> then we stick with the default assumption "true." (A simplistic
>>>>> nonmonotonic logic can't quite handle this: at any stage of the
>>>>> search, we would have many Turing machines still at their default
>>>>> status of "nonhalting," which would make the predicate seem
>>>>> always-false; we need to only admit assumptions that have been
>>>>> "hardened" by trying to disprove them for some amount of time.)
>>>>>
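A minimal sketch of this default-true predicate with "hardened"
counterexamples, under a toy Turing machine encoding of my own;
`harden_steps` stands in for how long we try to disprove each
machine's halting before its non-halting status counts as evidence.

```python
def run_tm(tm, max_steps):
    """Step-limited run of a TM encoded as
    {(state, symbol): (state, symbol, move)} on a blank tape, start
    state 'start'; True iff it halts within max_steps transitions."""
    state, pos, tape = "start", 0, {}
    for _ in range(max_steps):
        if state == "halt":
            return True
        state, tape[pos], move = tm[(state, tape.get(pos, 0))]
        pos += move
    return False

def all_of_class_halt(machines, harden_steps):
    """Predicate 'every machine in class N halts': assumed True by
    default.  A machine counts as a counterexample only once it is
    'hardened' -- we tried to disprove it by running it harden_steps
    without seeing it halt -- so machines we haven't yet put that
    effort into do not make the predicate look false."""
    for tm in machines:
        if not run_tm(tm, harden_steps):
            return False   # hardened counterexample: revise to False
    return True            # best effort found none: keep the default

good = [{("start", 0): ("halt", 0, 1)},
        {("start", 0): ("a", 1, 1), ("a", 0): ("halt", 0, 1)}]
bad = good + [{("start", 0): ("start", 0, 1)}]     # contains a looper
print(all_of_class_halt(good, harden_steps=1000))  # True
print(all_of_class_halt(bad, harden_steps=1000))   # False
```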
>>>>> This may sound "so easy that a Turing machine could do it." And it
>>>>> is, for fixed cutoffs. But the point is that an AGI that only
>>>>> considered computable models, such as AIXI, would never converge
>>>>> to the correct model in a world that contained anything
>>>>> uncomputable, whereas it seems a human could. (AIXI would find
>>>>> Turing machines that were ever-closer to the right model, but
>>>>> could never see that there was a simple pattern behind these
>>>>> ever-larger machines.)
>>>>>
>>>>> I hope that this makes my claim sound less extreme. Uncomputable
>>>>> models of the world are not really so hard to reason about, if
>>>>> we're comfortable with a logic that makes probably-true
>>>>> conclusions rather than definitely-true ones.
>>>>>
>>>>>
>>>>> -------------------------------------------
>>>>> agi
>>>>> Archives: http://www.listbox.com/member/archive/303/=now
>>>>> RSS Feed: http://www.listbox.com/member/archive/rss/303/
>>>>> Modify Your Subscription:
>>>>> http://www.listbox.com/member/?&;
>>>> Powered by Listbox: http://www.listbox.com
>>>>>
>>>
>>
>



