OK ... but are both of these hypothetical computer programs running on
standard contemporary chips, or does either of them use weird,
supposedly-uncomputability-supporting chips?  ;-)

Of course, a computer program can use any axiom set it wants to analyze its
data ... just as we can now use automated theorem-provers to prove stuff
about uncomputable entities, in a formal sense...
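
For instance -- a toy sketch of what I mean, in Lean (purely my
illustration, with "halts" posited hypothetically, not drawn from any
real project): we can take an uncomputable halting predicate as
axiomatic and then prove things about it formally, with no computation
anywhere...

    -- Toy sketch: posit an uncomputable "halting predicate" axiomatically,
    -- then reason about it formally (Lean 4; 'halts' is hypothetical).
    axiom halts : Nat → Prop

    -- Classically, every instance is decided -- even though no program
    -- could compute the answer.
    theorem halts_or_not (n : Nat) : halts n ∨ ¬ halts n :=
      Classical.em (halts n)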

By the way, I'm not sure in what sense I'm a "constructivist."  I'm not
willing to commit to the statement that the universe is finite, or that only
finite math has "meaning."  But, it seems to me that, within the scope of
*science* and *language*, as currently conceived, there is no *need* to
posit anything non-finite.  Science and language are not necessarily
comprehensive of the universe....  Potentially (though I doubt it) mind is
uncomputable in a way that makes it impossible for science and math to grasp
it well enough to guide us in building an AGI ;-) ... and, interestingly, in
that case we could still potentially build an AGI by copying a human brain
... and then randomly tinkering with it!!

ben

On Wed, Oct 29, 2008 at 1:45 PM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Ben,
>
> The difference can, I think, best be illustrated with two hypothetical
> AGIs. Both are supposed to be learning that "computers are
> approximately Turing machines". The first, made by you, interprets
> this constructively (let's say relative to PA). The second, made by
> me, interprets this classically (so it will always take the strongest
> set of axioms that it suspects to be consistent).
>
> The first AGI will check how well the computer's halting matches the
> positive cases it can prove in PA, and how well its non-halting
> matches the negative cases it can prove in PA. It will ignore the
> halting/non-halting behavior in cases where it can prove nothing.
>
> The second AGI will check how well the computer's halting matches the
> positive cases it can prove in the axiom system of its choice, and how
> well its non-halting matches the negative cases it can prove in that
> system; *plus* it will check whether the computer fails to halt in the
> cases where it can prove nothing (after significant effort).
>
> Of course, both will conclude nearly the same thing: the computer is
> similar to the formal entity within specific restrictions. The second
> AGI will have slightly more data (extra axioms plus information in
> cases when it can't prove anything), but it will be learning a
> formally different statement too, so a direct comparison isn't quite
> fair. Anyway, I think this clarifies the difference.
>
> --Abram
>
> On Wed, Oct 29, 2008 at 1:13 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> >>
> >> >
> >> > But the question is what does this mean about any actual computer,
> >> > or any actual physical object -- which we can only communicate about
> >> > clearly
> >> > insofar as it can be boiled down to a finite dataset.
> >>
> >> What it means to me is that "Any actual computer will not halt (with a
> >> correct output) for this program". An actual computer will keep
> >> crunching away until some event happens that breaks the metaphor
> >> between it and the abstract machine -- memory overload, power failure,
> >> et cetera.
> >
> > Yes ... this can be concluded **if** you can convince yourself that the
> > formal model corresponds to the physical machine.
> >
> > And to do *this*, you need to use a finite set of finite data points ;-)
> >
> > ben
> >
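
To make Abram's two checks concrete, here is a minimal sketch in Python
-- purely illustrative, with stub functions standing in for a bounded
theorem-prover call and for resource-limited execution (the memory
overload / power failure that "breaks the metaphor"):

    # Illustrative sketch only.  provable() and run_with_budget() are
    # hypothetical stand-ins: a real system would call an automated
    # theorem prover with an effort bound, and would actually run the
    # program under resource limits.

    def provable(claim, program, axioms, effort=10_000):
        """Stub for a bounded theorem-prover call; here it always gives up."""
        return False

    def run_with_budget(program, steps):
        """Stub: run until halting, or until the step budget breaks the
        metaphor with the abstract machine.  Returns 'halted' or 'running'."""
        return "running"

    def constructive_score(programs, budget):
        """AGI 1: score agreement only on cases PA decides; ignore the rest."""
        agree = total = 0
        for p in programs:
            if provable("halts", p, axioms="PA"):
                prediction = "halted"
            elif provable("does not halt", p, axioms="PA"):
                prediction = "running"
            else:
                continue  # PA proves nothing: ignore this case
            total += 1
            agree += (run_with_budget(p, budget) == prediction)
        return agree / total if total else None

    def classical_score(programs, budget, strong_axioms="ZFC"):
        """AGI 2: use the strongest axioms it suspects are consistent, and
        treat 'no proof after significant effort' as predicting non-halting."""
        agree = total = 0
        for p in programs:
            if provable("halts", p, axioms=strong_axioms):
                prediction = "halted"
            elif provable("does not halt", p, axioms=strong_axioms):
                prediction = "running"
            else:
                prediction = "running"  # unprovable cases count as evidence too
            total += 1
            agree += (run_with_budget(p, budget) == prediction)
        return agree / total if total else None

The sketch also makes Abram's caveat visible: the two scores are
computed over different sets of cases and measure formally different
statements, so the second AGI's extra data doesn't make the comparison
apples-to-apples.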



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein


