Ben,

A few points concerning the central argument:

--Reading the argument again, I misinterpreted it the same way I had
the first time (until I recalled the details of our previous
discussion). The presentation of the argument leads me to assume that
U is some kind of oracle directly accessible to all the agents in the
community, which, as we previously discussed, causes the argument to
fail. I think the argument would be clearer if you emphasized that
this is not supposed to be the case.

--I am unclear about your intentions for the "YES" case. All I can
see is you conceding that in the YES case "it may be easier for A2 to
internally make use of U".

And now a more off-the-wall idea. It seems possible to show that
beings whose minds "run on" Method X (be it a finite-state machine, a
Turing machine, or a hypermachine) cannot possibly find scientific use
for concepts which involve Method X. This is somewhat inexact. One way
to formalize it is to take "methods" to mean different logics (rather
than different types of machine), and derive the result as a trivial
corollary of Tarski's undefinability theorem: entities that use
Method X cannot understand it, so of *course* they cannot find a place
for it in their science.
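For reference, here is a rough statement of the theorem I have in mind
(my paraphrase; it glosses over the usual hypotheses about the
strength of the theory):

```latex
% Tarski's undefinability theorem (informal statement):
% Let $L$ be the language of first-order arithmetic. Then no $L$-formula
% $\mathrm{True}(x)$ defines arithmetic truth; that is, there is no
% $\mathrm{True}(x)$ such that for every $L$-sentence $\varphi$,
\[
  \mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
  \quad\Longleftrightarrow\quad
  \mathbb{N} \models \varphi ,
\]
% where $\ulcorner \varphi \urcorner$ is the Goedel number of $\varphi$.
```

On this reading, a logic strong enough to serve as a "Method X" cannot
contain its own truth predicate, which is one way of cashing out
"cannot understand it".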

Perhaps other formalizations of the idea are more interesting. Of
course, as AGI people we hope that we *can* understand the mind in
some sense.

--Abram

On Mon, Dec 29, 2008 at 1:45 PM, Ben Goertzel <[email protected]> wrote:
>
> Hi,
>
> I expanded a previous blog entry of mine on hypercomputation and AGI into a
> conference paper on the topic ... here is a rough draft, on which I'd
> appreciate commentary from anyone who's knowledgeable on the subject:
>
> http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf
>
> This is a theoretical rather than practical paper, although it does attempt
> to explore some of the practical implications as well -- e.g., in the
> hypothesis that intelligence does require hypercomputation, how might one go
> about creating AGI?   I come to a somewhat surprising conclusion, which is
> that -- even if intelligence fundamentally requires hypercomputation -- it
> could still be possible to create an AI via making Turing computer programs
> ... it just wouldn't be possible to do this in a manner guided entirely by
> science; one would need to use some other sort of guidance too, such as
> chance, imitation or intuition...
>
> -- Ben G
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [email protected]
>
> "I intend to live forever, or die trying."
> -- Groucho Marx
>



-- 
Abram Demski
Public address: [email protected]
Public archive: http://groups.google.com/group/abram-demski
Private address: [email protected]


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
