Re: [agi] Hypercomputation and AGI

2009-01-01 Thread J. Andrew Rogers


On Dec 30, 2008, at 11:45 AM, Steve Richfield wrote:
Bingo! You have to tailor the techniques to the problem - more  
than just solving the equations, but often the representation of  
quantities needs to be in some sort of multivalued form.



What I meant is that if the standard algebraic reduction algorithm is
not possible, there are other algorithms you can use to generate a set
of equations that can be solved using algebraic reduction.  Humans are
pretty limited in their ability to manually apply the "generate a set
of equations that can be solved" algorithm(s) because there are too
many dimensions, but computers have no problem.  I cut my teeth
working on these types of solvers (in FORTRAN, yech).
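
For concreteness, one familiar instance of that pattern is Newton's method
for nonlinear systems: each iteration generates a linear system that ordinary
algebraic reduction (Gaussian elimination) can solve, and those solutions
converge on an answer the original equations would not yield directly. A
minimal Python sketch (the toy system and function names are illustrative
only, not anything from the FORTRAN solvers mentioned above):

    import numpy as np

    def newton_system(F, J, x0, tol=1e-10, max_iter=50):
        """Solve F(x) = 0 by repeatedly generating and solving linear systems.

        Each iteration linearizes F around the current estimate and solves
        J(x) * dx = -F(x) by ordinary algebraic reduction (Gaussian elimination).
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            dx = np.linalg.solve(J(x), -F(x))  # the algebraically reducible step
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
        return x

    # Toy example: intersect the circle x^2 + y^2 = 4 with the curve x*y = 1.
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
    J = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
    print(newton_system(F, J, x0=[2.0, 0.5]))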



I wonder if we aren't really talking about analog computation (i.e.  
computing with analogues, e.g. molecules) here? Analog computers  
have been handily out-computing digital computers for a long time.



Since digital and analog are the same thing computationally
(digital is a subset of analog), and non-digital computers have
been generally superior for several decades, this is not relevant. The
difference between digital and analog is the signal-to-noise ratio
(SNR) that has to be maintained by the computer system. You can
simulate high-SNR computers with perfect fidelity on low-SNR computers
(like digital computers) since they are equivalent, trading SNR for
frequency.  If you apply the formula for converting digital bits to
analog SNR (analog SNR in dB = 1.76 + 6.02 * bits), it becomes obvious
why things like thermal noise make it impossible to directly implement
e.g. a modest 32-bit digital processor as a non-digital equivalent.
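
Plugging numbers into that formula makes the point concrete. A small Python
sketch (illustrative; it uses the standard ideal-quantizer SNR formula and the
room-temperature Johnson noise floor):

    import math

    def analog_snr_db(bits):
        """Analog SNR (in dB) equivalent to an ideal digital word of `bits` bits."""
        return 1.76 + 6.02 * bits

    for bits in (8, 16, 32, 64):
        print(f"{bits:2d} bits -> {analog_snr_db(bits):6.1f} dB required")

    # Thermal (Johnson) noise sets a floor of k*T per Hz, about -174 dBm/Hz at
    # 290 K, so even a 1 mW full-scale signal in a 1 Hz bandwidth has only
    # ~174 dB of headroom, well short of the ~194 dB a 32-bit equivalent needs.
    k, T = 1.380649e-23, 290.0
    print(f"thermal noise floor ~ {10 * math.log10(k * T * 1e3):.0f} dBm/Hz")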



When most people talk about analog computation, they are really
talking about "real" computers, i.e. computers over the real numbers
(whether they realize it or not), which are a form of hypercomputer.
If it were possible to build such a computer, it would have some
strange consequences for physics that are not in evidence.


Cheers,

J. Andrew Rogers







Re: [agi] Hypercomputation and AGI

2009-01-01 Thread J. Andrew Rogers


On Jan 1, 2009, at 2:35 PM, J. Andrew Rogers wrote:
Since digital and analog are the same thing computationally  
(digital is a subset of analog), and non-digital computers have  
been generally superior for several decades, this is not relevant.



Gah, that should be "*digital* computers have generally been superior
for several decades" (the last non-digital hold-outs I am aware of
were designed in the late 1970s).


J. Andrew Rogers





Re: [agi] Hypercomputation and AGI

2009-01-01 Thread Abram Demski
Ben,

A few points concerning the central argument:

--Reading the argument again, I at first misread it the same way I
did on my first reading (until I recalled the details of our previous
discussion). The presentation of the argument leads me to assume that
U is some kind of oracle directly accessible to all the agents in the
community, which, as we previously discussed, causes the argument to
fail. I think the argument could be made clearer by emphasizing that
this is not supposed to be the case.

--I am not clear about your intentions for the YES case. All I can
see is you admitting that in the YES case it may be easier for A2 to
internally make use of U.

And now a more off-the-wall idea. It seems possible to show that
beings whose minds run on Method X (be it finite-state machine,
Turing machine, or hyper-machine) cannot possibly find scientific use
for concepts which involve Method X. This is somewhat inexact. One way
to formalize it is to take the methods to be different logics (rather
than different types of machine), and derive the result as a trivial
corollary of Tarski's undefinability theorem: entities that use
Method X cannot understand it, so of *course* they cannot find a place
for it in their science.
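
(For reference, the standard statement of Tarski's theorem being invoked here,
not specific to this thread: there is no arithmetical formula
$\mathrm{True}(x)$ such that

    \mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi

holds for every arithmetical sentence $\varphi$; that is, no sufficiently
expressive language can define its own truth predicate.)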

Perhaps other formalizations of the idea are more interesting. Of
course, as AGI people we hope that we *can* understand the mind in
some sense.

--Abram

On Mon, Dec 29, 2008 at 1:45 PM, Ben Goertzel b...@goertzel.org wrote:

 Hi,

 I expanded a previous blog entry of mine on hypercomputation and AGI into a
 conference paper on the topic ... here is a rough draft, on which I'd
 appreciate commentary from anyone who's knowledgeable on the subject:

 http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

 This is a theoretical rather than practical paper, although it does attempt
 to explore some of the practical implications as well -- e.g., in the
 hypothesis that intelligence does require hypercomputation, how might one go
 about creating AGI?   I come to a somewhat surprising conclusion, which is
 that -- even if intelligence fundamentally requires hypercomputation -- it
 could still be possible to create an AI via making Turing computer programs
 ... it just wouldn't be possible to do this in a manner guided entirely by
 science; one would need to use some other sort of guidance too, such as
 chance, imitation or intuition...

 -- Ben G


 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx

 



-- 
Abram Demski
Public address: abram-dem...@googlegroups.com
Public archive: http://groups.google.com/group/abram-demski
Private address: abramdem...@gmail.com




Re: [agi] Hypercomputation and AGI

2009-01-01 Thread Steve Richfield
J. Andrew,

On 1/1/09, J. Andrew Rogers and...@ceruleansystems.com wrote:


 On Jan 1, 2009, at 2:35 PM, J. Andrew Rogers wrote:

 Since digital and analog are the same thing computationally (digital
 is a subset of analog), and non-digital computers have been generally
 superior for several decades, this is not relevant.



 Gah, that should be *digital* computers have generally been superior for
 several decades  (the last non-digital hold-outs I am aware of were designed
 in the late 1970s).


Ignoring the issues of representation and display, I agree. However,
consider three interesting cases...

1.  I only survived my college differential equations course with the help
of a (now antique) EAI analog computer. There, I could simply wire it up
as the equation stated, with about as many wires as there were symbols in
the equations, without (much) concern for the internal workings of either
the computer or the equation, and get out a parametric plot any way I
wanted. With a digital computer (maybe there is suitable software by now),
I would have to worry about how the computer did things, e.g. how fine the
time slices are; see the sketch after this list. Further, I couldn't just
throw the equation at a digital computer the way I could with the analog
computer, though again, maybe software has caught up by now.

2.  Related to the above and mentioned earlier, electrolytic fish-tank
analogs have long been used to characterize electric and magnetic fields.
While these may not be as accurate as digital simulation, they give a TRUE
walk-around 3-D representation, and changes can be made in seconds with no
need to verify that the change indeed reflects the intended change. This is
another example where, at the loss of a few down-in-the-noise digits, you
can be SURE that the model indeed simulates reality. The same was long true
of wind tunnels, until things got SO valuable (and competitive) that it was
worth the millions of dollars to go after those last few digits.

3.  Conditioning high-speed phenomena. Transistors are now SO fast and have
SO much gain that they have become nearly perfect mathematical components.
Most people don't think of their TV tuners as being analog computers, but...
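
To put point 1's time-slice worry in concrete terms, here is a minimal digital
sketch in Python (a toy damped oscillator with the crudest possible
integrator; the example is mine, not EAI hardware or any particular package).
The answer you get depends visibly on how fine you make the slices:

    import numpy as np

    def euler(f, y0, t_end, dt):
        """Explicit Euler integration of dy/dt = f(t, y): the crudest 'time slicing'."""
        t, y = 0.0, np.asarray(y0, dtype=float)
        while t < t_end:
            y = y + dt * f(t, y)
            t += dt
        return y

    # Damped oscillator y'' + 0.2*y' + y = 0, written as a first-order system.
    f = lambda t, y: np.array([y[1], -0.2 * y[1] - y[0]])

    for dt in (0.5, 0.1, 0.001):   # accuracy (and even stability) hinge on dt
        y_end = euler(f, y0=[1.0, 0.0], t_end=20.0, dt=dt)
        print(f"dt={dt:>6}: y(20) ~ {y_end[0]: .4f}")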

Steve Richfield





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread J. Andrew Rogers


On Dec 30, 2008, at 12:51 AM, Steve Richfield wrote:
On a side note, there is the clean math that people learn on their  
way to a math PhD, and then there is the dirty math that governs  
physical systems. Dirty math is fraught with all sorts of multi- 
valued functions, fundamental uncertainties, etc. To work in the  
world of dirty math, you must escape the notation and figure out  
what the equation is all about, and find some way of representing  
THAT, which may well not involve simple numbers on the real-number  
line, or even on the complex number plane.



What does "dirty math" really mean?  There are engineering disciplines
essentially *built* on solving equations with gross internal
inconsistencies and unsolvable systems of differential equations. The
modern world gets along pretty admirably, suffering the very profitable
and ubiquitous consequences of their quasi-solutions to those
problems.  But it is still a lot of hairy notational math and
equations, just applied in a different context that has function
uncertainty as an assumption. The unsolvability does not lead them to
pull numbers out of a hat; they have sound methods for brute-forcing
fine approximations across a surprisingly wide range of situations.
When the "clean" mathematical methods do not apply, there are other,
different (not "dirty") mathematical methods that you can use.


Indeed, I have sometimes said the only real education I ever got in AI
was spending years studying an engineering discipline that is nothing
but reducing very complex systems of pervasively polluted data and
nonsense equations to precise predictive models, where squeezing out an
extra 1% accuracy meant huge profit.  None of it is directly
applicable; the value was internalizing that kind of systems
perspective, thinking about every complex-systems problem in those
terms, with a lot of experience algorithmically producing predictive
models from them. It was different, but it was still ordinary math,
just math appropriate for the particular problem.  The only thing you
could really say about it was that it produced a lot of great computer
scientists and no mathematicians to speak of (an odd bias, that).



 With this as background, as I see it, hypercomputation is just  
another attempt to evade dealing with some hard mathematical problems.



The definition of hypercomputation captures some very specific  
mathematical concepts that are not captured in other conceptual  
terms.  I do not see what is being evaded, since it is more like  
pointing out the obvious with respect to certain limits implied by the  
conventional Turing model.


Cheers,

J. Andrew Rogers





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread William Pearson
2008/12/29 Ben Goertzel b...@goertzel.org:

 Hi,

 I expanded a previous blog entry of mine on hypercomputation and AGI into a
 conference paper on the topic ... here is a rough draft, on which I'd
 appreciate commentary from anyone who's knowledgeable on the subject:

 http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

I'm still a bit fuzzy about your argument, so I am going to ask a
question to hopefully clarify things somewhat.

Couldn't you use similar arguments to say that we can't use science to
distinguish between finite state machines and Turing machines, and
thus question the usefulness of Turing machines for science? After
all, if you are talking about finite data sets, they can always be
represented by a compressed giant look-up table.

 Will




Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
It seems to come down to the simplicity measure... if you can have

simplicity(Turing program P that generates lookup table T)
>
simplicity(compressed lookup table T)

then the Turing program P can be considered part of a scientific
explanation...
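
A crude operational reading of that comparison, with compressed length
standing in for (inverse) simplicity (just a sketch in Python, not the
measure actually defined in the paper):

    import zlib

    # A finite "data set": the lookup table of outputs of a simple function.
    table = str([n * n for n in range(10_000)]).encode("ascii")

    # A tiny Turing program P that regenerates the same table.
    program = b"print([n * n for n in range(10_000)])"

    # Crude proxy: a shorter description counts as a simpler explanation.
    table_len = len(zlib.compress(table, 9))
    program_len = len(program)

    print("compressed lookup table:", table_len, "bytes")
    print("generating program P:   ", program_len, "bytes")
    print("P counts as explanatory?", program_len < table_len)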


On Tue, Dec 30, 2008 at 10:02 AM, William Pearson wil.pear...@gmail.comwrote:

 2008/12/29 Ben Goertzel b...@goertzel.org:
 
  Hi,
 
  I expanded a previous blog entry of mine on hypercomputation and AGI into
 a
  conference paper on the topic ... here is a rough draft, on which I'd
  appreciate commentary from anyone who's knowledgeable on the subject:
 
  http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf
 
 I'm still a bit fuzzy about your argument. So I am going to ask a
 question to hopefully clarify things somewhat.

 Couldn't you use similar arguments to say that we can't use science to
 distinguish between finite state machines and Turing machines? And
 thus question the usefulness of Turing Machines for science? As if you
 are talking about a finite data sets these can always be represented
 by a  compressed giant look up table.

  Will






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread William Pearson
2008/12/30 Ben Goertzel b...@goertzel.org:

 It seems to come down to the simplicity measure... if you can have

 simplicity(Turing program P that generates lookup table T)
 >
 simplicity(compressed lookup table T)

 then the Turing program P can be considered part of a scientific
 explanation...


Can you clarify what type of language this is in? You mention
L-expressions, but it is not very clear what that means; lambda
expressions, I'm guessing?

If you start with a language that has infinity built into its fabric,
TMs will be simple; however, if you started with a language that only
allowed FSMs to be specified, e.g. regular expressions, you wouldn't be
able to simply specify TMs, as you need to represent an infinitely
long tape in order to define a TM.
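
The textbook toy version of that gap, sketched in Python (illustrative;
a^n b^n is the standard example, not something from the paper): no
finite-state recognizer accepts exactly { a^n b^n }, while one unbounded
counter (a minimal stand-in for the tape) suffices.

    import re

    def fsm_attempt(s):
        """A pure regular expression corresponds to a finite-state machine:
        a*b* checks the shape, but finitely many states cannot enforce
        that the two counts are equal."""
        return re.fullmatch(r"a*b*", s) is not None

    def counter_recognizer(s):
        """One unbounded counter (a stand-in for unbounded tape) recognizes
        { a^n b^n : n >= 0 } exactly."""
        count, seen_b = 0, False
        for ch in s:
            if ch == "a":
                if seen_b:
                    return False
                count += 1
            elif ch == "b":
                seen_b = True
                count -= 1
                if count < 0:
                    return False
            else:
                return False
        return count == 0

    for s in ("aaabbb", "aaabb"):
        print(s, fsm_attempt(s), counter_recognizer(s))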

Is this analogous to the argument at the end of section 3? It is that
bit that is the least clear as far as I am concerned.

  Will




Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
I'm heading off on a vacation for 4-5 days [with occasional email access]
and will probably respond to this when i get back ... just wanted to let you
know I'm not ignoring the question ;-)

ben

On Tue, Dec 30, 2008 at 1:26 PM, William Pearson wil.pear...@gmail.comwrote:

 2008/12/30 Ben Goertzel b...@goertzel.org:
 
  It seems to come down to the simplicity measure... if you can have
 
  simplicity(Turing program P that generates lookup table T)
  >
  simplicity(compressed lookup table T)
 
  then the Turing program P can be considered part of a scientific
  explanation...
 

 Can you clarify what type of language this is in? You mention
 L-expressions however that is not very clear what that means. lambda
 expressions I'm guessing.

 If you start with a language that has infinity built in to its fabric,
 TMs will be simple, however if you started with a language that only
 allowed FSM to be specified e.g. regular expressions, you wouldn't be
 able to simply specify TMs, as you need to represent an infinitely
 long tape in order to define a TM.

 Is this analogous to the argument at the end of section 3? It is that
 bit that is the least clear as far as I am concerned.

  Will






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Steve Richfield
J. Andrew,

On 12/30/08, J. Andrew Rogers and...@ceruleansystems.com wrote:


 On Dec 30, 2008, at 12:51 AM, Steve Richfield wrote:

 On a side note, there is the clean math that people learn on their way
 to a math PhD, and then there is the dirty math that governs physical
 systems. Dirty math is fraught with all sorts of multi-valued functions,
 fundamental uncertainties, etc. To work in the world of dirty math, you
 must escape the notation and figure out what the equation is all about, and
 find some way of representing THAT, which may well not involve simple
 numbers on the real-number line, or even on the complex number plane.



 What does dirty math really mean?  There are engineering disciplines
 essentially *built* on solving equations with gross internal inconsistencies
 and unsolvable systems of differential equations. The modern world gets
 along pretty admirably suffering the very profitable and ubiquitous
 consequences of their quasi-solutions to those problems.  But it is still a
 lot of hairy notational math and equations, just applied in a different
 context that has function uncertainty as an assumption. The unsolvability
 does not lead them to pull numbers out of a hat, they have sound methods for
 brute-forcing fine approximations across a surprisingly wide range of
 situations. When the clean mathematical methods do not apply, there are
 other different (not dirty) mathematical methods that you can use.


The "dirty" line is rather fuzzy, but you know you've crossed it when,
instead of locations, things have probability spaces, or when you are
trying to numerically solve systems of simultaneous equations and it
always seems that at least one of them produces NaNs, etc. Algebra was
designed for the real world as we experience it, and works for most
engineering problems, but often runs aground in theoretical physics, at
least until you abandon the idea of a 1:1 correspondence between states
and variables.

Indeed, I have sometimes said the only real education I ever got in AI was
 spending years studying an engineering discipline that is nothing but
 reducing very complex systems of pervasively polluted data and nonsense
 equations to precise predictive models where squeezing out an extra 1%
 accuracy meant huge profit.  None of it is directly applicable, the value
 was internalizing that kind of systems perspective and thinking about every
 complex systems problem in those terms, with a lot of experience
 algorithmically producing predictive models from them. It was different but
 it was still ordinary math, just math appropriate for the particular
 problem.


Bingo! You have to tailor the techniques to the problem - more than just
solving the equations, but often the representation of quantities needs to
be in some sort of multivalued form.

The only thing you could really say about it was that it produced a lot of
 great computer scientists and no mathematicians to speak of (an odd bias,
 that).


Yeah, but I'd bet that you got pretty good at numerical analysis  ;-)

  With this as background, as I see it, hypercomputation is just another
 attempt to evade dealing with some hard mathematical problems.



 The definition of hypercomputation captures some very specific
 mathematical concepts that are not captured in other conceptual terms.  I do
 not see what is being evaded,


... which is where the break probably is. If someone is going to claim that
Turing machines are incapable of doing something, then it seems important to
state just what that something is.

since it is more like pointing out the obvious with respect to certain
 limits implied by the conventional Turing model.


I wonder if we aren't really talking about analog computation (i.e.
computing with analogues, e.g. molecules) here? Analog computers have been
handily out-computing digital computers for a long time. One analog computer
that produced tide tables, now in a glass case at the NOAA headquarters,
performed well for ~100 years until it was finally replaced by a large CDC
computer - and probably now with a PC. Some magnetic systems engineers still
resort to fish tank analogs rather than deal with software.

Steve Richfield





[agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
Hi,

I expanded a previous blog entry of mine on hypercomputation and AGI into a
conference paper on the topic ... here is a rough draft, on which I'd
appreciate commentary from anyone who's knowledgeable on the subject:

http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

This is a theoretical rather than practical paper, although it does attempt
to explore some of the practical implications as well -- e.g., in the
hypothesis that intelligence does require hypercomputation, how might one go
about creating AGI?   I come to a somewhat surprising conclusion, which is
that -- even if intelligence fundamentally requires hypercomputation -- it
could still be possible to create an AI via making Turing computer programs
... it just wouldn't be possible to do this in a manner guided entirely by
science; one would need to use some other sort of guidance too, such as
chance, imitation or intuition...

-- Ben G


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Hypercomputation and AGI

2008-12-29 Thread J. Andrew Rogers


On Dec 29, 2008, at 10:45 AM, Ben Goertzel wrote:
I expanded a previous blog entry of mine on hypercomputation and AGI  
into a conference paper on the topic ... here is a rough draft, on  
which I'd appreciate commentary from anyone who's knowledgeable on  
the subject:


http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

This is a theoretical rather than practical paper, although it does  
attempt to explore some of the practical implications as well --  
e.g., in the hypothesis that intelligence does require  
hypercomputation, how might one go about creating AGI?   I come to a  
somewhat surprising conclusion, which is that -- even if  
intelligence fundamentally requires hypercomputation -- it could  
still be possible to create an AI via making Turing computer  
programs ... it just wouldn't be possible to do this in a manner  
guided entirely by science; one would need to use some other sort of  
guidance too, such as chance, imitation or intuition...



As more of a meta-comment, the whole notion of hypercomputation  
seems to be muddled, insofar as super-recursive algorithms may be a  
limited example of it.


I was doing a lot of work with inductive Turing machines several years
ago, and most of the differences seemed to be definitional, e.g. what
constitutes an "algorithm" or an "answer".  For most practical purposes,
the price of implementing them in conventional discrete space is the
introduction of some (usually acceptable) error.  But if they
approximate to the point of functional convergence on a normal Turing
machine...  As best I can tell (and I have not really been paying
attention, because the arguments seem mostly to be people talking past
each other), ITMs raise some interesting philosophical questions
regarding hypercomputation.



We cannot implement a *strict* hypercomputer, but to what extent does
it count if we can asymptotically converge on the functional
consequences of a hypercomputer using a normal computer?  I suspect
it will be hard to evict the belief in Penrosian magic from the error
bars in any case.
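
One way to picture that asymptotic convergence is a trial-and-error guesser
for halting: every individual guess is ordinary Turing computation, and the
sequence of guesses converges on the hypercomputer's answer without ever
certifying it. A minimal Python sketch (the toy machines and names are mine,
purely illustrative):

    def guess_halts(program, argument, step_budget):
        """Run `program` (a generator standing in for a TM) for at most
        `step_budget` yields and guess whether it halts.

        Any fixed budget gives an ordinary computation whose guess may be
        wrong; as the budget grows, the guesses converge on the true answer
        for halting programs, but no finite budget ever certifies "never"."""
        machine = program(argument)
        for _ in range(step_budget):
            try:
                next(machine)           # one simulated step
            except StopIteration:
                return True             # halting observed within the budget
        return False                    # provisional guess, may be revised later

    # Two toy "machines": one halts after a data-dependent number of steps,
    # one loops forever.
    def collatz_like(n):
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            yield

    def loop_forever(_):
        while True:
            yield

    for budget in (10, 100, 1000):
        print(budget, guess_halts(collatz_like, 27, budget),
              guess_halts(loop_forever, 0, budget))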


Cheers,

J. Andrew Rogers





Re: [agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
Well, some of the papers in the references of my paper give formal
mathematical definitions of hypercomputation, though my paper is brief and
conceptual and not of that nature.  So although the generic concept may be
muddled, there are certainly some fully precise variants of it.

This paper surveys various formally defined varieties of hypercomputing,
though I haven't read it closely...

http://www.amirrorclear.net/academic/papers/many-forms.pdf

Anyway the argument in my paper is pretty strong and applies to any variant
with power beyond that of ordinary Turing machines, it would seem...

-- ben g

On Mon, Dec 29, 2008 at 4:18 PM, J. Andrew Rogers 
and...@ceruleansystems.com wrote:


 On Dec 29, 2008, at 10:45 AM, Ben Goertzel wrote:

 I expanded a previous blog entry of mine on hypercomputation and AGI into
 a conference paper on the topic ... here is a rough draft, on which I'd
 appreciate commentary from anyone who's knowledgeable on the subject:

 http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

 This is a theoretical rather than practical paper, although it does
 attempt to explore some of the practical implications as well -- e.g., in
 the hypothesis that intelligence does require hypercomputation, how might
 one go about creating AGI?   I come to a somewhat surprising conclusion,
 which is that -- even if intelligence fundamentally requires
 hypercomputation -- it could still be possible to create an AI via making
 Turing computer programs ... it just wouldn't be possible to do this in a
 manner guided entirely by science; one would need to use some other sort of
 guidance too, such as chance, imitation or intuition...



 As more of a meta-comment, the whole notion of hypercomputation seems to
 be muddled, insofar as super-recursive algorithms may be a limited example
 of it.

 I was doing a lot of work with inductive Turing machines several years ago,
 and most of the differences seemed to be definitional e.g. what constitutes
 an algorithm or answer.  For most practical purposes, the price of
 implementing them in conventional discrete space is the introduction of some
 (usually acceptable) error.  But if they approximate to the point of
 functional convergence on a normal Turing machine...  As best I have been
 able to tell, and I have not really been paying attention because the
 arguments seem to mostly be people talking past each other, is that ITMs
 raise some interesting philosophical questions regarding hypercomputation.


 We cannot implement a *strict* hypercomputer, but to what extent does it
 count if we can asymptotically converge on the functional consequences of
 a hypercomputer using a normal computer?  It suspect it will be hard to
 evict the belief in Penrosian magic from the error bars in any case.

 Cheers,

 J. Andrew Rogers







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Hypercomputation and AGI

2008-12-29 Thread J. Andrew Rogers


On Dec 29, 2008, at 1:22 PM, Ben Goertzel wrote:


Well, some of the papers in the references of my paper give formal  
mathematical definitions of hypercomputation, though my paper is  
brief and conceptual and not of that nature.  So although the  
generic concept may be muddled, there are certainly some fully  
precise variants of it.



My comment was not really against the argument you make in the paper,  
nor do I disagree with your definition of hypercomputation. (BTW,  
run spellcheck.)  I was referring to the somewhat anomalous difficulty  
of deciding whether or not some computational models truly meet that  
definition as a practical matter.



Anyway the argument in my paper is pretty strong and applies to any  
variant with power beyond that of ordinary Turing machines, it would  
seem...



No disagreement with that, which is why I called it a meta-comment. :-)


Super-recursive algorithms, inductive Turing machines, and related
computational models can be made to sit in a somewhat fuzzy place with
respect to whether or not they are hypercomputers or normal Turing
machines.  A Turing machine that asymptotically converges on producing
the same result as a hypercomputer is an interesting case, insofar as
the results they produce may be close enough that you can consider the
difference to be below the noise floor. If they are functionally
equivalent under that somewhat unusual definition, then you effectively
have equivalence to a hypercomputer without the hypercomputer: not
strictly by definition, but within some strictly implied error bound
for the purposes of comparing output (which is all we usually care
about).


The concept of non-isotropic distributions of random numbers has  
always interested me for much the same reason, since there seems to be  
a similar concept at work there.


Cheers,

J. Andrew Rogers



