Re: [agi] the uncomputable

2008-07-07 Thread Linas Vepstas
2008/7/2 Hector Zenil [EMAIL PROTECTED]:

 Hypercomputational models basically pretend to take advantage from
 either infinite time or infinite space (including models such as
 infinite resources, Zeno machines or the Omega-rule, real computation,
 etc.), from the continuum. Depending of the density of that space/time
 continuum one can think of several models taking advantage at several
 levels of the arithmetical hierarchy.

Lest various readers of this thread be confused, let's be careful
to distinguish between physical infinity and the multitude
of mathematical infinities.  Whether or not one believes that
some model of a physical quantum hypercomputer is realizable
depends on what one believes about physics at the Planck length --
which would be speculative.

Next, using words like "infinite space" is misleading: the
surface of a sphere has an infinite number of points; yet,
upon hearing the words "infinite space", it's unusual to think
of a sphere as an example.  Note that a single qubit is a
2-sphere, and so, in the same way that there are an infinite
number of points on the surface of a sphere, so too are there
an infinite number of states in a qubit.

Does any given quantum computer make use of the infinite
number of states in one qubit? That depends ... but the point
is that one can ponder hypercomputation in a finite volume;
this is crucial; don't let words like "infinite space" accidentally
imply infinite volume -- they do not.
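The point that a qubit's pure states form a 2-sphere can be made concrete with a short sketch (pure Python, using the standard Bloch-angle parameterization; the function names are illustrative, not from any library):

```python
import math
import cmath

def qubit_state(theta, phi):
    # A pure qubit state |psi> = cos(theta/2)|0> + e^(i*phi) sin(theta/2)|1>.
    # (theta, phi) are spherical coordinates on the Bloch 2-sphere, so the
    # set of pure states is a continuum: uncountably many distinct states
    # inside a bounded, finite volume.
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

def measure_probabilities(state):
    # Measurement in the computational basis collapses that continuum to
    # just two outcomes, with these probabilities.
    a0, a1 = state
    return (abs(a0) ** 2, abs(a1) ** 2)
```

Any two distinct (theta, phi) pairs give genuinely distinct states, yet a measurement yields only a single classical bit -- which is why the continuum of states does not, by itself, imply observable hypercomputation.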

 But even if there is infinite
 space or time another issue is how to verify a hypercomputation. One
 would need another hypercomputer to verify the first and then trust in
 one.

Sure ... but we can express many theorems about Turing
machines, and know them to be true, without verifying
them on some other computer which we need to magically
trust. Similarly, one can make plenty of true statements
about hypercomputation without actually having to build
one and verifying its operation.  But perhaps I missed
some subtle point.

--linas


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] the uncomputable

2008-07-07 Thread Hector Zenil
On Mon, Jul 7, 2008 at 5:44 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
 2008/7/2 Hector Zenil [EMAIL PROTECTED]:

 Hypercomputational models basically pretend to take advantage from
 either infinite time or infinite space (including models such as
 infinite resources, Zeno machines or the Omega-rule, real computation,
 etc.), from the continuum. Depending of the density of that space/time
 continuum one can think of several models taking advantage at several
 levels of the arithmetical hierarchy.

 Lest various readers of this thread be confused, let's be careful
 to distinguish between physical infinity and the multitude
 of mathematical infinities.  Whether or not one believes that
 some model of a physical quantum hypercomputer is realizable
 depends on what one believes about physics at the Planck length --
 which would be speculative.

 Next, using words like "infinite space" is misleading: the
 surface of a sphere has an infinite number of points; yet,
 upon hearing the words "infinite space", it's unusual to think
 of a sphere as an example.

If a sphere has an infinite number of points, then it is an infinite
but bounded space, and you can theoretically build a hypercomputer
on it.

Any level of the arithmetical hierarchy can correspond to a level in
the physical world. It depends on the cardinality of the physical
space, from the continuum to any power of it, and on whether it is
dense, and how dense.

 Note that a single qubit is a
 2-sphere, and so, in the same way that there are an infinite
 number of points on the surface of a sphere, so too are there
 an infinite number of states in a qubit.

 Does any given quantum computer make use of the infinite
 number of states in one qubit? That depends ... but the point
 is that one can ponder hypercomputation in a finite volume;
 this is crucial; don't let words like "infinite space" accidentally
 imply infinite volume -- they do not.

 But even if there is infinite
 space or time another issue is how to verify a hypercomputation. One
 would need another hypercomputer to verify the first and then trust in
 one.

 Sure ... but we can express many theorems about Turing
 machines, and know them to be true, without verifying
 them on some other computer which we need to magically
 trust. Similarly, one can make plenty of true statements
 about hypercomputation without actually having to build
 one and verifying its operation.  But perhaps I missed
 some subtle point.

The big difference is that, in principle, you can verify the output of
any digital computer by hand in finite time; however, you cannot verify
a non-Turing output from a hypercomputer by hand, not even in
principle, unless you have infinite time.
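The contrast drawn here can be illustrated: for a digital computer, the whole computation is a finite list of discrete steps, each checkable by hand against the machine's transition table. A toy single-tape Turing-machine simulator that returns its full trace (the machine shown is a made-up example):

```python
def run_tm(transitions, tape, state="q0", max_steps=1000):
    # Run a one-tape Turing machine and return the full step-by-step trace.
    # Every line of the trace can be checked by hand against the transition
    # table in finite time -- the point made above about digital computers.
    tape = dict(enumerate(tape))
    head, trace = 0, []
    for _ in range(max_steps):
        if state == "halt":
            return trace, tape
        sym = tape.get(head, "_")
        new_state, new_sym, move = transitions[(state, sym)]
        trace.append((state, head, sym, new_state, new_sym, move))
        tape[head] = new_sym
        head += 1 if move == "R" else -1
        state = new_state
    raise RuntimeError("no halt within budget")

# made-up example machine: flip a string of 0s/1s, halting at the blank
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
```

A hypercomputation admits no such finite, symbol-by-symbol certificate: its output would depend on infinitely many steps or on an infinite-precision state.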


 --linas






-- 
Hector Zenil
http://zenil.mathrix.org




Re: [agi] the uncomputable

2008-07-02 Thread Abram Demski
 So yes, I think there are perfectly fine, rather simple
 definitions for computing machines that can (it seems
 like) perform calculations that turing machines cannot.
 It should really be noted that quantum computers fall
 into this class.

This is very interesting. Previously, I had heard (but not from a
definitive source) that quantum computers could compute in principle
only what a Turing machine could compute, but could do it much more
efficiently (something like the square root of the effort a Turing
machine would need, at least for some tasks). Can you cite any source
on this?
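The square-root speedup being recalled here is Grover's search. A minimal pure-Python sketch of one Grover iteration over a 4-item search space, where a single iteration already concentrates all amplitude on the marked item:

```python
def grover_once(n_items, marked):
    # One Grover iteration over a uniform superposition of n_items states.
    # For n_items = 4, a single iteration drives the marked amplitude to 1,
    # illustrating the O(sqrt(N)) query count vs. the classical O(N).
    amp = [1 / n_items ** 0.5] * n_items   # uniform superposition
    amp[marked] = -amp[marked]             # oracle: phase-flip the marked item
    mean = sum(amp) / n_items              # diffusion: reflect about the mean
    amp = [2 * mean - a for a in amp]
    return amp
```

In general roughly (pi/4)*sqrt(N) iterations are needed, versus about N/2 classical lookups on average; the set of computable functions is unchanged, only the cost.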

But I should emphasize that what I am really interested in is
computable approximation of uncomputable things. My stance is that an
AGI should be able to reason about uncomputable concepts in a coherent
manner (like we can), not that it needs to be able to actually compute
them (which we can't).
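One standard way to make "computable approximation of uncomputable things" concrete is the halting set, which is approximable from below by running programs under a growing step budget. A toy sketch (generator functions stand in for programs; all names are illustrative):

```python
def halts_within(prog, budget):
    # Run a toy "program" (a Python generator function) for at most
    # `budget` steps; report whether it halted within that budget.
    it = prog()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True
    return False

def approximate_halting(progs, budget):
    # Computable lower approximation of the uncomputable halting set:
    # as `budget` grows, the returned set only grows, and in the limit it
    # equals the true halting set -- but no computable bound tells you
    # when it has converged.  That is the sense of "approximation" here.
    return {name for name, p in progs.items() if halts_within(p, budget)}

def short():            # halts after 10 steps
    for i in range(10):
        yield i

def forever():          # never halts
    while True:
        yield 0

toy_programs = {"short": short, "forever": forever}
```

This kind of limit-computable object is something an AGI could reason about and approximate without ever computing it exactly.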

On Tue, Jul 1, 2008 at 2:35 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
 2008/6/16 Abram Demski [EMAIL PROTECTED]:
 I previously posted here claiming that the human mind (and therefore
 an ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)

 I missed your earlier posts. However, I believe that there
 are models of computation that can compute things that turing
 machines cannot, and that this is not arcane, just not widely
 known or studied.  Here is a quick sketch:

 Topological finite automata, or geometric finite automata,
 (of which the quantum finite automata is a special case)
 generalize the notion of non-deterministic finite automata
 by replacing its powerset of states with a general topological
 or geometric space (complex projective space in the quantum
 case). It is important to note that these general spaces are
 in general uncountable (have the cardinality of the continuum).

 It is well known that the languages accepted by quantum
 finite automata are not regular languages, they are bigger
 and more complex in some ways. I am not sure what is
 known about the languages accepted by quantum push-down
 automata, but intuitively these are clearly different (and bigger)
 than the class of context-free languages.

 I believe the concepts of topological finite automata extend
 just fine to a generalization of turing machines, but I also
 believe this is a poorly-explored area of mathematics.
 I believe such machines can compute things that turing
 machines can't ... this should not be a surprise, since,
 after all, these systems have, in general, an uncountably
 infinite number of internal states (cardinality of the
 continuum!), and (as a side effect of the definition),
 perform infinite-precision addition and multiplication
 in finite time.

 So yes, I think there are perfectly fine, rather simple
 definitions for computing machines that can (it seems
 like) perform calculations that turing machines cannot.
 It should really be noted that quantum computers fall
 into this class.

 Considerably more confusing is the relationship of
 such machines (and the languages they accept) to
 lambda calculus, or first-order (or higher-order) logic.
 This is where the rubber hits the road, and even for
 the simplest examples, the systems are poorly
 understood, or not even studied.  So, yeah, I think
 there's plenty of room for the uncomputable in
 some rather simple mathematical models of generalized
 computation.

 --linas
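A toy illustration in the spirit of the quoted sketch (not any specific model from the literature): an automaton whose state is a point on the unit circle, rotated by a fixed angle per input symbol. Because one radian is an irrational fraction of a full turn, every input length reaches a distinct state:

```python
import math

def rotation_automaton(word, step=1.0):
    # State is a point on the unit circle (an angle in [0, 2*pi)); each
    # input symbol rotates it by `step` radians.  Since step / (2*pi) is
    # irrational, the orbit never revisits a state: an infinite state set
    # inside a bounded space, echoing the qubit/2-sphere point above.
    angle = 0.0
    for _ in word:
        angle = (angle + step) % (2 * math.pi)
    return angle
```

Acceptance would be defined by which region of the circle the final state lands in; already this simple continuum-state device distinguishes inputs that no fixed finite-state machine can.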







Re: [agi] the uncomputable

2008-07-02 Thread Hector Zenil
The standard model of quantum computation as defined by Feynman and
Deutsch is Turing computable (it is based on the concept of qubits). As
proven by Deutsch, such machines compute the same set of functions as
Turing machines, only faster (if they are feasible at all).

Non-standard models of quantum computation are not widely accepted,
and even if they could hypercompute, many doubt that we could exploit
continuum entanglement to perform computations. Non-standard
quantum computers have not yet been well defined (and that is one of
the many issues of hypercomputation: each time one comes up with a
standard model of hypercomputation, there is always another,
non-equivalent model that computes a different set of functions; i.e.,
there is no convergence among models, unlike what happened when
digital computation was characterized).

Hypercomputational models basically purport to take advantage of
either infinite time or infinite space (including models assuming
infinite resources, Zeno machines, the Omega-rule, real computation,
etc.), that is, of the continuum. Depending on the density of that
space/time continuum, one can think of several models taking advantage
of it at several levels of the arithmetical hierarchy. But even given
infinite space or time, another issue is how to verify a
hypercomputation. One would need a second hypercomputer to verify the
first, and then one would have to trust that one.

Whatever you think of hypercomputation, the following paper is a
must-read for those interested in the topic; Martin Davis articulates
several criticisms:
"The Myth of Hypercomputation", in: C. Teuscher (Ed.), Alan Turing: Life
and Legacy of a Great Thinker (2004)

Serious work on analog computation can be found in papers by
Felix Costa et al.:
http://fgc.math.ist.utl.pt/jfc.htm

My master's thesis was on the subject, so if you are interested in
getting an electronic copy, just let me know. It is in French, though.





Re: [agi] the uncomputable

2008-07-02 Thread Abram Demski
Hector Zenil said:
"and that is one of the many issues of hypercomputation: each time one
comes up with a standard model of hypercomputation, there is always
another, non-equivalent model that computes a different set of
functions; i.e., there is no convergence among models, unlike what
happened when digital computation was characterized"

This is not entirely true. Turing's oracle machines turn out to
correspond to infinite-time machines, and both correspond to the
arithmetical hierarchy.
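The oracle-machine side of this correspondence can be sketched as an interface: a machine "relative to" an oracle is just a program that may query the oracle. A true halting oracle is uncomputable, so the stand-in below fakes it with a step bound on toy generator-programs; every name here is illustrative:

```python
def make_bounded_halting_oracle(budget=1000):
    # Stand-in for the true (uncomputable) halting oracle: decides halting
    # of toy generator-programs by running them for at most `budget` steps.
    # A genuine oracle machine at the first level of the hierarchy queries
    # the true halting set; iterating the Turing jump climbs the levels.
    def oracle(prog):
        it = prog()
        for _ in range(budget):
            try:
                next(it)
            except StopIteration:
                return True   # halted within the budget
        return False          # (approximation: the true oracle never errs)
    return oracle

def all_halt(progs, oracle):
    # A machine *with* oracle access can settle questions a plain Turing
    # machine cannot, e.g. "do all of these programs halt?"
    return all(oracle(p) for p in progs)

def ticks(n):
    # toy program that halts after n steps
    def prog():
        for i in range(n):
            yield i
    return prog

def loop():
    # toy program that never halts
    while True:
        yield 0
```

Replacing the fake oracle by the halting set for oracle machines themselves gives the next level up, which is how the hierarchy correspondence arises.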


Re: [agi] the uncomputable

2008-07-02 Thread Hector Zenil
On Wed, Jul 2, 2008 at 1:30 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Hector Zenil said:
 and that is one of the many issues of hypercomputation: each time one
 comes up with a standard model of hypercomputation, there is always
 another, non-equivalent model that computes a different set of
 functions; i.e., there is no convergence among models, unlike what
 happened when digital computation was characterized

 This is not entirely true. Turing's oracle machines turn out to
 correspond to infinite-time machines, and both correspond to the
 arithmetical hierarchy.

At each level of the arithmetical hierarchy there is a universal
oracle machine (a hypercomputer), so there is no standard model of
hypercomputation unless you make strong assumptions, unlike the case
of digital computation. There are even hyperarithmetical machines,
and, as shown by the solution to Post's problem, there are
intermediate, non-comparable degrees at each level of the arithmetical
and hyperarithmetical hierarchies (that is why the Turing universe
does not form a total order).



Re: [agi] the uncomputable

2008-07-02 Thread Abram Demski
Yes, I was not claiming that there was just one type of hypercomputer,
merely that some initially very different-looking types do turn out to
be equivalent.

You seem quite knowledgeable about the subject. Can you recommend any
books or papers?


Re: [agi] the uncomputable

2008-07-02 Thread Hector Zenil
 a
 definitive source) that quantum computers could compute in principle
 only what a Turing machine could compute, but could do it much more
 efficiently (something like the square root of the effort a Turing
 machine would need, at least for some tasks). Can you cite any source
 on this?

 But I should emphasize that what I am really interested in is
 computable approximation of uncomputable things. My stance is that an
 AGI should be able to reason about uncomputable concepts in a coherent
 manner (like we can), not that it needs to be able to actually compute
 them (which we can't).

 On Tue, Jul 1, 2008 at 2:35 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
 2008/6/16 Abram Demski [EMAIL PROTECTED]:
 I previously posted here claiming that the human mind (and therefore
 an ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)

 I missed your earlier posts. However, I believe that there
 are models of computation can compute things that turing
 machines cannot, and that this is not arcane, just not widely
 known or studied.  Here is a quick sketch:

 Topological finite automata, or geometric finite automata,
 (of which the quantum finite automata is a special case)
 generalize the notion of non-deterministic finite automata
 by replacing its powerset of states with a general topological
 or geometric space (complex projective space in the quantum
 case). It is important to note that these general spaces are
 in general uncountable (have the cardinality of the continuum).

 It is well known that the languages accepted by quantum
 finite automata are not regular languages, they are bigger
 and more complex in some ways. I am not sure what is
 known about the languages accepted by quantum push-down
 automata, but intuitively these are clearly different (and bigger)
 than the class of context-free languages.

 I believe the concepts of topological finite automata extend
 just fine to a generalization of Turing machines, but I also
 believe this is a poorly explored area of mathematics.
 I believe such machines can compute things that Turing
 machines can't... this should not be a surprise, since,
 after all, these systems have, in general, an uncountably
 infinite number of internal states (the cardinality of the
 continuum!) and, as a side effect of the definition,
 perform infinite-precision addition and multiplication
 in finite time.

 So yes, I think there are perfectly fine, rather simple
 definitions for computing machines that can (it seems)
 perform calculations that Turing machines cannot.
 It should be noted that quantum computers fall
 into this class.

 Considerably more confusing is the relationship of
 such machines (and the languages they accept) to
 lambda calculus, or first-order (or higher-order) logic.
 This is where the rubber hits the road, and even for
 the simplest examples, the systems are poorly
 understood, or not even studied.  So, yeah, I think
 there's plenty of room for the uncomputable in
 some rather simple mathematical models of generalized
 computation.

 --linas


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com







 --
 Hector Zenil
 http://zenil.mathrix.org









 --
 Hector Zenil
 http://zenil.mathrix.org









-- 
Hector Zenil
http://zenil.mathrix.org



Re: [agi] the uncomputable

2008-07-01 Thread Linas Vepstas
2008/6/16 Abram Demski [EMAIL PROTECTED]:
 I previously posted here claiming that the human mind (and therefore
 an ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)

I missed your earlier posts. However, I believe that there
are models of computation that can compute things that Turing
machines cannot, and that this is not arcane, just not widely
known or studied.  Here is a quick sketch:

Topological finite automata, or geometric finite automata
(of which quantum finite automata are a special case),
generalize the notion of non-deterministic finite automata
by replacing the powerset of states with a general topological
or geometric space (complex projective space in the quantum
case). It is important to note that these spaces are in general
uncountable (they have the cardinality of the continuum).

It is well known that the languages accepted by quantum
finite automata are not regular languages; they are bigger
and more complex in some ways. I am not sure what is
known about the languages accepted by quantum push-down
automata, but intuitively these are clearly different from
(and bigger than) the class of context-free languages.

I believe the concepts of topological finite automata extend
just fine to a generalization of Turing machines, but I also
believe this is a poorly explored area of mathematics.
I believe such machines can compute things that Turing
machines can't... this should not be a surprise, since,
after all, these systems have, in general, an uncountably
infinite number of internal states (the cardinality of the
continuum!) and, as a side effect of the definition,
perform infinite-precision addition and multiplication
in finite time.

So yes, I think there are perfectly fine, rather simple
definitions for computing machines that can (it seems)
perform calculations that Turing machines cannot.
It should be noted that quantum computers fall
into this class.
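Linas's point about uncountably many internal states can be made concrete in a few lines. This is a hypothetical sketch of a one-qubit, measure-once quantum finite automaton (the alphabet, rotation angle, and accepting state are invented for the example): each input symbol applies a rotation, and because the angle is an irrational multiple of pi, the reachable states are dense on the circle.

```python
import math

def rotation(theta):
    """2x2 real unitary (a rotation) acting on a one-qubit amplitude pair."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def apply(U, psi):
    """Matrix-vector product: one transition of the automaton."""
    return [U[0][0] * psi[0] + U[0][1] * psi[1],
            U[1][0] * psi[0] + U[1][1] * psi[1]]

def accept_probability(word, transitions, accepting=(0,)):
    """Measure-once QFA: start in basis state 0, apply one unitary per
    input symbol, then measure; return the probability of observing an
    accepting state."""
    psi = [1.0, 0.0]
    for sym in word:
        psi = apply(transitions[sym], psi)
    return sum(psi[i] ** 2 for i in accepting)

# Reading 'a' rotates the state by sqrt(2) radians, an irrational
# multiple of pi, so successive words visit ever-new internal states.
T = {'a': rotation(math.sqrt(2))}
probs = [accept_probability('a' * n, T) for n in range(5)]
```

The five probabilities are all distinct; a classical finite automaton fed the same repeated symbol would have to cycle through finitely many states.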

Considerably more confusing is the relationship of
such machines (and the languages they accept) to
lambda calculus, or first-order (or higher-order) logic.
This is where the rubber hits the road, and even for
the simplest examples, the systems are poorly
understood, or not even studied.  So, yeah, I think
there's plenty of room for the uncomputable in
some rather simple mathematical models of generalized
computation.

--linas




Re: [agi] the uncomputable

2008-06-19 Thread Samantha Atkins

Abram Demski wrote:

On Wed, Jun 18, 2008 at 9:54 AM, Benjamin Johnston
[EMAIL PROTECTED] wrote:
[...]
  

In any case, this whole conversation bothers me. It seems like we're
focussing on the wrong problems; like using the Theory of Relativity to
decide on an appropriate speed limit for cars in school zones. If it could
take 1,000 years of thought and creativity to go from BB(n) to BB(n+1) for
some n, we're talking about problems of an incredible scale, far beyond what
most of us have in mind for our first prototypes. A challenge with the busy
beaver problem is that when n becomes big enough, you start being able to
encode long-standing and very difficult mathematical conjectures.

-Ben



My point is simply that an AGI should be able to think about such
concepts, like we do. It doesn't need to solve them. In this sense I
think it is a fundamental concern: how is it possible to have a form
of knowledge representation that can in principle capture all ideas a
human might express? Intuition suggests that there should be a simple
sufficient representation, like 1st-order logic. But 1st-order logic
isn't enough, and neither are 2nd-order logics, 3rd order...

  



Well, what exactly are the constraints you wish to place on capture?
Clearly humans can express the ideas, so in some sense they are trivially
captured (say, as text and graphics).  :-)


- samantha






Re: [agi] the uncomputable

2008-06-19 Thread Abram Demski
 Well, what exactly are the constraints you wish to place on capture?
  Clearly humans can express the ideas, so in some sense they are trivially
 captured (say, as text and graphics).  :-)

 - samantha

Personally, the constraint that I want to satisfy is that the rules of
manipulation should reflect the semantics. Regular higher-order logic
does not satisfy this because the rules of manipulation for
higher-order logics can be captured by a typed first-order logic
(http://en.wikipedia.org/wiki/Second-order_logic#Semantics).

I'll admit that the idea of reflecting the semantics is a bit vague.
The strongest form of the statement would be the logic must prove
true what is true and prove false what is false, but this is too
strong to be satisfiable. A weaker version would be the logic must
eventually converge to the truth, but even this is too strong (thanks
to one of the examples I gave in my initial post).




Re: [agi] the uncomputable

2008-06-18 Thread Vladimir Nesov
On Wed, Jun 18, 2008 at 1:58 AM, Abram Demski [EMAIL PROTECTED] wrote:
 Vladimir Nesov,

 Then do you agree with my hypothetical extremist version of Mike?

 (Aside: For the example we are talking about, it is totally necessary
 to stick the undecidable cases in F rather than T: if a Turing machine
 halts, then it is possible to prove that it halts (simply by running
 it for long enough). So if a Turing machine is one of those whose
 halting is formally undecidable, then it must not halt, because if it
 did then a proof of its halting would exist.)
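The parenthetical above is the statement that halting is semi-decidable. A minimal sketch, with an invented machine encoding (not from the thread): a finite run can certify that a machine halts, while a False result only ever means "not yet".

```python
def halts_within(delta, state, max_steps):
    """Run a Turing machine for at most max_steps steps.  delta maps
    (state, symbol) to (write, move, next_state); a missing rule halts.
    True is a proof of halting; False is merely inconclusive."""
    tape, head = {}, 0
    for _ in range(max_steps):
        key = (state, tape.get(head, 0))
        if key not in delta:          # no applicable rule: halted
            return True
        write, move, state = delta[key]
        tape[head] = write
        head += move
    return False

# Writes one symbol, then halts (state 'q1' has no rules).
halter = {('q0', 0): (1, +1, 'q1')}
# Marches right forever, writing 1s.
looper = {('q0', 0): (1, +1, 'q0')}
```

halts_within(halter, 'q0', 100) returns True, a genuine proof; halts_within(looper, 'q0', 100) returns False, but no budget, however large, turns that False into a proof of non-halting.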


If T* is the enumerable set of halting machines, F* is an enumerable
(by our machine) subset of the never-halting machines, and X is the set
of remaining machines, the situation is symmetric. If you dump X into
T*, you destroy its property of being enumerable, but likewise with F*.

-- 
Vladimir Nesov
[EMAIL PROTECTED]




RE: [agi] the uncomputable

2008-06-18 Thread Benjamin Johnston

  My own interpretation of the work is that an individual person
  is no more powerful than a Turing machine (though, this point 
  isn't discussed in the paper), but that society as a whole is 
  capable of hypercomputation because we can keep drawing upon 
  more resources to solve a problem: we build machines, we 
  reproduce, we interact with and record our thoughts in the
  environment. Effectively, society as a whole becomes somewhat
  like a Zeus machine - faster and more complex with each moment.

 Something like this is mentioned in the paper as objection #4. 
 But personally, I'd respond as follows: if a society of AGIs 
 can hypercompute, then why not a single AGI with a society-of-
 mind style architecture? It is difficult to distinguish 
 between a closely-linked society and a loosely-knit 
 individual, where AI is concerned. So I argue that if a 
 society can (and should) hypercompute, there is no reason to
 suspect that an individual can't (or shouldn't).

The point I was making was not that the social structure is important. Sure,
a society of processes on an individual machine should have no more
computing power than a single process on that machine.

Instead, the important aspect is the fact that society creates machines,
reproduces and interacts with the environment. By doing these things we
increase our problem solving ability exponentially. As I understand the
claims, a single AGI could achieve similar kinds of hypercomputation if it
can design and build new hardware to extend its own capabilities: if it can
actually make itself get faster. It would, in effect, be a Turing machine of
unbounded complexity.

A single AGI running on fixed hardware, with no way of extending itself
computationally, is not going to be able to do this, even if it is
implemented as a society-of-mind.


In any case, this whole conversation bothers me. It seems like we're
focussing on the wrong problems; like using the Theory of Relativity to
decide on an appropriate speed limit for cars in school zones. If it could
take 1,000 years of thought and creativity to go from BB(n) to BB(n+1) for
some n, we're talking about problems of an incredible scale, far beyond what
most of us have in mind for our first prototypes. A challenge with the busy
beaver problem is that when n becomes big enough, you start being able to
encode long-standing and very difficult mathematical conjectures. 
 
-Ben






Re: [agi] the uncomputable

2008-06-18 Thread Abram Demski
On Wed, Jun 18, 2008 at 9:54 AM, Benjamin Johnston
[EMAIL PROTECTED] wrote:
[...]
 In any case, this whole conversation bothers me. It seems like we're
 focussing on the wrong problems; like using the Theory of Relativity to
 decide on an appropriate speed limit for cars in school zones. If it could
 take 1,000 years of thought and creativity to go from BB(n) to BB(n+1) for
 some n, we're talking about problems of an incredible scale, far beyond what
 most of us have in mind for our first prototypes. A challenge with the busy
 beaver problem is that when n becomes big enough, you start being able to
 encode long-standing and very difficult mathematical conjectures.

 -Ben

My point is simply that an AGI should be able to think about such
concepts, like we do. It doesn't need to solve them. In this sense I
think it is a fundamental concern: how is it possible to have a form
of knowledge representation that can in principle capture all ideas a
human might express? Intuition suggests that there should be a simple
sufficient representation, like 1st-order logic. But 1st-order logic
isn't enough, and neither are 2nd-order logics, 3rd order...




Re: [agi] the uncomputable

2008-06-18 Thread Hector Zenil
On Mon, Jun 16, 2008 at 5:34 PM, Abram Demski [EMAIL PROTECTED] wrote:
 I previously posted here claiming that the human mind (and therefore
 an ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
 Anyway, I hope I'm not being too annoying if I try to argue the point
 once again. This paper also argues the point:

 http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf

 The paper includes a study of the uncomputable busy beaver function
 up to x=6. The authors claim that their success at computing busy
 beaver strongly suggests that humans can hypercompute.


They refer to Rado's original busy beaver problem and even quote him
about the hardness of the busy beaver problem for larger n. However,
the paper's authors use the 4-tuple version of TMs (EITHER move OR write
a new symbol per step). Those TMs need more states to do the same work
as the 5-tuple ones. Rado's original bb formalism is 5-tuple. They have
therefore not solved any busy beaver as originally formulated, but a
tailor-made version. The currently known values of the bb remain as
they were before the paper, up to 4 states.
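The difference between the two conventions is easy to show in code. A hypothetical sketch (the rule encodings are invented for illustration): a single 5-tuple rule, which writes and moves in one step, needs an intermediate state under the 4-tuple convention, which writes or moves but never both.

```python
def run5(rules, state, steps):
    """Simulate a 5-tuple TM: (state, read) -> (write, move, next_state)."""
    tape, head = {}, 0
    for _ in range(steps):
        key = (state, tape.get(head, 0))
        if key not in rules:          # no rule: halt
            break
        write, move, state = rules[key]
        tape[head] = write
        head += move
    return tape, head, state

def run4(rules, state, steps):
    """Simulate a 4-tuple TM: (state, read) -> (action, next_state),
    where action is ('write', symbol) or ('move', direction) -- never both."""
    tape, head = {}, 0
    for _ in range(steps):
        key = (state, tape.get(head, 0))
        if key not in rules:
            break
        (kind, arg), state = rules[key]
        if kind == 'write':
            tape[head] = arg
        else:
            head += arg
    return tape, head, state

# One 5-tuple rule: on blank, write 1, move right, halt in 'q'.
five = {('p', 0): (1, +1, 'q')}
# The same behaviour in 4-tuple form needs an extra state 'p2'.
four = {('p', 0): (('write', 1), 'p2'),
        ('p2', 1): (('move', +1), 'q')}
```

Both machines end with the same tape, head, and final state, but the 4-tuple one used an extra state and an extra step, which is why busy beaver values for the two formalisms are not directly comparable.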

This is a second flaw of the paper from my point of view (besides the
objection I already made), considering that their whole argument is
based on the empirical fact that they were able to calculate a bb
greater than 4... (and then the whole argument about being able to
calculate n+1).




Re: [agi] the uncomputable

2008-06-18 Thread Mark Waser

My point is simply that an AGI should be able to think about such
concepts, like we do. It doesn't need to solve them. In this sense I
think it is a fundamental concern: how is it possible to have a form
of knowledge representation that can in principle capture all ideas a
human might express? Intuition suggests that there should be a simple
sufficient representation, like 1st-order logic. But 1st-order logic
isn't enough, and neither are 2nd-order logics, 3rd order...


Yes, there is a simple sufficient representation that nature has put *a lot* 
of effort into evolving and continues to evolve every day (and not only 
that -- there are numerous variants of it to study and determine what is 
key/core and what can be varied).


It's simple, infinitely expandable and you see it and use it every day.

Can you guess what it is?

scroll down to see the answer



Yes, it's ordinary human language -- whether written or spoken; English or 
Spanish or Chinese or whatever . . . . .






- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, June 18, 2008 2:20 PM
Subject: Re: [agi] the uncomputable



On Wed, Jun 18, 2008 at 9:54 AM, Benjamin Johnston
[EMAIL PROTECTED] wrote:
[...]

In any case, this whole conversation bothers me. It seems like we're
focussing on the wrong problems; like using the Theory of Relativity to
decide on an appropriate speed limit for cars in school zones. If it could
take 1,000 years of thought and creativity to go from BB(n) to BB(n+1) for
some n, we're talking about problems of an incredible scale, far beyond what
most of us have in mind for our first prototypes. A challenge with the busy
beaver problem is that when n becomes big enough, you start being able to
encode long-standing and very difficult mathematical conjectures.

-Ben


My point is simply that an AGI should be able to think about such
concepts, like we do. It doesn't need to solve them. In this sense I
think it is a fundamental concern: how is it possible to have a form
of knowledge representation that can in principle capture all ideas a
human might express? Intuition suggests that there should be a simple
sufficient representation, like 1st-order logic. But 1st-order logic
isn't enough, and neither are 2nd-order logics, 3rd order...










Re: [agi] the uncomputable

2008-06-18 Thread Abram Demski
 Yes, it's ordinary human language -- whether written or spoken; English or
 Spanish or Chinese or whatever . . . . .

I was tempted to include that in my statement, but decided against for
brevity... the thing is, we have the language, but we don't know what
to do with it. Solving the problem of how to use natural language
would (very probably) be equivalent to solving the problem of AGI.




Re: [agi] the uncomputable

2008-06-18 Thread Mark Waser

Solving the problem of how to use natural language
would (very probably) be equivalent to solving the problem of AGI.


I agree with you -- but I would point out that language is a very concrete 
thing and gives us something to experiment upon, work with, and be guided by 
on the road to AGI.


It seems as if our focus is on discovery systems and rebuilding everything
from the ground (links-and-nodes) up.  I think that we're missing out on *a
lot* with this approach.




- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, June 18, 2008 3:45 PM
Subject: Re: [agi] the uncomputable


Yes, it's ordinary human language -- whether written or spoken; English
or Spanish or Chinese or whatever . . . . .


I was tempted to include that in my statement, but decided against for
brevity... the thing is, we have the language, but we don't know what
to do with it. Solving the problem of how to use natural language
would (very probably) be equivalent to solving the problem of AGI.










Re: [agi] the uncomputable

2008-06-17 Thread Abram Demski
Mike A.:

Well, if you're convinced that infinity and the uncomputable are
imaginary things, then you've got a self-consistent view that I can't
directly argue against. But are you really willing to say that
seemingly understandable notions such as the problem of deciding
whether a given Turing machine will eventually halt are nonsense,
simply because we would need infinite time to verify that one doesn't
halt?

Ben J.:

Step 3 requires human society to invent new concepts and techniques,
and to thereby perform hypercomputation. I don't think that a
computable nonmonotonic logic really solves this problem.

I agree that nonmonotonic logic is not enough, not nearly. The point
is just that since there are computable approximations of
hypercomputers, it is not unreasonable to allow an AGI to reason about
uncomputable objects.

My own interpretation of the work is that an individual person is no more
powerful than a Turing machine (though, this point isn't discussed in the
paper), but that society as a whole is capable of hypercomputation because
we can keep drawing upon more resources to solve a problem: we build
machines, we reproduce, we interact with and record our thoughts in the
environment. Effectively, society as a whole becomes somewhat like a Zeus
machine - faster and more complex with each moment.

Something like this is mentioned in the paper as objection #4. But
personally, I'd respond as follows: if a society of AGIs can
hypercompute, then why not a single AGI with a society-of-mind style
architecture? It is difficult to distinguish between a closely-linked
society and a loosely-knit individual, where AI is concerned. So I
argue that if a society can (and should) hypercompute, there is no
reason to suspect that an individual can't (or shouldn't).

On Mon, Jun 16, 2008 at 11:37 PM, Mike Archbold [EMAIL PROTECTED] wrote:
 I'm not sure that I'm responding to your intended meaning, but: all
 computers are in reality finite-state machines, including the brain
 (granted we don't think the real-number calculations on the cellular
 level are fundamental to intelligence). However, the finite state
 machines we call PCs are so large that it is convenient to pretend
 they have infinite memory; and when we do this, we get a machine that
 is equivalent in power to a Turing machine. But a turing machine has
 an infinite tape, so it cannot really exist (the real computer
 eventually runs out of memory). Similarly, I'm arguing that the human
 brain is so large in particular ways that it is convenient to treat it
 as an even more powerful machine (perhaps an infinite-time turing
 machine), despite the fact that such a machine cannot exist (we only
 have a finite amount of time to think). Thus a spurious infinity is
 not so spurious.

 Abram,

 Thanks for responding.  You know, I might be in a bit over my head with
 some of the terminology in your paper, so I apologize in advance, but
 just to clarify: spurious infinity according to Hegel is the sleight of
 hand that happens when quantity transitions surreptitiously into a quality.
 At some point, counting up, we are simply not talking about any number at
 all, but about a quality of being REALLY SUPER BIG as we make kind of a
 leap.

 According to him, when we talk about infinity we are talking about some
 idea of a huge number (in this case of calculations) and, to use a phrase
 he liked, imaginary being.  So since I am kind of a Hegelian of sorts,
 when I scanned the paper it looked like it argued that it is not possible
 to compute something that I had become convinced was imaginary anyway.
 That would be true if you bought into Hegel's definition of infinity, and I
 realize there aren't a lot of Hegelians around.  But tomorrow I will read
 further.

 Mike



 On Mon, Jun 16, 2008 at 9:19 PM, Mike Archbold [EMAIL PROTECTED] wrote:
 I previously posted here claiming that the human mind (and therefore an
 ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
 Anyway, I hope I'm not being too annoying if I try to argue the point
 once again. This paper also argues the point:

 http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf


 It looks like the paper hinges on:
 None of this prior work takes account of Gödel's intuition, repeatedly
 communicated to Hao Wang, that human minds converge to infinity in their
 power, and for this reason surpass the reach of ordinary Turing machines.

 The thing to watch out for here is what Hegel described as the spurious
 infinity, which is just the imagination thinking some imaginary quantity
 really big; but no matter how big, you can always envision +1, yet the
 result is always just another imaginary big number, to which you can add
 another +1... the point being that infinity is an idealistic quality,
 not a computable numeric quantity at all, i.e., not numerical; we are
 talking about thought as such.

 I didn't read the whole paper, but the point I

Re: [agi] the uncomputable

2008-06-17 Thread Vladimir Nesov
On Tue, Jun 17, 2008 at 9:10 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Mike A.:

 Well, if you're convinced that infinity and the uncomputable are
 imaginary things, then you've got a self-consistent view that I can't
 directly argue against. But are you really willing to say that
 seemingly understandable notions such as the problem of deciding
 whether a given Turing machine will eventually halt are nonsense,
 simply because we would need infinite time to verify that one doesn't
 halt?


Everything that you understand is imaginary: your understanding
itself is an image in your mind, which could get there by reflecting
reality through a limited number of steps (or so physicists keep
telling us), or could be generated by an overly vivid finite imagination.

No nonsense, just finite sense. What is this with verification that a
machine doesn't halt? One can't do it, so what is the problem?

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] the uncomputable

2008-06-17 Thread Abram Demski
No nonsense, just finite sense. What is this with verification that a
machine doesn't halt? One can't do it, so what is the problem?

The idea would be (if Mike is really willing to go that far): It
makes sense to say that a given Turing machine DOES halt; I know what
that means. But to say that one DOESN'T halt? How can I make sense of
that? Either a given machine has halted, or it has not halted yet. But
to say that it never halts requires infinity, a nonsensical concept.

An AI that only understood computable concepts would agree with the
above. What I am saying is that such a view is... inhuman.

On Tue, Jun 17, 2008 at 1:29 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Tue, Jun 17, 2008 at 9:10 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Mike A.:

 Well, if you're convinced that infinity and the uncomputable are
 imaginary things, then you've got a self-consistent view that I can't
 directly argue against. But are you really willing to say that
 seemingly understandable notions such as the problem of deciding
 whether a given Turing machine will eventually halt are nonsense,
 simply because we would need infinite time to verify that one doesn't
 halt?


 Every thing that you understand is imaginary, your understanding
 itself is an image in your mind, which could get there reflecting
 reality, through limited number of steps (or so physicists keep
 telling), or could be generated by overly vivid finite imagination.

 No nonsense, just finite sense. What is this with verification that a
 machine doesn't halt? One can't do it, so what is the problem?

 --
 Vladimir Nesov
 [EMAIL PROTECTED]







Re: [agi] the uncomputable

2008-06-17 Thread Vladimir Nesov
On Tue, Jun 17, 2008 at 10:14 PM, Abram Demski [EMAIL PROTECTED] wrote:
 No nonsense, just finite sense. What is this with verification that a
 machine doesn't halt? One can't do it, so what is the problem?

 The idea would be (if Mike is really willing to go that far): It
 makes sense to say that a given Turing machine DOES halt; I know what
 that means. But to say that one DOESN'T halt? How can I make sense of
 that? Either a given machine has halted, or it has not halted yet. But
 to say that it never halts requires infinity, a nonsensical concept.

 An AI that only understood computable concepts would agree with the
 above. What I am saying is that such a view is... inhuman.


It wasn't worded correctly; there are many machines that you can prove
don't halt, but also others for which you can't prove that. Why would
it be inhuman not to be able to do the impossible?

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] the uncomputable

2008-06-17 Thread Mike Archbold
 of
 hand that happens when quantity transitions surreptitiously into a quality.
 At some point counting up, we are simply not talking about any number at
 all, but about a quality of being REALLY SUPER BIG as we make kind of a
 leap.

 According to him when we talk about infinity we are talking about some
 idea of a huge number (in this case of calculations) and to use a phrase
 he liked:  imaginary being.  So since I am kind of a Hegelian of sorts
 when I scanned the paper it looked like it argued that it is not
 possible
 to compute something that I had become convinced was imaginary anyway.
 That would be true if you bought into Hegel's definition of infinity and
 I
 realize there aren't a lot of Hegelians around.  But, tomorrow I will
 read
 further.

 Mike



 On Mon, Jun 16, 2008 at 9:19 PM, Mike Archbold [EMAIL PROTECTED]
 wrote:
 I previously posted here claiming that the human mind (and therefore
 an
 ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea.
 :)
 Anyway, I hope I'm not being too annoying if I try to argue the point
 once again. This paper also argues the point:

 http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf


 It looks like the paper hinges on:
 None of this prior work takes account of Gödel's intuition, repeatedly
 communicated to Hao Wang, that human minds converge to infinity in
 their power, and for this reason surpass the reach of ordinary Turing
 machines.

 The thing to watch out for here is what Hegel described as the
 spurious
 infinity which is just the imagination thinking some imaginary
 quantity
 really big, but no matter how big, you always can envision +1, but
 the
 result is always just another imaginary big number, to which you can
 add
 another +1... the point being that infinity is a idealistic quality,
 not
 a computable numeric quantity at all, ie., not numerical, we are
 talking
 about thought as such.

 I didn't read the whole paper, but the point I wanted to make was that
 Hegel takes up the issue of infinity in his Science of Logic, which I
 think is a good ontology in general because it mixes up a lot of
 issues
 AI
 struggles with, like the ideal nature of quality and quantity, and
 also
 infinity.

 Mike Archbold
 Seattle


 The paper includes a study of the uncomputable busy beaver function
 up
 to x=6. The authors claim that their success at computing busy beaver
 strongly suggests that humans can hypercompute.

 I believe the authors take this to imply that AGI cannot succeed on
 current hardware; I am not suggesting this. Instead, I offer a fairly
 concrete way to make deductions using a restricted class of
 uncomputable models, as an illustration of the idea (and as weak
 evidence that the general case can be embodied on computers).

 The method is essentially nonmonotonic logic. Computable predicates
 can
 be represented in any normal way (1st-order logic, lambda
 calculus, a standard programming language...). Computably enumerable
 predicates (such as the halting problem) are represented by a default
 assumption of false, plus the computable method of enumerating true
 cases. To reason about such a predicate, the system allocates however
 much time it can spare to trying to prove a case true; if at the end
 of
 that time it has not found a proof by the enumeration method, it
 considers it false. (Of course it could come back later and try
 harder, too.) Co-enumerable predicates are similarly assumed true
 until
 a counterexample is found.

 Similar methodology can extend the class of uncomputables we can
 handle
 somewhat farther. Consider the predicate all turing machines of class
 N
 halt, where N is a computably enumerable class. Neither the true
 cases
 nor the false cases of this predicate are computably enumerable.
 Nonetheless, we can characterize the predicate by assuming it is true
 until a counterexample is found: a turing machine that doesn't seem
 to
 halt when run as long as we can afford to run it. If our best efforts
 (within time constraints) fail to find such a
 machine, then we stick with the default assumption true. (A
 simplistic nonmonotonic logic can't quite handle this: at any stage
 of
 the search, we would have many turing machines still at their default
 status of nonhalting, which would make the predicate seem
 always-false; we need to only admit assumptions that  have been
 hardened by trying to disprove them for some amount of time.)

 This may sound so easy that a Turing machine could do it. And it
 is,
 for set cutoffs. But the point is that an AGI that only considered
 computable models, such as AIXI, would never converge to the correct
 model in a world that contained anything uncomputable, whereas it
 seems
 a human could. (AIXI would find turing machines that were
 ever-closer to the right model, but could never see that there was a
 simple pattern behind these ever-larger machines.)

 I hope that this makes my claim sound less extreme

Re: [agi] the uncomputable

2008-06-17 Thread Abram Demski
V. N.,
What is inhuman to me is to claim that the halting problem is no
problem on such a basis: that the statement "Turing machine X does not
halt" is only true of Turing machines that are *provably* non-halting.
And this is the view we are forced into if we abandon the reality of
the uncomputable.

A. D.

On Tue, Jun 17, 2008 at 2:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Tue, Jun 17, 2008 at 10:14 PM, Abram Demski [EMAIL PROTECTED] wrote:
 No nonsense, just finite sense. What is this about verifying that a
 machine doesn't halt? One can't do it, so what is the problem?

 The idea would be (if Mike is really willing to go that far): It
 makes sense to say that a given Turing machine DOES halt; I know what
 that means. But to say that one DOESN'T halt? How can I make sense of
 that? Either a given machine has halted, or it has not halted yet. But
 to say that it never halts requires infinity, a nonsensical concept.

 An AI that only understood computable concepts would agree with the
 above. What I am saying is that such a view is... inhuman.


 It wasn't worded correctly: there are many machines that you can prove
 don't halt, but also others for which you can't prove that. Why would
 it be inhuman not to be able to do the impossible?

 --
 Vladimir Nesov
 [EMAIL PROTECTED]







Re: [agi] the uncomputable

2008-06-17 Thread Hector Zenil
People interested in this thread's subject might want to read a paper
we wrote some years ago, published by World Scientific:

---
Hector Zenil, Francisco Hernandez-Quiroz, On the possible
Computational Power of the Human Mind, WORLDVIEWS, SCIENCE AND US,
edited by Carlos Gershenson, Diederik Aerts and Bruce Edmonds, World
Scientific, 2007.

available online: http://arxiv.org/abs/cs/0605065

Abstract
The aim of this paper is to address the question: Can an artificial
neural network (ANN) model be used as a possible characterization of
the power of the human mind? We will discuss what might be the
relationship between such a model and its natural counterpart. A
possible characterization of the different power capabilities of the
mind is suggested in terms of the information contained (in its
computational complexity) or achievable by it. Such characterization
takes advantage of recent results based on natural neural networks
(NNN) and the computational power of arbitrary artificial neural
networks (ANN). The possible acceptance of neural networks as the
model of the human mind's operation makes the aforementioned quite
relevant.

Presented as a talk at the Complexity, Science and Society Conference,
2005, University of Liverpool, UK.
---

On the other hand, Gödelian-type arguments (such as
http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf) have been widely
regarded as refuted since Hofstadter's Gödel, Escher, Bach in the 70s,
or even before.

I consider myself someone within the busy beaver field, since my own
research on what we call experimental algorithmic information theory
is closely related to it. I don't see how either Solomonoff's
induction or the busy beaver problem can be used as evidence for, or
conceived as an explanation of, the human mind as a hypercomputer. I
don't see in the development of the two fields anything not Turing
computable.

There are known values of the busy beaver function up to 4-state,
2-symbol Turing machines (although it seems they claim to have
calculated up to 6 states...). Determining whether a Turing machine
halts, up to that number of states, is a relatively easy task using
very computable tricks (including the Christmas Tree method).
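The "very computable tricks" point can be illustrated with a small
brute-force sketch (my own illustration, not code from the thread),
assuming the standard Rado convention that the halting transition
writes, moves, and counts as a step. It enumerates every 2-state,
2-symbol Turing machine and simulates each with a step cutoff. The
cutoff is safe here only because the maximum-shift value S(2) = 6 is
already known; for larger n, bounding the cutoff is exactly the
uncomputable part.

```python
from itertools import product

HALT = 2  # sentinel "state": the machine halts after taking this transition

def run(ttable, limit=50):
    """Simulate a 2-symbol Turing machine on a blank tape.
    ttable[(state, symbol)] = (write, move, next_state).
    Returns (steps, ones) if it halts within `limit` steps, else None."""
    tape, pos, state, steps = {}, 0, 0, 0
    while steps < limit:
        write, move, nxt = ttable[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
        if nxt == HALT:
            return steps, sum(tape.values())
        state = nxt
    return None  # did not halt within the cutoff (may or may not halt)

# each of the 4 (state, symbol) entries is a (write, move, next) triple
entries = list(product([0, 1], [-1, 1], [0, 1, HALT]))
best_steps = best_ones = 0
for t in product(entries, repeat=4):  # all 12^4 = 20736 transition tables
    ttable = {(0, 0): t[0], (0, 1): t[1], (1, 0): t[2], (1, 1): t[3]}
    result = run(ttable)
    if result:
        best_steps = max(best_steps, result[0])
        best_ones = max(best_ones, result[1])

print(best_steps, best_ones)  # S(2) = 6 steps, Sigma(2) = 4 ones
```

The same exhaustive approach is hopeless already at 5 or 6 states,
where individual holdout machines must be proved non-halting by
hand-tailored arguments, which is the crux of the thread's dispute.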

I think their main argument is that (a) once the value of the busy
beaver for n states is known, one learns how to crack the set of
(n+1)-state machines and eventually gets it. (i) They then use a kind
of mathematical induction to prove that any given Turing machine with
a fixed number of states will eventually fail, while the human mind
can go on. However, it seems pretty clear that the method evidently
fails for n large enough, hence disproving their claim. Now suppose
their claim (a) is right, and conceive the following method: (b) each
time we learn how to crack n+1, we build a Turing machine T that
computes bb(n+1); by their own argument (i), Turing machines are then
hypercomputers!

I might be missing something, if so please feel free to point it out.

Best regards,



-- 
Hector Zenil
http://zenil.mathrix.org




Re: [agi] the uncomputable

2008-06-17 Thread Vladimir Nesov
On Tue, Jun 17, 2008 at 11:38 PM, Abram Demski [EMAIL PROTECTED] wrote:
 V. N.,
 What is inhuman to me, is to claim that the halting problem is no
 problem on such a basis: that the statement Turing machine X does not
 halt only is true of Turing machines that are *provably* non-halting.
 And this is the view we are forced into if we abandon the reality of
 the uncomputable.


Why, you can also mark up the remaining territory with "true" and
"false"; the labels just won't mean anything there. Set up two sets,
T and F, place all true things in T, all false things in F, and all
unknown things however you like, but don't tell anybody how. Some
people like to place all unknown things in F; that's their call.
Mathematically it can be convenient, but really, even of computable
things you can't actually compute that much, so the argument is void
for all practical concerns anyway.

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] the uncomputable

2008-06-17 Thread Hector Zenil
On Tue, Jun 17, 2008 at 5:58 PM, Abram Demski [EMAIL PROTECTED] wrote:


 Hector Zenil,

 I do not think I understand you. Your argument seems similar to the following:

 I do not see why Turing machines are necessary. If we can compute a
 function f(x) by some Turing machine, then we could compute it up to
 some value x=n. But we could construct a lookup table of all values
 f(0), f(1), f(2),... , f(n) which contains just as much information.

 Obviously the above is a silly argument, but I don't know how else to
 interpret you. A Turing machine can capture a finite number of the
 outputs of a hypercomputer. Does that in any way make the
 hypercomputer reducible to the Turing machine?


This nicely boils the fallacy down from 20 pages to a few lines.
Merely providing the lookup table or adding more states is not
sufficient to turn a Turing machine into a hypercomputer, as would
follow from the paper's main argument: that humans can always find
bb(n+1) once bb(n) is calculated, and therefore humans are capable of
hypercomputation (modulo other strong assumptions).

In fact the paper acknowledges that more information is needed at each
jump, so eventually one would reach either a physical or a feasibility
limit unless the brain/mind is infinite in its capabilities, which
falls back on the traditional claims about hypercomputation, not
necessarily a new one.
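The "more information at each jump" observation can be made concrete
with a tiny sketch (my own illustration, not code from the thread):
any finite prefix of an uncomputable function is trivially computable
by a lookup table, so extending the table one value at a time never
demonstrates hypercomputation. The values below are the proven
Sigma(n) busy beaver numbers for 2-symbol machines up to n = 4.

```python
# Proven busy beaver (max ones) values for 2-symbol machines:
# Rado's Sigma(1)..Sigma(3), and Brady's Sigma(4).
SIGMA = {1: 1, 2: 4, 3: 6, 4: 13}

def bb_prefix(n):
    """Computable stand-in for the busy beaver function, valid only on
    the finitely many inputs already 'cracked' by human effort."""
    if n not in SIGMA:
        raise ValueError("beyond the lookup table: uncomputable in general")
    return SIGMA[n]
```

Each time bb(n+1) is "cracked", one new entry is added by hand; the
machine carrying the table never computes anything beyond its finite,
externally supplied prefix, which is Zenil's reductio in miniature.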

I recall that my suggestion was (reductio ad absurdum) to encode (or
provide the program of) an n-state Turing machine T_n after knowing
bb(n), so that at every moment while people are working on bb(n+1)
there is always a T_n behind, able to calculate bb(n). Once the
hyperhuman finds bb(n+1), he encodes T_{n+1} to compute bb(n+1) while
the hyperhuman H computes bb(n+2); but one knows that at the next step
one will be able to code T_{n+2} to calculate bb(n+2), just as H does.
Following their argument, if there is always a machine able to
calculate bb(n+1) for any n once bb(n) is calculated (just as there is
a hyperhuman, according to their claim), then T (the universal Turing
machine that emulates all those T_i for all i) would turn into a
hypercomputer, which is absurd since it would collapse the classes of
computability!

Notice that my use of "hypercomputer" follows the traditional use of
"computer": a machine able to compute at a Turing degree other than
the first.

I still might be missing something, but hope this clarifies my objection.

People might also be interested in the work of Kevin Kelly,
"Uncomputability: The Problem of Induction Internalized", Theoretical
Computer Science, vol. 317, 2004, pp. 227-249,

an epistemological approach to traditional computability, since some
in this thread have suggested induction as evidence for
hypercomputability.


-- 
Hector Zenil
http://zenil.mathrix.org




Re: [agi] the uncomputable

2008-06-17 Thread Mike Archbold

 Mike Archbold,

 It seems you've made a counterargument without meaning to.

 When we make this transition, it seems to me that the shift is so radical
 that it is impossible to justify making the step, because as I mentioned
 it involves a surreptitious shift from quantity to quality.

 I maintain that the jump is justified. To me it is like observing the
 sequence 1, 2, 4, 8, 16, 32... and concluding that each number is
 twice the previous. It is a jump from several quantities to a single
 quality.



Fair enough.  I think what we are saying is that the transition of
quantity to quality -- as in your example -- is a kind of appeal to
the infinite, i.e., an instance of hypercomputation?  That does have a
Gödelian sound to it; as I understand it, we appeal beyond the data in
question (although I have only seen Gödel's writings, not read them).

I read more of the paper.  I like the part about the Zeus Machine.  Cool.

I guess I am a bit more aligned with the philosophy side than the
Turing-Gödel-computational side of the house.  In my studies of
Hegel's Logic, as I mentioned, there is a constant interplay between
quantity and quality, given as measure -- measure here being the
result of quantity and quality intermixing.

I guess measure in this sense is roughly equivalent to
hypercomputation, if I have my Gödels and Hegels lined up in a row.
Hegel's philosophy was of course totally predicated on the mind, which
as I said he held to be infinite.  Although we have to be careful,
inasmuch as there exist multiple definitions of the infinite.

Mike Archbold












[agi] the uncomputable

2008-06-16 Thread Abram Demski
I previously posted here claiming that the human mind (and therefore
an ideal AGI) entertains uncomputable models, counter to the
AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
Anyway, I hope I'm not being too annoying if I try to argue the point
once again. This paper also argues the point:

http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf

The paper includes a study of the uncomputable busy beaver function
up to x=6. The authors claim that their success at computing busy
beaver strongly suggests that humans can hypercompute.

I believe the authors take this to imply that AGI cannot succeed on
current hardware; I am not suggesting this. Instead, I offer a fairly
concrete way to make deductions using a restricted class of
uncomputable models, as an illustration of the idea (and as weak
evidence that the general case can be embodied on computers).

The method is essentially nonmonotonic logic. Computable predicates
can be represented in any normal way (1st-order logic, lambda
calculus, a standard programming language...). Computably enumerable
predicates (such as the halting problem) are represented by a default
assumption of false, plus the computable method of enumerating true
cases. To reason about such a predicate, the system allocates however
much time it can spare to trying to prove a case true; if at the end
of that time it has not found a proof by the enumeration method, it
considers it false. (Of course it could come back later and try
harder, too.) Co-enumerable predicates are similarly assumed true
until a counterexample is found.
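The default-false scheme just described can be sketched in a few lines
(my own illustration; the `step()` interface and the `CountDown` toy
are hypothetical stand-ins for a real interpreter, not anything from
the thread): the halting predicate defaults to false, and a time
budget is spent trying to prove it true by enumeration, i.e., by
running the program.

```python
def halts_default_false(program, budget):
    """Try to prove 'program halts' within `budget` steps; otherwise
    fall back to the default assumption False (revisable later with a
    larger budget)."""
    for _ in range(budget):
        if program.step():   # step() returns True once the program halts
            return True      # proved true by enumeration
    return False             # default assumption, not a proof of non-halting

class CountDown:
    """Toy 'program' that halts after n steps (hypothetical stand-in)."""
    def __init__(self, n):
        self.n = n
    def step(self):
        self.n -= 1
        return self.n <= 0

# Verdicts are budget-relative, hence nonmonotonic:
print(halts_default_false(CountDown(5), budget=10))    # True: proved
print(halts_default_false(CountDown(50), budget=10))   # False: default
print(halts_default_false(CountDown(50), budget=100))  # True: revised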

Similar methodology can extend the class of uncomputables we can
handle somewhat further. Consider the predicate "all Turing machines
of class N halt", where N is a computably enumerable class. Neither
the true cases nor the false cases of this predicate are computably
enumerable. Nonetheless, we can characterize the predicate by assuming
it is true until a counterexample is found: a Turing machine that
doesn't seem to halt when run as long as we can afford to run it. If
our best efforts (within time constraints) fail to find such a
machine, then we stick with the default assumption "true". (A
simplistic nonmonotonic logic can't quite handle this: at any stage of
the search, we would have many Turing machines still at their default
status of non-halting, which would make the predicate seem
always-false; we need to admit only assumptions that have been
hardened by trying to disprove them for some amount of time.)
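The extended scheme for "all Turing machines of class N halt" can be
sketched the same way (again my own illustration, with a toy machine
class standing in for real Turing machines): the predicate defaults to
true, and is only flipped by a counterexample candidate that has been
hardened, i.e., run for the full budget without halting.

```python
class CountDown:
    """Toy 'machine' that halts after n steps (stand-in for a real TM)."""
    def __init__(self, n):
        self.n = n
    def step(self):
        self.n -= 1
        return self.n <= 0

def all_halt_default_true(machines, run_budget):
    """Default-true verdict for 'every machine in this class halts'.
    A machine only counts against the predicate after the non-halting
    assumption is hardened by running it `run_budget` steps with no halt."""
    for m in machines:
        if not any(m.step() for _ in range(run_budget)):
            return False   # hardened counterexample candidate found
    return True            # default assumption survives our best effort

print(all_halt_default_true([CountDown(3), CountDown(7)], run_budget=20))
print(all_halt_default_true([CountDown(3), CountDown(999)], run_budget=20))
```

Note that machines which halt early never poison the verdict here,
which is the fix for the "many machines still at their default
non-halting status" problem described above.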

This may sound so easy that a Turing machine could do it. And it is,
for set cutoffs. But the point is that an AGI that only considered
computable models, such as AIXI, would never converge to the correct
model in a world that contained anything uncomputable, whereas it
seems a human could. (AIXI would find turing machines that were
ever-closer to the right model, but could never see that there was a
simple pattern behind these ever-larger machines.)

I hope that this makes my claim sound less extreme. Uncomputable
models of the world are not really so hard to reason about, if we're
comfortable with a logic that makes probably-true conclusions rather
than definitely-true.




Re: [agi] the uncomputable

2008-06-16 Thread Mike Archbold
 I previously posted here claiming that the human mind (and therefore an
ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
Anyway, I hope I'm not being too annoying if I try to argue the point
once again. This paper also argues the point:

 http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf


It looks like the paper hinges on:
"None of this prior work takes account of Gödel's intuition,
repeatedly communicated to Hao Wang, that human minds converge to
infinity in their power, and for this reason surpass the reach of
ordinary Turing machines."

The thing to watch out for here is what Hegel described as the
"spurious infinity", which is just the imagination thinking of some
imaginary quantity, really big; but no matter how big, you can always
envision +1, and the result is always just another imaginary big
number, to which you can add another +1... The point being that
infinity is an idealistic quality, not a computable numeric quantity
at all, i.e., not numerical; we are talking about thought as such.

I didn't read the whole paper, but the point I wanted to make was that
Hegel takes up the issue of infinity in his Science of Logic, which I
think is a good ontology in general because it mixes up a lot of issues AI
struggles with, like the ideal nature of quality and quantity, and also
infinity.

Mike Archbold
Seattle











Re: [agi] the uncomputable

2008-06-16 Thread Abram Demski
I'm not sure that I'm responding to your intended meaning, but: all
computers are in reality finite-state machines, including the brain
(granted that we don't think the real-number calculations at the
cellular level are fundamental to intelligence). However, the finite
state machines we call PCs are so large that it is convenient to
pretend they have infinite memory; and when we do this, we get a
machine that is equivalent in power to a Turing machine. But a Turing
machine has an infinite tape, so it cannot really exist (the real
computer eventually runs out of memory). Similarly, I'm arguing that
the human brain is so large in particular ways that it is convenient
to treat it as an even more powerful machine (perhaps an infinite-time
Turing machine), despite the fact that such a machine cannot exist (we
only have a finite amount of time to think). Thus a "spurious
infinity" is not so spurious.


RE: [agi] the uncomputable

2008-06-16 Thread Benjamin Johnston

Hi Abram,

I believe the key point of the paper is:

...human minds develop through time over generations, they invent new
concepts and techniques, which in turn allow previously resistant problems
to be solved. There seems to be no upward bound whatsoever to this
ascension.

This is captured in step 3 of the algorithm at the bottom of page 8. Step 3
requires human society to invent new concepts and techniques, and to thereby
perform hypercomputation.

I don't think that a computable nonmonotonic logic really solves this
problem. Nonmonotonic logics allow for jumping to conclusions within a fixed
axiomatization or conceptual structure. Bringsjord's paper, however,
requires the development of entirely new conceptual structures for problem
solving, as part of the problem solving process itself.


My own interpretation of the work is that an individual person is no more
powerful than a Turing machine (though, this point isn't discussed in the
paper), but that society as a whole is capable of hypercomputation because
we can keep drawing upon more resources to solve a problem: we build
machines, we reproduce, we interact with and record our thoughts in the
environment. Effectively, society as a whole becomes somewhat like a Zeus
machine - faster and more complex with each moment. I think that a
Turing-computable AGI can fit within such a society (or a similar all-AGI
society) and thereby achieve the same kind of hypercomputation.


(I should add that I'm not opposed to nonmonotonic logics, nor am I
attempting to contradict your claim that they're a valid way to AGI... I'm
merely commenting that I don't think the paper you present supports your
ideas about nonmonotonic logics)

-Ben

-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 17 June 2008 7:34 AM
To: agi@v2.listbox.com
Subject: [agi] the uncomputable

I previously posted here claiming that the human mind (and therefore
an ideal AGI) entertains uncomputable models, counter to the
AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
Anyway, I hope I'm not being too annoying if I try to argue the point
once again. This paper also argues the point:

http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf

The paper includes a study of the uncomputable busy beaver function
up to x=6. The authors claim that their success at computing busy
beaver strongly suggests that humans can hypercompute.

I believe the authors take this to imply that AGI cannot succeed on
current hardware; I am not suggesting this. Instead, I offer a fairly
concrete way to make deductions using a restricted class of
uncomputable models, as an illustration of the idea (and as weak
evidence that the general case can be embodied on computers).

The method is essentially nonmonotonic logic. Computable predicates
can be represented in any normal way (1st-order logic, lambda
calculus, a standard programming language...). Computably enumerable
predicates (such as the halting problem) are represented by a default
assumption of false, plus the computable method of enumerating true
cases. To reason about such a predicate, the system allocates however
much time it can spare to trying to prove a case true; if at the end
of that time it has not found a proof by the enumeration method, it
considers it false. (Of course it could come back later and try
harder, too.) Co-enumerable predicates are similarly assumed true
until a counterexample is found.
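A toy sketch of this default-false scheme in Python (my own illustration; the perfect-square predicate is of course decidable, and stands in here for a genuinely semi-decidable one like halting):

```python
import itertools

def holds_within(enumerate_true_cases, x, budget):
    """Default-false reasoning for a computably enumerable predicate:
    enumerate true cases for `budget` items; if x has not appeared,
    conclude (defeasibly) that the predicate is false of x. A
    co-enumerable predicate would dually default to true."""
    for case in itertools.islice(enumerate_true_cases(), budget):
        if case == x:
            return True
    return False  # default assumption; revisable with a bigger budget

def squares():
    # Enumeration of the true cases of "x is a perfect square".
    n = 0
    while True:
        yield n * n
        n += 1

print(holds_within(squares, 49, 100))  # -> True: 49 appears in time
print(holds_within(squares, 50, 100))  # -> False, by default
```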

Similar methodology can extend the class of uncomputables we can
handle somewhat farther. Consider the predicate "all Turing machines
of class N halt", where N is a computably enumerable class. Neither
the true cases nor the false cases of this predicate are computably
enumerable. Nonetheless, we can characterize the predicate by assuming
it is true until a counterexample is found: a Turing machine that
doesn't seem to halt when run as long as we can afford to run it. If
our best efforts (within time constraints) fail to find such a
machine, then we stick with the default assumption "true". (A
simplistic nonmonotonic logic can't quite handle this: at any stage of
the search, we would have many Turing machines still at their default
status of non-halting, which would make the predicate seem
always-false; we need to admit only assumptions that have been
hardened by trying to disprove them for some amount of time.)
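Again in toy form (my own sketch; halting times are faked with plain integers, so a real implementation would simulate actual Turing machine transition tables instead):

```python
import itertools

class ToyMachine:
    """Stand-in for a Turing machine: its halting time in steps, or
    None if it never halts. A real version would simulate a machine."""
    def __init__(self, halt_time):
        self.halt_time = halt_time

    def halts_within(self, budget):
        return self.halt_time is not None and self.halt_time <= budget

def slow_halters():
    # Class N1: machine k halts after 2**k steps -- all halt, ever
    # more slowly, so the run budget matters.
    for k in itertools.count():
        yield ToyMachine(2 ** k)

def with_a_diverger():
    # Class N2: like N1, except machine 5 never halts.
    for k in itertools.count():
        yield ToyMachine(None if k == 5 else 2 ** k)

def all_halt(enumerate_class, n_machines, run_budget):
    """Default-true reasoning for 'all machines in class N halt'.
    A machine counts as a counterexample only after it has been
    hardened: it resisted our best bounded attempt to run it to
    completion. Otherwise we stick with the default 'true'."""
    for m in itertools.islice(enumerate_class(), n_machines):
        if not m.halts_within(run_budget):
            return False  # hardened counterexample found
    return True  # default assumption; revisable with bigger budgets

print(all_halt(slow_halters, 10, 1024))     # -> True
print(all_halt(with_a_diverger, 10, 1024))  # -> False (machine 5)
print(all_halt(slow_halters, 10, 100))      # -> False: a mistaken but
                                            #    revisable conclusion
```

The last call shows the defeasibility: with too small a budget we wrongly reject the predicate, but a later, larger budget would restore it.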

This may sound so easy that a Turing machine could do it. And it is,
for set cutoffs. But the point is that an AGI that only considered
computable models, such as AIXI, would never converge to the correct
model in a world that contained anything uncomputable, whereas it
seems a human could. (AIXI would find Turing machines that were
ever-closer to the right model, but could never see that there was a
simple pattern behind these ever-larger machines.)

I hope that this makes my claim sound less extreme. Uncomputable
models of the world are not really so hard to reason about, if we're
comfortable

Re: [agi] the uncomputable

2008-06-16 Thread Mike Archbold
 I'm not sure that I'm responding to your intended meaning, but: all
 computers are in reality finite-state machines, including the brain
 (granted we don't think the real-number calculations on the cellular
 level are fundamental to intelligence). However, the finite state
 machines we call PCs are so large that it is convenient to pretend
 they have infinite memory; and when we do this, we get a machine that
 is equivalent in power to a Turing machine. But a Turing machine has
 an infinite tape, so it cannot really exist (the real computer
 eventually runs out of memory). Similarly, I'm arguing that the human
 brain is so large in particular ways that it is convenient to treat it
 as an even more powerful machine (perhaps an infinite-time Turing
 machine), despite the fact that such a machine cannot exist (we only
 have a finite amount of time to think). Thus a spurious infinity is
 not so spurious.

Abram,

Thanks for responding.  You know, I might be in a bit over my head with
some of the terminology in your paper, so I apologize in advance, but
just to clarify: "spurious infinity" according to Hegel is the sleight of
hand that happens when quantity transitions surreptitiously into a quality.
At some point counting up, we are simply not talking about any number at
all, but about a quality of being REALLY SUPER BIG as we make kind of a
leap.

According to him, when we talk about infinity we are talking about some
idea of a huge number (in this case of calculations) and, to use a phrase
he liked, "imaginary being."  So since I am kind of a Hegelian of sorts,
when I scanned the paper it looked like it argued that it is not possible
to compute something that I had become convinced was imaginary anyway.
That would be true if you bought into Hegel's definition of infinity, and I
realize there aren't a lot of Hegelians around.  But tomorrow I will read
further.

Mike



 On Mon, Jun 16, 2008 at 9:19 PM, Mike Archbold [EMAIL PROTECTED] wrote:
 I previously posted here claiming that the human mind (and therefore an
 ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
 Anyway, I hope I'm not being too annoying if I try to argue the point
 once again. This paper also argues the point:

 http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf


 It looks like the paper hinges on:
 "None of this prior work takes account of Gödel's intuition, repeatedly
 communicated to Hao Wang, that human minds converge to infinity in their
 power, and for this reason surpass the reach of ordinary Turing machines."

 The thing to watch out for here is what Hegel described as the "spurious
 infinity", which is just the imagination thinking some imaginary quantity
 really big; but no matter how big, you can always envision +1, and the
 result is always just another imaginary big number, to which you can add
 another +1... the point being that infinity is an idealistic quality, not
 a computable numeric quantity at all, i.e., not numerical; we are talking
 about thought as such.

 I didn't read the whole paper, but the point I wanted to make was that
 Hegel takes up the issue of infinity in his Science of Logic, which I
 think is a good ontology in general because it mixes up a lot of issues
 AI
 struggles with, like the ideal nature of quality and quantity, and also
 infinity.

 Mike Archbold
 Seattle


 The paper includes a study of the uncomputable busy beaver function
 up
 to x=6. The authors claim that their success at computing busy beaver
 strongly suggests that humans can hypercompute.

 I believe the authors take this to imply that AGI cannot succeed on
 current hardware; I am not suggesting this. Instead, I offer a fairly
 concrete way to make deductions using a restricted class of
 uncomputable models, as an illustration of the idea (and as weak
 evidence that the general case can be embodied on computers).

 The method is essentially nonmonotonic logic. Computable predicates can
 be represented in any normal way (1st-order logic, lambda
 calculus, a standard programming language...). Computably enumerable
 predicates (such as the halting problem) are represented by a default
 assumption of false, plus the computable method of enumerating true
 cases. To reason about such a predicate, the system allocates however
 much time it can spare to trying to prove a case true; if at the end of
 that time it has not found a proof by the enumeration method, it
 considers it false. (Of course it could come back later and try
 harder, too.) Co-enumerable predicates are similarly assumed true until
 a counterexample is found.

 Similar methodology can extend the class of uncomputables we can handle
 somewhat farther. Consider the predicate "all Turing machines of class N
 halt", where N is a computably enumerable class. Neither the true cases
 nor the false cases of this predicate are computably enumerable.
 Nonetheless, we can characterize the predicate by assuming