Re: join post

2008-11-28 Thread Russell Standish

I guess I haven't read those papers, so sorry if I was leading you up
the garden path re GTMs.

It sounds interesting that the universal prior could work for a
generalisation of the Turing machine, although I'm not sure what the
implications would be. Anyway, it sounds like you've got a research
programme :)

Cheers

On Thu, Nov 27, 2008 at 06:12:49PM -0500, Abram Demski wrote:
 
 Russell,
 
 The paper does indeed showcase one example of a universal prior that
 includes non-computable universes... Theorem 4.1. So it's *possible*.
 Of course it then proceeds to dash hopes for a universal prior over a
 broader domain, defined by GTMs. So, it would be interesting to know
 more about the conditions that make universality possible.
 
 --Abram
 
 
-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: join post

2008-11-27 Thread Russell Standish

On Wed, Nov 26, 2008 at 02:55:08PM -0500, Abram Demski wrote:
 
 Russell,
 
 I do not see why some appropriately modified version of that theorem
 couldn't be proven for other settings. As a concrete example let's
 just use Schmidhuber's GTMs. There would be universal GTMs and a
 constant cost for conversion and everything else needed for a version
 of the theorem, wouldn't there be? (I am assuming things, I will look
 up some details this afternoon... I have the book you refer to, I'll
 look at the theorem... but I suppose I should also re-read the paper
 about GTMs before making bold claims...)
 
 --Abram
 

IIRC, Schmidhuber's machines were non-prefix Turing machines. As such,
there may or may not be a well-defined probability distribution in the
first place. Solomonoff's original proposal, using a plain universal
Turing machine, didn't work because the probability distribution could
not be defined. If, however, a probability distribution could be
defined, then it would probably end up being equivalent to the S-L
universal prior.
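A standard way to see why the prefix property matters here (this is the usual Kraft-inequality argument, not something from the thread; the code and names are mine): for a prefix-free set of codewords the weights 2^-len(c) sum to at most 1, so they can serve as (semi)probabilities, while a non-prefix code has no such guarantee. A minimal sketch:

```python
# Sketch: why prefix-freeness matters for defining a probability
# distribution over programs. For a prefix-free codeword set, the Kraft
# inequality guarantees sum(2**-len(c)) <= 1, so the weights 2**-len(c)
# can serve as (semi)probabilities. Without the prefix property the sum
# is unconstrained and diverges as the code grows.

def kraft_sum(codewords):
    """Sum of 2^-len(c) over a set of binary codewords."""
    return sum(2.0 ** -len(c) for c in codewords)

def is_prefix_free(codewords):
    """True if no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

prefix_free = {"0", "10", "110", "111"}        # a complete prefix code
not_prefix_free = {"0", "00", "000", "0000"}   # each word prefixes the next

assert is_prefix_free(prefix_free)
assert kraft_sum(prefix_free) <= 1.0           # Kraft inequality holds

assert not is_prefix_free(not_prefix_free)
# These four strings alone already carry weight 0.9375; extending the
# family with "00000", "000000", ... pushes the sum toward 2, and
# admitting all binary strings makes it diverge (each length n
# contributes 2^n * 2^-n = 1).
```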

Cheers




Re: join post

2008-11-27 Thread Abram Demski

Russell,

Hmm, can't we simply turn any coding into a prefix-free coding by
prefacing each code for a GTM with a number of 1s indicating the
length of the following description, and then a 0 signaling the
beginning of the actual description? I am not especially familiar with
the prefix issue, so please forgive me if I am wrong...
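The construction described above can be transcribed directly; a small sketch (the function names are mine, not from the thread):

```python
# Direct transcription of the construction above: prefix each binary
# description with len(description) ones followed by a single zero.

def to_prefix_free(description):
    """Encode so that no valid codeword is a prefix of another."""
    return "1" * len(description) + "0" + description

def from_prefix_free(stream):
    """Read one codeword off the front of a bit stream.

    Returns (description, remaining_stream). Because the leading 1s
    announce the exact length, the decoder always knows where to stop --
    which is what makes the code prefix-free.
    """
    n = 0
    while stream[n] == "1":
        n += 1
    body_start = n + 1                      # skip the terminating 0
    return stream[body_start:body_start + n], stream[body_start + n:]

encoded = to_prefix_free("1011") + to_prefix_free("0")  # "111101011" + "100"
first, rest = from_prefix_free(encoded)
assert first == "1011"
assert from_prefix_free(rest)[0] == "0"
```

The cost is that a description of length n becomes 2n + 1 bits; self-delimiting length headers such as Elias codes get this down to roughly n + 2 log n, but the unary scheme is enough to make the point.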

Also, I do not understand why there would be reason to suspect that
the probability distribution, once properly defined, would turn out to
be equivalent to the S-L prior. GTMs can formally represent more
things than TMs, so why would those things not end up in the
probability distribution?

--Abram Demski

On Thu, Nov 27, 2008 at 5:18 AM, Russell Standish [EMAIL PROTECTED] wrote:

 On Wed, Nov 26, 2008 at 02:55:08PM -0500, Abram Demski wrote:

 Russell,

 I do not see why some appropriately modified version of that theorem
 couldn't be proven for other settings. As a concrete example let's
 just use Schmidhuber's GTMs. There would be universal GTMs and a
 constant cost for conversion and everything else needed for a version
 of the theorem, wouldn't there be? (I am assuming things, I will look
 up some details this afternoon... I have the book you refer to, I'll
 look at the theorem... but I suppose I should also re-read the paper
 about GTMs before making bold claims...)

 --Abram


 IIRC, Schmidhuber's machines were non-prefix Turing machines. As such,
 there may or may not be a well-defined probability distribution in the
 first place. Solomonoff's original proposal, using a plain universal
 Turing machine, didn't work because the probability distribution could
 not be defined. If, however, a probability distribution could be
 defined, then it would probably end up being equivalent to the S-L
 universal prior.

 Cheers

 --

 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australiahttp://www.hpcoders.com.au
 

 





Re: join post

2008-11-27 Thread Russell Standish

On Thu, Nov 27, 2008 at 02:40:04PM -0500, Abram Demski wrote:
 
 Russell,
 
 Hmm, can't we simply turn any coding into a prefix-free-coding by
 prefacing each code for a GTM with a number of 1s indicating the
 length of the following description, and then a 0 signaling the
 beginning of the actual description? I am not especially familiar with
 the prefix issue, so please forgive me if I am wrong...

Sure - but you also need to change the machine to recognise your code.

 
 Also, I do not understand why there would be reason to suspect that
 the probability distribution, once properly defined, would turn out to
 be equivalent to the S-L prior. GTMs can formally represent more
 things than TMs, so why would those things not end up in the
 probability distribution?
 
 --Abram Demski
 

It's been a while since I read Schmidhuber's papers, but I thought he
was talking about machines that continuously updated their output, but
would eventually converge (i.e. for each bit i of the output, there was
a time t_i after which that bit would not change).

This seems to be a restriction on the notion of Turing machine to me,
but not as restrictive as a prefix machine.
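The behaviour described above is computability "in the limit". A toy sketch (mine, with made-up stand-in "programs", not Schmidhuber's construction): each output bit may be revised finitely many times before settling, but no computable bound on the settling time t_i exists in general.

```python
# Toy illustration of output that converges "in the limit": bit i starts
# at 0 and flips to 1 once program i is seen to halt within the current
# time budget. Each bit settles after some finite time t_i, but no
# algorithm computes a bound on t_i in general -- that is exactly what
# separates this from ordinary Turing computability.

def halts_within(program, budget):
    """Stand-in halting check: run `program` (a generator) for `budget` steps."""
    gen = program()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True
    return False

def quick():          # halts after 3 steps
    yield from range(3)

def slow():           # halts after 50 steps
    yield from range(50)

def never():          # never halts
    while True:
        yield

programs = [quick, slow, never]

def output_at_time(t):
    """The machine's current guess at time t: one bit per program."""
    return [1 if halts_within(p, t) else 0 for p in programs]

# The output keeps being revised, but each bit eventually stops changing:
assert output_at_time(1)   == [0, 0, 0]
assert output_at_time(10)  == [1, 0, 0]
assert output_at_time(100) == [1, 1, 0]   # bit 2 stays 0 forever
```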




Re: join post

2008-11-27 Thread Abram Demski

Russell,

I just went to look at the paper "Hierarchies of generalized
Kolmogorov complexities and nonenumerable universal measures
computable in the limit" to find a quote showing that GTMs were a
generalization of Turing machines rather than a restriction. I found
such a quote as early as page 2:

"This constructive notion of describability is less restrictive than
the traditional notion of computability, mainly because we do not
insist on the existence of a halting program that computes an upper
bound of the convergence time [...]"

But as early as the abstract, I found:

"Among other things we show [...] that there is no universal
approximable distribution [...]"

So scratch that example! I now remember reading that when I first
encountered the paper, but obviously I did not pay much attention or I
would have recalled it sooner... I'll look over the proof again and
try to figure out whether it applies to even more general models (like
priors based on arithmetic describability or analytic describability).

--Abram

On Thu, Nov 27, 2008 at 5:22 PM, Russell Standish [EMAIL PROTECTED] wrote:

 On Thu, Nov 27, 2008 at 02:40:04PM -0500, Abram Demski wrote:

 Russell,

 Hmm, can't we simply turn any coding into a prefix-free coding by
 prefacing each code for a GTM with a number of 1s indicating the
 length of the following description, and then a 0 signaling the
 beginning of the actual description? I am not especially familiar with
 the prefix issue, so please forgive me if I am wrong...

 Sure - but you also need to change the machine to recognise your code.


 Also, I do not understand why there would be reason to suspect that
 the probability distribution, once properly defined, would turn out to
 be equivalent to the S-L prior. GTMs can formally represent more
 things than TMs, so why would those things not end up in the
 probability distribution?

 --Abram Demski


 It's been a while since I read Schmidhuber's papers, but I thought he
 was talking about machines that continuously updated their output, but
 would eventually converge (i.e. for each bit i of the output, there was
 a time t_i after which that bit would not change).

 This seems to be a restriction on the notion of Turing machine to me,
 but not as restrictive as a prefix machine.

 --

 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australiahttp://www.hpcoders.com.au
 

 





Re: join post

2008-11-27 Thread Abram Demski

Russell,

The paper does indeed showcase one example of a universal prior that
includes non-computable universes... Theorem 4.1. So it's *possible*.
Of course it then proceeds to dash hopes for a universal prior over a
broader domain, defined by GTMs. So, it would be interesting to know
more about the conditions that make universality possible.

--Abram




Re: join post

2008-11-26 Thread Bruno Marchal

Hi Abram,


On 26 Nov 2008, at 00:01, Abram Demski wrote:


 Bruno,

 Yes, I have encountered the provability logics before, but I am no  
 expert.


We will perhaps have opportunity to talk about this.




 In any given
 generation, the entity who can represent the truth-predicate of the
 most other entities will dominate.

 Why?

 The notion of the entities adapting their logics in order to better
 reason about each other is meant to be more of an informal
 justification than an exact proof, so I'm not worried about stating my
 assumptions precisely... If I did, I might simply take this to be an
 assumption rather than a derived fact. But, here is an informal
 justification.

 Since the entities start out using first-order logic, it will be
 useful to solve the halting problem to reach conclusions about what a
 fellow-creature *won't* ever reach conclusions about. This means a
 provable predicate will be useful. To support deduction with this
 predicate, of course, the entities will gain more and more axioms over
 time; axioms that help solve instances of the halting problem will
 survive, while axioms that provide incorrect information will not.
 This means that the provable predicate has a moving target: more and
 more is provable over time.


All right.



 Eventually it will become useful to
 abstract away from the details with a true predicate.


Here, assuming the mechanist hypothesis (or some weakening), the way  
the truth predicate is introduced is really what will decide if the  
soul of the machine will fall in Hell, or get enlightened and go to  
Heaven. The all encompassing truth is not even nameable or  
describable by the machines.





 The true
 predicate essentially says provable by some sufficiently evolved
 system. This allows an entity to ignore the details of the entity it
 is currently reasoning about.


If PA (Peano Arithmetic) deduces "I can prove that I am consistent"
from "I can prove that ZF (Zermelo Fraenkel Set Theory) proves that I
am consistent", then PA goes to hell!
If an entity refers to a more powerful entity, even if we trust that
more powerful entity, it is just an invalid argument from authority.
Of course if PA begins to *believe* in the axioms of ZF, then PA
becomes ZF, and can assert the consistency of PA without problem. But
then, we are no longer talking *about* PA, but about ZF.



 This won't always work-- sometimes it
 will need to resort to reasoning about provability again. But, it
 should be a useful concept (after all, we find it to be so).


Sure. But truth is really a question mark. We can only search for
it.





 Of course, this gives rise to an outlandish number of truth-values  
 (one
 for each ordinal number), when normally any more than 2 is  
 considered
 questionable.


 Not really, because those truth values are, imo, not really truth
 values, but they quantify a ladder toward infinite credibility,
 assurance or something. Perhaps security.

 I agree that the explosion of truth-values is acceptable because
 they are not really truth-values... but they do not go further and
 further into absolute confidence, but rather further and further into
 meaninglessness. Obviously my previous explanation was not adequate.

 First we have true and false. Dealing with these in an
 unrestricted manner, we can construct sentences such as this sentence
 is false.


I don't think we can really do that. We cannot, I think.  (And I can  
prove this making the assumption
that we are ideally sound universal machines).





 We need to label these somehow as meaningless or
 pathological. I think either a fixed-point construction or the
 revision theory are OK options for doing this;


In my opinion, revision theories are useful when a machine begins to
bet on a universal environment independent of herself. Above her
Godel-Lob-Solovay correct self-reference logic, she will have to
develop a non monotonic surface to be able to handle her errors,
dreams, etc. It is a bit closer to practical artificial
intelligence engineering than machine theology, but I am ok with that.



 perhaps one is better
 than the other, perhaps they are ultimately equivalent where it
 matters, I don't know. Anyway, now we are stuck with a new predicate:
 meaningless. Using this in an unrestricted manner, I can say this
 sentence is either meaningless or false. I need to rule this out, but
 I can't label it meaningless, or I will then conclude it is true
 (assuming something like classical logic). So I need to invent a new
 predicate, 2-meaningless. Using this in an unrestricted manner again
 would lead to trouble, so I'll need 3-meaningless and 4-meaningless
 and finitely-meaningless and countably-meaningless and so on.


Indeed. It seems you make the point.

Best,


Bruno Marchal

http://iridia.ulb.ac.be/~marchal/





Re: join post

2008-11-26 Thread Abram Demski

Russell,

I do not see why some appropriately modified version of that theorem
couldn't be proven for other settings. As a concrete example let's
just use Schmidhuber's GTMs. There would be universal GTMs and a
constant cost for conversion and everything else needed for a version
of the theorem, wouldn't there be? (I am assuming things, I will look
up some details this afternoon... I have the book you refer to, I'll
look at the theorem... but I suppose I should also re-read the paper
about GTMs before making bold claims...)

--Abram

On Tue, Nov 25, 2008 at 5:41 PM, Russell Standish [EMAIL PROTECTED] wrote:

 On Tue, Nov 25, 2008 at 04:58:41PM -0500, Abram Demski wrote:

 Russell,

 Can you point me to any references? I am curious to hear why the
 universality goes away, and what crucially depends means, et cetera.

 -Abram Demski


 This is sort of discussed in my book Theory of Nothing, but not in
 technical detail. Excuse the LaTeX notation below.

 Basically any mapping O(x) from the set of infinite binary strings
 {0,1}^\infty (equivalently the set of reals [0,1) ) to the integers
 induces a probability distribution relative to the uniform measure dy
 over {0,1}^\infty

 P(x) = \int_{y\in O^{-1}(x)} dy

 In the case where O(x) is a universal prefix machine, P(x) is just the
 usual Solomonoff-Levin universal prior, as discussed in chapter 3 of
 Li and Vitanyi. In the case where O(x) is not universal, or perhaps
 not even a machine at all, the important Coding theorem (Thm 4.3.3 in
 Li and Vitanyi) no longer holds, so the distribution is no longer
 universal; however, it is still a probability distribution (provided
 O(x) is defined for all x in {0,1}^\infty) that depends on the choice
 of observer map O(x).

 Hope this is clear.

 --

 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australiahttp://www.hpcoders.com.au
 

 





Re: join post

2008-11-26 Thread Abram Demski

Bruno,

I am glad for the opportunity to discuss these things with someone who
knows something about these issues.

 In my opinion, revision theories are useful when a machine begins to
 bet on a universal environment independent of herself. Above her
 Godel-Lob-Solovay correct self-reference logic, she will have to
 develop a non monotonic surface to be able to handle her errors,
 dreams, etc. It is a bit closer to practical artificial
 intelligence engineering than machine theology, but I am ok with that.

I am interested in nonmonotonic logics as an explanation of how we can
have concepts that don't just reduce to first-order theories--
specifically, concepts such as number that fall prey to Godelian
incompleteness. In other words, I think that "we use nonmonotonic
logic" is at least a partial answer to what I called the "little
puzzle."

 First we have true and false. Dealing with these in an
 unrestricted manner, we can construct sentences such as this sentence
 is false.

 I don't think we can really do that. We cannot, I think.  (And I can
 prove this making the assumption
 that we are ideally sound universal machines).

I'm not claiming that we can *consistently* construct such sentences,
just that we can try to construct them, and then run into problems
when we try to reason about them. Luckily we have what you called a
"nonmonotonic surface", so we draw back and either give up or try from
different angles.

--Abram


Re: join post

2008-11-25 Thread Abram Demski

Russell,

Can you point me to any references? I am curious to hear why the
universality goes away, and what crucially depends means, et cetera.

-Abram Demski

On Tue, Nov 25, 2008 at 5:44 AM, Russell Standish [EMAIL PROTECTED] wrote:

 On Mon, Nov 24, 2008 at 11:52:55AM -0500, Abram Demski wrote:

 As I said, I'm also interested in the notion of probability. I
 disagree with Solomonoff's universal distribution
 (http://en.wikipedia.org/wiki/Ray_Solomonoff), because it assumes that
 the universe is computable. I cannot say whether the universe we
 actually live in is computable or not; however, I argue that,
 regardless, an uncomputable universe is at least conceivable, even if
 it has a low credibility. So, a universal probability distribution
 should include that possibility.


 Solomonoff-Levin distributions can be found for non-computable
 universes (i.e. for non-Turing-complete observers). However, the notion
 of universality disappears, and the actual probability distribution
 obtained critically depends on the actual interpretation of data chosen.

 --

 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australiahttp://www.hpcoders.com.au
 

 





Re: join post

2008-11-25 Thread Russell Standish

On Tue, Nov 25, 2008 at 04:58:41PM -0500, Abram Demski wrote:
 
 Russell,
 
 Can you point me to any references? I am curious to hear why the
 universality goes away, and what crucially depends means, et cetera.
 
 -Abram Demski
 

This is sort of discussed in my book Theory of Nothing, but not in
technical detail. Excuse the LaTeX notation below.

Basically any mapping O(x) from the set of infinite binary strings
{0,1}^\infty (equivalently the set of reals [0,1) ) to the integers
induces a probability distribution relative to the uniform measure dy
over {0,1}^\infty

P(x) = \int_{y\in O^{-1}(x)} dy

In the case where O(x) is a universal prefix machine, P(x) is just the
usual Solomonoff-Levin universal prior, as discussed in chapter 3 of
Li and Vitanyi. In the case where O(x) is not universal, or perhaps
not even a machine at all, the important Coding theorem (Thm 4.3.3 in
Li and Vitanyi) no longer holds, so the distribution is no longer
universal; however, it is still a probability distribution (provided
O(x) is defined for all x in {0,1}^\infty) that depends on the choice
of observer map O(x).

Hope this is clear.
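For what it's worth, the induced distribution can be estimated by sampling. A Monte Carlo sketch (mine, not from the book), with a deliberately trivial, non-universal observer map given by a finite prefix-free codebook, so that P(x) should come out near 2^-len(codeword(x)):

```python
import random

# Toy observer map O: read an infinite bit string until a codeword from a
# finite prefix-free codebook is matched, then output the associated
# integer. P(x) is the measure of the set of strings mapped to x; we
# estimate it by drawing uniform random bit strings.

CODEBOOK = {"0": 1, "10": 2, "11": 3}   # prefix-free, so O is well defined

def O(bits):
    """Observer map: consume bits until a codeword is matched."""
    seen = ""
    for b in bits:
        seen += b
        if seen in CODEBOOK:
            return CODEBOOK[seen]
    raise ValueError("undefined on this prefix")

def random_bits(n):
    return "".join(random.choice("01") for _ in range(n))

random.seed(0)
TRIALS = 100_000
counts = {}
for _ in range(TRIALS):
    x = O(random_bits(8))               # 8 bits is plenty for this codebook
    counts[x] = counts.get(x, 0) + 1

est = {x: c / TRIALS for x, c in counts.items()}
# Expected: P(1) ~ 2^-1 = 0.5, P(2) ~ 2^-2 = 0.25, P(3) ~ 0.25
assert abs(est[1] - 0.5) < 0.01
assert abs(est[2] - 0.25) < 0.01
assert abs(est[3] - 0.25) < 0.01
```

With a universal prefix machine in place of this codebook the same construction would give the Solomonoff-Levin prior; here, being non-universal, it just gives *a* distribution that depends entirely on the choice of O.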




Re: join post

2008-11-25 Thread Abram Demski

Bruno,

Yes, I have encountered the provability logics before, but I am no expert.

 In any given
 generation, the entity who can represent the truth-predicate of the
 most other entities will dominate.

 Why?

The notion of the entities adapting their logics in order to better
reason about each other is meant to be more of an informal
justification than an exact proof, so I'm not worried about stating my
assumptions precisely... If I did, I might simply take this to be an
assumption rather than a derived fact. But, here is an informal
justification.

Since the entities start out using first-order logic, it will be
useful to solve the halting problem to reach conclusions about what a
fellow-creature *won't* ever reach conclusions about. This means a
"provable" predicate will be useful. To support deduction with this
predicate, of course, the entities will gain more and more axioms over
time; axioms that help solve instances of the halting problem will
survive, while axioms that provide incorrect information will not.
This means that the "provable" predicate has a moving target: more and
more is provable over time. Eventually it will become useful to
abstract away from the details with a "true" predicate. The "true"
predicate essentially says "provable by some sufficiently evolved
system." This allows an entity to ignore the details of the entity it
is currently reasoning about. This won't always work-- sometimes it
will need to resort to reasoning about provability again. But, it
should be a useful concept (after all, we find it to be so).

 Of course, this gives rise to an outlandish number of truth-values (one
 for each ordinal number), when normally any more than 2 is considered
 questionable.


 Not really, because those truth values are, imo, not really truth
 values, but they quantify a ladder toward infinite credibility,
 assurance or something. Perhaps security.

I agree that the "explosion of truth-values" is acceptable because
they are not really truth-values... but they do not go further and
further into absolute confidence, but rather further and further into
meaninglessness. Obviously my previous explanation was not adequate.

First we have "true" and "false". Dealing with these in an
unrestricted manner, we can construct sentences such as "this sentence
is false". We need to label these somehow as meaningless or
pathological. I think either a fixed-point construction or the
revision theory are OK options for doing this; perhaps one is better
than the other, perhaps they are ultimately equivalent where it
matters, I don't know. Anyway, now we are stuck with a new predicate:
"meaningless". Using this in an unrestricted manner, I can say "this
sentence is either meaningless or false". I need to rule this out, but
I can't label it meaningless, or I will then conclude it is true
(assuming something like classical logic). So I need to invent a new
predicate, "2-meaningless". Using this in an unrestricted manner again
would lead to trouble, so I'll need "3-meaningless" and "4-meaningless"
and "finitely-meaningless" and "countably-meaningless" and so on.
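The liar's pathology can be made concrete in revision-style terms (a toy sketch of my own, not a rendering of the formal revision theory of truth): revising a guessed value for "this sentence is false" just negates it, so the revision sequence never stabilizes, whereas an ordinary self-referential sentence like the truth-teller settles immediately.

```python
# Minimal sketch of the revision idea. A revision step maps a guessed
# truth value to the value the sentence would have under that guess.
# A sentence whose revision sequence never settles is the kind that has
# to be shunted into a "meaningless" category rather than given a
# classical truth value.

def revise_liar(value):
    """One revision step for 'this sentence is false'."""
    return not value

def revise_truthteller(value):
    """One revision step for 'this sentence is true'."""
    return value

def stabilizes(revise, start, steps=100):
    """Return the stable value if the revision sequence settles, else None."""
    value = start
    for _ in range(steps):
        new = revise(value)
        if new == value:
            return value
        value = new
    return None

assert stabilizes(revise_liar, True) is None         # oscillates forever
assert stabilizes(revise_liar, False) is None        # from either start
assert stabilizes(revise_truthteller, True) is True  # settles immediately
```

Note the truth-teller stabilizes from *either* starting guess, to that guess, which is the revision-theoretic way of saying it is consistent but underdetermined.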

--Abram

On Mon, Nov 24, 2008 at 5:03 PM, Bruno Marchal [EMAIL PROTECTED] wrote:


 On 24 Nov 2008, at 21:52, Abram Demski wrote:


 Hi Bruno,

 I am not sure I follow you here. All that Godel's incompleteness
 theorem proves is that no machine, and no axiomatisable theory, can
 solve all halting problems.
 The undecidability is always relative to such or such theory or
 machine prover. For a self-modifying theorem prover, the undecidable
 sentence can evolve. (extensionally, and yet remain the same
 intensionally)

 I agree with everything you say there, so I'm not sure where you
 aren't following me. I definitely agree with the idea of a
 self-modifying theorem prover that becomes stronger over time-- I
 think it is the right model. What I am saying in the paragraph you
 quoted is that one way out is to claim that when we add axioms to
 strengthen our system, we can choose either the axiom or the negation
 arbitrarily, since either is consistent with the system so far. I've
 argued with people who explicitly claim this. My opinion is that there
 is only one correct choice for each addition.

 I agree, for a category of pure machines, not yet confronted with some
 bigger or older universal machine.


 In the case of the
 halting problem, we want to reflect the actual truth about halting; in
 the (equivalent) domain of undecidable numerical statements, we still
 want the actual truth.

 Well, some of them you, or any equivalent or bigger machine, will never
 know, unless you are infinitely patient.
 Insolubility here is hard, but it makes the mathematical world
 necessarily ever surprising.
 You can hope for a theory of everything, which preserves the mystery,
 which makes it even more mysterious.




 Also, I should mention that the arithmetical hierarchy shows that some
 problems are more undecidable than halting: if we had a halting
 oracle, we would still be unable to decide those problems.
 

Re: join post

2008-11-24 Thread Bruno Marchal

Hi Abram, welcome.

On 24 Nov 2008, at 17:52, Abram Demski wrote (in part):

 The little puzzle is this: Godel's theorem tells us that any
 sufficiently strong logic does not have a complete set of deduction
 rules; the axioms will fail to capture all truths about the logical
 entities we're trying to talk about. But if these entities cannot be
 completely axiomatized, then in what sense are they well-defined? How is
 logic logical, if it is doomed to be incompletely specified? One way
 out here is to say that numbers (which happen to be the logical
 entities that Godel showed were doomed to incompleteness, though of
 course the incompleteness theorem has since been generalized to other
 domains) really are incompletely specified: the axioms are incomplete
 in that they fail to prove every sentence about numbers either true or
 false, but they are complete in that the ones they miss are in some
 sense actually not specified by our notion of number. I don't like
 this answer, because it is equivalent to saying that the halting
 problem really has no answer in the cases where it is undecidable.



I am not sure I follow you here. All that Godel's incompleteness
proves is that no machine, and no axiomatisable theory, can solve all
halting problems.
The undecidability is always relative to some particular theory or
machine prover. For a self-modifying theorem prover, the undecidable
sentences can evolve (extensionally, while remaining the same
intensionally).

For such machines the self-stopping problem becomes absolutely-yet-
relatively-to-them undecidable.
Actually I am very happy with this, because, assuming comp, this
could explain why humans have fought over this question for so long. And we
can bet it is not finished!

Tarski's theorem is even more religious, in the computationalist
setting. It means that the concept of truth (about a machine) already acts
like a god for that machine. No (sound) machine can give a
name to its own truth predicate.

See my paper on the arithmetical interpretation of Plotinus to know
more. But the main motivation of that paper is the failure of Aristotelian
materialism to address the mind-body problem. This is what we discuss in
the MGA thread, in case you want to catch the train. You can see my url
for the papers if you are interested in the foundations of physics and mind.

Best,

Bruno


http://iridia.ulb.ac.be/~marchal/




--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: join post

2008-11-24 Thread Abram Demski

Hi Bruno,

 I am not sure I follow you here. All that Godel's incompleteness
 proves is that no machine, and no axiomatisable theory, can solve all
 halting problems.
 The undecidability is always relative to some particular theory or
 machine prover. For a self-modifying theorem prover, the undecidable
 sentences can evolve (extensionally, while remaining the same
 intensionally).

I agree with everything you say there, so I'm not sure where you
aren't following me. I definitely agree with the idea of a
self-modifying theorem prover that becomes stronger over time-- I
think it is the right model. What I am saying in the paragraph you
quoted is that one way out is to claim that when we add axioms to
strengthen our system, we can choose either the axiom or the negation
arbitrarily, since either is consistent with the system so far. I've
argued with people who explicitly claim this. My opinion is that there
is only one correct choice for each addition. In the case of the
halting problem, we want to reflect the actual truth about halting; in
the (equivalent) domain of undecidable numerical statements, we still
want the actual truth.

Also, I should mention that the arithmetical hierarchy shows that some
problems are more undecidable than halting: if we had a halting
oracle, we would still be unable to decide those problems.
Schmidhuber's super-omegas are a perfect example.

http://www.idsia.ch/~juergen/kolmogorov.html

But you probably knew that already.
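Abram's point about the arithmetical hierarchy rests on relativized diagonalization. As a reminder of the base case, here is the classic halting-problem diagonalization sketched in Python (the `halts` decider is of course hypothetical; the whole point is that no total, correct one can exist):

```python
def make_diagonal(halts):
    """Given a claimed total halting decider halts(f, x) -> bool,
    build a program that the decider must get wrong."""
    def diag(f):
        if halts(f, f):   # decider says f(f) halts...
            while True:   # ...so loop forever,
                pass
        return None       # ...otherwise halt immediately.
    return diag

# For any candidate decider, consider diag applied to itself:
#   halts(diag, diag) == True   ->  diag(diag) loops forever  (wrong)
#   halts(diag, diag) == False  ->  diag(diag) halts          (wrong)
# The construction relativizes: hand `halts` an oracle for ordinary
# halting and the same trick defeats it, which is exactly the jump
# that generates the arithmetical hierarchy (Sigma_1, Sigma_2, ...).

# Sanity check with a (deliberately wrong) decider that always answers
# "does not halt": diag then halts immediately, refuting the decider.
diag = make_diagonal(lambda f, x: False)
assert diag(diag) is None
```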


 For such machines the self-stopping problem becomes absolutely-yet-
 relatively-to-them undecidable.
 Actually I am very happy with this, because, assuming comp, this
 could explain why humans have fought over this question for so long. And we
 can bet it is not finished!

This argument is given in longer form elsewhere? Perhaps that paper
you mention later on?


 Tarski's theorem is even more religious, in the computationalist
 setting. It means that the concept of truth (about a machine) already acts
 like a god for that machine. No (sound) machine can give a
 name to its own truth predicate.

To extend the model of the self-modifying theorem prover... the
scenario I use when thinking about truth is a population of entities
which, among other things, need to reason properly about each other.
(I could perhaps reduce this to one entity that, among other things,
needs to reason about itself; but that is needlessly recursive.) The
logic that these entities use evolves over time. In any given
generation, the entity who can represent the truth-predicate of the
most other entities will dominate. The question, then, is: what logic
will the population eventually converge to?

I think fair arguments could be given for the fixed-point or revision
theories in this scenario, but like I said, both create reference
gaps... so some creature could dominate over these by inventing a
predicate to fill the gap. That creature will then have its own
reference gaps, and yet more gap-filling predicates will be created.

My current thinking is that each gap-filling predicate will correspond
to an ordinal number, so that the maximal logic will be one with a
gap-filling predicate for each ordinal number. No gap will be left,
because if there was such a gap, it would correspond to an ordinal
number larger than all ordinal numbers, which is impossible. Of
course, this gives rise to an outlandish number of truth-values (one
for each ordinal number), when normally any more than 2 is considered
questionable.
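Abram's gap-filling picture can be caricatured in code. The sketch below is my own toy illustration, not his formalism (the names `Fact`, `Tr`, and `true_at` are invented): it stratifies truth Tarski-style, so the level-n predicate can only judge claims about strictly lower levels. Each level fills the gap left by the one below, and no single level closes all the gaps.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Fact:
    """An atomic claim we can check directly."""
    value: bool

@dataclass(frozen=True)
class Tr:
    """The claim 'sentence is true at the given level'."""
    level: int
    sentence: "Sent"

Sent = Union[Fact, Tr]

def true_at(n: int, s: Sent) -> bool:
    """Level-n truth: judges Tr-claims only of strictly lower level."""
    if isinstance(s, Fact):
        return s.value
    if s.level >= n:
        # the gap: level n cannot see its own (or a higher) truth predicate
        raise ValueError("level %d cannot judge level %d" % (n, s.level))
    return true_at(s.level, s.sentence)

# Level 1 can judge a level-0 truth claim...
assert true_at(1, Tr(0, Fact(True))) is True
# ...but level 0 cannot judge itself: that gap needs level 1, whose own
# gap needs level 2, and so on through the ordinals.
```

The iteration through all ordinals is what blocks the "one more gap" move: a remaining gap would need an ordinal above every ordinal, which does not exist.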


--Abram Demski




Re: join post

2008-11-24 Thread Bruno Marchal


On 24 Nov 2008, at 21:52, Abram Demski wrote:


 Hi Bruno,

 I am not sure I follow you here. All that Godel's incompleteness
 proves is that no machine, and no axiomatisable theory, can solve all
 halting problems.
 The undecidability is always relative to some particular theory or
 machine prover. For a self-modifying theorem prover, the undecidable
 sentences can evolve (extensionally, while remaining the same
 intensionally).

 I agree with everything you say there, so I'm not sure where you
 aren't following me. I definitely agree with the idea of a
 self-modifying theorem prover that becomes stronger over time-- I
 think it is the right model. What I am saying in the paragraph you
 quoted is that one way out is to claim that when we add axioms to
 strengthen our system, we can choose either the axiom or the negation
 arbitrarily, since either is consistent with the system so far. I've
 argued with people who explicitly claim this. My opinion is that there
 is only one correct choice for each addition.

I agree, for a category of pure machine, not yet confronted with some
bigger or older universal machine.


 In the case of the
 halting problem, we want to reflect the actual truth about halting; in
 the (equivalent) domain of undecidable numerical statements, we still
 want the actual truth.

Well, some of them, like you (an equivalent or bigger machine), will never
know, unless you are infinitely patient.
Insolubility here is hard, but it makes the mathematical world  
necessarily ever surprising.
You can hope for a theory of everything, which preserves the mystery,  
which makes it even more mysterious.




 Also, I should mention that the arithmetical hierarchy shows that some
 problems are more undecidable than halting: if we had a halting
 oracle, we would still be unable to decide those problems.
 Schmidhuber's super-omegas are a perfect example.


That is why the computationalist hypothesis is non-trivial. It gives a
prominent role to Sigma_1 completeness. The oracle can still play an
important role from the inside, but even this is not sure. (And then
comp gives importance to Sigma_1 completeness relative to an oracle.)





 http://www.idsia.ch/~juergen/kolmogorov.html

 But you probably knew that already.


 For such machines the self-stopping problem becomes absolutely-yet-
 relatively-to-them undecidable.
 Actually I am very happy with this, because, assuming comp, this
 could explain why humans have fought over this question for so long. And we
 can bet it is not finished!

 This argument is given in longer form elsewhere? Perhaps that paper
 you mention later on?


I have not published this. I explain it informally in the long version
of my French thesis. But it is obvious, I think, especially assuming comp. It
is implicit in my Plotinus paper (still on the first page of my url, I
think).
Finite machines are limited. But a finite machine which believes in the
induction axioms can know that it is limited, and can build
theories explaining those limitations. Formally this gives autonomous
progressions. The correct ones have to converge toward some non-convergence
possible at the horizon...
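The "autonomous progression" Bruno mentions is, on my reading, the Turing-Feferman construction: start from a base theory and, at each stage, add the statement that the previous stage is consistent. A purely symbolic sketch (the names `con_of` and `progression` are illustrative, not a real proof library):

```python
def con_of(theory: str) -> str:
    """The axiom added at each stage: 'the previous stage is consistent'."""
    return "Con(%s)" % theory

def progression(base: str, steps: int) -> list:
    """A finite initial segment T_0, T_1, ... of the progression,
    where T_{k+1} = T_k + Con(T_k)."""
    stages = [base]
    for _ in range(steps):
        stages.append(stages[-1] + " + " + con_of(stages[-1]))
    return stages

# progression("PA", 2) yields:
#   ["PA",
#    "PA + Con(PA)",
#    "PA + Con(PA) + Con(PA + Con(PA))"]
# By incompleteness, each stage is strictly stronger than the last; the
# transfinite continuation (taking unions at limit ordinals) is where
# the convergence questions alluded to above begin.
```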





 Tarski's theorem is even more religious, in the computationalist
 setting. It means that the concept of truth (about a machine) already acts
 like a god for that machine. No (sound) machine can give a
 name to its own truth predicate.

 To extend the model of the self-modifying theorem prover... the
 scenario I use when thinking about truth is a population of entities
 which, among other things, need to reason properly about each other.
 (I could perhaps reduce this to one entity that, among other things,
 needs to reason about itself; but that is needlessly recursive.) The
 logic that these entities use evolves over time.

Relative to some universal machine. I see subtle difficulties I
don't want to bore you with.


 In any given
 generation, the entity who can represent the truth-predicate of the
 most other entities will dominate.

Why?




 The question, then, is: what logic
 will the population eventually converge to?


If the entities believe in classical logic, and if they believe in  
induction, they will converge toward the self-reference logic of  
Solovay (G and G*, or GL and GLS nowadays).




 I think fair arguments could be given for the fixed-point or revision
 theories in this scenario, but like I said, both create reference
 gaps... so some creature could dominate over these by inventing a
 predicate to fill the gap. That creature will then have its own
 reference gaps, and yet more gap-filling predicates will be created.

I think so.





 My current thinking is that each gap-filling predicate will correspond
 to an ordinal number, so that the maximal logic will be one with a
 gap-filling predicate for each ordinal number. No gap will be left,
 because if there was such a gap, it would correspond to an ordinal
 number larger than all ordinal numbers, which is impossible. Of
 course, this