Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

"*That* is what I was asking about when I asked which side you fell on.

Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?"

The extensions are clearly judged on whether or not they accurately reflect 
the empirical world *as currently known* -- so they aren't arbitrary in that 
sense.


On the other hand, there may not be just a single set of extensions that 
accurately reflect the world so I guess that you could say that choosing 
among sets of extensions that both accurately reflect the world is 
(necessarily) an arbitrary process since there is no additional information 
to go on (though there are certainly heuristics like Occam's razor -- but 
they are more about getting a "usable" or "more likely to hold up under 
future observations" or "more likely to be easily modified to match future 
observations" theory . . . .).


The world is real.  Our explanations and theories are constructed.  For any 
complete system, you can take the classical approach but incompleteness (of 
current information which then causes undecidability) ever forces you into 
constructivism to create an ever-expanding series of shells of stronger 
systems to explain those systems contained by them.


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 27, 2008 5:43 PM
Subject: Re: [agi] constructivist issues


Mark,

Sorry, I accidentally called you Mike in the previous email!

Anyway, you said:

"Also, you seem to be ascribing arbitrariness to constructivism which
is emphatically not the case."

I didn't mean to ascribe arbitrariness to constructivism-- what I
meant was that constructivists would (as I understand it) ascribe
arbitrariness to extensions of arithmetic. A constructivist sees the
fact of the matter as undefined for undecidable statements, so adding
axioms that make them decidable is necessarily an arbitrary process.
The classical view, on the other hand, sees it as an attempt to
increase the amount of true information contained in the axioms-- so
there is a right and wrong.

*That* is what I was asking about when I asked which side you fell on.
Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?

--Abram

On Mon, Oct 27, 2008 at 3:33 PM, Mark Waser [EMAIL PROTECTED] wrote:

"The number of possible descriptions is countable"

I disagree.

"if we were able to randomly pick a real number between 1 and 0, it would
be indescribable with probability 1."

If we were able to randomly pick a real number between 1 and 0, it would be
indescribable with probability *approaching* 1.


"Which side do you fall on?"

I still say that the sides are parts of the same coin.

"In other words, we're proving arithmetic consistent only by adding to its
definition, which hardly counts. The classical viewpoint, of course, is that
the stronger system is actually correct. Its additional axioms are not
arbitrary. So, the proof reflects the truth."

What is the stronger system other than an addition?  And the viewpoint that
the stronger system is actually correct -- is that an assumption? a truth?
what?  (And how do you know?)

Also, you seem to be ascribing arbitrariness to constructivism which is
emphatically not the case.


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 27, 2008 2:53 PM
Subject: Re: [agi] constructivist issues


Mark,

The number of possible descriptions is countable, while the number of
possible real numbers is uncountable. So, there are infinitely many
more real numbers that are individually indescribable than
describable; so much so that if we were able to randomly pick a real
number between 1 and 0, it would be indescribable with probability 1.
I am getting this from Chaitin's book "Meta Math!".
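Chaitin's counting argument can be made concrete: every description is a finite string over some finite alphabet, and such strings can be listed one by one, so the set of descriptions is countable while the reals are not. A minimal sketch (the two-letter alphabet is just an illustrative stand-in for any finite symbol set):

```python
from itertools import count, product

ALPHABET = "ab"  # stands in for any finite symbol set

def descriptions():
    """Yield every finite string over ALPHABET, shortest first.

    Each possible description appears at some finite position in this
    sequence, which is exactly what countability means."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

gen = descriptions()
first = [next(gen) for _ in range(6)]
print(first)  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

By Cantor's diagonal argument no such list can contain every real in [0, 1], which is the gap between describable and indescribable reals.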

"I believe that arithmetic is a formal and complete system.  I'm not a
constructivist where formal and complete systems are concerned (since
there is nothing more to construct)."

Oh, I believe there is some confusion here because of my use of the
word "arithmetic". I don't mean grade-school
addition/subtraction/multiplication/division. What I mean is the
axiomatic theory of numbers, which Godel showed to be incomplete if it
is consistent. Godel also proved that one of the incompletenesses in
arithmetic was that it could not prove its own consistency. Stronger
logical systems can and have proven its consistency, but any
particular logical system cannot prove its own consistency. It seems
to me that the constructivist viewpoint says, "The so-called stronger
system merely defines truth in more cases; but we could just as
easily take the opposite definitions." In other words, we're proving
arithmetic consistent only by adding to its definition, which hardly
counts. The classical viewpoint, of course, is that the stronger
system is actually correct. Its additional axioms are not arbitrary.
So, the proof reflects the truth.

Which side do you fall on?

Re: [agi] constructivist issues

2008-10-28 Thread Abram Demski
Mark,

You assert that the extensions are judged on how well they reflect the world.

The extension currently under discussion is one that allows us to
prove the consistency of Arithmetic. So, it seems, you count that as
something observable in the world-- no mathematician has ever proved a
contradiction from the axioms of arithmetic, so they seem consistent.
If this is indeed what you are saying, then you are in line with the
classical view in this respect (and with my opinion).

But, if this is your view, I don't see how you can maintain the
constructivist assertion that Godelian statements are undecidable
because they are undefined by the axioms. It seems that, instead, you
are agreeing with the classical notion that there is in fact a truth
of the matter concerning Godelian statements, we're just unable to
deduce that truth from the axioms.

--Abram

On Tue, Oct 28, 2008 at 7:21 AM, Mark Waser [EMAIL PROTECTED] wrote:
 *That* is what I was asking about when I asked which side you fell on.

 Do you think such extensions are arbitrary, or do you think there is a
 fact of the matter?

 The extensions are clearly judged on whether or not they accurately reflect
 the empirical world *as currently known* -- so they aren't arbitrary in that
 sense.

 On the other hand, there may not be just a single set of extensions that
 accurately reflect the world so I guess that you could say that choosing
 among sets of extensions that both accurately reflect the world is
 (necessarily) an arbitrary process since there is no additional information
 to go on (though there are certainly heuristics like Occam's razor -- but
 they are more about getting a "usable" or "more likely to hold up under
 future observations" or "more likely to be easily modified to match future
 observations" theory . . . .).

 The world is real.  Our explanations and theories are constructed.  For any
 complete system, you can take the classical approach but incompleteness (of
 current information which then causes undecidability) ever forces you into
 constructivism to create an ever-expanding series of shells of stronger
 systems to explain those systems contained by them.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
Abram,

I could agree with the statement that there are uncountably many *potential* 
numbers but I'm going to argue that any number that actually exists is 
eminently describable.

Take the set of all numbers that are defined far enough after the decimal point 
that they never accurately describe anything manifest in the physical universe 
and are never described or invoked by any entity in the physical universe 
(specifically including a method for the generation of that number).

Pi is clearly not in the set since a) it describes all sorts of ratios in the 
physical universe and b) there is a clear formula for generating successive 
approximations of it.
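Clause (b) can be illustrated with any convergent series for pi; the sketch below uses the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ..., chosen only for clarity (the post names no particular formula):

```python
# Successive approximations of pi from the Leibniz series.
# Convergence is slow (error after n terms is roughly 1/n), but each
# partial sum is a concrete, finitely described approximation.

def pi_approximations(n_terms):
    """Return the partial sums of the Leibniz series, multiplied by 4."""
    total = 0.0
    approximations = []
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
        approximations.append(4 * total)
    return approximations

approx = pi_approximations(100000)
print(approx[-1])  # converges toward 3.14159...
```

The point of the example is just that pi, unlike the numbers in the proposed set, comes with an explicit generating procedure.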

My question is -- do these numbers really exist?  And, if so, by what 
definition of "exist", since my definition is meant to rule out any form of 
manifestation whether physical or as a concept.

Clearly these numbers have the potential to exist -- but it should be equally 
clear that they do not actually exist (i.e. they are never individuated out 
of the class).

Any number which truly exists has at least one description, either of type 
a) "the number which is manifest as" or b) "the number which is generated by".

Classicists seem to want to insist that all of these potential numbers actually 
do exist -- so they can make statements like "There are uncountably many real 
numbers that no one can ever describe in any manner."

I ask of them (and you) -- "Show me just one."  :-)






Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

Hi,

   We keep going around and around because you keep dropping my distinction 
between two different cases . . . .


   The statement that "The cat is red" is undecidable by arithmetic because 
it can't even be defined in terms of the axioms of arithmetic (i.e. it has 
*meaning* outside of arithmetic).  You need to construct 
additions/extensions to arithmetic to even start to deal with it.


   The statement that "Pi is a normal number" is decidable by arithmetic 
because each of the terms has meaning in arithmetic (so it certainly can be 
disproved by counter-example).  It may not be deducible from the axioms but 
the meaning of the statement is contained within the axioms.


   The first example is what you call a constructivist view.  The second 
example is what you call a classical view.  Which one I take is eminently 
context-dependent and you keep dropping the context.  If the meaning of the 
statement is contained within the system, it is decidable even if it is not 
deducible.  If the meaning is beyond the system, then it is not decidable 
because you can't even express what you're deciding.


   Mark


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 9:32 AM
Subject: Re: [agi] constructivist issues



Mark,

You assert that the extensions are judged on how well they reflect the world.


The extension currently under discussion is one that allows us to
prove the consistency of Arithmetic. So, it seems, you count that as
something observable in the world-- no mathematician has ever proved a
contradiction from the axioms of arithmetic, so they seem consistent.
If this is indeed what you are saying, then you are in line with the
classical view in this respect (and with my opinion).

But, if this is your view, I don't see how you can maintain the
constructivist assertion that Godelian statements are undecidable
because they are undefined by the axioms. It seems that, instead, you
are agreeing with the classical notion that there is in fact a truth
of the matter concerning Godelian statements, we're just unable to
deduce that truth from the axioms.

--Abram

On Tue, Oct 28, 2008 at 7:21 AM, Mark Waser [EMAIL PROTECTED] wrote:

*That* is what I was asking about when I asked which side you fell on.


Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?

The extensions are clearly judged on whether or not they accurately reflect
the empirical world *as currently known* -- so they aren't arbitrary in that
sense.

On the other hand, there may not be just a single set of extensions that
accurately reflect the world so I guess that you could say that choosing
among sets of extensions that both accurately reflect the world is
(necessarily) an arbitrary process since there is no additional information
to go on (though there are certainly heuristics like Occam's razor -- but
they are more about getting a "usable" or "more likely to hold up under
future observations" or "more likely to be easily modified to match future
observations" theory . . . .).

The world is real.  Our explanations and theories are constructed.  For any
complete system, you can take the classical approach but incompleteness (of
current information which then causes undecidability) ever forces you into
constructivism to create an ever-expanding series of shells of stronger
systems to explain those systems contained by them.











Re: [agi] constructivist issues

2008-10-28 Thread Ben Goertzel
Mark,

The question that is puzzling, though, is: how can it be that these
uncomputable, inexpressible entities are so bloody useful ;-)  ... for
instance in differential calculus ...

Also, to say that uncomputable entities don't exist because they can't be
finitely described is basically just to *define* existence as finite
describability.  So this is more a philosophical position on what "exists"
means than an argument that could convince anyone.

I have some more detailed thoughts on these issues that I'll write down
sometime soon when I get the time.   My position is fairly close to yours
but I think that with these sorts of issues, the devil is in the details.

ben

On Tue, Oct 28, 2008 at 6:53 AM, Mark Waser [EMAIL PROTECTED] wrote:

  Abram,

 I could agree with the statement that there are uncountably many
 *potential* numbers but I'm going to argue that any number that actually
 exists is eminently describable.

 Take the set of all numbers that are defined far enough after the decimal
 point that they never accurately describe anything manifest in the physical
 universe and are never described or invoked by any entity in the physical
 universe (specifically including a method for the generation of that
 number).

 Pi is clearly not in the set since a) it describes all sorts of ratios in
 the physical universe and b) there is a clear formula for generating
 successive approximations of it.

 My question is -- do these numbers really exist?  And, if so, by what
 definition of "exist", since my definition is meant to rule out any form of
 manifestation whether physical or as a concept.

 Clearly these numbers have the potential to exist -- but it should be
 equally clear that they do not actually exist (i.e. they are never
 individuated out of the class).

 Any number which truly exists has at least one description, either of type
 a) "the number which is manifest as" or b) "the number which is generated
 by".

 Classicists seem to want to insist that all of these potential numbers
 actually do exist -- so they can make statements like "There are uncountably
 many real numbers that no one can ever describe in any manner."

 I ask of them (and you) -- "Show me just one."  :-)






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] constructivist issues

2008-10-28 Thread Mike Tintner

MW: "Pi is a normal number" is decidable by arithmetic
because each of the terms has meaning in arithmetic

Can it be expressed in purely mathematical terms/signs without using 
language? 







Re: [agi] constructivist issues

2008-10-28 Thread Ben Goertzel
yes

On Tue, Oct 28, 2008 at 8:46 AM, Mike Tintner [EMAIL PROTECTED]wrote:

 MW: "Pi is a normal number" is decidable by arithmetic
 because each of the terms has meaning in arithmetic

 Can it be expressed in purely mathematical terms/signs without using
 language?







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





[agi] Occam's Razor and its abuse

2008-10-28 Thread Pei Wang
Triggered by several recent discussions, I'd like to make the
following position statement, though won't commit myself to long
debate on it. ;-)

Occam's Razor, in its original form, goes like "entities must not be
multiplied beyond necessity", and it is often stated as "All other
things being equal, the simplest solution is the best" or "when
multiple competing theories are equal in other respects, the principle
recommends selecting the theory that introduces the fewest assumptions
and postulates the fewest entities" --- all from
http://en.wikipedia.org/wiki/Occam's_razor

I fully agree with all of the above statements.

However, to me, there are two common misunderstandings associated with
it in the context of AGI and philosophy of science.

(1) To take this statement as self-evident or a stand-alone postulate

To me, it is derived or implied by the insufficiency of resources. If
a system has sufficient resources, it has no good reason to prefer a
simpler theory.

(2) To take it to mean "The simplest answer is usually the correct answer."

This is a very different statement, which cannot be justified either
analytically or empirically.  When theory A is an approximation of
theory B, usually the former is simpler than the latter, but less
correct or accurate, in terms of its relation with all available
evidence. When we are short in resources and have a low demand on
accuracy, we often prefer A over B, but it does not mean that by doing
so we judge A as more correct than B.

In summary, in choosing among alternative theories or conclusions, the
preference for simplicity comes from shortage of resources, though
simplicity and correctness are logically independent of each other.
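The position above can be put in toy code: a selection rule that prefers accuracy but falls back to a simpler theory only when resources run short, so the preference for simplicity comes from the budget rather than from equating simplicity with correctness. The theory names, accuracies, and costs below are made up purely for illustration:

```python
# Each candidate theory has an accuracy on the available evidence and a
# resource cost to use it.  Selection maximizes accuracy among the
# theories the budget can afford.

theories = [
    {"name": "A (approximation)", "accuracy": 0.90, "cost": 10},
    {"name": "B (full theory)",   "accuracy": 0.99, "cost": 100},
]

def choose(theories, budget):
    """Pick the most accurate theory that fits within the resource budget."""
    affordable = [t for t in theories if t["cost"] <= budget]
    return max(affordable, key=lambda t: t["accuracy"])

print(choose(theories, budget=1000)["name"])  # ample resources: B wins
print(choose(theories, budget=50)["name"])    # short on resources: A wins
```

Note that A is never judged "more correct" than B; it is merely all the budget allows.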

Pei




Re: [agi] constructivist issues

2008-10-28 Thread Abram Demski
Mark,

Yes, I do keep dropping the context. This is because I am concerned
only with mathematical knowledge at the moment. I should have been
more specific.

So, if I understand you right, you are saying that you take the
classical view when it comes to mathematics. In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?

--Abram

On Tue, Oct 28, 2008 at 10:20 AM, Mark Waser [EMAIL PROTECTED] wrote:
 Hi,

   We keep going around and around because you keep dropping my distinction
 between two different cases . . . .

   The statement that "The cat is red" is undecidable by arithmetic because
 it can't even be defined in terms of the axioms of arithmetic (i.e. it has
 *meaning* outside of arithmetic).  You need to construct
 additions/extensions to arithmetic to even start to deal with it.

   The statement that "Pi is a normal number" is decidable by arithmetic
 because each of the terms has meaning in arithmetic (so it certainly can be
 disproved by counter-example).  It may not be deducible from the axioms but
 the meaning of the statement is contained within the axioms.

   The first example is what you call a constructivist view.  The second
 example is what you call a classical view.  Which one I take is eminently
 context-dependent and you keep dropping the context.  If the meaning of the
 statement is contained within the system, it is decidable even if it is not
 deducible.  If the meaning is beyond the system, then it is not decidable
 because you can't even express what you're deciding.

   Mark


 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Tuesday, October 28, 2008 9:32 AM
 Subject: Re: [agi] constructivist issues




Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Pei Wang
Ben,

Thanks. So the other people now see that I'm not attacking a straw man.

My solution to Hume's problem, as embedded in the experience-grounded
semantics, is to assume no predictability, but to justify induction as
adaptation. However, it is a separate topic which I've explained in my
other publications.

Here I just want to point out that the original and basic meaning of
Occam's Razor and those two common (mis)usages of it are not
necessarily the same. I fully agree with the former, but not the
latter, and I haven't seen any convincing justification of the latter.
Instead, they are often taken for granted, under the name of Occam's
Razor.

Pei

On Tue, Oct 28, 2008 at 12:37 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Hi Pei,

 This is an interesting perspective; I just want to clarify for others on the
 list that it is a particular and controversial perspective, and contradicts
 the perspectives of many other well-informed research professionals and deep
 thinkers on relevant topics.

 Many serious thinkers in the area *do* consider Occam's Razor a standalone
 postulate.  This fits in naturally with the Bayesian perspective, in which
 one needs to assume *some* prior distribution, so one often assumes some
 sort of Occam prior (e.g. the Solomonoff-Levin prior, the speed prior, etc.)
 as a standalone postulate.
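The Occam-prior idea can be sketched concretely: code each hypothesis as a bitstring and give it prior weight 2^(-length), a simplified form of the Solomonoff-Levin prior (the example bitstrings are made up for illustration):

```python
# A minimal Occam-style prior: shorter codes get exponentially more
# prior weight.  With hypothetical hypothesis codes of 1, 5, and 8 bits,
# the shortest dominates the (normalized) prior mass.

hypotheses = ["0", "10110", "11010011"]  # illustrative bitstring codes

def occam_prior(code):
    """Unnormalized prior weight 2^(-length of code)."""
    return 2.0 ** -len(code)

weights = {code: occam_prior(code) for code in hypotheses}
total = sum(weights.values())
prior_mass = {code: w / total for code, w in weights.items()}

print(prior_mass["0"] > prior_mass["11010011"])  # True
```

Nothing here justifies the prior; as the paragraph above says, it is simply one assumption a Bayesian may choose to plug in.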

 Hume pointed out that induction (in the old sense of extrapolating from the
 past into the future) is not solvable except by introducing some kind of a
 priori assumption.  Occam's Razor, in one form or another, is a suitable a
 priori assumption to plug into this role.

 If you want to replace the Occam's Razor assumption with the assumption that
 the world is predictable by systems with limited resources, and we will
 prefer explanations that consume fewer resources, that seems unproblematic,
 as it's basically equivalent to assuming an Occam prior.

 On the other hand, I just want to point out that to get around Hume's
 complaint you do need to make *some* kind of assumption about the regularity
 of the world.  What kind of assumption of this nature underlies your work on
 NARS (if any)?

 ben

 On Tue, Oct 28, 2008 at 8:58 AM, Pei Wang [EMAIL PROTECTED] wrote:

 Triggered by several recent discussions, I'd like to make the
 following position statement, though won't commit myself to long
 debate on it. ;-)

 Occam's Razor, in its original form, goes like "entities must not be
 multiplied beyond necessity", and it is often stated as "All other
 things being equal, the simplest solution is the best" or "when
 multiple competing theories are equal in other respects, the principle
 recommends selecting the theory that introduces the fewest assumptions
 and postulates the fewest entities" --- all from
 http://en.wikipedia.org/wiki/Occam's_razor

 I fully agree with all of the above statements.

 However, to me, there are two common misunderstandings associated with
 it in the context of AGI and philosophy of science.

 (1) To take this statement as self-evident or a stand-alone postulate

 To me, it is derived or implied by the insufficiency of resources. If
 a system has sufficient resources, it has no good reason to prefer a
 simpler theory.

 (2) To take it to mean "The simplest answer is usually the correct
 answer."

 This is a very different statement, which cannot be justified either
 analytically or empirically.  When theory A is an approximation of
 theory B, usually the former is simpler than the latter, but less
 correct or accurate, in terms of its relation with all available
 evidence. When we are short in resources and have a low demand on
 accuracy, we often prefer A over B, but it does not mean that by doing
 so we judge A as more correct than B.

 In summary, in choosing among alternative theories or conclusions, the
 preference for simplicity comes from shortage of resources, though
 simplicity and correctness are logically independent of each other.

 Pei





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 A human being should be able to change a diaper, plan an invasion, butcher
 a hog, conn a ship, design a building, write a sonnet, balance accounts,
 build a wall, set a bone, comfort the dying, take orders, give orders,
 cooperate, act alone, solve equations, analyze a new problem, pitch manure,
 program a computer, cook a tasty meal, fight efficiently, die gallantly.
 Specialization is for insects.  -- Robert Heinlein


 



Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Abram Demski
Ben,

You assert that Pei is forced to make an assumption about the
regularity of the world to justify adaptation. Pei could also take a
different argument. He could try to show that *if* a strategy exists
that can be implemented given the finite resources, NARS will
eventually find it. Thus, adaptation is justified on a sort of "we
might as well try" basis. (The proof would involve showing that NARS
searches the space of finite-state machines that can be implemented
with the resources at hand, and is more probable to stay for longer
periods of time in configurations that give more reward, such that
NARS would eventually settle on a configuration if that configuration
consistently gave the highest reward.)

So, some form of learning can take place with no assumptions. The
problem is that the search space is exponential in the resources
available, so there is some maximum point where the system would
perform best (because the amount of resources match the problem), but
giving the system more resources would hurt performance (because the
system searches the unnecessarily large search space). So, in this
sense, the system's behavior seems counterintuitive-- it does not seem
to be taking advantage of the increased resources.

I'm not claiming NARS would have that problem, of course -- just that
a theoretical no-assumption learner would.
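The exponential blowup can be made concrete with a quick count of deterministic finite-state machines. The counting convention here (each (state, input) pair independently picks a (next state, output) pair) is an illustrative assumption, not something from the post:

```python
# Number of deterministic machines with n states, binary input, and
# binary output: there are n * n_inputs table rows, and each row
# independently chooses among n * n_outputs (next state, output) pairs,
# giving (n * n_outputs) ** (n * n_inputs) machines in total.

def num_machines(n_states, n_inputs=2, n_outputs=2):
    table_entries = n_states * n_inputs
    choices_per_entry = n_states * n_outputs
    return choices_per_entry ** table_entries

for n in (1, 2, 4, 8):
    print(n, num_machines(n))  # grows from 4 to over 10**19
```

Doubling the number of states squares (and more) the search space, which is the sense in which extra resources can hurt an exhaustive no-assumption learner.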

--Abram

On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel [EMAIL PROTECTED] wrote:


 On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang [EMAIL PROTECTED] wrote:

 Ben,

 Thanks. So the other people now see that I'm not attacking a straw man.

 My solution to Hume's problem, as embedded in the experience-grounded
 semantics, is to assume no predictability, but to justify induction as
 adaptation. However, it is a separate topic which I've explained in my
 other publications.

 Right, but justifying induction as adaptation only works if the environment
 is assumed to have certain regularities which can be adapted to.  In a
 random environment, adaptation won't work.  So, still, to justify induction
 as adaptation you have to make *some* assumptions about the world.

 The Occam prior gives one such assumption: that (to give just one form) sets
 of observations in the world tend to be producible by short computer
 programs.

 For adaptation to successfully carry out induction, *some* vaguely
 comparable property to this must hold, and I'm not sure if you have
 articulated which one you assume, or if you leave this open.

 In effect, you implicitly assume something like an Occam prior, because
 you're saying that  a system with finite resources can successfully adapt to
 the world ... which means that sets of observations in the world *must* be
 approximately summarizable via subprograms that can be executed within this
 system.

 So I argue that, even though it's not your preferred way to think about it,
 your own approach to AI theory and practice implicitly assumes some variant
 of the Occam prior holds in the real world.


 Here I just want to point out that the original and basic meaning of
 Occam's Razor and those two common (mis)usages of it are not
 necessarily the same. I fully agree with the former, but not the
 latter, and I haven't seen any convincing justification of the latter.
 Instead, they are often taken as granted, under the name of Occam's
 Razor.

 I agree that the notion of an Occam prior is a significant conceptual step
 beyond the original Occam's Razor precept enunciated long ago.

 Also, I note that, for those who posit the Occam prior as a **prior
 assumption**, there is not supposed to be any convincing justification for
 it.  The idea is simply that: one must make *some* assumption (explicitly or
 implicitly) if one wants to do induction, and this is the assumption that
 some people choose to make.

 -- Ben G



 




Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Pei Wang
Ben,

It seems that you agree the issue I pointed out really exists, but
just take it as a necessary evil. Furthermore, you think I also
assumed the same thing, though I failed to see it. I won't argue
against the "necessary evil" part --- as long as you agree that those
postulates (such as "the universe is computable") are not
convincingly justified. I won't try to disprove them.

As for the latter part, I don't think you can convince me that you
know me better than I know myself. ;-)

The following is from
http://nars.wang.googlepages.com/wang.semantics.pdf , page 28:

If the answers provided by NARS are fallible, in what sense these answers are
better than arbitrary guesses? This leads us to the concept of rationality.
When infallible predictions cannot be obtained (due to insufficient knowledge
and resources), answers based on past experience are better than arbitrary
guesses, if the environment is relatively stable. To say an answer is only a
summary of past experience (thus no future confirmation guaranteed) does
not make it equal to an arbitrary conclusion — it is what adaptation means.
Adaptation is the process in which a system changes its behaviors as if the
future is similar to the past. It is a rational process, even though individual
conclusions it produces are often wrong. For this reason, valid inference rules
(deduction, induction, abduction, and so on) are the ones whose conclusions
correctly (according to the semantics) summarize the evidence in the premises.
They are truth-preserving in this sense, not in the model-theoretic sense that
they always generate conclusions which are immune from future revision.

--- so you see, I don't assume adaptation will always be successful,
not even successful to a certain probability. You can dislike this
conclusion, though you cannot say it is the same as what is assumed by
Novamente and AIXI.

Pei

On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel [EMAIL PROTECTED] wrote:


 On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang [EMAIL PROTECTED] wrote:

 Ben,

 Thanks. So the other people now see that I'm not attacking a straw man.

 My solution to Hume's problem, as embedded in the experience-grounded
 semantics, is to assume no predictability, but to justify induction as
 adaptation. However, it is a separate topic which I've explained in my
 other publications.

 Right, but justifying induction as adaptation only works if the environment
 is assumed to have certain regularities which can be adapted to.  In a
 random environment, adaptation won't work.  So, still, to justify induction
 as adaptation you have to make *some* assumptions about the world.

 The Occam prior gives one such assumption: that (to give just one form) sets
 of observations in the world tend to be producible by short computer
 programs.
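 That assumption can be made concrete with a toy sketch (illustrative code of
 mine, not from any actual system): treat each bitstring seed as a "program"
 that generates data by repeating itself, weight it by 2^-length, and predict
 the next observation by a posterior-weighted vote over all consistent seeds.

```python
from fractions import Fraction

def occam_prior(program):
    # Occam prior: a "program" of length n gets weight 2^-n,
    # so shorter explanations dominate the vote.
    return Fraction(1, 2 ** len(program))

def generates(program, observed):
    # Toy notion of "program": a bitstring seed that produces
    # data by repeating itself.
    reps = len(observed) // len(program) + 1
    return (program * reps)[:len(observed)] == observed

def predict_next(observed, max_len=8):
    # Posterior-weighted vote over every seed up to max_len bits
    # that is consistent with the observations so far.
    votes = {"0": Fraction(0), "1": Fraction(0)}
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            seed = format(i, "0{}b".format(n))
            if generates(seed, observed):
                votes[seed[len(observed) % len(seed)]] += occam_prior(seed)
    best = max(votes, key=votes.get)
    return best, float(votes[best] / (votes["0"] + votes["1"]))

bit, weight = predict_next("010101")  # the short seed "01" dominates
```

 The point of the sketch is only that the 2^-length weighting is itself the
 regularity assumption: long seeds consistent with the data also exist, but
 they are discounted a priori.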

 For adaptation to successfully carry out induction, *some* vaguely
 comparable property to this must hold, and I'm not sure if you have
 articulated which one you assume, or if you leave this open.

 In effect, you implicitly assume something like an Occam prior, because
 you're saying that  a system with finite resources can successfully adapt to
 the world ... which means that sets of observations in the world *must* be
 approximately summarizable via subprograms that can be executed within this
 system.

 So I argue that, even though it's not your preferred way to think about it,
 your own approach to AI theory and practice implicitly assumes some variant
 of the Occam prior holds in the real world.


 Here I just want to point out that the original and basic meaning of
 Occam's Razor and those two common (mis)usages of it are not
 necessarily the same. I fully agree with the former, but not the
 latter, and I haven't seen any convincing justification of the latter.
 Instead, they are often taken for granted under the name of Occam's
 Razor.

 I agree that the notion of an Occam prior is a significant conceptual step beyond
 the original Occam's Razor precept enunciated long ago.

 Also, I note that, for those who posit the Occam prior as a **prior
 assumption**, there is not supposed to be any convincing justification for
 it.  The idea is simply that: one must make *some* assumption (explicitly or
 implicitly) if one wants to do induction, and this is the assumption that
 some people choose to make.

 -- Ben G



 


Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Ben Goertzel
Most certainly ... and the human mind seems to make a lot of other, more
specialized assumptions about the environment also ... so that unless the
environment satisfies a bunch of these other more specialized assumptions,
its adaptation will be very slow and resource-inefficient...

ben g

On Tue, Oct 28, 2008 at 12:05 PM, Pei Wang [EMAIL PROTECTED] wrote:

 We can say the same thing for the human mind, right?

 Pei

 On Tue, Oct 28, 2008 at 2:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  Sure ... but my point is that unless the environment satisfies a certain
  Occam-prior-like property, NARS will be useless...
 
  ben
 
  On Tue, Oct 28, 2008 at 11:52 AM, Abram Demski [EMAIL PROTECTED]
  wrote:
 
  Ben,
 
  You assert that Pei is forced to make an assumption about the
  regularity of the world to justify adaptation. Pei could also take a
  different argument. He could try to show that *if* a strategy exists
  that can be implemented given the finite resources, NARS will
  eventually find it. Thus, adaptation is justified on a sort of "we
  might as well try" basis. (The proof would involve showing that NARS
  searches the space of finite-state machines that can be implemented
  with the resources at hand, and is more likely to stay for longer
  periods of time in configurations that give more reward, such that
  NARS would eventually settle on a configuration if that configuration
  consistently gave the highest reward.)
 
  So, some form of learning can take place with no assumptions. The
  problem is that the search space is exponential in the resources
  available, so there is some maximum point where the system would
  perform best (because the amount of resources matches the problem), but
  giving the system more resources would hurt performance (because the
  system searches the unnecessarily large search space). So, in this
  sense, the system's behavior seems counterintuitive -- it does not seem
  to be taking advantage of the increased resources.

  I'm not claiming NARS would have that problem, of course -- just that
  a theoretical no-assumption learner would.
 
  --Abram
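  The exhaustive search described above can be sketched directly (a
  hypothetical illustration of mine, not NARS): enumerate every deterministic
  finite-state policy buildable with the given resources, score each on a
  reward stream, and observe that the number of candidates is exponential in
  the size of the policy table.

```python
from itertools import product

def enumerate_policies(n_states, n_obs=2, n_actions=2):
    # A deterministic finite-state policy is a table mapping
    # (state, observation) -> (next_state, action); enumerate all of them.
    keys = [(s, o) for s in range(n_states) for o in range(n_obs)]
    for flat in product(range(n_states * n_actions), repeat=len(keys)):
        yield {k: divmod(v, n_actions) for k, v in zip(keys, flat)}

def total_reward(policy, observations, reward):
    # Run the machine over the observation stream and sum the reward.
    state, total = 0, 0
    for obs in observations:
        state, action = policy[(state, obs)]
        total += reward(obs, action)
    return total

# Environment: reward 1 whenever the action copies the observation.
stream = [0, 1, 1, 0, 1, 0, 0, 1]
copy = lambda obs, action: int(action == obs)

policies = list(enumerate_policies(2))
best = max(total_reward(p, stream, copy) for p in policies)
# len(policies) == (n_states * n_actions) ** (n_states * n_obs) == 4**4 == 256,
# so the search space blows up exponentially as resources grow.
```

  Nothing in this search assumes any regularity of the environment -- which is
  exactly the "we might as well try" justification, paid for by the
  exponential table count.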
 

Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Pei Wang
We can say the same thing for the human mind, right?

Pei

On Tue, Oct 28, 2008 at 2:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Sure ... but my point is that unless the environment satisfies a certain
 Occam-prior-like property, NARS will be useless...

 ben

 On Tue, Oct 28, 2008 at 11:52 AM, Abram Demski [EMAIL PROTECTED]
 wrote:

 Ben,

 You assert that Pei is forced to make an assumption about the
  regularity of the world to justify adaptation. Pei could also take a
 different argument. He could try to show that *if* a strategy exists
 that can be implemented given the finite resources, NARS will
 eventually find it. Thus, adaptation is justified on a sort of we
 might as well try basis. (The proof would involve showing that NARS
  searches the space of finite-state machines that can be implemented
 with the resources at hand, and is more probable to stay for longer
 periods of time in configurations that give more reward, such that
 NARS would eventually settle on a configuration if that configuration
 consistently gave the highest reward.)

 So, some form of learning can take place with no assumptions. The
 problem is that the search space is exponential in the resources
 available, so there is some maximum point where the system would
 perform best (because the amount of resources match the problem), but
 giving the system more resources would hurt performance (because the
 system searches the unnecessarily large search space). So, in this
 sense, the system's behavior seems counterintuitive-- it does not seem
 to be taking advantage of the increased resources.

 I'm not claiming NARS would have that problem, of course just that
 a theoretical no-assumption learner would.

 --Abram


Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
 The question that is puzzling, though, is: how can it be that these 
 uncomputable, inexpressible entities are so bloody useful ;-)  ... for 
 instance in differential calculus ...

Differential calculus doesn't use those individual entities . . . . 

 Also, to say that uncomputable entities don't exist because they can't be 
 finitely described, is basically just to *define* existence as finite 
 describability.

I never said any such thing.  I referenced a class of numbers that I defined as 
never physically manifesting and never being conceptually distinct and then 
asked if they existed.  Clearly some portion of your liver that I can't define 
finitely still exists because it is physically manifest.

 So this is more a philosophical position on what "exists" means than an 
 argument that could convince anyone.

Yes, in that I basically defined my version of "exists" as "physically manifest 
and/or described or invoked" and then asked if that matched Abram's definition.  
No, in that you're now coming in with half (or less) of my definition and 
arguing that I'm unconvincing.  :-)


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 11:44 AM
  Subject: Re: [agi] constructivist issues



  Mark,

  The question that is puzzling, though, is: how can it be that these 
uncomputable, inexpressible entities are so bloody useful ;-)  ... for instance 
in differential calculus ...

  Also, to say that uncomputable entities don't exist because they can't be 
finitely described, is basically just to *define* existence as finite 
describability.  So this is more a philosophical position on what "exists" 
means than an argument that could convince anyone.

  I have some more detailed thoughts on these issues that I'll write down 
sometime soon when I get the time.   My position is fairly close to yours but I 
think that with these sorts of issues, the devil is in the details.

  ben


  On Tue, Oct 28, 2008 at 6:53 AM, Mark Waser [EMAIL PROTECTED] wrote:

Abram,

I could agree with the statement that there are uncountably many 
*potential* numbers but I'm going to argue that any number that actually exists 
is eminently describable.

Take the set of all numbers that are defined far enough after the decimal 
point that they never accurately describe anything manifest in the physical 
universe and are never described or invoked by any entity in the physical 
universe (specifically including a method for the generation of that number).

Pi is clearly not in the set since a) it describes all sorts of ratios in 
the physical universe and b) there is a clear formula for generating successive 
approximations of it.

My question is -- do these numbers really exist?  And, if so, by what 
definition of exist since my definition is meant to rule out any form of 
manifestation whether physical or as a concept.

Clearly these numbers have the potential to exist -- but it should be 
equally clear that they do not actually exist (i.e. they are never 
individuated out of the class).

Any number which truly exists has at least one description, either of the 
type a) "the number which is manifest as" or b) "the number which is generated 
by".

Classicists seem to want to insist that all of these potential numbers 
actually do exist -- so they can make statements like "There are uncountably 
many real numbers that no one can ever describe in any manner."

I ask of them (and you) -- show me just one.  :-)
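The counting argument behind that challenge can be sketched (illustrative code
of mine, not Mark's): every finite description is a finite string over a finite
alphabet, and all such strings can be listed in order, so the describable
numbers are at most countable -- while Cantor's diagonal argument makes the
reals uncountable. That is both why the classicist asserts undescribable reals
exist and why no single example can ever be exhibited.

```python
from itertools import count, islice, product

def all_descriptions(alphabet="ab"):
    # List every finite string in length-then-lexicographic order.
    # Each describable number gets an index in this list, so the set
    # of describable numbers is countable.
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

first_six = list(islice(all_descriptions(), 6))
# first_six == ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```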








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects.  -- Robert Heinlein






Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?


It depends.  Are you asking me a fully defined question within the current 
axioms of what you call mathematical systems (i.e. a "pi" question) or a "cat" 
question (which could *eventually* be defined by some massive extensions to 
your mathematical systems but which isn't currently defined in what you're 
calling mathematical systems)?


Saying that Gödel is about mathematical systems is not saying that it's not 
about cat-including systems.


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 12:06 PM
Subject: Re: [agi] constructivist issues



Mark,

Yes, I do keep dropping the context. This is because I am concerned
only with mathematical knowledge at the moment. I should have been
more specific.

So, if I understand you right, you are saying that you take the
classical view when it comes to mathematics. In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?

--Abram

On Tue, Oct 28, 2008 at 10:20 AM, Mark Waser [EMAIL PROTECTED] wrote:

Hi,

  We keep going around and around because you keep dropping my distinction
between two different cases . . . .

  The statement that "The cat is red" is undecidable by arithmetic because
it can't even be defined in terms of the axioms of arithmetic (i.e. it has
*meaning* outside of arithmetic).  You need to construct
additions/extensions to arithmetic to even start to deal with it.

  The statement that "Pi is a normal number" is decidable by arithmetic
because each of the terms has meaning in arithmetic (so it certainly can be
disproved by counter-example).  It may not be deducible from the axioms but
the meaning of the statement is contained within the axioms.

  The first example is what you call a "constructivist" view.  The second
example is what you call a "classical" view.  Which one I take is eminently
context-dependent and you keep dropping the context.  If the meaning of the
statement is contained within the system, it is decidable even if it is not
deducible.  If the meaning is beyond the system, then it is not decidable
because you can't even express what you're deciding.

  Mark
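As an aside on the "Pi is a normal number" example: normality is a claim about
limiting digit frequencies, so a finite tally like the sketch below (digits
hardcoded, assumed correct) can neither prove nor disprove it -- but it does
show the purely arithmetical content the statement has.

```python
from collections import Counter

# First 50 decimal digits of pi (assumed correct; normality concerns
# the limiting frequencies, which no finite prefix can settle).
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

counts = Counter(PI_DIGITS)
# Simple normality would require each digit to approach frequency 1/10
# in the limit; in this short prefix the counts are still lumpy.
lumpiest = counts.most_common(1)[0]
```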


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 9:32 AM
Subject: Re: [agi] constructivist issues











Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Pei Wang
Abram,

I agree with your basic idea in the following, though I usually put it
in different form.

Pei

On Tue, Oct 28, 2008 at 2:52 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Ben,

 You assert that Pei is forced to make an assumption about the
 regularity of the world to justify adaptation. Pei could also take a
 different argument. He could try to show that *if* a strategy exists
 that can be implemented given the finite resources, NARS will
 eventually find it. Thus, adaptation is justified on a sort of we
 might as well try basis. (The proof would involve showing that NARS
 searches the space of finite-state machines that can be implemented
 with the resources at hand, and is more probable to stay for longer
 periods of time in configurations that give more reward, such that
 NARS would eventually settle on a configuration if that configuration
 consistently gave the highest reward.)

 So, some form of learning can take place with no assumptions. The
 problem is that the search space is exponential in the resources
 available, so there is some maximum point where the system would
 perform best (because the amount of resources match the problem), but
 giving the system more resources would hurt performance (because the
 system searches the unnecessarily large search space). So, in this
 sense, the system's behavior seems counterintuitive-- it does not seem
 to be taking advantage of the increased resources.

 I'm not claiming NARS would have that problem, of course just that
 a theoretical no-assumption learner would.

 --Abram






Re: [agi] constructivist issues

2008-10-28 Thread Abram Demski
Mark,

Thank you, that clarifies somewhat.

But, *my* answer to *your* question would seem to depend on what you
mean when you say "fully defined". Under the classical interpretation,
yes: the question is fully defined, so it is a "pi" question. Under
the constructivist interpretation, no: the question is not fully
defined, so it is a "cat" question.

Numbers can be fully defined in the classical sense, but not in the
constructivist sense. So, when you say "fully defined question", do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

--Abram Demski

On Tue, Oct 28, 2008 at 3:28 PM, Mark Waser [EMAIL PROTECTED] wrote:
 In that case, shouldn't
 you agree with the classical perspective on Godelian incompleteness,
 since Godel's incompleteness theorem is about mathematical systems?

 It depends.  Are you asking me a fully defined question within the current
 axioms of what you call mathematical systems (i.e. a pi question) or a cat
 question (which could *eventually* be defined by some massive extensions to
 your mathematical systems but which isn't currently defined in what you're
 calling mathematical systems)?

 Saying that Gödel is about mathematical systems is not saying that it's not
 about cat-including systems.

 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Tuesday, October 28, 2008 12:06 PM
 Subject: Re: [agi] constructivist issues






Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

Numbers can be fully defined in the classical sense, but not in the

constructivist sense. So, when you say fully defined question, do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

How (or why) are numbers not fully defined in a constructivist sense?

(I was about to ask you whether or not you had answered your own question 
until that caught my eye on the second or third read-through).



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 3:47 PM
Subject: Re: [agi] constructivist issues


Mark,

Thank you, that clarifies somewhat.

But, *my* answer to *your* question would seem to depend on what you
mean when you say fully defined. Under the classical interpretation,
yes: the question is fully defined, so it is a pi question. Under
the constructivist interpretation, no: the question is not fully
defined, so it is a cat question.

Numbers can be fully defined in the classical sense, but not in the
constructivist sense. So, when you say fully defined question, do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

--Abram Demski







Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread William Pearson
2008/10/28 Ben Goertzel [EMAIL PROTECTED]:

 On the other hand, I just want to point out that to get around Hume's
 complaint you do need to make *some* kind of assumption about the regularity
 of the world.  What kind of assumption of this nature underlies your work on
 NARS (if any)?

Not directed to me, but my take on this interesting question. The
initial architecture would have limited assumptions about the world.
Then the programming in the architecture would for the bias.

Initially the system would divide up the world into the simple
(inanimate) and the highly complex (animate). Why should the system expect
animate things to be complex? Because it applies the intentional
stance and thinks that they are optimal problem solvers. Optimal
problem solvers in a social environment tend toward high complexity, as
there is an arms race over who can predict the others, but not be
predicted and exploited by the others.

Thinking "there are other things like me out here" when you are a
complex entity entails thinking things are complex, even when there
might be simpler explanations -- e.g., what causes weather.

  Will Pearson




Re: [agi] constructivist issues

2008-10-28 Thread Abram Demski
Mark,

That is thanks to Godel's incompleteness theorem. Any formal system
that describes numbers is doomed to be incomplete, meaning there will
be statements that can be constructed purely by reference to numbers
(no red cats!) that the system will fail to prove either true or
false.
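
As a hedged aside (the standard textbook construction, not something from
this thread): for a theory T containing enough arithmetic, the diagonal
lemma yields a sentence G such that

```latex
% G "says" that G is not provable in T:
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
% If T is consistent, T does not prove G;
% if T is \omega-consistent, T does not prove \neg G either.
```

so G is a purely number-theoretic statement that T can neither prove nor
refute.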

So my question is, do you interpret this as meaning "Numbers are not
well-defined and can never be" (constructivist), or do you interpret
this as "It is impossible to pack all true information about numbers
into an axiom system" (classical)?

Hmm... By the way, I might not be using the term "constructivist" in
a way that all constructivists would agree with. I think
"intuitionist" (a specific type of constructivist) would be a better
term for the view I'm referring to.

--Abram Demski

On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Numbers can be fully defined in the classical sense, but not in the

 constructivist sense. So, when you say fully defined question, do
 you mean a question for which all answers are stipulated by logical
 necessity (classical), or logical deduction (constructivist)?

 How (or why) are numbers not fully defined in a constructivist sense?

 (I was about to ask you whether or not you had answered your own question
 until that caught my eye on the second or third read-through).






Re: [agi] constructivist issues

2008-10-28 Thread Matt Mahoney
--- On Tue, 10/28/08, Mike Tintner [EMAIL PROTECTED] wrote:

 MW: "Pi is a normal number" is decidable by arithmetic
 because each of the terms has meaning in arithmetic
 
 Can it be expressed in purely mathematical terms/signs
 without using language? 

No, because mathematics is a language.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] constructivist issues

2008-10-28 Thread Mike Tintner
Ben,

What are the mathematical or logical signs for "normal number" / "rational
number"? My assumption would be that neither logic nor maths can be done
without some language attached - such as the term "rational number" - but I'm
asking from extensive ignorance.

Ben:yes

MT:MW:Pi is a normal number is decidable by arithmetic

because each of the terms has meaning in arithmetic


Can it be expressed in purely mathematical terms/signs without using 
language? 








Re: [agi] constructivist issues

2008-10-28 Thread Ben Goertzel
All of math can be done without any words ... it just gets annoying to read

for instance, all math can be formalized in this sort of manner

http://www.cs.miami.edu/~tptp/MizarTPTP/TPTPProofs/arithm/arithm__t1_arithm

and the words in there like

v1_ordinal1(B)

could be replaced with

v1_1234(B)

or whatever, and it wouldn't make any difference...
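
Ben's point can be illustrated with a toy sketch (hypothetical code, not
related to the actual Mizar/TPTP tooling): in a formal system, derivability
depends only on the shape of the rules, so uniformly renaming every symbol
leaves exactly the same conclusions derivable.

```python
# Toy forward-chaining "prover" over opaque symbol names.  Renaming every
# symbol consistently yields the renamed closure: what is provable does
# not depend on what the symbols are called.

def forward_chain(facts, rules):
    """Closure of `facts` under `rules` (pairs of premises -> conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def rename(symbols, mapping):
    return {mapping.get(s, s) for s in symbols}

# A tiny "theory" with meaningful-looking names...
rules = [(("v1_ordinal1(B)",), "natural(B)"),
         (("natural(B)",), "number(B)")]
facts = {"v1_ordinal1(B)"}

# ...and the same theory with names replaced by arbitrary codes.
mapping = {"v1_ordinal1(B)": "v1_1234(B)",
           "natural(B)": "p2(B)",
           "number(B)": "p3(B)"}
renamed_rules = [(tuple(mapping[p] for p in ps), mapping[c])
                 for ps, c in rules]
renamed_facts = rename(facts, mapping)

closure = forward_chain(facts, rules)
renamed_closure = forward_chain(renamed_facts, renamed_rules)

# The renamed closure is exactly the image of the original closure.
assert renamed_closure == rename(closure, mapping)
```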

ben



On Tue, Oct 28, 2008 at 2:10 PM, Mike Tintner [EMAIL PROTECTED]wrote:

  Ben,

 What are the mathematical or logical signs for normal number/ rational
 number? My assumption would be that neither logic nor maths can be done
 without some language attached - such as the term rational number -  but
 I'm asking from extensive ignorance.

 Ben:yes

 MT:MW:Pi is a normal number is decidable by arithmetic


 because each of the terms has meaning in arithmetic

 Can it be expressed in purely mathematical terms/signs without using
 language?








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] constructivist issues

2008-10-28 Thread Ben Goertzel
Hi guys,

I took a couple hours on a red-eye flight last night to write up in more
detail my
argument as to why uncomputable entities are useless for science:

http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html

Of course, I had to assume a specific formal model of science, which may be
controversial.  But at any rate, I think I did succeed in writing down my
argument more clearly than I'd been able to do in scattershot emails.

The only real AGI relevance here is some comments on Penrose's nasty AI
theories, e.g.
in the last paragraph and near the intro...

-- Ben G


On Tue, Oct 28, 2008 at 2:02 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Mark,

 That is thanks to Godel's incompleteness theorem. Any formal system
 that describes numbers is doomed to be incomplete, meaning there will
 be statements that can be constructed purely by reference to numbers
 (no red cats!) that the system will fail to prove either true or
 false.

 So my question is, do you interpret this as meaning Numbers are not
 well-defined and can never be (constructivist), or do you interpret
 this as It is impossible to pack all true information about numbers
 into an axiom system (classical)?

 Hmm By the way, I might not be using the term constructivist in
 a way that all constructivists would agree with. I think
 intuitionist (a specific type of constructivist) would be a better
 term for the view I'm referring to.

 --Abram Demski

 On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
  Numbers can be fully defined in the classical sense, but not in the
 
  constructivist sense. So, when you say fully defined question, do
  you mean a question for which all answers are stipulated by logical
  necessity (classical), or logical deduction (constructivist)?
 
  How (or why) are numbers not fully defined in a constructionist sense?
 
  (I was about to ask you whether or not you had answered your own question
  until that caught my eye on the second or third read-through).
 
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Ben Goertzel
What Hutter proved is (very roughly) that given massive computational
resources, following Occam's Razor will be -- within some possibly quite
large constant -- the best way to achieve goals in a computable
environment...

That's not exactly proving Occam's Razor, though it is a proof related to
Occam's Razor...

One could easily argue it is totally irrelevant to AI due to its assumption
of massive computational resources

ben g

On Tue, Oct 28, 2008 at 2:23 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 Hutter proved Occam's Razor (AIXI) for the case of any environment with a
 computable probability distribution. It applies to us because the observable
 universe is Turing computable according to currently known laws of physics.
 Specifically, the observable universe has a finite description length
 (approximately 2.91 x 10^122 bits, the Bekenstein bound of the Hubble
 radius).

 AIXI has nothing to do with insufficiency of resources. Given unlimited
 resources we would still prefer the (algorithmically) simplest explanation
 because it is the most likely under a Solomonoff distribution of possible
 environments.

 Also, AIXI does not state "the simplest answer is the best answer." It says
 that the simplest answer consistent with observation so far is the best
 answer. When we are short on resources (and we always are, because AIXI is
 not computable), we may choose a different explanation than the
 simplest one. However, this does not make the alternative correct.
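
As a hedged toy illustration of "simplest answer consistent with observation"
under a Solomonoff-style prior (an idealized sketch -- real Solomonoff
induction weights programs, not pattern strings, and is uncomputable):

```python
# Each hypothesis h gets prior weight 2^(-len(h)), with description length
# standing in for program length.  Among the hypotheses consistent with
# the data, the shortest one dominates the posterior.

def predicts(block, data):
    """True if repeating `block` reproduces the observed `data` prefix."""
    return all(data[i] == block[i % len(block)] for i in range(len(data)))

data = "010101"
hypotheses = ["01", "0101", "010101"]  # all consistent with the data

weights = {h: 2.0 ** -len(h) for h in hypotheses if predicts(h, data)}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

best = max(posterior, key=posterior.get)
assert best == "01"  # the shortest consistent description gets most weight
```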

 -- Matt Mahoney, [EMAIL PROTECTED]


 --- On Tue, 10/28/08, Pei Wang [EMAIL PROTECTED] wrote:

  From: Pei Wang [EMAIL PROTECTED]
  Subject: [agi] Occam's Razor and its abuse
  To: agi@v2.listbox.com
  Date: Tuesday, October 28, 2008, 11:58 AM
  Triggered by several recent discussions, I'd like to
  make the
  following position statement, though won't commit
  myself to long
  debate on it. ;-)
 
  Occam's Razor, in its original form, goes like "entities must not be
  multiplied beyond necessity", and it is often stated as "All other
  things being equal, the simplest solution is the best" or "when
  multiple competing theories are equal in other respects, the principle
  recommends selecting the theory that introduces the fewest assumptions
  and postulates the fewest entities" --- all from
  http://en.wikipedia.org/wiki/Occam%27s_razor
 
  I fully agree with all of the above statements.
 
  However, to me, there are two common misunderstandings
  associated with
  it in the context of AGI and philosophy of science.
 
  (1) To take this statement as self-evident or a stand-alone
  postulate
 
  To me, it is derived or implied by the insufficiency of
  resources. If
  a system has sufficient resources, it has no good reason to
  prefer a
  simpler theory.
 
  (2) To take it to mean "The simplest answer is usually
  the correct answer."
 
  This is a very different statement, which cannot be
  justified either
  analytically or empirically.  When theory A is an
  approximation of
  theory B, usually the former is simpler than the latter,
  but less
  correct or accurate, in terms of
  its relation with all available
  evidence. When we are short in resources and have a low
  demand on
  accuracy, we often prefer A over B, but it does not mean
  that by doing
  so we judge A as more correct than B.
 
  In summary, in choosing among alternative theories or
  conclusions, the
  preference for simplicity comes from shortage of resources,
  though
  simplicity and correctness are logically independent of
  each other.
 
  Pei







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Ben Goertzel
Au contraire, I suspect that the fact that biological organisms grow
via the same sorts of processes as the biological environment in which
they live causes the organisms' minds to be built with **a lot** of implicit
bias that is useful for surviving in that environment...

Some have argued that this kind of bias is **all you need** for evolution...
see "Evolution without Selection" by A. Lima de Faria.  I think that is
wrong, but it's interesting that there's enough evidence to even try to
make the argument...

ben g

On Tue, Oct 28, 2008 at 2:37 PM, Ed Porter [EMAIL PROTECTED] wrote:

 It appears to me that the assumptions about initial priors used by a self
 learning AGI or an evolutionary line of AGI's could be quite minimal.

 My understanding is that once a probability distribution starts receiving
 random samples from its distribution, the effect of the original prior
 becomes rapidly lost, unless it is a rather rare one.  Such rare, problematic
 priors would get selected against quickly by evolution.  Evolution would
 tend to tune for the priors most appropriate to the success of subsequent
 generations (either for further computing in the same system, if it is
 capable of enough change, or in descendant systems).  Probably the best
 priors would generally be ones that could be trained moderately rapidly by
 data.
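
Ed's washout claim can be sketched concretely (a standard Beta-Bernoulli
example with hypothetical numbers): posteriors that start from sharply
different priors converge once enough samples arrive.

```python
import random

# Beta(a, b) prior over a coin's bias; after observing heads/tails counts,
# the posterior mean is (a + heads) / (a + b + heads + tails).
def posterior_mean(a, b, heads, tails):
    return (a + heads) / (a + b + heads + tails)

random.seed(0)
true_p = 0.7
flips = [random.random() < true_p for _ in range(10000)]
heads = sum(flips)
tails = len(flips) - heads

optimist = posterior_mean(9, 1, heads, tails)   # prior mean 0.9
pessimist = posterior_mean(1, 9, heads, tails)  # prior mean 0.1

# After 10,000 samples the two posteriors are nearly identical:
assert abs(optimist - pessimist) < 0.01
```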

 So it seems an evolutionary system or line could initially learn priors
 without any assumptions for priors other than a random picking of priors.
 Over time and multiple generations it might develop hereditary priors, an
 perhaps even different hereditary priors for parts of its network connected
 to different inputs, outputs or internal controls.

 The use of priors in an AGI could be greatly improved by having a gen/comp
 hierarchy in which models for a given concept could inherit from the priors
 of sets of models for similar concepts, and in which the set of applicable
 priors could change contextually.  It would also seem that the notion of a
 prior could be improved by blending information from episodic and
 probabilistic models.

 It would appear than in almost any generally intelligent system, being able
 to approximate reality in a manner sufficient for evolutionary success with
 the most efficient representations would be a characteristic that would be
 greatly preferred by evolution, because it would allow systems to better
 model more of their environement sufficiently well for evolutionary success
 with whatever current modeling capacity they have.

 So, although a completely accurate description of virtually anything may not
 find much use for Occam's Razor, as a practically useful representation it
 often will.  It seems to me that Occam's Razor is more oriented toward
 deriving meaningful generalizations than toward exact descriptions of
 anything.

 Furthermore, it would seem to me that a simpler set of preconditions is
 generally more probable than a more complex one, because it requires less
 coincidence.  It would seem to me this would be true under most random sets
 of priors for the probabilities of the possible sets of components involved,
 combined with Occam's Razor-type selection.

 These are the musings of an untrained mind, since I have not spent much time
 studying philosophy, because such a high percentage of it was so obviously
 stupid (such as what was commonly said when I was young, that you can't have
 intelligence without language), and my understanding of math is much less
 than that of many on this list.  But nonetheless I think much of what I
 have said above is true.

 I think its gist is not totally dissimilar to what Abram has said.

 Ed Porter




 -Original Message-
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 28, 2008 3:05 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Occam's Razor and its abuse


 Abram,

 I agree with your basic idea in the following, though I usually put it in
 different form.

 Pei

 On Tue, Oct 28, 2008 at 2:52 PM, Abram Demski [EMAIL PROTECTED]
 wrote:
  Ben,
 
  You assert that Pei is forced to make an assumption about the
  regulatiry of the world to justify adaptation. Pei could also take a
  different argument. He could try to show that *if* a strategy exists
  that can be implemented given the finite resources, NARS will
  eventually find it. Thus, adaptation is justified on a sort of we
  might as well try basis. (The proof would involve showing that NARS
  searches the state of finite-state-machines that can be implemented
  with the resources at hand, and is more probable to stay for longer
  periods of time in configurations that give more reward, such that
  NARS would eventually settle on a configuration if that configuration
  consistently gave the highest reward.)
 
  So, some form of learning can take place with no assumptions. The
  problem is that the search space is exponential in the resources
  available, so there is some maximum point where the system would
  perform best (because the amount of 

Re: [agi] constructivist issues

2008-10-28 Thread Ben Goertzel
Any formal system that contains some basic arithmetic apparatus equivalent
to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete with
respect to statements about numbers... that is what Godel originally
showed...
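
For reference, the linked Peano apparatus amounts to roughly the following,
in first-order form (a summary, not a quotation from the linked page):

```latex
% First-order Peano arithmetic, with successor S and zero 0:
\forall x.\; S(x) \neq 0                                   % 0 is no successor
\forall x\,y.\; S(x) = S(y) \rightarrow x = y              % S is injective
\forall x.\; x + 0 = x \qquad \forall x\,y.\; x + S(y) = S(x + y)
\forall x.\; x \cdot 0 = 0 \qquad \forall x\,y.\; x \cdot S(y) = x \cdot y + x
% Induction schema (one instance per formula \varphi):
\bigl[\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\bigr]
  \rightarrow \forall x\,\varphi(x)
```

Any consistent, recursively axiomatized theory containing this much is
incomplete with respect to statements about numbers.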

On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote:

 That is thanks to Godel's incompleteness theorem. Any formal system
 that describes numbers is doomed to be incomplete


 Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  It
 is not true that any formal system is doomed to be incomplete WITH RESPECT
 TO NUMBERS.

 It is entirely possible (nay, almost certain) that there is a larger system
 where the information about numbers is complete but that the other things
 that the system describes are incomplete.

  So my question is, do you interpret this as meaning Numbers are not
 well-defined and can never be (constructivist), or do you interpret
 this as It is impossible to pack all true information about numbers
 into an axiom system (classical)?


 Hmmm.  From a larger reference framework, the former
 claimed-to-be-constructivist view isn't true/correct because it clearly *is*
 possible that numbers may be well-defined within a larger system (i.e. the
 "can never be" is incorrect).

 Does that mean that I'm a classicist or that you are mis-interpreting
 constructivism (because you're attributing a provably false statement to
 constructivists)?  I'm leaning towards the latter currently.  ;-)

 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Tuesday, October 28, 2008 5:02 PM
 Subject: Re: [agi] constructivist issues


  Mark,

 That is thanks to Godel's incompleteness theorem. Any formal system
 that describes numbers is doomed to be incomplete, meaning there will
 be statements that can be constructed purely by reference to numbers
 (no red cats!) that the system will fail to prove either true or
 false.

 So my question is, do you interpret this as meaning Numbers are not
 well-defined and can never be (constructivist), or do you interpret
 this as It is impossible to pack all true information about numbers
 into an axiom system (classical)?

 Hmm By the way, I might not be using the term constructivist in
 a way that all constructivists would agree with. I think
 intuitionist (a specific type of constructivist) would be a better
 term for the view I'm referring to.

 --Abram Demski

 On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Numbers can be fully defined in the classical sense, but not in the


 constructivist sense. So, when you say fully defined question, do
 you mean a question for which all answers are stipulated by logical
 necessity (classical), or logical deduction (constructivist)?

 How (or why) are numbers not fully defined in a constructionist sense?

 (I was about to ask you whether or not you had answered your own question
 until that caught my eye on the second or third read-through).













-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
 Any formal system that contains some basic arithmetic apparatus equivalent 
 to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete with 
 respect to statements about numbers... that is what Godel originally 
 showed...

Oh.  Ick!  My bad phrasing.  "WITH RESPECT TO NUMBERS" should have been "WITH
RESPECT TO THE DEFINITION OF NUMBERS" since I was responding to "Numbers are
not well-defined and can never be".  Further, I should not have said
"information about numbers" when I meant "definition of numbers" -- two
radically different things.  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question: "So my question is, do you
interpret this as meaning 'Numbers are not well-defined and can never be'
(constructivist), or do you interpret this as 'It is impossible to pack all
true information about numbers into an axiom system' (classical)?"

Does the statement that a formal system is incomplete with respect to
statements about numbers mean that "Numbers are not well-defined and can
never be"?

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as in
Constructivist epistemology
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had
Constructivism (mathematics) pointed out to me
(http://en.wikipedia.org/wiki/Constructivism_(mathematics)).  All I can say is
Ick!  I emphatically do not believe "When one assumes that an object does not
exist and derives a contradiction from that assumption, one still has not
found the object and therefore not proved its existence."



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your argument as 
to why uncomputable entities are useless for science.  I'm going to need to go 
back over it a few more times though. :-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus equivalent to 
http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete with 
respect to statements about numbers... that is what Godel originally showed...


  On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote:

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete



Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  It 
is not true that any formal system is doomed to be incomplete WITH RESPECT TO 
NUMBERS.

It is entirely possible (nay, almost certain) that there is a larger system 
where the information about numbers is complete but that the other things that 
the system describes are incomplete.



  So my question is, do you interpret this as meaning Numbers are not
  well-defined and can never be (constructivist), or do you interpret
  this as It is impossible to pack all true information about numbers
  into an axiom system (classical)?



Hmmm.  From a larger reference framework, the former 
claimed-to-be-constructivist view isn't true/correct because it clearly *is* 
possible that numbers may be well-defined within a larger system (i.e. the can 
never be is incorrect).

Does that mean that I'm a classicist or that you are mis-interpreting 
constructivism (because you're attributing a provably false statement to 
constructivists)?  I'm leaning towards the latter currently.  ;-)


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com

Sent: Tuesday, October 28, 2008 5:02 PM

Subject: Re: [agi] constructivist issues



  Mark,

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete, meaning there will
  be statements that can be constructed purely by reference to numbers
  (no red cats!) that the system will fail to prove either true or
  false.

  So my question is, do you interpret this as meaning Numbers are not
  well-defined and can never be (constructivist), or do you interpret
  this as It is impossible to pack all true information about numbers
  into an axiom system (classical)?

  Hmm By the way, I might not be using the term constructivist in
  a way that all constructivists would agree with. I think
  intuitionist (a specific type of constructivist) would be a better
  term for the view I'm referring to.

  --Abram Demski

  On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote:

Numbers can be fully defined in the classical sense, but not in the


constructivist sense. So, when you say fully defined question, do
you mean a question for which all answers are stipulated by logical

Re: [agi] constructivist issues

2008-10-28 Thread Ben Goertzel
"Well-defined" is not well-defined, in my view...

However, it does seem clear that "the integers" (for instance) is not an
entity with *scientific* meaning, if you accept my formalization of science
in the blog entry I recently posted...



On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser [EMAIL PROTECTED] wrote:

   Any formal system that contains some basic arithmetic apparatus
 equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be
 incomplete with respect to statements about numbers... that is what Godel
 originally showed...

 Oh.  Ick!  My bad phrasing.  WITH RESPECT TO NUMBERS should have been WITH
 RESPECT TO THE DEFINITION OF NUMBERS since I was responding to Numbers are
 not well-defined and can never be.  Further, I should not have said
 information about numbers when I meant definition of numbers.  two
 radically different thingsArgh!

 = = = = = = = =

 So Ben, how would you answer Abram's question So my question is, do you
 interpret this as meaning Numbers are not well-defined and can never be
 (constructivist), or do you interpret this as It is impossible to pack all
 true information about numbers into an axiom system (classical)?

 Does the statement that a formal system is incomplete with respect to
 statements about numbers mean that Numbers are not well-defined and can
 never be.

 = = = = = = =

 (Semi-)Retraction - maybe? (mostly for Abram).

 Ick again!  I was assuming that we were talking about constructivism as in
 Constructivist epistemology (
 http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just
 had Constructivism (mathematics) pointed out to me (
 http://en.wikipedia.org/wiki/Constructivism_(mathematics)).
 All I can say is "Ick!"  I emphatically do not believe "When one assumes
 that an object does not exist and derives a contradiction from that
 assumption (http://en.wikipedia.org/wiki/Reductio_ad_absurdum), one still
 has not found the object and therefore not proved its existence."


 = = = = = = = =

 I'm quitting and going home now to avoid digging myself a deeper hole  :-)

 Mark

 PS.  Ben, I read and, at first glance, liked and agreed with your argument
 as to why uncomputable entities are useless for science.  I'm going to need
 to go back over it a few more times though.:-)

 - Original Message -

 *From:* Ben Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, October 28, 2008 5:55 PM
 *Subject:* Re: [agi] constructivist issues


 Any formal system that contains some basic arithmetic apparatus equivalent
 to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete
 with respect to statements about numbers... that is what Godel originally
 showed...

 On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote:

  That is thanks to Godel's incompleteness theorem. Any formal system
 that describes numbers is doomed to be incomplete


 Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  It
 is not true that any formal system is doomed to be incomplete WITH RESPECT
 TO NUMBERS.

 It is entirely possible (nay, almost certain) that there is a larger
 system where the information about numbers is complete but that the other
 things that the system describes are incomplete.

 So my question is, do you interpret this as meaning Numbers are not
 well-defined and can never be (constructivist), or do you interpret
 this as It is impossible to pack all true information about numbers
 into an axiom system (classical)?


 Hmmm.  From a larger reference framework, the former
 claimed-to-be-constructivist view isn't true/correct because it clearly *is*
 possible that numbers may be well-defined within a larger system (i.e. the
 can never be is incorrect).

 Does that mean that I'm a classicist or that you are mis-interpreting
 constructivism (because you're attributing a provably false statement to
 constructivists)?  I'm leaning towards the latter currently.  ;-)

 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Tuesday, October 28, 2008 5:02 PM
 Subject: Re: [agi] constructivist issues


   Mark,

 That is thanks to Godel's incompleteness theorem. Any formal system
 that describes numbers is doomed to be incomplete, meaning there will
 be statements that can be constructed purely by reference to numbers
 (no red cats!) that the system will fail to prove either true or
 false.

 So my question is, do you interpret this as meaning "Numbers are not
 well-defined and can never be" (constructivist), or do you interpret
 this as "It is impossible to pack all true information about numbers
 into an axiom system" (classical)?

 Hmm. By the way, I might not be using the term "constructivist" in
 a way that all constructivists would agree with. I think
 "intuitionist" (a specific type of constructivist) would be a better
 term for the view I'm referring to.

 --Abram Demski

 On Tue, Oct 

Re: [agi] constructivist issues

2008-10-28 Thread Mike Tintner

Matt,

Interesting question re the differences between mathematics - i.e. 
arithmetic, algebra - and logic vs language.


I haven't really thought about this, but I wouldn't call maths a language.

Maths consists of symbolic systems of quantification and schematic patterns 
(geometry) which can only be applied to distinct entities - and is very 
limited in its capacity to describe the world.


Language is vastly more general and abstract and actually not normally meant 
to be reduced to distinct quantities, patterns or entities, or pinned down, 
period, as maths is -- e.g.:


LIFE TAKES LOTS OF FORMS  ["life" is a supra-entity, "lots" a 
supra-quantity, "form" a supra-pattern]


ditto: MATT MAHONEY IS A PERSONALITY IN PROGRESS

Verbal statements like these aren't meant to be pinned down or definitively 
defined - and beyond the reach of maths.


Language consists of open-ended classes;  maths consists of closed-ended 
classes. Only language has the capacity to comprehensively describe the 
world. Maths is more of a sub-language than a true, full language.



Matt:



MW: "Pi is a normal number" is decidable by arithmetic
because each of the terms has meaning in arithmetic.

Can it be expressed in purely mathematical terms/signs
without using language?


No, because mathematics is a language.







---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Pei Wang
Matt,

The currently known "laws of physics" are a *description* of the
universe at a certain level, which is fundamentally different from the
universe itself. Also, "All human knowledge can be reduced into
physics" is not a viewpoint accepted by everyone.

Furthermore, "computable" is a property of a mathematical function. It
takes a bunch of assumptions to be applied to a statement, and some
additional ones to be applied to an object --- "Is the Earth
computable?" Does the previous question ever make sense?

Whenever someone proves something outside mathematics, it is always
based on certain assumptions. If the assumptions are not well
justified, there is no strong reason for people to accept the
conclusion, even though the proof process is correct.

Pei

On Tue, Oct 28, 2008 at 5:23 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Hutter proved Occam's Razor (AIXI) for the case of any environment with a 
 computable probability distribution. It applies to us because the observable 
 universe is Turing computable according to currently known laws of physics. 
 Specifically, the observable universe has a finite description length 
 (approximately 2.91 x 10^122 bits, the Bekenstein bound of the Hubble radius).

 AIXI has nothing to do with insufficiency of resources. Given unlimited 
 resources we would still prefer the (algorithmically) simplest explanation 
 because it is the most likely under a Solomonoff distribution of possible 
 environments.

 Also, AIXI does not state the simplest answer is the best answer. It says 
 that the simplest answer consistent with observation so far is the best 
 answer. When we are short on resources (and we always are because AIXI is not 
 computable), then we may choose a different explanation than the simplest 
 one. However this does not make the alternative correct.
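The preference Matt describes can be seen in a toy sketch (not AIXI itself; the "program lengths" below are made-up numbers for illustration): each hypothesis consistent with the observations is weighted by 2^-length, a Solomonoff-style prior under which the algorithmically simplest consistent hypothesis dominates.

```python
# Toy Solomonoff-style weighting: among candidate "programs" that all
# reproduce the observed data, weight each by 2^-length.
observed = "ababab"

# Hypothetical candidate generators: (description, program length in bits).
# Both are assumed consistent with `observed`; the lengths are invented.
candidates = [
    ("repeat 'ab' forever", 16),       # short program
    ("emit the literal string", 48),   # long program
]

weights = {desc: 2.0 ** -length for desc, length in candidates}
total = sum(weights.values())
posterior = {desc: w / total for desc, w in weights.items()}
for desc, p in posterior.items():
    print(f"{desc}: {p:.10f}")
```

The short program ends up with essentially all the posterior mass, which is the sense in which the simplest explanation consistent with observation so far is the most likely one.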

 -- Matt Mahoney, [EMAIL PROTECTED]


 --- On Tue, 10/28/08, Pei Wang [EMAIL PROTECTED] wrote:

 From: Pei Wang [EMAIL PROTECTED]
 Subject: [agi] Occam's Razor and its abuse
 To: agi@v2.listbox.com
 Date: Tuesday, October 28, 2008, 11:58 AM
 Triggered by several recent discussions, I'd like to make the
 following position statement, though won't commit myself to long
 debate on it. ;-)

 Occam's Razor, in its original form, goes like "entities must not be
 multiplied beyond necessity", and it is often stated as "All other
 things being equal, the simplest solution is the best" or "when
 multiple competing theories are equal in other respects, the principle
 recommends selecting the theory that introduces the fewest assumptions
 and postulates the fewest entities" --- all from
 http://en.wikipedia.org/wiki/Occam's_razor

 I fully agree with all of the above statements.

 However, to me, there are two common misunderstandings associated with
 it in the context of AGI and philosophy of science.

 (1) To take this statement as self-evident or a stand-alone postulate

 To me, it is derived or implied by the insufficiency of resources. If
 a system has sufficient resources, it has no good reason to prefer a
 simpler theory.

 (2) To take it to mean "The simplest answer is usually the correct answer."

 This is a very different statement, which cannot be justified either
 analytically or empirically.  When theory A is an approximation of
 theory B, usually the former is simpler than the latter, but less
 correct or accurate, in terms of its relation with all available
 evidence. When we are short in resources and have a low demand on
 accuracy, we often prefer A over B, but it does not mean that by doing
 so we judge A as more correct than B.

 In summary, in choosing among alternative theories or conclusions, the
 preference for simplicity comes from shortage of resources, though
 simplicity and correctness are logically independent of each other.

 Pei








Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Matt Mahoney
--- On Tue, 10/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 What Hutter proved is (very roughly) that given massive computational 
 resources, following Occam's Razor will be -- within some possibly quite 
 large constant -- the best way to achieve goals in a computable environment...

 That's not exactly proving Occam's Razor, though it is a proof related to 
 Occam's Razor...

No, that's AIXI^tl. I was talking about AIXI. Hutter proved both.

 One could easily argue it is totally irrelevant to AI due to its assumption 
 of massive computational resources

If you mean AIXI^tl, I agree. However, it is AIXI that proves Occam's Razor. 
AIXI is useful to AGI exactly because it is provably not computable. We can stop 
looking for a neat solution.

-- Matt Mahoney, [EMAIL PROTECTED]





[agi] Occam's Razor and its abuse

2008-10-28 Thread Eric Baum

Pei Triggered by several recent discussions, I'd like to make the
Pei following position statement, though won't commit myself to long
Pei debate on it. ;-)

Pei Occam's Razor, in its original form, goes like entities must not
Pei be multiplied beyond necessity, and it is often stated as All
Pei other things being equal, the simplest solution is the best or
Pei when multiple competing theories are equal in other respects,
Pei the principle recommends selecting the theory that introduces the
Pei fewest assumptions and postulates the fewest entities --- all
Pei from http://en.wikipedia.org/wiki/Occam's_razor

Pei I fully agree with all of the above statements.

Pei However, to me, there are two common misunderstandings associated
Pei with it in the context of AGI and philosophy of science.

Pei (1) To take this statement as self-evident or a stand-alone
Pei postulate

Pei To me, it is derived or implied by the insufficiency of
Pei resources. If a system has sufficient resources, it has no good
Pei reason to prefer a simpler theory.

With all due respect, this is mistaken. 
Occam's Razor, in some form, is the heart of Generalization, which
is the essence (and G) of GI.

For example, if you study concept learning from examples,
say in the PAC learning context (related theorems
hold in some other contexts as well), 
there are theorems to the effect that if you find
a hypothesis from a simple enough class of hypotheses
it will with very high probability accurately classify new 
examples chosen from the same distribution, 

and conversely theorems that state (roughly speaking) that
any method that chooses a hypothesis from too expressive a class
of hypotheses will have a probability that can be bounded below
by some reasonable number like 1/7,
of having large error in its predictions on new examples--
in other words it is impossible to PAC learn without respecting
Occam's Razor.
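The flavor of these theorems shows up in the standard finite-class Occam bound (a textbook sketch, not taken from What is Thought?): a hypothesis consistent with the training examples has error at most epsilon with probability at least 1 - delta once m >= (1/epsilon)(ln|H| + ln(1/delta)), so restricting yourself to a simpler (smaller) class H directly cuts the number of examples needed to generalize.

```python
import math

def occam_sample_bound(hypothesis_count, epsilon, delta):
    """Standard PAC bound for a finite hypothesis class: a hypothesis
    consistent with m examples has true error <= epsilon with
    probability >= 1 - delta once
        m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    Smaller (simpler) classes need fewer examples."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# A small class vs. a very expressive one (|H| = 2^100):
print(occam_sample_bound(1000, 0.05, 0.01))      # -> 231
print(occam_sample_bound(2 ** 100, 0.05, 0.01))  # -> 1479
```

The bound grows only logarithmically in |H|, but an unrestricted class of, say, all boolean functions on n bits has |H| = 2^(2^n), which makes ln|H| itself exponential, the quantitative sense in which learning from too expressive a class fails.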

For discussion of the above paragraphs, I'd refer you to
Chapter 4 of What is Thought? (MIT Press, 2004).

In other words, if you are building some system that learns
about the world, it had better respect Occam's razor if you
want whatever it learns to apply to new experience. 
(I use the term Occam's razor loosely; using
hypotheses that are highly constrained in ways other than
just being concise may work, but you'd better respect
simplicity broadly defined. See Chap 6 of WIT? for
more discussion of this point.)

The core problem of GI is generalization: you want to be able to
figure out new problems as they come along that you haven't seen
before. In order to do that, you basically must implicitly or
explicitly employ some version
of Occam's Razor, independent of how much resources you have.

In my view, the first and most important question to ask about
any proposal for AGI is, in what way is it going to produce
Occam hypotheses. If you can't answer that, don't bother implementing
a huge system in hopes of capturing your many insights, because
the bigger your implementation gets, the less likely it is to 
get where you want in the end.




RE: [agi] Occam's Razor and its abuse

2008-10-28 Thread Ed Porter
===Below Ben wrote===
I suspect that the fact that biological organisms grow
via the same sorts of processes as the biological environment in which
they live, causes the organisms' minds to be built with **a lot** of implicit
bias that is useful for surviving in the environment...
 
===My Response==
Au Similaire.  That was  one of the points I was trying to make!   And that
arguably supports at least part of what Pei was arguing.
 
I am not arguing it is all you need.  You at least need some mechanism for
exploring at least some subspace of possible priors, but you don't
need any specific pre-selected set of priors.
 
Ed Porter
 
 
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 28, 2008 5:50 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Occam's Razor and its abuse




Au contraire, I suspect that the fact that biological organisms grow
via the same sorts of processes as the biological environment in which
they live, causes the organisms' minds to be built with **a lot** of implicit
bias that is useful for surviving in the environment...

Some have argued that this kind of bias is **all you need** for evolution...
see Evolution without Selection by A. Lima de Faria.  I think that is
wrong, but it's interesting that there's enough evidence to even try to
make the argument...

ben g


On Tue, Oct 28, 2008 at 2:37 PM, Ed Porter [EMAIL PROTECTED] wrote:


It appears to me that the assumptions about initial priors used by a self
learning AGI or an evolutionary line of AGI's could be quite minimal.

My understanding is that once a probability distribution starts receiving
random samples from its distribution the effect of the original prior
becomes rapidly lost, unless it is a rather rare one.  Such rare problem
priors would get selected against quickly by evolution.  Evolution would
tend to tune for the most appropriate priors for the success of subsequent
generations (either by continued computing in the same system, if it is capable
of enough change, or in descendant systems).  Probably the best priors would
generally be ones that could be trained moderately rapidly by data.
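The prior-washout claim is easy to sketch with a Beta-Bernoulli model (the numbers below are invented for illustration, not anything specified in the thread): two sharply different priors end up within a fraction of a percent of each other after a thousand observations.

```python
import random

random.seed(0)
true_p = 0.7  # unknown bias being estimated

# Two very different Beta(alpha, beta) priors over the bias.
priors = {"optimistic": (9.0, 1.0), "pessimistic": (1.0, 9.0)}

# Observe 1000 samples from the true distribution.
heads = sum(random.random() < true_p for _ in range(1000))

# Conjugate update: posterior mean = (alpha + heads) / (alpha + beta + n).
posterior_means = {
    name: (a + heads) / (a + b + 1000) for name, (a, b) in priors.items()
}
print(posterior_means)
```

Both posterior means land near the true bias; their difference is exactly 8/1010, under one percent, regardless of what the data turned out to be, which is the sense in which the original prior is rapidly washed out by samples.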

So it seems an evolutionary system or line could initially learn priors
without any assumptions for priors other than a random picking of priors.
Over time and multiple generations it might develop hereditary priors, and
perhaps even different hereditary priors for parts of its network connected
to different inputs, outputs or internal controls.

The use of priors in an AGI could be greatly improved by having a gen/comp
hierarchy in which models for a given concept could be inherited from the
priors of sets of models for similar concepts, and that the set of priors
appropriate could change contextually.  It would also seem that the notion
of a prior could be improved by blending information from episodic and
probabilistic models.

It would appear that in almost any generally intelligent system, being able
to approximate reality in a manner sufficient for evolutionary success with
the most efficient representations would be a characteristic that would be
greatly preferred by evolution, because it would allow systems to better
model more of their environment sufficiently well for evolutionary success
with whatever current modeling capacity they have.

So, although a completely accurate description of virtually anything may not
find much use for Occam's Razor, as a practically useful representation it
often will.  It seems to me that Occam's Razor is more oriented to deriving
meaningful generalizations than it is to exact descriptions of anything.

Furthermore, it would seem to me that a simpler set of preconditions is
generally more probable than a more complex one, because it requires less
coincidence.  It would seem to me this would be true under most random sets
of priors for the probabilities of the possible sets of components involved
and Occam's Razor type selection.

These are the musings of an untrained mind, since I have not spent much time
studying philosophy, because such a high percent of it was so obviously
stupid (such as what was commonly said when I was young, that you can't have
intelligence without language) and my understanding of math is much less
than that of many on this list.  But nonetheless I think much of what I
have said above is true.

I think its gist is not totally dissimilar to what Abram has said.

Ed Porter





-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 28, 2008 3:05 PM
To: agi@v2.listbox.com

Subject: Re: [agi] Occam's Razor and its abuse


Abram,

I agree with your basic idea in the following, though I usually put it in
different form.

Pei

On Tue, Oct 28, 2008 at 2:52 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Ben,

 You assert that Pei is forced to make an assumption about the
 regularity of the world to justify adaptation. Pei could also take a
 different argument. He could try to show that 

Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Mike Tintner

Eric:The core problem of GI is generalization: you want to be able to
figure out new problems as they come along that you haven't seen
before. In order to do that, you basically must implicitly or
explicitly employ some version
of Occam's Razor

It all depends on the subject matter of the generalization. It's a fairly 
good principle, but there is such a thing as simple-mindedness. For example, 
what is the cluster of associations evoked in the human brain by any given 
idea, and what is the principle [or principles] that determines how many 
associations in how many domains and how many brain areas? The answers to 
these questions are unlikely to be simple. IOW if the subject matter is 
complex, the generalization may also have to be complex. 







Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Pei Wang
Ed,

Since NARS doesn't follow the Bayesian approach, there are no "initial
priors" to be assumed. If we use a more general term, such as "initial
knowledge" or "innate beliefs", then yes, you can add them into the
system, which will improve the system's performance. However, they are
optional. In NARS, all object-level (i.e., not meta-level) innate
beliefs can be learned by the system afterward.

Pei

On Tue, Oct 28, 2008 at 5:37 PM, Ed Porter [EMAIL PROTECTED] wrote:
 It appears to me that the assumptions about initial priors used by a self
 learning AGI or an evolutionary line of AGI's could be quite minimal.

 My understanding is that once a probability distribution starts receiving
 random samples from its distribution the effect of the original prior
 becomes rapidly lost, unless it is a rather rare one.  Such rare problem
 priors would get selected against quickly by evolution.  Evolution would
 tend to tune for the most appropriate priors for the success of subsequent
 generations (either by continued computing in the same system, if it is capable
 of enough change, or in descendant systems).  Probably the best priors would
 generally be ones that could be trained moderately rapidly by data.

 So it seems an evolutionary system or line could initially learn priors
 without any assumptions for priors other than a random picking of priors.
 Over time and multiple generations it might develop hereditary priors, and
 perhaps even different hereditary priors for parts of its network connected
 to different inputs, outputs or internal controls.

 The use of priors in an AGI could be greatly improved by having a gen/comp
 hierarchy in which models for a given concept could be inherited from the
 priors of sets of models for similar concepts, and that the set of priors
 appropriate could change contextually.  It would also seem that the notion
 of a prior could be improved by blending information from episodic and
 probabilistic models.

 It would appear that in almost any generally intelligent system, being able
 to approximate reality in a manner sufficient for evolutionary success with
 the most efficient representations would be a characteristic that would be
 greatly preferred by evolution, because it would allow systems to better
 model more of their environment sufficiently well for evolutionary success
 with whatever current modeling capacity they have.

 So, although a completely accurate description of virtually anything may not
 find much use for Occam's Razor, as a practically useful representation it
 often will.  It seems to me that Occam's Razor is more oriented to deriving
 meaningful generalizations than it is to exact descriptions of anything.

 Furthermore, it would seem to me that a simpler set of preconditions is
 generally more probable than a more complex one, because it requires less
 coincidence.  It would seem to me this would be true under most random sets
 of priors for the probabilities of the possible sets of components involved
 and Occam's Razor type selection.

 These are the musings of an untrained mind, since I have not spent much time
 studying philosophy, because such a high percent of it was so obviously
 stupid (such as what was commonly said when I was young, that you can't have
 intelligence without language) and my understanding of math is much less
 than that of many on this list.  But nonetheless I think much of what I
 have said above is true.

 I think its gist is not totally dissimilar to what Abram has said.

 Ed Porter




 -Original Message-
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 28, 2008 3:05 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Occam's Razor and its abuse


 Abram,

 I agree with your basic idea in the following, though I usually put it in
 different form.

 Pei

 On Tue, Oct 28, 2008 at 2:52 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Ben,

 You assert that Pei is forced to make an assumption about the
 regularity of the world to justify adaptation. Pei could also take a
 different argument. He could try to show that *if* a strategy exists
 that can be implemented given the finite resources, NARS will
 eventually find it. Thus, adaptation is justified on a sort of "we
 might as well try" basis. (The proof would involve showing that NARS
 searches the space of finite-state machines that can be implemented
 with the resources at hand, and is more probable to stay for longer
 periods of time in configurations that give more reward, such that
 NARS would eventually settle on a configuration if that configuration
 consistently gave the highest reward.)

 So, some form of learning can take place with no assumptions. The
 problem is that the search space is exponential in the resources
 available, so there is some maximum point where the system would
 perform best (because the amount of resources match the problem), but
 giving the system more resources would hurt performance (because the
 system searches the unnecessarily large search 

Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Pei Wang
Eric,

I highly respect your work, though we clearly have different opinions
on what intelligence is, as well as on how to achieve it. For example,
though learning and generalization play central roles in my theory
about intelligence, I don't think PAC learning (or the other learning
algorithms proposed so far) provides a proper conceptual framework for
the typical situation of this process. Generally speaking, I'm not
"building some system that learns about the world", in the sense that
there is a correct way to describe the world waiting to be discovered,
which can be captured by some algorithm. Instead, learning to me is a
non-algorithmic open-ended process by which the system summarizes its
own experience, and uses it to predict the future. I fully understand
that most people in this field probably consider this opinion wrong,
though I haven't been convinced yet by the arguments I've seen so far.

Instead of addressing all of the relevant issues, in this discussion I
have a very limited goal. To rephrase what I said initially, I see
that under the term "Occam's Razor", currently there are three
different statements:

(1) Simplicity (in conclusions, hypothesis, theories, etc.) is preferred.

(2) The preference to simplicity does not need a reason or justification.

(3) Simplicity is preferred because it is correlated with correctness.

I agree with (1), but not (2) and (3). I know many people have
different opinions, and I don't attempt to argue with them here ---
these problems are too complicated to be settled by email exchanges.

However, I do hope to convince people in this discussion that the
three statements are not logically equivalent, and (2) and (3) are not
implied by (1), so using "Occam's Razor" to refer to all of them is
not a good idea, because it is going to mix different issues.
Therefore, I suggest people use "Occam's Razor" in its original and
basic sense, that is (1), and use other terms to refer to (2) and
(3). Otherwise, when people talk about "Occam's Razor", I just don't
know what to say.

Pei

On Tue, Oct 28, 2008 at 8:09 PM, Eric Baum [EMAIL PROTECTED] wrote:

 Pei Triggered by several recent discussions, I'd like to make the
 Pei following position statement, though won't commit myself to long
 Pei debate on it. ;-)

 Pei Occam's Razor, in its original form, goes like entities must not
 Pei be multiplied beyond necessity, and it is often stated as All
 Pei other things being equal, the simplest solution is the best or
 Pei when multiple competing theories are equal in other respects,
 Pei the principle recommends selecting the theory that introduces the
 Pei fewest assumptions and postulates the fewest entities --- all
 Pei from http://en.wikipedia.org/wiki/Occam's_razor

 Pei I fully agree with all of the above statements.

 Pei However, to me, there are two common misunderstandings associated
 Pei with it in the context of AGI and philosophy of science.

 Pei (1) To take this statement as self-evident or a stand-alone
 Pei postulate

 Pei To me, it is derived or implied by the insufficiency of
 Pei resources. If a system has sufficient resources, it has no good
 Pei reason to prefer a simpler theory.

 With all due respect, this is mistaken.
 Occam's Razor, in some form, is the heart of Generalization, which
 is the essence (and G) of GI.

 For example, if you study concept learning from examples,
 say in the PAC learning context (related theorems
 hold in some other contexts as well),
 there are theorems to the effect that if you find
 a hypothesis from a simple enough class of hypotheses
 it will with very high probability accurately classify new
 examples chosen from the same distribution,

 and conversely theorems that state (roughly speaking) that
 any method that chooses a hypothesis from too expressive a class
 of hypotheses will have a probability that can be bounded below
 by some reasonable number like 1/7,
 of having large error in its predictions on new examples--
 in other words it is impossible to PAC learn without respecting
 Occam's Razor.

 For discussion of the above paragraphs, I'd refer you to
 Chapter 4 of What is Thought? (MIT Press, 2004).

 In other words, if you are building some system that learns
 about the world, it had better respect Occam's razor if you
 want whatever it learns to apply to new experience.
 (I use the term Occam's razor loosely; using
 hypotheses that are highly constrained in ways other than
 just being concise may work, but you'd better respect
 simplicity broadly defined. See Chap 6 of WIT? for
 more discussion of this point.)

 The core problem of GI is generalization: you want to be able to
 figure out new problems as they come along that you haven't seen
 before. In order to do that, you basically must implicitly or
 explicitly employ some version
 of Occam's Razor, independent of how much resources you have.

 In my view, the first and most important question to ask about
 any proposal for AGI is, in what way is it going to 

Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Charles Hixson
If not verify, what about falsify?  To me Occam's Razor has always been 
seen as a tool for selecting the first argument to attempt to falsify.  
If you can't, or haven't, falsified it, then it's usually the best 
assumption to go on (presuming that the costs of failing are evenly 
distributed).


OTOH, Occam's Razor clearly isn't quantitative, and it doesn't always 
pick the right answer, just one that's good enough based on what we 
know at the moment.  (Again presuming evenly distributed costs of failure.)


(And actually that's an oversimplification.  I've been considering the 
costs of applying the presumption of the theory chosen by Occam's Razor 
to be equal to or lower than the costs of the alternatives.  Whoops!  
The simplest workable approach isn't always the cheapest, and given that 
all non-falsified-as-of-now approaches have closely equal 
plausibility...perhaps one should instead choose the cheapest to presume 
of all theories that have been vetted against current knowledge.)
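The selection rule being described can be sketched in a few lines (the theory names and costs below are invented purely for illustration): filter out falsified theories, then presume the cheapest survivor rather than the simplest one.

```python
# Hypothetical candidate theories: (name, falsified?, cost to presume).
theories = [
    ("simplest",             False, 10.0),
    ("more-complex-cheaper", False,  3.0),
    ("falsified-theory",     True,   1.0),
]

# Vet against current knowledge: drop anything already falsified...
vetted = [t for t in theories if not t[1]]

# ...then presume the cheapest of what remains, not the simplest.
choice = min(vetted, key=lambda t: t[2])
print(choice[0])  # -> more-complex-cheaper
```

The design point is that simplicity enters only through the falsification step (simple theories are the first candidates to test), while the final presumption is driven by cost.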


Occam's Razor is fine for its original purposes, but when you try to 
apply it to practical rather than logical problems then you start 
needing to evaluate relative costs.  Both costs of presuming and costs 
of failure.  And actually often it turns out that a solution based on a 
theory known to be incorrect (e.g. Newton's laws) is good enough, so 
you don't need to decide about the correct answer.  NASA uses Newton, 
not Einstein, even though Einstein might be correct and Newton is known 
to be wrong.


Pei Wang wrote:

Ben,

It seems that you agree the issue I pointed out really exists, but
just take it as a "necessary evil". Furthermore, you think I also
assumed the same thing, though I failed to see it. I won't argue
against the "necessary evil" part --- as far as you agree that those
postulates (such as "the universe is computable") are not
convincingly justified. I won't try to disprove them.

As for the latter part, I don't think you can convince me that you
know me better than I know myself. ;-)

The following is from
http://nars.wang.googlepages.com/wang.semantics.pdf , page 28:

If the answers provided by NARS are fallible, in what sense these answers are
better than arbitrary guesses? This leads us to the concept of rationality.
When infallible predictions cannot be obtained (due to insufficient knowledge
and resources), answers based on past experience are better than arbitrary
guesses, if the environment is relatively stable. To say an answer is only a
summary of past experience (thus no future confirmation guaranteed) does
not make it equal to an arbitrary conclusion — it is what adaptation means.
Adaptation is the process in which a system changes its behaviors as if the
future is similar to the past. It is a rational process, even though individual
conclusions it produces are often wrong. For this reason, valid inference rules
(deduction, induction, abduction, and so on) are the ones whose conclusions
correctly (according to the semantics) summarize the evidence in the premises.
They are truth-preserving in this sense, not in the model-theoretic sense that
they always generate conclusions which are immune from future revision.

--- so you see, I don't assume adaptation will always be successful,
even successful to a certain probability. You can dislike this
conclusion, though you cannot say it is the same as what is assumed by
Novamente and AIXI.

Pei

On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
  

On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang [EMAIL PROTECTED] wrote:


Ben,

Thanks. So the other people now see that I'm not attacking a straw man.

My solution to Hume's problem, as embedded in the experience-grounded
semantics, is to assume no predictability, but to justify induction as
adaptation. However, it is a separate topic which I've explained in my
other publications.
  

Right, but justifying induction as adaptation only works if the environment
is assumed to have certain regularities which can be adapted to.  In a
random environment, adaptation won't work.  So, still, to justify induction
as adaptation you have to make *some* assumptions about the world.

The Occam prior gives one such assumption: that (to give just one form) sets
of observations in the world tend to be producible by short computer
programs.
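The Occam prior Ben refers to can be sketched as follows (a simplified, finite-hypothesis illustration; the theory names and bit lengths are made up for the example):

```python
def occam_weight(length_bits):
    """Occam prior: a hypothesis whose shortest program is L bits long
    gets prior weight 2**-L, so shorter explanations dominate."""
    return 2.0 ** -length_bits

# Hypothetical description lengths (in bits) for two competing theories.
lengths = {"simple_theory": 10, "baroque_theory": 30}
weights = {h: occam_weight(L) for h, L in lengths.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}
print(posterior)  # the 10-bit theory gets 2**20 times the weight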

For adaptation to successfully carry out induction, *some* vaguely
comparable property to this must hold, and I'm not sure if you have
articulated which one you assume, or if you leave this open.

In effect, you implicitly assume something like an Occam prior, because
you're saying that  a system with finite resources can successfully adapt to
the world ... which means that sets of observations in the world *must* be
approximately summarizable via subprograms that can be executed within this
system.
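The claim that adaptable observation sets must be approximately summarizable by short programs can be illustrated empirically, using an off-the-shelf compressor as a crude stand-in for "short program" (a sketch, not part of Ben's argument):

```python
import zlib
import random

# Regular observations: a short program ("repeat 'ab' 500 times") summarizes them.
regular = b"ab" * 500
# Patternless observations: no summary much shorter than the data itself.
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1000))

print(len(zlib.compress(regular)))  # compresses to a tiny fraction of 1000
print(len(zlib.compress(noise)))    # barely compresses at all
```

A finite system can adapt to the first kind of world but not the second, which is the sense in which adaptation presupposes regularity.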

So I argue that, even though it's not your preferred way to think about it,
your own approach to AI theory and practice implicitly assumes some variant
of the 

Re: [agi] constructivist issues

2008-10-28 Thread Charles Hixson
Excuse me, but I thought there were subsets of Number theory which were 
strong enough to contain all the integers, and perhaps all the rationals, 
but which weren't strong enough for Gödel's incompleteness theorem to 
apply.  I seem to remember, though, that you can't get more than a finite 
number of irrationals in such a theory.  And I think that there are 
limitations on what operators can be defined.


Still, depending on what you mean by Number, that would seem to mean 
that Number was well-defined.  Just not in Number Theory, but that's 
because Number Theory itself wasn't well-defined.


Abram Demski wrote:

Mark,

That is thanks to Gödel's incompleteness theorem. Any consistent formal
system strong enough to describe numbers is doomed to be incomplete,
meaning there will be statements that can be constructed purely by
reference to numbers (no red cats!) that the system will fail to prove
either true or false.
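Gödel's construction rests on the fact that formulas can themselves be encoded as natural numbers, so statements about numbers can indirectly talk about the proof system. A minimal sketch of such an encoding (any injective map into the naturals works; this byte-based one is just an illustration):

```python
def encode(formula: str) -> int:
    """Map a formula (a string) to a unique natural number."""
    return int.from_bytes(formula.encode("utf-8"), "big")

def decode(n: int) -> str:
    """Recover the formula from its Gödel number."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

g = encode("0 = 0")
print(g)          # an ordinary natural number
print(decode(g))  # the formula it stands for
```

Once formulas are numbers, "this statement is unprovable" becomes a statement about numbers, which is where the red cats drop out and the incompleteness comes in.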

So my question is, do you interpret this as meaning "Numbers are not
well-defined and can never be" (constructivist), or do you interpret
this as "It is impossible to pack all true information about numbers
into an axiom system" (classical)?

Hmm... By the way, I might not be using the term "constructivist" in
a way that all constructivists would agree with. I think
"intuitionist" (a specific type of constructivist) would be a better
term for the view I'm referring to.

--Abram Demski

On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
  

Numbers can be fully defined in the classical sense, but not in the
constructivist sense. So, when you say "fully defined question", do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

How (or why) are numbers not fully defined in a constructivist sense?

(I was about to ask you whether or not you had answered your own question
until that caught my eye on the second or third read-through).


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-28 Thread Abram Demski
Charles,

Interesting point -- but all of these theories would be weaker than
the standard axioms, and so there would be *even more* about numbers
left undefined in them.

--Abram

On Tue, Oct 28, 2008 at 10:46 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
 Excuse me, but I thought there were subsets of Number theory which were
 strong enough to contain all the integers, and perhaps all the rationals, but
 which weren't strong enough for Gödel's incompleteness theorem to apply.  I
 seem to remember, though, that you can't get more than a finite number of
 irrationals in such a theory.  And I think that there are limitations on
 what operators can be defined.

 Still, depending on what you mean by Number, that would seem to mean that
 Number was well-defined.  Just not in Number Theory, but that's because
 Number Theory itself wasn't well-defined.

