[agi] virtual credits again

2008-10-29 Thread YKY (Yan King Yin)
Hi Ben and others,

After some more thinking, I have decided to try the virtual credit approach after all.

Last time, Ben's argument was that the virtual credit method confuses
for-profit and charitable motivations in people.  At the time it sounded
convincing, but after some more thought I realized that it is simply
untrue.  My approach is unequivocally for-profit, and Ben's accusation
applies more aptly to OpenCog's own stance.  I'm afraid OpenCog has some
ethical problems from straddling for-profit and charity.  For example:
why do you need funding to do charity?  If you want to do charity, why
not do it out of your own pocket?  Why use a dual license if the final
product is supposed to be free for all?  And so on.

It is good for a company to be charitable, but you're forcing me to do
charity when I am having financial problems myself.  Your charity
victimizes me and other people trying to make money in the AGI
business.

I can understand why you dislike my approach:  you have contributed to
AGI in many intangible ways, such as organizing conferences and
increasing public awareness of AGI, and I respect you for these
efforts.  Under the virtual credit system it would be very difficult --
though not impossible -- to assign credits to you; but then if you tried
to claim too many credits you'd start to look like a Shylock, and that
could be very embarrassing.  Secondly, there may be other people on the
OpenCog devel team who dislike virtual credits for their own reasons,
and you may want to placate them.

So either we confront the embarrassing problem and try to assign ex
post facto credits, or we keep our projects separate.  The world may be
able to accommodate two or more AGIs (that may actually be a healthy
thing, from a complex-systems perspective).  I don't suppose my virtual
credit approach can satisfy all AGI developers, but neither can your
approach (under which I cannot get any guarantee of financial reward).

I'm open to other suggestions, but if there aren't any, I'll proceed
with virtual credits.  I expect some people will like it and some will
hate it; that is only natural.  At least I'm honest about my motives.

PS.  The argument that AGI should be free because it is such an
important technology applies equally to many other technologies,
such as medicine and (later) life extension or uploading.  It can even
apply to things like food, housing, citizenship, computer hardware,
etc.  In the end I think we need to admit that the sensible path lies
somewhere between charity and for-profit, and my project aims to be
charitable in its own way too.  The only difference between my approach
and OpenCog's is that I want to make the accounting of contributions
transparent and to reward contributors financially, while being
charitable in other ways that depend on how much profit we make.
(Making the software open source is already very charitable, and we may
not be able to make that much money at all.)

YKY
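
A minimal sketch of the bookkeeping YKY describes -- transparent
per-contributor credits, with payouts proportional to credit share -- is
given below in Python.  The class name, the proportional-payout rule and
the example figures are illustrative assumptions, not part of YKY's
actual proposal:

    # A minimal virtual-credit ledger (illustrative sketch only; the
    # proportional-payout rule is an assumption, not YKY's specification).
    from collections import defaultdict

    class CreditLedger:
        def __init__(self):
            # contributor name -> accumulated credits
            self.credits = defaultdict(float)

        def award(self, contributor, amount):
            """Record credits for a contribution (code, docs, organizing, ...)."""
            if amount <= 0:
                raise ValueError("credit amount must be positive")
            self.credits[contributor] += amount

        def shares(self):
            """Each contributor's fraction of all credits issued so far."""
            total = sum(self.credits.values())
            return {c: v / total for c, v in self.credits.items()} if total else {}

        def payout(self, profit):
            """Split a profit figure in proportion to credit shares."""
            return {c: round(profit * s, 2) for c, s in self.shares().items()}

    ledger = CreditLedger()
    ledger.award("alice", 120.0)   # e.g. an inference-engine module
    ledger.award("bob", 30.0)      # e.g. documentation work
    print(ledger.payout(10000.0))  # {'alice': 8000.0, 'bob': 2000.0}

The hard part YKY alludes to -- deciding how many credits a given
contribution (especially an intangible one) is worth -- is exactly what
such code does not solve.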




Re: [agi] Cloud Intelligence

2008-10-29 Thread Samantha Atkins

John G. Rose wrote:


Has anyone done some analysis on cloud computing, in particular the 
recent trend and coming out of clouds with multiple startup efforts in 
this space? And their relationship to AGI type applications?


 


Or is this phenomenon just geared to web server farm resource grouping?

 

I suppose that it is worth delving into... at least evaluating.  But my 
first thought is that the hardware nodes have interrelationships that 
require compatibility layers for service offerings, versus custom 
clusters hand-tweaked for app-specific (AGI, in this case) optimizations 
and catering.


 

From playing around a little in the Amazon cloud, you can do anything 
you can do on a standard TCP/IP network of off-the-shelf boxes.   
Granted, you can't hook up a faster network as you certainly could in 
your own cluster, but it still seems pretty intriguing.


What happens over time, though, is that the generalized cloud 
substrate, built for software and competitive efficiencies, eventually 
comes close to or exceeds the abilities of the hand-developed and 
hand-tweaked cluster.  That is the problem - determining whether to 
wait, pay, or develop a custom solution.


Well, most of us have no choice but to do whatever we can, as soon as we 
can, on top of free/cheap but relatively plentiful resources.


 

Isn't software development annoying because of this?  Big guys like MS 
have the oomph to shrug off the little guys using their development 
resource power.  Sometimes the only choice is to eat dust and like it. 
Suck up the dust; its nutritional silicon value is there; feed off of 
it, the perpetuity of a naked quartz lunch.


 

Actually, I think software is very exciting, and have for 30 years, 
because the little guy can and often does come up with something on a 
relative shoestring that blows MS out of the water in some market it 
often didn't even see coming.


- samantha
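
Samantha's point -- that a set of cloud instances behaves like any
standard TCP/IP network of off-the-shelf boxes -- can be illustrated
with plain socket code.  The hostnames, port and "protocol" below are
placeholders invented for the example, not a real deployment:

    # Coordinating worker nodes over plain TCP works the same way whether
    # they are EC2 instances or machines in your own rack.
    import socket

    def send_task(host, port, payload, timeout=5.0):
        """Connect to a worker node, send it a task string, return its reply."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(payload.encode("utf-8"))
            return sock.recv(4096).decode("utf-8")

    workers = ["node-1.example.internal", "node-2.example.internal"]
    for w in workers:
        print(w, send_task(w, 9999, "run-inference-batch-42"))

The faster-interconnect caveat above still applies: nothing in such code
changes the latency or bandwidth between the nodes.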






Re: [agi] Cloud Intelligence

2008-10-29 Thread Bob Mottram
2008/10/29 Samantha Atkins [EMAIL PROTECTED]:
 John G. Rose wrote:
 Has anyone done some analysis on cloud computing, in particular the recent
 trend and coming out of clouds with multiple startup efforts in this space?
 And their relationship to AGI type applications?



Beware of putting too much stuff into the cloud.  Especially in the
current economic climate, clouds could disappear without notice (i.e.
unrecoverable data loss).  Also, depending upon the terms and conditions,
any data which you put into the cloud may not legally be owned by you,
even if you created it.




Re: [agi] virtual credits again

2008-10-29 Thread Trent Waddington
On Wed, Oct 29, 2008 at 4:04 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Last time, Ben's argument was that the virtual credit method confuses
 for-profit and charitable motivations in people.  At the time it sounded
 convincing, but after some more thought I realized that it is simply
 untrue.

Don't forget my argument..

You're a gas bag and don't know what you're talking about.. so you'll
never make any money and your virtual credits (hint: credit is already
virtual) are just worthless stupidity.

Trent




Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
 However, it does seem clear that the integers (for instance) is not an 
 entity with *scientific* meaning, if you accept my formalization of science 
 in the blog entry I recently posted...

Huh?  Integers are a class (which I would argue is an entity) that I would 
argue is well-defined and useful in science.  What is meaning if not 
well-defined and useful?  I need to go back to your paper because I didn't get 
that out of it at all.


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 6:41 PM
  Subject: Re: [agi] constructivist issues



  well-defined is not well-defined in my view...

  However, it does seem clear that the integers (for instance) is not an 
entity with *scientific* meaning, if you accept my formalization of science in 
the blog entry I recently posted...




  On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...

Oh.  Ick!  My bad phrasing.  WITH RESPECT TO NUMBERS should have been WITH 
RESPECT TO THE DEFINITION OF NUMBERS, since I was responding to "Numbers are not 
well-defined and can never be."  Further, I should not have said "information 
about numbers" when I meant "definition of numbers" -- two radically different 
things.  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question: "So my question is, do you 
interpret this as meaning 'Numbers are not well-defined and can never be' 
(constructivist), or do you interpret this as 'It is impossible to pack all 
true information about numbers into an axiom system' (classical)?"

Does the statement that a formal system is incomplete with respect to 
statements about numbers mean that numbers are not well-defined and can never 
be?
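
For reference, the textbook (Godel-Rosser) form of the incompleteness
result under discussion -- stated here for clarity, not quoted from
anyone in the thread -- is:

    \[
      T \supseteq \mathsf{PA},\ \ T \text{ consistent and recursively axiomatizable}
      \;\Longrightarrow\;
      \exists\, G_T :\ T \nvdash G_T \ \text{ and } \ T \nvdash \lnot G_T .
    \]

The theorem says only that T proves neither G_T nor its negation; by
itself it says nothing about whether numbers are or are not
"well-defined", which is the distinction being pressed here.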

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as in 
Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me 
(http://en.wikipedia.org/wiki/Constructivism_(mathematics)).  All I can say is 
Ick!  I emphatically do not believe that "when one assumes that an object does 
not exist and derives a contradiction from that assumption, one still has not 
found the object and therefore has not proved its existence."



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your argument 
as to why uncomputable entities are useless for science.  I'm going to need to 
go back over it a few more times though. :-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...


  On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote:

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete



Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  
It is not true that any formal system is doomed to be incomplete WITH RESPECT 
TO NUMBERS.

It is entirely possible (nay, almost certain) that there is a larger 
system where the information about numbers is complete but that the other 
things that the system describes are incomplete. 



  So my question is, do you interpret this as meaning Numbers are not
  well-defined and can never be (constructivist), or do you interpret
  this as It is impossible to pack all true information about numbers
  into an axiom system (classical)?



Hmmm.  From a larger reference framework, the former 
claimed-to-be-constructivist view isn't true/correct because it clearly *is* 
possible that numbers may be well-defined within a larger system (i.e. the can 
never be is incorrect).

Does that mean that I'm a classicist or that you are mis-interpreting 
constructivism (because you're attributing a provably false statement to 
constructivists)?  I'm leaning towards the latter currently.  ;-) 


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com

Sent: Tuesday, October 28, 2008 5:02 PM 

Subject: Re: [agi] constructivist issues



  Mark,

  That is thanks to Godel's incompleteness theorem. Any formal system
  that 

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Mark Waser

(1) Simplicity (in conclusions, hypotheses, theories, etc.) is preferred.
(2) The preference for simplicity does not need a reason or justification.
(3) Simplicity is preferred because it is correlated with correctness.
I agree with (1), but not (2) and (3).


I concur, but would add: (4) Simplicity is preferred because it is 
correlated with correctness *of implementation* (or with ease of implementing 
correctly :-)



- Original Message - 
From: Pei Wang [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 10:15 PM
Subject: Re: [agi] Occam's Razor and its abuse



Eric,

I highly respect your work, though we clearly have different opinions
on what intelligence is, as well as on how to achieve it. For example,
though learning and generalization play central roles in my theory
about intelligence, I don't think PAC learning (or the other learning
algorithms proposed so far) provides a proper conceptual framework for
the typical situation of this process. Generally speaking, I'm not
building some system that learns about the world, in the sense that
there is a correct way to describe the world waiting to be discovered,
which can be captured by some algorithm. Instead, learning to me is a
non-algorithmic open-ended process by which the system summarizes its
own experience, and uses it to predict the future. I fully understand
that most people in this field probably consider this opinion wrong,
though I haven't been convinced yet by the arguments I've seen so far.

Instead of addressing all of the relevant issues, in this discussion I
have a very limited goal. To rephrase what I said initially, I see
that under the term Occam's Razor, currently there are three
different statements:

(1) Simplicity (in conclusions, hypotheses, theories, etc.) is preferred.

(2) The preference for simplicity does not need a reason or justification.

(3) Simplicity is preferred because it is correlated with correctness.

I agree with (1), but not (2) and (3). I know many people have
different opinions, and I don't attempt to argue with them here ---
these problems are too complicated to be settled by email exchanges.

However, I do hope to convince people in this discussion that the
three statements are not logically equivalent, and (2) and (3) are not
implied by (1), so to use Occam's Razor to refer to all of them is
not a good idea, because it is going to mix different issues.
Therefore, I suggest that people use Occam's Razor in its original and
basic sense, that is (1), and use other terms to refer to (2) and
(3). Otherwise, when people talk about Occam's Razor, I just don't
know what to say.

Pei

On Tue, Oct 28, 2008 at 8:09 PM, Eric Baum [EMAIL PROTECTED] wrote:


Pei Triggered by several recent discussions, I'd like to make the
Pei following position statement, though won't commit myself to long
Pei debate on it. ;-)

Pei Occam's Razor, in its original form, goes like entities must not
Pei be multiplied beyond necessity, and it is often stated as All
Pei other things being equal, the simplest solution is the best or
Pei when multiple competing theories are equal in other respects,
Pei the principle recommends selecting the theory that introduces the
Pei fewest assumptions and postulates the fewest entities --- all
Pei from http://en.wikipedia.org/wiki/Occam's_razor

Pei I fully agree with all of the above statements.

Pei However, to me, there are two common misunderstandings associated
Pei with it in the context of AGI and philosophy of science.

Pei (1) To take this statement as self-evident or a stand-alone
Pei postulate

Pei To me, it is derived or implied by the insufficiency of
Pei resources. If a system has sufficient resources, it has no good
Pei reason to prefer a simpler theory.

With all due respect, this is mistaken.
Occam's Razor, in some form, is the heart of Generalization, which
is the essence (and G) of GI.

For example, if you study concept learning from examples,
say in the PAC learning context (related theorems
hold in some other contexts as well),
there are theorems to the effect that if you find
a hypothesis from a simple enough class of hypotheses,
it will with very high probability accurately classify new
examples chosen from the same distribution,

and conversely theorems that state (roughly speaking) that
any method that chooses a hypothesis from too expressive a class
of hypotheses will have a probability, bounded below
by some reasonable number like 1/7,
of having large error in its predictions on new examples --
in other words, it is impossible to PAC-learn without respecting
Occam's Razor.
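
The finite-hypothesis-class version of the first kind of theorem Eric
mentions (the standard textbook Occam bound, not a quotation from What
is Thought?) says that for m i.i.d. training examples, a finite class
H, and confidence parameter \delta:

    \[
      \Pr\!\left[\ \operatorname{err}(h) \;\le\; \frac{1}{m}\Big(\ln\lvert H\rvert
        + \ln\tfrac{1}{\delta}\Big)\ \text{ for every } h \in H
        \text{ consistent with the sample}\ \right] \;\ge\; 1-\delta .
    \]

Smaller, i.e. simpler, hypothesis classes give tighter error guarantees,
which is the sense in which generalization forces a preference for
simplicity.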

For discussion of the above paragraphs, I'd refer you to
Chapter 4 of What is Thought? (MIT Press, 2004).

In other words, if you are building some system that learns
about the world, it had better respect Occam's razor if you
want whatever it learns to apply to new experience.
(I use the term Occam's razor loosely; using
hypotheses that are highly constrained in 

Re: [agi] virtual credits again

2008-10-29 Thread YKY (Yan King Yin)
On Wed, Oct 29, 2008 at 6:34 PM, Trent Waddington

 Don't forget my argument..

I don't recall hearing an argument from you.  All your replies to me
have been rather rude one-liners.

YKY




RE: [agi] virtual credits again

2008-10-29 Thread Benjamin Johnston
  I don't recall hearing an argument from you.  All your replies 
  to me are rather rude one liners.

 As opposed to everyone else, who either doesn't reply to you or 
 humors you.

 Get over yourself.

 Trent

Hi Trent,

Your last two emails to YKY were rude and unhelpful. If you felt a burning
desire to express yourself rudely, you could have done so by emailing him
privately.

Even though I do not personally agree with YKY's approaches and theories,
he is one of the few regulars on this list that I make a point of
reading. He actually uses this list to discuss genuine technical matters, to
seek genuine feedback on draft papers of his ideas, to ask for technical
clarification about current published research in the area, to discuss
practical questions relating to the development of real AGI, and to listen
and respond to comments and criticism.

YKY's contributions to this list generally appear to be closer in spirit to
the list's purpose (i.e., more technical discussions about current AGI
projects) than much of the talk here (which often devolves into repetitive
and shallow philosophy, name-calling and rude one-liners). 

I think this list would be a better place if there were more people here
like YKY.

This is already off topic for this mailing list, so if you would like to
discuss it further please feel free to email me directly.

Sincerely,

-Benjamin Johnston






META: ad hominem attacks WAS Re: [agi] virtual credits again

2008-10-29 Thread Ben Goertzel
Trent,

A comment in my role as list administrator:
Let's keep the discussion on the level of ideas not people, please.

No ad hominem attacks such as You're a gas bag, etc.

thanks
ben g





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] virtual credits again

2008-10-29 Thread Ben Goertzel
YKY,

I'm certainly not opposed to you trying a virtual-credits system.  My
prediction
is that it won't work out well, but my predictions are not always right.  I
just
want to clarify two things:

1)
There is *really* nothing unethical about OpenCog's setup.  However, if
we need to discuss that in detail we can do that in another thread.

Nor do I think
your proposed system has anything unethical about it, as long as it's
clearly
explained and those who participate in it understand the potential risks and
rewards.

2)
You say "you're forcing me to do charity when I am having financial
problems myself" -- but I don't see why you think anyone is forcing you
to do anything!

You're a free citizen of Hong Kong, you can do what you like ... and you can
certainly announce and discuss your project on this list, regardless of its
internal organizational, corporate or financial structure.

There is no use of force involved!

...

My earlier post about for-profit versus charitable motivations in humans
was an aside, just an attempt on my part to formally articulate some
reasoning
underlying my basic intuition that the virtual-credit system might not work
very well.  Of course, this kind of armchair psychological theorizing can
easily
go astray; it would be a mistake to take it too seriously.  But, if you
didn't
read Freakonomics when it was popular a while back, you might want to take
a look at the chapters dealing with these themes.

-- Ben G






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be 

Re: [agi] virtual credits again

2008-10-29 Thread Trent Waddington
On Wed, Oct 29, 2008 at 11:11 PM, Benjamin Johnston
[EMAIL PROTECTED] wrote:
 Your last two emails to YKY were rude and unhelpful. If you felt a burning
 desire to express yourself rudely, you could have done so by emailing him
 privately.

I'm publicly telling him to piss off.  I *could* have done this
privately, and have previously, but it does not have the desired
effect.

 He actually uses this list to discuss genuine technical matters, to
 seek genuine feedback on draft papers of his ideas, to ask for technical
 clarification about current published research in the area, to discuss
 practical questions relating to the development of real AGI and to listen
 and respond to comments and criticism.

And in this case he is using the list to repeat his tired bullshit
about virtual credits for open source work, which we've all said is
not only loony and stupid but also irrelevant to this list.

So meh, if you want to go ahead with your virtual credit absurdity,
you're free to do so, but I'm also free to call you an idiot.

Trent




Re: META: ad hominem attacks WAS Re: [agi] virtual credits again

2008-10-29 Thread Trent Waddington
On Wed, Oct 29, 2008 at 11:29 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Trent,

 A comment in my role as list administrator:
 Let's keep the discussion on the level of ideas not people, please.

 No ad hominem attacks such as You're a gas bag, etc.

If he's free to talk about virtual credits I should be free to talk
about how stupid his virtual credit idea is and, by extension, he is.

If it were a passing idea he'd had which, after receiving everyone's
feedback on the matter, he decided was not a good idea... that'd be
fine, if not completely off-topic for this list.  But he's brought up
the subject on this list and the opencog list and on irc a good half
dozen times now.  It's stupid... and he isn't getting the message.

Trent




Re: [agi] virtual credits again

2008-10-29 Thread Ben Goertzel

 So meh, if you want to go ahead with your virtual credit absurdity,
 you're free to do so, but I'm also free to call you an idiot.

 Trent


Not on this list, please.  If you feel the need to tell him that, tell him
by private email.

You are free to tell him you think it's a foolish idea that's doomed to
fail.   But not to call him an idiot.

Attack the ideas, if you wish; not the person.

thanks
Ben G
List moderator





Re: [agi] constructivist issues

2008-10-29 Thread Ben Goertzel
but we never need arbitrarily large integers in any particular case, we only
need integers going up to the size of the universe ;-)


Re: META: ad hominem attacks WAS Re: [agi] virtual credits again

2008-10-29 Thread Ben Goertzel

 If he's free to talk about virtual credits I should be free to talk
 about how stupid his virtual credit idea


Yes


 is and, by extension, he is.


No ...

Look, I am not any kind of expert on list management or social
tact; I'm just applying extremely basic rules of human politeness here.

In fact I know YKY, and he is actually *not* an idiot; he's a very bright
guy, although he has plenty of ideas I disagree with.

I think the virtual-credits thing probably won't work, but I'm not certain
of it ... and I do note that it seemed obvious to many people, in advance,
that the open-source methodology couldn't work at all (but then it did).

Discussion of business and organizational models for AGI projects is
sufficiently on-topic for this list, given how broadly the theme of the list
is currently being interpreted...

-- 
Ben G
List moderator





Re: [agi] constructivist issues

2008-10-29 Thread Abram Demski
Ben,

Thanks, that writeup did help me understand your viewpoint. However, I
don't completely understand/agree with the argument (one of the two,
not both!). My comments to that effect are posted on your blog.

About the earlier question...

(Mark) So Ben, how would you answer Abram's question: "So my question
is, do you interpret this as meaning 'Numbers are not well-defined and
can never be' (constructivist), or do you interpret this as 'It is
impossible to pack all true information about numbers into an axiom
system' (classical)?"
(Ben) "well-defined" is not well-defined in my view...

To rephrase. Do you think there is a truth of the matter concerning
formally undecidable statements about numbers?

--Abram

On Tue, Oct 28, 2008 at 5:26 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Hi guys,

 I took a couple hours on a red-eye flight last night to write up in more
 detail my
 argument as to why uncomputable entities are useless for science:

 http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html

 Of course, I had to assume a specific formal model of science which may be
 controversial.  But at any rate, I think I did succeed in writing down my
 argument in a more
 clear way than I'd been able to do in scattershot emails.

 The only real AGI relevance here is some comments on Penrose's nasty AI
 theories, e.g.
 in the last paragraph and near the intro...

 -- Ben G


 On Tue, Oct 28, 2008 at 2:02 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Mark,

 That is thanks to Godel's incompleteness theorem. Any formal system
 that describes numbers is doomed to be incomplete, meaning there will
 be statements that can be constructed purely by reference to numbers
 (no red cats!) that the system will fail to prove either true or
 false.

 So my question is, do you interpret this as meaning Numbers are not
 well-defined and can never be (constructivist), or do you interpret
 this as It is impossible to pack all true information about numbers
 into an axiom system (classical)?

 Hmm By the way, I might not be using the term constructivist in
 a way that all constructivists would agree with. I think
 intuitionist (a specific type of constructivist) would be a better
 term for the view I'm referring to.

 --Abram Demski

 On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
  Numbers can be fully defined in the classical sense, but not in the
 
  constructivist sense. So, when you say fully defined question, do
  you mean a question for which all answers are stipulated by logical
  necessity (classical), or logical deduction (constructivist)?
 
  How (or why) are numbers not fully defined in a constructionist sense?
 
  (I was about to ask you whether or not you had answered your own
  question
  until that caught my eye on the second or third read-through).
 
 





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 A human being should be able to change a diaper, plan an invasion, butcher
 a hog, conn a ship, design a building, write a sonnet, balance accounts,
 build a wall, set a bone, comfort the dying, take orders, give orders,
 cooperate, act alone, solve equations, analyze a new problem, pitch manure,
 program a computer, cook a tasty meal, fight efficiently, die gallantly.
 Specialization is for insects.  -- Robert Heinlein


 




Re: [agi] constructivist issues

2008-10-29 Thread Ben Goertzel
 To rephrase. Do you think there is a truth of the matter concerning
 formally undecidable statements about numbers?

 --Abram


That all depends on what the meaning of is, is ...  ;-)





Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
 but we never need arbitrarily large integers in any particular case, we only 
 need integers going up to the size of the universe ;-)

But measured in which units?  For any given integer, I can come up with (invent 
:-) a unit of measurement that requires a larger/greater number than that 
integer to describe the size of the universe.



;-)  Nice try, but . . . .  :-p


Re: [agi] constructivist issues

2008-10-29 Thread Ben Goertzel
sorry, I should have been more precise.   There is some K so that we never
need integers with algorithmic information exceeding K.
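
Spelling out the measure Ben is invoking (standard algorithmic
information, a.k.a. Kolmogorov complexity; the formal reading of "never
need" is an editorial gloss, not his wording):

    \[
      K_U(n) \;=\; \min\{\, \lvert p \rvert \;:\; U(p) = n \,\},
    \]

where U is a fixed universal Turing machine and |p| is the length of the
program p.  The claim is then that there is some constant K, set by the
information capacity of the physical universe, such that every integer
any physical process will ever instantiate satisfies K_U(n) <= K.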


Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser

Here's another slant . . . .

I really liked Pei's phrasing (which I consider to be the heart of 
Constructivism: The Epistemology :-)

Generally speaking, I'm not
building some system that learns about the world, in the sense that
there is a correct way to describe the world waiting to be discovered,
which can be captured by some algorithm. Instead, learning to me is a
non-algorithmic open-ended process by which the system summarizes its
own experience, and uses it to predict the future.


Classicists (to me) seem to frequently want one and only one truth that must 
be accurate, complete, and not only provable but with proofs of all of its 
implications existing (which is obviously thwarted by Tarski and Gödel).


So . . . . is it true that light is a particle?  Is it true that light is a 
wave?


That's why Ben and I are stuck answering many of your questions with 
requests for clarification -- Which question -- pi or cat?  Which subset of 
what *might* be considered mathematics/arithmetic?  Why are you asking the 
question?


Certain statements appear obviously untrue (read: inconsistent with the 
empirical world or our assumed extensions of it) in the vast majority of 
cases/contexts, but many others are simply context-dependent.





Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
 sorry, I should have been more precise.   There is some K so that we never 
 need integers with algorithmic information exceeding K.

Ah . . . . but is K predictable?  Or do we need all the integers above it as 
a safety margin?   :-)

(What is the meaning of need?  :-)

The inductive proof to show that all integers are necessary as a safety margin 
is pretty obvious . . . .

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 29, 2008 10:38 AM
  Subject: Re: [agi] constructivist issues



  sorry, I should have been more precise.   There is some K so that we never 
need integers with algorithmic information exceeding K.


  On Wed, Oct 29, 2008 at 10:32 AM, Mark Waser [EMAIL PROTECTED] wrote:

 but we never need arbitrarily large integers in any particular case, we 
only need integers going up to the size of the universe ;-)

But measured in which units?  For any given integer, I can come up with 
(invent :-) a unit of measurement that requires a larger/greater number than 
that integer to describe the size of the universe.



;-)  Nice try, but . . . .  :-p

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 29, 2008 9:48 AM
  Subject: Re: [agi] constructivist issues



  but we never need arbitrarily large integers in any particular case, we 
only need integers going up to the size of the universe ;-)


  On Wed, Oct 29, 2008 at 7:24 AM, Mark Waser [EMAIL PROTECTED] wrote:

 However, it does seem clear that "the integers" (for instance) is
not an entity with *scientific* meaning, if you accept my formalization of
science in the blog entry I recently posted...

Huh?  Integers are a class (which I would argue is an entity) that is, I
would argue, well-defined and useful in science.  What is "meaning" if not
"well-defined and useful"?  I need to go back to your paper because I didn't get
that out of it at all.


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 6:41 PM
  Subject: Re: [agi] constructivist issues



"well-defined" is not "well-defined" in my view...

  However, it does seem clear that "the integers" (for instance) is not
an entity with *scientific* meaning, if you accept my formalization of science
in the blog entry I recently posted...




  On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...

Oh.  Ick!  My bad phrasing.  WITH RESPECT TO NUMBERS should have
been WITH RESPECT TO THE DEFINITION OF NUMBERS, since I was responding to
"Numbers are not well-defined and can never be."  Further, I should not have
said "information about numbers" when I meant "definition of numbers": two
radically different things.  Argh!
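
(For reference, the first-order Peano axioms cited in the quote above,
together with the defining equations for addition and multiplication, are:

  $\forall x\; \neg(S(x)=0)$
  $\forall x \forall y\; (S(x)=S(y) \rightarrow x=y)$
  $\forall x\; (x+0=x)$
  $\forall x \forall y\; (x+S(y)=S(x+y))$
  $\forall x\; (x \cdot 0 = 0)$
  $\forall x \forall y\; (x \cdot S(y) = (x \cdot y) + x)$

plus the induction schema
  $(\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))) \rightarrow \forall x\,\varphi(x)$
for every formula $\varphi$.  Any formal system able to encode this much
arithmetic is covered by the incompleteness theorem.)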

= = = = = = = = 

So Ben, how would you answer Abram's question: "So my question is, do you
interpret this as meaning 'Numbers are not well-defined and can never be'
(constructivist), or do you interpret this as 'It is impossible to pack all
true information about numbers into an axiom system' (classical)?"

Does the statement that a formal system is incomplete with respect to
statements about numbers mean that "Numbers are not well-defined and can
never be"?

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as in
Constructivist epistemology
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had
Constructivism (mathematics) pointed out to me
(http://en.wikipedia.org/wiki/Constructivism_(mathematics)).  All I can say is
Ick!  I emphatically do not believe "When one assumes that an object does not
exist and derives a contradiction from that assumption, one still has not found
the object and therefore not proved its existence."



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper 
hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your 
argument as to why uncomputable entities are useless for science.  I'm going to 
need to go back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed 

Re: [agi] constructivist issues

2008-10-29 Thread Abram Demski
Ben,

So, for example, if I describe a Turing machine whose halting I prove
formally undecidable by the axioms of Peano arithmetic (translating
the Turing machine's operation into numerical terms, of course), and
then I ask you "is this Turing machine non-halting?", would you
answer, "That depends on what the meaning of is, is"?  Or does the
context provide enough additional information to give a fuller
answer?
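
(One standard way to get such a machine, just to be concrete: let $M$ be the
machine that enumerates all PA-proofs and halts if and only if it finds a
proof of $0=1$.  Over PA, "$M$ never halts" is equivalent to
$\mathrm{Con(PA)}$, so if PA is consistent then, by Gödel's second
incompleteness theorem, PA cannot prove that $M$ is non-halting; and if PA is
arithmetically sound it cannot prove that $M$ halts either, since that
statement is false.  PA plus the axiom $\mathrm{Con(PA)}$, on the other hand,
proves immediately that $M$ never halts.)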

--Abram

On Wed, Oct 29, 2008 at 10:21 AM, Ben Goertzel [EMAIL PROTECTED] wrote:



 To rephrase. Do you think there is a truth of the matter concerning
 formally undecidable statements about numbers?

 --Abram

 That all depends on what the meaning of is, is ...  ;-)

 




Re: [agi] constructivist issues

2008-10-29 Thread Ben Goertzel
On Wed, Oct 29, 2008 at 11:19 AM, Abram Demski [EMAIL PROTECTED]wrote:

 Ben,

 So, for example, if I describe a Turing machine whose halting I prove
 formally undecidable by the axioms of Peano arithmetic (translating
 the Turing machine's operation into numerical terms, of course), and
 then I ask you, is this Turing machine non-halting, then would you
 answer, That depends on what the meaning of is, is? Or does the
 context provide enough additional information to provide a more full
 answer?

 --Abram



hmmm... you're saying the halting is provable  in some more powerful
axiom system but not in Peano arithmetic?

The thing is, a Turing machine is not a real machine: it's a mathematical
abstraction.  A mathematical abstraction only has meaning inside a certain
formal system.  So, the Turing machine inside the Peano arithmetic
system would neither provably halt nor not-halt ... the Turing machine
inside
some other formal system might potentially  provably halt...

But the question is what does this mean about any actual computer,
or any actual physical object -- which we can only communicate about clearly
insofar as it can be boiled down to a finite dataset.

The use of the same term "machine" for an observable object and a
mathematical abstraction seems to confuse the issue.

-- Ben





RE: [agi] Cloud Intelligence

2008-10-29 Thread John G. Rose
 From: Bob Mottram [mailto:[EMAIL PROTECTED]
 Beware of putting too much stuff into the cloud.  Especially in the
 current economic climate clouds could disappear without notice (i.e.
 unrecoverable data loss).  Also, depending upon terms and conditions
 any data which you put into the cloud may not legally be owned by you,
 even if you created it.
 

For private commercial clouds this is true. But imagine a public
self-healing cloud where it is somewhat self-regulated and self-organized.
Though commercial clouds could have some sort of inter-cloud virtual
backbone that they subscribe to. So Company A goes bankrupt but its cloud
is offloaded into the backbone and absorbed by another cloud. Micro payments
migrate with the cloud. Ya right like that could ever happen.

John





Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Mark Waser
Hutter proved (3), although as a general principle it was already a well 
established practice in machine learning. Also, I agree with (4) but this 
is not the primary reason to prefer simplicity.


Hutter *defined* the measure of correctness using simplicity as a component. 
Of course, they're correlated when you do such a thing.  That's not a proof, 
that's an assumption.


Regarding (4), I was deliberately ambiguous as to whether I meant
"implementation of a thinking system" or "implementation of thought itself."


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 29, 2008 11:11 AM
Subject: Re: [agi] Occam's Razor and its abuse



--- On Wed, 10/29/08, Mark Waser [EMAIL PROTECTED] wrote:


 (1) Simplicity (in conclusions, hypothesis, theories,
 etc.) is preferred.
 (2) The preference to simplicity does not need a
 reason or justification.
 (3) Simplicity is preferred because it is correlated
 with correctness.
 I agree with (1), but not (2) and (3).

I concur but would add that (4) Simplicity is preferred
because it is
correlated with correctness *of implementation* (or ease of
implementing it correctly :-)


Occam said (1) but had no proof. Hutter proved (3), although as a general 
principle it was already a well established practice in machine learning. 
Also, I agree with (4) but this is not the primary reason to prefer 
simplicity.


-- Matt Mahoney, [EMAIL PROTECTED]











RE: [agi] Occam's Razor and its abuse

2008-10-29 Thread Ed Porter
Pei,

My understanding is that when you reason from data, you often want the
ability to extrapolate, which requires some sort of assumptions about the
type of mathematical model to be used.  How do you deal with that in NARS?

Ed Porter

-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 28, 2008 9:40 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Occam's Razor and its abuse


Ed,

Since NARS doesn't follow the Bayesian approach, there are no initial priors
to be assumed. If we use a more general term, such as initial knowledge or
innate beliefs, then yes, you can add them into the system, which will
improve the system's performance. However, they are optional. In NARS, all
object-level (i.e., not meta-level) innate beliefs can be learned by the
system afterward.

Pei

On Tue, Oct 28, 2008 at 5:37 PM, Ed Porter [EMAIL PROTECTED] wrote:
 It appears to me that the assumptions about initial priors used by a 
 self learning AGI or an evolutionary line of AGI's could be quite 
 minimal.

My understanding is that once a probability distribution starts
receiving random samples from its distribution, the effect of the
original prior becomes rapidly lost, unless it is a rather rare one.
Such rare problem priors would get selected against quickly by
evolution.  Evolution would tend to tune for the most appropriate
priors for the success of subsequent generations (either for computing
in the same system, if it is capable of enough change, or for descendant
systems).  Probably the best priors would generally be ones that could
be trained moderately rapidly by data.
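
A minimal sketch of that washing-out effect, in a Beta-Bernoulli setting
(the coin example, the particular priors, and the sample sizes below are
illustrative choices, not anything taken from the argument above):

import random

def posterior_mean(a, b, flips):
    # Posterior mean of a Bernoulli parameter under a Beta(a, b) prior,
    # after observing a list of 0/1 outcomes.
    heads = sum(flips)
    tails = len(flips) - heads
    return (a + heads) / (a + b + heads + tails)

random.seed(0)
true_p = 0.7
flips = [1 if random.random() < true_p else 0 for _ in range(1000)]

# Two very different priors: one near-uniform, one biased toward ~0.1.
for a, b in [(1, 1), (2, 18)]:
    for n in (0, 10, 100, 1000):
        print(a, b, n, round(posterior_mean(a, b, flips[:n]), 3))

Both posterior means end up near 0.7: with enough samples the data swamps
the starting prior, and only an extremely concentrated (dogmatic) prior
would take appreciably longer to wash out.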

 So it seems an evolutionary system or line could initially learn 
 priors without any assumptions for priors other than a random picking 
 of priors. Over time and multiple generations it might develop 
hereditary priors, and perhaps even different hereditary priors for
 parts of its network connected to different inputs, outputs or 
 internal controls.

The use of priors in an AGI could be greatly improved by having a
gen/comp hierarchy in which models for a given concept could be
inherited from the priors of sets of models for similar concepts, and
in which the set of appropriate priors could change contextually.  It
would also seem that the notion of a prior could be improved by
blending information from episodic and probabilistic models.

It would appear that in almost any generally intelligent system, being
able to approximate reality in a manner sufficient for evolutionary
success with the most efficient representations would be a
characteristic that would be greatly preferred by evolution, because
it would allow systems to better model more of their environment
sufficiently well for evolutionary success with whatever current
modeling capacity they have.

 So, although a completely accurate description of virtually anything 
 may not find much use for Occam's Razor, as a practically useful 
representation it often will.  It seems to me that Occam's Razor is
more oriented to deriving meaningful generalizations than it is to exact
descriptions of anything.

Furthermore, it would seem to me that a simpler set of
preconditions is generally more probable than a more complex one,
 because it requires less coincidence.  It would seem to me this would 
 be true under most random sets of priors for the probabilities of the 
 possible sets of components involved and Occam's Razor type selection.

These are the musings of an untrained mind, since I have not spent much
time studying philosophy, because such a high percentage of it was so
obviously stupid (such as what was commonly said when I was young,
that you can't have intelligence without language) and my
understanding of math is much less than that of many on this list.
But nonetheless I think much of what I have said above is true.

 I think its gist is not totally dissimilar to what Abram has said.

 Ed Porter




 -Original Message-
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 28, 2008 3:05 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Occam's Razor and its abuse


 Abram,

 I agree with your basic idea in the following, though I usually put it 
 in different form.

 Pei

 On Tue, Oct 28, 2008 at 2:52 PM, Abram Demski [EMAIL PROTECTED] 
 wrote:
 Ben,

 You assert that Pei is forced to make an assumption about the 
 regulatiry of the world to justify adaptation. Pei could also take a 
 different argument. He could try to show that *if* a strategy exists 
 that can be implemented given the finite resources, NARS will 
 eventually find it. Thus, adaptation is justified on a sort of we 
 might as well try basis. (The proof would involve showing that NARS 
 searches the state of finite-state-machines that can be implemented 
 with the resources at hand, and is more probable to stay for longer 
 periods of time in configurations that give more reward, such that 
 NARS would eventually 

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Pei Wang
Ed,

When NARS extrapolates its past experience to the current and the
future, it is indeed based on the assumption that its future
experience will be similar to its past experience (otherwise any
prediction will be equally valid), however it does not assume the
world can be captured by any specific mathematical model, such as a
Turing Machine or a probability distribution defined on a
propositional space.

Concretely speaking, when a statement S has been tested N times, and
in M times it is true, but in N-M times it is false, then NARS's
expectation value for it to be true in the next testing is E(S) =
(M+0.5)/(N+1) [if there is no other relevant knowledge], and the
system will use this value to decide whether to accept a bet on S.
However, neither the system nor its designer assumes that there is a
true probability for S to occur for which the above expectation is
an approximation. Also, it is not assumed that E(S)  will converge
when the testing on S continues.
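
As a toy rendering of that rule (the bet-acceptance criterion used here,
accepting whenever the expected payoff exceeds the stake, is an illustrative
assumption; the expectation formula itself is the one given above):

def expectation(m, n):
    # NARS-style expectation that S holds on the next test, after S has
    # been tested n times and found true in m of them: E(S) = (m+0.5)/(n+1).
    return (m + 0.5) / (n + 1)

def accept_bet(m, n, stake, payoff):
    # Illustrative decision rule: accept a bet costing `stake` that pays
    # `payoff` if S turns out true, whenever the expected return is positive.
    return expectation(m, n) * payoff > stake

print(expectation(0, 0))                         # 0.5  (total ignorance)
print(expectation(8, 10))                        # ~0.773
print(accept_bet(8, 10, stake=1.0, payoff=1.5))  # True: 0.773 * 1.5 > 1.0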

Pei


On Wed, Oct 29, 2008 at 11:33 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Pei,

 My understanding is that when you reason from data, you often want the
 ability to extrapolate, which requires some sort of assumptions about the
 type of mathematical model to be used.  How do you deal with that in NARS?

 Ed Porter

 -Original Message-
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, October 28, 2008 9:40 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Occam's Razor and its abuse


 Ed,

 Since NARS doesn't follow the Bayesian approach, there are no initial priors
 to be assumed. If we use a more general term, such as initial knowledge or
 innate beliefs, then yes, you can add them into the system, which will
 improve the system's performance. However, they are optional. In NARS, all
 object-level (i.e., not meta-level) innate beliefs can be learned by the
 system afterward.

 Pei

 On Tue, Oct 28, 2008 at 5:37 PM, Ed Porter [EMAIL PROTECTED] wrote:
 It appears to me that the assumptions about initial priors used by a
 self learning AGI or an evolutionary line of AGI's could be quite
 minimal.

 My understanding is that once a probability distribution starts
 receiving random samples from its distribution the effect of the
 original prior becomes rapidly lost, unless it is a rather rare one.
 Such rare problem priors would get selected against quickly by
 evolution.  Evolution would tend to tune for the most appropriate
 priors for the success of subsequent generations (either or computing
 in the same system if it is capable of enough change or of descendant
 systems).  Probably the best priors would generally be ones that could
 be trained moderately rapidly by data.

 So it seems an evolutionary system or line could initially learn
 priors without any assumptions for priors other than a random picking
 of priors. Over time and multiple generations it might develop
 hereditary priors, an perhaps even different hereditary priors for
 parts of its network connected to different inputs, outputs or
 internal controls.

 The use of priors in an AGI could be greatly improved by having a
 gen/comp hierarchy in which models for a given concept could be
 inherited from the priors of sets of models for similar concepts, and
 in which the set of appropriate priors could change contextually.  It
 would also seem that the notion of a prior could be improved by
 blending information from episodic and probabilistic models.

 It would appear that in almost any generally intelligent system, being
 able to approximate reality in a manner sufficient for evolutionary
 success with the most efficient representations would be a
 characteristic that would be greatly preferred by evolution, because
 it would allow systems to better model more of their environment
 sufficiently well for evolutionary success with whatever current
 modeling capacity they have.

 So, although a completely accurate description of virtually anything
 may not find much use for Occam's Razor, as a practically useful
 representation it often will.  It seems to me that Occam's Razor is
 more oriented to deriving meaningful generalizations than it is to exact
 descriptions of anything.

 Furthermore, it would seem to me that a simpler set of
 preconditions is generally more probable than a more complex one,
 because it requires less coincidence.  It would seem to me this would
 be true under most random sets of priors for the probabilities of the
 possible sets of components involved and Occam's Razor type selection.

 These are the musings of an untrained mind, since I have not spent much
 time studying philosophy, because such a high percent of it was so
 obviously stupid (such as what was commonly said when I was young,
 that you can't have intelligence without language) and my
 understanding of math is much less than that of many on this list.
 But none the less I think much of what I have said above is true.

 I think its gist is not totally dissimilar to what Abram has said.

 

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Ben Goertzel
But, NARS as an overall software system will perform more effectively
(i.e., learn more rapidly) in
some environments than in others, for a variety of reasons.  There are many
biases built into the NARS architecture in various ways ... it's just not
obvious
to spell out what they are, because the NARS system was not explicitly
designed based on that sort of thinking...

The same is true of every other complex AGI architecture...

ben g


On Wed, Oct 29, 2008 at 12:07 PM, Pei Wang [EMAIL PROTECTED] wrote:

 Ed,

 When NARS extrapolates its past experience to the current and the
 future, it is indeed based on the assumption that its future
 experience will be similar to its past experience (otherwise any
 prediction will be equally valid), however it does not assume the
 world can be captured by any specific mathematical model, such as a
 Turing Machine or a probability distribution defined on a
 propositional space.

 Concretely speaking, when a statement S has been tested N times, and
 in M times it is true, but in N-M times it is false, then NARS's
 expectation value for it to be true in the next testing is E(S) =
 (M+0.5)/(N+1) [if there is no other relevant knowledge], and the
 system will use this value to decide whether to accept a bet on S.
 However, neither the system nor its designer assumes that there is a
 true probability for S to occur for which the above expectation is
 an approximation. Also, it is not assumed that E(S)  will converge
 when the testing on S continues.

 Pei


 On Wed, Oct 29, 2008 at 11:33 AM, Ed Porter [EMAIL PROTECTED] wrote:
  Pei,
 
  My understanding is that when you reason from data, you often want the
  ability to extrapolate, which requires some sort of assumptions about the
  type of mathematical model to be used.  How do you deal with that in
 NARS?
 
  Ed Porter
 
  -Original Message-
  From: Pei Wang [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, October 28, 2008 9:40 PM
  To: agi@v2.listbox.com
  Subject: Re: [agi] Occam's Razor and its abuse
 
 
  Ed,
 
  Since NARS doesn't follow the Bayesian approach, there is no initial
 priors
  to be assumed. If we use a more general term, such as initial knowledge
 or
  innate beliefs, then yes, you can add them into the system, which will
  improve the system's performance. However, they are optional. In NARS,
 all
  object-level (i.e., not meta-level) innate beliefs can be learned by the
  system afterward.
 
  Pei
 
  On Tue, Oct 28, 2008 at 5:37 PM, Ed Porter [EMAIL PROTECTED] wrote:
  It appears to me that the assumptions about initial priors used by a
  self learning AGI or an evolutionary line of AGI's could be quite
  minimal.
 
  My understanding is that once a probability distribution starts
  receiving random samples from its distribution the effect of the
  original prior becomes rapidly lost, unless it is a rather rare one.
  Such rare problem priors would get selected against quickly by
  evolution.  Evolution would tend to tune for the most appropriate
  priors for the success of subsequent generations (either or computing
  in the same system if it is capable of enough change or of descendant
  systems).  Probably the best priors would generally be ones that could
  be trained moderately rapidly by data.
 
  So it seems an evolutionary system or line could initially learn
  priors without any assumptions for priors other than a random picking
  of priors. Over time and multiple generations it might develop
  hereditary priors, and perhaps even different hereditary priors for
  parts of its network connected to different inputs, outputs or
  internal controls.
 
  The use of priors in an AGI could be greatly improved by having a
  gen/comp hierarchy in which models for a given concept could be
  inherited from the priors of sets of models for similar concepts, and
  in which the set of appropriate priors could change contextually.  It
  would also seem that the notion of a prior could be improved by
  blending information from episodic and probabilistic models.
 
  It would appear that in almost any generally intelligent system, being
  able to approximate reality in a manner sufficient for evolutionary
  success with the most efficient representations would be a
  characteristic that would be greatly preferred by evolution, because
  it would allow systems to better model more of their environment
  sufficiently well for evolutionary success with whatever current
  modeling capacity they have.
 
  So, although a completely accurate description of virtually anything
  may not find much use for Occam's Razor, as a practically useful
  representation it often will.  It seems to me that Occam's Razor is
  more oriented to deriving meaningful generalizations than it is to exact
  descriptions of anything.
 
  Furthermore, it would seem to me that a simpler set of
  preconditions is generally more probable than a more complex one,
  because it requires less coincidence.  It would seem to me this would
  be true 

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Pei Wang
Ben,

I never claimed that NARS is not based on assumptions (or call them
biases), but only on truths. It surely is, and many of the
assumptions are my beliefs and intuitions, which I cannot convince
other people to accept very soon.

However, it does not mean that all assumptions are equally acceptable,
or that as soon as something is called an assumption, the author will be
released from the duty of justifying it.

Going back to the original topic: since "the simplicity/complexity of a
description is correlated with its prior probability" is the core
assumption of certain research paradigms, it should be justified. Calling
it "Occam's Razor" so as to suggest it is self-evident is not the
proper way to do the job. This is all I want to argue in this
discussion.

Pei

On Wed, Oct 29, 2008 at 12:10 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 But, NARS as an overall software system will perform more effectively
 (i.e., learn more rapidly) in
 some environments than in others, for a variety of reasons.  There are many
 biases built into the NARS architecture in various ways ... it's just not
 obvious
 to spell out what they are, because the NARS system was not explicitly
 designed based on that sort of thinking...

 The same is true of every other complex AGI architecture...

 ben g


 On Wed, Oct 29, 2008 at 12:07 PM, Pei Wang [EMAIL PROTECTED] wrote:

 Ed,

 When NARS extrapolates its past experience to the current and the
 future, it is indeed based on the assumption that its future
 experience will be similar to its past experience (otherwise any
 prediction will be equally valid), however it does not assume the
 world can be captured by any specific mathematical model, such as a
 Turing Machine or a probability distribution defined on a
 propositional space.

 Concretely speaking, when a statement S has been tested N times, and
 in M times it is true, but in N-M times it is false, then NARS's
 expectation value for it to be true in the next testing is E(S) =
 (M+0.5)/(N+1) [if there is no other relevant knowledge], and the
 system will use this value to decide whether to accept a bet on S.
 However, neither the system nor its designer assumes that there is a
 true probability for S to occur for which the above expectation is
 an approximation. Also, it is not assumed that E(S)  will converge
 when the testing on S continues.

 Pei


 On Wed, Oct 29, 2008 at 11:33 AM, Ed Porter [EMAIL PROTECTED] wrote:
  Pei,
 
  My understanding is that when you reason from data, you often want the
  ability to extrapolate, which requires some sort of assumptions about
  the
  type of mathematical model to be used.  How do you deal with that in
  NARS?
 
  Ed Porter
 
  -Original Message-
  From: Pei Wang [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, October 28, 2008 9:40 PM
  To: agi@v2.listbox.com
  Subject: Re: [agi] Occam's Razor and its abuse
 
 
  Ed,
 
  Since NARS doesn't follow the Bayesian approach, there is no initial
  priors
  to be assumed. If we use a more general term, such as initial
  knowledge or
   innate beliefs, then yes, you can add them into the system, which will
  improve the system's performance. However, they are optional. In NARS,
  all
  object-level (i.e., not meta-level) innate beliefs can be learned by the
  system afterward.
 
  Pei
 
  On Tue, Oct 28, 2008 at 5:37 PM, Ed Porter [EMAIL PROTECTED] wrote:
  It appears to me that the assumptions about initial priors used by a
  self learning AGI or an evolutionary line of AGI's could be quite
  minimal.
 
  My understanding is that once a probability distribution starts
  receiving random samples from its distribution the effect of the
  original prior becomes rapidly lost, unless it is a rather rare one.
  Such rare problem priors would get selected against quickly by
  evolution.  Evolution would tend to tune for the most appropriate
  priors for the success of subsequent generations (either or computing
  in the same system if it is capable of enough change or of descendant
  systems).  Probably the best priors would generally be ones that could
  be trained moderately rapidly by data.
 
  So it seems an evolutionary system or line could initially learn
  priors without any assumptions for priors other than a random picking
  of priors. Over time and multiple generations it might develop
   hereditary priors, and perhaps even different hereditary priors for
  parts of its network connected to different inputs, outputs or
  internal controls.
 
  The use of priors in an AGI could be greatly improved by having a
   gen/comp hierarchy in which models for a given concept could be
   inherited from the priors of sets of models for similar concepts, and
   in which the set of appropriate priors could change contextually.  It
   would also seem that the notion of a prior could be improved by
  blending information from episodic and probabilistic models.
 
   It would appear that in almost any generally intelligent system, being
  able to approximate reality in a manner 

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Ben Goertzel
 However, it does not mean that all assumptions are equally acceptable,
 or that as soon as something is called an assumption, the author will be
 released from the duty of justifying it.



Hume argued that at the basis of any approach to induction, there will
necessarily lie some assumption that is *not* inductively justified, but
must in essence be taken on faith or as an unjustified assumption

He claimed that humans make certain unjustified assumptions of this nature
automatically due to human nature

This is an argument that not all assumptions can be expected to be justified
...

Comments?
ben g





Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Pei Wang
Ben,

It goes back to what "justification" we are talking about. To "prove"
it is a strong version, and to "show supporting evidence" is a weak
version. Hume pointed out that induction cannot be justified in the
sense that there is no way to guarantee that all inductive conclusions
will be confirmed.

I don't think Hume can be cited to support the assumption that
complexity is correlated to probability, or that this assumption
does not need justification, just because inductive conclusions can be
wrong. There are many more reasons to accept induction than to accept
the above assumption.

Pei

On Wed, Oct 29, 2008 at 12:31 PM, Ben Goertzel [EMAIL PROTECTED] wrote:



 However, it does not mean that all assumptions are equally acceptable,
 or that as soon as something is called an assumption, the author will be
 released from the duty of justifying it.

 Hume argued that at the basis of any approach to induction, there will
 necessarily lie some assumption that is *not* inductively justified, but
 must in essence be taken on faith or as an unjustified assumption

 He claimed that humans make certain unjustified assumptions of this nature
 automatically due to human nature

 This is an argument that not all assumptions can be expected to be justified
 ...

 Comments?
 ben g

 




Re: [agi] constructivist issues

2008-10-29 Thread Abram Demski
Ben,

OK, that is a pretty good answer. I don't think I have any questions
left about your philosophy :).

Some comments, though.

 hmmm... you're saying the halting is provable  in some more powerful
 axiom system but not in Peano arithmetic?

Yea, it would be provable in whatever formal system I used to prove
the undecidability in the first place. (Probably PA plus an axiom
asserting PA is consistent.)

 The thing is, a Turing machine is not a real machine: it's a mathematical
 abstraction.  A mathematical abstraction only has meaning inside a certain
 formal system.  So, the Turing machine inside the Peano arithmetic
 system would neither provably halt nor not-halt ... the Turing machine
 inside
 some other formal system might potentially  provably halt...

Basically, I see this as a "no" to my original "Do you think
there is a truth of the matter" question. After all, if we need more
definitions to determine the truth of a statement, then surely the
statement's truth without the additional context is undefined.

Take-home message for me: Yes, Ben really is a constructivist.


 But the question is what does this mean about any actual computer,
 or any actual physical object -- which we can only communicate about clearly
 insofar as it can be boiled down to a finite dataset.

What it means to me is that "Any actual computer will not halt (with a
correct output) for this program." An actual computer will keep
crunching away until some event happens that breaks the metaphor
between it and the abstract machine-- memory overload, power failure,
et cetera.

This does not seem to me to depend on the formal system that we choose.

Argument: very basic axioms fill in all the positive facts, and will
tell us that a Turing machine halts when such is the case. Any
additional axioms are attempts to fill in the negative space, so that
we can prove some Turing machines non-halting. It seems perfectly
reasonable to think hypothetically about the formal system that has
*all* the negative cases filled in properly, even though this is
impossible to actually do. This system is the truth of the matter.
So, when we choose a formal system to reason about Turing machines
with, we are justified in choosing the strongest one available to us
(more specifically, the strongest one we suspect to be consistent).


  The use of the same term "machine" for an observable object and a
  mathematical abstraction seems to confuse the issue.

Sure. Do you have a preferred term? I can't think of any...


 -- Ben

 




Re: [agi] constructivist issues

2008-10-29 Thread Ben Goertzel

 
  But the question is what does this mean about any actual computer,
  or any actual physical object -- which we can only communicate about
 clearly
  insofar as it can be boiled down to a finite dataset.

 What it means to me is that Any actual computer will not halt (with a
 correct output) for this program. An actual computer will keep
 crunching away until some event happens that breaks the metaphor
 between it and the abstract machine-- memory overload, power failure,
 et cetera.



Yes ... this can be concluded **if** you can convince yourself that the
formal model corresponds to the physical machine.

And to do *this*, you need to use a finite set of finite data points ;-)

ben





Re: [agi] constructivist issues

2008-10-29 Thread Abram Demski
Ben,

The difference can I think be best illustrated with two hypothetical
AGIs. Both are supposed to be learning that computers are
approximately Turing machines. The first, made by you, interprets
this constructively (let's say relative to PA). The second, made by
me, interprets this classically (so it will always take the strongest
set of axioms that it suspects to be consistent).

The first AGI will be checking to see how well the computer's halting
matches with the positive cases it can prove in PA, and the
non-halting with the negative cases it can prove in PA. It will be
ignoring the halting/nonhalting behavior when it can prove nothing.

The second AGI will be checking to see how well the computer's halting
matches with the positive cases it can prove in the axiom system of
its choice, and the non-halting with the negative cases it can prove
in PA, *plus* it will look to see if it is non-halting in the cases
where it can prove nothing (after significant effort).

Of course, both will conclude nearly the same thing: the computer is
similar to the formal entity within specific restrictions. The second
AGI will have slightly more data (extra axioms plus information in
cases when it can't prove anything), but it will be learning a
formally different statement too, so a direct comparison isn't quite
fair. Anyway, I think this clarifies the difference.

--Abram

On Wed, Oct 29, 2008 at 1:13 PM, Ben Goertzel [EMAIL PROTECTED] wrote:


 
  But the question is what does this mean about any actual computer,
  or any actual physical object -- which we can only communicate about
  clearly
  insofar as it can be boiled down to a finite dataset.

 What it means to me is that Any actual computer will not halt (with a
 correct output) for this program. An actual computer will keep
 crunching away until some event happens that breaks the metaphor
 between it and the abstract machine-- memory overload, power failure,
 et cetera.

 Yes ... this can be concluded **if** you can convince yourself that the
 formal model corresponds to the physical machine.

 And to do *this*, you need to use a finite set of finite data points ;-)

 ben

 




Re: [agi] Cloud Intelligence

2008-10-29 Thread Mike Archbold
I guess I don't see how cloud computing is materially different from
open source, insofar as we see the sharing of resources and also now
increased availability, with no need to buy so much hardware at the
outset.  But it seems more a case of convenience.

So what does that have to do with AGI?  I can see one advantage,
however: if you wanted your executable code to remain hidden in a
cloud, nobody can get hold of it to decompile and figure it out.

On 10/29/08, John G. Rose [EMAIL PROTECTED] wrote:
 From: Bob Mottram [mailto:[EMAIL PROTECTED]
 Beware of putting too much stuff into the cloud.  Especially in the
 current economic climate clouds could disappear without notice (i.e.
 unrecoverable data loss).  Also, depending upon terms and conditions
 any data which you put into the cloud may not legally be owned by you,
 even if you created it.


 For private commercial clouds this is true. But imagine a public
 self-healing cloud where it is somewhat self-regulated and self-organized.
 Though commercial clouds could have some sort of inter-cloud virtual
 backbone that they subscribe to. So Company A goes bankrupt but its cloud
 is offloaded into the backbone and absorbed by another cloud. Micro payments
 migrate with the cloud. Ya right like that could ever happen.

 John








Re: [agi] constructivist issues

2008-10-29 Thread Abram Demski
Ben,

No, I wasn't intending any weird chips.

For me, the most important way in which you are a constructivist is
that you think AIXI is the ideal that finite intelligence should
approach.

--Abram

On Wed, Oct 29, 2008 at 2:33 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 OK ... but are both of these hypothetical computer programs on standard
 contemporary chips, or do any of them use weird
 supposedly-uncomputability-supporting chips?  ;-)

 Of course, a computer program can use any axiom set it wants to analyze its
 data ... just as we can now use automated theorem-provers to prove stuff
 about uncomputable entities, in a formal sense...

  By the way, I'm not sure in what sense I'm a constructivist.  I'm not
 willing to commit to the statement that the universe is finite, or that only
 finite math has meaning.  But, it seems to me that, within the scope of
 *science* and *language*, as currently conceived, there is no *need* to
 posit anything non-finite.  Science and language are not necessarily
 comprehensive of the universe  Potentially (though I doubt it) mind is
 uncomputable in a way that makes it impossible for science and math to grasp
 it well enough to guide us in building an AGI ;-) ... and, interestingly, in
 this case we could still potentially build an AGI via copying a human brain
 ... and then randomly tinkering with it!!

 ben

 On Wed, Oct 29, 2008 at 1:45 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 The difference can I think be best illustrated with two hypothetical
 AGIs. Both are supposed to be learning that computers are
 approximately Turing machines. The first, made by you, interprets
 this constructively (let's say relative to PA). The second, made by
 me, interprets this classically (so it will always take the strongest
 set of axioms that it suspects to be consistent).

 The first AGI will be checking to see how well the computer's halting
 matches with the positive cases it can prove in PA, and the
 non-halting with the negative cases it can prove in PA. It will be
 ignoring the halting/nonhalting behavior when it can prove nothing.

 The second AGI will be checking to see how well the computer's halting
 matches with the positive cases it can prove in the axiom system of
 its choice, and the non-halting with the negative cases it can prove
 in PA, *plus* it will look to see if it is non-halting in the cases
 where it can prove nothing (after significant effort).

 Of course, both will conclude nearly the same thing: the computer is
 similar to the formal entity within specific restrictions. The second
 AGI will have slightly more data (extra axioms plus information in
 cases when it can't prove anything), but it will be learning a
 formally different statement too, so a direct comparison isn't quite
 fair. Anyway, I think this clarifies the difference.

 --Abram

 On Wed, Oct 29, 2008 at 1:13 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
 
  
   But the question is what does this mean about any actual computer,
   or any actual physical object -- which we can only communicate about
   clearly
   insofar as it can be boiled down to a finite dataset.
 
  What it means to me is that Any actual computer will not halt (with a
  correct output) for this program. An actual computer will keep
  crunching away until some event happens that breaks the metaphor
  between it and the abstract machine-- memory overload, power failure,
  et cetera.
 
  Yes ... this can be concluded **if** you can convince yourself that the
  formal model corresponds to the physical machine.
 
  And to do *this*, you need to use a finite set of finite data points ;-)
 
  ben
 
  





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 A human being should be able to change a diaper, plan an invasion, butcher
 a hog, conn a ship, design a building, write a sonnet, balance accounts,
 build a wall, set a bone, comfort the dying, take orders, give orders,
 cooperate, act alone, solve equations, analyze a new problem, pitch manure,
 program a computer, cook a tasty meal, fight efficiently, die gallantly.
 Specialization is for insects.  -- Robert Heinlein


 




[agi] Re: Two Remarkable Computational Competencies of the SGA

2008-10-29 Thread Lukasz Stafiniak
OK it's just a Compact Genetic Algorithm -- genetic drift kind of
stuff. Nice read, but very simple (subsumed by any serious EDA). It
says you can do simple pattern mining by just looking at the
distribution, without complex statistics.
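
For anyone who has not run into it, the compact GA is small enough to sketch
in full.  This is a toy version on OneMax; the bit-length, virtual population
size, and fitness function are arbitrary illustrative choices:

import random

def onemax(bits):
    # Toy fitness: number of 1-bits.
    return sum(bits)

def compact_ga(n_bits=20, pop_size=50, max_iters=20000, fitness=onemax):
    # Compact GA: evolve a probability vector instead of a population.
    # Each step, sample two candidates and shift each probability by
    # 1/pop_size toward the winner wherever the two candidates differ.
    p = [0.5] * n_bits
    for _ in range(max_iters):
        a = [1 if random.random() < pi else 0 for pi in p]
        b = [1 if random.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:
                step = 1.0 / pop_size
                p[i] += step if winner[i] == 1 else -step
                p[i] = min(1.0, max(0.0, p[i]))
        if all(pi < 0.01 or pi > 0.99 for pi in p):
            break  # the model distribution has effectively converged
    return p

random.seed(1)
print(compact_ga())  # on OneMax the probabilities drift toward all-ones

The "pattern mining" point is then just that the final probability vector
itself is the summary of what the search found, with no extra statistics
kept beyond it.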

On Wed, Oct 29, 2008 at 8:13 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 Very relevant even if you don't agree. Too much rhetoric though (it's
 not really that earth-shaking). I haven't made up my mind yet.

 http://evoadaptation.wordpress.com/2008/10/18/new-manuscript-two-remarkable-computational-competencies-of-the-simple-genetic-algorithm/





Re: [agi] constructivist issues

2008-10-29 Thread Ben Goertzel
On Wed, Oct 29, 2008 at 4:47 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 No, I wasn't intending any weird chips.

 For me, the most important way in which you are a constructivist is
 that you think AIXI is the ideal that finite intelligence should
 approach.




Hmmm... I'm not sure I think that.  AIXI is ideal in terms of a certain
formal definition of intelligence, which I don't necessarily accept as the
end-all of intelligence...

It may be that future science identifies conceptual shortcomings in the
theoretical framework within which AIXI lives.

But, I do think that AIXI is interesting as a source of inspiration for some
aspects of the process of creating practical AGI systems.

-- Ben G





[agi] Machine Consciousness Workshop, Hong Kong, June 2009

2008-10-29 Thread Ben Goertzel
Hi all,

I wanted to let you know that Gino Yu and I are co-organizing a Workshop
on Machine Consciousness, which will be held in Hong Kong in June 2009: see

http://novamente.net/machinecs/index.html

for details.

It is colocated with a larger, interdisciplinary conference on consciousness
research,
which has previously been announced:

http://www.consciousness.arizona.edu/

As an aside, I also note that the date for submitting papers to
AGI-09 has been extended, by popular demand, till November 12;
see

http://agi-09.org/

AGI-09 will welcome quality papers on any strong-AI
related topics.

thanks!
ben

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Matt Mahoney
--- On Tue, 10/28/08, Pei Wang [EMAIL PROTECTED] wrote:

 Whenever someone proves something outside mathematics, it is always
 based on certain assumptions. If the assumptions are not well
 justified, there is no strong reason for people to accept the
 conclusion, even though the proof process is correct.

My assumption is that the physics of the observable universe is computable 
(which is widely believed to be true). If it is true, then AIXI proves that 
Occam's Razor holds.
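
(For concreteness, the formal result usually meant here is stated in terms of
Solomonoff's universal prior over strings,

  $M(x) = \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}$,

the sum ranging over the (prefix-free) programs $p$ that make a universal
machine $U$ output something beginning with $x$.  Shorter programs carry
exponentially more weight, so up to a multiplicative constant
$M(x) \geq 2^{-K(x)}$ with $K$ the Kolmogorov complexity; that is the precise
sense in which more compressible hypotheses receive higher prior probability.
Whether this formal correlation justifies the informal razor is exactly what
is in dispute in this thread.)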

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Matt Mahoney
--- On Wed, 10/29/08, Mark Waser [EMAIL PROTECTED] wrote:

 Hutter *defined* the measure of correctness using
 simplicity as a component. 
 Of course, they're correlated when you do such a thing.
  That's not a proof, 
 that's an assumption.

Hutter defined the measure of correctness as the reward accumulated by the 
agent in AIXI.

-- Matt Mahoney, [EMAIL PROTECTED]


