[agi] Cosmist Manifesto available via Amazon.com

2010-07-21 Thread Ben Goertzel
Hi all,

My new futurist tract The Cosmist Manifesto is now available on
Amazon.com, courtesy of Humanity+ Press:

http://www.amazon.com/gp/product/0984609709/

Thanks to Natasha Vita-More for the beautiful cover, and David Orban
for helping make the book happen...

-- Ben


--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if
we are to give everything its due, two times two makes five is
sometimes a very charming thing too. -- Fyodor Dostoevsky





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread deepakjnath
Yes, we could do a 4x4 tic-tac-toe game like this on a PC. The training sets
can be generated simply by playing the agents against each other using
random moves and letting the agents know whether they passed or failed as a
feedback mechanism.
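Deepak's proposal — generate training data by playing random-move games and feeding the outcome back as a label — can be sketched in Python. This is an illustrative sketch, not code from the thread; the function names and the move-labeling scheme (tag each move with the final outcome for the side that made it) are assumptions about one plausible reading of the proposal:

```python
import random

SIZE = 4  # 4x4 board; win = four in a row

def winner(board):
    """Return 'X' or 'O' if a full row, column, or diagonal is held, else None."""
    lines = [[(r, c) for c in range(SIZE)] for r in range(SIZE)]       # rows
    lines += [[(r, c) for r in range(SIZE)] for c in range(SIZE)]      # columns
    lines.append([(i, i) for i in range(SIZE)])                        # main diagonal
    lines.append([(i, SIZE - 1 - i) for i in range(SIZE)])             # anti-diagonal
    for line in lines:
        cells = {board[r][c] for r, c in line}
        if len(cells) == 1 and cells != {None}:
            return cells.pop()
    return None

def random_game():
    """Play one game with uniformly random moves; return (move history, outcome)."""
    board = [[None] * SIZE for _ in range(SIZE)]
    squares = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    random.shuffle(squares)
    history, player = [], 'X'
    for r, c in squares:
        board[r][c] = player
        history.append((player, (r, c)))
        if winner(board):
            return history, player
        player = 'O' if player == 'X' else 'X'
    return history, 'draw'

def generate_training_set(n_games):
    """Label every move 'pass', 'fail', or 'draw' by the game's final outcome."""
    data = []
    for _ in range(n_games):
        history, outcome = random_game()
        for player, move in history:
            if outcome == 'draw':
                label = 'draw'
            else:
                label = 'pass' if outcome == player else 'fail'
            data.append((player, move, label))
    return data
```

Self-play with purely random moves is the cheapest way to cover legal states; for training a stronger agent one would presumably mix in moves from the current policy rather than staying uniformly random.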

Cheers,
Deepak

On Wed, Jul 21, 2010 at 9:02 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike, I think we all agree that we should not have to tell an AGI the steps
 to solving problems. It should learn and figure it out, like the way that
 people figure it out.

 The question is how to do that. We know that it is possible. For example, I
 could write a chess program that I could not win against. I could write the
 program in such a way that it learns to improve its game by playing against
 itself or other opponents. I could write it in such a way that it initially
 does not know the rules of chess, but instead learns the rules by being
 given examples of legal and illegal moves.
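The rule-learning idea Matt describes — a program that is never told the rules but infers them from labeled legal/illegal moves — can be illustrated with a deliberately tiny sketch. This is hypothetical code, not Matt's: it infers a single piece's move offsets from examples, a far weaker setting than full chess:

```python
# Toy version of "learning the rules from examples": infer a piece's legal
# move offsets from labeled (from_square, to_square, legal?) instances.
# The learner is never told the rule; it only sees labeled examples.

def offsets_from_examples(examples):
    """examples: iterable of ((r1, c1), (r2, c2), is_legal).
    Returns displacement vectors seen in legal examples and never in illegal ones."""
    legal, illegal = set(), set()
    for (r1, c1), (r2, c2), ok in examples:
        delta = (r2 - r1, c2 - c1)
        (legal if ok else illegal).add(delta)
    return legal - illegal

# Hypothetical knight-move examples (three legal, three illegal):
examples = [
    ((4, 4), (6, 5), True), ((4, 4), (2, 3), True), ((4, 4), (5, 6), True),
    ((4, 4), (4, 5), False), ((4, 4), (5, 5), False), ((4, 4), (6, 6), False),
]
learned = offsets_from_examples(examples)
# learned == {(2, 1), (-2, -1), (1, 2)} -- knight-like offsets, no rule given
```

The sketch only memorizes observed offsets; a real system would also have to generalize (e.g. notice the eight knight offsets are sign/transpose symmetries of one vector), which is where the learning problem actually gets hard.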

 What we have not yet been able to do is scale this type of learning and
 problem solving up to general, human level intelligence. I believe it is
 possible, but it will require lots of training data and lots of computing
 power. It is not something you could do on a PC, and it won't be cheap.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* Mike Tintner tint...@blueyonder.co.uk
 *To:* agi agi@v2.listbox.com
 *Sent:* Mon, July 19, 2010 9:07:53 PM

 *Subject:* Re: [agi] Of definitions and tests of AGI

 The issue isn't what a computer can do. The issue is how you structure the
 computer's or any agent's thinking about a problem. Programs/Turing machines
 are only one way of structuring thinking/problem-solving - by, among other
 things, giving the computer a method/process of solution. There is an
 alternative way of structuring a computer's thinking, which involves, among
 other things, not giving it a method/process of solution, but making it
 rather than a human programmer do the real problem-solving. More of that
 another time.

  *From:* Matt Mahoney matmaho...@yahoo.com
 *Sent:* Tuesday, July 20, 2010 1:38 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

  Creativity is the good feeling you get when you discover a clever
 solution to a hard problem without knowing the process you used to discover
 it.

 I think a computer could do that.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Mike Tintner tint...@blueyonder.co.uk
 *To:* agi agi@v2.listbox.com
 *Sent:* Mon, July 19, 2010 2:08:28 PM
 *Subject:* Re: [agi] Of definitions and tests of AGI

 Yes that's what people do, but it's not what programmed computers do.

 The useful formulation that emerges here is:

 narrow AI (and in fact all rational) problems have *a method of solution*
 (to be equated with a general method) - and are programmable (a program is
 a method of solution)

 AGI (and in fact all creative) problems do NOT have *a method of solution*
 (in the general sense) - rather, a one-off *way of solving the problem* has
 to be improvised each time.

 AGI/creative problems do not in fact have a method of solution, period.
 There is no (general) method of solving either the toy box or the
 build-a-rock-wall problem - one essential feature which makes them AGI.

 You can learn, as you indicate, from *parts* of any given AGI/creative
 solution, and apply the lessons to future problems - and indeed with
 practice, should improve at solving any given kind of AGI/creative problem.
 But you can never apply a *whole* solution/way to further problems.

 P.S. One should add that in terms of computers, we are talking here of
 *complete, step-by-step* methods of solution.


  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 5:09 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI



  And are you happy with:

 AGI is about devising *one-off* methods of problemsolving (that only apply
 to the individual problem, and cannot be re-used - at

  least not in their totality)



 Yes exactly, isn't that what people do?  Also, I think that being able to
 recognize where past solutions can be generalized and where past solutions
 can be varied and reused is a detail of how intelligence works that is
 likely to be universal.



  vs

 narrow AI is about applying pre-existing *general* methods of
 problemsolving  (applicable to whole classes of problems)?



  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 4:45 PM
  *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 Well, solving ANY problem is a little too strong. This is AGI, not AGH
 (artificial godhead), though AGH could be an unintended consequence ;). So
 I would rephrase "solving any problem" as "being able to come up with
 reasonable approaches and strategies to any problem" (just as humans are
 able to do).


Re: [agi] The Collective Brain

2010-07-21 Thread Jan Klauck
Mike Tintner wrote

 You partly illustrate my point - you talk of artificial brains as if
 they actually exist

That's the magic of thinking in scenarios. To you it may appear as if
we couldn't differentiate between reality and a thought experiment.

 By implicitly pretending that artificial brains exist - in the form of
 computer programs -  you (and most AGI-ers), deflect attention away from
 all
 the unsolved dimensions of what is required for an independent
 brain-cum-living system, natural or artificial.

Then bring this topic up. But please do it in an educated way, not with
the same half-understanding of AGI and math you demonstrate here.
To be honest, I expect you to talk about this with your usual
misunderstandings and then wonder why nobody (positively) reacts to
it--and then you'll again run around and whine that we don't get it.

(And what's an artificial brain-cum-living system?)

 Yes you may know these things some times as you say, but most of the
 time
 they're forgotten.

There are other topics that often require more focus at this time.
People are working on details you usually don't understand and don't
care to understand.




Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread David Jones
Training data is not available in many real problems. I don't think training
data should be used as the main learning mechanism. It likely won't solve
any of the problems.

On Jul 21, 2010 2:52 AM, deepakjnath deepakjn...@gmail.com wrote:

Yes we could do a 4x4 tic tac toe game like this in a PC. The training sets
can be generated simply by playing the agents against each other using
random moves and letting the agents know if it passed or failed as a
feedback mechanism.






Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread Mike Tintner
Matt,

How did you learn to play chess? Or write programs? How do you teach people
to write programs?

Compare and contrast - esp. the nature and number/extent of instructions -
with how you propose to force a computer to learn below.

Why is it that if you tell a child [real AGI] what to do, it will never learn?

Why can and does a human learner get to ask questions and a computer doesn't?

How come you [a real AGI] can get to choose your instructors and textbooks, 
and/or whether you choose to pay attention to them, and a computer can't?

Why do computers stop learning once they've done what they're told, while humans
and animals never stop, going on to learn ever new activities?

What and how many are the fundamental differences between how real AGI's and 
computers learn?




Mike, I think we all agree that we should not have to tell an AGI the steps to 
solving problems. It should learn and figure it out, like the way that people 
figure it out.


The question is how to do that. We know that it is possible. For example, I 
could write a chess program that I could not win against. I could write the 
program in such a way that it learns to improve its game by playing against 
itself or other opponents. I could write it in such a way that initially does 
not know the rules for chess, but instead learns the rules by being given 
examples of legal and illegal moves.


What we have not yet been able to do is scale this type of learning and problem 
solving up to general, human level intelligence. I believe it is possible, but 
it will require lots of training data and lots of computing power. It is not 
something you could do on a PC, and it won't be cheap.

 
Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread rob levy
A child AGI should be expected to need help learning how to solve many
problems, and even to be told what the steps are. But at some point it needs
to have developed general problem-solving skills. I feel like this is
all stating the obvious.

On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike, I think we all agree that we should not have to tell an AGI the steps
 to solving problems. It should learn and figure it out, like the way that
 people figure it out.







[agi] Re: Cosmist Manifesto available via Amazon.com

2010-07-21 Thread Ben Goertzel
Oh... and, a PDF version of the book is also available for free at

http://goertzel.org/CosmistManifesto_July2010.pdf

;-) ...

ben

On Tue, Jul 20, 2010 at 11:30 PM, Ben Goertzel b...@goertzel.org wrote:
 Hi all,

 My new futurist tract The Cosmist Manifesto is now available on
 Amazon.com, courtesy of Humanity+ Press:

 http://www.amazon.com/gp/product/0984609709/

 Thanks to Natasha Vita-More for the beautiful cover, and David Orban
 for helping make the book happen...

 -- Ben






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if
we are to give everything its due, two times two makes five is
sometimes a very charming thing too. -- Fyodor Dostoevsky




Re: [agi] Re: Cosmist Manifesto available via Amazon.com

2010-07-21 Thread David Orban
That's fantastic.

Next steps I am going to take:
- set up a Kindle edition
- set up an iBooks edition
- set up a Scribd edition

D

David Orban
skype, twitter, linkedin, sl, etc: davidorban



On Wed, Jul 21, 2010 at 8:01 AM, Ben Goertzel b...@goertzel.org wrote:
 Oh... and, a PDF version of the book is also available for free at

 http://goertzel.org/CosmistManifesto_July2010.pdf

 ;-) ...

 ben

 On Tue, Jul 20, 2010 at 11:30 PM, Ben Goertzel b...@goertzel.org wrote:
 Hi all,

 My new futurist tract The Cosmist Manifesto is now available on
 Amazon.com, courtesy of Humanity+ Press:

 http://www.amazon.com/gp/product/0984609709/

 Thanks to Natasha Vita-More for the beautiful cover, and David Orban
 for helping make the book happen...

 -- Ben







Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread Mike Tintner
Infants *start* with general learning skills - they have to extensively
discover for themselves how to do most things - control the head, reach out,
turn over, sit up, crawl, walk - and also have to work out perceptually what
the objects they see are and what they do... and what sounds are, and how they
form words, and how those words relate to objects - and how language works.

It is this capacity to keep discovering ways of doing things that is a major
motivation in their continually learning new activities - continually seeking
novelty, and getting bored with overly repetitive activities.

Obviously an AGI needs some help... but at the moment all projects get *full*
help/*complete* instructions - in other words, they are merely dressed-up
versions of narrow AI.

No one, AFAIK, is dealing with the issue of how you produce a true
goal-seeking agent that *can* discover things for itself - an agent that,
like humans and animals, can *find* its way to its goals generally, as well as
to learning new activities, on its own initiative, rather than by following
instructions. (The full-instruction method only works in artificial,
controlled environments and can't possibly work in the real, uncontrollable
world - where future conditions are highly unpredictable, even by the sagest
instructor.) [Ben BTW strikes me as merely gesturing at all this.]

There really can't be any serious argument about this - humans and animals 
clearly learn all their activities with v. limited and largely general rather 
than step-by-step instructions.

You may want to argue there is an underlying general program that effectively
specifies every step they must take (good luck) - but with respect to all their
specialist/particular activities - think having a conversation, sex, writing a
post, an essay, fantasizing, shopping, browsing the net, reading a newspaper,
etc. etc. - you got and get v. little step-by-step instruction about these and
all your other activities.

So AGIs require a fundamentally and massively different paradigm of
instruction from the programmed, comprehensive, step-by-step paradigm of
narrow AI.

[The rock wall/toybox tests BTW are AGI activities, where it is *impossible* to 
give full instructions, or produce a formula, whatever you may want to do].


From: rob levy 
Sent: Wednesday, July 21, 2010 3:56 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


A child AGI should be expected to need help learning how to solve many 
problems, and even be told what the steps are.  But at some point it needs to 
have developed general problem-solving skills.  But I feel like this is all 
stating the obvious.


On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney matmaho...@yahoo.com wrote:

  Mike, I think we all agree that we should not have to tell an AGI the steps 
to solving problems. It should learn and figure it out, like the way that 
people figure it out.









Re: [agi] The Collective Brain

2010-07-21 Thread Matt Mahoney
Mike Tintner wrote:
 The fantasy of a superAGI machine that can grow individually without a vast
society supporting it, is another one of the wild fantasies of AGI-ers and
Singularitarians that violate truly basic laws of nature. Individual brains
cannot flourish individually in the real world; only societies of brains (and
bodies) can.

I agree. It is the basis of my AGI design, to supplement a global brain with 
computers. http://mattmahoney.net/agi2.html

 -- Matt Mahoney, matmaho...@yahoo.com





From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Tue, July 20, 2010 1:50:45 PM
Subject: [agi] The Collective Brain


http://www.ted.com/talks/matt_ridley_when_ideas_have_sex.html?utm_source=newsletter_weekly_2010-07-20&utm_campaign=newsletter_weekly&utm_medium=email

 
Good lecture worth looking at about how trade - exchange of both goods and
ideas - has fostered civilisation. Near the end it introduces a v. important
idea - the collective brain. In other words, our apparently individual
intelligence is actually a collective intelligence. Nobody, he points out,
actually knows how to make a computer mouse, although that may seem
counterintuitive - it's an immensely complex piece of equipment, simple as it
may appear, that engages the collective, interdependent intelligence and
productive efforts of vast numbers of people.

When you start thinking like that, you realise that there is v. little we know
how to do, esp. of an intellectual nature, individually, without the implicit
and explicit collaboration of vast numbers of people and sectors of society.

The fantasy of a superAGI machine that can grow individually without a vast
society supporting it is another one of the wild fantasies of AGI-ers and
Singularitarians that violate truly basic laws of nature. Individual brains
cannot flourish individually in the real world; only societies of brains (and
bodies) can.

(And of course computers can do absolutely nothing or in any way survive
without their human masters - even if it may appear that way, if you don't
look properly at their whole operation.)




Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Matt Mahoney
Jim Bromer wrote:
 The question was asked whether, given infinite resources, could Solomonoff
Induction work. I made the assumption that it was computable and found that
it wouldn't work.

On what infinitely powerful computer did you do your experiment?

 My conclusion suggests that the use of Solomonoff Induction as an ideal for
compression or something like MDL is not only unsubstantiated but based on a
massive inability to comprehend the idea of a program that runs every possible
program.

It is sufficient to find the shortest program consistent with past results, not 
all programs. The difference is no more than the language-dependent constant. 
Legg proved this in the paper that Ben and I both pointed you to. Do you 
dispute 
his proof? I guess you don't, because you didn't respond the last 3 times this 
was pointed out to you.
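The shortest-consistent-program idea Matt invokes can be illustrated with a toy sketch. This is hypothetical code, not from the thread: real Solomonoff induction ranges over all programs for a universal Turing machine, while this restricts the "program" class to repeating patterns purely to make the search finite — an MDL-flavored caricature, not the actual construction:

```python
from itertools import product

def shortest_consistent_pattern(observed, alphabet='01', max_len=8):
    """Return the shortest repeating pattern (the toy 'program') whose
    infinite repetition begins with the observed string, or None."""
    for length in range(1, max_len + 1):
        for pat in product(alphabet, repeat=length):
            p = ''.join(pat)
            generated = (p * (len(observed) // length + 1))[:len(observed)]
            if generated == observed:
                return p  # lengths tried in order, so the first hit is shortest
    return None

def predict_next(observed):
    """MDL-style prediction: continue the shortest pattern consistent with the past."""
    p = shortest_consistent_pattern(observed)
    return p[len(observed) % len(p)] if p else None

next_bit = predict_next('010101')  # '01' is the shortest consistent pattern, so '0'
```

The point of the toy is only the shape of the argument: predicting with the single shortest consistent hypothesis, rather than mixing over all hypotheses, changes the prediction by at most a constant-bounded amount in the idealized setting.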

 I am comfortable with the conclusion that the claim that Solomonoff Induction 
is an ideal for compression or induction or anything else is pretty shallow 
and not based on careful consideration.

I am comfortable with the conclusion that the world is flat because I have a 
gut 
feeling about it and I ignore overwhelming evidence to the contrary.

 There is a chance that I am wrong

So why don't you drop it?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Tue, July 20, 2010 3:10:40 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


The question was asked whether, given infinite resources, could Solomonoff
Induction work. I made the assumption that it was computable and found that it
wouldn't work. It is not computable, even with infinite resources, for the kind
of thing that was claimed it would do (I believe that with a governance program
it might actually be programmable), but it could not be used to predict (or
compute the probability of) a subsequent string given some prefix string. Not
only is the method impractical, it is theoretically inane. My conclusion
suggests that the use of Solomonoff Induction as an ideal for compression or
something like MDL is not only unsubstantiated but based on a massive inability
to comprehend the idea of a program that runs every possible program.

 
I am comfortable with the conclusion that the claim that Solomonoff Induction
is an ideal for compression or induction or anything else is pretty shallow and
not based on careful consideration.

There is a chance that I am wrong, but I am confident that there is nothing in
the definition of Solomonoff Induction that could be used to prove it.

Jim Bromer




Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread rob levy
I completely agree with this characterization; I was just pointing out the
importance of already-existing generally intelligent entities in providing
scaffolding for the system's learning and meta-learning processes.

On Wed, Jul 21, 2010 at 12:25 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Infants *start* with general learning skills - they have to extensively
 discover for themselves how to do most things - control head, reach out,
 turn over, sit up, crawl, walk - and also have to work out perceptually what
 the objects they see are, and what they do... and what sounds are, and how
 they form words, and how those words relate to objects - and how language
 works





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
Matt,
I never said that I did not accept the application of the method of
probability; it is just that it has to be applied using logic.  Solomonoff
Induction does not meet this standard.  From this conclusion, and from other
sources of information, including the acknowledgement of incomputability and
the lack of acceptance in the general mathematical community, I feel
comfortable with rejecting the theory of Kolmogorov complexity as well.

What I said was: My conclusion suggests that the use of Solomonoff Induction
as an ideal for compression or something like MDL...
What you said was: It is sufficient to find the shortest program consistent
with past results, not all programs. The difference is no more than the
language-dependent constant...
This is an equivocation based on the line you were responding to.  You are
presenting a related comment as if it were a valid response to what I
actually said.  That is one reason why I am starting to ignore you.

Jim Bromer

On Wed, Jul 21, 2010 at 1:15 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  The question was asked whether, given infinite resources, Solomonoff
 Induction could work.  I made the assumption that it was computable and found that
 it wouldn't work.

 On what infinitely powerful computer did you do your experiment?

  My conclusion suggests that the use of Solomonoff Induction as an ideal
 for compression or something like MDL is not only unsubstantiated but based
 on a massive inability to comprehend the idea of a program that runs every
 possible program.

 It is sufficient to find the shortest program consistent with past results,
 not all programs. The difference is no more than the language-dependent
 constant. Legg proved this in the paper that Ben and I both pointed you to.
 Do you dispute his proof? I guess you don't, because you didn't respond the
 last 3 times this was pointed out to you.
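[Editor's note: Matt's shortest-consistent-program criterion can be illustrated with a toy sketch. This is an editorial illustration, not code from this thread: the candidate "programs", their description lengths, and the data are all invented for the example, and real Solomonoff induction is incomputable.]

```python
# A toy illustration of the "shortest consistent program" criterion: among
# hand-made candidate hypotheses with assigned description lengths, keep the
# ones consistent with the observed bits and predict with the shortest.
candidates = [
    # (description length in bits, program mapping index -> bit)
    (3, lambda i: 0),                    # "always 0"
    (3, lambda i: 1),                    # "always 1"
    (5, lambda i: i % 2),                # "alternate 0, 1"
    (8, lambda i: 1 if i < 3 else 0),    # a longer, more specific hypothesis
]

def predict_next(observed):
    """Predict the next bit via the shortest candidate consistent with `observed`."""
    consistent = [(length, prog) for length, prog in candidates
                  if all(prog(i) == bit for i, bit in enumerate(observed))]
    length, best = min(consistent)       # shortest description wins
    return best(len(observed))

print(predict_next([0, 1, 0, 1]))  # alternation is the shortest fit -> 0
print(predict_next([1, 1, 1]))     # "always 1" beats the longer hypothesis -> 1
```

The second call shows the Occam bias at work: both "always 1" and the longer "three 1s then 0s" hypothesis fit the data, but the shorter description decides the prediction.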

  I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 I am comfortable with the conclusion that the world is flat because I have
 a gut feeling about it and I ignore overwhelming evidence to the contrary.

  There is a chance that I am wrong

 So why don't you drop it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, July 20, 2010 3:10:40 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 The question was asked whether, given infinite resources, Solomonoff
 Induction could work.  I made the assumption that it was computable and found that
 it wouldn't work.  It is not computable, even with infinite resources, for
 the kind of thing that was claimed it would do. (I believe that with a
 governance program it might actually be programmable) but it could not be
 used to predict (or compute the probability of) a subsequent string
 given some prefix string.  Not only is the method impractical it is
 theoretically inane.  My conclusion suggests that the use of Solomonoff
 Induction as an ideal for compression or something like MDL is not only
 unsubstantiated but based on a massive inability to comprehend the idea of a
 program that runs every possible program.

 I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 There is a chance that I am wrong, but I am confident that there is nothing
 in the definition of Solomonoff Induction that could be used to prove it.
 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I meant that what I said was: My conclusion suggests that the use of
Solomonoff Induction as an ideal for compression or something like MDL is not
only unsubstantiated but based on a massive inability to comprehend the idea
of a program that runs every possible program.
What Matt said was: It is sufficient to find the shortest program consistent
with past results, not all programs. The difference is no more than the
language-dependent constant...
This is an equivocation based on the line you were responding to.  You are
presenting a related comment as if it were a valid response to what I
actually said.  That is one reason why I am starting to ignore you.
Jim Bromer



On Wed, Jul 21, 2010 at 2:36 PM, Jim Bromer jimbro...@gmail.com wrote:

 Matt,
 I never said that I did not accept the application of the method of
 probability; it is just that it has to be applied using logic.  Solomonoff
 Induction does not meet this standard.  From this conclusion, and from other
 sources of information, including the acknowledgement of incomputability and
 the lack of acceptance in the general mathematical community, I feel
 comfortable with rejecting the theory of Kolmogorov complexity as well.

 What I said was: My conclusion suggests that the use of Solomonoff
 Induction as an ideal for compression or something like MDL...
 What you said was: It is sufficient to find the shortest program consistent
 with past results, not all programs. The difference is no more than the
 language-dependent constant...
 This is an equivocation based on the line you were responding to.  You are
 presenting a related comment as if it were a valid response to what I
 actually said.  That is one reason why I am starting to ignore you.

 Jim Bromer

 On Wed, Jul 21, 2010 at 1:15 PM, Matt Mahoney matmaho...@yahoo.comwrote:

   Jim Bromer wrote:
  The question was asked whether, given infinite resources, Solomonoff
 Induction could work.  I made the assumption that it was computable and found that
 it wouldn't work.

 On what infinitely powerful computer did you do your experiment?

  My conclusion suggests that the use of Solomonoff Induction as an ideal
 for compression or something like MDL is not only unsubstantiated but based
 on a massive inability to comprehend the idea of a program that runs every
 possible program.

 It is sufficient to find the shortest program consistent with past
 results, not all programs. The difference is no more than the
 language-dependent constant. Legg proved this in the paper that Ben and I
 both pointed you to. Do you dispute his proof? I guess you don't, because
 you didn't respond the last 3 times this was pointed out to you.

  I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 I am comfortable with the conclusion that the world is flat because I have
 a gut feeling about it and I ignore overwhelming evidence to the contrary.

  There is a chance that I am wrong

 So why don't you drop it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, July 20, 2010 3:10:40 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

  The question was asked whether, given infinite resources, Solomonoff
 Induction could work.  I made the assumption that it was computable and found that
 it wouldn't work.  It is not computable, even with infinite resources, for
 the kind of thing that was claimed it would do. (I believe that with a
 governance program it might actually be programmable) but it could not be
 used to predict (or compute the probability of) a subsequent string
 given some prefix string.  Not only is the method impractical it is
 theoretically inane.  My conclusion suggests that the use of Solomonoff
 Induction as an ideal for compression or something like MDL is not only
 unsubstantiated but based on a massive inability to comprehend the idea of a
 program that runs every possible program.

 I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 There is a chance that I am wrong, but I am confident that there is
 nothing in the definition of Solomonoff Induction that could be used to prove
 it.
 Jim Bromer







Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
you iterate through all possible programs, combining different programs as
you go.  Then you have an infinite number of possible programs which have a
trans-infinite number of combinations, because each tier of combinations can
then be recombined to produce a second, third, fourth,... tier of
recombinations.

Anyone who claims that this method is the ideal for a method of applied
probability is unwise.

Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I should have said: It would be unwise to claim that this method could stand
as an ideal for some valid and feasible application of probability.
Jim Bromer

On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer






[agi] My Boolean Satisfiability Solver

2010-07-21 Thread Jim Bromer
I haven't made any noteworthy progress on my attempt to create a polynomial
time Boolean Satisfiability Solver.
I am going to try to explore some more modest means of compressing formulas
in a way so that the formula will reveal more about individual combinations
(of the Boolean states of the variables that are True or False), through the
use of strands which are groups of combinations.  So I am not trying to
find a polynomial time solution at this point, I am just going through the
stuff that I have been thinking of, either explicitly or implicitly during
the past few years to see if I can get some means of representing more about
a formula in an efficient manner.

Jim Bromer
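[Editor's note: for readers following along, here is the brute-force, exponential-time baseline that any polynomial-time SAT solver would have to avoid. This is an editorial sketch of the problem itself, not Jim's "strands" method, which is not specified in this thread. Each assignment the loop visits is one "combination" of True/False states of the variables.]

```python
from itertools import product

# Brute-force CNF satisfiability check: try every combination of variable
# states and see whether some combination makes every clause true.
def satisfiable(num_vars, clauses):
    """clauses: list of lists of ints; literal k means variable k, -k its negation."""
    for assignment in product([False, True], repeat=num_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # A clause is true if any literal is true; the formula if every clause is.
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2)  -- unsatisfiable
print(satisfiable(2, [[1, 2], [-1, 2], [-2]]))   # False
# (x1 or x2) and (not x1)                      -- satisfiable with x2 = True
print(satisfiable(2, [[1, 2], [-1]]))            # True
```

The loop visits 2^n assignments, which is exactly the blow-up a compressed representation of the combinations would need to sidestep.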





Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Ian Parker
But surely a number is a group of binary combinations if we represent the
number in binary form, as we always can. The real theorems are those which
deal with *numbers*. What you are in essence discussing is no more or less
than the *Theory of Numbers.*
  - Ian Parker
On 21 July 2010 20:17, Jim Bromer jimbro...@gmail.com wrote:

 I haven't made any noteworthy progress on my attempt to create a polynomial
 time Boolean Satisfiability Solver.
 I am going to try to explore some more modest means of compressing formulas
 in a way so that the formula will reveal more about individual combinations
 (of the Boolean states of the variables that are True or False), through the
 use of strands which are groups of combinations.  So I am not trying to
 find a polynomial time solution at this point, I am just going through the
 stuff that I have been thinking of, either explicitly or implicitly during
 the past few years to see if I can get some means of representing more about
 a formula in an efficient manner.

 Jim Bromer






Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Jim Bromer
Because a logical system can be applied to a problem, that does not mean
that the logical system is the same as the problem.  Most notably, the
theory of numbers contains definitions that do not belong to logic per se.
Jim Bromer

On Wed, Jul 21, 2010 at 3:45 PM, Ian Parker ianpark...@gmail.com wrote:

 But surely a number is a group of binary combinations if we represent the
 number in binary form, as we always can. The real theorems are those which
 deal with *numbers*. What you are in essence discussing is no more or less
 than the *Theory of Numbers.*
  - Ian Parker
   On 21 July 2010 20:17, Jim Bromer jimbro...@gmail.com wrote:

   I haven't made any noteworthy progress on my attempt to create a
 polynomial time Boolean Satisfiability Solver.
 I am going to try to explore some more modest means of compressing
 formulas in a way so that the formula will reveal more about individual
 combinations (of the Boolean states of the variables that are True or
 False), through the use of strands which are groups of combinations.  So I
 am not trying to find a polynomial time solution at this point, I am just
 going through the stuff that I have been thinking of, either explicitly or
 implicitly during the past few years to see if I can get some means of
 representing more about a formula in an efficient manner.

 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Abram Demski
Jim,

This argument that you've got to consider recombinations *in addition to*
just the programs displays the lack of mathematical understanding that I am
referring to... you appear to be arguing against what you *think* Solomonoff
induction is, without checking how it is actually defined...

--Abram

On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic
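[Editor's note on the actual definition Abram refers to: the prior sums over all finite binary programs, and those form a single countable sequence when enumerated in length order. Any "combination" of programs is itself a finite program, so it already appears somewhere in that same list - no extra tier of recombinations is introduced. A minimal editorial sketch of the enumeration, with plain bit strings standing in for programs:]

```python
from itertools import count, product

# Enumerate all finite binary strings in length order: one countable list.
# Combining two entries yields another finite string, which already occurs
# later in this very enumeration.
def all_binary_programs():
    for n in count(1):                        # lengths 1, 2, 3, ...
        for bits in product("01", repeat=n):  # all 2^n strings of length n
            yield "".join(bits)

gen = all_binary_programs()
print([next(gen) for _ in range(6)])   # ['0', '1', '00', '01', '10', '11']
```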





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
You claim that I have not checked how Solomonoff Induction is actually
defined, but then don't bother mentioning how it is defined as if it would
be too much of an ordeal to even begin to try.  It is this kind of evasive
response, along with the fact that these functions are incomputable, that
make your replies so absurd.

On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* Solomonoff
 induction is, without checking how it is actually defined...

 --Abram

   On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

   The fundamental method of Solomonoff Induction is trans-infinite.
 Suppose you iterate through all possible programs, combining different
 programs as you go.  Then you have an infinite number of possible programs
 which have a trans-infinite number of combinations, because each tier of
 combinations can then be recombined to produce a second, third, fourth,...
 tier of recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Jim Bromer
Well, Boolean logic may be a part of number theory, but even then it is still
not the same as number theory.

On Wed, Jul 21, 2010 at 4:01 PM, Jim Bromer jimbro...@gmail.com wrote:

 Because a logical system can be applied to a problem, that does not mean
 that the logical system is the same as the problem.  Most notably, the
 theory of numbers contains definitions that do not belong to logic per se.
 Jim Bromer

 On Wed, Jul 21, 2010 at 3:45 PM, Ian Parker ianpark...@gmail.com wrote:

 But surely a number is a group of binary combinations if we represent the
 number in binary form, as we always can. The real theorems are those which
 deal with *numbers*. What you are in essence discussing is no more or
 less than the *Theory of Numbers.*
  - Ian Parker
   On 21 July 2010 20:17, Jim Bromer jimbro...@gmail.com wrote:

   I haven't made any noteworthy progress on my attempt to create a
 polynomial time Boolean Satisfiability Solver.
 I am going to try to explore some more modest means of compressing
 formulas in a way so that the formula will reveal more about individual
 combinations (of the Boolean states of the variables that are True or
 False), through the use of strands which are groups of combinations.  So I
 am not trying to find a polynomial time solution at this point, I am just
 going through the stuff that I have been thinking of, either explicitly or
 implicitly during the past few years to see if I can get some means of
 representing more about a formula in an efficient manner.

 Jim Bromer








Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Ian Parker
The Theory of Numbers is, as its name implies, about numbers. Advanced number
theory is also about things like elliptic functions, modular functions,
polynomials, symmetry groups, and the Riemann hypothesis.

What I am saying is that I can express *ANY* numerical problem in binary form. I
can use numbers, expressible in any base, to define the above. Logic is in
fact expressible if we take numbers of modulus 1, but that is another story.
You do not have to express all of logic in terms of the Theory of Numbers. I
am claiming that the Theory of Numbers, and all its advanced ramifications,
are expressible in terms of logic.


  - Ian Parker

On 21 July 2010 21:01, Jim Bromer jimbro...@gmail.com wrote:

 Because a logical system can be applied to a problem, that does not mean
 that the logical system is the same as the problem.  Most notably, the
 theory of numbers contains definitions that do not belong to logic per se.
 Jim Bromer

 On Wed, Jul 21, 2010 at 3:45 PM, Ian Parker ianpark...@gmail.com wrote:

 But surely a number is a group of binary combinations if we represent the
 number in binary form, as we always can. The real theorems are those which
 deal with *numbers*. What you are in essence discussing is no more or
 less than the *Theory of Numbers.*
  - Ian Parker
   On 21 July 2010 20:17, Jim Bromer jimbro...@gmail.com wrote:

   I haven't made any noteworthy progress on my attempt to create a
 polynomial time Boolean Satisfiability Solver.
 I am going to try to explore some more modest means of compressing
 formulas in a way so that the formula will reveal more about individual
 combinations (of the Boolean states of the variables that are True or
 False), through the use of strands which are groups of combinations.  So I
 am not trying to find a polynomial time solution at this point, I am just
 going through the stuff that I have been thinking of, either explicitly or
 implicitly during the past few years to see if I can get some means of
 representing more about a formula in an efficient manner.

 Jim Bromer






Re: [agi] My Boolean Satisfiability Solver

2010-07-21 Thread Ian Parker
If I can express arithmetic in logical terms, it must be.


  - Ian Parker

On 21 July 2010 21:38, Jim Bromer jimbro...@gmail.com wrote:

 Well, Boolean Logic may be a part of number theory but even then it is
 still not the same as number theory.

 On Wed, Jul 21, 2010 at 4:01 PM, Jim Bromer jimbro...@gmail.com wrote:

 Because a logical system can be applied to a problem, that does not mean
 that the logical system is the same as the problem.  Most notably, the
 theory of numbers contains definitions that do not belong to logic per se.
 Jim Bromer

 On Wed, Jul 21, 2010 at 3:45 PM, Ian Parker ianpark...@gmail.com wrote:

 But surely a number is a group of binary combinations if we represent the
 number in binary form, as we always can. The real theorems are those which
 deal with *numbers*. What you are in essence discussing is no more or
 less than the *Theory of Numbers.*
  - Ian Parker
   On 21 July 2010 20:17, Jim Bromer jimbro...@gmail.com wrote:

   I haven't made any noteworthy progress on my attempt to create a
 polynomial time Boolean Satisfiability Solver.
 I am going to try to explore some more modest means of compressing
 formulas in a way so that the formula will reveal more about individual
 combinations (of the Boolean states of the variables that are True or
 False), through the use of strands which are groups of combinations.  So 
 I
 am not trying to find a polynomial time solution at this point, I am just
 going through the stuff that I have been thinking of, either explicitly or
 implicitly during the past few years to see if I can get some means of
 representing more about a formula in an efficient manner.

 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* Solomonoff
 induction is, without checking how it is actually defined...

 --Abram


I mean this in a friendly way.  (When I started to write in a fiendly way,
it was only a typo and nothing more.)

Is it possible that it is Abram who doesn't understand how Solomonoff
Induction is actually defined?  Is it possible that it is Abram who has
missed an implication of the definition because it didn't fit in with his
ideal of a convenient and reasonable application of Bayesian mathematics?

I am just saying that you should ask yourself this: is it possible that
Abram doesn't see the obvious flaws because it obviously wouldn't make any
sense vis-à-vis a reasonable and sound application of probability theory?
For example, would you be willing to write to a real expert in probability
theory to ask him for his opinions on Solomonoff Induction?  Because I would
be.

Jim Bromer


On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* Solomonoff
 induction is, without checking how it is actually defined...

 --Abram

   On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

   The fundamental method of Solomonoff Induction is trans-infinite.
 Suppose you iterate through all possible programs, combining different
 programs as you go.  Then you have an infinite number of possible programs
 which have a trans-infinite number of combinations, because each tier of
 combinations can then be recombined to produce a second, third, fourth,...
 tier of recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I can't say where you are going wrong because I really have no idea.
However, my guess is that you are ignoring certain contingencies that would
be necessary to make your claims valid.  I tried to use a reference to the
theory of limits to explain this but it seemed to fall on deaf ears.

If I were to write everything I knew about Bernoulli without looking it up,
it would be a page of only a few facts.  I have read some things about him, I just
don't recall much of it.  So when I dare say that Abram couldn't write much
about Cauchy without looking it up, it is not a pretentious put down, but
more like a last-ditch effort to teach him some basic humility.

Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
If someone had a profound knowledge of Solomonoff Induction and the *science
of probability* he could at the very least talk to me in a way that I knew
he knew what I was talking about and I knew he knew what he was talking
about.  He might be slightly obnoxious or he might be casual or (more
likely) he would try to be patient.  But it is unlikely that he would use a
hit-and-run attack and denounce my conjectures without taking the
opportunity to talk to me about what I was saying.  That is one of the ways
that I know that you don't know as much as you think you know.  You rarely
get angry about being totally right.

A true expert would be able to talk to me and also take advantage of my
thinking about the subject to weave some new information into the
conversation so that I could leave it with a greater insight about the
problem than I did before.  That is not a skill that only good teachers
have; it is a skill that almost any expert can develop if he is willing to
use it.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Matt Mahoney
Jim Bromer wrote:
 The fundamental method of Solomonoff Induction is trans-infinite.

The fundamental method is that the probability of a string x is proportional to 
the sum, over all programs M that output x, of the weight 2^-|M|. That probability 
is dominated by the shortest program, but it is equally uncomputable either way. 
How does this approximation invalidate Solomonoff induction?
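(Editorial illustration of the weighting Matt describes. This is a toy enumeration, not a universal prefix machine: the interpreter `run` and the helper `toy_prior` are invented for the sketch, and because the toy program set is not prefix-free the weights do not sum to at most 1 as they would for a real Solomonoff prior. It only shows how each program M that outputs x contributes 2^-|M|, so shorter programs dominate the sum.)

```python
from collections import defaultdict
from itertools import product

def run(program):
    """Toy interpreter: the first bit selects identity ("0") or reversal ("1")
    of the remaining bits.  NOT a universal machine; illustration only."""
    if not program:
        return None
    head, tail = program[0], program[1:]
    return tail if head == "0" else tail[::-1]

def toy_prior(max_len):
    """m(x) ~ sum of 2^-|M| over all programs M with run(M) == x,
    truncated at programs of length max_len."""
    m = defaultdict(float)
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            program = "".join(bits)
            out = run(program)
            if out is not None:
                m[out] += 2.0 ** (-len(program))
    return dict(m)

prior = toy_prior(6)
# "" is produced by both 1-bit programs: 2^-1 + 2^-1
print(prior[""])                  # 1.0
# Shorter outputs accumulate more weight than longer ones:
print(prior["0"] > prior["01"])   # True
```

Truncating the enumeration at a fixed program length is exactly the kind of approximation at issue: it is computable, while the full sum over all programs is not.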

Also, please point me to this mathematical community that you claim rejects 
Solomonoff induction. Can you find even one paper that refutes it?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 21, 2010 3:08:13 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


I should have said, "It would be unwise to claim that this method could stand as 
an ideal for some valid and feasible application of probability."
Jim Bromer


On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

The fundamental method of Solomonoff Induction is trans-infinite.  Suppose you 
iterate through all possible programs, combining different programs as you go.  
Then you have an infinite number of possible programs which have a 
trans-infinite number of combinations, because each tier of combinations can 
then be recombined to produce a second, third, fourth,... tier of 
recombinations.
 
Anyone who claims that this method is the ideal for a method of applied 
probability is unwise.
 Jim Bromer


