[agi] Mindplex for Is-a Functionality

2010-07-22 Thread A. T. Murray
Thurs.22.JUL.2010 -- Mindplex for Is-a Functionality

As we contemplate AI coding for responses 
to such questions as 

Who is Andru? What is Andru?
Who are you? What are you?

we realize that simple memory-activation of 
question-words like who or what will not 
be sufficient to invoke the special handling 
of mental issues raised by such question-words. 
Nay, we realize that each question-word will 
need to call not so much a mind-module of 
normal syntactic control, such as NounPhrase 
or VerbPhrase, but rather something like a 
WhoPlex or a WhatPlex or a WherePlex 
or even a WhyPlex, a kind of meta-module 
which is not a building block of the cognitive 
architecture but rather a governor of the 
interaction among the regular mind-modules. 
A WhatPlex, for instance, in answering a 
What-is question, must predispose the AI 
Mind to provide a certain kind of information 
(e.g., an ontological class) couched amid certain 
concomitant mind-modules (e.g., EnArticle 
supplying the article a) so as to output an 
answer such as, I am a robot. 
Since the quasi-mind-modules to be invoked by 
question-words comprise a small cluster of 
similar mental complexes necessary for the 
special handling of question-word input, 
we might as well designate the members of the 
set of complexes as code structures with names 
ending in -Plex, such as WhatPlex. Witness that 
the Google enterprise has named its campus, or 
cluster of buildings, the Googleplex, and Ben 
Goertzel has used a similar term to refer to a 
mindplex of mind components. We will try to use 
WhoPlex and WhatPlex to remind ourselves as AI 
appcoders that we are letting rules of special 
handling accumulate by an accretion akin to the 
emergence of a mental complex. 
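The special handling described above can be sketched in a minimal way. The code below is an assumption on this editor's part, not actual Mentifex code: the what_plex handler, the respond() dispatcher, and the toy ontology table are hypothetical names invented for the sketch; only the idea (a question-word selecting a meta-module that governs the shape of the answer, down to the EnArticle-style choice of article) comes from the message.

```python
# Hypothetical sketch of question-word dispatch to *Plex meta-modules.
# Module names WhatPlex/EnArticle come from the message; the dispatch
# mechanism and ontology table are illustrative assumptions.

def what_plex(subject, ontology):
    """Answer a What-is question with an ontological class,
    choosing the article the way an EnArticle module might."""
    klass = ontology.get(subject, "thing")
    article = "an" if klass[0] in "aeiou" else "a"
    if subject == "you":
        return f"I am {article} {klass}."
    return f"{subject.capitalize()} is {article} {klass}."

PLEXES = {"what": what_plex}   # WhoPlex, WherePlex, WhyPlex would join here

def respond(question, ontology):
    words = question.lower().strip("?").split()
    qword = words[0]           # e.g. "what" in "What are you?"
    subject = words[-1]        # crude subject pickup for the sketch
    handler = PLEXES.get(qword)
    return handler(subject, ontology) if handler else "I do not know."

ontology = {"you": "robot", "andru": "robot"}
print(respond("What are you?", ontology))   # I am a robot.
```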

Arthur
-- 
See the HTML version below for its links.
http://robots.net/person/AI4U/diary/23.html


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread David Jones
An Update

I think the following gets to the heart of general AI and what it takes to
achieve it. It also provides us with evidence as to why general AI is so
difficult. With this new knowledge in mind, I think I will be much more
capable now of solving the problems and making it work.

I've come to the conclusion lately that the best hypothesis is better
because it is more predictive and then simpler than other hypotheses (in
that order: more predictive first, then simpler). But I am amazed at how
difficult it is to quantitatively define more predictive and simpler for
specific problems. This is why I have sometimes doubted the truth of the
statement.

In addition, the observations that the AI gets are not representative of all
possible observations! This means that if your measure of predictiveness
depends on the number of certain observations, it could make mistakes! So,
the specific observations you are aware of may be unrepresentative of the
predictiveness of a hypothesis relative to the truth. If you try to
calculate which hypothesis is more predictive and you don't have the
critical observations that would give you the right answer, you may get the
wrong answer! This all depends, of course, on your method of calculation,
which is quite elusive to define.

Visual input from screenshots, for example, can be somewhat malicious.
Things can move, appear, disappear or occlude each other suddenly. So,
without sufficient knowledge it is hard to decide whether matches you find
between such large changes are because it is the same object or a different
object. This may indicate that bias and preprogrammed experience should be
introduced to the AI before training. Either that or the training inputs
should be carefully chosen to avoid malicious input and to make them nice
for learning.

This is the correspondence problem that is typical of computer vision and
has never been properly solved. Such malicious input also makes it difficult
to learn automatically because the AI doesn't have sufficient experience to
know which changes or transformations are acceptable and which are not. It
is immediately bombarded with malicious inputs.

I've also realized that if a hypothesis is more explanatory, it may be
better. But quantitatively defining explanatory is also elusive, and it
truly depends on the specific problems you are applying it to, because it
is a heuristic. It is not a true measure of correctness; it is not loyal to
the truth. More explanatory is really a heuristic that helps us find
hypotheses that are more predictive. The true measure of whether a
hypothesis is better is simply whether it is the most accurate and
predictive hypothesis. That is the ultimate and true measure of correctness.

Also, since we can't measure every possible prediction or every last
prediction (and we certainly can't predict everything), our measure of
predictiveness can't possibly be right all the time! We have no choice but
to use a heuristic of some kind.

So, it's clear to me that the right hypothesis is more predictive and then
simpler. But it is also clear that there will never be a single measure of
this that can be applied to all problems. I hope to eventually find a nice
model for how to apply it to different problems, though. This may be the
reason that so many people have tried and failed to develop general AI. Yes,
there is a solution, but there is no silver bullet that can be applied to
all problems. Some methods are better than others. But I think another major
reason for the failures is that people think they can predict things without
sufficient information. By approaching the problem this way, we compound the
need for heuristics and the errors they produce, because we simply don't
have sufficient information to make a good decision with limited evidence.
If approached correctly, the right solution would solve many more problems
with the same effort than a poor solution would. It would also eliminate
some of the difficulties we currently face, if sufficient data is available
to learn from.
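The "more predictive, then simpler" ordering can be sketched as a lexicographic sort. The scoring scheme below (correct predictions counted on the available observations, complexity as an arbitrary description-length number) is an assumption for illustration, not Dave's actual method.

```python
# Illustrative sketch of ranking hypotheses: predictiveness first,
# then simplicity as the tie-breaker. Scores are toy assumptions.

def rank(hypotheses, observations):
    """Sort hypotheses by correct predictions (descending),
    breaking ties by lower complexity."""
    def score(h):
        correct = sum(1 for x, y in observations if h["predict"](x) == y)
        return (-correct, h["complexity"])   # lexicographic order
    return sorted(hypotheses, key=score)

obs = [(1, 2), (2, 4), (3, 6)]
hyps = [
    {"name": "double",  "predict": lambda x: 2 * x, "complexity": 5},
    {"name": "always4", "predict": lambda x: 4,     "complexity": 1},
]
# "always4" is simpler, but "double" predicts every observation,
# so it wins under the predictive-then-simpler ordering.
print([h["name"] for h in rank(hyps, obs)])  # ['double', 'always4']
```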

In addition to all this theory about better hypotheses, you have to add on
the need to solve problems in reasonable time. This also compounds the
difficulty of the problem and the complexity of solutions.

I am always fascinated by the extraordinary difficulty and complexity of
this problem. The more I learn about it, the more I appreciate it.

Dave





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread Abram Demski
Jim,

Why more predictive *and then* simpler?

--Abram




-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread Mike Tintner
Predicting the old and predictable  [incl in shape and form] is narrow AI. 
Squaresville.
Adapting to the new and unpredictable [incl in shape and form] is AGI. Rock on.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
I have to retract my claim that the programs of Solomonoff Induction would
be trans-infinite.  Each of the infinitely many individual programs can be
enumerated by its individual instructions, so a combination of unique
individual programs would not correspond to a new unique program but to the
already-enumerated program whose instruction string matches the combined
instructions.  So I got that one wrong.
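The countability point in this retraction can be illustrated with a pairing function: any combination of two program indices folds back into a single index, so combinations of enumerable programs remain enumerable. The use of the Cantor pairing function here is this editor's illustration, not part of the original post.

```python
# Cantor pairing function: maps each pair (i, j) of naturals to a
# unique natural number, so pairs of program indices stay countable.

def pair(i, j):
    """Fold two indices into one, injectively."""
    return (i + j) * (i + j + 1) // 2 + j

# Check injectivity on a finite range: no two pairs collide.
seen = set()
for i in range(50):
    for j in range(50):
        n = pair(i, j)
        assert n not in seen
        seen.add(n)

print(pair(3, 4))   # 32
```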
Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread David Jones
Because simpler is not better if it is less predictive.


On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Why more predictive *and then* simpler?

 --Abram


Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread Matt Mahoney
David Jones wrote:
 But, I am amazed at how difficult it is to quantitatively define more 
predictive and simpler for specific problems. 

It isn't hard. To measure predictiveness, you assign a probability to each
possible outcome. If the actual outcome has probability p, you score a
penalty of log(1/p) bits. To measure simplicity, use the compressed size of
the code for your prediction algorithm. Then add the two scores together.
That's how it is done in the Calgary challenge
(http://www.mailcom.com/challenge/) and in my own text compression
benchmark.
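A toy illustration of this scoring rule. The probabilities and the predictor's code size below are made-up numbers; only the log(1/p)-bits penalty and the add-the-two-scores step come from the message.

```python
import math

# Sketch of the scoring rule: log2(1/p) bits of penalty per actual
# outcome, plus the (here hypothetical) compressed predictor size.

def log_loss_bits(probs, outcomes):
    """Sum log2(1/p) over the probabilities assigned to actual outcomes."""
    return sum(math.log2(1.0 / probs[o]) for o in outcomes)

# Predictor assigns p=0.9 to 'a' and p=0.1 to 'b'; actual sequence: a a b.
penalty = log_loss_bits({"a": 0.9, "b": 0.1}, "aab")
code_size_bits = 800        # made-up compressed size of the predictor
total = penalty + code_size_bits

print(round(penalty, 3))    # 3.626
```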

 -- Matt Mahoney, matmaho...@yahoo.com






[agi] What is so special with the number seven

2010-07-22 Thread deepakjnath
Is there any predisposition toward the number 7 in our brains?

Why do we have a scale with 7 notes? Why are there 7 colors in a rainbow?
Can this relate to how we perceive things?

Seven days of a week.

cheers,
Deepak





[agi] How do we hear music

2010-07-22 Thread deepakjnath
Why do we listen to a song sung in a different scale and yet identify it as
the same song? Does it have something to do with the fundamental way in
which we store memory?

cheers,
Deepak





Re: [agi] How do we hear music

2010-07-22 Thread Matt Mahoney
deepakjnath wrote:

 Why do we listen to a song sung in a different scale and yet identify it as 
the same song? Does it have something to do with the fundamental way in which 
we store memory?

For the same reason that gray looks green on a red background. You have more 
neurons that respond to differences in tones than to absolute frequencies.
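One way to illustrate the point about relative rather than absolute pitch: in MIDI-style note numbers, transposing a melody changes every absolute value but leaves the successive differences untouched. This is only an illustrative sketch, not a model of the neurons involved.

```python
# Transposition invariance: a melody's intervals (tone differences)
# survive a key change; its absolute frequencies do not.

def intervals(notes):
    """Successive semitone differences of a melody."""
    return [b - a for a, b in zip(notes, notes[1:])]

melody_in_c = [60, 62, 64, 65, 67]   # C D E F G (MIDI note numbers)
melody_in_d = [62, 64, 66, 67, 69]   # same tune, up a whole step

# Different absolute notes, identical interval pattern [2, 2, 1, 2].
print(intervals(melody_in_c) == intervals(melody_in_d))  # True
```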

 -- Matt Mahoney, matmaho...@yahoo.com









Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:
The fundamental method is that the probability of a string x is proportional
to the sum of all programs M that output x weighted by 2^-|M|. That
probability is dominated by the shortest program, but it is equally
uncomputable either way.
Also, please point me to this mathematical community that you claim rejects
Solomonoff induction. Can you find even one paper that refutes it?

You give a precise statement of the probability in general terms, but then
say that it is uncomputable.  Then you ask if there is a paper that refutes
it.  Well, why would any serious mathematician bother to refute it, since
you yourself acknowledge that it is uncomputable and therefore unverifiable,
and therefore not a mathematical theorem that can be proven true or false?
It isn't as if you claimed that the mathematical statement is verifiable.
It is as if you are making a statement and then ducking any responsibility
for it by denying that it is even an evaluation.  You honestly don't see
the irregularity?

My point is that the general mathematical community doesn't accept
Solomonoff Induction, not that I have a paper that *refutes it,* whatever
that would mean.

Please give me a little more explanation of why you say the fundamental
method is that the probability of a string x is proportional to the sum
over all programs M that output x, weighted by 2^-|M|.  Why is the M in
a bracket?




  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise to claim that this method could
 stand as an ideal for some valid and feasible application of probability.
 Jim Bromer

 On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solmonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] What is so special with the number seven

2010-07-22 Thread Panu Horsmalahti
2010/7/22 deepakjnath deepakjn...@gmail.com

 Is there any predisposition to the Number 7 and our brains?

 Why do we have a scale with 7 notes? Why are there 7 colors in a rainbow?
 Can this relate to how we perceive things?

 Seven days of a week.

 cheers,
 Deepak


You can pick any number and find things related to it. Besides, your
anecdotes may be explained by physics or music theory, for example. I
don't know anything about music theory, but it seems that the chromatic
scale has 12 notes. Also, it seems that Newton named 7 colors in the rainbow
to make it match the idea of the seven-note scale (
http://en.wikipedia.org/wiki/Rainbow#Distinct_colours )

- Panu Horsmalahti





Re: [agi] How do we hear music

2010-07-22 Thread L Detetive
Schemas are what maths can't handle - and are fundamental to AGI.

Maths are what Mike can't handle - and are fundamental to AGI.

-- 
L





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread David Jones
It's certainly not as simple as you claim. First, assigning a probability is
not always possible, nor is it easy. The factors in calculating that
probability are unknown and are not the same for every instance. Since we do
not know what combination of observations we will see, we cannot have a
predefined set of probabilities, nor is it any easier to create a
probability function that generates them for us. That is exactly what I
meant by "quantitatively define the predictiveness"... it would be
proportional to the probability.

Second, if you can define a program in a way that is always simpler when it
is smaller, then you can do the same thing without a program. I don't think
it makes any sense to do it this way.

It is not that simple. If it was, we could solve a large portion of agi
easily.

On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney matmaho...@yahoo.com wrote:

David Jones wrote:

 But, I am amazed at how difficult it is to quantitatively define "more
predictive" and "simpler" for specific problems.

It isn't hard. To measure predictiveness, you assign a probability to each
possible outcome. If the actual outcome has probability p, you score a
penalty of log(1/p) bits. To measure simplicity, use the compressed size of
the code for your prediction algorithm. Then add the two scores together.
That's how it is done in the Calgary challenge
http://www.mailcom.com/challenge/ and in my own text compression benchmark.
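The prediction-penalty half of that scoring rule can be sketched in a few lines. The probabilities and outcomes below are made up for illustration; a full score in the scheme described above would also add the compressed size of the predictor's code.

```python
from math import log2

def penalty_bits(assigned, outcomes):
    """Total penalty in bits: the sum of log2(1/p) over the probabilities p
    that the predictor assigned to the outcomes that actually occurred."""
    return sum(log2(1.0 / probs[o]) for probs, o in zip(assigned, outcomes))

outcomes = [1, 1, 0, 1]
# Predictor A is confident and right each time; predictor B always says 50/50.
pred_a = [{0: 0.1, 1: 0.9}, {0: 0.1, 1: 0.9}, {0: 0.9, 1: 0.1}, {0: 0.1, 1: 0.9}]
pred_b = [{0: 0.5, 1: 0.5}] * 4

bits_a = penalty_bits(pred_a, outcomes)   # about 0.61 bits
bits_b = penalty_bits(pred_b, outcomes)   # exactly 4 bits
```

Confident correct predictions pay almost nothing; hedging pays one full bit per outcome, and a confident wrong prediction would pay heavily.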



-- Matt Mahoney, matmaho...@yahoo.com

*From:* David Jones davidher...@gmail.com
*To:* agi agi@v2.listbox.com
*Sent:* Thu, July 22, 2010 3:11:46 PM
*Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

Because simpler is not better if it is less predictive.

On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com wrote:

Jim,

Why more predictive *and then* simpler?

--Abram

On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.com wrote:

 An Update

I think the following gets to the heart of general AI and what it takes to
achieve it. It also provides us with evidence as to why general AI is so
difficult. With this new knowledge in mind, I think I will be much more
capable now of solving the problems and making it work.

I've come to the conclusion lately that the best hypothesis is better
because it is more predictive and then simpler than other hypotheses (in
that order: more predictive... then simpler). But, I am amazed at how
difficult it is to quantitatively define "more predictive" and "simpler" for
specific problems. This is why I have sometimes doubted the truth of the
statement.

In addition, the observations that the AI gets are not representative of all
observations! This means that if your measure of predictiveness depends on
the number of certain observations, it could make mistakes! So, the specific
observations you are aware of may be unrepresentative of the predictiveness
of a hypothesis relative to the truth. If you try to calculate which
hypothesis is more predictive and you don't have the critical observations
that would give you the right answer, you may get the wrong answer! This all
depends of course on your method of calculation, which is quite elusive to
define.

Visual input from screenshots, for example, can be somewhat malicious.
Things can move, appear, disappear or occlude each other suddenly. So,
without sufficient knowledge it is hard to decide whether matches you find
between such large changes are because it is the same object or a different
object. This may indicate that bias and preprogrammed experience should be
introduced to the AI before training. Either that or the training inputs
should be carefully chosen to avoid malicious input and to make them nice
for learning.

This is the correspondence problem that is typical of computer vision and
has never been properly solved. Such malicious input also makes it difficult
to learn automatically because the AI doesn't have sufficient experience to
know which changes or transformations are acceptable and which are not. It
is immediately bombarded with malicious inputs.

I've also realized that if a hypothesis is more explanatory, it may be
better. But quantitatively defining "explanatory" is also elusive and truly
depends on the specific problems you are applying it to, because it is a
heuristic. It is not a true measure of correctness. It is not loyal to the
truth. "More explanatory" is really a heuristic that helps us find
hypotheses that are more predictive. The true measure of whether a
hypothesis is better is simply whether it is the most accurate and predictive
hypothesis. That is the ultimate and true measure of correctness.

Also, since we can't measure every possible prediction or every last
prediction (and we certainly can't predict everything), our measure of
predictiveness can't possibly be right all the time! We have no choice but
to use a heuristic of some kind.

So, it's clear to me that the right hypothesis is more predictive and then
simpler. But, it is also clear that there will never be a single measure of
this that can be applied to all problems.

Re: [agi] How do we hear music

2010-07-22 Thread Mike Tintner
And maths will handle the examples given:

same tunes - different scales, different instruments
same face -  cartoon, photo
same logo  - different parts [buildings/ fruits/ human figures]

revealing them to be the same  -   how exactly?

Or you could take two arseholes -  same kind of object, but radically different 
configurations - maths will show them to belong to the same category, how?

IOW do you have the slightest evidence for what you're claiming? 

And to which part of AGI is maths demonstrably fundamental? Any idea? Or are
you just praying?




From: L Detetive 
Sent: Thursday, July 22, 2010 11:49 PM
To: agi 
Subject: Re: [agi] How do we hear music


Schemas are what maths can't handle - and are fundamental to AGI.



Maths are what Mike can't handle - and are fundamental to AGI.

-- 
L






Re: [agi] How do we hear music

2010-07-22 Thread L Detetive
Are you suggesting that I teach you some math? I learned it by myself, why
can't you? Stop being lazy (and ridiculous), please.

--
L





Re: [agi] How do we hear music

2010-07-22 Thread Jan Klauck
Mike Tintner trolled

 And maths will handle the examples given :

 same tunes - different scales, different instruments
 same face -  cartoon, photo
 same logo  - different parts [buildings/ fruits/ human figures]

Unfortunately I forgot. The answer is somewhere down there:

http://en.wikipedia.org/wiki/Eigenvalue,_eigenvector_and_eigenspace
http://en.wikipedia.org/wiki/Pattern_recognition
http://en.wikipedia.org/wiki/Curve_fitting
http://en.wikipedia.org/wiki/System_identification
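For the "same tune, different scales" case, one minimal illustration (my own sketch, not taken from the linked pages, and the tunes are invented): compare interval sequences rather than absolute pitches, which makes a melody invariant under transposition to another key.

```python
def intervals(pitches):
    """Successive differences in semitones; unchanged when the whole
    melody is transposed to another key."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

# The same four-note tune in C, and transposed up a major third to E
# (MIDI note numbers).
tune_in_c = [60, 62, 64, 60]   # C D E C
tune_in_e = [64, 66, 68, 64]   # E F# G# E
different = [60, 64, 67, 72]   # C E G C'

same_tune = intervals(tune_in_c) == intervals(tune_in_e)   # True
```

The same idea of comparing in an invariant representation underlies the pattern-recognition approaches above.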

 revealing them to be the same  -   how exactly?

Why should anybody explain that mystery to you? You are not an
accepted member of the Grand Lodge of AGI Masons or its affiliates.

 Or you could take two arseholes -  same kind of object, but radically
 different configurations - maths will show them to belong to the same
 category, how?

How will you do it? By licking them?






Re: [agi] How do we hear music

2010-07-22 Thread L Detetive
You could add this one too, Jan:
http://scholar.google.com.br/scholar?hl=en&q=%22fourier-mellin+transform%22&btnG=Search&as_sdt=2000&as_ylo=&as_vis=1
No more excuses for being lazy now. The answers to all the proposed questions
are in those links.

-- 
L





Re: [agi] How do we hear music

2010-07-22 Thread Michael Swan
Hi,

Sometimes outrageous comments are a catalyst for better ideas. 

On Fri, 2010-07-23 at 01:48 +0200, Jan Klauck wrote:
 Mike Tintner trolled
 
  And maths will handle the examples given :
 
  same tunes - different scales, different instruments
  same face -  cartoon, photo
  same logo  - different parts [buildings/ fruits/ human figures]
 
 Unfortunately I forgot. The answer is somewhere down there:
 
 http://en.wikipedia.org/wiki/Eigenvalue,_eigenvector_and_eigenspace
 http://en.wikipedia.org/wiki/Pattern_recognition
 http://en.wikipedia.org/wiki/Curve_fitting
 http://en.wikipedia.org/wiki/System_identification
 
No-one has successfully integrated these concepts into a working AGI,
despite numerous attempts. Even though these concepts feel general, when
implemented they have succeeded only in narrow domains, or have been
overwhelmed by combinatorial explosion.
 
  revealing them to be the same  -   how exactly?
 
 Why should anybody explain that mystery to you? You are not an
 accepted member of the Grand Lodge of AGI Masons or its affiliates.
 
  Or you could take two arseholes -  same kind of object, but radically
  different configurations - maths will show them to belong to the same
  category, how?
 
 How will you do it? By licking them?

Personal attacks only weaken your arguments.

 
 
 
 





Re: [agi] How do we hear music

2010-07-22 Thread L Detetive
No-one has successfully integrated these concepts into a working AGI,

So I could say for ANY method.

-- 
L





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Matt Mahoney
Jim Bromer wrote:
 Please give me a little more explanation why you say the fundamental method
is that the probability of a string x is proportional to the sum of all
programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?

By |M| I mean the length of the program M in bits. Why 2^-|M|? Because each bit 
means you can have twice as many programs, so they should count half as much.
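A small sketch (mine, not from the message above) of why the 2^-|M| weights behave like probabilities at all: when programs are self-delimiting, so that no program is a prefix of another, Kraft's inequality says the weights sum to at most 1.

```python
# Toy prefix-free "program" set, standing in for self-delimiting programs
# on a universal prefix machine.
programs = ["0", "10", "110", "111"]

def is_prefix_free(codes):
    """True if no code is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

# Kraft's inequality: for a prefix-free set, the sum of 2^-len is at most 1,
# so the weights can be read as (sub)probabilities.
total = sum(2.0 ** -len(m) for m in programs)   # 1/2 + 1/4 + 1/8 + 1/8
```

Adding one bit of length halves a program's weight, exactly matching the "twice as many programs, half the count" intuition.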

Being uncomputable doesn't make it wrong. The fact that there is no general 
procedure for finding the shortest program that outputs a string doesn't mean 
that you can never find it, or that for many cases you can't approximate it.

You apply Solomonoff induction all the time. What is the next bit in these 
sequences?

1. 0101010101010101010101010101010

2. 1100100100001111110110101010001

In sequence 1 there is an obvious pattern with a short description. You can
find a short program that outputs 0 and 1 alternately forever, so you predict
the next bit will be 1. It might not be the shortest program, but "alternate
0 and 1 forever" is enough shorter than "alternate 0 and 1 15 times followed
by 00" that you can confidently predict the first hypothesis is more likely.

The second sequence is not so obvious. It looks like random bits. With enough 
intelligence (or computation) you might discover that the sequence is a binary 
representation of pi, and therefore the next bit is 0. But the fact that you 
might not discover the shortest description does not invalidate the principle. 
It just says that you can't always apply Solomonoff induction and get the 
number 
you want.

Perhaps http://en.wikipedia.org/wiki/Kolmogorov_complexity will make this clear.
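A crude, runnable caricature of this procedure (my own sketch): restrict the "programs" to repeating patterns of small period, weight each by 2 to the minus its period, and let the weighted hypotheses vote on the next bit. This restricted class handles sequence 1 above; sequence 2 would need a far richer program space.

```python
from itertools import product

def predict_next(bits, max_period=4):
    """Weighted vote over all repeating patterns of period <= max_period
    that reproduce the observed bits. A pattern of period k is treated as
    a k-bit 'program' and weighted 2^-k, so shorter descriptions dominate."""
    weights = {"0": 0.0, "1": 0.0}
    for k in range(1, max_period + 1):
        for pattern in product("01", repeat=k):
            # Does this repeating pattern reproduce the whole observation?
            if all(bits[i] == pattern[i % k] for i in range(len(bits))):
                weights[pattern[len(bits) % k]] += 2.0 ** -k
    return max(weights, key=weights.get)

nxt = predict_next("01" * 15 + "0")   # the alternating sequence; predicts "1"
```

Only the period-2 and period-4 hypotheses survive the data, both predicting 1, which is the restricted-class analogue of the argument in the message above.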

 -- Matt Mahoney, matmaho...@yahoo.com





 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
Thanks for the explanation.  I want to learn more about statistical
modelling and compression but I will need to take my time on it.  But no, I
don't apply Solomonoff Induction all the time, I never apply it.  I am not
being petty, it's just that you have taken a coincidence and interpreted it
the way you want to.


Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread Abram Demski
ps-- Sorry for accidentally calling you Jim!

On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Why more predictive *and then* simpler?

 --Abram






 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic


[agi] Huge Progress on the Core of AGI

2010-07-22 Thread Jim Bromer
I have to say that I am proud of David Jones's efforts.  He has really
matured during these last few months.  I'm kidding but I really do respect
the fact that he is actively experimenting.  I want to get back to work on
my artificial imagination and image analysis programs - if I can ever figure
out how to get the time.

As I have read David's comments, I realize that we need to really leverage
all sorts of cruddy data in order to make good agi.  But since that kind of
thing doesn't work with sparse knowledge, it seems that the only way it
could work is with extensive knowledge about a wide range of situations,
like the knowledge gained from a vast variety of experiences.  This
conjecture makes some sense because if wide ranging knowledge could be kept
in superficial stores where it could be accessed quickly and economically,
it could be used efficiently in (conceptual) model fitting.  However, as
knowledge becomes too extensive it might become too unwieldy to find what is
needed for a particular situation.  At this point indexing becomes necessary
with cross-indexing references to different knowledge based on similarities
and commonalities of employment.

Here I am saying that relevant knowledge based on previous learning might
not have to be totally relevant to a situation as long as it could be used
to run during an ongoing situation.  From this perspective
then, knowledge from a wide variety of experiences should actually be
composed of reactions on different conceptual levels.  Then as a piece of
knowledge is brought into play for an ongoing situation, those levels that
seem best suited to deal with the situation could be promoted quickly as the
situation unfolds, acting like an automated indexing system into other
knowledge relevant to the situation.  So the ongoing process of trying to
determine what is going on and what actions should be made would
simultaneously act like an automated index to find better knowledge more
suited for the situation.
Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread Abram Demski
David,

What are the different ways you are thinking of for measuring
predictiveness? I can think of a few different possibilities (such as
measuring the number incorrect vs. the fraction incorrect, et cetera), but
I'm wondering which variations you consider significant/troublesome/etc.

--Abram

On Thu, Jul 22, 2010 at 7:12 PM, David Jones davidher...@gmail.com wrote:

 It's certainly not as simple as you claim. First, assigning a probability
 is not always possible, nor is it easy. The factors in calculating that
 probability are unknown and are not the same for every instance. Since we do
 not know what combination of observations we will see, we cannot have a
 predefined set of probabilities, nor is it any easier to create a
 probability function that generates them for us. That is exactly
 what I meant by quantitatively define the predictiveness... it would be
 proportional to the probability.

 Second, if you can define a program in a way that is always simpler when it
 is smaller, then you can do the same thing without a program. I don't think
 it makes any sense to do it this way.

 It is not that simple. If it was, we could solve a large portion of agi
 easily.

 On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney matmaho...@yahoo.com
 wrote:

 David Jones wrote:

  But, I am amazed at how difficult it is to quantitatively define more
 predictive and simpler for specific problems.

 It isn't hard. To measure predictiveness, you assign a probability to each
 possible outcome. If the actual outcome has probability p, you score a
 penalty of log(1/p) bits. To measure simplicity, use the compressed size of
 the code for your prediction algorithm. Then add the two scores together.
 That's how it is done in the Calgary challenge
 http://www.mailcom.com/challenge/ and in my own text compression
 benchmark.
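 The scoring rule above can be sketched in a few lines; the toy predictor
 and its fixed probabilities are invented for the illustration, and zlib
 compression stands in as a rough proxy for the compressed size of the
 prediction program:

```python
import math
import zlib

def prediction_penalty(p):
    # log(1/p) bits for assigning probability p to the actual outcome
    return math.log2(1.0 / p)

def simplicity_penalty(source_code):
    # rough proxy for program size: compressed source length, in bits
    return 8 * len(zlib.compress(source_code.encode()))

# Toy predictor: a fixed distribution over three outcomes (an assumption)
predictor_src = "def predict(history): return {'a': 0.5, 'b': 0.25, 'c': 0.25}"
probs = {"a": 0.5, "b": 0.25, "c": 0.25}
observed = ["a", "a", "b"]

pred_score = sum(prediction_penalty(probs[o]) for o in observed)  # 1 + 1 + 2 = 4 bits
total_score = pred_score + simplicity_penalty(predictor_src)  # lower is better
```

 Adding the two bit-counts makes predictiveness and simplicity commensurable,
 which is what lets a single number rank competing predictors.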



 -- Matt Mahoney, matmaho...@yahoo.com

 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Thu, July 22, 2010 3:11:46 PM
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Because simpler is not better if it is less predictive.

 On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com
 wrote:

 Jim,

 Why more predictive *and then* simpler?

 --Abram

 On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.com
 wrote:

  An Update

 I think the following gets to the heart of general AI and what it takes to
 achieve it. It also provides us with evidence as to why general AI is so
 difficult. With this new knowledge in mind, I think I will be much more
 capable now of solving the problems and making it work.

 I've come to the conclusion lately that the best hypothesis is better
 because it is more predictive and only then simpler than other hypotheses (in
 that order: first more predictive, then simpler). But I am amazed at how
 difficult it is to quantitatively define more predictive and simpler for
 specific problems. This is why I have sometimes doubted the truth of the
 statement.
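 The ordering described above (more predictive first, then simpler) can be
 read as a lexicographic comparison; the hypotheses and scores below are
 invented purely for illustration:

```python
# Hypothetical hypotheses: (name, predictiveness, complexity)
hypotheses = [
    ("H1", 0.9, 120),
    ("H2", 0.9, 80),   # ties H1 on predictiveness but is simpler
    ("H3", 0.8, 10),   # simplest of all, but less predictive, so it loses
]

# Lexicographic ordering: maximize predictiveness first, then minimize
# complexity; simplicity only breaks ties in predictiveness.
best = min(hypotheses, key=lambda h: (-h[1], h[2]))
print(best)  # the simpler of the two equally predictive hypotheses
```

 The comparison itself is trivial; the hard part, as noted above, is
 producing the two numbers in the first place.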

 In addition, the observations that the AI gets are not representative of
 all possible observations! This means that if your measure of predictiveness
 depends on counting certain observations, it can make mistakes! The
 specific observations you are aware of may be unrepresentative of the
 predictiveness of a hypothesis relative to the truth. If you try to
 calculate which hypothesis is more predictive and you don't have the
 critical observations that would give you the right answer, you may get the
 wrong answer! All of this depends, of course, on your method of calculation,
 which is quite elusive to define.

 Visual input from screenshots, for example, can be somewhat malicious.
 Things can move, appear, disappear, or occlude each other suddenly. So,
 without sufficient knowledge, it is hard to decide whether the matches you
 find across such large changes belong to the same object or to a different
 object. This may indicate that bias and preprogrammed experience should be
 given to the AI before training. Either that, or the training inputs
 should be carefully chosen to avoid malicious input and to make them
 amenable to learning.

 This is the correspondence problem that is typical of computer vision and
 has never been properly solved. Such malicious input also makes it difficult
 to learn automatically because the AI doesn't have sufficient experience to
 know which changes or transformations are acceptable and which are not. It
 is immediately bombarded with malicious inputs.

 I've also realized that if a hypothesis is more explanatory, it may be
 better. But quantitatively defining explanatory is also elusive, and it truly
 depends on the specific problems you are applying it to, because it is a
 heuristic. It is not a true measure of correctness; it is not loyal to the
 truth. More explanatory is really a heuristic that helps us find
 hypotheses that are more predictive. The true measure of whether a
 hypothesis is better is simply the most 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Abram Demski
Jim,

Sorry for the short quip... I should have thought about how it would sound
before sending.

--Abram

On Wed, Jul 21, 2010 at 4:36 PM, Jim Bromer jimbro...@gmail.com wrote:

 You claim that I have not checked how Solomonoff Induction is actually
 defined, but then don't bother mentioning how it is defined as if it would
 be too much of an ordeal to even begin to try.  It is this kind of evasive
 response, along with the fact that these functions are incomputable, that
 make your replies so absurd.

 On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* Solomonoff
 induction is, without checking how it is actually defined...

 --Abram

   On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.comwrote:

   The fundamental method of Solomonoff Induction is trans-infinite.
 Suppose you iterate through all possible programs, combining different
 programs as you go.  Then you have an infinite number of possible programs
 which have a trans-infinite number of combinations, because each tier of
 combinations can then be recombined to produce a second, third, fourth,...
 tier of recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Abram Demski
Jim,

Aha! So you *are* a constructivist or intuitionist or finitist of some
variety? This would explain the miscommunication... you appear to hold the
belief that a structure needs to be computable in order to be well-defined.
Is that right?

If that's the case, then you're not really just arguing against Solomonoff
induction in particular, you're arguing against the entrenched framework of
thinking which allows it to be defined-- the so-called classical
mathematics.

If this is the case, then you aren't alone.

--Abram

On Thu, Jul 22, 2010 at 5:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.comwrote:
 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way.
 Also, please point me to this mathematical community that you claim rejects
 Solomonoff induction. Can you find even one paper that refutes it?

 You give a precise statement of the probability in general terms, but then
 say that it is uncomputable.  Then you ask if there is a paper that refutes
 it.  Well, why would any serious mathematician bother to refute it since you
 yourself acknowledge that it is uncomputable and therefore unverifiable and
 therefore not a mathematical theorem that can be proven true or false?  It
 isn't like you claimed that the mathematical statement is verifiable. It is
 as if you are making a statement and then ducking any responsibility for it
 by denying that it is even an evaluation.  You honestly don't see the
 irregularity?

 My point is that the general mathematical community doesn't accept
 Solomonoff Induction, not that I have a paper that *refutes it*, whatever
 that would mean.

 Please give me a little more explanation of why you say the fundamental
 method is that the probability of a string x is proportional to the sum over
 all programs M that output x, weighted by 2^-|M|.  Why is the M between
 vertical bars?
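 The formula in question can be illustrated with a small, fully computable
 toy (the |M| notation denotes the length of program M in bits; the
 "program" semantics below are invented purely for this sketch):

```python
from fractions import Fraction

# Toy semantics (an assumption of this sketch): a "program" is a bit-string
# M, and it "outputs" M with its trailing zeros stripped.  Several programs
# can therefore output the same string.
def output_of(m):
    return m.rstrip("0")

def weight(m):
    # the 2^-|M| term: shorter programs get exponentially more weight
    return Fraction(1, 2 ** len(m))

def toy_prior(x, max_len=4):
    # P(x) proportional to the sum of 2^-|M| over all programs M outputting x
    total = Fraction(0)
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            m = format(i, "0{}b".format(n))
            if output_of(m) == x:
                total += weight(m)
    return total

# "1" is output by "1", "10", "100", "1000": 1/2 + 1/4 + 1/8 + 1/16
print(toy_prior("1"))
```

 As in the real definition, the shortest program dominates the sum; the
 uncomputability only enters when the enumeration is over all programs of a
 universal machine rather than this finite toy.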


 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.comwrote:

   Jim Bromer wrote:
  The fundamental method of Solomonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise to claim that this method could
 stand as an ideal for some valid and feasible application of probability.
 Jim Bromer

 On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer






-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] How do we hear music

2010-07-22 Thread L Detetive
 So you must explain how a mathematical approach, which is all about
 recognizing patterns, can apply to objects which do not fit patterns.

No, we mustn't. You must read the links we've posted or stop asking the same
things again and again. The answers are all there.

-- 
L





Re: [agi] How do we hear music

2010-07-22 Thread Mike Archbold
On Thu, Jul 22, 2010 at 12:59 PM, deepakjnath deepakjn...@gmail.com wrote:

 Why do we listen to a song sung in a different scale and yet identify it as
 the same song?  Does it have something to do with the fundamental way in
 which we store memory?



Probably due to evolution?  Maybe at some point prior to words, pitch was
used in some variation.  An Australopithecus isn't going to care what key
you are singing Watch out for that sabertooth tiger in.  If you were messed
up like that and couldn't hear the same song in a different key, you would
be cancelled out by evolution.  Just a guess.
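One concrete candidate answer to the question is that memory stores relative
intervals rather than absolute pitches, which makes a melody key-invariant;
a minimal sketch (the MIDI note numbers are just an assumption of the
illustration):

```python
def intervals(pitches):
    # successive pitch differences in semitones
    return [b - a for a, b in zip(pitches, pitches[1:])]

melody_in_c = [60, 62, 64, 65, 67]  # MIDI numbers: C D E F G
melody_in_g = [67, 69, 71, 72, 74]  # the same tune transposed up a fifth

# identical interval sequences, even though every absolute pitch differs
print(intervals(melody_in_c), intervals(melody_in_g))
```

Any representation built on such differences recognizes the tune in any key
for free, which fits the evolutionary hand-waving above.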

Mike Archbold



 cheers,
 Deepak



