Re: [agi] Pretty worldchanging

2010-07-24 Thread Panu Horsmalahti
Availability of the Internet actually makes school grades worse. Of course,
grades do not equal education, but I don't see anything world-changing
about education because of this.

- Panu Horsmalahti





RE: [agi] How do we hear music

2010-07-24 Thread John G. Rose
 -Original Message-
 
 You have all missed one vital point. Music is repeating and it has a
 symmetry. In dancing (song and dance) moves are repeated in a symmetrical
 pattern.

 Question: why are we programmed to find symmetry? This question may be
 more core to AGI than appears at first sight. Clearly an AGI system will
 have to look for symmetry and do what Hardy described as beautiful maths.
 

Symmetry is at the heart of everything; without symmetry the universe
collapses. Intelligence operates over symmetric versus non-symmetric, IMO.
But everything is ultimately grounded in symmetry. 

BTW kind of related, was just watching this neat video - the soundtrack
needs to be redone though :)

http://www.youtube.com/watch?v=4dpRPTwsKJs

Why does the brain have bilateral symmetry, I wonder, and why is the heart
not symmetric? Some researchers say consciousness is both heart and brain.

John





Re: [agi] Pretty worldchanging

2010-07-24 Thread Ben Goertzel

On Sat, Jul 24, 2010 at 5:36 AM, Panu Horsmalahti nawi...@gmail.com wrote:
Availability of the Internet actually makes school grades worse. Of course,
grades do not equal education, but I don't see anything world-changing
about education because of this.

- Panu Horsmalahti


Hmmm... I do think the Internet has world-changing implications for
education, many of which are being realized all around us as we speak...

School grades are a poor measure of intellectual achievement.  And of
course, the Internet can be used in either wonderful or idiotic ways -- it
obviously DOES have revolutionary implications for education, even if
statistically few make use of it in a way that significantly manifests these
implications.

I see this article

http://news.yahoo.com/s/ytech_wguy/20100714/tc_ytech_wguy/ytech_wguy_tc3118

linked from the above article, which provides some (though not much) data
that computer or Net access may decrease test scores in some low-income
families...

But as the article itself states, this suggests the problem is not the
computers or Net, but rather the inability of many low-income parents to
guide their kids in educational use of computers and the Net ... or to give
their kids a broad enough general education to enable them to guide
themselves in this regard...

Similarly, reading has great potential to aid education -- but if all you
read are romance novels and People or Fat Biker Chick magazine, you're not
going to broaden your mind that much ;p ...

Maybe there are some students on this email list, who are wading through all
the BS and learning something about AGI, by following links and reading
papers mentioned here, etc.  Without the Net, how would these students learn
about AGI, in practice?  Such education would be far harder to come by and
less effective without the Net.  That's world-changing... ;-) ...

Learning about AGI via online resources may not improve your school grades
any, because AGI knowledge isn't tested much in school.  But students
learning about AGI online could change the world...

-- Ben G







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

"I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too." -- Fyodor Dostoevsky





Re: [agi] Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
lol. thanks Jim :)


On Thu, Jul 22, 2010 at 10:08 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to say that I am proud of David Jones's efforts.  He has really
 matured during these last few months.  I'm kidding, but I really do respect
 the fact that he is actively experimenting.  I want to get back to work on
 my artificial imagination and image analysis programs - if I can ever figure
 out how to get the time.

 As I have read David's comments, I realize that we need to really leverage
 all sorts of cruddy data in order to make good AGI.  But since that kind of
 thing doesn't work with sparse knowledge, it seems that the only way it
 could work is with extensive knowledge about a wide range of situations,
 like the knowledge gained from a vast variety of experiences.  This
 conjecture makes some sense because if wide ranging knowledge could be kept
 in superficial stores where it could be accessed quickly and economically,
 it could be used efficiently in (conceptual) model fitting.  However, as
 knowledge becomes too extensive it might become too unwieldy to find what is
 needed for a particular situation.  At this point indexing becomes necessary
 with cross-indexing references to different knowledge based on similarities
 and commonalities of employment.

 Here I am saying that relevant knowledge based on previous learning might
 not have to be totally relevant to a situation as long as it could be put
 to use during an ongoing situation.  From this perspective
 then, knowledge from a wide variety of experiences should actually be
 composed of reactions on different conceptual levels.  Then as a piece of
 knowledge is brought into play for an ongoing situation, those levels that
 seem best suited to deal with the situation could be promoted quickly as the
 situation unfolds, acting like an automated indexing system into other
 knowledge relevant to the situation.  So the ongoing process of trying to
 determine what is going on and what actions should be made would
 simultaneously act like an automated index to find better knowledge more
 suited for the situation.
 Jim Bromer






Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Abram,

I haven't found a method that I think works consistently yet. Basically I
was trying methods like the one you suggested, which measure the number of
correct predictions or expectations. But then I ran into the problem of:
what if the predictions you are counting are more of the same? Do you count
them or not? For example, let's say that we see a piece of paper on a table
in an image and we see that the paper looks different but moves with the
table. So, we can hypothesize that they are attached. Now what if it is not
a piece of paper, but a mural? Do you count every little piece of the mural
that moves with the table as a correct prediction? Is it a single prediction?
What about the number of times they move together? It doesn't seem right to
count each and every time, but we also have to be careful about coincidental
movement. Just because two things seem to move together in one frame out
of 1000 does not mean we should consider them temporarily attached.

So, quantitatively defining "more predictive" and "simpler" is quite
challenging. I am honestly a bit stumped at how to do it at the moment. I
will keep trying to find ways to at least approximate it, but I'm really not
sure of the best way.
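
One way I have toyed with framing it (purely a sketch of my own, with an
arbitrary weighting): give each distinct prediction full credit the first
time it is confirmed and logarithmically diminishing credit for repeat
confirmations, so a mural's thousand co-moving patches don't swamp the
score, while a one-frame coincidence earns almost nothing:

  import math
  from collections import Counter

  def hypothesis_score(confirmations):
      # confirmations: Counter mapping each distinct prediction made by a
      # hypothesis to how many times that prediction was confirmed.
      # log2(1 + n) damps repeat confirmations (an arbitrary choice).
      return sum(math.log2(1 + n) for n in confirmations.values())

  paper = Counter({"paper moves with table": 999})
  mural = Counter({f"patch {i} moves with table": 999 for i in range(50)})
  print(hypothesis_score(paper))  # one prediction, confirmed many times
  print(hypothesis_score(mural))  # many predictions, growth still damped

But that weighting is exactly the kind of choice I can't justify yet.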

Of course, I haven't been working on this specific problem long, but other
people have tried to quantify our explanatory methods in other areas and
have also failed. I think part of the failure has to do with the fact that
the things they want to explain with a single method probably require
different methods, and should be treated more heuristically than with
mathematical precision.
It's all quite overwhelming to analyze sometimes.

I may have thought about fractions correct vs. incorrect also. The truth is,
I haven't locked on and carefully analyzed the different ideas I've come up
with because they all seem to have issues and it is difficult to analyze. I
definitely need to try some out and just see what the results are and
document them better.

Dave

On Thu, Jul 22, 2010 at 10:23 PM, Abram Demski abramdem...@gmail.com wrote:

 David,

 What are the different ways you are thinking of for measuring the
 predictiveness? I can think of a few different possibilities (such as
 measuring number incorrect vs measuring fraction incorrect, et cetera) but
 I'm wondering which variations you consider significant/troublesome/etc.

 --Abram

 On Thu, Jul 22, 2010 at 7:12 PM, David Jones davidher...@gmail.com wrote:

 It's certainly not as simple as you claim. First, assigning a probability
 is not always possible, nor is it easy. The factors in calculating that
 probability are unknown and are not the same for every instance. Since we do
 not know what combination of observations we will see, we cannot have a
 predefined set of probabilities, nor is it any easier to create a
 probability function that generates them for us. That is exactly what I
 meant by "quantitatively define the predictiveness"... it would be
 proportional to the probability.

 Second, if you can define a program in a way that is always simpler when it
 is smaller, then you can do the same thing without a program. I don't think
 it makes any sense to do it this way.

 It is not that simple. If it were, we could solve a large portion of AGI
 easily.

 On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney matmaho...@yahoo.com
 wrote:

 David Jones wrote:

  But, I am amazed at how difficult it is to quantitatively define "more
 predictive" and "simpler" for specific problems.

 It isn't hard. To measure predictiveness, you assign a probability to each
 possible outcome. If the actual outcome has probability p, you score a
 penalty of log(1/p) bits. To measure simplicity, use the compressed size of
 the code for your prediction algorithm. Then add the two scores together.
 That's how it is done in the Calgary challenge
 http://www.mailcom.com/challenge/ and in my own text compression
 benchmark.
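
 For concreteness, a minimal sketch of that scoring rule (my own toy code,
 not the actual challenge harness; using zlib as a crude stand-in for the
 compressed size of the prediction algorithm):

   import math, zlib

   def prediction_penalty_bits(p):
       # p = probability the predictor assigned to the actual outcome
       return math.log2(1.0 / p)

   def simplicity_penalty_bits(source_code):
       # crude proxy for "compressed size of the code", in bits
       return 8 * len(zlib.compress(source_code.encode()))

   def total_score(p, source_code):
       # smaller is better: prediction penalty plus simplicity penalty
       return prediction_penalty_bits(p) + simplicity_penalty_bits(source_code)

   # e.g. a predictor that gave probability 0.25 to what actually happened:
   print(total_score(0.25, "def predict(history): return 0.25"))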



 -- Matt Mahoney, matmaho...@yahoo.com

 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Thu, July 22, 2010 3:11:46 PM
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Because simpler is not better if it is less predictive.

 On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com
 wrote:

 Jim,

 Why more predictive *and then* simpler?

 --Abram

 On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.com
 wrote:

  An Update

 I think the following gets to the heart of general AI and what it takes to
 achieve it. It also provides us with evidence as to why general AI is so
 difficult. With this new knowledge in mind, I think I will be much more
 capable now of solving the problems and making it work.

 I've come to the conclusion lately that the best hypothesis is better
 because it is more predictive and then simpler than other hypotheses (in
 that order: more predictive... then simpler). But, I am amazed at how
 difficult it is to quantitatively define "more predictive" and "simpler" for
 specific problems. This is 

Re: [agi] Huge Progress on the Core of AGI

2010-07-24 Thread A. T. Murray
The Web site of David Jones at

http://practicalai.org

is quite impressive to me 
as a kindred spirit building AGI.
(Just today I have been coding MindForth AGI :-)

For his Practical AI Challenge or similar 
ventures, I would hope that David Jones is
open to the idea of aggregating or archiving
representative AI samples from such sources as
- TexAI;
- OpenCog;
- Mentifex AI;
- etc.;
so that visitors to PracticalAI may gain an
overview of what is happening in our field.

Arthur
-- 
http://www.scn.org/~mentifex/AiMind.html
http://www.scn.org/~mentifex/mindforth.txt






[agi] Clues to the Mind: What do you think is the reason for selective attention

2010-07-24 Thread deepakjnath
http://www.youtube.com/watch?v=vJG698U2Mvo

Can anyone suggest why our brains exhibit this phenomenon?


cheers,
Deepak





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Abram,

I should also mention that I ran into problems mainly because I was having a
hard time deciding how to identify objects and determine what is really
going on in a scene. This adds a whole other layer of complexity to
hypotheses. It's not just about what is more predictive of the observations,
it is about deciding what exactly you are observing in the first place
(although you might say it's the same problem).

I ran into this problem when my algorithm finds matches between items that
are not the same, or fails to find matches between items that are the same
but have changed. So, how do you decide whether it is 1) the same object,
2) a different object, or 3) the same object, but changed?
And how do you decide its relationship to something else... is it 1)
dependently attached, 2) semi-dependently attached (can move independently,
but only in certain ways, yet also moves dependently), 3) independent, 4)
sometimes dependent, 5) was dependent, but no longer is, or 6) was dependent
on something else, then independent, and now dependent on something new?

These hypotheses are different ways of explaining the same observations, but
they are complicated by the fact that we aren't sure of the identity of the
objects we are observing in the first place. Multiple hypotheses may fit the
same observations, and it's hard to decide why one is simpler or better than
the other. The object you were observing at first may have disappeared. A
new object may have appeared at the same time (this is why screenshots are a
bit malicious). Or the object you were observing may have changed. In
screenshots, sometimes the objects that you are trying to identify as
different never appear at the same time, because they always completely
occlude each other. So that can make it extremely difficult to decide
whether they are the same object that has changed or different objects.
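
To make the three-way choice concrete, here is a toy sketch (entirely my own
framing, with made-up weights; real detectors and similarity measures are
the hard part) that scores the competing hypotheses from appearance
similarity and spatial continuity:

  def near(d, scale=20.0):
      # soft "did it stay roughly in place?" score, decaying with distance
      return 1.0 / (1.0 + d / scale)

  def correspondence(appearance_sim, position_dist):
      # appearance_sim in [0, 1]; position_dist in pixels (toy units)
      scores = {
          "same object":          appearance_sim * near(position_dist),
          "same object, changed": (1 - appearance_sim) * near(position_dist),
          "different object":     (1 - appearance_sim) * (1 - near(position_dist)),
      }
      return max(scores, key=scores.get)

  print(correspondence(0.9, 5))    # looks alike, nearby    -> same object
  print(correspondence(0.2, 5))    # looks different, near  -> changed
  print(correspondence(0.2, 300))  # looks different, far   -> different object

No fixed weighting like this survives occlusion, disappearance, and change
all at once.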

Such ambiguities are common in AGI. It is unclear to me yet how to deal with
them effectively, although I am continuing to work hard on it.

I know it's a bit of a mess, but I'm just trying to demonstrate the trouble
I've run into.

I hope that makes it more clear why I'm having so much trouble finding a way
of determining what hypothesis is most predictive and simplest.

Dave


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer
Solomonoff Induction may require a trans-infinite level of complexity just
to run each program.  Suppose each program is iterated through the
enumeration of its instructions.  Then, not only do the infinity of possible
programs need to be run, many combinations of the infinite programs from
each simulated Turing Machine also have to be tried.  All the possible
combinations of (accepted) programs, one from any two or more of the
(accepted) programs produced by each simulated Turing Machine, have to be
tried.  Although these combinations of programs from each of the simulated
Turing Machines may not all be unique, they all have to be tried.  Since each
simulated Turing Machine would produce infinite programs, I am pretty sure
that this means that Solomonoff Induction is, *by definition,* trans-infinite.
Jim Bromer


On Thu, Jul 22, 2010 at 2:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to retract my claim that the programs of Solomonoff Induction would
 be trans-infinite.  Each of the infinite individual programs could be
 enumerated by their individual instructions so some combination of unique
 individual programs would not correspond to a unique program but to the
 enumerated program that corresponds to the string of their individual
 instructions.  So I got that one wrong.
 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer


All the possible combinations of (accepted) programs, one program taken from
any two or more simulated Turing Machines, have to be tried. Since each
simulated Turing Machine would produce infinite programs and there are
infinite simulated Turing Machines, I am pretty sure that this means that
Solomonoff Induction is, *by definition,* trans-infinite.





Re: [agi] Clues to the Mind: What do you think is the reason for selective attention

2010-07-24 Thread Anastasios Tsiolakidis
On Sat, Jul 24, 2010 at 7:07 PM, deepakjnath deepakjn...@gmail.com wrote:

 http://www.youtube.com/watch?v=vJG698U2Mvo

 Can anyone suggest why our brains exhibit this phenomenon?

May I flag this as AGI-irrelevant? The brain at a non-AGI task is not
that interesting for AGI, methinks.  Plus, we have loads of
specialist opinion on these things. Having just missed the gorilla
myself, I would be curious to see the video's effectiveness with
different screen sizes and different prompts, though. How about the
prompt "which of these players is the most intelligent"!




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread Matt Mahoney
David Jones wrote:
 I should also mention that I ran into problems mainly because I was having
a hard time deciding how to identify objects and determine what is really
going on in a scene.

I think that your approach makes the problem harder than it needs to be (not
that it is easy). Natural language processing is hard, so researchers, in an
attempt to break down the task into simpler parts, focused on steps like
lexical analysis, parsing, part-of-speech resolution, and semantic analysis.
While these problems went unsolved, Google went directly to a solution by
skipping them.

Likewise, parsing an image into physically separate objects and then building a 
3-D model makes the problem harder, not easier. Again, look at the whole 
picture. You input an image and output a response. Let the system figure out 
which features are important. If your goal is to count basketball passes, then 
it is irrelevant whether the AGI recognizes that somebody is wearing a gorilla 
suit.

 -- Matt Mahoney, matmaho...@yahoo.com






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Matt Mahoney
Jim Bromer wrote:
 Solomonoff Induction may require a trans-infinite level of complexity just to 
run each program. 

"Trans-infinite" is not a mathematically defined term as far as I can tell.
Maybe you mean "larger than infinity," as in: the infinite set of real
numbers is larger than the infinite set of natural numbers (which is true).

But it is not true that Solomonoff induction requires more than aleph-null 
operations. (Aleph-null is the size of the set of natural numbers, the 
smallest 
infinity). An exact calculation requires that you test aleph-null programs for 
aleph-null time steps each. There are aleph-null programs because each program 
is a finite length string, and there is a 1 to 1 correspondence between the set 
of finite strings and N, the set of natural numbers. Also, each program 
requires 
aleph-null computation in the case that it runs forever, because each step in 
the infinite computation can be numbered 1, 2, 3...

However, the total amount of computation is still aleph-null because each step 
of each program can be described by an ordered pair (m,n) in N^2, meaning the 
n'th step of the m'th program, where m and n are natural numbers. The 
cardinality of N^2 is the same as the cardinality of N because there is a 1 to 
1 
correspondence between the sets. You can order the ordered pairs as (1,1), 
(1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2), (4,1), (1,5), etc. 
See http://en.wikipedia.org/wiki/Countable_set#More_formal_introduction

Furthermore you may approximate Solomonoff induction to any desired precision 
with finite computation. Simply interleave the execution of all programs as 
indicated in the ordering of ordered pairs that I just gave, where the programs 
are ordered from shortest to longest. Take the shortest program found so far 
that outputs your string, x. It is guaranteed that this algorithm will approach 
and eventually find the shortest program that outputs x given sufficient time, 
because this program exists and it halts.
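
The interleaving schedule itself is easy to write down. A minimal sketch
(just the (m, n) bookkeeping; the program interpreter is elided):

  from itertools import count, islice

  def dovetail_order():
      # Enumerate (m, n) = "step n of program m" along anti-diagonals, so
      # every step of every program is reached after finitely many
      # operations -- the 1-to-1 pairing of N^2 with N described above.
      for s in count(2):            # s = m + n indexes the anti-diagonal
          for m in range(1, s):
              yield (m, s - m)

  print(list(islice(dovetail_order(), 11)))
  # [(1,1), (1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2), (4,1), (1,5)]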

In case you are wondering how Solomonoff induction is not computable, the 
problem is that after this algorithm finds the true shortest program that 
outputs x, it will keep running forever and you might still be wondering if a 
shorter program is forthcoming. In general you won't know.

 -- Matt Mahoney, matmaho...@yahoo.com







Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread Mike Tintner
Huh, Matt? What examples of this "holistic scene analysis" are there (or are
you thinking about)?




Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer
Abram,
I use constructivists' and intuitionists' (and, for that matter, finitists')
methods when they seem useful to me.  I often make mistakes when I am not
wary of constructivist issues.  Constructivist criticisms are interesting
because they can be turned against any presumptive method, even though they
might seem to contradict a constructivist criticism taken from a different
presumption.

I misused the term "computable" a few times because I have seen it used in
different ways.  But it turns out that it can be used in different ways.
For example, pi is not computable because it is infinite, but a limiting
approximation to pi is computable.  So I would say that pi is computable,
given infinite resources.  One of my claims here is that I believe there are
programs that will run Solomonoff Induction, so the method would therefore be
computable given infinite resources.  However, my other claim is that the
much-desired function or result, where one may compute the probability that a
string will be produced given a particular prefix, is incomputable.

If I had lived 500 years ago I might have said that a function that wasn't
computable wasn't well-defined.  (I might well have been somewhat
pompous about such things in 1510.)  However, because of the efficacy of the
theory of limits and other methods of finding bounds on functions, I would
not say that now.  Pi is well-defined, but I don't think that Solomonoff
Induction is completely well-defined.  But we can still talk about certain
aspects of it (using mathematics that are well-grounded relative to those
aspects of the method that are computable) even though the entire function
is not completely well-defined.

One way to do this is by using conditional statements.  So if it turns out
that one or some of my assumptions are wrong, I can see how to revise my
theory about the aspect of the function that is computable (or seems
computable).

Jim Bromer

On Thu, Jul 22, 2010 at 10:50 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Aha! So you *are* a constructivist or intuitionist or finitist of some
 variety? This would explain the miscommunication... you appear to hold the
 belief that a structure needs to be computable in order to be well-defined.
 Is that right?

 If that's the case, then you're not really just arguing against Solomonoff
 induction in particular; you're arguing against the entrenched framework of
 thinking which allows it to be defined -- the so-called classical
 mathematics.

 If this is the case, then you aren't alone.

 --Abram


 On Thu, Jul 22, 2010 at 5:06 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:
 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way.
  Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?

 You give a precise statement of the probability in general terms, but then
 say that it is uncomputable.  Then you ask if there is a paper that refutes
 it.  Well, why would any serious mathematician bother to refute it since you
 yourself acknowledge that it is uncomputable and therefore unverifiable and
 therefore not a mathematical theorem that can be proven true or false?  It
 isn't like you claimed that the mathematical statement is verifiable. It is
 as if you are making a statement and then ducking any responsibility for it
 by denying that it is even an evaluation.  You honestly don't see the
 irregularity?

 My point is that the general mathematical community doesn't accept
 Solomonoff Induction, not that I have a paper that *refutes it,* whatever
 that would mean.

 Please give me a little more explanation of why you say the fundamental
 method is that the probability of a string x is proportional to the sum of
 all programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?
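
 (For reference: |M| denotes the length, in bits, of program M. In standard
 notation the claim is

     P(x) \propto \sum_{M \,:\, U(M) = x} 2^{-|M|}

 summed over all programs M that output x on a universal machine U.)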


 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  The fundamental method of Solomonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise 

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread Matt Mahoney
Mike Tintner wrote:
 Huh, Matt? What examples of this "holistic scene analysis" are there (or
are you thinking about)?
 
I mean a neural model with increasingly complex features, as opposed to an 
algorithmic 3-D model (like video game graphics in reverse).

Of course David rejects such ideas ( http://practicalai.org/Prize/Default.aspx )
even though the one proven working vision model uses it.

-- Matt Mahoney, matmaho...@yahoo.com







Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Matt,

Any method must deal with similar, if not the same, ambiguities. You need to
show how neural nets solve this problem, or how they achieve AGI goals while
completely skipping the problem. Until then, it is not a successful method.

Dave



Re: [agi] Pretty worldchanging

2010-07-24 Thread Boris Kazachenko
Maybe there are some students on this email list, who are wading through all 
the BS and learning something about AGI, by following links and reading papers 
mentioned here, etc.  Without the Net, how would these students learn about 
AGI, in practice?  Such education would be far harder to come by and less 
effective without the Net.  That's world-changing... ;-) ...

The Net saves time. Back in the day, one could spend a lifetime sifting
through paper in the library, or traveling the world to meet authorities. Now
you do some googling, realize that no one has a clue, and go on to do some
real work on your own. That's if you have the guts, of course.
intelligence-as-a-cognitive-algorithm




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread Mike Tintner
Matt: 
I mean a neural model with increasingly complex features, as opposed to an 
algorithmic 3-D model (like video game graphics in reverse). Of course David 
rejects such ideas ( http://practicalai.org/Prize/Default.aspx ) even though 
the one proven working vision model uses it.


Which is? And does what?  (I'm starting to consider that vision and visual
perception -- or perhaps one should say common sense, since no sense in
humans works independently of the others -- may well be considerably *more*
complex than language. The evolutionary time required to develop our common
sense perception and conception of the world was vastly greater than that
required to develop language. And we are as a culture merely in our babbling
infancy in beginning to understand how sensory images work and are processed.)




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread Matt Mahoney
Mike Tintner wrote:
 Which is?
 
The one right behind your eyes.

-- Matt Mahoney, matmaho...@yahoo.com







Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Check this out!

The title, "Space and time, not surface features, guide object persistence,"
says it all.

http://pbr.psychonomic-journals.org/content/14/6/1199.full.pdf

Over just the last couple of days I have begun to realize that they are so
right. My earlier idea of using high frame rates is also spot on. The brain
does not use features as much as we think. First we construct a model of the
object; then we probably decide what features to index it with for future
search. If we know that the object occurs at a particular location in space,
then we can learn a great deal about it with very little ambiguity! Of
course, processing images at all is hard, but that's beside the point...
The point is that we can automatically learn about the world using high
frame rates and a simple heuristic for identifying specific objects in a
scene. Because we can reliably identify them, we can learn an extremely
large amount in a very short period of time. We can learn about how lighting
affects the colors, noise, size, shape, components, attachment
relationships, etc., etc.

So, it is very likely that screenshots are not simpler than real images!
lol. The objects in real images usually don't change as much, as drastically
or as quickly as the objects in screenshots. That means that we can use the
simple heuristics of size, shape, location and continuity of time to match
objects and learn about them.
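
A minimal sketch of what I mean (my own toy construction; detecting the
regions in the first place is of course the hard part): match detections
across consecutive frames purely by continuity of position and size, and
only afterwards learn appearance features from the matched tracks.

  from dataclasses import dataclass

  @dataclass
  class Detection:
      x: float
      y: float
      w: float
      h: float

  def match(prev, curr, max_move=10.0, max_resize=0.2):
      # Greedy spatiotemporal matching: at a high frame rate an object
      # can only have moved or resized a little between frames, so the
      # nearest candidate within those bounds is taken as the same object.
      pairs, unused = [], list(curr)
      for p in prev:
          best = min(unused, default=None,
                     key=lambda c: (c.x - p.x) ** 2 + (c.y - p.y) ** 2)
          if best is None:
              continue
          moved = ((best.x - p.x) ** 2 + (best.y - p.y) ** 2) ** 0.5
          resized = abs(best.w * best.h - p.w * p.h) / (p.w * p.h)
          if moved <= max_move and resized <= max_resize:
              pairs.append((p, best))
              unused.remove(best)
      return pairs  # matched tracks; appearance learning would come next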

Dave



Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
This is absolutely incredible. The answer was right there in the last
paragraph:

The present experiments suggest that the computation
of object persistence appears to rely so heavily upon spatiotemporal
information that it will not (or at least is unlikely
to) use otherwise available surface feature information,
particularly when there is conflicting spatiotemporal
information. This reveals a striking limitation, given various
theories that visual perception uses whatever shortcuts,
or heuristics, it can to simplify processing, as well as
the theory that perception evolves out of a buildup of the
statistical nature of our environment (e.g., Purves & Lotto,
2003). Instead, it appears that the object file system has
“tunnel vision” and turns a blind eye to surface feature information,
focusing on spatiotemporal information when
computing persistence.

So much for Matt's claim that the brain uses hierarchical features. LOL

Dave



Re: [agi] Clues to the Mind: What do you think is the reason for selective attention

2010-07-24 Thread deepakjnath
Thanks Dave, it's very interesting. This gives us more clues into how the
brain compresses and uses the relevant information while neglecting the
irrelevant information. But as Anast has demonstrated, the brain does need
priming in order to decide what is relevant and irrelevant. :)

Cheers,
Deepak

On Sun, Jul 25, 2010 at 5:34 AM, David Jones davidher...@gmail.com wrote:

 I also wanted to say that it is AGI-related because this may be the way
 that the brain deals with ambiguity in the real world. It ignores many
 things if it can use expectations to constrain possibilities. It is an
 important way in which the brain tracks objects and identifies them without
 analyzing all of an object's features before matching over the whole image.

 On Jul 24, 2010 7:53 PM, David Jones davidher...@gmail.com wrote:

 Actually Deepak, this is AGI-related.

 This week I finally found a cool body of research that I previously had no
 knowledge of. This research area is in psychology, which is probably why I
 missed it the first time. It has to do with human perception, object files,
 how we keep track of objects, individuate them, match them (the
 correspondence problem), etc.

 And I found the perfect article just now for you, Deepak:
 http://www.duke.edu/~mitroff/papers/SimonsMitroff_01.pdf

 This article mentions why the brain does not notice things. And I just
 realized as I was reading it why we don't see the gorilla or other
 unexpected changes. The reason is this:
 We have a limited amount of processing power that we can apply to visual
 tracking and analysis. So, in attention-demanding situations such as these,
 we assign our processing resources to track only the things we are
 interested in. In fact, we probably do this all the time, but it is only
 when we need a lot of attention applied to a few objects that we notice
 we don't see some unexpected events.

 So, our brain knows where to expect the ball next and our visual processing
 is very busy tracking the ball and then seeing who is throwing it. As a
 result, it is unable to also process the movement of other objects. If the
 unexpected event is drastic enough, it will get our attention. But since
 some of the people are in black, our brain probably thinks it is just a
 person in black and doesn't consider it an event that is worthy of
 interrupting our intense tracking.

 Dave



 On Sat, Jul 24, 2010 at 4:58 PM, Anastasios Tsiolakidis
 sokratis.dk@gmail.com wrote:
 
  On Sat,...





-- 
cheers,
Deepak





[agi] Clues to the Mind: Illusions / Vision

2010-07-24 Thread deepakjnath
http://www.youtube.com/watch?v=QbKw0_v2clo&feature=player_embedded

What we see is not really what you see. It's what you see and what you know
you are seeing. The brain superimposes the predicted images onto the viewed
image to actually form a perception of the image.

cheers,
Deepak





Re: [agi] Clues to the Mind: Illusions / Vision

2010-07-24 Thread David Jones
Yes. I think I may have discovered the keys to crack this puzzle wide open.
The brain seems to use simplistic heuristics for depth perception and
surface bounding. Once it has those, it can apply the spatiotemporal
heuristic I mentioned in other emails to identify and track an object, which
allows it to learn a lot with high confidence. So, that model would explain
why we see depth-perception illusions.

Dave
