Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread deepakjnath
1. Basic object recognition can be used in camera phones to identify people
or objects in front of the user. This could be used by blind people to
navigate their environment better.

2. AGI expert systems can be used to diagnose diseases.

thanks,
Deepak


On Sun, Aug 8, 2010 at 6:40 AM, Ben Goertzel b...@goertzel.org wrote:

 Hi,

 A fellow AGI researcher sent me this request, so I figured I'd throw it
 out to you guys

 
 I'm putting together an AGI pitch for investors and thinking of low
 hanging fruit applications to argue for. I'm intentionally not
 involving any mechanics (robots, moving parts, etc.). I'm focusing on
 voice (i.e. conversational agents) and perhaps vision-based systems.
 Helen Keller AGI, if you will :)

 Along those lines, I'd like any ideas you may have that would fall
 under this description. I need to substantiate the case for such AGI
 technology by making an argument for high-value apps. All ideas are
 welcome.
 

 All serious responses will be appreciated!!

 Also, I would be grateful if we could keep this thread closely focused on
 direct answers to this question, rather than digressive discussions on Helen
 Keller, the nature of AGI, the definition of AGI versus narrow AI, the
 achievability or unachievability of AGI, etc. etc.  If you think the
 question is bad or meaningless or unclear or whatever, that's fine, but
 please start a new thread with a different subject line to make your point.

 If the discussion is useful, my intention is to mine the answers into a
 compact list to convey to him.

 Thanks!
 Ben G






-- 
cheers,
Deepak





Re: [agi] $35 (& 2GB RAM) it is

2010-08-07 Thread deepakjnath
This is done at a university in my city! :) That is our Education Minister
:)

cheers,
Deepak

On Sat, Aug 7, 2010 at 6:04 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

 http://shockedinvestor.blogspot.com/2010/07/new-35-laptop-unveiled.html






-- 
cheers,
Deepak





Re: [agi] Shhh!

2010-08-03 Thread deepakjnath
for {set i 0} {$i < infinity} {incr i} {
    puts $i
}

On Mon, Aug 2, 2010 at 6:23 PM, Jim Bromer jimbro...@gmail.com wrote:

 I can write an algorithm that is capable of describing ('reaching') every
 possible irrational number - given infinite resources.  The infinite is not
 a number-like object, it is an active form of incrementation or
 concatenation.  So I can write an algorithm that can write *every* finite
 state of *every* possible number.  However, it would take another
 algorithm to 'prove' it.  Given an irrational number, this other algorithm
 could find the infinite incrementation for every digit of the given number.
 Each possible number (including the incrementation of those numbers that
 cannot be represented in truncated form) is embedded within a single
 infinite incrementation of digits that is produced by the
 algorithm, so the second algorithm would have to calculate where you would
 find each digit of the given irrational number by increment.  But the thing
 is, both functions would be computable and provable.  (I haven't actually
 figured the second algorithm out yet, but it is not a difficult problem.)

 This means that the Trans-Infinite Is Computable.  But don't tell anyone
 about this, it's a secret.
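
 For concreteness, here is one way to cash both algorithms out (a sketch in
 Python under an assumed shortlex ordering, not necessarily the construction
 Jim has in mind): enumerate every finite digit string, shortest first, so
 every finite prefix of every decimal expansion eventually appears, and
 compute the position at which any given string is produced. Note that the
 enumeration reaches every finite prefix but never emits a completed
 infinite expansion.

 from itertools import count, product

 def enumerate_digit_strings():
     """Yield every finite digit string in shortlex order."""
     for length in count(1):
         for digits in product("0123456789", repeat=length):
             yield "".join(digits)

 def index_of(s):
     """0-based position of digit string s in the enumeration above."""
     shorter = sum(10 ** k for k in range(1, len(s)))  # all shorter strings
     return shorter + int(s)  # int(s) ranks s among strings of its length

 gen = enumerate_digit_strings()
 first = [next(gen) for _ in range(30)]
 assert first[index_of("14")] == "14"  # e.g. the prefix "14" of 1/7's digits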





-- 
cheers,
Deepak





[agi] Clues to the Mind: Learning Ability

2010-07-27 Thread deepakjnath
http://www.facebook.com/video/video.php?v=287151911466

See how much the parrot can learn! Does that mean that the parrot has
intelligence? Will this parrot pass the Turing test?

There must be a learning center in the brain which operates at a much lower
level than the higher cognitive functions like imagination and thought.


cheers,
Deepak





Re: [agi] How do we hear music

2010-07-26 Thread deepakjnath
Mike,

All Chinese people look the same to me, but to a Chinese person they don't.
Why is this? Is there another clue here?

Thanks,
Deepak

On Mon, Jul 26, 2010 at 9:10 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  David,

 There must be a fair amount of cog sci/AI analysis of all this -  of how
 the brain analyses and remembers tunes  - and presumably leading theories
 (as for vision). Do you or anyone know more here?

 Also, you have noted something of extreme importance, wh. is a lot more
 than a step further.

 OTOH you've been analysing how we recognize the same, general tune in
 different, individual renditions.

 OTOH you've pointed out, we also recognize the INDIVIDUAL differences
 of/variations on the same genre/class - we appreciate the different ways
 Davis/Gillespie play as well as that they're playing the same tune.

 Now correct me but isn't the individual dimension of images of all kinds,
 almost entirely missing from AI? The capacity to recognize what
 makes individuals of a species individual, and not just that they belong to
 the same species.  Isn't visual object recognition for example almost
 entirely focussed on recognizing general objects rather than individual
 objects - that that's an example of a general doll, rather than an
 individual particularly beaten up, or just slightly and disturbingly altered
 doll?

 No doubt AI can recognize individual fingerprints, but it's the capacity to
 recognize individuals as variations on the general - to recognize that he
 has a particularly sarcastic smile, or she has a particularly lyrical, fluid
 walk,  or that that tune contrasts harmonious and discordant music (as per
 rap) in a distinctive way - that's missing, no?


  *From:* David Butler dbut...@flomedia.com
 *Sent:* Monday, July 26, 2010 3:44 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How do we hear music

 When we listen to music there are many elements that come into play that
 create our memory of how the song goes.  If you take a piece of instrumental
 music, you have the melody, a succession of tones in a certain order; the
 duration of each note in the melody; timbre, or tonal quality (guitar vs
 trombone); and tempo, how fast the song is played.  There is also phrasing:
 what part of the melody is emphasized using volume, change of tone quality,
 etc.  Is the melody played slurred, with all the notes run together, or
 staccato, with short notes?

 To take it a step further, how do we recognize a solo played by Miles Davis
 rather than Dizzy Gillespie playing the same song, both on trumpet but
 sounding completely different in style?  How do we recognize when two
 different conductors direct the same music with the same orchestra and yet
 make it sound different?


 On Thu, Jul 22, 2010 at 3:05 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   deepakjnath wrote:

  Why do we listen to a song sung in a different scale and yet identify it
 as the same song?  Does it have something to do with the fundamental way in
 which we store memory?

 For the same reason that gray looks green on a red background. You have
 more neurons that respond to differences in tones than to absolute
 frequencies.
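
 Matt's relative-coding point is easy to make concrete: a melody transposed
 to another key has different absolute pitches but an identical sequence of
 intervals. A minimal sketch in Python (the six-note fragment and MIDI-style
 note numbers are invented for illustration):

 def intervals(pitches):
     """Relative code: semitone step from each note to the next."""
     return [b - a for a, b in zip(pitches, pitches[1:])]

 tune_in_c = [60, 60, 62, 60, 65, 64]  # a tune fragment in one key
 tune_in_e = [64, 64, 66, 64, 69, 68]  # the same fragment 4 semitones up

 assert tune_in_c != tune_in_e                        # absolute pitches differ
 assert intervals(tune_in_c) == intervals(tune_in_e)  # interval code matches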


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* deepakjnath deepakjn...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Thu, July 22, 2010 3:59:57 PM
 *Subject:* [agi] How do we hear music

 Why do we listen to a song sung in a different scale and yet identify it as
 the same song?  Does it have something to do with the fundamental way in
 which we store memory?

 cheers,
 Deepak




-- 
cheers,
Deepak





Re: [agi] How do we hear music

2010-07-26 Thread deepakjnath
Thanks Dave,

This means that there is a system in the brain that decides on the details
that we capture from our external environment - something like an autofocus,
or a system that increases or decreases the resolution of the picture as it
deems fit. We could call this an auto attention focusing system.

It would be interesting to know what kind of priming helps the brain decide
these kinds of things. What does the brain need to be exposed to in order to
acquire these prejudices? Does this affect our choice of mates? Are we free
willed at all, or do these filters make us do the things that we do?

Cheers,
Deepak

On Tue, Jul 27, 2010 at 12:42 AM, David Jones davidher...@gmail.com wrote:

 Deepak,

 I have some insight on this question. There was a study regarding change
 blindness. One of the study's famous experiments was having a person ask for
 directions on a college campus. Then in the middle of this, a door would
 pass between the person asking directions and the student giving directions.
 What they found is that many people didn't realize the person had changed.

 BUT, 100% of the people that did notice the change were the same age or
 younger than the person they were observing!
 So, they did another experiment to rule out the different possible
 explanations. They took young people and dressed them as construction
 workers. Then, they performed the experiment again with similar age groups.
 They found that the people that had noticed the change before no longer did!

 Why? Well, the evidence leads us to believe that people pay much closer
 attention to the details of people they consider to be similar to them. So,
 we notice fewer details when we are observing people of a group we consider
 our out-group. In other words, we don't think we belong to the same group
 as the person we are observing.

 That is why Asians all look the same to you :)

 I think the purpose of this is analogous to attention. We only learn about
 things we consider important. Or we only pay attention to things we think
 are important. So, for whatever reason, we think that out-group people are
 not as important to us, and we don't need to spend our brain's resources on
 remembering details about them.

 Dave

 On Jul 26, 2010 2:58 PM, deepakjnath deepakjn...@gmail.com wrote:

 Mike,

 All Chinese people look the same to me, but to a Chinese person they don't.
 Why is this? Is there another clue here?

 Thanks,
 Deepak



 On Mon, Jul 26, 2010 at 9:10 PM, Mike Tintner tint...@blueyonder.co.uk
 wrote:
 
  David,
 
  T...
 --
 cheers,
 Deepak




-- 
cheers,
Deepak





Re: [agi] How do we hear music

2010-07-26 Thread deepakjnath
Okay Mike,

Let me write down my theory of this phenomenon. My intuition is that the
brain learns in steps and deltas. The brain takes in a fixed amount of only
new information at a time. So when a person who doesn't have too many
impressions (image memories) of Chinese people sees a Chinese person, he
takes in the round face and the eyes etc., which are new information to the
seer.

When the seer sees another Chinese person, the older Chinese person's image
comes back into working memory. The new person is stored as a delta of the
other person.

As the seer sees more and more people, the basic structure is no longer new.
The new features that get captured become the subtle variations from the
basic structure. This ability to identify new information becomes a crucial
function of the brain. Thus, as time passes with images of Chinese people,
the seer will be able to capture subtle variations and recognize the person.

People who are not musically trained find it difficult to distinguish
between notes. But repeated listening to the notes engraves the structure of
the notes into memory. And complex and subtle variations of the notes become
apparent to the listener, as the base notes are already stored in memory and
so are no longer new.
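
To caricature the theory in code: keep a running prototype of the category,
store each new face only as its delta from the prototype, and treat the size
of the delta as the amount of new information absorbed. The feature vectors,
names, and averaging rule below are all invented for illustration, not a
claim about how the brain actually does it:

import numpy as np

class DeltaMemory:
    def __init__(self):
        self.prototype = None  # running average of all faces seen so far
        self.n = 0

    def see(self, face):
        """Store a face as a delta; return how much of it was new."""
        face = np.asarray(face, dtype=float)
        if self.prototype is None:
            self.prototype = np.zeros_like(face)  # nothing known yet
        delta = face - self.prototype             # what is new in this face
        self.n += 1
        self.prototype += delta / self.n          # prototype absorbs the delta
        return float(np.linalg.norm(delta))

mem = DeltaMemory()
rng = np.random.default_rng(0)
base = rng.normal(size=8)                  # the shared facial structure
for i in range(5):
    novelty = mem.see(base + 0.1 * rng.normal(size=8))
    print(f"face {i}: novelty {novelty:.2f}")  # large at first, then small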

cheers,
Deepak



On Tue, Jul 27, 2010 at 12:54 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Deepak,

 No it's basically a distraction from the problem.  With time and closer
 inspection, they will all look different.

 Correction, it IS useful. It probably tells us something about how the
 brain and an AGI must work
 First you start with a round blob shape for a class of objects - a face
 blob, and then you refine it and refine it, add more and more detail, for
 different individuals.

 What makes Chinese faces difficult to individuate at first is that they have
 a particular characteristic wh. would be highly distinctive for a Western
 individual - relatively slanted eyes.  Imagine if a new race all had square
 jaws. You can't take your eyes off that feature at first. With time you
 learn to make adjustments for it, and notice the individual characteristics
 within the narrower eyes.  Ditto elephants are hard to individuate at first
 because they all have these massively distinctive features of huge ears and
 trunks.

 You start general, and gradually individuate - but you have to individuate
 - your life depends on being able to distinguish individual characteristics
 as well as general forms.

  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Monday, July 26, 2010 7:56 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How do we hear music

 Mike,

 All Chinese people look the same to me, but to a Chinese person they don't.
 Why is this? Is there another clue here?

 Thanks,
 Deepak

 On Mon, Jul 26, 2010 at 9:10 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  David,

 There must be a fair amount of cog sci/AI analysis of all this -  of how
 the brain analyses and remembers tunes  - and presumably leading theories
 (as for vision). Do you or anyone know more here?

 Also, you have noted something of extreme importance, wh. is a lot more
 than a step further.

 OTOH you've been analysing how we recognize the same, general tune in
 different, individual renditions.

 OTOH you've pointed out, we also recognize the INDIVIDUAL differences
 of/variations on the same genre/class - we appreciate the different ways
 Davis/Gillespie play as well as that they're playing the same tune.

 Now correct me but isn't the individual dimension of images of all kinds,
 almost entirely missing from AI? The capacity to recognize what
 makes individuals of a species individual, and not just that they belong to
 the same species.  Isn't visual object recognition for example almost
 entirely focussed on recognizing general objects rather than individual
 objects - that that's an example of a general doll, rather than an
 individual particularly beaten up, or just slightly and disturbingly altered
 doll?

 No doubt AI can recognize individual fingerprints, but it's the capacity
 to recognize individuals as variations on the general - to recognize that he
 has a particularly sarcastic smile, or she has a particularly lyrical, fluid
 walk,  or that that tune contrasts harmonious and discordant music (as per
 rap) in a distinctive way - that's missing, no?


  *From:* David Butler dbut...@flomedia.com
 *Sent:* Monday, July 26, 2010 3:44 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How do we hear music

 When we listen to music there are many elements that come into play that
 create our memory of how the song goes.  If you take a piece of instrumental
 music, you have the melody, a succession of tones in a certain order; the
 duration of each note in the melody; timbre, or tonal quality (guitar vs
 trombone); and tempo, how fast the song is played.  There is also phrasing:
 what part of the melody is emphasized using volume, change of tone quality,
 etc.  Is the melody played slurred, with all the notes run together, or
 staccato

Re: [agi] How do we hear music

2010-07-26 Thread deepakjnath
My theory is that there is no general class. Whatever you see that is new
becomes a new class for you. If you see it again, then it becomes a variation
of the earlier class. Basically, the brain is able to detect if something it
sees is new and store it along with an emotion of excitement. This is why
young people are excited to see things: many things are new to them. New
experiences give a kind of emotional high. This high helps to register the
new experience as a class. If something is just a variation of the new class,
then there is not too much excitement.

The neurons are initially nascent, and as we see new objects the neurons
become that memory. So when we see a car for the first time, the car neurons
are created - a basic class. When we see an Audi, our brain creates a
variation of the car neuron. Each time you see a car, the car neurons in the
brain get excited.

Common things become boring. A star who is seen all the time doesn't create
that excitement anymore. This is why stars who are overexposed lose their
star value.

The emotions also play a huge role in forming memories. - This we will take
up as another thread.

The ability of the child to move its mouth comes from another phenomenon,
namely mirror neurons. It basically means that a visual input of a movement
of the mouth creates memories that can be used to activate motor nerves to
move the mouth. This is a feedback mechanism.
It's similar to recording: when we record an mp3 song, we can use it to play
it back; similarly, when we hear a sound in our brain, the brain finds a way
to reproduce the sound through the mouth. I know this is very confusing; I
will try to explain this better in a later post.

But the question is interesting. How does the baby know where its mouth is?

Thanks,
Deepak



On Tue, Jul 27, 2010 at 1:35 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I'm not sure that's too diff. from what I'm saying.

 The interesting question is what does the brain use as its general class
 model against wh. to compare new individuals? It's unlikely to be a or the
 first individual face/object as you seem to be suggesting.

 Another factor here is that you interpret all these objects with your body
 - you understand other faces and bodies by projecting your own body into
 them - a remarkable example of that is the ability of a c. 2 month old
 infant to imitate the mouth movements of parents (remember it hasn't seen
 its own mouth yet).

  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Monday, July 26, 2010 8:38 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How do we hear music

 Okay Mike,

 Let me write down my theory of this phenomenon. My intuition is that the
 brain learns in steps and deltas. The brain takes in a fixed amount of only
 new information at a time. So when a person who doesn't have too many
 impressions (image memories) of Chinese people sees a Chinese person, he
 takes in the round face and the eyes etc., which are new information to the
 seer.

 When the seer sees another Chinese person, the older Chinese person's image
 comes back into working memory. The new person is stored as a delta of the
 other person.

 As the seer sees more and more people, the basic structure is no longer new.
 The new features that get captured become the subtle variations from the
 basic structure. This ability to identify new information becomes a crucial
 function of the brain. Thus, as time passes with images of Chinese people,
 the seer will be able to capture subtle variations and recognize the person.

 People who are not musically trained find it difficult to distinguish
 between notes. But repeated listening to the notes engraves the structure of
 the notes into memory. And complex and subtle variations of the notes become
 apparent to the listener, as the base notes are already stored in memory and
 so are no longer new.

 cheers,
 Deepak



 On Tue, Jul 27, 2010 at 12:54 AM, Mike Tintner 
 tint...@blueyonder.co.uk wrote:

  Deepak,

 No it's basically a distraction from the problem.  With time and closer
 inspection, they will all look different.

 Correction, it IS useful. It probably tells us something about how the
 brain and an AGI must work
 First you start with a round blob shape for a class of objects - a face
 blob, and then you refine it and refine it, add more and more detail, for
 different individuals.

 What makes Chinese faces difficult to individuate at first is that they have
 a particular characteristic wh. would be highly distinctive for a Western
 individual - relatively slanted eyes.  Imagine if a new race all had square
 jaws. You can't take your eyes off that feature at first. With time you
 learn to make adjustments for it, and notice the individual characteristics
 within the narrower eyes.  Ditto elephants are hard to individuate at first
 because they all have these massively distinctive features of huge ears and
 trunks.

 You start general, and gradually individuate - but you have to individuate
 - your life depends

[agi] Clues to the Mind: What do you think is the reason for selective attention

2010-07-24 Thread deepakjnath
http://www.youtube.com/watch?v=vJG698U2Mvo

Can anyone suggest why our brains exhibit this phenomenon?


cheers,
Deepak





Re: [agi] Clues to the Mind: What do you think is the reason for selective attention

2010-07-24 Thread deepakjnath
Thanks Dave, it's very interesting. This gives us more clues into how the
brain compresses and uses the relevant information while neglecting the
irrelevant information. But as Anast has demonstrated, the brain does need
priming in order to decide what is relevant and irrelevant. :)

Cheers,
Deepak

On Sun, Jul 25, 2010 at 5:34 AM, David Jones davidher...@gmail.com wrote:

 I also wanted to say that it is AGI related because this may be the way
 that the brain deals with ambiguity in the real world. It ignores many
 things if it can use expectations to constrain possibilities. It is an
 important way in which the brain tracks objects and identifies them without
 analyzing all of an object's features before matching over the whole image.

 On Jul 24, 2010 7:53 PM, David Jones davidher...@gmail.com wrote:

 Actually Deepak, this is AGI related.

 This week I finally found a cool body of research that I previously had no
 knowledge of. This research area is in psychology, which is probably why I
 missed it the first time. It has to do with human perception, object files,
 how we keep track of objects, individuate them, match them (the
 correspondence problem), etc.

 And I found the perfect article just now for you Deepak:
 http://www.duke.edu/~mitroff/papers/SimonsMitroff_01.pdf

 This article mentions why the brain does not notice things. And I just
 realized as I was reading it why we don't see the gorilla or other
 unexpected changes. The reason is this:
 We have a limited amount of processing power that we can apply to visual
 tracking and analysis. So, in attention-demanding situations such as these,
 we assign our processing resources to track only the things we are
 interested in. In fact, we probably do this all the time, but it is only
 when a lot of attention must be applied to a few objects that we notice
 we don't see some unexpected events.

 So, our brain knows where to expect the ball next and our visual processing
 is very busy tracking the ball and then seeing who is throwing it. As a
 result, it is unable to also process the movement of other objects. If the
 unexpected event is drastic enough, it will get our attention. But since
 some of the people are in black, our brain probably thinks it is just a
 person in black and doesn't consider it an event that is worthy of
 interrupting our intense tracking.

 Dave



 On Sat, Jul 24, 2010 at 4:58 PM, Anastasios Tsiolakidis sokratis.dk@
 gmail.com wrote:
 
  On Sat,...





-- 
cheers,
Deepak





[agi] Clues to the Mind: Illusions / Vision

2010-07-24 Thread deepakjnath
http://www.youtube.com/watch?v=QbKw0_v2clofeature=player_embedded

What we see is not really what we see. It's what you see and what you know
you are seeing. The brain superimposes the predicted images onto the viewed
image to actually form a perception of the image.

cheers,
Deepak





[agi] What is so special with the number seven

2010-07-22 Thread deepakjnath
Is there any predisposition to the number 7 in our brains?

Why do we have a scale with 7 notes? Why are there 7 colors in a rainbow?
Can this relate to how we perceive things?

Seven days in a week.

cheers,
Deepak





[agi] How do we hear music

2010-07-22 Thread deepakjnath
Why do we listen to a song sung in a different scale and yet identify it as
the same song?  Does it have something to do with the fundamental way in
which we store memory?

cheers,
Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread deepakjnath
Yes, we could do a 4x4 tic-tac-toe game like this on a PC. The training sets
can be generated simply by playing the agents against each other using
random moves and letting each agent know whether it passed or failed as a
feedback mechanism.
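
A minimal sketch of that data-generation loop (the board encoding, names,
and +1/-1/0 labeling scheme are my own, purely illustrative): two random
agents play 4x4 tic-tac-toe against each other, and every (state, move) pair
is labeled with the game's final outcome from the mover's point of view.

import random

# All 4-in-a-row lines on a 4x4 board: rows, columns, two diagonals.
LINES = ([list(range(r * 4, r * 4 + 4)) for r in range(4)] +
         [list(range(c, 16, 4)) for c in range(4)] +
         [[0, 5, 10, 15], [3, 6, 9, 12]])

def winner(board):
    for line in LINES:
        marks = {board[i] for i in line}
        if len(marks) == 1 and marks != {"."}:
            return marks.pop()
    return None

def random_game():
    """One game of random self-play; return move history and the winner."""
    board, history = ["."] * 16, []
    for turn in range(16):
        player = "XO"[turn % 2]
        move = random.choice([i for i, c in enumerate(board) if c == "."])
        history.append(("".join(board), move, player))
        board[move] = player
        if winner(board):
            return history, player
    return history, None  # draw

def training_set(games=1000):
    """Label each (state, move) +1/-1/0 by the final result: the feedback."""
    data = []
    for _ in range(games):
        history, result = random_game()
        for state, move, player in history:
            label = 0 if result is None else (1 if player == result else -1)
            data.append((state, move, label))
    return data

print(len(training_set(10)), "labeled (state, move) examples")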

Cheers,
Deepak

On Wed, Jul 21, 2010 at 9:02 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike, I think we all agree that we should not have to tell an AGI the steps
 to solving problems. It should learn and figure it out, like the way that
 people figure it out.

 The question is how to do that. We know that it is possible. For example, I
 could write a chess program that I could not win against. I could write the
 program in such a way that it learns to improve its game by playing against
 itself or other opponents. I could write it in such a way that it initially
 does not know the rules for chess, but instead learns the rules by being
 given examples of legal and illegal moves.

 What we have not yet been able to do is scale this type of learning and
 problem solving up to general, human level intelligence. I believe it is
 possible, but it will require lots of training data and lots of computing
 power. It is not something you could do on a PC, and it won't be cheap.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* Mike Tintner tint...@blueyonder.co.uk
 *To:* agi agi@v2.listbox.com
 *Sent:* Mon, July 19, 2010 9:07:53 PM

 *Subject:* Re: [agi] Of definitions and tests of AGI

 The issue isn't what a computer can do. The issue is how you structure the
 computer's or any agent's thinking about a problem. Programs/Turing machines
 are only one way of structuring thinking/problemsolving - by, among other
 things, giving the computer a method/process of solution. There is an
 alternative way of structuring a computer's thinking, which incl., among
 other things, not giving it a method/ process of solution, but making it
 rather than a human programmer do the real problemsolving.  More of that
 another time.

  *From:* Matt Mahoney matmaho...@yahoo.com
 *Sent:* Tuesday, July 20, 2010 1:38 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

  Creativity is the good feeling you get when you discover a clever
 solution to a hard problem without knowing the process you used to discover
 it.

 I think a computer could do that.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Mike Tintner tint...@blueyonder.co.uk
 *To:* agi agi@v2.listbox.com
 *Sent:* Mon, July 19, 2010 2:08:28 PM
 *Subject:* Re: [agi] Of definitions and tests of AGI

 Yes that's what people do, but it's not what programmed computers do.

 The useful formulation that emerges here is:

 narrow AI (and in fact all rational) problems  have *a method of solution*
 (to be equated with general method)   - and are programmable (a program is
 a method of solution)

 AGI  (and in fact all creative) problems do NOT have *a method of solution*
 (in the general sense)  -  rather a one-off *way of solving the problem* has
 to be improvised each time.

 AGI/creative problems do not in fact have a method of solution, period.
 There is no (general) method of solving either the toy box or the
 build-a-rock-wall problem - one essential feature which makes them AGI.

 You can learn, as you indicate, from *parts* of any given AGI/creative
 solution, and apply the lessons to future problems - and indeed with
 practice, should improve at solving any given kind of AGI/creative problem.
 But you can never apply a *whole* solution/way to further problems.

 P.S. One should add that in terms of computers, we are talking here of
 *complete, step-by-step* methods of solution.


  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 5:09 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI



  And are you happy with:

 AGI is about devising *one-off* methods of problemsolving (that only apply
 to the individual problem, and cannot be re-used - at

  least not in their totality)



 Yes exactly, isn't that what people do?  Also, I think that being able to
 recognize where past solutions can be generalized and where past solutions
 can be varied and reused is a detail of how intelligence works that is
 likely to be universal.



  vs

 narrow AI is about applying pre-existing *general* methods of
 problemsolving  (applicable to whole classes of problems)?



  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 4:45 PM
  *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 Well, solving ANY problem is a little too strong.  This is AGI, not AGH
 (artificial godhead), though AGH could be an unintended consequence ;).  So
 I would rephrase solving any problem as being able to come up with
 reasonable approaches and strategies to any problem (just as humans are able
 to do).

 On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner 
 

Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread deepakjnath
Dave,

I agree completely on your point of having a general unifying system that
will solve a simple problem. This system when scaled should be able to solve
all the other problems that you were talking about.

How will we recognize the solution when we get it? I believe that it will be
elegant and simple and will address many problems rather than just one.

I disagree on breaking the problem up and looking at it step by step. That is
how we solve any problem logically. Millions of years of evolution have gone
into perfecting our minds and optimizing the brain. So following the normal
engineering way of breaking a big problem into small manageable problems and
working on them may take a long time, because when we optimize locally we
may find that globally the system is not optimized, and vice versa.

My approach is to look at the whole problem and find a simple solution that
will be the answer to many problems. We should use the superior processing
power of the subconscious to find a solution, the same way artists make
their creations.

I am in no way discounting the enormity of the challenge. But different
approaches are valid, and it would be too arrogant to say that my approach is
superior to another one. So the more radically different approaches there
are, the better the chances of finding a solution.

Cheers,
Deepak






On Mon, Jul 19, 2010 at 1:43 AM, David Jones davidher...@gmail.com wrote:

 Deepak,

 I think you would be much better off focusing on something more practical.
 Understanding a movie and all the myriad things going on, their
 significance, etc... that's AI complete. There is no way you are going to
 get there without a hell of a lot of steps in between. So, you might as well
 focus on the steps required to get there. Such a test is so complicated,
 that you cannot even start, except to look for simpler test cases and goals.


 My approach to testing agi has been to define what AGI must accomplish.
 Which I have in the following steps:
 1) understand the environment
 2) understand ones own actions and how they affect the environment
 3) understand language
 4) learn goals from other people through language
 5) perform planning and attempt to achieve goals
 6) other miscellaneous requirements.

 Each step must be accomplished in a general way. By general, I mean that it
 can solve many many problems with the same programming.

 Each step must be done in order because each step requires previous steps
 to proceed. So, to me, the most important place to start is general
 environment understanding.

 Then, now that you know where to start, you pick more specific goals and
 test cases. How do you develop and test general environment understanding?
 What is a simple test case you can develop on? What are the fundamental
 problems and principles involved? What is required to solve these problems?

 Those are the sorts of tests you should be considering. But that only comes
 after you decide what AGI requires and steps required. Maybe you'll agree
 with me, maybe you won't. So, that's how I would recommend going about it.

 Dave

 On Sun, Jul 18, 2010 at 4:04 PM, deepakjnath deepakjn...@gmail.com wrote:

 Let me clarify. As you all know, there are some things computers are good at
 doing and some things that humans can do but a computer cannot.

 One of the tests that I was thinking about recently is to have two movies
 shown to the AGI. Both movies will have the same story, but one would be a
 totally different remake of the film, probably in a different language and
 setting. If the AGI is able to understand the subplot and say that the
 story line is similar in the two movies, then it could be a good test for AGI
 structure.

 The ability of a system to understand its environment and underlying
 subplots is an important requirement of AGI.

 Deepak

 On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner 
 tint...@blueyonder.co.uk wrote:

  Please explain/expound freely why you're not convinced - and indicate
 what you expect,  - and I'll reply - but it may not be till tomorrow.

 Re your last point, there def. is no consensus on a general problem/test
 OR a def. of AGI.

 One flaw in your expectations seems to be a desire for a single test -
 almost by definition, there is no such thing as

 a) a single test - i.e. there should be at least a dual or serial test -
 having passed any given test, like the rock/toy test, the AGI must be
 presented with a new adjacent test for wh. it has had no preparation,
 like say building with cushions or sand bags or packing with fruit. (and
 neither rock/toy test state that clearly)

 b) one kind of test - this is an AGI, so it should be clear that if it
 can pass one kind of test, it has the basic potential to go on to many
 different kinds, and it doesn't really matter which kind of test you start
 with - that is partly the function of having a good definition of AGI.


  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Sunday, July 18, 2010 8:03 PM
 *To:* agi agi@v2.listbox.com

Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread deepakjnath
‘The intuitive mind is a sacred gift and the rational  mind is a faithful
servant. We have created a society that honours the servant and has
forgotten the gift.’

‘The intellect has little to do on the road to discovery. There comes a leap
in consciousness, call it intuition or what you will, and the solution comes
to you and you don’t know how or why.’

— Albert Einstein

We are talking here like programmers who need to build a new system: just
divide the problem, solve it piece by piece, arrange the pieces, and voila.
We are missing something fundamental here. That, I believe, has to come as a
stroke of genius to someone.

thanks,
Deepak




On Mon, Jul 19, 2010 at 4:10 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  No, Dave & I vaguely agree here that you have to start simple. To think
 of movies is massively confused - rather like saying: when we have created
 an entire new electric supply system for cars, we will have solved the
 problem of replacing gasoline - first you have to focus just on inventing a
 radically cheaper battery, before you consider the possibly hundreds to
 thousands of associated inventions and innovations.involved in creating a
 major new supply system.

 Here it would be much simpler to focus on understanding a single
 photographic scene - or real, directly-viewed scene - of objects, rather
 than the many thousands involved in a movie.

 In terms of language, it would be simpler to focus on understanding just
 two consecutive sentences of a text or section of dialogue  - or even as
 I've already suggested, just the flexible combinations of two words - rather
 than the hundreds of lines and many thousands of words involved in a movie
 or play script.

 And even this is probably all too evolved, for humans only came to use
 formal representations of the world v. recently in evolution.

 The general point -  a massively important one - is that AGI-ers cannot
 continue to think of AGI in terms of massively complex and evolved
 intelligent systems, as you are doing. You have to start with the simplest
 possible systems and gradually evolve them.  Anything else is a defiance of
 all the laws of technology - and will see AGI continuing to go absolutely
 nowhere.

  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Monday, July 19, 2010 5:19 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

Exactly my point. So if I show a demo of an AGI system that can see two
 movies and understand that the plots of the movies are the same even though
 they are two entirely different movies, you would agree that we have created
 a true AGI.

Yes, there are always a lot of things we need to do before we reach that
 level. It's just good to know the destination so that we will know it when
 it arrives.




 On Mon, Jul 19, 2010 at 2:18 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Jeez,  no AI program can understand *two* consecutive *sentences* in a
 text - can understand any text period - can understand language, period. And
 you want an AGI that can understand a *story*. You don't seem to understand
 that requires cognitively a fabulous, massively evolved, highly educated,
 hugely complex set of powers .

 No AI can understand a photograph of a scene, period - a crowd scene, a
 house by the river. Programs are hard put to recognize any objects other
 than those in v. standard positions. And you want an AGI that can understand
 a *movie*.

 You don't seem to realise that we can't take the smallest AGI *step* yet
 - and you're fantasizing about a superevolved AGI globetrotter.

 That's why Benjamin & I tried to focus on v. v. simple tests - they're
 still way too complex & they (or comparable tests) will have to be refined
 down considerably for anyone who is interested in practical vs sci-fi
 fantasy AGI.

 I recommend looking at Packbots and other military robots and hospital
 robots and the like, and asking how we can free them from their human
 masters and give them the very simplest of capacities to rove and handle the
 world independently - like handling and travelling on rocks.

 Anyone dreaming of computers or robots that can follow Gone with The
 Wind or become a child (real) scientist in the foreseeable future, pace Ben,
 has no realistic understanding of what is involved.
  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Sunday, July 18, 2010 9:04 PM
   *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 Let me clarify. As you all know, there are some things computers are good at
 doing and some things that humans can do but a computer cannot.

 One of the tests that I was thinking about recently is to have two movies
 shown to the AGI. Both movies will have the same story, but one would be a
 totally different remake of the film, probably in a different language and
 setting. If the AGI is able to understand the subplot and say that the
 story line is similar in the two movies, then it could be a good test for AGI
 structure.

[agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread deepakjnath
I wanted to know if there is any benchmark test that can really convince the
majority of today's AGIers that a system is a true AGI.

Is there some real prize like the XPrize for AGI or AI in general?

thanks,
Deepak





Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread deepakjnath
Yes, but is there a competition like the XPrize or something that we can
work towards?

On Sun, Jul 18, 2010 at 6:40 PM, Panu Horsmalahti nawi...@gmail.com wrote:

 2010/7/18 deepakjnath deepakjn...@gmail.com

 I wanted to know if there is any benchmark test that can really convince
 the majority of today's AGIers that a system is a true AGI.

 Is there some real prize like the XPrize for AGI or AI in general?

 thanks,
 Deepak


 Have you heard about the Turing test?

 - Panu Horsmalahti




-- 
cheers,
Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread deepakjnath
So if I have a system that is close to AGI, I have no way of really knowing
it, right?

Even if I believe that my system is a true AGI, there is no way of convincing
others irrefutably that this system is indeed an AGI and not just an advanced
AI system.

I have read the toy box problem and the rock wall problem, but I am sure not
many people will be convinced by them.

I wanted to know if there is any consensus on a general problem which can be
solved, and only solved, by a true AGI. Without such a test bench, how will
we know if we are moving closer to or further from our quest? There is no
map.

Deepak



On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I realised that what is needed is a *joint* definition *and*  range of
 tests of AGI.

 Benjamin Johnston has submitted one valid test - the toy box problem. (See
 archives).

 I have submitted another still simpler valid test - build a rock wall from
 rocks given, (or fill an earth hole with rocks).

 However, I see that there are no valid definitions of AGI that explain what
 AGI is generally, and why these tests are indeed AGI. Google - there are v.
 few defs. of AGI or Strong AI, period.

 The most common: AGI is human-level intelligence -  is an
 embarrassing non-starter - what distinguishes human intelligence? No
 explanation offered.

 The other two are also inadequate if not as bad: Ben's solves a variety of
 complex problems in a variety of complex environments. Nope, so does  a
 multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's -
 something to do with insufficient knowledge and resources...
 Insufficient is open to narrow AI interpretations and reducible to
 mathematically calculable probabilities or uncertainties. That doesn't
 distinguish AGI from narrow AI.

 The one thing we should all be able to agree on (but who can be sure?) is
 that:

 ** an AGI is a general intelligence system, capable of independent
 learning**

 i.e. capable of independently learning new activities/skills with minimal
 guidance or even, ideally, with zero guidance (as humans and animals are) -
 and thus acquiring a general, all-round range of intelligence.

 This is an essential AGI goal -  the capacity to keep entering and
 mastering new domains of both mental and physical skills WITHOUT being
 specially programmed each time - that crucially distinguishes it from narrow
 AI's, which have to be individually programmed anew for each new task. Ben's
 AGI dog exemplified this in a v simple way -  the dog is supposed to be able
 to learn to fetch a ball, with only minimal instructions, as real dogs do -
 they can learn a whole variety of new skills with minimal instruction.  But
 I am confident Ben's dog can't actually do this.

 However, the independent learning def., while focussing on the distinctive
 AGI goal, still is not detailed enough by itself.

 It requires further identification of the **cognitive operations** which
 distinguish AGI,  and wh. are exemplified by the above tests.

 [I'll stop there for interruptions/comments & continue another time].

  P.S. Deepakjnath,

 It is vital to realise that the overwhelming majority of AGI-ers do not *
 want* an AGI test -  Ben has never gone near one, and is merely typical in
 this respect. I'd put almost all AGI-ers here in the same league as the US
 banks, who only want mark-to-fantasy rather than mark-to-market tests of
 their assets.




-- 
cheers,
Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread deepakjnath
Let me clarify. As you all know, there are some things computers are good at
doing and some things that humans can do but a computer cannot.

One of the tests that I was thinking about recently is to have two movies
shown to the AGI. Both movies will have the same story, but one would be a
totally different remake of the film, probably in a different language and
setting. If the AGI is able to understand the subplot and say that the story
line is similar in the two movies, then it could be a good test for AGI
structure.

The ability of a system to understand its environment and underlying
subplots is an important requirement of AGI.

Deepak

On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Please explain/expound freely why you're not convinced - and indicate
 what you expect,  - and I'll reply - but it may not be till tomorrow.

 Re your last point, there def. is no consensus on a general problem/test OR
 a def. of AGI.

 One flaw in your expectations seems to be a desire for a single test -
 almost by definition, there is no such thing as

 a) a single test - i.e. there should be at least a dual or serial test -
 having passed any given test, like the rock/toy test, the AGI must be
 presented with a new adjacent test for wh. it has had no preparation,
 like say building with cushions or sand bags or packing with fruit. (and
 neither rock/toy test state that clearly)

 b) one kind of test - this is an AGI, so it should be clear that if it can
 pass one kind of test, it has the basic potential to go on to many different
 kinds, and it doesn't really matter which kind of test you start with - that
 is partly the function of having a good definition of AGI.


  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Sunday, July 18, 2010 8:03 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 So if I have a system that is close to AGI, I have no way of really knowing
 it, right?

 Even if I believe that my system is a true AGI, there is no way of
 convincing others irrefutably that this system is indeed an AGI and not just
 an advanced AI system.

 I have read the toy box problem and the rock wall problem, but I am sure not
 many people will be convinced by them.

 I wanted to know if there is any consensus on a general problem which can be
 solved, and only solved, by a true AGI. Without such a test bench, how will
 we know if we are moving closer to or further from our quest? There is no
 map.

 Deepak



 On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner 
 tint...@blueyonder.co.uk wrote:

  I realised that what is needed is a *joint* definition *and*  range of
 tests of AGI.

 Benjamin Johnston has submitted one valid test - the toy box problem. (See
 archives).

 I have submitted another still simpler valid test - build a rock wall from
 rocks given, (or fill an earth hole with rocks).

 However, I see that there are no valid definitions of AGI that explain
 what AGI is generally, and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.

 The most common: AGI is human-level intelligence -  is an
 embarrassing non-starter - what distinguishes human intelligence? No
 explanation offered.

 The other two are also inadequate if not as bad: Ben's solves a variety
 of complex problems in a variety of complex environments. Nope, so does  a
 multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's -
 something to do with insufficient knowledge and resources...
 Insufficient is open to narrow AI interpretations and reducible to
 mathematically calculable probabilities or uncertainties. That doesn't
 distinguish AGI from narrow AI.

 The one thing we should all be able to agree on (but who can be sure?) is
 that:

 ** an AGI is a general intelligence system, capable of independent
 learning**

 i.e. capable of independently learning new activities/skills with minimal
 guidance or even, ideally, with zero guidance (as humans and animals are) -
 and thus acquiring a general, all-round range of intelligence.

 This is an essential AGI goal -  the capacity to keep entering and
 mastering new domains of both mental and physical skills WITHOUT being
 specially programmed each time - that crucially distinguishes it from narrow
 AI's, which have to be individually programmed anew for each new task. Ben's
 AGI dog exemplified this in a v simple way -  the dog is supposed to be able
 to learn to fetch a ball, with only minimal instructions, as real dogs do -
 they can learn a whole variety of new skills with minimal instruction.  But
 I am confident Ben's dog can't actually do this.

 However, the independent learning def., while focussing on the distinctive
 AGI goal, still is not detailed enough by itself.

 It requires further identification of the **cognitive operations** which
 distinguish AGI,  and wh. are exemplified by the above tests.

 [I'll stop there for interruptions/comments & continue another time].

  P.S

Re: [agi] Questions for an AGI

2010-06-24 Thread deepakjnath
I would ask: "What should I ask, if I could ask an AGI anything?"


On Thu, Jun 24, 2010 at 11:34 AM, The Wizard key.unive...@gmail.com wrote:


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com




-- 
cheers,
Deepak





Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread deepakjnath
The brain does not get the high frame rate signals, as the eye itself
only gives the brain images at 24 frames per second. Otherwise you wouldn't
be able to watch a movie.
Any comments?

On 6/21/10, Matt Mahoney matmaho...@yahoo.com wrote:
 Your computer monitor flashes 75 frames per second, but you don't notice any
 flicker because light sensing neurons have a response delay of about 100 ms.
 Motion detection begins in the retina by cells that respond to contrast
 between light and dark moving in specific directions computed by simple,
 fixed weight circuits. Higher up in the processing chain, you detect motion
 when your eyes and head smoothly track moving objects using kinesthetic
 feedback from your eye and neck muscles and input from your built in
 accelerometer in the semicircular canals in your ears. This is all very
 complicated of course. You are more likely to detect motion in objects that
 you recognize and expect to move, like people, animals, cars, etc.

  -- Matt Mahoney, matmaho...@yahoo.com




 
 From: David Jones davidher...@gmail.com
 To: agi agi@v2.listbox.com
 Sent: Mon, June 21, 2010 9:39:30 AM
 Subject: [agi] Re: High Frame Rates Reduce Uncertainty

 Ignoring Steve because we are simply going to have to agree to disagree...
 And I don't see enough value in trying to understand his paper. I said the
 math was overly complex, but what I really meant is that the approach is
 overly complex and so filled with research-specific jargon that I don't care
 to try to understand it. It is overly concerned with copying the way that
 the brain does things. I don't care how the brain does it. I care about why
 the brain does it. It's the same as the analogy of giving a man a fish or
 teaching him to fish. You may figure out how the brain works, but it does
 you little good if you don't understand why it works that way. You would
 have to create a synthetic brain to take advantage of the knowledge, which
 is not a workable approach to AGI for many reasons. There are a million
 other ways, even better ways, to do it than the way the brain does it. Just
 because the brain accidentally found 1 way out of a million to do it doesn't
 make it the right way for us to develop AGI.

 So, moving on

 I can't find references online, but I've read that the Air Force studied the
 ability of the human eye to identify aircraft in images that were flashed on
 a screen for 1/220th of a second. So, clearly, the human eye can at least
 distinguish 220 fps if it operated that way. Of course, it may not operate
 on frames per second, but that is beside the point. I've also heard other
 people say that a study has shown that the human eye takes 1000 exposures
 per second. They had no references though, so it is hearsay.

 The point was that the brain takes advantage of the fact that, with such a
 high exposure rate, the changes between each image are very small if the
 objects are moving. This allows it to distinguish movement and visual
 changes with extremely low uncertainty. If it detects that the changes
 required to match two parts of an image are too large, or the distance
 between matches is too far, it can reject the match. This allows it to
 accept only very low uncertainty changes and reject changes that have high
 uncertainty.
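
 A toy version of the argument (the point model, noise level, and threshold
 are all invented for illustration, not taken from any paper): track feature
 points by nearest neighbor between consecutive frames and accept a match
 only if the displacement is under a tight threshold. At a high frame rate
 the true per-frame motion stays below the threshold, so almost nothing is
 rejected; at a low frame rate the same threshold rejects the matches as too
 uncertain.

 import numpy as np

 def match_points(prev, curr, max_disp):
     """Nearest-neighbor matching with a displacement threshold."""
     matches = []
     for i, p in enumerate(prev):
         d = np.linalg.norm(curr - p, axis=1)
         j = int(d.argmin())
         if d[j] <= max_disp:  # reject high-uncertainty matches
             matches.append((i, j))
     return matches

 rng = np.random.default_rng(1)
 points = rng.uniform(0, 100, size=(20, 2))  # feature points on an object
 velocity = np.array([40.0, 0.0])            # units per second

 for fps in (10, 240):
     step = velocity / fps                   # per-frame displacement
     nxt = points + step + rng.normal(0, 0.1, size=points.shape)
     ok = match_points(points, nxt, max_disp=2.0)
     print(f"{fps:4d} fps: {len(ok)}/20 points matched")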

 I think this is a very significant discovery regarding how the brain is able
 to learn in such an ambiguous world with so many variables that are
 difficult to disambiguate, interpret and understand.

 Dave

 On Fri, Jun 18, 2010 at 2:19 PM, David Jones davidher...@gmail.com wrote:

I just came up with an awesome idea. I just realized that the brain takes
 advantage of high frame rates to reduce uncertainty when it is estimating
 motion. The slower the frame rate, the more uncertainty there is because
 objects may have traveled too far between images to match with high
 certainty using simple techniques.

So, this made me think: what if the secret to the brain's ability to learn
 generally stems from this high frame rate trick? What if we made a system
 that could process even higher frame rates than the brain can? By doing this
 you can reduce the uncertainty of matches very, very low (well, in my theory
 so far). If you can do that, then you can learn about the objects in a
 video, and how they move together or separately, with very high certainty.

You see, matching is the main barrier when learning about objects. But with
 a very high frame rate, we can use a fast algorithm and could potentially
 reduce the uncertainty to almost nothing. Once we learn about objects,
 matching gets easier because now we have training data and experience to
 take advantage of.

In addition, you can also gain knowledge about lighting, color variation,
 noise, etc. With that knowledge, you can then automatically create a model
 of the object with extremely high confidence. You will also be able to
 determine the effects of light and noise on the object's appearance, which
 will help match the object