Re: [agi] Computer Vision not as hard as I thought!

2010-08-06 Thread Jim Bromer
On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:
*So, why computer vision? Why can't we just enter knowledge manually?*

a) The knowledge we require for AI to do what we want is vast and complex,
and we can prove that it is completely ineffective to enter the knowledge
we need manually.
b) Computer vision is the most effective means of gathering facts about the
world. Knowledge and experience can be gained from analysis of these facts.
c) Language is not learned through passive observation. The associations
that words have to the environment and our common sense knowledge of the
environment/world are absolutely essential to language learning,
understanding and disambiguation. When visual information is available,
children use visual cues from their parents and from the objects they are
interacting with to figure out word-environment associations. If visual info
is not available, touch is essential to replace the visual cues. Touch can
provide much of the same info as vision, but it is not as effective because
not everything is in reach and it provides less information than vision.
There is some very good documentation on how children learn language that
supports this. One example is *How Children Learn Language* by William
O'Grady.
d) The real world cannot be predicted blindly. It is absolutely essential to
be able to directly observe it and receive feedback.
e) Manual entry of knowledge, even if possible, would be extremely slow and
would be a very serious bottleneck (it already is). This is a major reason we
want AI... to increase our manpower and remove manpower-related
bottlenecks.


Discovering a way to get a computer program to interpret a human language is
a difficult problem.  The feeling that an AI program might be able to attain
a higher level of intelligence if only it could examine data from a variety
of different kinds of sensory input modalities is not new.  It has been
tried repeatedly during the past 35 years.  But there is no experimental
data (that I have heard of) suggesting that this method is the only way
anyone will achieve intelligence.



I have tried to explain that I believe the problem is twofold.  First of
all, there have been quite a few AI programs that worked really well as long
as the problem was simple enough.  This suggests that the complexity of what
is being understood is a critical factor.  This in turn suggests that using
different input modalities would not, in itself, make AI possible.
Secondly, there is the problem of getting the computer to accurately model
what it can know in such a way that the model can be effectively utilized
at higher degrees of complexity.  I consider this to be a conceptual
integration problem.  We do not know how to integrate different kinds of
ideas (or idea-like knowledge) in an effective manner, and as a result we
have not seen the gradual advancement in AI programming that we would expect
to see given all the advances in computer technology that have been
occurring.



Both visual analysis and linguistic analysis are significant challenges in
AI programming.  The idea that combining both of them would make the problem
half as hard may not be any crazier than saying that it would make the
problem twice as hard, but without experimental evidence it isn't any saner
either.

Jim Bromer




On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:

 :D Thanks Jim for paying attention!

 One very cool thing about the human brain is that we use multiple feedback
 mechanisms to correct for such problems as observer movement. For example,
 the inner ear senses your body's movement and provides feedback for visual
 processing. This is why we get nauseous when the ear disagrees with the eyes
 and other senses. As you said, eye muscles also provide feedback about how
 the eye itself has moved. In example papers I have read, such as Object
 Discovery through Motion, Appearance and Shape, the researchers know the
 position of the camera (I'm not sure how) and use that to determine which
 moving features are closest to the camera's movement, and therefore are not
 actually moving. Once you know how much the camera moved, you can try to
 subtract this from apparent motion.

 You're right that I should attempt to implement the system. I think I will
 in fact, but it is difficult because I have limited time and resources. My
 main goal is to make sure it is accomplished, even if not by me. So,
 sometimes I think that it is better to prove that it can be done than to
 actually spend a much longer amount of time to actually do it myself. I am
 struggling to figure out how I can gather the resources or support to
 accomplish the monstrous task. I think that I should work on the theoretical
 basis in addition to the actual implementation. This is likely important to
 make sure that my design is well grounded and reflects reality. It 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-06 Thread David Jones
On Fri, Aug 6, 2010 at 7:37 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:
 *So, why computer vision? Why can't we just enter knowledge manually?*

 a) The knowledge we require for AI to do what we want is vast and complex,
 and we can prove that it is completely ineffective to enter the knowledge
 we need manually.
 b) Computer vision is the most effective means of gathering facts about the
 world. Knowledge and experience can be gained from analysis of these facts.
 c) Language is not learned through passive observation. The associations
 that words have to the environment and our common sense knowledge of the
 environment/world are absolutely essential to language learning,
 understanding and disambiguation. When visual information is available,
 children use visual cues from their parents and from the objects they are
 interacting with to figure out word-environment associations. If visual info
 is not available, touch is essential to replace the visual cues. Touch can
 provide much of the same info as vision, but it is not as effective because
 not everything is in reach and it provides less information than vision.
 There is some very good documentation on how children learn language that
 supports this. One example is *How Children Learn Language* by William
 O'Grady.
 d) The real world cannot be predicted blindly. It is absolutely essential
 to be able to directly observe it and receive feedback.
 e) Manual entry of knowledge, even if possible, would be extremely slow and
 would be a very serious bottleneck (it already is). This is a major reason
 we want AI... to increase our manpower and remove manpower-related
 bottlenecks.
  

 Discovering a way to get a computer program to interpret a human language
 is a difficult problem.  The feeling that an AI program might be able to
 attain a higher level of intelligence if only it could examine data from a
 variety of different kinds of sensory input modalities is not new.  It has
 been tried repeatedly during the past 35 years.  But there is no
 experimental data (that I have heard of) suggesting that this method is
 the only way anyone will achieve intelligence.


if only it could examine data from a variety of different kinds of sensory
input modalities

That statement suggests that such different kinds of input have no
meaningful relationship to the problem at hand. I'm not talking about
different kinds of input. I'm talking about explicitly and deliberately
extracting facts about the environment from sensory perception, specifically
remote perception or visual perception. The input modalities are not what
is important. It is the facts that you can extract from computer vision that
are useful in understanding what is out there in the world, what
relationships and associations exist, and how language is associated with
the environment.

It is well documented that children learn language by interacting with
adults around them and using cues from them to learn how the words they
speak are associated with what is going on. It is not hard to support the
claim that extensive knowledge about the world is important for
understanding and interpreting human language. Nor is it hard to support the
idea that such knowledge can be gained from computer vision.





  I have tried to explain that I believe the problem is twofold.  First of
 all, there have been quite a few AI programs that worked really well as
 long as the problem was simple enough.  This suggests that the complexity
 of what is being understood is a critical factor.  This in turn suggests
 that using different input modalities would not, in itself, make AI
 possible.


Your conclusion isn't supported by your arguments. I'm not even saying it
makes AI possible. I'm saying that a system can make reasonable inferences
and come to reasonable conclusions with sufficient knowledge. Without
sufficient knowledge, there is reason to believe that it is significantly
harder and often impossible to come to correct conclusions.

Therefore, gaining knowledge about how things are related is not just
helpful in making correct inferences; it is required. So, different input
modalities that can give you facts about the world, which in turn give you
knowledge about the world, do make correct reasoning possible when it
otherwise would not be.

You see, it has nothing to do with the source of the info or whether it is
more info or not. It has everything to do with the relationships that
information has. Just calling them different input modalities is not
correct.



   Secondly, there is the problem of getting the computer to accurately
 model what it can know in such a way that the model can be effectively
 utilized at higher degrees of complexity.


This is an engineering problem, not necessarily a problem that can't be
solved. When we get 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread Jim Bromer
On Tue, Aug 3, 2010 at 11:52 AM, David Jones davidher...@gmail.com wrote:
I've suddenly realized that computer vision of real images is very much
solvable and that it is now just a matter of engineering...
I've also realized that I don't actually have to implement it, which is what
is most difficult because even if you know a solution to part of the problem
has certain properties and issues, implementing it takes a lot of time.
Whereas I can just assume I have a less than perfect solution with the
properties I predict from other experiments. Then I can solve the problem
without actually implementing every last detail...
*First*, existing methods find observations that are likely true by
themselves. They find data patterns that are very unlikely to occur by
coincidence, such as many features moving together over several frames of a
video and over a statistically significant distance. They use thresholds to
ensure that the observed changes are likely transformations of the original
property observed or to ensure the statistical significance of an
observation. These are highly likely true observations and not coincidences
or noise.
--
Just looking at these statements, I can find three significant errors. (I do
agree with some of what you said, like the significance of finding
observations that are likely true in themselves.)  When the camera moves (in
a rotation or pan), many features will appear 'to move together over a
statistically significant distance'.  One might suppose that an animal can
sense the movement of its own eyes, but then again, it can rotate its head
and use its vision to gauge that rotation.  So right away there is some kind
of serious error in your statement.  It might be resolvable; it is just that
your statement does not actually do the resolution.  I do believe that
computer vision is possible with contemporary computers, but you are also
saying that while you can't get your algorithms to work the way you had
hoped, it doesn't really matter because you can figure it all out without
the work of implementation.  My point of view is that these represent major
errors in reasoning.
I hope to get back to actual visual processing experiments again.  Although
I don't think that computer vision is necessary for AGI, I do think that
there is so much to be learned from experimenting with computer vision that
it is a serious mistake not to take advantage of the opportunity.
Jim Bromer


On Tue, Aug 3, 2010 at 11:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images do
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I thought
 is that I found a way to describe why existing solutions work, how they work
 and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 coincidence, such as many features moving together over several frames of a
 video and over a statistically significant distance. They use thresholds to
 ensure that the observed changes are likely transformations of the original
 property observed or to ensure the statistical significance of an
 observation. These are highly likely true observations and not coincidences
 or noise.
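
 To make this concrete, here is a minimal sketch of that first kind of
 check (Python with OpenCV; the OpenCV calls are real, but the thresholds
 are illustrative assumptions, not tuned values):

 import cv2
 import numpy as np

 MIN_DISTANCE = 5.0  # pixels: motion must be statistically significant
 MAX_SPREAD = 2.0    # pixels: the features must move *together*

 def coherent_motion(prev_gray, next_gray):
     # Track corner features from one frame to the next.
     pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)
     nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                  pts, None)
     ok = status.ravel() == 1
     flow = (nxt - pts).reshape(-1, 2)[ok]  # per-feature displacement
     mean_flow = flow.mean(axis=0)
     distance = np.linalg.norm(mean_flow)
     spread = np.linalg.norm(flow - mean_flow, axis=1).mean()
     # A large shared displacement with a small spread is very unlikely
     # to be coincidence or noise.
     return distance > MIN_DISTANCE and spread < MAX_SPREAD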

 *Second*, they make sure that the other possible explanations of the
 observations are very unlikely. This is usually done using a threshold,
 plus a second threshold on the difference between the best match and the
 second-best match. This makes sure that second-best matches are much
 farther away than the best match. This is important because it's not enough
 to find a very likely match if there are 1000 very likely matches. You have
 to be able to show that the other matches are very unlikely; otherwise the
 specific match you pick may be just a tiny bit better than the others, and
 the confidence of that match would be very low.
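
 A minimal sketch of that second check, commonly known as the ratio test
 (Python with OpenCV; the 0.7 ratio is an illustrative assumption):

 import cv2

 def confident_matches(desc_a, desc_b, ratio=0.7):
     # Find the two nearest candidate matches for each descriptor.
     matcher = cv2.BFMatcher(cv2.NORM_L2)
     good = []
     for pair in matcher.knnMatch(desc_a, desc_b, k=2):
         if len(pair) < 2:
             continue
         best, second = pair
         # Keep the match only if the runner-up is much farther away;
         # otherwise many near-equal candidates exist and confidence
         # in this specific match is low.
         if best.distance < ratio * second.distance:
             good.append(best)
     return good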


 So, my initial design plans are as follows. Note: I will probably not
 actually implement the system because the engineering part dominates the
 time. I'd rather convert real 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
:D Thanks Jim for paying attention!

One very cool thing about the human brain is that we use multiple feedback
mechanisms to correct for such problems as observer movement. For example,
the inner ear senses your body's movement and provides feedback for visual
processing. This is why we get nauseous when the ear disagrees with the eyes
and other senses. As you said, eye muscles also provide feedback about how
the eye itself has moved. In example papers I have read, such as Object
Discovery through Motion, Appearance and Shape, the researchers know the
position of the camera (I'm not sure how) and use that to determine which
moving features are closest to the camera's movement, and therefore are not
actually moving. Once you know how much the camera moved, you can try to
subtract this from apparent motion.
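
To sketch what I mean (Python; the names and threshold are just for
illustration, and this crudely treats camera-induced motion as uniform
across the image, which is only roughly true for pure rotation or a distant
scene):

import numpy as np

STILL_THRESHOLD = 1.0  # pixels: residual motion below this is "not moving"

def residual_motion(feature_flow, camera_flow):
    # feature_flow: (N, 2) apparent per-feature displacements in pixels.
    # camera_flow: (2,) estimated displacement induced by camera movement.
    residual = feature_flow - camera_flow
    speeds = np.linalg.norm(residual, axis=1)
    moving = speeds > STILL_THRESHOLD  # features that are really moving
    return residual, moving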

You're right that I should attempt to implement the system. I think I will
in fact, but it is difficult because I have limited time and resources. My
main goal is to make sure it is accomplished, even if not by me. So,
sometimes I think that it is better to prove that it can be done than to
spend a much longer amount of time actually doing it myself. I am
struggling to figure out how I can gather the resources or support to
accomplish the monstrous task. I think that I should work on the theoretical
basis in addition to the actual implementation. This is likely important to
make sure that my design is well grounded and reflects reality. It is very
hard for me to balance everything that has to be done though. This
definitely should be done by a much larger team of people.

As for your belief that computer vision is not necessary for AGI, I just
finished writing an email to someone else who had similar questions
regarding why computer vision helps with AGI. I will append them here. I
hope you find them helpful.





*Appended below: why I think computer vision is so important for AGI*


Someone asked: if I solved computer vision, why would it help in higher
reasoning and learning? Why do I think computer vision is so important
for AGI?

*Regarding higher reasoning and learning, I'll try to explain my views a bit
here:*

*When I talk about higher reasoning and learning, I'm referring to all of
the following:*
* language learning
* language disambiguation and interpretation
* learning about cause and effect
* learning about object/environment behavior and the mechanisms behind how
or why they behave in certain ways
* explanatory reasoning that requires common sense knowledge
* learning common sense knowledge at increasing levels of abstraction
* trial and error learning
* learning to predict the environment. This is extremely important for the
purposes of goal pursuit, which is the whole point of AI, I think.
* inductive learning from observed examples. This is needed for language
learning. It also helps with predicting the behavior of new object
instances.
* rule induction from observed examples
* etc. etc. etc.

*So, what am I really using computer vision for?*
I'm using computer vision to gather *knowledge*, including common sense
knowledge. It is very clear to me, and probably to many others, that
knowledge is required to solve the problems we want AI to solve. The core
problem of AGI is knowledge. There are many other supporting problems, such
as machine learning, planning, and language disambiguation, but without
knowledge each of them is much harder than it needs to be. We need it, we
want it, but we haven't been able to get enough of it. Knowledge also makes
it easier to solve these problems, making it possible to use simpler
algorithms and to learn from fewer examples.

Computer vision isn't just for knowledge though. It's also for goal pursuit.
Many things we want an AI to do require exploration of the environment,
trial and error learning, interaction with the environment, unsupervised
learning, etc. These require the ability to perceive and understand the
environment. The environment is too complex to predict blindly. It is
absolutely essential to be able to directly observe it and receive feedback.

*So, why computer vision? Why can't we just enter knowledge manually?*

Explaining this requires several supporting arguments that will have to be
argued separately. So I will list them below:

a) The knowledge we require for AI to do what we want is vast and complex,
and we can prove that it is completely ineffective to enter the knowledge
we need manually.
b) Computer vision is the most effective means of gathering facts about the
world. Knowledge and experience can be gained from analysis of these facts.
c) Language is not learned through passive observation. The associations
that words have to the environment and our common sense knowledge of the
environment/world are absolutely essential to language learning,
understanding and disambiguation. When visual information is available,
children use visual cues from their parents and from the objects they 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
Steve,

I wouldn't say that's an accurate description of what I wrote. What I wrote
was a way to think about how to solve computer vision.

My approach to artificial intelligence is a Neat approach. See
http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached is a
Scruffy approach. Neat approaches are characterized by deliberate
algorithms that are analogous to the problem and can sometimes be shown to
be provably correct. An example of a Neat approach is the use of features in
the paper I mentioned. One can describe why the features are calculated and
manipulated the way they are. An example of a Scruffy approach would be
neural nets, where you don't know the rules by which it comes up with an
answer and such approaches are not very scalable. Neural nets require
manually created training data and the knowledge generated is not in a form
that can be used for other tasks. The knowledge isn't portable.

I also wouldn't say I switched from absolute values to rates of change.
That's not really at all what I'm saying here.

Dave

On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images do
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I thought
 is that I found a way to describe why existing solutions work, how they work
 and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 coincidence, such as many features moving together over several frames of a
 video and over a statistically significant distance. They use thresholds to
 ensure that the observed changes are likely transformations of the original
 property observed or to ensure the statistical significance of an
 observation. These are highly likely true observations and not coincidences
 or noise.

 *Second*, they make sure that the other possible explanations of the
 observations are very unlikely. This is usually done using a threshold,
 plus a second threshold on the difference between the best match and the
 second-best match. This makes sure that second-best matches are much
 farther away than the best match. This is important because it's not enough
 to find a very likely match if there are 1000 very likely matches. You have
 to be able to show that the other matches are very unlikely; otherwise the
 specific match you pick may be just a tiny bit better than the others, and
 the confidence of that match would be very low.


 So, my initial design plans are as follows. Note: I will probably not
 actually implement the system because the engineering part dominates the
 time. I'd rather convert real videos to pseudo test cases or simulation test
 cases and then write a pseudo design and algorithm that can solve it. This
 would show that it works without actually spending the time needed to
 implement it. It's more important for me to prove it works and show what it
 can do than to actually do it. If I can prove it, there will be sufficient
 motivation for others to do it with more resources and man power than I have
 at my disposal.

 *My Design*
 *First*, we use high speed cameras and lidar systems to gather sufficient
 data with very low uncertainty because the changes possible between data
 points can be assumed to be very low, allowing our thresholds to be much
 smaller, which eliminates many possible errors and ambiguities.
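
 A back-of-the-envelope illustration of why the high frame rate lets the
 thresholds shrink (Python; the numbers are assumptions, not measurements):

 object_speed = 300.0  # pixels per second of apparent motion
 for fps in (30, 120, 1000):
     max_shift = object_speed / fps  # worst-case movement between frames
     print(f"{fps:5d} fps -> match threshold ~ {max_shift:.2f} px")
 # At 1000 fps a feature moves at most ~0.3 px between frames, so a tight
 # threshold rejects nearly all false matches and ambiguities.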

 *Second*, we have to gain experience from high-confidence observations.
 These are gathered as follows:
 1) Describe allowable transformations (thresholds) and what they mean. 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread Steve Richfield
David,

You are correct in that I keep bad company. My approach to NNs is VERY
different than other people's approaches. I insist on reasonable math being
performed on quantities that I understand, which sets me apart from just
about everyone else.

Your neat approach isn't all that neat, and is arguably scruffier than
mine. At least I have SOME math to back up my approach. Further, note that
we are self-organizing systems, and that this process is poorly understood.
I am NOT particularly interested in people-programmed systems because of their
very fundamental limitations. Yes, self-organization is messy, but it fits
the neat definition better than it meets the scruffy definition. Scruffy
has more to do with people-programmed ad hoc approaches (like most of AGI),
which I agree are a waste of time.

Steve

On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What I wrote
 was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached is
 a Scruffy approach. Neat approaches are characterized by deliberate
 algorithms that are analogous to the problem and can sometimes be shown to
 be provably correct. An example of a Neat approach is the use of features in
 the paper I mentioned. One can describe why the features are calculated and
 manipulated the way they are. An example of a Scruffy approach would be
 neural nets, where you don't know the rules by which it comes up with an
 answer and such approaches are not very scalable. Neural nets require
 manually created training data and the knowledge generated is not in a form
 that can be used for other tasks. The knowledge isn't portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield steve.richfi...@gmail.com
  wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images do
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I thought
 is that I found a way to describe why existing solutions work, how they work
 and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 coincidence, such as many features moving together over several frames of a
 video and over a statistically significant distance. They use thresholds to
 ensure that the observed changes are likely transformations of the original
 property observed or to ensure the statistical significance of an
 observation. These are highly likely true observations and not coincidences
 or noise.

 *Second*, they make sure that the other possible explanations of the
 observations are very unlikely. This is usually done using a threshold,
 plus a second threshold on the difference between the best match and the
 second-best match. This makes sure that second-best matches are much
 farther away than the best match. This is important because it's not enough
 to find a very likely match if there are 1000 very likely matches. You have
 to be able to show that the other matches are very unlikely; otherwise the
 specific match you pick may be just a tiny bit better than the others, and
 the confidence of that match would be very low.


 So, my initial design plans are as follows. Note: I will probably not
 actually implement the system because the engineering part dominates the
 time. I'd rather convert real 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
Steve,

Sorry if I misunderstood your approach. I do not really understand how it
would work, though, because it is not clear how you go from inputs to output
goals. It likely will still have many of the same problems as other neural
networks, including: 1) poor knowledge portability; 2) difficult to extend,
augment, or understand how it works; 3) requires manually created training
data, which is a major problem; 4) is designed with biological hardware in
mind, not necessarily existing hardware and software.

These are my main reasons, at least that I can remember, that I avoid
biologically inspired methods. It's not to say that they are wrong. But they
don't meet my requirements. It is also very unclear how to implement the
system and make it work. My approach is very deliberate, so the steps
required to make it work are pretty clear to me.

It is not that your approach is bad. It is just different and I really
prefer methods that are not biologically inspired, but are designed
specifically with goals and requirements in mind as the most important
design motivator.

Dave

On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 David,

 You are correct in that I keep bad company. My approach to NNs is VERY
 different than other people's approaches. I insist on reasonable math being
 performed on quantities that I understand, which sets me apart from just
 about everyone else.

 Your neat approach isn't all that neat, and is arguably scruffier than
 mine. At least I have SOME math to back up my approach. Further, note that
 we are self-organizing systems, and that this process is poorly understood.
 I am NOT particularly interested in people-programmed systems because of their
 very fundamental limitations. Yes, self-organization is messy, but it fits
 the neat definition better than it meets the scruffy definition. Scruffy
 has more to do with people-programmed ad hoc approaches (like most of AGI),
 which I agree are a waste of time.

 Steve
 
 On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What I
 wrote was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached
 is a Scruffy approach. Neat approaches are characterized by deliberate
 algorithms that are analogous to the problem and can sometimes be shown to
 be provably correct. An example of a Neat approach is the use of features in
 the paper I mentioned. One can describe why the features are calculated and
 manipulated the way they are. An example of a Scruffy approach would be
 neural nets, where you don't know the rules by which it comes up with an
 answer and such approaches are not very scalable. Neural nets require
 manually created training data and the knowledge generated is not in a form
 that can be used for other tasks. The knowledge isn't portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images do
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I
 thought is that I found a way to describe why existing solutions work, how
 they work and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with 
 the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread Steve Richfield
David

On Wed, Aug 4, 2010 at 1:16 PM, David Jones davidher...@gmail.com wrote:

 3) requires manually created training data, which is a major problem.


Where did this come from? Certainly, people are ill equipped to create
dP/dt-type data. These would have to come from sensors.
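
For concreteness, dP/dt-type data is just finite differences of sensor
samples over time; a minimal sketch (Python, and the numbers are made up):

import numpy as np

def to_rates(samples, dt):
    # Convert absolute sensor values P(t) into finite-difference
    # approximations of dP/dt.
    return np.diff(samples) / dt

P = np.array([0.0, 0.1, 0.4, 0.9, 1.6])  # absolute values from a sensor
print(to_rates(P, dt=0.01))              # [10. 30. 50. 70.] per second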


 4) is designed with biological hardware in mind, not necessarily existing
 hardware and software.


The biology is just good to help the math over some humps. So far, I have
not been able to identify ANY neuronal characteristic that hasn't been
refined to near-perfection, once the true functionality was fully
understood.

Anyway, with the math, you can build a system any way you want. Without the
math, you are just wasting your time and electricity. The math comes first,
and all other things follow.

Steve
===


 These are my main reasons, at least that I can remember, that I avoid
 biologically inspired methods. It's not to say that they are wrong. But they
 don't meet my requirements. It is also very unclear how to implement the
 system and make it work. My approach is very deliberate, so the steps
 required to make it work are pretty clear to me.

 It is not that your approach is bad. It is just different and I really
 prefer methods that are not biologically inspired, but are designed
 specifically with goals and requirements in mind as the most important
 design motivator.

 Dave

 On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield steve.richfi...@gmail.com
  wrote:

 David,

 You are correct in that I keep bad company. My approach to NNs is VERY
 different than other people's approaches. I insist on reasonable math being
 performed on quantities that I understand, which sets me apart from just
 about everyone else.

 Your neat approach isn't all that neat, and is arguably scruffier than
 mine. At least I have SOME math to back up my approach. Further, note that
 we are self-organizing systems, and that this process is poorly understood.
 I am NOT particularly interested in people-programmed systems because of their
 very fundamental limitations. Yes, self-organization is messy, but it fits
 the neat definition better than it meets the scruffy definition. Scruffy
 has more to do with people-programmed ad hoc approaches (like most of AGI),
 which I agree are a waste of time.

 Steve
 
 On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What I
 wrote was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached
 is a Scruffy approach. Neat approaches are characterized by deliberate
 algorithms that are analogous to the problem and can sometimes be shown to
 be provably correct. An example of a Neat approach is the use of features in
 the paper I mentioned. One can describe why the features are calculated and
 manipulated the way they are. An example of a Scruffy approach would be
 neural nets, where you don't know the rules by which it comes up with an
 answer and such approaches are not very scalable. Neural nets require
 manually created training data and the knowledge generated is not in a form
 that can be used for other tasks. The knowledge isn't portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot 
 computer
 vision that you can in real computer vision. This makes experience 
 probably
 necessary to effectively learn from screenshots. Objects in real images do
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I
 thought is that I found a way to describe why existing solutions work, how
 they work and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
Steve,

I replace your need for math with my need to understand what the system is
doing and why it is doing it. It's basically the same thing. But you are
approaching it at an extremely low level. It doesn't seem to me that you are
clear on how this math makes the system work the way we want it to work.
So, make the math as perfect as you like; if you don't understand why you
need the math and how it makes the system do what you want, then it's not
going to do you any good.

Understanding what you are trying to accomplish and how you want the system
to work comes first, not math.

If your neural net doesn't require training data, I don't understand how it
works or why you expect it to do what you want it to do if it is
self-organized. How do you tell it how to process inputs correctly? What
guides the processing and analysis?

Dave

On Wed, Aug 4, 2010 at 4:33 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 David

 On Wed, Aug 4, 2010 at 1:16 PM, David Jones davidher...@gmail.com wrote:

 3) requires manually created training data, which is a major problem.


 Where did this come from? Certainly, people are ill equipped to create
 dP/dt-type data. These would have to come from sensors.



 4) is designed with biological hardware in mind, not necessarily existing
 hardware and software.


 The biology is just good to help the math over some humps. So far, I have
 not been able to identify ANY neuronal characteristic that hasn't been
 refined to near-perfection, once the true functionality was fully
 understood.

 Anyway, with the math, you can build a system any way you want. Without the
 math, you are just wasting your time and electricity. The math comes first,
 and all other things follow.

 Steve
 ===


 These are my main reasons, at least that I can remember, that I avoid
 biologically inspired methods. It's not to say that they are wrong. But they
 don't meet my requirements. It is also very unclear how to implement the
 system and make it work. My approach is very deliberate, so the steps
 required to make it work are pretty clear to me.

 It is not that your approach is bad. It is just different and I really
 prefer methods that are not biologically inspired, but are designed
 specifically with goals and requirements in mind as the most important
 design motivator.

 Dave

 On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 You are correct in that I keep bad company. My approach to NNs is VERY
 different than other people's approaches. I insist on reasonable math being
 performed on quantities that I understand, which sets me apart from just
 about everyone else.

 Your neat approach isn't all that neat, and is arguably scruffier than
 mine. At least I have SOME math to back up my approach. Further, note that
 we are self-organizing systems, and that this process is poorly understood.
 I am NOT particularly interested in people-programmed systems because of their
 very fundamental limitations. Yes, self-organization is messy, but it fits
 the neat definition better than it meets the scruffy definition. Scruffy
 has more to do with people-programmed ad hoc approaches (like most of AGI),
 which I agree are a waste of time.

 Steve
 
 On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What I
 wrote was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached
 is a Scruffy approach. Neat approaches are characterized by deliberate
 algorithms that are analogous to the problem and can sometimes be shown to
 be provably correct. An example of a Neat approach is the use of features 
 in
 the paper I mentioned. One can describe why the features are calculated and
 manipulated the way they are. An example of a Scruffy approach would be
 neural nets, where you don't know the rules by which it comes up with an
 answer and such approaches are not very scalable. Neural nets require
 manually created training data and the knowledge generated is not in a form
 that can be used for other tasks. The knowledge isn't portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of
 change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread Steve Richfield
David,

On Wed, Aug 4, 2010 at 1:45 PM, David Jones davidher...@gmail.com wrote:


 Understanding what you are trying to accomplish and how you want the system
 to work comes first, not math.


It's all the same. First comes the qualitative, then comes the quantitative.



 If your neural net doesn't require training data,


Sure it needs training data: real-world interactive sensory input training
data, rather than static, manually prepared training data.

I don't understand how it works or why you expect it to do what you want it
 to do if it is self-organized. How do you tell it how to process inputs
 correctly? What guides the processing and analysis?


Bingo - you have just hit on THE great challenge in AI/AGI, and the source
of much past debate. Some believe in maximizing the information content of
the output. Some believe in other figures of merit, e.g. success in
interacting with a test environment, success in forming a layered structure,
etc. This particular sub-field is still WIDE open and waiting for some good
answers.
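
As a toy example of the first figure of merit, picking whichever transform
maximizes the empirical information content (entropy) of its output; all
the numbers and names here are illustrative (Python):

import numpy as np

def entropy_bits(x, bins=16):
    # Empirical entropy of a signal, estimated from a histogram.
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
signal = rng.normal(size=10_000)
candidates = {
    "identity": signal,
    "clipped": np.clip(signal, -0.1, 0.1),  # throws information away
}
# The identity transform wins: it preserves more output information.
print(max(candidates, key=lambda k: entropy_bits(candidates[k])))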

Note that this same problem presents itself, regardless of approach, e.g.
AGI.

Steve
===


 On Wed, Aug 4, 2010 at 4:33 PM, Steve Richfield steve.richfi...@gmail.com
  wrote:

 David

 On Wed, Aug 4, 2010 at 1:16 PM, David Jones davidher...@gmail.com wrote:

 3) requires manually created training data, which is a major problem.


 Where did this come from? Certainly, people are ill equipped to create
 dP/dt-type data. These would have to come from sensors.



 4) is designed with biological hardware in mind, not necessarily existing
 hardware and software.


 The biology is just good to help the math over some humps. So far, I have
 not been able to identify ANY neuronal characteristic that hasn't been
 refined to near-perfection, once the true functionality was fully
 understood.

 Anyway, with the math, you can build a system any way you want. Without the
 math, you are just wasting your time and electricity. The math comes first,
 and all other things follow.

 Steve
 ===


 These are my main reasons, at least that I can remember, that I avoid
 biologically inspired methods. It's not to say that they are wrong. But they
 don't meet my requirements. It is also very unclear how to implement the
 system and make it work. My approach is very deliberate, so the steps
 required to make it work are pretty clear to me.

 It is not that your approach is bad. It is just different and I really
 prefer methods that are not biologically inspired, but are designed
 specifically with goals and requirements in mind as the most important
 design motivator.

 Dave

 On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 You are correct in that I keep bad company. My approach to NNs is VERY
 different than other people's approaches. I insist on reasonable math being
 performed on quantities that I understand, which sets me apart from just
 about everyone else.

 Your neat approach isn't all that neat, and is arguably scruffier than
 mine. At least I have SOME math to back up my approach. Further, note that
 we are self-organizing systems, and that this process is poorly understood.
 I am NOT particularly interested in people-programmed systems because of
 their
 very fundamental limitations. Yes, self-organization is messy, but it fits
 the neat definition better than it meets the scruffy definition. 
 Scruffy
 has more to do with people-programmed ad hoc approaches (like most of AGI),
 which I agree are a waste of time.

 Steve
 
 On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What I
 wrote was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you
 attached is a Scruffy approach. Neat approaches are characterized by
 deliberate algorithms that are analogous to the problem and can sometimes 
 be
 shown to be provably correct. An example of a Neat approach is the use of
 features in the paper I mentioned. One can describe why the features are
 calculated and manipulated the way they are. An example of a scruffies
 approach would be neural nets, where you don't know the rules by which it
 comes up with an answer and such approaches are not very scalable. Neural
 nets require manually created training data and the knowledge generated is
 not in a form that can be used for other tasks. The knowledge isn't
 portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on 

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
On Wed, Aug 4, 2010 at 6:17 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 David,

 On Wed, Aug 4, 2010 at 1:45 PM, David Jones davidher...@gmail.com wrote:


 Understanding what you are trying to accomplish and how you want the
 system to work comes first, not math.


 It's all the same. First comes the qualitative, then comes the
 quantitative.


 If your neural net doesn't require training data,


 Sure it needs training data: real-world interactive sensory input training
 data, rather than static, manually prepared training data.


Your design is not described well enough or succinctly enough for me to
comment on, then.



 I don't understand how it works or why you expect it to do what you want it
 to do if it is self-organized. How do you tell it how to process inputs
 correctly? What guides the processing and analysis?


 Bingo - you have just hit on THE great challenge in AI/AGI, and the source
 of much past debate. Some believe in maximizing the information content of
 the output. Some believe in other figures of merit, e.g. success in
 interacting with a test environment, success in forming a layered structure,
 etc. This particular sub-field is still WIDE open and waiting for some good
 answers.

 Note that this same problem presents itself, regardless of approach, e.g.
 AGI.


Ah, but I think that this problem is much more solvable and better defined
with a more deliberate approach that does not depend on emergence. Emergence
is wishful thinking. I hope you do not include such wishful thinking in your
design :)

Once the AI has the tools and knowledge needed to solve a problem, which I
expect to get from computer vision, it can reason about user-stated
goals (in natural language), and we can work on how the goal pursuit part
works. Much work has already been done on planning and execution. But all
that work was done with insufficient knowledge on narrow problems. All the
research needs to be re-evaluated and studied with sufficient knowledge
about the world. It changes everything. This is another mile marker on my
roadmap to general AI.

Dave





Re: [agi] Computer Vision not as hard as I thought!

2010-08-03 Thread A. T. Murray
David Jones wrote:

 I've suddenly realized that computer vision
 of real images is very much solvable and that 
 it is now just a matter of engineering. [...]
 
Would you (or anyone else on this list) be 
interested in learning Forth and working on
http://code.google.com/p/mindforth/wiki/VisRecog
for the MindForth artificial intelligence?

There would be no pay other than AI glory.
You have already shown a keen AI interest at

http://www.practicalai.org

and so you could put your code and 
documentation up there.

Arthur
--
http://www.scn.org/~mentifex/mindforth.txt
http://www.scn.org/~mentifex/AiMind.html
http://AIMind-i.com

