So the method of initial recognition that I am going to try with my (very
early version of) image analysis will be to use multiple analysis
algorithms on an image.  Then I am going to use the results to output some
kind of result that I choose to study.  (The first result is going to be a
modification of the original image which can be used for some purpose.)
Then I can select parts of the results to 'reinforce' as cases where the
program achieved the desired goal.  Then the test will be to see if the
program can guess which parts should be analyzed by which algorithms to
produce the desired goal.  My point, however, is that it is not that
simple.  It won't work as described, because initial image analysis is not
strong enough to produce the desired results even with exhaustive
training.  I can predict pretty confidently that the initial analysis will
need to be taken through a kind of 'perceptual' stage, where some
higher-level ideas about the image have to be projected onto the initial
results of the analysis and compared to see whether the analysis makes
sense based on previous learning.  Often this comparison of the projection
of high-level insight onto the configuration of the initial analysis
results will not fit perfectly, and some 'stretching' of the relations
will need to be made.
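A minimal sketch of the reinforcement idea above, assuming toy 1-D
"regions", two stand-in analysis algorithms, and a crude contrast feature
(all names and encodings here are illustrative assumptions, not the
actual design):

```python
# Hypothetical sketch: run several analysis algorithms over image regions,
# reinforce the (region-feature -> algorithm) pairings that achieved the
# desired goal, then guess the right algorithm for new regions.
from collections import defaultdict

def edge_pass(region):
    # stand-in "edge" analysis: total adjacent-pixel difference
    return sum(abs(a - b) for a, b in zip(region, region[1:]))

def smooth_pass(region):
    # stand-in "smoothing" analysis: mean intensity
    return sum(region) / len(region)

ALGORITHMS = {"edges": edge_pass, "smooth": smooth_pass}

def feature(region):
    # crude 'perceptual' summary of a region: high- vs low-contrast
    contrast = max(region) - min(region)
    return "high" if contrast > 50 else "low"

class Selector:
    def __init__(self):
        # score[region-feature][algorithm] accumulated from reinforcement
        self.score = defaultdict(lambda: defaultdict(float))

    def reinforce(self, region, algo_name, reward):
        self.score[feature(region)][algo_name] += reward

    def choose(self, region):
        votes = self.score[feature(region)]
        return max(ALGORITHMS, key=lambda a: votes[a])

sel = Selector()
# training: reward the pairings that produced the desired result
sel.reinforce([0, 90, 5, 80], "edges", 1.0)     # high-contrast region
sel.reinforce([40, 42, 41, 43], "smooth", 1.0)  # low-contrast region

print(sel.choose([10, 100, 0, 95]))  # -> edges
print(sel.choose([20, 21, 22, 20]))  # -> smooth
```

The point of the sketch is only the selection loop; a real version would
need the 'perceptual' projection stage to replace the one-line feature.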

One other thing.  Although I think Mike Tintner's commentary has not been
very useful, I do feel that he has helped me to discover something
important in the past.  He tries to point out that there is no program
that can produce every set of actions or algorithms.  That seems
reasonable.  But most of us feel that he is missing the point.  We can
write programs that can learn new things without being programmed to
produce every possible variation.  An AGI program can learn by interacting
intelligently with the IO data world.  However, when I once reacted to one
of Mike's challenges and tried to figure out a way to write an algorithm
that could produce obvious variations of a typographical character without
being intelligent or reacting to input, I was able to derive an insight
about how such a thing might work.  And I could use a variation of that
idea in my image analysis (including analysis through modification)
methods.  So if I was able to get somewhere, it is theoretically feasible
to design a single algorithm that could create an immense number of image
modification algorithms.  The one thing that is left is to harness this
process so that a useful analysis or modification algorithm could be
chosen -without running them all- on the basis that it might produce a
result needed to reach an insightful conclusion.  By examining how
variations on the (single) image analysis super algorithm affect the
result, it might be possible for the program to learn to 'project'
families of characteristics of those variations and hence make educated
guesses that resemble some of ours, like: "If I could find this and that
kind of thing and test it to see if it could be related to finding a
solution, then I might be able to get closer to figuring this out.
Characteristic A and characteristic B are similar to the 'this' and
'that', therefore by trying variations on the super algorithm I might be
able to detect the this-and-that event."
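A toy sketch of what such a super algorithm might look like, assuming a
three-parameter pixel transform stands in for the real family and that
'projecting' characteristics is just a cheap trait predictor (everything
here is an illustrative assumption):

```python
# Hypothetical sketch of a single "super algorithm": one parameterized
# transform whose parameter grid defines a large family of modification
# algorithms.  A cheap "projection" scores each parameter set against the
# desired characteristics so only a few variants are actually run.
from itertools import product

def super_algorithm(pixels, gain, threshold, invert):
    # one family member = one (gain, threshold, invert) setting
    out = [min(255, int(p * gain)) for p in pixels]
    out = [255 if p > threshold else 0 for p in out]
    return [255 - p for p in out] if invert else out

def projected_traits(gain, threshold, invert):
    # predict characteristics of the variant WITHOUT running it
    return {
        "brightens": gain > 1.0,
        "mostly_dark": threshold > 128,
        "inverted": invert,
    }

def pick_variants(desired, k=3):
    # educated guess: rank all parameter settings by projected match
    grid = product([0.5, 1.0, 2.0], [64, 128, 192], [False, True])
    scored = []
    for gain, threshold, invert in grid:
        traits = projected_traits(gain, threshold, invert)
        match = sum(traits[t] == v for t, v in desired.items())
        scored.append((match, (gain, threshold, invert)))
    scored.sort(reverse=True)
    return [params for _, params in scored[:k]]

# "If I could find a brightening, mostly-dark variant..." -- only the top
# projected matches are ever run on the actual image.
candidates = pick_variants({"brightens": True, "mostly_dark": True})
for params in candidates:
    result = super_algorithm([10, 120, 200], *params)
```

The design choice worth noting is that `projected_traits` is much cheaper
than `super_algorithm`, which is what lets the program avoid running the
whole family.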
Jim Bromer






On Tue, Feb 5, 2013 at 9:07 AM, Jim Bromer <[email protected]> wrote:

> Oh I forgot. The initial recognition dilemma is this. All good AI methods
> work well in some situations but in others they don't work very well at
> all. The problem is that the situations in which they work well do not
> cover all situations. So, even in a single particular situation, like a
> scene in visual recognition, the AI algorithm might be spectacular with
> some 'objects' of the scene but fail with others. Now if a programmer were
> looking at the scene, he could decide to use different algorithms that work
> well with different parts of the scene and by this method get good initial
> recognition coverage. But an automated system which does not have good
> initial recognition (or understanding) of the scene is not going to be able
> to choose which algorithms it should use for recognition. This assumes that
> the recognition algorithms will both fail to recognize some parts and give
> some false identifications of other parts.
> So the programmer sees that there are some algorithms which work really
> well, and he could select different algorithms to work on different parts
> of a particular input object, but when you try to automate this the problem
> becomes a dilemma. How can a program choose the right algorithm to evaluate
> a part of a scene without first recognizing what parts cannot be identified
> and what algorithms would be best for those parts?
> Jim Bromer
>
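To make the cost side of that quoted dilemma concrete, a toy sketch
(region and algorithm names are placeholders): without prior recognition,
the only safe strategy is brute force, every algorithm on every part, and
the program still has to judge which outputs are false identifications.

```python
# Toy illustration of the dilemma: the number of evaluations is
# regions x algorithms, which is exactly what per-part selection
# was supposed to avoid.
def detect_blobs(region):   return ("blobs?", region)
def detect_lines(region):   return ("lines?", region)
def detect_texture(region): return ("texture?", region)

ALGORITHMS = [detect_blobs, detect_lines, detect_texture]
regions = ["sky", "road", "tree", "sign"]

results = {(r, a.__name__): a(r) for r in regions for a in ALGORITHMS}
print(len(results))  # 12 evaluations for 4 regions and 3 algorithms
```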



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now