A concept can be projected onto a system of other concepts.  Think of a
picture with a person in it.  You could use a cut-and-paste method to paste
the person onto another background.  But then you might want to fit the
pasted image into the background (or into other images).  For instance, you
might want to resize the pasted image of the person, and you might want to
adjust its placement so that it makes sense against the background.  The
image may be incomplete, so you might also want to imagine that the rest of
the person was in the picture.  Taking this cut-and-paste idea further, you
can see that more elaborate systems of concepts could be pasted, or
projected, onto other systems.  So an author can, for example, write a
story by combining a number of different mini-plots that he had in mind.
The transitions might be awkward, though, and on rereading the story the
next day he might decide to fit the mini-plots together a little better.
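The cut-and-paste analogy can be put in rough computational terms.  This is
only my own illustrative sketch (the concepts, features, and scoring rule
are invented for the example, not taken from any existing system): a stored
concept is "projected" onto a new situation and then "fitted" by scoring
how well the features it expects actually show up.

```python
# Hypothetical sketch of projection followed by fitting: project each
# stored concept onto a new situation and keep the one that fits best.
# The concepts, features, and scoring rule are invented for illustration.

def fit_score(concept, situation):
    """Fraction of the concept's expected features present in the situation."""
    expected = concept["features"]
    if not expected:
        return 0.0
    return sum(1 for f in expected if f in situation) / len(expected)

def best_projection(concepts, situation):
    """Project every known concept onto the situation; return the best fit."""
    return max(concepts, key=lambda c: fit_score(c, situation))

concepts = [
    {"name": "person", "features": {"head", "torso", "arms", "legs"}},
    {"name": "tree",   "features": {"trunk", "branches", "leaves"}},
]
situation = {"head", "torso", "legs", "shadow"}  # a partial "image"
match = best_projection(concepts, situation)     # "person" fits best here
```

Note that the fit is partial ("arms" is missing, "shadow" is unexplained),
which is the point: projection tolerates incompleteness, and the fitting
step is where the pasted concept gets adjusted to the background.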

In psychology the term projection is usually used to denote an extreme form
of denial: a person faced with an uncomfortable sense of a personal failing
represses it and then projects the failing onto someone else, accusing the
other person of the very flaw that he briefly recognized in himself before
he repressed it.  One example is someone with an inflated ego who becomes
so uncomfortable with the possibility that he might not be smart enough to
understand something that he banishes the very thought that he might be
stupid from his mind and then accuses other people of being stupid when
they disagree with him about the subject.  However, I feel that almost all
knowledge is a projection, and most of it is effective and necessary.  When
we encounter a kind of situation that we have previously acquired knowledge
about, we have to project our knowledge onto the situation to try to
understand it.  So I am saying that projection is not just a symptom of a
poor way of handling the awareness of personal failings, as the term was
typically used in psychology.  By the way, an altruistic form of
psychological projection was also considered in the 19th century, so the
psychological history of the term is not only about repression followed by
projection of the failing onto someone else.

Can (my use of the concept of) projection explain something about
recognition?  What I am saying is that when you are confronted with a
situation where you (happen to) understand its significant parts, you can
use your imagination and project different possibilities (drawn from
knowledge you had previously acquired) onto the situation to see if any of
those possibilities makes sense in an analysis of the situation.  (Notice
that this method would reduce the likelihood of the more dysfunctional form
of psychological projection from twentieth-century psychology.)  For
example, when I tried to find out more about a 'graph Laplacian' algorithm,
I realized that I might not be smart enough to intuitively figure out what
a 'graph Laplacian' is.  Based on my usual learning curve, I have to
conclude that my first guesses about the 'graph Laplacian' are probably
wrong even if I am in the ballpark.  So it is very unlikely that I will
reach a profound sense that an algorithm based on the 'graph Laplacian'
must be stupid just because I cannot make an intuitive analysis of how the
algorithm might work.  And also note that as I tried to intuitively derive
a sense of what the term meant, I was projecting knowledge about those
kinds of things onto what I read about it.  So there is a possibility that
I do intuitively understand it, although I probably don't get all of it.
Jim Bromer



On Sat, Mar 2, 2013 at 10:02 AM, Stan Nilsen <[email protected]> wrote:

> Appreciate your response Jim.
> few more comments below
> Stan
>
>
> On 03/02/2013 03:17 AM, Jim Bromer wrote:
>
>>   There are a number of narrow domain examples of recognition but these
>> do not work for artificial general intelligence.  Yes, I am doing a
>> little quibbling but the fact that artificial general reasoning cannot
>> be established shows that this is a recognition problem.  If an AGI
>> program was able to show some true progress in general recognition it
>> would establish a foundation for successful AGI.  Choosing how to react
>> to a recognized situation would be relatively simple (or at least it is
>> not any more difficult than recognition).
>>
>
> But, that's the problem - how do you recognize a situation that could be
> hundreds of situations based on the set of facts you have?  Each possible
> "situation" has a probability of being correct, but some are more far
> fetched than others.  So, you generalize the situation to a level where you
> have a "matching" pattern and run with the response that fits.
>
>
>
>  Recognition is not just pattern matching just as pattern matching is not
>> just matching patterns.  Recognition and everything about AGI has to
>> include conceptual integration.
>>
>
> Okay, but what's wrong with saying "recognize what you can recognize and
> integrate what you need to integrate." Look at each method as suitable for
> a slightly different problem.
>
> A method of implementing Concept Integration is by use of frames.  I'm not
> recommending or dismissing frames, but the idea has been around since the
> 70's.  Minsky's The Society of Mind has several pages devoted to a "frames"
> approach.  Stan Franklin discusses Minsky's frames and other "mechanisms
> for concepts" in his book Artificial Minds.
>
>
>
>  Some Neural Network guys say that their
>> methods are capable of concept integration but that is not truly correct
>> because standard neural networks are not capable of projecting their
>> 'knowledge' onto other situations.  So since I am pretty sure that I am
>> right about this, this gives me a clue that general concept projection
>> is also a part of concept integration.  From there, the concepts that
>> have been combined through projection (or other methods) have to be
>> fitted.
>>
>> I don't think that initial recognition is easily feasible because of the
>> relevancy problem. If an AGI program could pick out good candidates for
>> possible relevancy then I believe it would be able to determine the best
>> candidate pretty reliably. At least it would be reliable enough to act
>> as a sound basis for elementary intelligence. But, it can only pick out
>> the best possibilities for relevancy if it already has a good insight
>> into the problem.
>>
>
> Yes, if it has "frames" that fit what it is dealing with, then we would
> say it has insight.  As a substitute for millions of frames, one uses a
> generalization to find a fit to a frame we do have.   And, I agree that
> bootstrapping hasn't produced an AGI.
>
> I believe there is a bootstrap process that will work, we simply haven't
> found it yet.
>
> AGI is hard because the bootstrap program is more massive than other
> programs.  And, an AGI is more than a program - it's a player in the real
> world.
>
>
>
>  There is just no basis for a bootstrapping method in
>> general artificial intelligence. It is simple to talk about a gradual
>> system of recognition in AGI but it is not easy to understand how it
>> could be implemented.
>>
>> So if concept projection (onto other concepts) followed by data fitting
>> is a sound method to gradually develop insight about a problem, then
>> perhaps this is a model that can be used for all recognition. It is a
>> slow method, but perhaps this would be a good experiment for me to try.
>>
>>
> Would you have an example of what you mean by concept projection onto other
> concepts?
>
>  Jim Bromer
>>
>>
>>
>> On Fri, Mar 1, 2013 at 8:50 PM, Stan Nilsen <[email protected]
>> <mailto:[email protected]>> wrote:
>>
>>     Hi Jim,
>>
>>     couple more comments
>>
>>
>>     On 03/01/2013 05:34 PM, Jim Bromer wrote:
>>
>>         Stan,
>>         We seem to agree on a number of the basics.
>>         You said, "In my conjecture about an initial design of an
>>         intelligent
>>         unit, I find the combinatorial explosion problem to have much
>>         more to do
>>         with attempt to "evaluate" than with rudimentary cognition or
>>         recognition. Think of the real world problem of trying to assess
>> the
>>         value of something. Often that value "depends" on events that
>>         may or may
>>         not occur in the future."
>>         It is true that an evaluation does often depend on something
>>         that may or
>>         may not occur in the future and that is true with immediate
>>         language as
>>         well.  But I do think that recognition is a major problem for AGI.
>>
>>
>>     I think that recognition is a big problem too, but more in the sense
>>     of the multitude of things to be recognized.  Picking one thing and
>>     learning to recognize it, or teaching a program to recognize it
>>     wouldn't be so hard.  But, when you multiply the recognition problem
>>     by the number of objects in the world it gets to be an enormous
>>     problem. This is one reason progress is slow.
>>
>>
>>         My
>>         question is: if this is feasible then why hasn't it been done?
>>
>>
>>     I'm not sure I understand this question.  We have many examples of
>>     recognition - OCR, handwriting readers, spoken language recognition
>>     (at least the word part of it) and even written language translators
>>     that show we recognize parts of speech etc.   Can you be more
>>     specific about what "hasn't been done" in the way of recognizers?
>>
>>
>>         We do
>>         not need programs to work amazingly well, but the problem is that
>>         elementary necessities are often elusive for computer programs.
>>           So it
>>         often turns out that something that seems simple, like basic
>>         recognition, is extremely difficult. This suggests that some
>>         elementary
>>         processes are very difficult.  The explanation is probably due
>> to a
>>         complication of the process of integrating different kinds of
>>         analyses
>>         which work well in some cases but not others.
>>
>>
>>     If one looks at this issue as a "pattern match" problem, it may be
>>     that we simply haven't established all the templates that will be
>>     needed. Again, more of a quantity problem than a technical hurdle.
>>       I'm probably not getting what you mean by basic recognition - can
>>     you explain this a little more?
>>
>>     Stan
>>
>>
>>
>>         Yes it seems that nature
>>         is full of approximations and irregularities which somehow work.
>>           But,
>>         when you actually start wondering how you might integrate multiple
>>         sources of partial information simple steps become very
>>         difficult.  But
>>         maybe I just need to start trying to use coarse integration
>>         methods and
>>         see what happens.
>>
>>
>>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
