Steve:
I found that I used a similar method of categorizing the people I
talked to on these newsgroups, though I wouldn't call it pigeonholing.
(Actually, I wouldn't call anything pigeonholing, but that is just
me.)  I would rely on a handful of generalizations that I thought
were applicable to different people who tended to exhibit some common
characteristics.  However, when I discovered that an individual whom I
thought I understood had another facet to his personality or thoughts
that I hadn't seen before, I often found that I had to apply another
categorical generality to my impression of him.  I soon built up
generalization categories based on different experiences with
different kinds of people, and I eventually realized that although I
often saw similar kinds of behavior in different people, each person
seemed to be composed of a different set (or different strengths) of
the various component characteristics that I had derived to recall my
experiences with people in these groups.  So I came to conclusions
similar to the ones you and your friend reached.
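
If I were to put it in rough programming terms (just a toy sketch of my
own; the trait names and numbers are made up), it would amount to
keeping a weighted bundle of component characteristics per person
rather than a single label:

    # Toy sketch: an impression of a person is a weighted bundle of
    # component characteristics, not one category label.
    from collections import defaultdict

    class Impression:
        def __init__(self):
            self.traits = defaultdict(float)  # characteristic -> strength

        def observe(self, trait, strength=1.0):
            # A newly discovered facet adds or strengthens a component;
            # it does not demand a whole new pigeonhole.
            self.traits[trait] += strength

        def overlap(self, other):
            # Two people can share behaviors yet differ in the set
            # (or strengths) of their components.
            shared = set(self.traits) & set(other.traits)
            return sum(min(self.traits[t], other.traits[t]) for t in shared)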

One interesting thing came out of talking to reactive people in these
discussion groups.  I found that by eliminating more and more affect
from my comments, by refraining from personal remarks, innuendo, and
meta-discussion analyses, and by increasingly emphasizing objectivity,
I could substantially reduce any hostility directed at me.  My problem
is that I do not want to remove all affect from my conversation just
to placate some unpleasant person.  But I guess I should start using
that technique again when necessary.

Jim Bromer


On Sat, Nov 29, 2008 at 11:53 AM, Steve Richfield
<[EMAIL PROTECTED]> wrote:
> Jim,
>
> YES - and I think I have another piece of your puzzle to consider...
>
> A longtime friend of mine, Dave, went on to become a PhD psychologist who
> subsequently took me on as a sort of "project" - to figure out why most
> people who met me then either greatly valued my friendship or, quite the
> opposite, would probably kill me if they had a safe opportunity. After
> much discussion, interviewing people in both camps, etc., he came up with
> what appears to be a key to decision making in general...
>
> It appears that people "pigeonhole" other people, concepts, situations,
> etc., into a very finite number of pigeonholes - probably just tens of
> pigeonholes for other people. Along with the pigeonhole, they keep
> amendments, like "Steve is like Joe, but with ...".
>
> Then, there is the pigeonhole labeled "other" that all the mavericks are
> thrown into. Not being at all like anyone else that most people have ever
> met, I was invariably filed into the "other" pigeonhole, along with
> Einstein, Ted Bundy, Jack the Ripper, Stephen Hawking, etc.
>
> People are "safe" to the extent that they are predictable, and people in the
> "other" pigeonhole got that way because they appear to NOT be predictable,
> e.g. because of their worldview. Now, does the potential value of the
> alternative worldview outweigh the potential danger of perceived
> unpredictability? The answer to this question apparently drove how other
> people classified me.
>
> Dave's goal was to devise a way to stop making enemies, but unfortunately,
> this model of how people got that way suggested no potential solution.
> People who keep themselves safe from others having radically different
> worldviews are truly in a mental prison of their own making, and there is no
> way that someone whom they distrust could ever release them from that
> prison.
>
> I suspect that recognition, decision making, and all sorts of "intelligent"
> processes may proceed in much the same way. There may be no
> "grandmother" neuron/pigeonhole, but rather a "kindly old person" with an
> amendment that she "is related". If, on the other hand, your other
> grandmother flogged you as a child, the filing might be quite different.
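>
> To make the filing model concrete (a throwaway sketch, nothing I have
> actually built; the prototype names are invented):
>
>     # Sketch: everything gets filed under a nearest prototype plus
>     # amendments; anything unlike every prototype lands in "other".
>     PROTOTYPES = {"kindly old person", "trusted colleague", "other"}
>
>     def file_concept(name, nearest, amendments):
>         pigeonhole = nearest if nearest in PROTOTYPES else "other"
>         return {"name": name, "pigeonhole": pigeonhole,
>                 "amendments": amendments}
>
>     grandma = file_concept("Grandma", "kindly old person", ["is related"])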
>
> Any thoughts?
>
> Steve Richfield
> ================
> On 11/29/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>
>> One of the problems that comes with the casual use of analytical
>> methods is that the user becomes inured to their habitual misuse.  When
>> casual familiarity is combined with habitual ignorance of the
>> consequences of misuse, the user can become over-confident or
>> unwisely dismissive of criticism, regardless of how on the mark it
>> might be.
>>
>> The most proper use of statistical and probabilistic methods is to
>> keep results strongly tied to the data they were derived from.  The
>> problem is that the AI community cannot afford that strong a
>> connection to the original sources, because it is trying to emulate
>> the mind in some way, and it is not reasonable to assume that the
>> mind is capable of storing all the data it has used to derive
>> insight.
>>
>> This is a problem any AI method has to deal with; it is not just a
>> probability thing.  What is wrong with the AI-probability group
>> mind-set is that very few of its proponents ever consider the problem
>> of statistical ambiguity and its obvious consequences.
>>
>> All AI programmers have to consider the problem.  Most theories about
>> the mind posit the use of similar experiences to build up theories
>> about the world (or to derive methods to deal effectively with the
>> world).  So even though the methods to deal with the data environment
>> are detached from the original sources of those methods, they can
>> still be reconnected by the examination of similar experiences that
>> may subsequently occur.
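>>
>> As a rough sketch of what I mean (hypothetical code; the names are
>> mine), a derived method could carry a small sample of the experiences
>> it came from, so that later similar experiences can reconnect it to
>> its sources:
>>
>>     # Sketch: a learned rule keeps a few exemplar experiences so it
>>     # can be re-evaluated when similar situations recur.
>>     class LearnedRule:
>>         def __init__(self, method, exemplars):
>>             self.method = method        # the derived decision method
>>             self.exemplars = exemplars  # small sample of its sources
>>
>>         def recheck(self, new_case, is_similar):
>>             # Only apply (and so re-test) the rule where the new case
>>             # resembles the experiences that produced it.
>>             if any(is_similar(new_case, e) for e in self.exemplars):
>>                 return self.method(new_case)
>>             return None  # outside the rule's known territory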
>>
>> Still, it is important to be able to recognize the significance and
>> necessity of doing this from time to time.  It is important to be able
>> to reevaluate parts of your theories about things.  We are not just
>> making little modifications to our internal theories about things
>> when we react to ongoing events; we must also be making some sort of
>> reevaluation of our insights about the kind of thing that we are
>> dealing with.
>>
>> I realize now that most people in these groups probably do not
>> understand where I am coming from, because their idea of AI programming
>> is based on a model of programming that is flat.  You have the program
>> at one level, and the possible reactions to the data that is input as
>> the values of the program variables are carefully constrained by that
>> level.  You can imagine a more complex model of programming by
>> appreciating the possibility that the program can react to IO data by
>> rearranging subprograms to make new kinds of programs.  Although a
>> subtle argument can be made that any program that conditionally reacts
>> to input data is rearranging the execution of its subprograms, the
>> explicit recognition by the programmer that this is a useful tool in
>> advanced programming is probably highly correlated with its more
>> effective use.  (I mean, of course it is highly correlated with its
>> effective use!)  I believe that casually constructed learning methods
>> (and decision processes) can lead to even more uncontrollable results
>> when used with this self-programming aspect of advanced AI programs.
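>>
>> To make the contrast concrete (a deliberately tiny sketch, not a real
>> architecture):
>>
>>     # Sketch: a flat program runs a pipeline fixed by the programmer;
>>     # a self-rearranging one lets the input data select and order its
>>     # subprograms, making new kinds of programs at run time.
>>     def flat_program(x, steps):
>>         for step in steps:               # order fixed in advance
>>             x = step(x)
>>         return x
>>
>>     def self_rearranging_program(x, steps, choose):
>>         for step in choose(x, steps):    # order chosen from the input
>>             x = step(x)
>>         return x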
>>
>> The consequence, then, of failing to recognize mushed-up decision
>> processes that are never compared against the data (or kinds of
>> situations) they were derived from will be the inevitable emergence
>> of inherently illogical decision processes that will mush up an AI
>> system long before it gets any traction.
>>
>> Jim Bromer

