One of the worst problems of early AI was over-generalization: applying a
general rule to a specific case where it did not hold.  Actually, early
programs over-generalized, under-generalized, and under-specified problem
solutions, but over-generalization was the most notable failure because
they relied primarily on word-based generalizations, which are often
simplifications of what would otherwise be immensely complicated
qualified cases.  For instance, "all primates have arms" is not strictly
true (even if it is typically true) because some primates don't have arms
(like people who have lost their arms or who were born without arms).  So
if a computer program was written to make judgments based on general
rules, and those rules were then applied to information that was input
for some problem, it might come to an incorrect conclusion on the less
typical cases that did not match the overly generalized descriptions.
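A minimal sketch of that failure mode, in the spirit of the old rule-based programs (the facts, names, and exception tags here are all hypothetical, just to illustrate the point): a rule stored without qualification gives the wrong answer on an atypical case, while the same rule qualified with explicit exceptions does not.

```python
# Hypothetical knowledge base.  "Alex" is the atypical case the
# unqualified rule gets wrong.
facts = {
    "Koko": {"primate"},
    "Alex": {"primate", "lost_arms"},
}

def naive_has_arms(name):
    # Over-generalized rule: primate => has arms, with no qualification.
    return "primate" in facts[name]

def qualified_has_arms(name):
    # Same rule, but qualified: primate => has arms, unless we know a
    # specific reason the default does not apply to this individual.
    exceptions = {"lost_arms", "born_without_arms"}
    return "primate" in facts[name] and not (facts[name] & exceptions)

print(naive_has_arms("Alex"))      # True  -- the over-generalized, wrong answer
print(qualified_has_arms("Alex"))  # False -- the qualified rule handles the exception
```

Of course, the real difficulty is that the list of exceptions for any word-based rule can grow without obvious bound, which is exactly why those immensely complicated qualified cases defeated the early programs.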

When a person over-generalizes he can appear to be exaggerating or
relying on prejudiced thinking, because a human being has such
well-developed capabilities that odd over-generalizations tend to stand
out.  That is probably why early AI did not seem to work very well: the
over-generalizations were so apparent to most people that the programs
would disappoint anyone who expected too much from them.  The
programmers might be very excited because they saw the potential in the
first steps made in the technology, whereas the non-programmer skeptic
might see only the insipid errors.

Over-generalization is one of the tools of the prejudiced.  They see
some characteristic in a few members of the group they are trying to
disparage, and instead of accurately describing an experience they had,
or heard about, with an individual, they convert it into a statement
about "them people" (as Archie Bunker used to say), as if the
characteristic seen in a few could be applied to the entire group.  This
is typically combined with exaggeration, which is often used to make a
weak case stronger and a dull story more interesting.  And since people
who tend to dwell on the faults of a group that differs from them in
some superficial way often don't have much intellectually stimulating
work to occupy them, they may rely on over-simplification as well.  When
you add emotional exaggeration to this mix, the over-generalizations of
the prejudiced mind can become quite apparent and badly out of touch
with reality.  Of course, there are some exceptions to the rule.  A few
prejudiced people are very intelligent, and they may be as civil towards
others as they are prejudiced.  But for the most part, negative
prejudice is associated with contempt and hatred, and it is not based on
objective thinking.

But when you think about it, prejudice is not just a problem of
over-generalization but of under-generalization as well.  For some
reason the prejudiced person has difficulty generalizing about the
positive experiences they have had with members of the group they hold
in such contempt.  The reason is probably that they never have good
experiences, because their own attitudes make all of their experiences
undesirable.  In this case, then, the problem can be seen to lie in
their own heads, not somewhere else.

Of course, what appears to be prejudice may actually be based on
uncommon thinking relative to some group.  How can anyone differentiate
between prejudice and individuality?  Prejudice is often directed at a
group on the basis of ethnic difference, skin color, religion, or
national identity, or at some working group that holds different ethical
views, whereas individuality is seen in contrast to groups that are
otherwise quite diverse.  But this kind of rule should not itself be
over-generalized.  The non-conformist has to have some good reasons for
his views if he is constantly claiming that the group has it wrong, and
he has to be able to make the case that the group engages in groupthink
if he is claiming that its members all share some underlying principle.
To claim that his individualistic views (or the views of someone the
critic thinks got it right) are somehow stronger than the group's
belief, he would have to be capable of presenting objective evidence
showing, for example, that belief in system A, which he disagrees with
in contrast to the other members of the group, would invalidate some
general truth B which the group does believe in.  But the problem with
this definition of objective, reasonable non-conformity is that it too
may be subject to the artifacts of thought that cause
over-generalization, even if it lacks the earmarks of more common
prejudices.  (My definition should not be over-generalized, by the way.
I had to simplify it in order to make it understandable, and I doubt
that I could qualify it to the extent that would be necessary to make it
into some kind of general truth.)

So we have to strive for objectivity and qualification in order to avoid
the errors of over-generalization, especially when we are trying to
claim that the more commonly held opinions of the group are wrong.

Jim Bromer




On Wed, Aug 13, 2008 at 10:15 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> THE POINT OF PHILOSOPHY:  There seemed to be some confusion re this - the
> main point of philosophy is that it makes us aware of the frameworks that
> are brought to bear on any subject, from sci to tech to business to arts -
> and therefore the limitations of those frameworks. Crudely, it says: hey
> you're looking in 2D, you could be looking in 3D or nD.
>
> Classic example: Kuhn. Hey, he said, we've thought science discovers bodies
> feature-by-feature, with a steady-accumulation-of-facts. Actually those
> studies are largely governed by paradigms [or frameworks] of bodies, which
> heavily determine what features we even look for in the first place. A
> beautiful piece of philosophical analysis.
>
> AGI: PROBLEM-SOLVING VS LEARNING.
>
> I have difficulties with AGI-ers, because my philosophical approach to AGI
> is -  start with the end-problems that an AGI must solve, and how they
> differ from AI. No one though is interested in discussing them - to a great
> extent, perhaps, because the general discussion of such problem distinctions
> throughout AI's history (and through psychology's and philosophy's history)
> has been pretty poor.
>
> AGI-ers, it seems to me, focus on learning - on how AGI's must *learn* to
> solve problems. The attitude is : if we can just develop a good way for
> AGI's to learn here, then they can learn to solve any problem, and gradually
> their intelligence will just take off, (hence superAGI). And there is a
> great deal of learning theory in AI, and detailed analysis of different
> modes of learning, that is logic- and maths-based. So AGI-ers are more
> comfortable with this approach.
>
> PHILOSOPHY OF LEARNING
>
> However there is relatively little broad-based philosophy of learning. Let's
> do some.
>
> V. broadly, the basic framework, it seems to me, that AGI imposes on
> learning to solve problems is:
>
> 1) define a *set of options* for solving a problem,  and attach if you can,
> certain probabilities to them
>
> 2) test those options,  and carry the best, if any, forward
>
> 3) find a further set of options from the problem environment, and test
> those, updating your probabilities and also perhaps your basic rules for
> applying them, as you go
>
> And, basically, just keep going like that, grinding your way to a solution,
> and adapting your program.
>
> What separates AI from AGI is that in the former:
>
> * the set of options [or problem space] is well-defined, [as say, for how a
> program can play chess] and the environment is highly accessible. AGI-ers
> recognize their world is much more complicated and not so clearly defined,
> and full of *uncertainty*.
>
> But the common philosophy of both AI and AGI and programming, period, it
> seems to me, is : test a set of options.
>
> THE $1M QUESTION with both approaches is: *how do you define your set of
> options*? That's the question I'd like you to try and answer. Let's make it
> more concrete.
>
> a) Defining A Set of Actions?   Take AGI agents, like Ben's, in virtual
> worlds. Such agents must learn to perform physical actions and move about
> their world. Ben's had to learn how to move to a ball and pick it up.
>
> So how do you define the set of options here - the set of
> actions/trajectories-from-A-to-B that an agent must test? For, say, moving
> to, or picking up/hitting a ball. Ben's tried a load - how were they
> defined? And by whom? The AGI programmer or the agent?
>
> b) Defining A Set of Associations? Essentially, a great deal of formal
> problem-solving comes down to working out that A is associated with B,  (if
> C,D,E, and however many conditions apply) -   whether A "means," "causes,"
> or "contains" B etc etc .
>
> So basically you go out and test a set of associations, involving A and B
> etc, to solve the problem. If you're translating or defining language, you
> go and test a whole set of statements involving the relevant words, say "He
> jumped over the limit" to know what it means.
>
> So, again, how do you define the set of options here - the set of
> associations to be tested, e.g. the set of texts to be used on Google, say,
> for reference for your translation?
>
> c) What's The Total Possible Set of Options [Actions/Associations] -  how can
> you work out the *total* possible set of options to be tested (as opposed to
> the set you initially choose) ? Is there one with any AGI problem?
>
> Can the set of options be definitively defined at all? Is it infinite say
> for that set of trajectories, or somehow limited?   (Is there a definitive
> or guaranteed way to learn language?)
>
> d) How Can You Ensure the Set of Options is not arbitrary?  That you won't
> entirely miss out the crucial options no matter how many more you add? Is
> defining a set of options an art not a science - the art of programming,
> pace Matt?
>
> POST HOC VS AD HOC APPROACHES TO LEARNING:  It seems to me there should be a
> further condition to how you define your set of options.
>
> Basically, IMO, AGI learns to solve problems, and AI solves them, *post
> hoc.* AFTER the problem has already been solved/learned.
>
> The perspective of  both on developing a program for
> problem-solving/learning is this:
>
> http://www.danradcliffe.com/12days2005/12days2005_maze_solution.jpg
>
> you work from the end, with the luxury of a grand overview, after sets of
> options and solutions have already been arrived at,  and develop your
> program from there.
>
> But in real life, general intelligences such as humans and animals have to
> solve most problems and acquire most skills AD HOC, starting from an
> extremely limited view of them :
>
> http://graphics.stanford.edu/~merrie/Europe/photos/inside%20the%20maze%202.jpg
>
> where you DON'T know what you're getting into, and you can't be sure what
> kind of maze this is, or whether it's a proper maze at all.  That's how YOU
> learn to do most things in your life. How can you develop a set of options
> from such a position?
>
> MAZES VS MESSES. Another way of phrasing the question of :"how do you define
> the set of options?" is :
>
> is the set of options along with the problem a maze [clearly definable, even
> if only in stages] or a mess:
>
> http://www.leninimports.com/jackson_pollock_gallery_12.jpg
>
> [where not a lot is definable]?
>
> Testing a set of options, it seems to me, is the essence of AI/AGI so far.
> It's worth taking time to think about. Philosophically.
>
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

