Richard, The problem here is that I am not sure in what sense you are using
the
word "rational". There are many usages. One of those usages is very
common in cog sci, and if I go with *that* usage your claim is completely
wrong: you can pick up an elementary cog psy textbook and find at least
two chapters dedicated to a discussion about the many ways that humans are
(according to the textbook) "irrational".
This is a subject of huge importance, and it shouldn't be hard to reach at
least a mutual understanding. "Rational" in general means that a system or
agent follows a coherent and systematic set of steps in solving a problem.
The social sciences treat humans as rational agents, maximising or boundedly
satisficing their utilities when taking decisions - coherently and
systematically finding solutions for their needs. (There is much controversy
about this - everyone knows it ain't right, but no substitute has been
offered.)
Cognitive science treats the human mind as basically a programmed
computational machine, much like actual programmed computers - and programs
are normally conceived of as rational: coherent sets of steps, etc.
Both cog sci and sci psych endlessly highlight irrationalities in our
decision-making/problem-solving processes - but these occur only in *parts*
of those processes, not the processes as a whole. They're like bugs in the
program: the program and the mind as a whole are basically rational -
following coherent sets of steps - it's just that the odd heuristic/
attitude/assumption is wrong (or perhaps there is a neurocognitive
deficit).
Thousands of years of philosophy have also treated human beings as
fundamentally rational creatures.
The reality is two-sided. Let's start with why the mind is in fact
irrational.
The mind is actually designed to deal with problematic, divergent problems,
otherwise known as wicked, ill-structured problems. A simple example:
writing an essay - "Write an essay (or a post) on the future/evils/flaws of
AI." An even simpler example: "Would you like to watch this or that TV
program?" The literature on wicked problems acknowledges (sotto voce) that
these are extraordinarily abundant and more or less continuous. Eysenck
acknowledges this too.
What characterises these problems is that they are indeed ill-structured -
to put it another way, there is *no such thing as a rational solution or way
of solving them*. There is no rational essay on "the future of AI" or "the
causes of the French Revolution" - no rational beginning, middle, ending or
any step at all. There is no rational way to think about them, including
about which program you want to watch. It would be quite reasonable, from a
purely logical point of view, to spend eternity debating that question.
These are in fact infinite problems with infinite solutions or, at least
(with the TV program decision), infinite ways of solving them.
The mind has no coherent structure or inner systematic programming for
dealing with these problems - no coherent set of steps at all to follow. It
has to find and achieve a structure, as you do for an essay or a post.
Consequently the mind can be regarded as "systematically irrational". Look
at how people actually write essays or posts and you will find that they can
and will depart at each and any stage from what might be regarded as an
ideal process. They don't even define the problem (I don't think there's a
single person engaged in an AGI project who has yet defined the problem);
they don't answer the problem but answer something else entirely; they
don't look at the evidence; they don't have ideas but endlessly redefine
the problem; they don't order or organize their ideas or do any checking.
They actually write a confused mix of three essays rather than one. Etc.,
etc. They always jump to conclusions to some extent, because it's actually
impossible to do otherwise. And they may or may not make these errors on
different occasions. Everyone's practice is highly variable. IOW these
errors have nothing to do with bugs or deficits.
To put it another way, humans are systematically more or less unfocussed,
disordered, disorganized, poorly concentrated, uncritical, unimaginative,
sloppy, etc. in their thinking. But since there is never world enough and
time to think about divergent problems, this is more or less inevitable
(except when an AGI-er doesn't define the problem, which is
unforgivable :) )
Sci psych and cog sci very largely ignore all this.
Scientific psychology does not pay any serious attention at all to divergent
problems, as Michael Eysenck acknowledges. (Why? Because psychologists like
convergent problems with nice, right, rational answers that can be easily
studied and marked.)
IQ focusses on convergent problems, even though essay-writing and similar
projects constitute a good half - and by far the most important half - of
educational and real-world problem-solving and intelligence, period. IOW sci
psych's concept of intelligence basically ignores the divergent/"fluid"
half of intelligence.
Scientific psychology does not in any way study the systematic and
inevitable irrationality of how people actually solve divergent problems. To
do that, you would have to look, for example, at how people solve problems
like essays from beginning to end - involving hundreds to thousands of lines
of thought. That would be much, much too complex for present-day psychology,
which concentrates on simple problems and simple aspects of problem-solving.
I can go on at much greater length, including explaining how I think the
mind is actually programmed, and the positive, "creative" side of its
irrationality, but by now you should have the beginning of an understanding
of what I'm on about and what I mean by rational/irrational. There really is
no question that science currently regards the human mind as rational (it's
called rational, not irrational, decision theory, and science talks of
rational, not irrational, agents) - and to do otherwise would challenge the
foundations of cognitive science.
AI in general does not seek to produce irrational programs - and it remains
a subject of intense debate as to whether it can produce creative programs.
When you have a machine that can solve divergent problems - including
writing essays and having free-flowing conversations - and do so as
irrationally/creatively as humans do, you will have solved the problem of
AGI.
----- Original Message -----
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Thursday, December 06, 2007 3:19 PM
Subject: Re: [agi] None of you seem to be able ...
Mike Tintner wrote:
Richard: Now, interpreting that result is not easy,
Richard, I get the feeling you're getting understandably tired with all
your correspondence today. Interpreting *any* of the examples of *hard*
cog sci that you give is not easy. They're all useful, stimulating stuff,
but they don't add up to a hard pic. of the brain's cognitive
architecture. Perhaps Ben will back me up on this - it's a rather
important point - our overall *integrated* picture of the brain's
cognitive functioning is really v. poor, although certainly we have a
wealth of details about, say, which part of the brain is somehow
connected to a given operation.
You make an important point, but in your haste to make it you may have
overlooked the fact that I really agree with you ... and have gone on to
say that I am trying to fix that problem.
What I mean by that: if you look at cog psy/cog sci in a superficial way
you might come away with the strong impression that "they don't add up to a
hard pic. of the brain's cognitive architecture". Sure. But that is what
I meant when I said that "cog sci has a huge amount of information stashed
away, but it is in a format that makes it very hard for someone trying to
build an intelligent system to actually use".
I believe I can see deeper into this problem, and I think that cog sci can
be made to add up to a consistent picture, but it requires an extra
organizational ingredient that I am in the process of adding right now.
The root of the problem is that the cog sci and AI communities both have
extremely rigid protocols about how to do research, which are incompatible
with each other. In cog sci you are expected to produce a micro-theory
for every experimental result, and efforts to work on larger theories or
frameworks without introducing new experimental results that are directly
explained are frowned upon. The result is a style of work that produces
"local patch" theories that do not have any generality.
The net result of all this is that when you say that "our overall
*integrated* picture of the brain's cognitive functioning is really v.
poor" I would point out that this is only true if you replace the "our"
with "the AI community's".
Richard: I admit that I am confused right
now: in the above paragraphs you say that your position is that the
human mind is 'rational' and then later that it is 'irrational' - was
the first one of those a typo?
Richard, No typo whatsoever if you just reread. V. clear. I say and said:
*scientific psychology* and *cog sci* treat the mind as rational. I am the
weirdo who is saying this is nonsense - the mind is
irrational/crazy/creative - rationality is a major *achievement* not
something that comes naturally. "Mike Tintner= crazy/irrational"-
somehow, I don't think you'll find that hard to remember.
The problem here is that I am not sure in what sense you are using the
word "rational". There are many usages. One of those usages is very
common in cog sci, and if I go with *that* usage your claim is completely
wrong: you can pick up an elementary cog psy textbook and find at least
two chapters dedicated to a discussion about the many ways that humans are
(according to the textbook) "irrational".
I suspect what is happening is that you are using the term in a different
way, and that this is the cause of the confusion. Since you are making
the claim, I think the ball is in your court: please try to explain why
this discrepancy arises so I can understand your claim. Take a look at
e.g. Eysenck and Keane (Cognitive Psychology) and try to reconcile what
you say with what they say.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&