Mike,

That's a rather weak reply. I'm open to the possibility that my ideas are 
incorrect or need improvement, but calling what I said nonsense without further 
justification is just hand-waving.

Unless you mean this as your justification:
"Your conscious, inner thoughts are not that different from your public, 
recordable dialogue."

How this amounts to an objection to my points about introspection is beyond 
me... care to elaborate?

Terren

--- On Wed, 7/2/08, Mike Tintner <[EMAIL PROTECTED]> wrote:

> Terren,
> 
> Obviously, as I indicated, I'm not suggesting that we
> can easily construct a 
> total model of human cognition. But it ain't that hard
> to reconstruct 
> reasonable and highly informative, if imperfect, models of
> how humans 
> consciously think about problems. As I said, artists have
> been doing a 
> reasonable job for centuries. Shakespeare, who really
> started the inner 
> monologue, was arguably the first scientist of
> consciousness. The kind of 
> standard argument you give below - the eye can't look
> at itself - is 
> actually nonsense. Your conscious, inner thoughts are not
> that different 
> from your public, recordable dialogue. (Any decent
> transcript of thought, 
> BTW, will give a v. good indication of the emotions
> involved).
> 
> We're not v. far apart here - we agree about the many
> dimensions of 
> cognition, most of which are probably NOT directly
> accessible to the 
> conscious mind. I'm just insisting on the massive
> importance of studying 
> conscious thought. It was, as Crick said,
> "ridiculous" for science not to 
> study consciousness - (it had a lot of rubbish arguments
> for not doing that, 
> then) - it is equally ridiculous and in fact scientifically
> obscene not to 
> study conscious thought. The consequences both for humans
> generally and AGI 
> are enormous.
> 
> 
> Terren:> Mike,
> >
> >> This is going too far. We can reconstruct to a
> considerable
> >> extent how  humans think about problems - their
> conscious thoughts.
> >
> > Why is it going too far?  I agree with you that we can
> reconstruct 
> > thinking, to a point. I notice you didn't say
> "we can completely 
> > reconstruct how humans think about problems". Why
> not?
> >
> > We have two primary means for understanding thought,
> and both are deeply 
> > flawed:
> >
> > 1. Introspection. Introspection allows us to analyze
> our mental life in a 
> > reflective way. This is possible because we are able
> to construct mental 
> > models of our mental models. There are three flaws
> with introspection. The 
> > first, least serious flaw is that we only have access
> to that which is 
> > present in our conscious awareness. We cannot
> introspect about unconscious 
> > processes, by definition.
> >
> > This is a less serious objection because it's
> possible in practice to 
> > become conscious of phenomena that were previously
> unconscious, by 
> > developing our meta-mental-models. The question here
> becomes, is there any 
> > reason in principle that we cannot become conscious of
> *all* mental 
> > processes?
> >
> > The second flaw is that, because introspection relies
> on the meta-models 
> > we need to make sense of our internal, mental life,
> the possibility is 
> > always present that our meta-models themselves are
> flawed. Worse, we have 
> > no way of knowing if they are wrong, because we often
> unconsciously, 
> > unwittingly deny evidence contrary to our conception
> of our own cognition, 
> > particularly when it runs counter to a positive
> account of our self-image.
> >
> > Harvard's "Project Implicit" experiment 
> > (https://implicit.harvard.edu/implicit/) is a great
> way to demonstrate how 
> > we remain ignorant of deep, unconscious biases.
> Another example is how 
> > little we understand the contribution of emotion to
> our decision-making. 
> > Joseph Ledoux and others have shown fairly
> convincingly that emotion is a 
> > crucial part of human cognition, but most of us
> (particularly us men) deny 
> > the influence of emotion on our decision making.
> >
> > The final flaw is the most serious. It says there is a
> fundamental limit 
> > to what introspection has access to. This is the
> "an eye cannot see 
> > itself" objection. But I can see my eyes in the
> mirror, says the devil's 
> > advocate. Of course, a mirror lets us observe a
> reflected version of our 
> > eye, and this is what introspection is. But we cannot
> see inside our own 
> > eye, directly - it's a fundamental limitation of
> any observational 
> > apparatus. Likewise, we cannot see inside the very act
> of model-simulation 
> > that enables introspection. Introspection relies on
> meta-models, or 
> > "models about models", which are
> activated/simulated *after the fact*. We 
> > might observe ourselves in the act of introspection,
> but that is nothing 
> > but a meta-meta-model. Each introspective act by
> necessity is one step 
> > (at least) removed from the direct, in-the-present
> flow of cognition. This 
> > means that we can never observe the cognitive
> machinery that enables the 
> > act of introspection itself.
> >
> > And if you don't believe that introspection relies
> on cognitive machinery 
> > (maybe you're a dualist, but then why are you on
> an AI list? :-), ask 
> > yourself why we can't introspect about ourselves
> before a certain point in 
> > our young lives. It relies on a sufficiently
> sophisticated toolset that 
> > requires a certain amount of development before it is
> even possible.
> >
> > 2. Theory. Our theories of cognition are another path
> to understanding, 
> > and much of theory is directly or indirectly informed
> by introspection. 
> > When introspection fails (as in language acquisition),
> we rely completely 
> > on theory. The flaw with theory should be obvious. We
> have no direct way 
> > of testing theories of cognition, since we don't
> understand the connection 
> > between the mental and the physical. At best, we can
> use clever indirect 
> > means for generating evidence, and we usually have to
> accept the limits of 
> > reliability of subjective reports.
> >
> > Terren
> >
> > --- On Wed, 7/2/08, Mike Tintner
> <[EMAIL PROTECTED]> wrote:
> >> Terren,
> >>
> >> This is going too far. We can reconstruct to a
> considerable
> >> extent how
> >> humans think about problems - their conscious
> thoughts.
> >> Artists have been
> >> doing this reasonably well for hundreds of years.
> Science
> >> has so far avoided
> >> this, just as it avoided studying first the mind,
> with
> >> behaviourism,  then
> >> consciousness. The main reason cognitive science
> and
> >> psychology have
> >> avoided stream-of-thought studies (apart from v.
> odd
> >> scientists like Jerome
> >> Singer) is that conscious thought about problems
> is v.
> >> different from the
> >> highly ordered, rational thinking of programmed
> computers
> >> which cog. sci.
> >> uses as its basic paradigm. In fact, human
> thinking is
> >> fundamentally
> >> different - the conscious self has major
> difficulty
> >> concentrating on any
> >> problem for any length of time - controlling the
> mind for
> >> more than a
> >> relatively few seconds (as religious and
> humanistic
> >> thinkers have been
> >> telling us for thousands of years). Computers of
> course
> >> have perfect
> >> concentration forever. But that's because
> computers
> >> haven't had to deal with
> >> the type of problems that we do - the problematic
> problems
> >> where you don't,
> >> basically, know the answer, or how to find the
> answer,
> >> before you start.
> >>
> >> For this kind of problem - which is actually what
> >> differentiates AGI from
> >> narrow AI - human thinking, creative as opposed to
> >> rational, stumbling,
> >> scatty, and freely associative, is actually IDEAL,
> for all
> >> its
> >> imperfections.
> >>
> >> Yes, even if we extend our model of intelligence
> to include
> >> creative as well
> >> as rational thinking, it will still be an
> impoverished
> >> model, which may not
> >> include embodied thinking and perhaps other
> dimensions. But
> >> hey, we'll get
> >> there bit by bit, (just not, as we both agree, all
> at once
> >> in one five-year
> >> leap).
> >>
> >> Terren:> My points about the pitfalls of
> theorizing
> >> about intelligence apply
> >> to any and all humans who would attempt it -
> meaning,
> >> it's not necessary to
> >> characterize AI folks in one way or another. There
> are any
> >> number of aspects
> >> of intelligence we could highlight that pose a
> challenge to
> >> orthodox models
> >> of intelligence, but the bigger point is that
> there are
> >> fundamental limits
> >> to the ability of an intelligence to observe
> itself, in
> >> exactly the same way
> >> that an eye cannot see itself.
> >> >
> >> > Consciousness and intelligence are present in
> every
> >> possible act of
> >> > contemplation, so it is impossible to gain a
> vantage
> >> point of intelligence
> >> > from outside of it. And that's exactly
> what we
> >> pretend to do when we
> >> > conceptualize it within an artificial
> construct. This
> >> is the principal
> >> > conceit of AI, that we can understand
> intelligence in
> >> an objective way,
> >> > and model it well enough to reproduce by
> design.
> >> >
> >> > Terren
> >> >
> >> > --- On Tue, 7/1/08, Mike Tintner
> >> <[EMAIL PROTECTED]> wrote:
> >> >
> >> >> Terren:It's to make the larger point
> that we
> >> may be so
> >> >> immersed in our own
> >> >> conceptualizations of intelligence -
> particularly
> >> because
> >> >> we live in our
> >> >> models and draw on our own experience and
> >> introspection to
> >> >> elaborate them -
> >> >> that we may have tunnel vision about the
> >> possibilities for
> >> >> better or
> >> >> different models. Or, we may take for
> granted huge
> >> swaths
> >> >> of what makes us
> >> >> so smart, because it's so familiar,
> or below
> >> the radar
> >> >> of our conscious
> >> >> awareness, that it doesn't even occur
> to us to
> >> reflect
> >> >> on it.
> >> >>
> >> >> No. 2 is more relevant - AI-ers don't
> seem to
> >> introspect
> >> >> much. It's an irony
> >> >> that the way AI-ers think when creating a
> program
> >> bears v.
> >> >> little
> >> >> resemblance to the way programmed
> computers think.
> >> (Matt
> >> >> started to broach
> >> >> this when he talked a while back of
> computer
> >> programming as
> >> >> an art). But
> >> >> AI-ers seem to have no interest in the
> discrepancy
> >> - which
> >> >> again is ironic,
> >> >> because analysing it would surely help
> them with
> >> their
> >> >> programming as well
> >> >> as the small matter of understanding how
> general
> >> >> intelligence actually
> >> >> works.
> >> >>
> >> >> In fact - I just looked - there is a
> longstanding
> >> field on
> >> >> psychology of
> >> >> programming. But it seems to share the
> deficiency
> >> of
> >> >> psychology and
> >> >> cognitive science generally which is : no
> study of
> >> the
> >> >> stream-of-conscious-thought, especially
> conscious
> >> >> problem-solving. The only
> >> >> AI figure I know who did take some
> interest here
> >> was
> >> >> Herbert Simon who
> >> >> helped establish the use of verbal
> protocols.
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> -------------------------------------------
> >> >> agi
> >> >> Archives:
> >> http://www.listbox.com/member/archive/303/=now
> >> >> RSS Feed:
> >> http://www.listbox.com/member/archive/rss/303/
> >> >> Modify Your Subscription:
> >> >> http://www.listbox.com/member/?&;
> >> >> Powered by Listbox:
> http://www.listbox.com
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >>
> >>
> >>
> >>
> >
> >
> >
> >
> >
> > 
> 
> 
> 
> 


      

