2008/7/2 Terren Suydam <[EMAIL PROTECTED]>:
>
> Mike,
>
>> This is going too far. We can reconstruct to a considerable
>> extent how humans think about problems - their conscious thoughts.
>
> Why is it going too far?  I agree with you that we can reconstruct thinking, 
> to a point. I notice you didn't say "we can completely reconstruct how humans 
> think about problems". Why not?
>
> We have two primary means for understanding thought, and both are deeply 
> flawed:
>
> 1. Introspection. Introspection allows us to analyze our mental life in a 
> reflective way. This is possible because we are able to construct mental 
> models of our mental models. There are three flaws with introspection. The 
> first, least serious flaw is that we only have access to that which is 
> present in our conscious awareness. We cannot introspect about unconscious 
> processes, by definition.
>
> This is a less serious objection because it's possible in practice to become 
> conscious of phenomena that were previously unconscious, by developing our 
> meta-mental-models. The question here becomes, is there any reason in 
> principle that we cannot become conscious of *all* mental processes?
>
> The second flaw is that, because introspection relies on the meta-models we 
> need to make sense of our internal, mental life, the possibility is always 
> present that our meta-models themselves are flawed. Worse, we have no way of 
> knowing if they are wrong, because we often unconsciously, unwittingly deny 
> evidence contrary to our conception of our own cognition, particularly when 
> it runs counter to a positive account of our self-image.
>
> Harvard's "Project Implicit" experiment 
> (https://implicit.harvard.edu/implicit/) is a great way to demonstrate how we 
> remain ignorant of deep, unconscious biases. Another example is how little we 
> understand the contribution of emotion to our decision-making. Joseph LeDoux 
> and others have shown fairly convincingly that emotion is a crucial part of 
> human cognition, but most of us (particularly us men) deny the influence of 
> emotion on our decision-making.
>
> The final flaw is the most serious: there is a fundamental limit to what 
> introspection has access to. This is the "an eye cannot see itself" 
> objection. But I can see my eyes in the mirror, says the devil's advocate. Of 
> course, a mirror lets us observe a reflected version of our eye, and this is 
> what introspection is. But we cannot see inside our own eye, directly - it's 
> a fundamental limitation of any observational apparatus. Likewise, we cannot 
> see inside the very act of model-simulation that enables introspection. 
> Introspection relies on meta-models, or "models about models", which are 
> activated/simulated *after the fact*. We might observe ourselves in the act 
> of introspection, but that is nothing but a meta-meta-model. Each 
> introspective act is by necessity one step (at least) removed from the 
> direct, in-the-present flow of cognition. This means that we can never 
> observe the cognitive machinery that enables the act of introspection itself.
>
> And if you don't believe that introspection relies on cognitive machinery 
> (maybe you're a dualist, but then why are you on an AI list? :-), ask 
> yourself why we can't introspect about ourselves before a certain point in 
> our young lives. Introspection relies on a sufficiently sophisticated toolset that 
> requires a certain amount of development before it is even possible.
>
> 2. Theory. Our theories of cognition are another path to understanding, and 
> much of theory is directly or indirectly informed by introspection. When 
> introspection fails (as in language acquisition), we rely completely on 
> theory. The flaw with theory should be obvious. We have no direct way of 
> testing theories of cognition, since we don't understand the connection 
> between the mental and the physical. At best, we can use clever indirect 
> means for generating evidence, and we usually have to accept the limits of 
> reliability of subjective reports.
>

My plan is to go for 3) Usefulness. Cognition is useful from an
evolutionary point of view; if we try to create systems that are
useful in the same situations (social interaction, building world
models), then we might one day stumble upon cognition.

To expand on usefulness in social contexts, you have to ask yourself
what the point of language is, and why it is useful in an evolutionary
setting. One thing the point of language is not is to fool humans into
thinking that you are human, which is why all the chatbots that get
coverage as AI annoy me.

I'll write more on this later.

This, by the way, is why I don't self-organise purpose. I am pretty
sure a specified purpose (not the same thing as a goal, at all) is
needed for an intelligence.

  Will


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now