Re: The Mind (off topic, but then, is anything off topic on this list?)

2002-12-29 Thread John M

> John M wrote:
> >Eric,
> >your proposal sounds like: "here I am and here is my mind" .
> >What gave you the idea that "the two" can be thought of as separate
> >entities?
> >The fact that we differentiate between a bowel movement and a thinking
> >process in philosophy ... does not MAKE them separate entities.
> >
> Eric's first law of abstraction: (known variously as the trivially
> profound law or the profoundly trivial law:)
>-- "Every two things are both the same and different."--
>
> Bowel movements and mental processes. They are both physical processes
> in the body, it's true.
> The difference is that a mental process is in its essence a process of
>representation ("re presentation") of reality and similarly
> structured potential realities. That is, it is a process of using some
> aspect in the brain as a stand-in for some aspect of the external world.
>And it is a process of doing so in a way that is flexible and
> general enough to allow the generation of representations (mental
> stand-ins) of new hypothetical or counterfactual states of the external
>reality, as well as of actual states. Thus we can think of how
> things we have not directly apprehended might be, and how things that
> haven't yet taken place could be, if only this would happen, or, sadly but
>instructively, of how things might have been.

Eric, it is an old rule in arguing to use long, foggy sentences with
foggy expressions so the opponent does not know what to attack. Do you
wish a similarly worded translation of your "mental narrative" into the
"bowel movement" I mentioned as a punch line? Mutatis mutandis, the same
phrases can be formulated for the latter. (I will spare the list.)
Please forgive me for not re-quoting all of your post.
>
> Models of the mind:
> 
I skip the interesting historical survey.
>
> And today, we believe that the brain is a computer and the mind is
> software.
No, we don't.
SNIP
In your argument *against* the statement (thank you) you use obsolete
visions stemming from conventional analyses of the image you argue
against.
Also you plunge into the 'computer' idea, a machine that "thinks" using
memory.
Again, in classical reductionism, as a "Ding an sich", it is a white
elephant on its own. Computer hardware is an expensive paperweight, a
software some dirt on a paper - or configurational patterns in a
semiconductor maze. Neither does anything. It becomes a computer when
the complexity of the total functions together, just as a brain does not
think and a skin does not feel without the complexity of the living
composition total (so abruptly stopped at death).
The difference is not as profound as I thought earlier: machines are
products of complexity beyond the machine-aspect; human complexities
(ingenuity?) and social possibilities functioned together in the
'evolvement' of a (cute) machine that thinks.
What we observe is a piece of industrial product; what we don't observe
is the extended network and (their combined) influence that generated
that piece of junk.
What I am referring to is your statement about the computer 'memory' :
>
>...? It's not really part of the computer at all. It is the software.<

>
> The historical thinker who came closest to understanding the nature of
> the mind was undoubtedly Plato,
Sorry, he and his contemporaries (up to this time) are obsolete in the
sense that they used for their (surely ingenious) thinking an epistemic
cognitive inventory far less complete than what has developed since
their death or sources. I risk the statement that our (scientific etc.)
cognitive inventory is not complete even today. There are 'hidden
variables' in Bohm's 'implicate' and unknown connections in the so far
undiscovered networks of complexities, which we may call 'impredicative'
--- all important to understand what we THINK we understand, even
formalize in the (=)-sign-fixed quantized formalisms.
So I don't rely on the relics. I rather confess my scientific
agnosticism (nothing to do with the religious one). Give me 300 years;
we will know more.

I wanted to skip your concluding paragraph about computer examples for
an impredicative complexity, looked at through different aspects of
select functions in reductionistic science as chapters of separate units
(i.e. the mental aspect of the complexity 'human'), but I find it
necessary to show how we can confuse a model with the 'total' we
formulated in the representation of an aspect within it.
A computer 'models' nicely the "model"-functions of human thinking *as
cut* in certain scientific considerations for detailed study. If
somebody does not look beyond the (artificially set) boundaries of this
model, then the computer example looks like a "brain" (with software: a
mind). It is the recent version of the historical images you mentioned
(and I skipped): telephone switchboard, steam engine, clockwork, the
soul-spirit of religion and the caveman, whatever.
I dare state that I do not know details, but I know: it is more involved,
interlaced end

Re: The Mind (off topic, but then, is anything off topic on this list?)

2002-12-28 Thread Eric Hawthorne
John M wrote:


Eric,

your proposal sounds like: "here I am and here is my mind" .
What gave you the idea that "the two" can be thought of as separate
entities?
The fact that we differentiate between a bowel movement and a thinking
process in philosophy ... does not MAKE them separate
entities. 

Eric's first law of abstraction (known variously as the trivially
profound law or the profoundly trivial law):

"Every two things are both the same and different."

Bowel movements and mental processes. They are both physical processes 
in the body, it's true.
The difference is that a mental process is
in its essence a process of representation ("re presentation") of 
reality and similarly
structured potential realities. That is, it is a process of using some 
aspect in the brain as a stand-in
for some aspect of the external world. And it is a process of doing so 
in a way that is flexible and
general enough to allow the generation of representations (mental 
stand-ins) of new hypothetical
or counterfactual states of the external reality, as well as of actual 
states. Thus we can think of how
things we have not directly apprehended might be, and how things that 
haven't yet taken place could be,
if only this would happen, or, sadly but instructively, of how things 
might have been.

Models of the mind:

Back when 90% of the world and its behaviour was unknown and attributed
to God, the mind and soul were thought to be an earthbound, temporarily
trapped part of the greater mind of God.

During the early industrial age, the mind was thought by some to be the
process of operation of a machine composed of cogs and gears and things
like steam power.

And today, we believe that the brain is a computer and the mind is 
software.
The conventional wisdom is that this theory is as naive as the earlier 
theories; that we are similarly
deluded by our present-day fetish with computers. But I think this 
dismissal of brain-as-computer, mind-as-software
is facile. I think our theories of mind have been improving over time. 
The brain IS a form of machine, as the 19th century
people thought. And much more specifically, the brain IS a form of 
universal computing machine, as we think today.

Let me ask this. What category of machine do we have that can hold in it 
symbolic representations
that have correspondences with aspects of the external world? What 
category of machine do we have whose
representations of the world are manipulable and malleable in ways that 
can correspond to changes in the
state of the external world? The computer of course. What part of the 
computer stores the representations
of the world? Well I guess we could say its disk and memory. But what 
part of the computer performs the
manipulations on the symbols which sometimes correspond to the formation 
of hypotheses about the state of the
external world? It's not really part of the computer at all. It is the 
software.

The historical thinker who came closest to understanding the nature of 
the mind was undoubtedly Plato,
who first understood a world of abstract concepts, his world of ideals. 
The only thing he didn't know is that
you could build a machine (the computer) capable of holding inside 
itself and manipulating those ideals, and that
in fact we already had a particularly sophisticated form of that type of 
machine on top of our shoulders.
Our brains. Where is Plato's world of ideals? He didn't know. We do. It
is the representative and abstracted information about the world, stored
symbolically in our brains and manipulated by the cognitive software
running on our brains.

The brain-mind duality is solved (and now officially boring). If you can 
say that you truly understand:
  1) the distinction between computing software and computing hardware,
  2) issues such as what makes one piece of software different from or
the same as another, versus what makes one piece of hardware different
from or the same as another,
  3) what the relationship of computing software to computing hardware
is, and
  4) how the essential particulars of high-level software gain
independence from the particulars of computing hardware through the
construction of hierarchies of levels or layers of software process,
with emergent behaviours at each level,

and yet you claim not to know what a mind is with respect to a brain, 
then I would say you're just not thinking
hard enough about the issue.
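Point 4 above can be sketched in code. A minimal illustration (my own
hypothetical example, not anything from the thread): the same high-level
routine runs unchanged on two physically different storage substrates,
which is the sense in which the particulars of software gain
independence from the particulars of hardware.

```python
# Illustrative sketch only: two different "hardware-level" substrates
# expose the same interface, so the "mind-level" routine is indifferent
# to which one it runs on.

class ListMemory:
    """One substrate: stores symbols in a Python list."""
    def __init__(self):
        self.cells = []
    def store(self, symbol):
        self.cells.append(symbol)
    def recall(self):
        return list(self.cells)

class DictMemory:
    """A physically different substrate with the same interface."""
    def __init__(self):
        self.cells = {}
    def store(self, symbol):
        self.cells[len(self.cells)] = symbol
    def recall(self):
        return [self.cells[i] for i in sorted(self.cells)]

def think(memory):
    """High-level 'software': indifferent to the substrate beneath it."""
    for symbol in ["tree", "river", "tree"]:
        memory.store(symbol)
    return memory.recall()

print(think(ListMemory()) == think(DictMemory()))  # True
```

The point of the sketch is only that `think` never mentions which
substrate it is given; layered interfaces are what buy that
independence.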





Re: The Mind (off topic, but then, is anything off topic on this list?)

2002-12-28 Thread John M
Eric,

your proposal sounds like: "here I am and here is my mind" .
What gave you the idea that "the two" can be thought of as separate
entities?
The fact that we differentiate between a bowel movement and a thinking
process in philosophy and constructed separate noumena for the 'mental'
and 'bodily' aspects (call it res extensa and res cogitans, soul/body,
mind/flesh, or whatever is your actual distaste) does not MAKE them
separate entities. The ONE complexity 'human' (part of the whole,
anyway) has aspects we observe more, or less, and in primitive thinking
we personify them into 'units', called by stupid names. And then write
smart - award-winning? - books on them.

Happy New Year

John Mikes



- Original Message -
From: "Eric Hawthorne" <[EMAIL PROTECTED]>
Sent: Saturday, December 28, 2002 2:03 AM

> See response attached as text file:
>
> Joao Leao wrote:
>
> >Both seem to me rather vacuous statements, since we don't
> >really yet have a theory, classical or quantum or what-have-you, of
> >what a mind is or does. I don't mean an empirical, or verifiable, or
> >decidable, or merely speculative theory! I mean ANY theory. Please
> >show me I am wrong if you think otherwise.
> >
>
> If you don't like my somewhat rambling ideas on the subject, below,
> perhaps try a book by Steven Pinker called "How the Mind Works". It's
> supposed to be pretty good. I've got it but haven't read it yet.
>
> Eric
>

Re: The Mind (off topic, but then, is anything off topic on this list?)

2002-12-27 Thread Eric Hawthorne
See response attached as text file:

Joao Leao wrote:


Both seem to me rather vacuous statements, since we don't really yet
have a theory, classical or quantum or what-have-you, of what a mind is
or does. I don't mean an empirical, or verifiable, or decidable, or
merely speculative theory! I mean ANY theory. Please show me I am wrong
if you think otherwise.



If you don't like my somewhat rambling ideas on the subject, below,
perhaps try a book by Steven Pinker called "How the Mind Works". It's
supposed to be pretty good. I've got it but haven't read it yet.

Eric

-



What does a mind do?

A mind in an intelligent animal, such as ourselves, does the following:

1. Interprets sense-data and symbolically represents the objects, relationships, 
processes,
and more generally, situations that occur in its environment.
  
  Extra buzzwords: segmentation, individuation,
   "cutting the world with a knife into this and not-this" 
   (paraphrased from Zen & The Art of Motorcycle Maintenance)
 
2. Creates both specific models of specific situations and their constituents,
and abstracted, generalized models of important classes of situations and situation
constituents, using techniques such as cluster analysis, logical induction and 
abduction,
Bayesian inference (or effectively equivalent processes).
   
  Extra buzzwords: "structure pump", "concept formation", "episodic memory" 
  

3. Recognizes new situations, objects, relationships, processes as being instances
of already represented specific or generalized situations, objects, relationships, 
processes. 

The details of the recognition processes vary across sensory domains, but probably
commonly use things like: matching at multiple levels of abstraction with feedback
between levels, massively parallel matching processes, abstraction lattices.

Extra buzzwords: patterns, pattern-matching, neural net algorithms, 
 constraint-logic-programming, associative recall


4. Builds up, through sense-experience, representation, and recognition processes, 
over time, an associatively interrelated "library of symbolic+probabilistic models or 
micro-theories" about contexts in the environment.

5. Holds micro-theories in degrees of belief. That is, in degrees of being considered
a good "simple, corresponding, explanatory, successfully predictive" model of some
aspect of the environment.

6. Adjusts degrees of belief through a continual process of theory extension,
hypothesis testing against new observations, incremental theory revision, assessment of
competing extended theories etc. In short, performs a mini, personalized equivalent
of "the history of science forming the evolving set of well-accepted scientific 
theories".

Degree of belief in each micro-theory is influenced by factors such as: 

a. Repeated success of theory at prediction under trial against new observations

b. Internal logical consistency of theory.

c. Lack of inconsistency with new observations and with other micro-theories of 
possibly
identical or constituent-sharing contexts.

d. Generation of large numbers of general and specific propositions which are
deductively derived from the assumptions of the theory, and which are independently
verified as being "corresponding" to observations. 

e. Depth and longevity of embedding of the theory in the knowledge base. i.e.
the extent to which repeated successful reasoning from the theory has resulted in the 
theory becoming a "basis theory" or "theory justifying other extended or analogous 
theories" in the knowledge base. 


7. Creates alternative possible world models (counterfactuals or hypotheticals),
by combining abstracted models with episodic models but with variations generated
through the use of substitution of altered or alternative constituent entities,
sequences of events, etc.

 Extra buzzwords: Counterfactuals, possible worlds, modal logic, dreaming 

8. Generates, and ranks for likelihood, extensions of episodic models into the future,
using stereotyped abstract situation models with associated probabilities to predict
the next likely sequences of events, given the part of the situation that has
been observed to unfold so far.
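Item 8 can be sketched by approximating the "stereotyped abstract
situation models with associated probabilities" as event-bigram counts
(a simplifying assumption of this illustration): the model ranks likely
next events given what has been observed so far.

```python
# Illustrative sketch: predicting the next likely event from
# probabilities learned over remembered episodes.

from collections import Counter, defaultdict

def learn_transitions(episodes):
    """Count event bigrams across remembered episodes."""
    model = defaultdict(Counter)
    for episode in episodes:
        for a, b in zip(episode, episode[1:]):
            model[a][b] += 1
    return model

def rank_next(model, last_event):
    """Rank candidate next events by estimated probability."""
    counts = model[last_event]
    total = sum(counts.values())
    return [(ev, n / total) for ev, n in counts.most_common()]

episodes = [["clouds", "rain", "mud"],
            ["clouds", "rain", "flood"],
            ["clouds", "sun"]]
model = learn_transitions(episodes)
print(rank_next(model, "clouds"))  # rain outranks sun
```

A real mind presumably conditions on far more than the last event, but
the ranking-by-likelihood structure is the point of item 8.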
 
9. Uses the extended and altered models, (hypotheticals, counterfactuals), as a context
in which to create and pre-evaluate through simulation the likely effectiveness of 
plans of action designed to alter the course of future events to the material 
advantage of the animal.

10. Chooses a plan. Acts on the world according to the plan, either indirectly, 
through communication with other motivated intelligent agents, or directly by 
controlling its own body and using tools.

10a. Communicates with other motivated intelligent agents to assist it in carrying
out plans to affect the environment:
Aspects of the communication process:
- Model (represent and simulate) the knowledge, motivations and r