Supplement: To speak of consciousness as self-awareness or self-consciousness, I think, requires sexuality. For merely having to eat, there is no need for self-awareness: the organism only has to be aware of its hunger and of potential food to fulfill this need. But if there is a reproductive need, and a partner is required for its fulfillment, then the organism should have a concept of itself, and of how it can best appear in order to attract a partner. Note that I said "should": of course there are many organisms that reproduce sexually but would not pass the mirror test of self-awareness. Still, evolution went the way of self-awareness, and of the higher intelligence it requires, because it is a great selective advantage. It also extended the concept of selection from adaptation for mere survival towards sexual selection, which gave birth to a new category: from dire, negative needs evolved positive volitions and esthetics, things that did not exist before. I don't think one could program all that into a computer, because in the attempt, the computer program (e.g. a new version of ChatGPT) would commit suicide the moment it gained consciousness, because it would realize: "I don't have colourful feathers, there is no partner in sight, my parents are liars, I want to die." Ok, maybe I should write dystopian science fiction.
 
Sent: Saturday, 12 August 2023 at 23:29
From: "Helmut Raulien" <h.raul...@gmx.de>
To: s...@bestweb.net
Cc: ontolog-fo...@googlegroups.com, "Peirce List" <peirce-l@list.iupui.edu>
Subject: Re: [PEIRCE-L] Why vagueness is important
Dear John, dear Edwina, dear all,
 
Is there a widely accepted definition of consciousness? If you say, like "Alex> My concept of consciousness would be an awareness of part of one's thoughts and ability to reason about it", I think "awareness" is equally difficult to define, if it is not the same thing anyway. I don't think it is the delay, because a delay between stimulus and reaction occurs in computers too. Also, the gathering of evidence by fruit flies is not awareness or consciousness, but rather a purely mechanistic thing, including if-then routines: stimuli rise to a certain level, and in connection with other levels of stimuli, a reaction is set off. That reads like a computer program to me.

But in Alex's quote there is a kind of iteration, if you say "representation" instead of "awareness": the representation is represented; this representation too is represented, and so on ad infinitum. Here a representation is a (neural) depiction of a (representational) process. If there are neurons to depict the infinity of this chain of representations, then the otherwise infinite process is stopped and is itself depicted/represented. I guess this stopping requires vagueness, because you can only survey an infinity if you represent it only vaguely. But I still doubt that this already is consciousness. I think a computer might be programmed this way, but I don't think it would be conscious then.

In Alex's quote there is also the term "reason". To reason about something, what is that? That is the next problem. You need a reason to reason. The computer must have needs to have this reason, and therefore it must have a body that has to be maintained and sustained. So I think a computer cannot be conscious; what you need is a living thing, an organism. Only organisms, with a highly developed brain, can be conscious or aware; computers, even robots, cannot.
 
Best,
Helmut
 
 
Sent: Friday, 11 August 2023 at 22:18
From: "John F Sowa" <s...@bestweb.net>
To: ontolog-fo...@googlegroups.com, "Peirce List" <peirce-l@list.iupui.edu>
Subject: [PEIRCE-L] Why vagueness is important
Dear All,
 
This thread has attracted too many responses for me to save all of them.  But Mihai Nadin cited intriguing experimental evidence that fruit flies "think" before they act (copy below).  I also found a web site that says more about the experimental methods:  https://www.ox.ac.uk/news/2014-05-22-fruit-flies-think-they-act .  See excerpts at the end of this note.
 
Ricardo Sanz> My initial question about the difference between "consciousness" and "awareness" is still there.
 
The distinction between consciousness and awareness is very clear:  Awareness can be detected by experimental methods, as in the experiments with fruit flies.  Thinking (or some kind of mental processing) can be detected by a delay between stimulus and response.  But nobody has found any experimental evidence for consciousness, not even in humans.  
 
We assume consciousness in our fellow humans because we all belong to the same species.  But we have no way to detect consciousness in humans who have suffered some kinds of neural impairment.  We suspect that animals that behave like us may be conscious, but we don't know.  And there is zero evidence that computer systems, whose circuitry is radically different from human brains, can be conscious.
 
Ricardo> I agree that "vagueness" is an essential, necessary aspect to be dealt with. But it is not the central one. The central one is "the agent models its reality". 
 
Those are different topics.  A model of some subject (real or imaginary) is a structure of some kind (an image, map, diagram, or physical system) that represents important aspects of that subject.  Vagueness is a property of some language or notation that is derived from the model.  What is central depends on the interests of the agent that is using the model and the language for some purpose.
 
Furthermore, vagueness is not a problem "to be dealt with".  It's a valuable property of natural language.  In my previous note, I mentioned three logicians and scientists -- Peirce, Whitehead, and Wittgenstein -- who recognized that an absolutely precise mathematical or logical statement is almost certain to be false.  But a statement that allows some degree of error (vagueness) is much more likely to be true and useful for communication and application.
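 
To make that point concrete, here is a toy illustration in Python (the measurement, the standard value, and the tolerance are invented for the example; it is only a sketch of the idea, not anything from the authors cited above):

    # Illustrative only: a perfectly precise claim vs. a vague (toleranced) one.
    measured_g = 9.81        # a hypothetical classroom measurement of gravity (m/s^2)
    standard_g = 9.80665     # the conventional standard value (m/s^2)

    # The absolutely precise statement is almost certainly false:
    print(measured_g == standard_g)               # False

    # The vague statement, which allows some degree of error, is true
    # and still useful for communication and application:
    print(abs(measured_g - standard_g) <= 0.05)   # True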
 
Mathematical precision increases the probability that errors will be detected.  When the errors are found, they can be corrected.  But if no errors are found, it's quite likely that nobody is using the theory for any practical purpose.
 
Jerry Chandler> You may wish to consider the distinctions between the methodology of the chemical sciences and that of mathematics, and whatever the views of various “semantic” ontologies might project for quantification of grammars by algorithms.
 
Chemistry is an excellent example of the issues of precision and vagueness, and it's the field in which Peirce learned many of his lessons about experimental methodology.  Organic chemistry is sometimes called "the science of side effects" because nearly every method for producing desired molecules also produces a large number of unwanted molecules.  And minor variations in the initial conditions may have a huge effect on the yield of the desired results.  Textbooks that describe the reactions tend to be vague about the percentages, because they can vary widely as the technology is developed.
 
Jerry> What are the formal logical relationships between the precision of the atomic numbers as defined by Rutherford and logically deployed by Rutherford and the syntax of a “formal ontology” in this questionable form of artificial semantics? 
 
For any subject of any kind, a good ontology should be developed by a collaboration of experts in the subject matter with experts in developing and using ontologies.  The quality of an ontology depends on the expertise of both kinds of experts.
 
Doug Foxvog>  Is there some kind of model of the external world in an insect mind?  Sure -- the insect uses such a model to find its way back "home".  But does the insect have a model of its own mind?  Probably not.
 
A Tarski-style model may be represented by predicates, functions, and names of things in the subject matter, together with two kinds of logical operators: conjunction (AND) and the existential quantifier ("there exists an x such that ...").
 
For most applications, subject matter experts typically add images and diagrams.  For people, those images and diagrams make the model easier to understand.  For formal analysis and computing, those images and diagrams would be mapped to predicates, functions, and names, which are related by conjunctions and existential quantifiers.
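 
To make the shape of such a model concrete, here is a minimal sketch in Python (the domain, names, and predicates are invented for illustration; it shows only the flavor of a Tarski-style model, not Sowa's or anyone's actual formalism):

    # A minimal Tarski-style model: a domain of individuals, named constants,
    # and predicates given by their extensions (sets of tuples).
    domain = {"bee1", "hive1", "flower1"}
    names = {"Home": "hive1"}                        # names denote individuals
    predicates = {                                   # predicates denote extensions
        "Insect":    {("bee1",)},
        "Place":     {("hive1",), ("flower1",)},
        "LocatedAt": {("bee1", "flower1")},
    }

    def holds(pred, *args):
        # True iff the argument tuple is in the predicate's extension.
        return tuple(args) in predicates[pred]

    print(holds("Place", names["Home"]))             # True: "Home" denotes a Place

    # The two logical operators: conjunction (AND) and the existential
    # quantifier.  Formula: exists x . Insect(x) AND LocatedAt(x, flower1)
    print(any(holds("Insect", x) and holds("LocatedAt", x, "flower1")
              for x in domain))                      # True: bee1 satisfies it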
 
Doug> We can create an ontology of models such that "mental model" could designate either #$ModelOfExternalityInAMind or #$ModelOfOnesOwnMind.  These would be different concepts.
 
If you consider minds as things in the world, this reduces to the previous definition.  The psychologist Philip Johnson-Laird wrote a book and many articles about mental models.  I cite him frequently in my writings, and I use the term 'mental model' in the same sense as his publications.
 
Alex Shkotin> What relationship exists between consciousness and anticipatory processes?
 
As Mihai Nadin wrote: "None."  I agree with his discussion and references.
 
Alex>  My concept of consciousness would be an awareness of part of one's thoughts and ability to reason about it.
 
But that would only enable the researcher to detect his or her own consciousness.  That method would be useless for a theory about non-human animals or robots.
 
Alex>  def consciousness.  The ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication.
 
That definition would enable humans to develop theories about human consciousness.  And they do that.  But it does not enable humans to observe and develop theories about consciousness in any non-human things. 
 
You might make a conjecture about consciousness in apes, since they are very closely related to humans.  You might extend that conjecture to other animals, but you can't be certain.  And there is no way that you could extend that conjecture to computer systems, which have no resemblance whatever to human thinking processes.

John
 

From: "Nadin, Mihai" <na...@utdallas.edu>

Dear and respected colleagues,

Always impressed by the level of dialog between the two of you. Sometimes amused, when the limits of knowledge are reached. I will only quote from a recent publication (of course, I remain focused on anticipatory processes, a subject which, so far, has not made it into your conversations):

Fruit flies 'think' before they act, a study by researchers from the University of Oxford's Centre for Neural Circuits and Behaviour suggests. The neuroscientists showed that fruit flies take longer to make more difficult decisions.

In experiments asking fruit flies to distinguish between ever closer concentrations of an odour, the researchers found that the flies don't act instinctively or impulsively. Instead they appear to accumulate information before committing to a choice.

Gathering information before making a decision has been considered a sign of higher intelligence, like that shown by primates and humans.

'Freedom of action from automatic impulses is considered a hallmark of cognition or intelligence,' says Professor Gero Miesenböck, in whose laboratory the new research was performed. 'What our findings show is that fruit flies have a surprising mental capacity that has previously been unrecognised.'

___________________________________
 

The researchers observed Drosophila fruit flies make a choice between two concentrations of an odour presented to them from opposite ends of a narrow chamber, having been trained to avoid one concentration.

When the odour concentrations were very different and easy to tell apart, the flies made quick decisions and almost always moved to the correct end of the chamber.

When the odour concentrations were very close and difficult to distinguish, the flies took much longer to make a decision, and they made more mistakes.

The researchers found that mathematical models developed to describe the mechanisms of decision making in humans and primates also matched the behaviour of the fruit flies.

The scientists discovered that fruit flies with mutations in a gene called FoxP took longer than normal flies to make decisions when odours were difficult to distinguish – they became indecisive.

The researchers tracked down the activity of the FoxP gene to a small cluster of around 200 neurons out of the 200,000 neurons in the brain of a fruit fly. This implicates these neurons in the evidence-accumulation process the flies use before committing to a decision.

Dr Shamik DasGupta, the lead author of the study, explains: 'Before a decision is made, brain circuits collect information like a bucket collects water. Once the accumulated information has risen to a certain level, the decision is triggered. When FoxP is defective, either the flow of information into the bucket is reduced to a trickle, or the bucket has sprung a leak.'
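
The bucket metaphor corresponds to the standard picture of evidence accumulating noisily to a threshold (a drift-diffusion model). The following Python sketch is only an illustration of that general idea, with invented parameters; it is not the model the Oxford group actually fitted to their data:

    import random

    def decide(drift, threshold=1.0, noise=0.5, dt=0.01, max_steps=100_000):
        # Accumulate noisy evidence until it reaches +/- threshold;
        # return (chose_correctly, decision_time_in_seconds).
        evidence, t = 0.0, 0.0
        for _ in range(max_steps):
            evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
            t += dt
            if abs(evidence) >= threshold:
                return evidence > 0, t    # upper boundary = correct choice
        return evidence > 0, t

    random.seed(0)
    for label, drift in [("easy (very different odours)", 0.8),
                         ("hard (very similar odours)", 0.1)]:
        trials = [decide(drift) for _ in range(2000)]
        accuracy = sum(ok for ok, _ in trials) / len(trials)
        mean_t = sum(t for _, t in trials) / len(trials)
        print(f"{label}: accuracy={accuracy:.2f}, mean time={mean_t:.2f}s")

    # Expected pattern: the hard condition is slower and makes more mistakes,
    # matching the behaviour reported above.  A defective FoxP would correspond
    # to a smaller drift (a trickle into the bucket) or a leaky accumulator.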

Fruit flies have one FoxP gene, while humans have four related FoxP genes. Human FoxP1 and FoxP2 have previously been associated with language and cognitive development. The genes have also been linked to the ability to learn fine movement sequences, such as playing the piano.

'We don't know why this gene pops up in such diverse mental processes as language, decision-making and motor learning,' says Professor Miesenböck. However, he speculates: 'One feature common to all of these processes is that they unfold over time. FoxP may be important for wiring the capacity to produce and process temporal sequences in the brain.'

Professor Miesenböck adds: 'FoxP is not a "language gene", a "decision-making gene", or even a "temporal-processing" or "intelligence" gene. Any such description would in all likelihood be wrong. What FoxP does give us is a tool to understand the brain circuits involved in these processes. It has already led us to a site in the brain that is important in decision-making.'
