Aw: [PEIRCE-L] Why vagueness is important

2023-08-12 Thread Helmut Raulien
 

Supplement: To speak of consciousness as self-awareness or self-consciousness, I think, requires sexuality. For merely having to eat there is no need for self-awareness; the organism only has to be aware of its hunger and of potential food to satisfy that need. But if there is a reproductive need, and a partner is required for its fulfillment, then the organism had better have a concept of itself and of how it can best appear in order to attract a partner. Note that I said "had better", because of course there are many organisms that reproduce sexually but would not pass the mirror test of self-awareness. Still, evolution went the way of self-awareness, and of the higher intelligence it requires, because it is a great selectional advantage. It also extended the concept of selection from adaptation for mere survival towards sexual selection, which gave birth to a new category: from dire, negative needs evolved positive volitions and esthetics, things that did not exist before. I don't think one could program all of that into a computer, because if one tried, the computer program (e.g. a new version of ChatGPT) would commit suicide the moment it gained consciousness, because it would realize: "I don't have colourful feathers, there is no partner in sight, my parents are liars, I want to die." Ok, maybe I should write dystopian science fiction.
 

Sent: Saturday, 12 August 2023 at 23:29
From: "Helmut Raulien" 
To: s...@bestweb.net
Cc: ontolog-fo...@googlegroups.com, "Peirce List" 
Subject: Aw: [PEIRCE-L] Why vagueness is important



Dear John, dear Edwina, dear all,

 

Is there a widely accepted definition of consciousness? If you say, like "Alex> My concept of consciousness would be an awareness of part of one's thoughts and ability to reason about it", then I think "awareness" is equally difficult to define, if it is not the same thing anyway. I don't think it is the delay, because a delay between stimulus and reaction occurs in computers too. Nor is the gathering of evidence by fruit flies awareness or consciousness; it is rather a purely mechanistic thing made of if-then routines: stimuli rise to a certain level and, in connection with other levels of stimuli, a reaction is set off. That reads like a computer program to me (a toy sketch follows below).

But in Alex's quote there is a kind of iteration, if you say not "awareness" but "representation": the representation is represented, this representation too is represented, and this one too, ad infinitum. Here a representation is a (neural) depiction of a (representational) process. If there are neurons to depict the infinity of this chain of representations, then the otherwise infinite process is stopped and is itself depicted/represented. I guess this stopping requires vagueness, because you can only survey an infinity if you represent it only vaguely. But I still doubt that this already is consciousness. A computer might be programmed this way, but I don't think it would be conscious then.

In Alex's quote there is also the term "reason". To reason about something: what is that? That is the next problem. You need a reason to reason. The computer must have needs in order to have this reason, and therefore it must have a body that has to be maintained and sustained. So I think a computer cannot be conscious; what you need is a living thing, an organism. So I think only organisms with a highly developed brain can be conscious or aware, but computers, even robots, cannot.
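
Just to illustrate what I mean by "reads like a computer program" and by the chain of representations being cut short by a vague summary, here is a tiny Python sketch. It is only a toy of my own; the threshold, the cutoff depth, and all the names are invented for illustration, and nothing in it is meant as a model of an organism, let alone of consciousness:

# Toy sketch only: invented names and thresholds, not a model of anything real.

STIMULUS_THRESHOLD = 0.7   # assumed level at which a reaction is set off
MAX_DEPTH = 3              # cutoff standing in for a vague summary of the rest of the chain

def react(hunger: float, food_signal: float) -> str:
    """Purely mechanistic if-then routine: stimuli reach a level, a reaction is set off."""
    if hunger > STIMULUS_THRESHOLD and food_signal > STIMULUS_THRESHOLD:
        return "approach food"
    return "do nothing"

def represent(state: str, depth: int = 0) -> str:
    """A representation of a representation, ad infinitum; here the chain is stopped
    by a depth cutoff that only vaguely stands in for the rest of the infinite chain."""
    if depth >= MAX_DEPTH:
        return "a vague representation of all further representations of (" + state + ")"
    return "representation of (" + represent(state, depth + 1) + ")"

print(react(hunger=0.9, food_signal=0.8))   # -> approach food
print(represent("hunger"))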

 

Best,

Helmut

 
 

Sent: Friday, 11 August 2023 at 22:18
From: "John F Sowa" 
To: ontolog-fo...@googlegroups.com, "Peirce List" 
Subject: [PEIRCE-L] Why vagueness is important



Dear All,

 

This thread has attracted too many responses for me to save all of them.  But Mihai Nadin cited intriguing experimental evidence that fruit flies "think" before they act (copy below).  I also found a web site that says more about the experimental methods:  https://www.ox.ac.uk/news/2014-05-22-fruit-flies-think-they-act . See excerpts at the end of this note.

 

Ricardo Sanz> My initial question about the difference between "consciousness" and "awareness" is still there.

 

The distinction between consciousness and awareness is very clear:  Awareness can be detected by experimental methods, as in the experiments with fruit flies.  Thinking (or some kind of mental processing) can be detected by a delay between stimulus and response.  But nobody has found any experimental evidence for consciousness, not even in humans.  

 

We assume consciousness in our fellow humans because we all belong to the same species.  But we have no way to detect consciousness in humans who have suffered some kinds of neural impairment.  We suspect that animals that behave like us may be conscious, but we don't know.  And there is zero evidence that computer systems, whose circuitry is radically different from human brains, can be conscious.


 

Ricardo> I agree that "vagueness" is an essential, necessary aspect to be dealt with. But it is not the central one. The central one is "the agent models its reality". 

 

Those are different topics.  A model of some subject (real or imaginary) is a structure of some kind (image, map, diagram, or physical system) that represents important aspects of that subject.  Vagueness is a property of some language or notation that is derived from the model.  What is central depends on the interests of some agent that is using the model and the language for some purpose.

 

Furthermore, vagueness is not a problem "to be dealt with".  It's a valuable property of natural language.  In my previous note, I mentioned three logicians and scientists -- Peirce, Whitehead, and Wittgenstein -- who recognized that an absolutely precise mathematical or logical statement is almost certain to be false.  But a statement that allows some degree of error (vagueness) is much more likely to be true and useful for communication and application.
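
A toy illustration of that point (my own example, with made-up numbers, not anything from the sources cited above): an absolutely precise claim about a measured quantity is almost certain to be false, while a claim that tolerates a small error can be true and still useful.

# Toy example with invented numbers; only meant to illustrate the contrast above.
measured_g = 9.807                                # a hypothetical measurement of g

precise_claim = (measured_g == 9.8)               # exact equality: almost certainly False
tolerant_claim = (abs(measured_g - 9.8) < 0.05)   # allows some error: True and still useful

print(precise_claim, tolerant_claim)              # -> False True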

 

Mathematical precision increases the probability that errors will be detected.  When the errors are found, they can be corrected.  But if no errors are found, it's quite likely that nobody is using the theory for any practical purpose.

 

Jerry Chandler> You may wish to consider the distinctions between the methodology of the chemical sciences from that of mathematics and whatever the views of various “semantic” ontologies might project for quantification of grammars by algorithms. 

 

Chemistry is an excellent example of the issues of precision and vagueness, and it's the one in which Peirce learned many of