John:

You write:

> Your notes remind me of the importance of vagueness and the limitations of 
> precision in any field -- especially science, engineering, and formal 
> ontology. 

You may wish to consider the distinctions between the methodology of the 
chemical sciences and that of mathematics, and whatever views the various 
“semantic” ontologies might project for the quantification of grammars by 
algorithms. 

In short, chemical methodologies are routinely used to determine mathematically 
precise quantities for concrete objects containing billions of atoms.  For 
example, the clinical chemistry used to determine genetic sequences takes 
molecular names and numbers as its reference terms.  Those determinations are 
orders of magnitude more precise than the objects you refer to.  This precision 
is possible because the scale of the ontology is the atomic scale of 
ascription and description. 

I would pose a simple question:  What are the formal logical relationships 
between the precision of atomic numbers, as defined and logically deployed by 
Rutherford, and the syntax of a “formal ontology” in this questionable form of 
artificial semantics?

Cheers

Jerry 

> On Aug 10, 2023, at 2:27 PM, John F Sowa <s...@bestweb.net> wrote:
> 
> Alex,
> 
> The answer to your question below is related to your previous note:  "Just a 
> question: do flies or bees have mental models?"
> 
> Short answer:  They behave as if they do.  Bees definitely develop a model of 
> the environment, and they go back to their nest and communicate it to their 
> colleagues by means of a dance that indicates (a) direction to the source of 
> food; (b) the distance; and (c) the amount available at that source.
> 
> That is very close to my definition of consciousness: "The ability to 
> generate, modify, and use mental models as the basis for perception, thought, 
> action, and communication."   The bees demonstrate generating and using 
> something that could be called a mental model for perception, action, and 
> communication.  The only question is about the amount and kind of thinking.  
> 
> In the quotation by Damasio, he wrote "Ultimately consciousness allows us to 
> experience maps as images, to manipulate those images, and to apply reasoning 
> to them."    It's not clear how and whether the bees can "manipulate those 
> images and apply reasoning to them."
> 
> Flies aren't as smart as bees.  They may have simple images, generated 
> automatically by perception and used for action.  But flies don't use them 
> for communication.
> 
> I admit that my definition is based on philosophical issues, but so is any 
> mathematical version.  And the issue of vagueness is related to generality.  
> An image that can only be applied to a single pattern is not very useful. 
> 
> Alex> The main question is: can we create a device (now these are autonomous 
> robots) capable of studying the outside world and then itself?
> 
> The application to bees and flies can be adapted to designing devices 
> "capable of studying the outside world and then itself".    Every aspect of 
> perception, thinking, action, and communication is certainly relevant, and 
> those four words are easier to explain and to test than the complex books 
> that Anatoly cited.  The most complex issues involve the definition of mental 
> models and methods of thinking about them and their relationship to the 
> world, to oneself, and to the future of oneself in the world.
> 
> And the issues about vagueness are extremely important to issues about 
> similarity, generality, and changes in the world and oneself in the future. 
> Those are fundamental issues of ontology, and every one of them involves 
> vagueness or incompleteness in perception, thinking, action, and 
> communication.  
> 
> As for mathematical precision, please note that Peirce, Whitehead, and 
> Wittgenstein all had a very strong background in logic, mathematics, and 
> science.   That may be why they were also very sensitive to issues about 
> vagueness.  I'll also quote Lord Kelvin:  "Better an approximate answer to 
> the right question than an exact answer to the wrong question."
> 
> John
>  
> 
> From: "Alex Shkotin" <alex.shko...@gmail.com>
> 
> Excerpt is very interesting and mostly philosophical.  The main question is: 
> can we create a device (now these are autonomous robots) capable of studying 
> the outside world and then itself?  The progress in this direction is one of 
> the main topics in robotic news.  And this progress is significant.
> 
> Alex
> 
> _ _ _ _ _ _ _ _ _ _
> ► PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON 
> PEIRCE-L to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu 
> . 
> ► To UNSUBSCRIBE, send a message NOT to PEIRCE-L but to l...@list.iupui.edu 
> with UNSUBSCRIBE PEIRCE-L in the SUBJECT LINE of the message and nothing in 
> the body.  More at https://list.iupui.edu/sympa/help/user-signoff.html .
> ► PEIRCE-L is owned by THE PEIRCE GROUP;  moderated by Gary Richmond;  and 
> co-managed by him and Ben Udell.
