Re: [PEIRCE-L] Why vagueness is important

2023-08-12 Thread Helmut Raulien
 

Supplement: To speak of consciousness as self-awareness or self-consciousness too: I think this requires sexuality. For merely having to eat there is no need for self-awareness; the organism only has to be aware of its hunger and of potential food to fulfill this need. But if there is a reproductive need, and a partner is required for its fulfillment, then the organism should have a concept of itself and of how it can best appear in order to attract a partner. Note that I said "should": of course there are many organisms that reproduce sexually but would not pass the mirror test of self-awareness. Still, evolution went the way of self-awareness, and of the higher intelligence this requires, because it is a great selective advantage. It also widened the concept of selection, from adaptation for mere survival to sexual selection, which gave birth to a new category: from dire, negative needs evolved positive volitions and esthetics, things that did not exist before. I don't think one could program all that into a computer, because in the attempt the computer program (e.g. a new version of ChatGPT) would commit suicide the moment it gained consciousness, realizing: "I don't have colourful feathers, there is no partner in sight, my parents are liars, I want to die." OK, maybe I should write dystopian science fiction.
 

Sent: Saturday, 12 August 2023 at 23:29
From: "Helmut Raulien" 
To: s...@bestweb.net
Cc: ontolog-fo...@googlegroups.com, "Peirce List" 
Subject: Re: [PEIRCE-L] Why vagueness is important



Re: [PEIRCE-L] Why vagueness is important

2023-08-12 Thread Edwina Taborsky
Helmut, John, List

I don’t know if there is a ‘widely accepted definition of consciousness’. I am referring to Peirce’s discussions of the term. I like his differentiation between the immediate and mediate consciousness. Notice that he refers to BOTH types as ‘consciousness’; that is, he doesn’t confine consciousness to the use of a symbolic modelling process.

IF one considers that all that is ‘existential’ functions within a triadic interaction, i.e. as a Sign, made up of O-R-I [object relation, representamen relation, interpretant relation], then the immediate consciousness could be better classified as awareness, and we can understand this interaction to be composed of Relations within the mode of Secondness and/or Firstness. That is, I am assuming that consciousness emerges only within the operation of the mode of Thirdness.

But certainly an animal or plant is aware of its surroundings, whether it be the scent of a predator or an actual insect eating that plant’s leaves. That is, I am suggesting that Thirdness, that modelling process, functions in plants and animals as well as in humans. An animal or plant can analyze or measure the incoming stimulus data from the Object against its internalized model [whether this knowledge base is genetic or learned] and come to an interpretation that ‘this is X … and I’d better do Y’. Therefore, since Thirdness is functioning in this interaction, I’d call it consciousness, a mediated consciousness in this example. So I don’t think the fruit fly’s interaction is mechanical; it is both awareness [in a mode of 1ns and 2ns] and mediated consciousness, i.e. in a mode of 3ns!

But, again, to symbolically model these relations requires symbolic methods, e.g. mathematics, language, etc., and so far only humans can do this!

Can a computer be ‘conscious’? It can certainly operate within the modes of 1ns and 2ns, in its most basic mechanical manner. It has a stored knowledge base against which it references incoming data, so, presumably, we’d have to say this is operating within a mode of 3ns. According to our basic definition, it is ‘conscious’: it can make decisions on ‘what to do’.
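
As a toy illustration of that basic definition, consider referencing incoming data against a stored knowledge base. This is a minimal Python sketch; the stimuli and responses are invented for illustration and come from nothing in this thread:

    # Toy sketch: incoming data is referenced against a stored knowledge base,
    # and a decision on 'what to do' falls out of the match.
    # (Invented stimuli and responses, for illustration only.)
    knowledge_base = {
        "scent_of_predator": "flee",
        "insect_eating_leaf": "emit defensive chemical",
    }

    def interpret(stimulus: str) -> str:
        # Mediate the incoming stimulus through the internalized model.
        return knowledge_base.get(stimulus, "no stored interpretation")

    print(interpret("scent_of_predator"))  # -> flee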

Can it learn? Apparently, yes. Can it translate its models into symbolic format? Yes. So what we are talking about with computers might be not consciousness but something else: does a computer have any capacity for morality or ethics? Does it have the capacity to understand the difference between good and evil? I think those are the important questions.

Edwina

> On Aug 12, 2023, at 5:29 PM, Helmut Raulien  wrote:
> 

Re: [PEIRCE-L] Why vagueness is important

2023-08-12 Thread Helmut Raulien
Dear John, dear Edwina, dear all,

 

Is there a widely accepted definition of consciousness? If, like Alex, you say "My concept of consciousness would be an awareness of part of one's thoughts and ability to reason about it", then I think "awareness" is equally difficult to define, if it is not the same thing anyway. I don't think it is the delay, because a delay between stimulus and reaction occurs in computers too. Nor is the gathering of evidence by fruit flies awareness or consciousness; it is rather a purely mechanistic affair of if-then routines: stimuli rise to a certain level and, in connection with other levels of stimuli, a reaction is set off. That reads like a computer program to me.

But in Alex's quote there is a kind of iteration, if you say "representation" instead of "awareness": the representation is represented; this representation too is represented; and so on, ad infinitum. Here a representation is a (neural) depiction of a (representational) process. If there are neurons to depict the infinity of this chain of representations, then the otherwise infinite process is stopped and is itself depicted/represented. I guess this stopping requires vagueness, because you can only survey an infinity if you represent it only vaguely. Still, I doubt that this is already consciousness. A computer might be programmed this way, but I don't think it would be conscious then.

In Alex's quote there is also the term "reason". To reason about something: what is that? That is the next problem. You need a reason to reason. The computer must have needs in order to have this reason, and therefore it must have a body that has to be maintained and sustained. So I think a computer cannot be conscious; what you need is a living thing, an organism. Only organisms with a highly developed brain can be conscious or aware; computers, and even robots, cannot.
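
How such a stopped chain might look in a program can be sketched minimally; this Python illustration (the function and its cutoff depth) is an assumption, not anything proposed in the thread:

    # Each level represents (depicts) the level below it.  Instead of recursing
    # forever, everything past a few precise levels is depicted by one
    # deliberately vague summary: the 'stopping' described above.
    # (Illustrative sketch only; the cutoff depth is arbitrary.)
    def represent(level=0, max_precise_levels=3):
        if level < max_precise_levels:
            return {"level": level,
                    "represents": represent(level + 1, max_precise_levels)}
        return "vague summary of all further levels"

    print(represent())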

 

Best,

Helmut

 
 

Sent: Friday, 11 August 2023 at 22:18
From: "John F Sowa" 
To: ontolog-fo...@googlegroups.com, "Peirce List" 
Subject: [PEIRCE-L] Why vagueness is important




Re: [PEIRCE-L] Why vagueness is important

2023-08-12 Thread Edwina Taborsky
John, List

I view the two terms awareness and consciousness as aspects of Secondness and Thirdness, where, as Peirce wrote, there are “two sets of objects, what we are immediately conscious of and what we are mediately conscious of” [5.395]. Awareness would obviously function within a categorical mode of Secondness, that brute, unmediated, immediate interaction of A with B, while consciousness requires a mediated analysis, so to speak, against an internal model.

I don’t view this model as requiring a language; indeed, I’d say modelling goes on everywhere: in the chemical-physical and biological realms as well as the human societal realm [which does use symbolic/linguistic models]. This definition would thus lead one to ask: is this model transformable to one that is ‘separate’ from itself, so to speak, i.e. is the holder conscious of the model? I’d say that so far, only humans have this capacity to symbolically represent a model and thus be conscious of its form.

But I consider that all material reality requires models, which enable 
anticipation of interactions - and thus, enable decision-making, even in the 
most elementary bits of matter. Randomness is not the major force in the 
universe.

As for vagueness, I consider this a property of both Firstness [chance] and Thirdness [habit, generality; see Peirce 5.450], nothing to do with language or notation, but a property of an interaction of A with B.

I don’t consider vagueness a synonym for ‘error’ but rather the nature of a stimulus ‘outside of a determination’, so to speak. That is, the interacting stimulus/data is open to interpretation. Some interactions, for example in a mode of pure or genuine Secondness [2-2], are not open to interpretation, such as a military order, a Stop sign, or a basic chemical interaction, but the more complex ones are open.

Edwina Taborsky





> On Aug 11, 2023, at 4:18 PM, John F Sowa  wrote:
> 

[PEIRCE-L] Why vagueness is important

2023-08-11 Thread John F Sowa
Dear All,

This thread has attracted too many responses for me to save all of them.  But 
Mihai Nadin cited intriguing experimental evidence that fruit flies "think" 
before they act (copy below).  I also found a web site that says more about 
the experimental methods:  
https://www.ox.ac.uk/news/2014-05-22-fruit-flies-think-they-act . See excerpts 
at the end of this note.

Ricardo Sanz> My initial question about the difference between "consciousness" 
and "awareness" is still there.

The distinction between consciousness and awareness is very clear:  Awareness 
can be detected by experimental methods, as in the experiments with fruit 
flies.  Thinking (or some kind of mental processing) can be detected by a delay 
between stimulus and response.  But nobody has found any experimental evidence 
for consciousness, not even in humans.

We assume consciousness in our fellow humans because we all belong to the same 
species.  But we have no way to detect consciousness in humans who have 
suffered some kinds of neural impairment.  We suspect that animals that behave 
like us may be conscious, but we don't know.  And there is zero evidence that 
computer systems, whose circuitry is radically different from human brains, can 
be conscious.

Ricardo> I agree that "vagueness" is an essential, necessary aspect to be dealt 
with. But it is not the central one. The central one is "the agent models its 
reality".

Those are different topics.  A model of some subject (real or imaginary) is a 
structure of some kind (image, map, diagram, or physical system) that 
represents important aspects of that subject.  Vagueness is a property of some 
language or notation that is derived from the model.  What is central depends 
on the interests of some agent that is using the model and the language for 
some purpose.

Furthermore, vagueness is not a problem "to be dealt with".  It's a valuable 
property of natural language.  In my previous note, I mentioned three logicians 
and scientists -- Peirce, Whitehead, and Wittgenstein -- who recognized that an 
absolutely precise mathematical or logical statement is almost certain to be 
false.  But a statement that allows some degree of error (vagueness) is much 
more likely to be true and useful for communication and application.

Mathematical precision increases the probability that errors will be detected.  
When the errors are found, they can be corrected.  But if no errors are found, 
it's quite likely that nobody is using the theory for any practical purpose.

Jerry Chandler> You may wish to consider the distinctions between the 
methodology of the chemical sciences from that of mathematics and whatever the 
views of various “semantic” ontologies might project for quantification of 
grammars by algorithms.

Chemistry is an excellent example of the issues of precision and vagueness, 
and it's the one in which Peirce learned many of his lessons about experimental 
methodology.  Organic chemistry is sometimes called "the science of side 
effects" because nearly every method for producing desired molecules will also 
produce a large number of unwanted molecules.  And minor variations in the 
initial conditions may have a huge effect on the yield of the desired 
results.  Textbooks that describe the reactions tend to be vague about the 
percentages because they can vary widely as the technology is developed.

Jerry> What are the formal logical relationships between the precision of the 
atomic numbers as defined by Rutherford and logically deployed by Rutherford 
and the syntax of a “formal ontology” in this questionable form of artificial 
semantics?

For any subject of any kind, a good ontology should be developed by a 
collaboration of experts in the subject matter with experts in developing and 
using ontologies.  The quality of an ontology would depend on the expertise of 
both kinds of experts.

Doug Foxvog>  Is there some kind of model of the external world in an insect 
mind?  Sure -- the insect uses such model to find its way back "home".  But 
does the insect have a model of its own mind?  Probably not.

A Tarski-style model may be represented by predicates, functions, and names of 
things in the subject matter, plus two kinds of logical operators:  conjunction 
(AND) and the existential quantifier (there exists an x such that ...).
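
For instance (an invented formula, not from this note; the predicate and constant names are illustrative), the bee's report described earlier could be written in that conjunctive-existential form:

    % Invented illustration: a bee's environment model as an
    % existential-conjunctive (Tarski-style) formula.
    \exists x \, \exists y \, ( \mathrm{FoodSource}(x) \wedge \mathrm{Hive}(y)
        \wedge \mathrm{direction}(y, x, \mathrm{NE})
        \wedge \mathrm{distance}(y, x, 200)
        \wedge \mathrm{amount}(x, \mathrm{high}) )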

For most  applications, subject matter experts typically add images and 
diagrams.  For people, those images and diagrams make the model easier to 
understand.   For formal analysis and computing, those images and diagrams 
would  be mapped to predicates, functions, and names, which are related by 
conjunctions and existentially quantified names.

Doug> We can create an ontology of models such that "mental model" could 
designate either #$ModelOfExternalityInAMind or #$ModelOfOnesOwnMind.  These 
would be different concepts.

If you consider minds as things in the world, this reduces to the previous 
definition.  The psychologist Philip 

[PEIRCE-L] Why vagueness is important

2023-08-10 Thread John F Sowa
Alex,

The answer to your question below is related to your previous note:  "Just a 
question: do flies or bees have mental models?"

Short answer:  They behave as if they do.  Bees definitely develop a model of 
the environment, and they go back to their nest and communicate it to their 
colleagues by means of a dance that indicates (a) the direction to the source 
of food; (b) the distance; and (c) the amount available at that source.
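
A minimal sketch of that three-part message as a data structure (the field names and numbers are invented for illustration; nothing in the note specifies them):

    from dataclasses import dataclass

    @dataclass
    class DanceReport:
        direction_deg: float  # (a) direction to the source of food
        distance_m: float     # (b) distance to the source
        amount: float         # (c) amount available there (e.g. on a 0-1 scale)

    # One returning forager's report (invented numbers):
    print(DanceReport(direction_deg=45.0, distance_m=200.0, amount=0.8))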

That is very close to my definition of consciousness: "The ability to generate, 
modify, and use mental models as the basis for perception, thought, action, and 
communication."   The bees demonstrate generating and using something that 
could be called a mental model for perception, action, and communication.  The 
only question is about the amount and kind of thinking.

In the quotation by Damasio, he wrote "Ultimately consciousness allows us to 
experience maps as images, to manipulate those images, and to apply reasoning 
to them."  It's not clear how and whether the bees can "manipulate those 
images and apply reasoning to them."

Flies aren't as smart as bees.  They may have simple images that may be 
generated automatically by perception and used for action.  But flies don't  
use them for communication.

I admit that my definition is based on philosophical issues, but so is any 
mathematical version.  And the issue of vagueness is related to generality.  An 
image that can only be applied to a single pattern is not very useful.

Alex> The main question is: can we create a device (now these are autonomous 
robots) capable of studying the outside world and then itself?

The application to bees and flies can be adapted to designing devices "capable 
of studying the outside world and then itself".  Every aspect of perception, 
thinking, action, and communication is certainly relevant, and those four words 
are easier to explain and to test than the complex books that Anatoly cited.  
The most complex issues involve the definition of mental models and methods of 
thinking about them and their relationship to the world, to oneself, and to the 
future of oneself in the world.

And the issues about vagueness are extremely important to issues about 
similarity, generality, and changes in the world and oneself in the future. 
Those are fundamental issues of ontology, and every one of them involves 
vagueness or incompleteness in perception, thinking, action, and communication. 

As for mathematical precision, please note that Peirce, Whitehead, and 
Wittgenstein all had a very strong background in logic, mathematics, and 
science.   That may be why they were also very sensitive to issues about 
vagueness.  I'll also quote Lord Kelvin:  "Better an approximate answer to the 
right question than an exact answer to the wrong question."

John


From: "Alex Shkotin" 

The excerpt is very interesting and mostly philosophical.  The main question 
is: can we create a device (now these are autonomous robots) capable of 
studying the outside world and then itself?  Progress in this direction is one 
of the main topics in robotics news.  And this progress is significant.

Alex


[PEIRCE-L] Why vagueness is important (was: On the concept of consciousness)

2023-08-10 Thread John F Sowa
A recent discussion about consciousness in Ontolog Forum showed that Peirce's 
writings are still important for understanding and directing research on the 
latest issues in artificial intelligence.  The note below is my response to a 
discussion about AI research on artificial consciousness.  The quotation from 
1906 (EP 2:544) is still an excellent guide for ongoing research.

John



Alex and Ricardo,

Your notes remind me of the importance of vagueness and the limitations of 
precision in any field -- especially science, engineering, and formal ontology. 
 Rather than sessions about consciousness,  I recommend a study of vagueness.  
That is why I changed the subject line.  For a summary of the issues, see below 
for an excerpt from an article I'm writing.

Alex> So we have not only plenty of theories [of consciousness], but R&D 
implementations.  Here a situation is possible that they need no formalization 
because they use math directly.  The formalization is still possible but when 
the main knowledge is in math, the math level is responsible for accuracy.

Yes.  Plenty of theories and some implementations, but no consensus on the 
theories, and nothing useful for any theoretical or practical applications of 
ontology.

Furthermore, every formal theory is stated in some version of mathematics.  
Every version of logic -- from Aristotle to today -- is considered a branch of 
mathematics.  Formalization is always an  application of mathematics.  The 
notation used for the math is irrelevant.  Aristotle's syllogisms are the first 
version of formal logic, and he invented the first controlled natural language 
for stating them.

Ricardo> I suggest this link: 
https://en.wikipedia.org/wiki/Artificial_consciousness   It is a bit old and 
biased, but gives a gist of what is being done in the artificial systems side.

Thanks for recommending that article.  It is an excellent overview with well 
over a hundred references to theory and implementations from every point of 
view, including Google's work up to 2022.

But I would not call it "old and biased".  Although it does not include 
anything about the 2023 work on GPT and related systems, it cites Google's work 
on their foundations.  GPT systems, by themselves, do not do anything related 
to consciousness.

Ricardo, quoting from a note by JFS> The sentence "Any time wasted on 
discussing consciousness would have no practical value for any applications of 
ontology." sounds a bit disrespectful for the people that wrote the 100,500 
books about consciousness that Anatoly mentioned.

Please read what I wrote above.  I show a high respect for the ongoing research 
and publications.  But I make the point that none of that work is relevant to 
the theory and applications of ontology.

Following is an excerpt from an article I'm writing.  Note the term 'mental 
model'.  I propose the following definition of consciousness:  the ability to 
generate, modify, and use mental models as the basis for perception, thought, 
action, and communication.  That definition is sufficiently vague to include 
normal uses of the word 'consciousness'.  It can also serve as a guideline for 
more detailed research and applications.  It could even be used to define 
artificial consciousness if and when any AI systems could "generate, modify, 
and use mental models as the basis for perception, thought, action, and 
communication."

John
__

Excerpt from a forthcoming article by J. F. Sowa:

Natural languages can be as precise as a formal language or as vague as 
necessary for planning and negotiating.  The precision of a formal language is 
determined by its form or syntax together with the meaning of its components.  
But natural languages are informal because the precise meaning of a word or 
sentence depends on the situation in which it’s spoken, the background 
knowledge of the speaker, and the speaker’s assumptions about the background 
knowledge of the listeners. Since no one has perfect knowledge of anyone else’s 
background, communication is an error-prone process that requires frequent 
questions and explanations.  Precision and clarity are the goal, not the 
starting point.  Whitehead (1937) aptly summarized this point:
    Human knowledge is a process of approximation.  In the focus of experience, 
    there is comparative clarity.  But the discrimination of this clarity leads 
    into the penumbral background.  There are always questions left over.  The 
    problem is to discriminate exactly what we know vaguely.

A novel theory of semantics, influenced by Wittgenstein’s language games and 
related developments in cognitive science, is the dynamic construal of meaning 
(DCM) proposed by Cruse (2002).  The basic assumption of DCM is that the most 
stable aspect of a word is its spoken or written sign; its meaning is unstable 
and dynamically evolving as it is used in different contexts or language games. 
Cruse coin