I don't mean to be rude, but this is just nonsense. Your system uses a 
textual-input sensor to trigger a response from the reasoning engine. In this 
case, the sensor is a data field, which requires a human or a bot to populate 
it. The system is seemingly not even aware that it has that sensor, nor of 
that sensor's constraints.

What you're describing is not what neurophysiology would accept as 
consciousness at all. For consciousness even to begin, your machine needs at 
least to recognize that it has a sensor and to try to show some 
"understanding" of how that sensor relates to the rest of its field of 
reality. It needs orientation.
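
To make the distinction concrete, here is a minimal sketch (Python, with 
hypothetical class and field names; this is an illustration, not a description 
of your actual system). The first class is the input-to-entailment pipeline 
you are describing: nowhere in its state does its own sensor appear. The 
second adds a crude self-model of the sensor and its constraints, which is the 
bare minimum of what I mean by orientation:

class EntailmentBot:
    """Input in, entailment out; no model of its own sensor."""
    def respond(self, text: str) -> str:
        # Stand-in for the reasoning engine.
        return f"entailment({text})"

class OrientedBot(EntailmentBot):
    """Additionally holds a (crude) model of the sensor it reads from."""
    def __init__(self):
        # Hypothetical self-model: what the sensor is and what constrains it.
        self.sensor_model = {
            "kind": "text field",
            "filled_by": "human or bot",  # it never samples the world itself
            "constraints": ["discrete", "symbolic", "externally triggered"],
        }

    def respond(self, text: str) -> str:
        # Relate the input to its own means of sensing before reasoning.
        kind = self.sensor_model["kind"]
        src = self.sensor_model["filled_by"]
        return f"via my {kind} (filled by {src}): " + super().respond(text)

A dict is obviously not orientation either; the point is only that the first 
class has nowhere its own sensor could even appear, and that gap is what I am 
pointing at.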

________________________________
From: [email protected] <[email protected]>
Sent: Saturday, 24 August 2019 09:39
To: AGI <[email protected]>
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

Sentiment detection, human-written detection, danger detection: there are 
infinitely many detections. The system must recognize the input and say what 
it entails, using its knowledge. The "concept" of who it speaks to comes from 
feeding input in and getting entailment out. That is "awareness" and 
"consciousness".
