Actually, exploring this further: human thinking is fundamentally different 
from the computational kind assumed in most AGI conceptions, because it is 
massively and structurally metacognitive and self-examining (which comes under 
being a machine that works by "self-control").

Interestingly, Minsky's model of mind in The Emotion Machine includes this with 
three levels above "Deliberative Thinking":

Reflective Thinking
Self-Reflective Thinking
Self-Conscious Reflection

We don't just think about a problem; we simultaneously think about how we think 
about it, and consciously manage and make decisions about that thinking. We ask 
ourselves questions like:

- How long should we think about it?
- Should we follow our intuitions?
- Do we need examples?
- Should we visualise?
- Should we follow our feelings of confusion?
- Should we articulate our thoughts clearly and slowly, or just let them whizz 
along, half-articulated?
- How would so-and-so handle it?
- Should we examine that part of the problem, or will it take too long?
- Should we check the evidence?
- Should we give up, or compromise?
- Should we read a book for ideas, or consult a dictionary/thesaurus?

Such questions are all parts of our inner thinking dialogue.
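The kind of meta-level control the questions above describe can be caricatured in code. A toy sketch (every name here is hypothetical, not from any real AGI system): an object-level set of "ways to think" plus a meta-level loop that decides how long to think, which strategy to try next, and when an answer is good enough to stop or give up.

```python
import time

def solve(problem, strategies, budget_s=1.0, confidence_target=0.9):
    """Meta-level loop: try 'ways to think' in turn, managing the thinking itself."""
    deadline = time.monotonic() + budget_s
    best = (0.0, None)  # (confidence, answer); (0.0, None) means "give up"
    for strategy in strategies:
        if time.monotonic() >= deadline:        # "how long should we think about it?"
            break
        confidence, answer = strategy(problem)  # "should we follow our intuitions? visualise?"
        if confidence > best[0]:
            best = (confidence, answer)         # keep the best compromise so far
        if best[0] >= confidence_target:        # good enough: stop deliberating
            break
    return best

# Two stand-in "ways to think" (purely illustrative):
def intuition(p):
    return (0.6, p.lower())                     # fast, only moderately confident

def deliberate(p):
    return (0.95, p.lower().strip())            # slower, more careful

conf, ans = solve("  PUZZLE  ", [intuition, deliberate])
```

The point of the sketch is only that the control decisions sit outside the object-level strategies; nothing here captures the open-ended, self-questioning character of the human version.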

As Minsky says, we have many ways to think, and we consciously choose from among 
them; as a result, different people devote very different amounts of time and 
resources to thinking at different times. But Minsky wants to make all this an 
automatic process, and it can't be: how you think about problematic problems is 
itself fundamentally problematic, which is why thinking is such a hesitant 
business.
  David Hart / MT: Is anyone trying to design a self-exploring robot or 
computer? Does this principle have a name?

  Interestingly, some views on AI advocate specifically prohibiting 
self-awareness and self-exploration as a precaution against the development of 
unfriendly AI. In my opinion, these views erroneously transfer familiar human 
motives onto 'alien' AGI cognitive architectures; there is a history of 
discussion of this topic on SL4 and elsewhere.

  I believe, however, that most approaches to designing AGI (those that do not 
specifically prohibit self-aware and self-explorative behaviors) take for 
granted, and indeed intentionally promote, self-awareness and self-exploration 
at most stages of AGI development. In other words, efficient and effective 
recursive self-improvement (RSI) requires self-awareness and self-exploration. 
If any term exists to describe a 'self-exploring robot or computer', that term 
is RSI. Coining a lesser term for 'self-exploring AI' may be useful in some 
proto-AGI contexts, but I suspect that 'RSI' is ultimately a more useful and 
meaningful term.

  -dave

