I don't think rational vs. irrational really maps to conscious vs.
subconscious. A rote behavior can still be an intelligent/rational one, but
it is intelligence that has been automated. If consciousness is access to
information by the part of the brain which handles the unexpected, then we
should expect to be less conscious of habitualized tasks than of
novel/creative ones. This reflects what I think is a clear distinction in
the architecture of the mind between fluid and crystallized intelligence
(as Todor already pointed out).

From this perspective, the conscious mind has the specific purpose of
dealing with novelty, and the subconscious mind has the specific purpose of
dealing with familiarity. And so when we look to building an AGI, we should
expect to build a "subconscious" subsystem which handles familiar
situations via feature-based lookup of an appropriate decision procedure,
and a "conscious" subsystem which manages the "subconscious" subsystem by
using situational analysis to add dynamically generated new decision
procedures or improve existing decision procedures. The "conscious"
subsystem acts as a default handler of last resort for situations that
aren't already covered by the "subconscious" subsystem, and is far more
general in its capabilities, at the expense of being much slower and less
reliable in its worst-case performance. (I am reminded of the classic
exploration/exploitation dichotomy in reinforcement learning algorithms.)
Both subsystems, however, are mostly rational, and when either fails we
call it irrationality.
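
If that division of labour is right, a first approximation is easy to sketch. The toy code below is only an illustration under my own assumptions - the class, the feature extraction, and the cached-procedure scheme are hypothetical, not any existing system:

```python
# Toy sketch of the two-subsystem dispatch described above.
# All names are illustrative inventions, not a real architecture.

class Agent:
    def __init__(self):
        # "Subconscious": fast lookup from situation features to a
        # cached decision procedure (crystallized intelligence).
        self.procedures = {}

    def features(self, situation):
        # Crude feature extraction: here, just the situation's category.
        return situation["category"]

    def act(self, situation):
        proc = self.procedures.get(self.features(situation))
        if proc is not None:
            return proc(situation)          # fast, habitual path
        return self.deliberate(situation)   # slow fallback: "conscious"

    def deliberate(self, situation):
        # Slow, general situational analysis; here it just synthesizes a
        # trivial procedure and caches it for next time ("habitualization").
        response = "handled-" + situation["category"]
        self.procedures[self.features(situation)] = lambda s: response
        return response
```

The first encounter with a situation falls through to the slow `deliberate()` path, which also installs a cached procedure; an identical second situation is then handled by the fast lookup - the habitualization of novel behavior described above.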


On Sun, Dec 2, 2012 at 7:12 AM, Todor Arnaudov <[email protected]> wrote:

> Hi Ben, Mike,
>
> Ben> Whoa... that's a kinda surreal perspective!!!
> Ben> The unconscious is rational and algorithmic?? That would be a big
> surprise to the psychiatry community ;O ;D ...
> Ben> Mike T, thanks for being baffling and silly in a different way than
> your usual; that brightened up a foggy, rainy Hong Kong morning for me ;)
>
> On Sun, Dec 2, 2012 at 8:42 AM, Mike Tintner <[email protected]>
>  wrote:
>
>> Yes. Put more simply: the conscious mind supervises creative
>> thinking - that which "we don't know how to do", pace Piaget, and
>> which is non-algorithmic - and the unconscious mind is in charge of
>> routine (basically rational), algorithmic thinking, which we do already
>> know how to do. And that's the essential architecture of a mildly evolved
>> AGI or lower organism - a neat, fairly obvious division of labour.
>>
>>
>
>
> Ben, in fact, yes, the "unconscious" mind is *obviously* in charge of
> routine thinking, and it is also "rational", at least by my definitions,
> which I wrote about in the publications I linked in the previous email.
> I'll cite a part of one at the end of this message. As for supervision,
> every higher level supervises the one below; consciousness is simply at
> the top. [If you mean not just subconscious but *sub-cortical* - well,
> those systems work all the time anyway, and they are quite rational; their
> cognitive capacity is just much smaller. Fight-or-flight is rational
> (goal-directed), the reaction to pain is also very reasonable, and
> conditioned fears such as in the "Little Albert" experiment, and other
> phobias, were also rational and right in their own terms, from the
> perspective of the time when they were created; i.e., once it's known how
> those "irrational" phobias started, they stop being "irrational". Sometimes
> the causes are *unknown or untraceable* by the observer or by the person,
> but that doesn't mean non-algorithmic or non-goal-directed.]
>
> You play the keyboard, don't you?
>
> Hawkins made this hypothesis in "On Intelligence", and I had made it
> before his book, in a publication, since to me it's an obvious
> observation: practice/mastery of a particular domain moves it from the
> higher cognitive levels (and consciousness) down to the lower ones, i.e.
> it gets less "conscious", i.e. it goes into *crystallized* intelligence.
> I think that's a well-proven fact from experiments comparing chess
> players' and other experts' vs. non-experts' PFC patterns - if I recall
> correctly, the brain of the experienced player uses a "bank" of cases and
> doesn't fire much, while the brain of the inexperienced one has to search
> from scratch each time and "explodes" - it doesn't know what to do, and
> that search is supposed to be "conscious", i.e. exploration, while the
> expert "just knows", i.e. he moves "intuitively". Calling that
> "subconscious" or "irrational" is nonsense: that expertise, or its
> precursors, was conscious and goal-oriented in the past; it has since
> been processed and stored for fast access.
>
> Other well-known examples are the evolution of skills in riding a bike,
> driving a car, juggling, or playing musical instruments.
>
> The better an improviser you become on guitar or piano, the more
> unconscious you become while improvising - it gets more and more
> "automatic". To non-qualified observers that doesn't mean "uncreative":
> improvisation is stamped as "creative", and it still is, but the
> top-level parts are too slow to cope with it; they see it with a delay.
>
> The most untalented player needs the most concentration and is the
> slowest, because he has to rethink every tiny move with his slowest and
> clumsiest circuits. The same goes for playing computer games - the best
> play, especially in FPS games, often comes when you *stop* thinking and
> let your "subconscious" work, i.e. when you leave the *faster* circuits
> to deal with the problems, which are simple.
>
> That is, activities requiring higher intelligence and higher talent make
> humans look more "unconscious" and less in control of their actions; you
> also can't think them through in real time - it all gets too fast for
> consciousness to keep up.
>
> And that is how I concluded that consciousness is apparently required
> neither for intelligence nor for creativity, which in my theory are part
> of the same procedures. There are other examples, too.
>
>
> Mike> "Creativity is a whole different culture to that of rationality."
>
> It's not - it's the same culture, if you do both, like me. The answer to
> this is even in my works from 2002-2003-2004, but ignorance of others'
> work is widespread.
>
> Science = Art + Fun ;)
>
> http://3.bp.blogspot.com/_yeqjAlu3lyQ/SmIjetZgJII/AAAAAAAAAbY/cKQEDUNy3uE/s1600/noshtni_oblojka_7_900.jpg
>
>
> Recently I found a paper which presents my decade-older theses and
> claims about the same process operating in science and in art, in a
> "scientific" packaging; I'll probably pack my response in a paper, too.
>
>
> P.S. Regarding "rationality", the "irrational subconsciousness", and the
> psychiatrists (who apparently have missed the classes in neurobiology,
> which in the time of Freud just didn't exist), here is an excerpt from
> the publication at:
>
> Nature or Nurture: Socialization, Social Pressure, Reinforcement Learning,
> Reward Systems: Current Virtual Self - No Intrinsic Integral Self, but an
> Integral of Infinitesimal Local Selfs - Irrational Intentional Actions Are
> Impossible - Akrasia is Confused - Hypothesis about Socialization and
> Eye-Contact as an Oxytocine Source:
> http://artificial-mind.blogspot.com/2012/11/nature-or-nurture-socialization-social.html
>
>
> *"Akrasia" as doing something "against one's own good will" is Confused*
>
> As for "akrasia", I'd partially challenge the concept. IMO the
> philosophical confusion comes from a lack of physiological knowledge,
> wrong assumptions, and overgeneralization. There's no integral self,
> i.e. the brain is not an integral system.
>
> It self-organizes and integrates its parts, because they are connected
> to each other, but this happens at the expense of "bugs" and apparently
> "irrational" behavior, because the brain was not created all at once and
> those integrations and their effects were not planned.
>
> The body and repeated sensations of self integrate a "self" from the POV
> of the prefrontal cortex, and of an external observer. However, there are
> many competing subsystems patched over each other; the highest-level
> "executive function" is strongly influenced by and entangled with older
> systems, which creates a mess of mechanisms and motivations. The
> limitations of the body's actuators (and of the basal ganglia) reduce the
> possible physical actions and make the body appear to have an integral
> personality/mind/soul.
>
> Philosophers who search for a global, valid-all-the-time,
> non-contradictory integral "will", "morality", or "good will" covering
> all possible cases face those paradoxes of "doing something against one's
> better judgment" (as cited in the Wikipedia article).
>
> *Integral of Infinitesimal Local Selfs over a given Period...*
>
> *Current Virtual Self - A Snapshot of the Virtual Simulators in the Brain*
>
> I've discussed in (see... 2002, 2003, 2004, Analysis of the sense...)
> that if you do something intentionally - that is, without your hands
> being pulled by a wire from another explicit causality-control unit (an
> agent), and without another agent forcing you with a loaded gun, etc. -
> then that is what your current virtual self/"will" has chosen as the best
> action, given the experience and the possibilities it understands, and
> given the time-span and rewards that it sees from its own perspective at
> this very specific moment of decision/action, computed for a selected
> time-period, etc. That self is virtual and "exists" at the moment of
> acting, e.g. moving your hand, grasping something, etc. In the next
> moment there might be another virtual self with other goals and
> motivations, valid for that next moment, which might be "inconsistent"
> with past or future ones, because the underlying model is hidden under
> the skull and in the long history of experience.
>
> *An analogy can be an Integral of Infinitesimal Local Selfs - in
> Calculus terms, a Calculus of Self...*
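>
> In this notation (just a sketch, my own formalization of the analogy):
> if s(t) denotes the momentary "virtual self" - the locally rational
> decision policy active at instant t - then what an observer attributes
> as the person's "self" over a period [t_0, t_1] is something like
>
>   Self(t_0, t_1) = \int_{t_0}^{t_1} s(t) \, dt
>
> i.e. an accumulation of many locally rational, possibly mutually
> inconsistent policies, rather than one constant policy valid for all
> cases.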
>
> Sometimes, in some cases and situations, different current virtual selfs
> match and appear stable, because the set of possible actions is limited,
> and because the brain has stable parts and configurations as well (at a
> certain resolution). *But the point is that "irrational" and
> "inconsistent" actions are not really such. I claimed in those papers,
> and still claim, that "irrational voluntary action" is nonsense.*
> *If something seems "irrational", that means the observer hasn't
> recognized the correct agent, the correct "rationality", or both, or
> hasn't observed with sufficient resolution to predict it right. The
> concept of "rational" (as "consistent") is confused and primitive.*
>
> Due to the mess in the human cognitive and physical reward system*,
> moral values can change all the time - and "good or bad" too -
> especially for something "abstract", i.e. not directly linked to the
> feeling of dopamine, oxytocin, etc., which can have a very fast effect.
>
> Some philosophers don't get this and treat the self as a constant - but
> a constant self, like the derivative of a constant, yields 0: it
> explains no change at all.
>
> The brain is not abstract and constant; it's more like a complex
> (complicated) function - it has specific needs at specific moments,
> caused by specific sensations stored now or ten years ago in specific
> circumstances, etc., which are associated with specific physical
> sensations ("gut feelings", projected eventually to the insular
> cortex**).
>
> The brain constructs generalizations out of those specific experiences,
> but there's a lot of noise and variation; moreover, working and
> short-term memory (recent activities and experiences), the environment
> of every precise moment, and declarative/autobiographical memory contain
> many specifics, which can be recalled internally in a sequence that
> seems "random" to an external observer, while it may have its very
> specific reasons, grounded in experience.
>
> Such an observer - who wrongly assumes "rationality" to be what he
> believes is "good", "best", etc., rather than what's best by the agent's
> own estimate - wrongly concludes that if somebody breaks his apparently
> wrong model, that somebody acts against "his good will". *No: the agent
> acts against the WRONG model, following its own will. If an agent does
> something "against his will", then that's not his will.*
>
> "Will" is considered something abstract and independent of the body -
> e.g. "you want to quit smoking, but you don't, therefore you have a weak
> will". In fact, yes, it is separated from the body, in that the decisions
> may be initiated by the PFC and the statements of will might be just
> words, while the real non-verbal actions are driven by lower dopamine
> shortcuts, such as nicotine addiction.
>
>
> * We've discussed this on the AGI List, see also below
> ** See also Damasio's works
>
>
> Akrasia, as in "watching too much TV while realizing it's a waste of
> time" or "eating too much and not doing sports, knowing that it causes
> obesity" - in my opinion there are simple reasons for it, and I don't
> think those reasons were much different in the past.
>
> Did average people 100 years ago study Vector Calculus or Maxwell's
> Equations, or construct cathode-ray tubes or radio equipment, or study
> all kinds of sciences in order to make new inventions - instead of just
> going to the pub, theater, or cinema, chatting, flirting, and reading
> newspaper articles about crimes and random news from around the world?
>
> The reason they didn't, and why they preferred simpler "social"
> activities, is that intellectual activities require a cognitive profile
> and capacities that only a small minority of the population has, and
> long-term goals are hard for the mind even for the gifted. One reason is
> that the relation to the present - or to the future present - is
> questionable and unclear, as noted in the famous Einstein quote about
> people who love chopping wood because they can see the results right
> away.
>
> In physiological terms, there are dopamine shortcuts - or we may call
> them short circuits; humans *are* "wireheads" - which make long-term
> activities harder, to the benefit of short-term ones.
>
> There are easier, simpler, and cheaper short-term activities providing
> the desired "drugs" - so why shoot for something long-term and uncertain?
>
> *Long-term activities have to have some kind of immediate, measurable
> effect in order to keep one's interest and compete with the activities
> that provide feedback immediately. Hence some of the effects of the
> clumsy AI/NLP and other fields in academia, where small, incremental,
> "completely provable right now, with no delay" results must be
> presented, even if they are globally very vague or meaningless.*
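
The "dopamine shortcut" arithmetic can be made concrete with a toy sketch. This is my own illustration, not anything from the thread: the hyperbolic-discount form is one standard model of delay discounting, and the parameter k and reward values are made-up assumptions:

```python
# Toy illustration of delay discounting ("dopamine shortcuts").
# Hyperbolic form and all numbers are assumptions for illustration only.

def discounted_value(reward, delay_days, k=1.0):
    """Hyperbolic discounting: value = reward / (1 + k * delay)."""
    return reward / (1.0 + k * delay_days)

# A small reward available right now vs. a 20x larger reward a year away:
now = discounted_value(reward=5, delay_days=0)        # 5.0
later = discounted_value(reward=100, delay_days=365)  # ~0.27

# The immediate activity wins the competition for attention, which is
# the sense in which short circuits crowd out long-term goals.
assert now > later
```

Under this model, the long-term activity can only compete if it produces some intermediate reward at a short delay - which matches the point above about needing immediately measurable effects.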
>
> It's also an illustration of how bad and weak the human brain's
> executive function is, and how pathetic working memory can be - that's
> one reason we need to take notes and pin to-do lists on the wall.
>
> (...)
>
>
> --
> .... Todor "Tosh" Arnaudov ....
> .... Twenkid Research: http://research.twenkid.com
>
> .... Self-Improving General Intelligence Conference:
> http://artificial-mind.blogspot.com/2012/07/news-sigi-2012-1-first-sigi-agi.html
>
> .... Todor Arnaudov's Researches Blog:
> http://artificial-mind.blogspot.com
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now