The basis for AGI should be machine consciousness. Probability, or the absence of it, is just one reasoning tool; so are abstraction and deabstraction. I think we are getting hung up on the small stuff.

To progress: Here is an overview of a suggested programmable architecture for 
machine consciousness.

Comments and debate, grounded in logic or in any AI research-related knowledge domain, are welcome.

Assumptions:
1) Consciousness flows from a single, consciousness-evolving platform.

2) The platform is Möbius-ringed by a seamlessly integrated abstraction layer containing the means for contextual classification.

3) The platform is Möbius-ringed by a seamlessly integrated deabstraction platform for N inputs.

4) Other functional Möbius rings exist as well.
5) The brain is given a problem to solve. It accesses the consciousness 
platform to trigger a solution context, like an invisible balloon of 
consciousness, which emerges from the seamlessly integrated, platformed 
consciousness assembly as an invisible form of reality. That consciousness 
could be translated into an emotional schema as part of the sensory system, 
and thereafter contextually classified. The emerging-consciousness process is 
generic and never-ending for as long as energy exists to power the 
consciousness architecture and the brain. It is fully recursive, linear and 
alinear, yet open- and closed-loop simultaneously. It is governed by a 
personalized thematic hierarchy (which could be the sum of the applied 
learning, structure, and priorities of the entity's net essence).
6) When in a state of unconsciousness, the connection between the brain and the 
consciousness architecture is terminated. The balloon of emerging consciousness 
(frontal lobe) simply does not activate.
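The assumptions above can be caricatured in code. What follows is a minimal, purely illustrative Python sketch — every class and name (ConsciousnessPlatform, SolutionContext, the example themes) is hypothetical, invented here to make assumptions 1 through 6 concrete: a single platform, a deabstraction ring for N inputs, an abstraction ring doing contextual classification, a solution context selected by a personalized thematic hierarchy, and a connection that can be severed (unconsciousness).

```python
from dataclasses import dataclass

@dataclass
class SolutionContext:
    """The 'balloon of consciousness' triggered for a problem (assumption 5)."""
    problem: str
    classification: str
    priority: int

class ConsciousnessPlatform:
    """Hypothetical sketch: one platform ringed by an abstraction layer
    (contextual classification) and a deabstraction layer (N inputs)."""

    def __init__(self, thematic_hierarchy):
        # thematic hierarchy: theme -> priority; lower number = higher priority
        self.thematic_hierarchy = thematic_hierarchy
        self.connected = True  # assumption 6: severed when unconscious

    def deabstract(self, raw_inputs):
        # Deabstraction ring: normalize N raw inputs into comparable signals.
        return [str(x).lower() for x in raw_inputs]

    def classify(self, signal):
        # Abstraction ring: contextual classification against known themes.
        for theme in self.thematic_hierarchy:
            if theme in signal:
                return theme
        return "unclassified"

    def emerge(self, problem, raw_inputs):
        # Assumption 5: the brain poses a problem; a solution context emerges,
        # ranked by the personalized thematic hierarchy.
        if not self.connected:
            return None  # unconscious: the balloon does not activate
        signals = self.deabstract(raw_inputs)
        contexts = [
            SolutionContext(problem, theme,
                            self.thematic_hierarchy.get(theme, 99))
            for theme in map(self.classify, signals)
        ]
        return min(contexts, key=lambda c: c.priority)

platform = ConsciousnessPlatform({"danger": 0, "food": 1})
ctx = platform.emerge("what now?", ["DANGER ahead", "food nearby"])
print(ctx.classification)  # "danger" outranks "food" in this hierarchy
```

This is of course nothing like consciousness — it only fixes the vocabulary of the proposal (rings, emergence, hierarchy, disconnection) in an executable form so it can be debated precisely.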

One must imagine an architecture which is always on and seamlessly integrated 
in multi-dimensional modes. Each component of the overall architecture would be 
driven by its own array of parallel processors.
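As a sketch of "each component driven by its own array of parallel processors," here is one hypothetical way to model it in Python, using a thread pool as a stand-in for a component's private processor array (the Component class and its names are assumptions of this sketch, not part of the proposal):

```python
from concurrent.futures import ThreadPoolExecutor

class Component:
    """Hypothetical architectural component that is 'always on' and owns
    its own pool of parallel workers for its lifetime."""

    def __init__(self, name, workers=4):
        self.name = name
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def process(self, items, fn):
        # Fan the work across the component's private processor array;
        # results come back in input order.
        return list(self.pool.map(fn, items))

abstraction = Component("abstraction_ring")
results = abstraction.process([1, 2, 3], lambda x: x * x)
print(results)  # [1, 4, 9]
```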

There would exist superprocessor arrays to manage signalling, connectivity, 
routing, redundancy, and messaging of the consciousness architectural module.
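The superprocessor layer could be sketched as a supervisory router that handles the signalling, routing, redundancy, and messaging named above. The class below is an assumption-laden toy, not a design: routing is a lookup table, messaging is an inbox append, and redundancy is a mirrored delivery to a standby component.

```python
from collections import defaultdict

class SuperprocessorRouter:
    """Hypothetical supervisory layer: routes messages between component
    modules and mirrors each message to a redundant standby."""

    def __init__(self):
        self.routes = {}                    # message kind -> primary component
        self.standby = {}                   # primary -> redundant mirror
        self.inboxes = defaultdict(list)    # component -> received messages

    def register(self, kind, primary, mirror=None):
        self.routes[kind] = primary
        if mirror:
            self.standby[primary] = mirror

    def send(self, kind, payload):
        primary = self.routes[kind]             # routing
        self.inboxes[primary].append(payload)   # messaging
        mirror = self.standby.get(primary)
        if mirror:                              # redundancy
            self.inboxes[mirror].append(payload)
        return primary

router = SuperprocessorRouter()
router.register("sensory", "abstraction_ring", mirror="abstraction_backup")
router.register("motor", "deabstraction_ring")
router.send("sensory", {"signal": "light"})
print(len(router.inboxes["abstraction_backup"]))  # 1: the mirrored copy
```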


Rob Benjamin

________________________________
From: Jim Bromer <jimbro...@gmail.com>
Sent: 13 April 2017 05:51 PM
To: AGI
Subject: Re: [agi] I Still Do Not Believe That Probability Is a Good Basis for 
AGI

All components of thought are abstractions.

AI programs do not seem capable of much reflection on their own thinking. This 
shows a general lack of ability to form effective abstractions (that will then 
be subsequently useful), which means they are unable to think about things in 
the way that we seem able to. Ancient philosophy suggests that these 
abstractions need to be discovered.
