Consciousness in humans has been localized, at least in part, to a region of
the brain known as the claustrum.  If you electrically stimulate this region,
humans lose consciousness, and when the stimulation stops they regain
consciousness.  This doesn't mean that this region is ALL of consciousness,
but rather that it is a necessary part of consciousness.

In fact there are other parts of the brain which are responsible for
processing particular kinds of information, such as vision, hearing, or
touch, the loss of which results in an inability to recall memories of, or
consciously imagine, the associated kinds of things.  That is to say, if you
destroy (say) the part of the brain that is associated with the processing
of faces, then people will lose the ability to recall (remember) or imagine
the faces of other people.  You can show a man with such brain damage a
picture of his wife, and he may very well mistake her for something other
than she is (e.g. a hat).  Yet such a person is entirely able to function
perfectly well in just about all other aspects of life.

It would seem to me that a biologically plausible conceptualization of
consciousness is that the various kinds of specialized processing in the
brain produce high-level representations of things, all of which are
aggregated in the claustrum.  So if you were to have a conscious experience
of (say) a person, the part of your brain that specializes in faces would
generate a representation of that person and send it out to the claustrum;
parts of the auditory cortex would be stimulated to produce representations
of the sound of this person's voice, which would be sent to the claustrum;
and parts of the brain which specialize in gender, age, height, etc. would
be activated by the retrieval cue of this person being recalled, and the
associated representations would all be shipped to the claustrum, where
they are fused into that thing we call our conscious experience.
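As a loose computational analogy (not a claim about actual neural coding), this aggregation step resembles multimodal feature fusion: independent "specialist" modules each emit a representation vector, and a central step combines them into one joint representation. All module names and vector sizes below are illustrative assumptions.

```python
import random

def face_module(stimulus_id):
    # Deterministic stand-in for the face-processing area's representation.
    r = random.Random(stimulus_id)
    return [r.gauss(0, 1) for _ in range(8)]

def voice_module(stimulus_id):
    # Stand-in for auditory cortex representations of the person's voice.
    r = random.Random(stimulus_id + 1000)
    return [r.gauss(0, 1) for _ in range(8)]

def attribute_module(stimulus_id):
    # Stand-in for areas encoding gender, age, height, etc.
    r = random.Random(stimulus_id + 2000)
    return [r.gauss(0, 1) for _ in range(8)]

def claustrum_fuse(representations):
    # Simplest possible fusion: concatenate the specialist outputs into
    # one joint representation (the "conscious experience" in this analogy).
    fused = []
    for rep in representations:
        fused.extend(rep)
    return fused

person = 42  # a retrieval cue identifying one remembered person
experience = claustrum_fuse([face_module(person),
                             voice_module(person),
                             attribute_module(person)])
print(len(experience))  # prints 24: one fused vector from three streams
```

Concatenation is of course the crudest choice of fusion; the whole question of what binding operation the claustrum actually performs is exactly what the reverse-engineering program below would have to answer.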


If consciousness does indeed occur in the claustrum, then there are several
important ramifications.

1.  If you wished to create a machine representation of consciousness, it
would seem that one ought to:

a. Put electrodes into this part of the brain and try to reverse engineer
what kinds of mental activities do and do not stimulate various parts of
it.  This would be quite similar to how an electrical engineer would go
about reverse engineering a computer component they do not understand.
b. Examine the neural wiring diagram of this part of the brain by examining
its connectome, i.e. looking at very thin slices of this part of the brain
to determine what kinds of neurons are there (there are many different
kinds of neurons), how the neurons are wired together, and, more
importantly, what kinds of algorithms this wiring necessitates.  This would
be similar to another method electrical engineers use to reverse engineer
CPUs.
c. Remove parts of the claustrum, and see if you can't disable parts of
consciousness, while still retaining others.  This could enable you to
generate a map of the different kinds of functions the claustrum performs
and how they are organized.
d. Establish an array of electrodes that reads the input neural activity of
the claustrum, and another array of electrodes that reads its output neural
activity, and run a neural net algorithm where the fitness function is its
ability to predict the output of the claustrum array, given the signals
from the input array.
e. Prove that your model is accurate by replacing parts of (or the entire)
claustrum with a neural prosthetic which allows the organism to maintain
consciousness (i.e. replace the biological part of the brain with a
microchip which reads in the neural signaling from the inputs and emits
neural signals just as the claustrum would), and have the organism
demonstrate consciousness just as it would without the prosthetic.
f. Use this claustrum-inspired neural net algorithm as your AGI
consciousness.

QED

2.  We would have a means of determining whether other (lesser) organisms
are conscious, i.e. whether they have a claustrum or not.
3.  Since the claustrum lies deep in the brain, and not in the neocortex,
this would suggest that many animals (even those without a neocortex) are
conscious.
4.  We would be better able to classify brain death, by measuring activity
in the claustrum.
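Step 1d above, fitting a model whose fitness is its ability to predict the claustrum's output array from its input array, can be sketched with synthetic data. Everything here is an illustrative assumption: the "claustrum" is simulated as a fixed linear map, and the predictor is a linear model trained by gradient descent on mean squared error, rather than a full neural net on real electrode recordings.

```python
import random

random.seed(0)
N_IN, N_OUT, N_SAMPLES = 4, 3, 200

# Hypothetical ground truth: the unknown input->output transform of the
# claustrum, simulated here as a fixed linear map for illustration.
true_W = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_OUT)]

def claustrum_response(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in true_W]

# "Recorded" electrode data: input patterns and the outputs they evoked.
inputs = [[random.uniform(-1, 1) for _ in range(N_IN)]
          for _ in range(N_SAMPLES)]
outputs = [claustrum_response(x) for x in inputs]

# Fit a model by gradient descent; the fitness function is the mean
# squared error between predicted and recorded outputs.
W = [[0.0] * N_IN for _ in range(N_OUT)]
lr = 0.1
for _ in range(500):
    grad = [[0.0] * N_IN for _ in range(N_OUT)]
    for x, y in zip(inputs, outputs):
        pred = [sum(w * xi for w, xi in zip(row, x)) for row in W]
        for i in range(N_OUT):
            err = pred[i] - y[i]
            for j in range(N_IN):
                grad[i][j] += err * x[j]
    for i in range(N_OUT):
        for j in range(N_IN):
            W[i][j] -= lr * grad[i][j] / N_SAMPLES

# Check that the fitted model predicts a novel input's response well;
# step 1e's prosthetic test is the biological analogue of this check.
x_test = [0.5, -0.2, 0.1, 0.9]
pred = [sum(w * xi for w, xi in zip(row, x_test)) for row in W]
target = claustrum_response(x_test)
mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / N_OUT
print(mse < 1e-4)
```

The real system is certainly not linear, but the same loop structure applies: record paired input/output activity, train any predictor against it, and validate on inputs the model has never seen.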

On Mon, Feb 20, 2017 at 6:05 AM, Nanograte Knowledge Technologies <
[email protected]> wrote:

> I understand that AGI specifically pertains to machine intelligence
> copying human functionality. Further, I understand how XAI might be
> something way beyond AGI, specific to tracing and predicting
> highly-abstract decision making. Last, I understand that the obsession with
> studying the human brain, as most-suitable model to inform AGI with,
> seemingly makes sense.
>
> What I do not understand is the assumption that the human brain is the
> only feasible model of intelligence that would inform future AI. This is an
> assumption worth testing. For example, consider toxoplasma and latest
> findings on its probable impact on the human brain. Further, consider the
> effects of electromagnetic waves and microwaves on brain functioning. Last,
> consider a shock to the brain.
>
> All these events may seriously impact negatively on the reliability of the
> brain to serve the survival of its host. In an evolutionary sense, it would
> to a degree short-circuit the survivability of the affected host and all
> its direct descendants after the fact. We may conclude how any one of these
> events may negatively affect general intelligence.
>
> Why would science and commerce trust an instrument of intelligence that
> has critical, inherent flaws and vulnerabilities? Would we really want
> machine intelligence to be equally fallible?
>
> Suppose then the original assumption was incorrect? What are the other
> options for naturally-selected models of intelligence?  I'm excluding
> nurture from this argument, for as a standard operating model of
> intelligence, I think that too is seriously flawed.
>
> Your thoughts and comments would be appreciated.
>
> Robert Benjamin
>
> ------------------------------
> *From:* Jim Bromer <[email protected]>
> *Sent:* 20 February 2017 03:27 PM
> *To:* AGI
> *Subject:* Re: [agi] IIT: Conscious Programming Structures
>
> Now that I think about it, I have seen glimmers of this kind of
> self-awareness in Watson, but since Watson was not able to follow up
> and learn something new from these glimmers, I concluded that it was
> probably a bot-like algorithm that someone had pasted onto Watson.
> Jim Bromer
>
>
> On Mon, Feb 20, 2017 at 6:51 AM, Jim Bromer <[email protected]> wrote:
> > I started reading a couple of the links to Integrated Information
> > Theory that Logan supplied and I really do not see how it can be seen
> > as relevant to AI or AGI. To me it looks like a case study of how an
> > over-abstraction of philosophical methodologies in an attempt to make
> > the philosophy more formal and more like a technical problem can go
> > wrong. We do not know how consciousness in all of its forms arise. We
> > can't use contemporary science to explain the causes of consciousness
> > as Chalmers described in his Hard Problem. To say that it simply exists
> > as an axiom is fine but Logan (to the best of my understanding)
> > started this thread by trying to apply that axiom to minimal computer
> > algorithmic methods or circuits. Logan's initial question was
> > interesting to me when I interpreted 'consciousness' in a way that
> > could reasonably be considered for an AI program. That is, are there
> > minimal sub-programs (abstractions of computer programs) which, for
> > example, might explain self-awareness. Going from there, are there
> > minimal abstractions of programs which might be capable of more
> > efficient integration and differentiation of knowledge, especially
> > concerning self-awareness. We might and should ask about
> > self-awareness of our own thinking and how it might be used to further
> > understanding, and how this kind of knowledge might be used to develop
> > better AI/AGI programs.
> >
> > My view is that GOFAI should have worked. The questions then are why
> > didn't it and how might it? We should see glimmers of AGI, capable of
> > self-awareness in at least the minimal sense of useful insight about
> > what the program is itself doing and discovering reasons why it
> > responded in a way that was not insightful. I say this kind of
> > artificial self-awareness should be feasible for a computer program. I
> > also thought that this is a minimal form of consciousness that could
> > be relevant to our discussions. I haven't seen a glimmer of this kind
> > of conscious self-awareness in AI. So is there something about minimal
> > self-awareness for computer programs that could be easily tested and
> > used to start a more robust form of AI? Could some kind of computer
> > methodology be developed that could explain artificial self-awareness
> > and which could be used to simplify the problem of creating an AI
> > program?
> > Jim Bromer
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
