On Wed, Feb 21, 2018 at 1:37 AM, Bruno Marchal <marc...@ulb.ac.be> wrote:

On 19 Feb 2018, at 21:27, 'Chris de Morsella' via Everything List 
<everything-list@googlegroups.com> wrote:

 
 
On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell <goldenfieldquaterni...@gmail.com> wrote:

On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
  
 
 On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
  
Computers such as AlphaGo have complex algorithms for taking the rules of a game like chess and running through long Markov chains of game events to increase their database for playing the game. There is not really anything about "knowing something" going on here. There is a lot of hype over AI these days, but I suspect a lot of it is meant to beguile people. I do suspect that in time we will interact with AI as if it were intelligent and conscious. The really big game-changer, though, I think will be the neural-cyber interlink that will make brains the primary internet nodes.
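To make the "long Markov chains of game events" concrete, here is a minimal Python sketch of the random-playout idea behind such programs. The game callbacks (legal_moves, apply_move, is_terminal, score) are hypothetical stand-ins, not anything taken from AlphaGo itself:

import random

def random_playout(state, legal_moves, apply_move, is_terminal, score):
    # Follow one random chain of game events until the game ends.
    while not is_terminal(state):
        state = apply_move(state, random.choice(legal_moves(state)))
    return score(state)

def estimate_value(state, legal_moves, apply_move, is_terminal, score, n=1000):
    # Average many random playouts to estimate how promising a position is.
    # Real systems add learned policy/value networks and tree search on top.
    total = sum(random_playout(state, legal_moves, apply_move, is_terminal, score)
                for _ in range(n))
    return total / n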
 
Why would you suppose that, when electronics have a signal speed ten million times faster than neurons? Presently neurons have an advantage in connection density and power dissipation, but I see no reason to think they can hold that advantage.
 
 Brent
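As a rough back-of-the-envelope check of that "ten million times faster" figure (the specific timescales below are assumptions, not numbers from the thread):

neuron_firing_period_s = 1e-3     # roughly 1 ms between spikes (assumed)
transistor_switch_time_s = 1e-10  # roughly 0.1 ns switching time (assumed)

print(neuron_firing_period_s / transistor_switch_time_s)  # ~1e7, i.e. ten million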


I think it may come down to computers that obey the Church-Turing thesis being finite and bounded. Hofstadter's book Gödel, Escher, Bach has a chapter on BlooP, FlooP, and GlooP, where BlooP means "bounded loop," a program guaranteed to halt on a Turing machine. Biology, however, is not BlooP, but is rather a web of processors that are more FlooP, or "free loop." The busy beaver function is such a case, which grows in complexity with each step. The computation of many fractals is like this as well: at a fine enough scale, each iteration of the Mandelbrot set needs refinement to a higher floating-point precision and thus grows hugely in complexity. These of course halt in practice only because the programmer puts in a stop by hand. These are recursively enumerable, and their complement in a set-theoretic sense are Gödel loops, or GlooP. For machines to have properties at least parallel to conscious behavior, we really have to be running in at least FlooP and maybe into GlooP.
LC
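To illustrate the bounded-loop versus free-loop distinction with the Mandelbrot example, here is a small Python sketch; it is only an illustration of the halting issue, not Hofstadter's formal BlooP/FlooP languages. The unbounded escape-time loop may never return for points inside the set, and the practical version halts only because of the cap the programmer puts in by hand:

def escape_time_unbounded(c):
    # FlooP-style free loop: for points inside the Mandelbrot set |z| never
    # exceeds 2, so this loop never terminates.
    z, n = 0j, 0
    while abs(z) <= 2:
        z = z * z + c
        n += 1
    return n

def escape_time_bounded(c, max_iter=1000):
    # BlooP-style bounded loop: the same computation with the stop that the
    # programmer "puts in by hand", so it always halts.
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter  # treated as "probably inside the set"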
Not sure if this has been touched on in this thread, but it seems to me that the emergent phenomena of self-awareness and consciousness both depend on information hiding in some fundamental way. Both our self-awareness and our conscious minds, which from our incomplete perspective seem innate and ever-present (at least when we are awake), are themselves the emergent outcomes of a vast amount of networked neural activity that is exquisitely hidden from us. We are unaware of the genesis of our own awareness.
Evidence from MRI scans supports this conclusion: before we are aware of being aware of some objectively measurable external event, or before we experience having a thought, the almost one hundred billion neurons crammed into the highly folded cortical pizza pie stuffed inside our skulls have already been very busy and chatty indeed.
We are aware of being aware and we experience conscious existence, but the process by which both our conscious experience and our awareness of being arise within our minds is largely hidden from us. I think it is a fair and reasonable question to ask: is information hiding a necessary and integral aspect of the processes through which self-awareness and consciousness arise?
In computer science, the rather recent emergence of deep neural networks offers some intriguing parallels that also seem to indicate a critical role for information hiding. These networks are characterized by having many layers, of which only the input layer and the output layer of neurons are directly measurable, while the many other layers arrayed in the stack between them remain hidden. Google's machine-learned neural networks for image processing, for example, have 10 to 30 (or by now perhaps even more) stacked layers of artificial neurons, most of which are hidden.
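A toy sketch of such a stack makes the sense of "hidden" concrete; the layer sizes and random weights here are made up for illustration and have nothing to do with Google's actual image models. Only the input you feed in and the output you read back are observable; the intermediate activations live and die inside forward():

import numpy as np

rng = np.random.default_rng(0)

class TinyStack:
    def __init__(self, sizes=(8, 16, 16, 16, 4)):
        # One random weight matrix per layer-to-layer connection.
        self.weights = [rng.normal(size=(m, n))
                        for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(self, x):
        for w in self.weights:
            x = np.tanh(x @ w)  # hidden activations, never exposed outside
        return x                # only the final (output) layer is visible

net = TinyStack()
print(net.forward(rng.normal(size=8)))  # input in, output out; the rest stays hidden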
Because of the non-linearity of the processes at play within these deep stacks of layered artificial neurons, it is difficult to know in any definitive manner exactly what is going on. One can experiment on the statistically trained (or, in the vernacular, machine-learned) models, for example by tweaking training parameters to see how doing so affects the resulting outcomes, and by subsequently forensically analyzing any generated logs and other telemetry. The outcomes are often surprisingly beautiful dreamscapes that are not reducible to a series of algorithmic steps applied by the many hidden layers to whatever input signals have been fed to the input layer of neurons.
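Those "dreamscape" images came from exactly this kind of probing: rather than reading the hidden layers directly, you nudge the input so that a chosen hidden layer's activations grow, then look at what the input has become. A hedged sketch of that gradient-ascent trick, using a small made-up model rather than Google's real image networks:

import torch
import torch.nn as nn

# Toy stand-in for a deep model; the sizes here are arbitrary assumptions.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),  # the hidden layer we want to amplify
    nn.Linear(128, 10),
)
hidden = model[:4]  # the network up to (and including) that hidden layer

x = torch.randn(1, 64, requires_grad=True)  # stands in for an input image
opt = torch.optim.SGD([x], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    (-hidden(x).norm()).backward()  # gradient ascent on the hidden activations
    opt.step()
# x has now been pushed toward whatever the hidden layer responds to most.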
It seems to me that the emergence of consciousness and self-awareness is likewise exquisitely nonlinear in nature, and that this nonlinear outcome itself depends on information hiding in order to operate. Each successive layer in the stack is mostly unaware of the vast array of activities occurring in the layers beneath it... or above it, for that matter.
Would consciousness or self-awareness even be possible without introducing information hiding into the deep stack through which these phenomena emerge? Personally, I do not think we could be conscious or self-aware without the multiple degrees of non-linearity introduced into the processing stream of sensorial signals and triggered memory recall by the firewall of information hiding.
It is by hiding by far the greater part of the processing stack from us that we come to experience this seemingly magical state of being. We emerge in a non-linear manner from a hidden world of which we are (for the most part) blithely unaware. The fact that a very similar kind of process seems to be taking place in these stacked layers of artificial neurons, most of which are hidden, supports this thesis.
Is information hiding, in fact, necessary to the emergence of self-awareness and consciousness?
This is the question I pose.

The mechanist answer to this is “yes”. The more neurons you have, the less conscious you are. The brain is a filter of the (arithmetical) information. I will not insist on this now, as it is shocking and quite counter-intuitive, but somehow the Löbian machine, which is more complex than the usual universal machine (she knows that she is universal), is more deluded, its soul is already “falling”, and it is less conscious. The math explains why the machine will tend to believe the contrary, and why nature benefits from that ignorance in some way. Now, the hidden information is not necessarily related to the hidden layers of a neural network, at least not at first sight. The hiding is more logical/modal, at a deeper level, independent of the implementations used in the computation.
-------- In some ways, I think you are correct. One of the brain's functions is to throw out information that it decides is irrelevant or unimportant from its own peculiar Darwinian perspective. It is a filter, and a necessary one, in order to survive and thrive within the sensorial onslaught of reality. Sometimes less is more. However, though perhaps a spider may exist in a less filtered internal state of being than a mouse, I don't see how it is more conscious. Is an amoeba even more conscious than a spider? Is the simplest, most elementary particle the most conscious entity of all?
Now, I grant that consciousness and self-awareness may themselves be an elaborate and necessary schism and illusion, arising from and within the labyrinthine neural networks of our brains, and resulting in our hermetic selves being cut off, by the very act of self-identification, from the wellspring of a much vaster, deeper, ineffable being. So in this particular sense the very emergence of self-identification becomes a veil that cuts us off from direct experience. We exist in reified mental constructs, inside a filtered, mind-generated virtual reality. We don't see, hear, touch, smell or taste directly; instead we experience that which our mind serves up to us.
-Chris 
Bruno 







  





  

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
