On Wed, Mar 15, 2023, 7:47 AM spudboy100 via Everything List <
everything-list@googlegroups.com> wrote:

> The question offered up 6 weeks ago was how does the similarity to animal
> brains arise from a Server Farm?

There was a recent paper showing a spontaneously arising similarity
between language models and neural activity in the language centers of
human brains:

https://pubmed.ncbi.nlm.nih.gov/35173264/ “Brains and algorithms partially
converge in natural language processing”

> At this point, I claim it doesn't and that 3 and 4 are clever Language
> Machines.
> The claim that, via magic, a consciousness arises in silicon or gallium
> arsenide seems a tall order. I have seen no article by any computer
> scientist, neurobiologist, or physicist indicating HOW computer
> consciousness arose. If there is something out there, somebody please
> present a link to this august mailing-list.

It's not via magic, but via the recognition that our bodies are machines,
and then asking: would another machine, with behavior and functions similar
to ours, not have an equal claim to consciousness?

Indeed, if philosophical zombies are impossible, then Turing universality
guarantees that an appropriately programmed computer would be conscious,
and could be conscious in exactly the same way as humans are. See
Chalmers's "Absent Qualia, Fading Qualia, Dancing Qualia" paper for a good
argument of this. It's freely accessible on his website.

Below I show how this thinking has developed over the past few thousand
years:

“But the facts are that the power of perception is never
found apart from the power of self-nutrition, while, in plants, the
latter is found isolated from the former. Again, no sense is found
apart from that of touch, while touch is found by itself; many animals
have neither sight, hearing, nor smell. Again, among living things
that possess sense some have the power of locomotion, some not. Lastly,
certain living beings, a small minority, possess calculation and thought,
for (among mortal beings) those which possess calculation have all
the other powers above mentioned, while the converse does not hold; indeed
some live by imagination alone, while others have not even imagination.
The mind that knows with immediate intuition presents a different
problem.”
-- Aristotle, "On the Soul" (350 B.C.)

"I should like you to consider that these functions (including passion,
memory, and imagination) follow from the mere arrangement of the machine’s
organs every bit as naturally as the movements of a clock or other
automaton follow from the arrangement of its counter-weights and wheels."
-- René Descartes, Treatise on Man (written c. 1633, published
posthumously in 1662)

Man a Machine - Julien Offray de La Mettrie (1748)
“Man is so complicated a machine that it is impossible to get a clear idea
of the machine beforehand, and hence impossible to define it.”

Alan Turing in BBC Radio Interview: “Can Digital Computers Think?” May 1951.
“In order to arrange for our computer to imitate a given machine it is only
necessary to programme the computer to calculate what the machine in
question would do under given circumstances, and in particular what answers
it would print out. The computer can then be made to print out the same
[answers]. If now, some particular machine can be described as a brain we
have only to
programme our digital computer to imitate it and it will also be a brain.
If it is accepted that real brains, as found in animals, and in particular
in men, are a sort of machine it will follow that our digital computer
suitably programmed will behave like a brain.”

“The important result of Turing’s is that in this way the first [universal]
machine can be caused to imitate the behavior of any other machine.”
-- John von Neumann in “The Computer and the Brain” (1958)
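The universality Turing and von Neumann describe can be made concrete with a toy simulator. The sketch below is my own illustration, not from any of the quoted sources (the rule-table format and the `run_turing_machine` helper are assumptions of the example): a single general program that, fed a description of some particular machine, reproduces that machine's behavior.

```python
def run_turing_machine(rules, tape, state="start", steps=1000):
    """Simulate any Turing machine given as a rule table.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right); state "halt" stops the run.
    """
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank "_"
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A described machine that inverts a binary string, bit by bit.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run_turing_machine(invert, "10110"))  # -> 01001
```

Swapping in a different rule table imitates a different machine; the simulator itself never changes, which is the whole point of universality.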

Minds and Machines - Hilary Putnam (1960)
"The functional organization (problem solving, thinking) of the human being
or machine can be described in terms of the sequences of mental or logical
states respectively (and the accompanying verbalizations), without
reference to the nature of the “physical realization” of these states."

“I tend to view animals, especially furry animals, as conscious, not plants,
not inanimate crystals, not computers. This might be termed the "cuddliness
criterion" for consciousness. My reasons are practical: it makes little
difference at present whether computers are conscious or not.” (p. 298)
-- Karl H. Pribram in "Freud's Project Reassessed" (1976)

http://www-formal.stanford.edu/jmc/ascribing.pdf “ASCRIBING MENTAL
QUALITIES TO MACHINES” (1979)

“Machines as simple as thermostats can be said to have beliefs, and having
beliefs seems to be a characteristic of most machines capable of problem
solving performance. However, the machines mankind has so far found it
[useful] to construct rarely have beliefs about beliefs, although such
beliefs will [be] needed by computer programs that reason about what
knowledge they lack and where to get it.”

“Whether we are based on carbon or on silicon makes no fundamental
difference; we should each be treated with appropriate respect.”
― Arthur C. Clarke, 2010: Odyssey Two (1982) p. 230

Paul and Patricia Churchland wrote in 1983,
“Church's Thesis says that whatever is computable is Turing computable.
Assuming, with some safety, that what the mind-brain does is computable,
then it can in principle be simulated by a computer.”

http://www-formal.stanford.edu/jmc/little.pdf (1983) “The Little Thoughts
of Thinking Machines”
“Ever since Descartes, philosophically minded people have wrestled with
the question of whether it is possible for machines to think. As we interact
more and more with computers — both personal computers and others — the
questions of whether machines can think and what kind of thoughts they can
have become ever more pertinent. We can ask whether machines remember,
believe, know, intend, like or dislike, want, understand, promise, owe, have
rights or duties, or deserve rewards or punishment. Is this an
[all-or-nothing] question, or can we say that some machines do some of
these things and not
others, or that they do them to some extent?”

David Deutsch in 1985 wrote, “I can now state the physical version of the
Church-Turing principle: "Every finitely realizable physical system can be
perfectly simulated by a universal model computing
machine operating by finite means."”

Facing Up to the Problem of Consciousness - David Chalmers (1995)
“Where there is simple information processing, there is simple experience,
and where there is complex information processing, there is complex
experience. A mouse has a simpler information-processing structure than a
human, and has correspondingly simpler experience; perhaps a thermostat, a
maximally simple information processing structure, might have maximally
simple experience?”

Pattern on the Stone - Danny Hillis (1998)
"The theoretical limitations of computers provide no useful dividing line
between human beings and machines. As far as we know, the brain is a kind
of computer, and thought is just a complex computation. Perhaps this
conclusion sounds harsh to you, but in my view it takes away nothing from
the wonder of human thought. The statement that thought is a complex
computation is like the statement sometimes made by biologists that life is
a complex chemical reaction: both statements are true, and yet they still
may be seen as incomplete. They identify the correct components but they
ignore the mystery. To me, life and thought are both made all the more
wonderful by the realization that they emerge from simple, understandable
parts. I do not feel diminished by my kinship to Turing's machine."

“The unrelenting advance of machine intelligence, [...] will bring machines
to human levels of intricacy and refinement and beyond within several
decades. Will these machines be conscious?”
-- Ray Kurzweil in "The Age of Spiritual Machines" (1999)

“It is interesting to speculate on just what our principles of coherence
imply for the existence of consciousness outside the human race, and in
particular in much simpler organisms. The matter is unclear, as our notion
of awareness is only clearly defined for cases approximating human
complexity. It seems reasonable to say that a dog is aware, and even that a
mouse is aware (perhaps they are not self-aware, but that is a different
matter). For example, it seems reasonable to say that a dog is aware of a
fire hydrant in the basic sense of the term “aware.” The dog’s control
systems certainly have access to information about the hydrant, and can use
it to control behavior appropriately. By the coherence principle, it seems
likely that the dog experiences the hydrant, in a way not unlike our visual
experience of the world. This squares with common sense; all I am doing
here is making the common sense reasoning a little more explicit.
The same is arguably true for mice and even for flies. Flies have some
limited perceptual access to environmental information, and their
perceptual contents presumably permeate their cognitive systems and are
available to direct behavior. It seems reasonable to suppose that this
qualifies as awareness, and that by the coherence principle there is some
kind of accompanying experience.”
-- David Chalmers in "The Conscious Mind" (1996)

“As we discussed in chapter 2, one of William James's most valuable insights
was to realize that consciousness is not a thing but a process. Although
few people would disagree in principle with this conclusion, it is often
ignored in practice, as indicated by the continuing attempts to identify
some special intrinsic marker of those neurons that would generate
conscious experience. The dynamic core hypothesis takes James’s insight
seriously: As a process, a dynamic core is defined in terms of neural
interactions. In other words, the definition of a dynamic core is a
functional one, in that it is based on the strength of an ensemble of
interactions, rather than just on a structure, a property of some neurons,
or their location.”
-- Gerald Maurice Edelman and Giulio Tononi in "A Universe of
Consciousness" (2000)

"Amoeba’s Secret" Bruno Marchal (2014)
“The hypothesis is that of Mechanism: the idea that we could be digital
machines, in a sense that will be rendered more clearly in due course.
Broadly speaking, we might be machines in the precise sense that no parts
of our bodies are privileged with respect to an eventual functional
substitution. This says that we can survive a heart substitution by the
transplant of an artificial heart, or of a kidney substitution by an
artificial kidney, etc., inasmuch as the substitution is carried out at a
sufficiently fine-grained level.”

“When people talk about consciousness, something often mentioned is
“self-awareness” or the ability to “think about one’s own processes of
thinking”. Without the conceptual framework of computation, this might seem
quite mysterious. But the idea of universal computation instead makes it
seem almost inevitable. The whole point of a universal computer is that it
can be made to emulate any computational system—even itself.”
-- Stephen Wolfram in “What Is Consciousness?” (2021)
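Wolfram's point, that a universal computer can emulate any computational system, even itself, can be shown with a short sketch. This is purely my own toy example (not from Wolfram), using Python's `exec` as the universal machine: a system run directly, then run under emulation, then run under an emulation of the emulator, all giving the same answer.

```python
# A toy "machine": a Python function we treat as the system to be emulated.
def machine(x):
    return x * x + 1

# Level 0: run the machine directly.
direct = machine(5)

# Level 1: a universal interpreter (Python's exec) running a *description*
# of the machine, i.e. its source code treated as data.
source = "def machine(x):\n    return x * x + 1\nresult = machine(5)\n"
env1 = {}
exec(source, env1)

# Level 2: the interpreter emulating itself emulating the machine.
outer = "env = {}\nexec(" + repr(source) + ", env)\nresult = env['result']\n"
env2 = {}
exec(outer, env2)

print(direct, env1["result"], env2["result"])  # all three agree: 26 26 26
```

Each added level of emulation changes nothing about the answer, only about how indirectly it is computed, which is what makes self-emulation "almost inevitable" rather than mysterious.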


“Dr Nando de Freitas said “the game is over” in the decades-long quest to
realise artificial general intelligence (AGI) after DeepMind unveiled an AI
system capable of completing a wide range of complex tasks, from stacking
blocks to writing poetry.
Described as a “generalist agent”, DeepMind’s new Gato AI needs to just be
scaled up in order to create an AI capable of rivalling human intelligence,
Dr de Freitas said.”

> Now, for life arising out of chemicals on planet earth, I stumbled upon
> this yesterday. The theory is called Nickelback (O Canada!)
> Scientists Have Found Molecule That Is Behind The Origin Of Life On Earth?
> Read To Know
> https://www.republicworld.com/science/space/scientists-have-found-molecule-that-is-behind-the-origin-of-life-on-earth-read-to-know-articleshow.html
> Somebody come up with a theory that network systems can accidentally
> produce a human level mind, before we celebrate chat4 overmuch.
> Let humans come up with a network that invents technology that produces
> inventions that humans alone would not have arrived at for decades or
> centuries! That would be the big breakthrough, and not a fun chatbot.

There is Koza's "Invention Machine", which used genetic programming to
evolve designs that duplicated, and in some cases improved on, previously
patented inventions.

And computers have been used to design computers since at least the 80s,
with Danny Hillis's "Thinking Machines". He notes that the human brain
can't keep track of a device with billions of parts; only machines can do
that. So humans alone, even given centuries, would not be able to design
the CPU chips we have and use today.


> -----Original Message-----
> From: Telmo Menezes <te...@telmomenezes.net>
> To: Everything List <everything-list@googlegroups.com>
> Sent: Tue, Mar 14, 2023 11:45 am
> Subject: Re: The connectome and uploading
> Am Di, 14. Mär 2023, um 13:48, schrieb John Clark:
> On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes <te...@telmomenezes.net>
> wrote:
> > One of the authors of the article says "It’s interesting that the
> computer-science field is converging onto what evolution has discovered",
> he said that because it turns out that 41% of the fly brain's neurons are
> in recurrent loops that provide feedback to other neurons that are upstream
> of the data processing path, and that's just what we see in modern AIs like
> ChatGPT.
> *> I do not think this is true. ChatGPT is a fine-tuned Large Language
> Model (LLM), and LLMs use a transformer architecture, which is deep but
> purely feed-forward, and uses attention heads. The attention mechanism was
> the big breakthrough back in 2017, that finally enabled the training of
> such big models:*
> I was under the impression that transformers are superior to recurrent
> neural networks because recurrent processing of data was not necessary with
> transformers so more parallelization is possible than with recurrent
> neural networks; it can analyze an entire sentence at once and doesn't
> need to do so word by word. So transformers learn faster and need less
> training data.
> It is true that transformers are faster for the reason you say, but the
> vanishing gradient problem was definitely an issue. Right before
> transformers, the dominant architecture was LSTM, which was recurrent but
> designed in such a way as to deal with the vanishing gradient:
> https://en.wikipedia.org/wiki/Long_short-term_memory
> Memory is the obvious way to deal with context, but like you say
> transformers consider the entire sentence (or more) all at once. Attention
> heads allow for parallel learning to focus on several aspects of the
> sentence at the same time, and then combining them at higher and higher
> layers of abstraction.
> I do not think that any of this has any impact on the size of the training
> corpus required.
> *> My intuition is that if we are going to successfully imitate biology we
> must model the various neurotransmitters.*
> That is not my intuition. I see nothing sacred in hormones,
> I agree that there is nothing sacred about hormones, the only important
> thing is that there are several of them, with different binding properties.
> Current artificial neural networks (ANNs) only have one type of signal
> between neurons, the activation signal. Our brains can signal different
> things, importantly using dopamine to regulate learning -- and thus serve
> as a building block for a decentralized, emergent learning algorithm that
> clearly can deal with recursive connections with no problem.
> With recurrent connections a NN becomes Turing complete. I would be
> extremely surprised if Turing completeness turns out to not be a
> requirement for AGI.
> I don't see the slightest reason why they or any neurotransmitter would be
> especially difficult to simulate through computation, because chemical
> messengers are not a sign of sophisticated design on nature's part, rather
> it's an example of Evolution's bungling. If you need to inhibit a nearby
> neuron there are better ways of sending that signal than launching a GABA
> molecule like a message in a bottle thrown into the sea and waiting ages
> for it to diffuse to its random target.
> Of course, they are easy to simulate. Another question is if they are easy
> to simulate at the speed that we can perform gradient descent using
> contemporary GPU architectures. Of course, this is just a technical
> problem, not a fundamental one. What is more fundamental (and apparently
> hard) is to know *what* to simulate, so that a powerful learning algorithm
> emerges from such local interactions.
> Neuroscience provides us with a wealth of information about the biological
> reality of our brains, but what to abstract from this to create the master
> learning algorithm that we crave is perhaps the crux of the matter. Maybe
> it will take an Einstein level of intellect to achieve this breakthrough.
> I'm not interested in brain chemicals, only in the information they
> contain, if somebody wants information to get transmitted from one place
> to another as fast and reliably as possible, nobody would send smoke
> signals if they had a fiber optic cable. The information content in each
> molecular message must be tiny, just a few bits because only about 60
> neurotransmitters such as acetylcholine, norepinephrine and GABA are known,
> even if the true number is 100 times greater (or a million times for that
> matter) the information content of each signal must be tiny. Also, for the
> long range stuff, exactly which neuron receives the signal can not be
> specified because it relies on a random process, diffusion. The fact that
> it's slow as molasses in February does not add to its charm.
> I completely agree, I am not fetishizing the wetware. Silicon is much
> faster.
> Telmo
> If your job is delivering packages and all the packages are very small,
> and your boss doesn't care who you give them to as long as they're on the
> correct continent, and you have until the next ice age to get the work
> done, then you don't have a very difficult profession.  Artificial neurons
> could be made to communicate as inefficiently as natural ones do by
> releasing chemical neurotransmitters if anybody really wanted to, but it
> would be pointless when there are much faster, and much more reliable, and
> much more specific ways of operating.
> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv089oC%3DAc-DswW5simNfWzQsGAZADjusaWOacE4M6kt9g%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAJPayv089oC%3DAc-DswW5simNfWzQsGAZADjusaWOacE4M6kt9g%40mail.gmail.com?utm_medium=email&utm_source=footer>.
