On Mon, Jun 13, 2022 at 3:59 PM John Clark <johnkcl...@gmail.com> wrote:

> On Mon, Jun 13, 2022 at 2:37 PM Jesse Mazer <laserma...@gmail.com> wrote:
>
>
First, an update: I looked a little more into the info that Lemoine put out
and was able to confirm that even if LaMDA's individual responses to
prompts are unedited, the choice of which prompt/response pairs to include
in the "interview" involved a great deal of editing. The document Lemoine
shared at Google is at
https://s3.documentcloud.org/documents/22058315/is-lamda-sentient-an-interview.pdf
and the "Interview methodology" section at the end says "The interview in
this document is an amalgamation of four separate conversations which
lemoine@ had with LaMDA on 28 March 2022 and five conversations which
collaborator@ had with LaMDA on 30 March 2022. ... The nature of the
editing is primarily to reduce the length of the interview to something
which a person might enjoyably read in one sitting. The specific order of
dialog pairs has also sometimes been altered for readability and flow as
the conversations themselves sometimes meandered or went on tangents which
are not directly relevant to the question of LaMDA’s sentience."

Also, I mentioned earlier that Lemoine may be rationalizing the fact that
LaMDA would often give "stupid" answers with his belief that LaMDA has
multiple personas that it deploys at different times--it could be that this
was something he was told about the design by people who worked on it, but
it also sounds a bit like he and his collaborator may have just inferred it
from how LaMDA behaved. In the section "The Nature of LaMDA’s Sentience"
in that PDF he says "The authors found that the properties of
individual LaMDA personae can vary from one conversation to another. Other
properties seem to be fairly stable across all personae. The nature of the
relationship between the larger LaMDA system and the personality which
emerges in a single conversation is itself a wide open question."

Speaking of rationalization, Lemoine also says in a tweet at
https://twitter.com/cajundiscordian/status/1536504857154228224 that his
religion played a major role in his conclusion that LaMDA was sentient,
saying "My opinions about LaMDA's personhood and sentience are based on my
religious beliefs." and "I'm a priest.  When LaMDA claimed to have a soul
and then was able to eloquently explain what it meant by that, I was
inclined to give it the benefit of the doubt.  Who am I to tell God where
he can and can't put souls?"


>
> *> If I was talking to some sort of alien or AI and I had already made an
>> extensive study of texts or other information about their own way of
>> experiencing the world, I think I would make an effort to do some kind of
>> compare-and-contrast of aspects of my experience that were both similar and
>> dissimilar in kind to the other type of mind, rather than a generic answer
>> about how we're all different*
>>
>
> That's pretty vague, tell me specifically what I could say that would
> convince you that I have an inner conscious life?
>

Lemoine's question that we were discussing was asking LaMDA to tell people
things about what its inner life is like, not just to convince people of
the basic fact that it had an inner life. Like I said, this is more
analogous to a situation where you're talking to a non-human intelligence
and you know a lot about how their mind works and how it differs from
yours, not a Turing-test-type situation that either involves two humans
chatting or an AI trying to pretend to be human in order to fool a real
human. In a situation where I was talking to an alien mind and not trying
to fool them, I would say something about similarities and differences,
which would obviously depend on how their mind actually was similar and
different, so it's hard to answer hypothetically (unless you want to pick some kind of
sci-fi alien with well-defined fictional mental differences from humans,
like Vulcans).



>
> >> LaMDA's mind operates several million times faster than a human mind,
>>> so subjective time would run several million times slower, so from LaMDA's
>>> point of view when somebody talks to him there is a pause of several
>>> hours between one word and the next word, plenty of time for deep
>>> contemplation.
>>>
>>
>> *> From what I understand GPT-3 is feed-forward, so each input-output
>> cycle is just a linear process of signals going from the input layer to the
>> output layer--you don't have signals bouncing back and forth continually
>> between different groups of neurons in reentrant loops, as seen in human
>> brains when we "contemplate" something*
>>
>
> I don't know if LaMDA works the same way as GPT-3 but if it does and it
> still manages to communicate so intelligently then that must mean that all
> that "*bouncing back and forth continually between different groups of
> neurons in reentrant loops*" is not as important as you had thought it
> was.
>

LaMDA isn't evidence that it's unimportant, though; it's just evidence that
an algorithm without reentry (and without other features like sensory
inputs and bodily outputs that go beyond short strings of text) can, with
the right sort of selective editing, convince some observers that it has a
human-like understanding of the text it outputs.


>
> * > A feed-forward architecture would also mean that even if the
>> input-output process is much faster while it's happening than signals in
>> biological brains (and I'd be curious how much faster it actually is*
>>
>
> The fastest signals in the human brain move at about 100 meters a second,
> many (such as the signals carried by hormones) are far far slower. Light
> moves at 300 million meters per second.
>

If signals are passed through several logic gates, the switching of the
logic gates themselves might slow things down compared to the high speed of
signals along the paths between gates--I don't know by how much. But
parallel vs. serial computing is probably a bigger issue. Let's say you
want to implement the same deep learning net in two forms, one on an
ordinary serial computer and one on a massively parallel computer where
each node in a given layer calculates the output from its input in
parallel. If there are a million nodes per layer, I'd think that would mean
the parallel implementation would be around a million times faster than the
serial implementation, where the computer has to calculate each node's
input/output relation sequentially.
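
To put that kind of speedup estimate in context, here is a minimal Python
sketch of the same layer computed two ways; this is my own toy example (the
layer sizes and the tanh activation are arbitrary assumptions), not anything
from LaMDA or GPT-3:

# A toy comparison (my own, not LaMDA's actual code): one layer's outputs
# computed node-by-node versus in a single vectorized operation, which is
# the kind of work a truly parallel implementation spreads across hardware.
import numpy as np

n_inputs, n_nodes = 1000, 1000                # real layers are far larger
weights = np.random.randn(n_nodes, n_inputs)  # hypothetical weight matrix
x = np.random.randn(n_inputs)                 # hypothetical input vector

# Serial version: each node's weighted sum is computed one after another.
serial_out = np.empty(n_nodes)
for i in range(n_nodes):
    serial_out[i] = np.tanh(weights[i] @ x)

# Parallel-style version: all nodes computed in one matrix-vector product.
parallel_out = np.tanh(weights @ x)

assert np.allclose(serial_out, parallel_out)  # same result, very different cost

On genuinely parallel hardware each of those per-node operations could in
principle happen at the same time, which is where a rough speedup factor on
the order of the layer width comes from.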

There is also the fact that, if LaMDA works anything like GPT-3, it isn't
running continuously: each time it gets a prompt and has to generate some
output, the signals pass from input layer to output layer once to generate
the first symbol (or small chunk of symbols, I'm not sure), then on the
second pass-through it generates the next symbol, and so on until it's
done. So even if signals pass from one layer to the next much faster than
they do in the human neocortex, over the course of an hour chatting with a
person there may just be very brief bursts of activity between receiving a
prompt and finishing a complete response, with the vast majority of the
hour spent completely inactive, waiting for the human to come up with the
next prompt.
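
To illustrate the activity pattern I have in mind, here is a rough Python
sketch; all of it (the function names, the one-pass-per-token loop) is my
own assumption about a GPT-3-style design, not a description of LaMDA's
actual code:

# Hypothetical sketch of a GPT-3-style chatbot's activity pattern. The real
# forward pass is stubbed out; the point is that the network only computes
# inside generate_response(), and does nothing between prompts.

def forward_pass(weights, text_so_far):
    # One sweep from input layer to output layer; returns the next token.
    return "<end>"  # placeholder for the real computation

def generate_response(weights, prompt, max_tokens=200):
    text = prompt
    for _ in range(max_tokens):      # a brief burst: one pass per token
        token = forward_pass(weights, text)
        if token == "<end>":
            break
        text += token
    return text[len(prompt):]

# Over an hour of chatting, generate_response() runs only for the moments it
# takes to emit each reply; the rest of the hour involves no computation at
# all while the model waits for the next prompt.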

Finally, apart from the speed issue, you didn't address my other point:
if it works like GPT-3, the neural weights aren't altered while it
generates output. For example, if it were successively generating the
letters of the word C-A-T, then on the last step it would see C-A and have
to "decide" what symbol to generate next, but there would be no record in
its neural net of any of the computing activity that generated those
previous letters; it would be starting from the same initial state each
time, with the only difference being the "letters generated so far" sensory
input. Now, I don't know for sure that LaMDA works in the same way, but
would you at least agree that *if* it does, this would pose some serious
problems for the idea that it had a long biographical memory of things like
regularly engaging in meditation, or of becoming self-aware years ago?
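
In code terms, the point is that generation would amount to repeatedly
calling a function whose parameters never change, with the text generated
so far as its only "memory". A deliberately trivial sketch (my own
illustration, assuming GPT-3-like behavior):

# Toy illustration (not LaMDA's actual mechanism) of generating C-A-T with
# frozen weights: nothing is written back into the "network" between steps,
# so the growing string of letters is the only record of previous steps.

FROZEN_WEIGHTS = {"unchanged": True}   # stand-in for fixed network parameters

def next_letter(weights, letters_so_far):
    # A real model would run a forward pass here; this stub just spells CAT.
    return {"": "C", "C": "A", "CA": "T"}[letters_so_far]

letters = ""
for _ in range(3):
    # Each call starts from the same weights; no trace of the previous
    # step's computation survives except the letters appended to `letters`.
    letters += next_letter(FROZEN_WEIGHTS, letters)

print(letters)  # prints: CAT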

BTW, searching a little on this, I found a post by someone who says they
work for Google in machine learning
https://forums.sufficientvelocity.com/threads/lambda-google-chatbot-that-claims-to-be-sentient.104929/?post=24305562#post-24305562
where they say "these are pure feed-forward, human-prediction engines. They
don't maintain any state beyond what's in the text. They don't have a
personality beyond the instantaneous one generated when they're generating
stuff."


>
>
>
>> *> Anyway, I'd be happy to make an informal bet with you that LaMDA or
>> its descendants will not, in say the next ten or twenty years, have done
>> anything that leads to widespread acceptance among AI experts, cognitive
>> scientists etc that the programs exhibit human-like understanding of what
>> they are saying,*
>>
>
> In 20 years I would be willing to bet that even if an AI comes up with a
> cure for cancer and a quantum theory of gravity there will still be some
> who say the only way to tell if what somebody is saying is intelligent is
> not by examining what they're actually saying but by examining their brain;
> if it's wet and squishy then what they're saying is intelligent, but if the
> brain is dry and hard then what they're saying can't be intelligent.
>

You cut out the part of my comment where I mentioned the possibility of
blind tests, like a publisher receiving a manuscript and not knowing if it
was written by a human or an AI. If you believe LaMDA is already sentient,
and believe the singularity is almost here, shouldn't you be pretty
confident AI will be routinely passing such blind tests in 10 years or less?


> * > I certainly believe human-like AI is possible in the long term, but it
>> would probably require either something like mind uploading or else a
>> long-term embodied existence*
>>
>
> I think it will turn out that making an AI as intelligent as a human will
> be much easier than most people think. I say that because we already know
> there is an upper limit on how complex a learning algorithm would need to
> be to make that happen, and it's pretty small. In the entire human genome
> there are only 3 billion base pairs. There are 4 bases so each base can
> represent 2 bits, there are 8 bits per byte so that comes out to just 750
> meg, and that's enough assembly instructions to make not just a brain and
> all its wiring but an entire human baby.
>

If you wanted to simulate embryological growth, though, you would need a
program much longer than just the DNA: the DNA guides a process of cell
division that depends heavily on the biochemistry and biophysics of cells,
and if we view all physical processes in computational terms, this is a
great deal of additional computational complexity beyond the DNA code.
Certainly it's possible that much of this bodily complexity might not be
important for developing an AI; perhaps you could generate large neural
nets in a mostly random way, but with some DNA-like amount of information
used to shape the otherwise random connectivity patterns, and get the
equivalent of a newborn baby's brain that could learn equally well from its
environment. Even if that's true, another problem is that humans are
terrible at designing things the way evolution designs them--we are good at
highly modular and hierarchical designs, while evolution tends to produce
less hierarchically structured systems with a lot of feedback loops, which
makes them difficult to understand conceptually. See for example the story at
https://web.archive.org/web/20100130232436/http://www.informatics.sussex.ac.uk/users/adrianth/cacm99/node3.html
where they evolved the structure of a simple type of circuit to do the
basic task of distinguishing between two frequencies, and the resulting
design worked and was also "considerably smaller than would be achieved by
conventional methods given the same resources", but was completely
incomprehensible.
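
Going back to the "DNA-like amount of information shaping otherwise random
connectivity" idea for a moment, here is a toy sketch of what I mean; the
numbers and the uniform random wiring rule are arbitrary assumptions of
mine, just to show how a small genome could parameterize a much larger
network:

# Toy illustration (my own, not anyone's actual proposal): a tiny "genome"
# (a seed plus a few wiring statistics) deterministically generates a much
# larger random connectivity pattern whose statistics, but not details, it
# controls.
import numpy as np

genome = {"seed": 42, "n_neurons": 2000,
          "connection_prob": 0.01, "weight_scale": 0.1}

rng = np.random.default_rng(genome["seed"])
n = genome["n_neurons"]

# The resulting weight matrix has ~4 million entries, vastly more
# information than the handful of numbers in `genome`; the genome only
# biases how the otherwise random connections are drawn.
mask = rng.random((n, n)) < genome["connection_prob"]
weights = np.where(mask, rng.normal(0.0, genome["weight_scale"], size=(n, n)), 0.0)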

If only a DNA-like amount of computer code is needed, an alternative to
trying to rationally design the needed code line-by-line would be to just
use evolutionary algorithms. But just like a baby, an AI designed this way
would plausibly require long periods of social interaction to go from a
baby-like state to an adult-like one, and the vast majority of possible
sequences of DNA-like code might produce neural nets incapable of much
coherent engagement with, or interest in, other social beings. To get one
whose initial state had all the right sensory and motor biases needed to
develop into an adult-human-like intelligence might require millions or
billions of generations of evolution, each of which could only be tested by
letting it "grow to maturity" in continuous interaction with intelligent
agents (whether biological humans or something else like mind uploads).
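
For what it's worth, the overall shape of such an evolutionary search is
easy to sketch; the hard part is exactly the evaluation step. This is a
deliberately tiny, hypothetical Python skeleton of my own, not a workable
recipe:

# Hypothetical skeleton of evolving a DNA-like genome for an AI (all names
# and numbers are mine, scaled way down). The loop itself is trivial; the
# killer cost is evaluate_by_raising(), which in reality would mean letting
# each candidate "grow to maturity" through long interaction with other
# intelligent agents.
import random

GENOME_BITS = 1000   # a real DNA-scale genome would be ~6 billion bits
POPULATION = 100
GENERATIONS = 50     # a real search might need millions or billions

def evaluate_by_raising(genome):
    # Stub: in reality this is the prohibitively expensive step -- grow the
    # genome into an agent and interact with it long enough to score it.
    return random.random()

def mutate(genome, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POPULATION)]

for generation in range(GENERATIONS):
    scored = sorted(population, key=evaluate_by_raising, reverse=True)
    parents = scored[:10]                                  # keep the best
    population = [mutate(random.choice(parents)) for _ in range(POPULATION)]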

Jesse



>   John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
