Re: GPT-4 solving hard riddles

2023-03-21 Thread spudboy100 via Everything List
Newton was correct, but of course he knew jack shit about the minds that came 
after his lifetime. A mere intelligent computer is what you are really asking 
about. "High-speed morons" was a phrase from science fiction gawd Arthur C. 
Clarke. Thus, a very fast, very capable robot could beat a human soldier most 
of the time. Musk's neural-jacked humans may prove equal to the machinery, and 
if I recall, Hawking also advocated something like that 25+ years ago. 
Hence our species' need for Magnus, Robot Fighter, 4000 AD!
By the way, Magnus was trained in robot fighting by the AI robot 1A at his 
secret base under the Antarctic ice. 1A wanted to see the human species 
survive. Magnus often cut deals with robots because it was mutually beneficial. 



-----Original Message-----
From: John Clark 
To: everything-list@googlegroups.com
Sent: Tue, Mar 21, 2023 8:12 am
Subject: Re: GPT-4 solving hard riddles

On Tue, Mar 21, 2023 at 5:39 AM Telmo Menezes  wrote:


> the important methodological distinction here is between learning intelligent 
> behavior and demonstrating intelligent behavior. Obviously it is possible to 
> learn and generalize from a dataset, otherwise there would be no point in 
> wasting time with ML. But if you want to convince other people that you have 
> indeed achieved generalization, then the scientific gold standard is to 
> demonstrate this on data that was not used in training,

That "gold standard" for intelligence has never been met by computers or by 
human beings, even Newton, who certainly was not a modest person, admitted that 
he achieved what he did by "standing on the shoulders of giants".  Should we 
the give credit for discovering General Relativity to Einstein's teachers and 
not to Einstein?  Since GPT4 went public one week ago people all over the world 
have been asking hundreds of thousands, perhaps millions, of questions and 
receiving good and sometimes brilliant answers but, considering the fact that 
one of the many things it was trained on was the entirety of Wikipedia,  it 
would be impossible to prove that none of the questions it had been asked had 
the slightest similarity to something it was trained on. I know one thing for 
certain, if a human could answer questions and solve puzzles as well as GPT4 
nobody would hesitate in judging him to be intelligent. 
I think it's only fair to use the same criteria for judging machines as we do 
for humans. As Martin Luther King said  " I have a dream that one day the 
intelligence of beings will not be judged by the squishiness of their brains 
but by the content of their minds" ah or at least he said something 
like that, I may have gotten one or two words wrong  


> I am really just insisting on sticking to the scientific attitude.

It is not a scientific attitude to start an investigation of a machine's 
intelligence by insisting that the machine could never be intelligent. The 
double-blind Turing Test is just a specific example of the scientific method: 
have two test groups, keep everything the same between them except for one 
thing, and see what happens. In this case the one thing that is different is 
the squishiness of the brain.


 > I do not understand what I could be saying that is so controversial...


You do not understand why it's controversial not to accept the evidence of 
your own eyes? 

 > There is still a huge chasm between Human Intelligence (HI) and GPT-4. 

If there is an intelligence chasm between humans and machines then humans are 
standing on the wrong side of it, and the chasm is getting wider every day. 

> How long will it take to cross that chasm? 

Negative one week.  

 > But this only goes so far. It can never defeat a competent chess player with 
 > such an architecture. Of course, we can integrate GPT-4 with some API and 
 > let it call some explore_deep_tree() function, but this is not the sort of 
 > deep integration that one imagines in sophisticated AI. True recurrence 
 > would allow for true computational power within the model.


Why? Because if, whenever GPT-4 came upon a board game problem like Chess or 
GO, it called upon AlphaZero to provide the answer, then it wouldn't be able to 
explain exactly why it made the move it did? But the same thing is true for 
human Chess grandmasters: when asked to explain why they made the move they 
did, they can only give vague answers like "instinct told me that the upper 
left part of the board looked a little weak and needed reinforcing". A 
grandmaster can explain why it turned out to be a winning move, but he can't 
explain how he came up with the idea of making that move in the first place. 
People were always asking Einstein how he came up with his ideas, but he was 
never able to tell them; if he had been, then we'd all be as smart as Einstein. 
 John K Clark    See what's on my new list at Extropolis

Re: GPT-4 solving hard riddles

2023-03-21 Thread Jason Resch
On Tue, Mar 21, 2023, 5:39 AM Telmo Menezes  wrote:

>
>
>
> Over-fitting is less of an issue here because it's trivial to write a
> sentence that's never before been written by any human in history.
>
>
> That is not enough. A small variation on a standard IQ test is still the
> same IQ test for a super powerful pattern detector such as GPT-4.
>
> I have no doubt that GPT-4 can generalize in its domain. It was rigorously
> designed and tested for that by people who know what they are doing. My
> doubt is that you can give it an IQ test and claim OMG GPT-4 IQ > 140. This
> is just silly and it is junk science.
>
>
> It's true that once one learns a way to solve problems it becomes easier
> to reapply that method when you next encounter a related problem.
>
> But isn't that partly what intelligence is? If a system has read the whole
> Internet and seen every type of problem we know how to solve, and it can
> generalize to know what method to use in any situation, that's an
> incredible level of intelligence which until now, we haven't had in machine
> form before.
>
>
> I would say that the important methodological distinction here is between
> learning intelligent behavior and demonstrating intelligent behavior.
> Obviously it is possible to learn and generalize from a dataset, otherwise
> there would be no point in wasting time with ML. But if you want to
> convince other people that you have indeed achieved generalization, then
> the scientific gold standard is to demonstrate this on data that was not
> used in training, because beyond generalization there can also be (and
> often is) overfitting. This is not a controversial statement. Take any
> published ML result and apply it to the training data, and 99.999% of
> the time it will perform better, often much better, on the training data,
> because it also learned the little details (over-fitting) that guide it
> towards the correct answer.
>
> An extreme case of this is stock trading. I am not kidding, and I suspect
> you know it: I can easily produce an ML model that achieves >1000% profit
> per month on the derivatives market, as long as we only test on in-corpus
> data. But I will raise the stakes! Are you ready?
>
> I promise I will train my algorithm only on ONE crypto coin from 2020 to
> 2022. Then we will apply it to OTHER crypto coins. I still promise >1000%
> profit per month. Do you want it now?
>
> I understand that GPT-4 is trained on most available text in natural
> language. That is amazing, I love it. But this comes with additional
> methodological challenges. I am pretty sure that the GPT-4 team knows
> about them, and they probably have a rigorously reserved test set to
> guide their own research. Also, I fully believe that they are serious
> researchers and would never embark on this IQ test bullshit.
>
> I am really just insisting on sticking to the scientific attitude. I do
> not understand what I could be saying that is so controversial...
>

I see your point about testing. Someone on the entropy list chose to write
their own word-problem puzzle for it to solve. Perhaps this is the way: to
design new intelligence tests from scratch. But I don't see a way to ensure
we have developed entirely new classes of problems, of a type not seen before
in the corpus of the Internet. Perhaps the opportunity will only exist when
some mathematician proves something new.



>
> You can tweak the parameters of the problem to guarantee it's a problem it
> has never before seen, and it can still solve it.
>
>
> Some yes, some no. Almost one century of computer science still applies.
>
> You can choose to wait for the academic write ups to come out a few months
> down the line but by then things will have advanced another few levels from
> where we are today.
>
>
> I am not wanting to wait for anything, I am asking questions that can be
> addressed right now:
>
> - Are there IQ tests in the training data of GPT-4? Yes or no?
> - Can we conceive of human-level intelligence without recurrent
> connections or some form of ongoing recursivity / Turing completeness? Yes
> or no?
>
>
>
> I've been thinking about this a lot.
>
>
> My friend with access to GPT-4 asked it: "Does your neural network contain
> such reflexive loops, or is it strictly feed forward?", below is its reply:
>
> 
>
> "As an AI language model, my underlying architecture is based on the
> Transformer model, which is primarily feedforward in nature but includes
> some elements of recurrence or reflexivity, specifically in the
> self-attention mechanism.
>
> The Transformer model consists of a multi-layer feedforward neural network
> with self-attention mechanisms that allow the model to learn complex
> relationships between input and output sequences. The self-attention
> mechanism weighs the importance of different input elements relative to
> each other, effectively capturing long-range dependencies and relationships
> within the input data. This mechanism introduces a form of reflexivity or
> recurrence, as it allows the model to consider the context of the entire
> input sequence when generating outputs.

Re: GPT-4 solving hard riddles

2023-03-21 Thread John Clark
On Tue, Mar 21, 2023 at 5:39 AM Telmo Menezes 
wrote:

> *the important methodological distinction here is between learning
> intelligent behavior and demonstrating intelligent behavior. Obviously it
> is possible to learn and generalize from a dataset, otherwise there would
> be no point in wasting time with ML. But if you want to convince other
> people that you have indeed achieved generalization, then the scientific
> gold standard is to demonstrate this on data that was not used in training,*
>

That "gold standard" for intelligence has never been met by computers or by
human beings, even Newton, who certainly was not a modest person, admitted
that he achieved what he did by "standing on the shoulders of giants".
Should we the give credit for discovering General Relativity to Einstein's
teachers and not to Einstein?  Since GPT4 went public one week ago people
all over the world have been asking hundreds of thousands, perhaps millions,
of questions and receiving good and sometimes brilliant answers but,
considering the fact that one of the many things it was trained on was the
entirety of Wikipedia,  it would be impossible to prove that none of the
questions it had been asked had the slightest similarity to something it
was trained on. I know one thing for certain, if a human could answer
questions and solve puzzles as well as GPT4 nobody would hesitate in
judging him to be intelligent.

I think it's only fair to use the same criteria for judging machines as we
do for humans. As Martin Luther King said, "I have a dream that one day the
intelligence of beings will not be judged by the squishiness of their
brains but by the content of their minds". Ah, or at least he said
something like that; I may have gotten one or two words wrong.


*> I am really just insisting on sticking to the scientific attitude.*
>

It is not a scientific attitude to start an investigation of a machine's
intelligence by insisting that the machine could never be intelligent. The
double-blind Turing Test is just a specific example of the scientific
method: have two test groups, keep everything the same between them except
for one thing, and see what happens. In this case the one thing that is
different is the squishiness of the brain.


> * > I do not understand what I could be saying that is so controversial...*
>

You do not understand why it's controversial not to accept the evidence of
your own eyes?

* > There is still a huge chasm between Human Intelligence (HI) and GPT-4. *


If there is an intelligence chasm between humans and machines then humans
are standing on the wrong side of it, and the chasm is getting wider every
day.

*> How long will it take to cross that chasm? *


Negative one week.

* > But this only goes so far. It can never defeat a competent chess player
> with such an architecture. Of course, we can integrate GPT-4 with some API
> and let it call some explore_deep_tree() function, but this is not the sort
> of deep integration that one imagines in sophisticated AI. True recurrence
> would allow for true computational power within the model.*
>

Why? Because if, whenever GPT-4 came upon a board game problem like Chess or
GO, it called upon AlphaZero to provide the answer, then it wouldn't be able
to explain exactly why it made the move it did? But the same thing is true
for human Chess grandmasters: when asked to explain why they made the move
they did, they can only give vague answers like "instinct told me that the
upper left part of the board looked a little weak and needed reinforcing".
A grandmaster can explain why it turned out to be a winning move, but he
can't explain how he came up with the idea of making that move in the first
place.  People were always asking Einstein how he came up with his ideas,
but he was never able to tell them; if he had been, then we'd all be as
smart as Einstein.

 John K Clark    See what's on my new list at Extropolis



Re: GPT-4 solving hard riddles

2023-03-21 Thread Telmo Menezes

> 
>> 
>>> Over-fitting is less of an issue here because it's trivial to write a 
>>> sentence that's never before been written by any human in history.
>> 
>> That is not enough. A small variation on a standard IQ test is still the 
>> same IQ test for a super powerful pattern detector such as GPT-4.
>> 
>> I have no doubt that GPT-4 can generalize in its domain. It was rigorously 
>> designed and tested for that by people who know what they are doing. My 
>> doubt is that you can give it an IQ test and claim OMG GPT-4 IQ > 140. This 
>> is just silly and it is junk science.
> 
> It's true that once one learns a way to solve problems it becomes easier to 
> reapply that method when you next encounter a related problem.
> 
> But isn't that partly what intelligence is? If a system has read the whole 
> Internet and seen every type of problem we know how to solve, and it can 
> generalize to know what method to use in any situation, that's an incredible 
> level of intelligence which until now, we haven't had in machine form before.

I would say that the important methodological distinction here is between 
learning intelligent behavior and demonstrating intelligent behavior. Obviously 
it is possible to learn and generalize from a dataset, otherwise there would be 
no point in wasting time with ML. But if you want to convince other people that 
you have indeed achieved generalization, then the scientific gold standard is 
to demonstrate this on data that was not used in training, because beyond 
generalization there can also be (and often is) overfitting. This is not a 
controversial statement. Take any published ML result and apply it to the 
training data, and 99.999% of the time it will perform better, often much 
better, on the training data, because it also learned the little details 
(over-fitting) that guide it towards the correct answer.
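
To make the point concrete, here is a minimal sketch (assuming numpy and 
scikit-learn are installed) of how flattering in-corpus scores can be. The 
labels below are pure noise, so any skill the model shows on its own training 
data is over-fitting by construction:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # 1000 samples, 20 random features
y = rng.integers(0, 2, size=1000)  # labels carry no signal at all

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("in-corpus accuracy:    ", model.score(X_train, y_train))  # close to 1.0
print("out-of-corpus accuracy:", model.score(X_test, y_test))    # ~0.5, chance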

An extreme case of this is stock trading. I am not kidding, and I suspect you 
know it: I can easily produce an ML model that achieves >1000% profit per month 
on the derivatives market, as long as we only test on in-corpus data. But I 
will raise the stakes! Are you ready?

I promise I will train my algorithm only on ONE crypto coin from 2020 to 2022. 
Then we will apply it to OTHER crypto coins. I still promise >1000% profit per 
month. Do you want it now?

I understand that GPT-4 is trained on most available text in natural language. 
That is amazing, I love it. But this comes with additional methodological 
challenges. I am pretty sure that the GPT-4 team knows about them, and they 
probably have a rigorously reserved test set to guide their own research. 
Also, I fully believe that they are serious researchers and would never embark 
on this IQ test bullshit.

I am really just insisting on sticking to the scientific attitude. I do not 
understand what I could be saying that is so controversial...

> 
>>> You can tweak the parameters of the problem to guarantee it's a problem it 
>>> has never before seen, and it can still solve it.
>> 
>> Some yes, some no. Almost one century of computer science still applies.
>> 
>>> You can choose to wait for the academic write ups to come out a few months 
>>> down the line but by then things will have advanced another few levels from 
>>> where we are today.
>> 
>> I am not wanting to wait for anything, I am asking questions that can be 
>> addressed right now:
>> 
>> - Are there IQ tests in the training data of GPT-4? Yes or no?
>> - Can we conceive of human-level intelligence without recurrent connections 
>> or some form of ongoing recursivity / Turing completeness? Yes or no?
> 
> 
> I've been thinking about this a lot.
> 
> 
> My friend with access to GPT-4 asked it: "Does your neural network contain 
> such reflexive loops, or is it strictly feed forward?", below is its reply:
> 
> 
> 
> "As an AI language model, my underlying architecture is based on the 
> Transformer model, which is primarily feedforward in nature but includes some 
> elements of recurrence or reflexivity, specifically in the self-attention 
> mechanism.
> 
> The Transformer model consists of a multi-layer feedforward neural network 
> with self-attention mechanisms that allow the model to learn complex 
> relationships between input and output sequences. The self-attention 
> mechanism weighs the importance of different input elements relative to each 
> other, effectively capturing long-range dependencies and relationships within 
> the input data. This mechanism introduces a form of reflexivity or 
> recurrence, as it allows the model to consider the context of the entire 
> input sequence when generating outputs.
> 
> However, it is important to note that the self-attention mechanism in the 
> Transformer model is not equivalent to traditional recurrent neural networks 
> (RNNs), which maintain a hidden state that is updated at each time step. The 
> Transformer model processes input sequences in parallel rather than 
> sequentially, which makes it fundamentally different from RNNs.

Re: GPT-4 solving hard riddles

2023-03-20 Thread Brent Meeker




On 3/20/2023 4:00 AM, Telmo Menezes wrote:



Meanwhile, back in reality:

(1) Do you understand the importance of testing machine learning 
algorithms in out-of-corpus data? Do you understand the difference 
between generalization and overfitting? This is the bread and butter 
of machine learning. This is how ChatGPT was built. You are SUPER 
EXCITED about ChatGPT but you do not give a shit about the fundamentals 
of machine learning? You think they no longer apply, while at the same 
time cheerleading for its achievements? It's truly bizarre. I 
approached this topic but you refuse to engage. I actually do 
peer-review of ML papers and there is no way I (or anyone I work with) 
would take in-corpus tests seriously. They often look absurdly good. 
Will you take my trading algorithm offer?


(2) Human beings can form coherent memories and are capable of 
long-term goals, strategy and slow thinking -- the Turing-complete 
kind. I have even seen people now claim that ChatGPT is good at 
chess. It is incredibly good at chess given that it is a language 
model trained with chess books amongst many other things, so it can 
easily defeat naive players with chess recipes. Is it capable of 
navigating a min-max tree? Of course not, because it lacks recurrence. 
It cannot possibly win against older-generation AIs that do navigate 
min-max trees and do defeat grandmasters. So how do we combine the 
two types of AI?


This seems like a crucial task for making really usable AI consultants.  
You expect an AI to be good at the things computers are good at, and there 
are plenty of computer modules to do mathematical inference and Bayesian 
reasoning.


Brent

It looks like you don't care about any of this; instead you want to 
convince me that ChatGPT is the answer to everything. Ok, maybe you 
are right and I am crazy.


Telmo




Re: GPT-4 solving hard riddles

2023-03-20 Thread Jason Resch
On Mon, Mar 20, 2023 at 9:37 AM John Clark  wrote:

> On Mon, Mar 20, 2023 at 10:15 AM Jason Resch  wrote:
>
> Jason, that was a very interesting and insightful post, thanks for posting
> it.
>

Thank you John, I appreciate that. Thank you for sharing that video. I have
passed it on to numerous others.

Jason



Re: GPT-4 solving hard riddles

2023-03-20 Thread John Clark
On Mon, Mar 20, 2023 at 10:15 AM Jason Resch  wrote:

Jason, that was a very interesting and insightful post, thanks for posting
it.



John K Clark    See what's on my new list at Extropolis


>
> On Mon, Mar 20, 2023, 9:51 AM Telmo Menezes 
> wrote:
>
>>
>>
>> On Mon, Mar 20, 2023, at 14:28, Jason Resch wrote:
>>
>> The video John shared is worth watching. This is significant. It is now
>> solving complex math problems which require a long sequence of steps.
>>
>>
>> I agree that it is significant and extremely impressive. I never said the
>> opposite. What baffles me is that John is now requiring religious reverence
>> towards a scientific result, and criticizing me when I ask questions that are
>> part of the same standard machine learning methodology that got us here.
>>
>
> I see, I appreciate that clarification.
>
>
>
>> Over-fitting is less of an issue here because it's trivial to write a
>> sentence that's never before been written by any human in history.
>>
>>
>> That is not enough. A small variation on a standard IQ test is still the
>> same IQ test for a super powerful pattern detector such as GPT-4.
>>
>> I have no doubt that GPT-4 can generalize in its domain. It was
>> rigorously designed and tested for that by people who know what they are
>> doing. My doubt is that you can give it an IQ test and claim OMG GPT-4 IQ >
>> 140. This is just silly and it is junk science.
>>
>
> It's true that once one learns a way to solve problems it becomes easier
> to reapply that method when you next encounter a related problem.
>
> But isn't that partly what intelligence is? If a system has read the whole
> Internet and seen every type of problem we know how to solve, and it can
> generalize to know what method to use in any situation, that's an
> incredible level of intelligence which until now, we haven't had in machine
> form before.
>
>
>
>> You can tweak the parameters of the problem to guarantee it's a problem
>> it has never before seen, and it can still solve it.
>>
>>
>> Some yes, some no. Almost one century of computer science still applies.
>>
>> You can choose to wait for the academic write ups to come out a few
>> months down the line but by then things will have advanced another few
>> levels from where we are today.
>>
>>
>> I am not wanting to wait for anything, I am asking questions that can be
>> addressed right now:
>>
>> - Are there IQ tests in the training data of GPT-4? Yes or no?
>> - Can we conceive of human-level intelligence without recurrent
>> connections or some form of ongoing recursivity / Turing completeness? Yes
>> or no?
>>
>
>
> I've been thinking about this a lot.
>
>
> My friend with access to GPT-4 asked it: "Does your neural network contain
> such reflexive loops, or is it strictly feed forward?", below is its reply:
>
> 
>
> "As an AI language model, my underlying architecture is based on the
> Transformer model, which is primarily feedforward in nature but includes
> some elements of recurrence or reflexivity, specifically in the
> self-attention mechanism.
>
> The Transformer model consists of a multi-layer feedforward neural network
> with self-attention mechanisms that allow the model to learn complex
> relationships between input and output sequences. The self-attention
> mechanism weighs the importance of different input elements relative to
> each other, effectively capturing long-range dependencies and relationships
> within the input data. This mechanism introduces a form of reflexivity or
> recurrence, as it allows the model to consider the context of the entire
> input sequence when generating outputs.
>
> However, it is important to note that the self-attention mechanism in the
> Transformer model is not equivalent to traditional recurrent neural
> networks (RNNs), which maintain a hidden state that is updated at each time
> step. The Transformer model processes input sequences in parallel rather
> than sequentially, which makes it fundamentally different from RNNs.
>
> In summary, while my neural network architecture is primarily feedforward,
> it includes some elements of reflexivity in the form of self-attention
> mechanisms that allow the model to capture complex relationships within
> input sequences."
>
> 
>
> Is this enough to meet Hofstadter's requirements of recursion? I do not
> have the expertise to say. But I do see recursion existing in a way no one
> ever seems to mention:
>
> The output of the LLM is fed back in, as input to the LLM that produced
> it. So all the high-level processing and operation of the network at the
> highest level, used to produce a few characters of output, then reaches
> back down to affect the lowest level of the input layers of the network.
>
> If you asked the network, where did that input that it sees come from, it
> would have no other choice but to refer back to itself, as "I". "I
> generated that text."
>
> Loops are needed to maintain and modify a persistent state or memory, to
> create a strange loop of self-reference, and to achieve Turing
> completeness.

Re: GPT-4 solving hard riddles

2023-03-20 Thread Jason Resch
On Mon, Mar 20, 2023, 9:51 AM Telmo Menezes  wrote:

>
>
> On Mon, Mar 20, 2023, at 14:28, Jason Resch wrote:
>
> The video John shared is worth watching. This is significant. It is now
> solving complex math problems which require a long sequence of steps.
>
>
> I agree that it is significant and extremely impressive. I never said the
> opposite. What baffles me is that John is now requiring religious reverence
> towards a scientific result, and criticizing me when I ask questions that are
> part of the same standard machine learning methodology that got us here.
>

I see, I appreciate that clarification.



> Over-fitting is less of an issue here because it's trivial to write a
> sentence that's never before been written by any human in history.
>
>
> That is not enough. A small variation on a standard IQ test is still the
> same IQ test for a super powerful pattern detector such as GPT-4.
>
> I have no doubt that GPT-4 can generalize in its domain. It was rigorously
> designed and tested for that by people who know what they are doing. My
> doubt is that you can give it an IQ test and claim OMG GPT-4 IQ > 140. This
> is just silly and it is junk science.
>

It's true that once one learns a way to solve problems it becomes easier to
reapply that method when you next encounter a related problem.

But isn't that partly what intelligence is? If a system has read the whole
Internet and seen every type of problem we know how to solve, and it can
generalize to know what method to use in any situation, that's an
incredible level of intelligence which until now, we haven't had in machine
form before.



> You can tweak the parameters of the problem to guarantee it's a problem it
> has never before seen, and it can still solve it.
>
>
> Some yes, some no. Almost one century of computer science still applies.
>
> You can choose to wait for the academic write ups to come out a few months
> down the line but by then things will have advanced another few levels from
> where we are today.
>
>
> I am not wanting to wait for anything, I am asking questions that can be
> addressed right now:
>
> - Are there IQ tests in the training data of GPT-4? Yes or no?
> - Can we conceive of human-level intelligence without recurrent
> connections or some form of ongoing recursivity / Turing completeness? Yes
> or no?
>


I've been thinking about this a lot.


My friend with access to GPT-4 asked it: "Does your neural network contain
such reflexive loops, or is it strictly feed forward?", below is its reply:



"As an AI language model, my underlying architecture is based on the
Transformer model, which is primarily feedforward in nature but includes
some elements of recurrence or reflexivity, specifically in the
self-attention mechanism.

The Transformer model consists of a multi-layer feedforward neural network
with self-attention mechanisms that allow the model to learn complex
relationships between input and output sequences. The self-attention
mechanism weighs the importance of different input elements relative to
each other, effectively capturing long-range dependencies and relationships
within the input data. This mechanism introduces a form of reflexivity or
recurrence, as it allows the model to consider the context of the entire
input sequence when generating outputs.

However, it is important to note that the self-attention mechanism in the
Transformer model is not equivalent to traditional recurrent neural
networks (RNNs), which maintain a hidden state that is updated at each time
step. The Transformer model processes input sequences in parallel rather
than sequentially, which makes it fundamentally different from RNNs.

In summary, while my neural network architecture is primarily feedforward,
it includes some elements of reflexivity in the form of self-attention
mechanisms that allow the model to capture complex relationships within
input sequences."



Is this enough to meet Hofstadter's requirements of recursion? I do not
have the expertise to say. But I do see recursion existing in a way no one
ever seems to mention:

The output of the LLM is fed back in, as input to the LLM that produced it.
So all the high-level processing and operation of the network at the
highest level, used to produce a few characters of output, then reaches
back down to affect the lowest level of the input layers of the network.

If you asked the network, where did that input that it sees come from, it
would have no other choice but to refer back to itself, as "I". "I
generated that text."
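
A minimal sketch of that feedback loop, with 'model' as a hypothetical
stand-in for any next-token predictor (a real system would call an LLM API at
that point):

def generate(model, prompt_tokens, n_tokens):
    # Each generated token is appended to the context and fed back into the
    # same network, so the model keeps re-reading its own earlier output.
    context = list(prompt_tokens)
    for _ in range(n_tokens):
        next_token = model(context)   # forward pass over everything so far
        context.append(next_token)
    return context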

Loops are needed to maintain and modify a persistent state or memory, to
create a strange loop of self-reference, and to achieve Turing
completeness. But a loop may not exist entirely in the "brain" of an
entity, it might offload part of the loop into the environment in which it
is operating. I think that is the case for things like thermostats, guided
missiles, AlphaGo, and perhaps even ourselves.

We observe our own ac

Re: GPT-4 solving hard riddles

2023-03-20 Thread Telmo Menezes


On Mon, Mar 20, 2023, at 14:28, Jason Resch wrote:
> The video John shared is worth watching. This is significant. It is now 
> solving complex math problems which require a long sequence of steps.

I agree that it is significant and extremely impressive. I never said the 
opposite. What baffles me is that John is now requiring religious reverence 
towards a scientific result, and criticizing me when I ask questions that are part 
of the same standard machine learning methodology that got us here.

> Over-fitting is less of an issue here because it's trivial to write a 
> sentence that's never before been written by any human in history.

That is not enough. A small variation on a standard IQ test is still the same 
IQ test for a super powerful pattern detector such as GPT-4.

I have no doubt that GPT-4 can generalize in its domain. It was rigorously 
designed and tested for that by people who know what they are doing. My doubt 
is that you can give it an IQ test and claim OMG GPT-4 IQ > 140. This is just 
silly and it is junk science.

> You can tweak the parameters of the problem to guarantee it's a problem it 
> has never before been seen, and it can still solve it.

Some yes, some no. Almost one century of computer science still applies.

> You can choose to wait for the academic write ups to come out a few months 
> down the line but by then things will have advanced another few levels from 
> where we are today.

I am not wanting to wait for anything, I am asking questions that can be 
addressed right now:

- Are there IQ tests in the training data of GPT-4? Yes or no?
- Can we conceive of human-level intelligence without recurrent connections or 
some form of ongoing recursivity / Turing completeness? Yes or no?

In any case, all of this discussion will become moot in a few weeks.

Telmo

> I think it's worth paying attention to the latest results, even if it means 
> having to watch some YouTube videos.
> 
> Jason 
> 
> 
> On Mon, Mar 20, 2023, 9:19 AM John Clark  wrote:
>> On Mon, Mar 20, 2023 at 7:00 AM Telmo Menezes  wrote:
>> 
>>> >*** I want to discuss scientific research and peer-reviewed academic 
>>> >articles, but you want me to get excited about YouTube clickbait instead. 
>>> >What happened to you, John?*
>> 
>> I'll tell you exactly what happened to me, last Tuesday happened to me. And 
>> by the way, refusing to look at something does not make it go away.  
>> 
>> GPT-4 solving hard riddles  
>> 
>>> *> You are SUPER EXCITED about ChatGPT but you do not give a shit about the 
>>> fundamentals of machine learning*
>> 
>> You are absolutely correct. When it comes to judging its intelligence I 
>> don't give a shit about how GPT4 works, *I CARE ABOUT WHAT GPT4 DOES* 
>> because behavior is the only way we have of judging the intelligence of our 
>> fellow human beings, and that is also the only way we have of judging the 
>> intelligence of a computer program. All I'm saying is that regardless of how 
>> something works, if it's behaving intelligently then it's intelligent. 
>> That's true for people and it's also true for computers, and I think it's 
>> bizarre that some people think that is a controversial statement.
>>> > *Human beings can form coherent memories and are capable of long-term 
>>> > goals, strategy and slow thinking -- the Turing complete kind.*
>> 
>> All computers are Turing Machines so obviously they are also Turing complete.
>> 
>>> *> I have even seen people now claim  that ChatGPT is good at chess. It is 
>>> incredibly good at chess given that it is a language model trained with 
>>> chess books*
>> 
>> Wow, that's a remarkably weak argument, computers have had the ability to 
>> beat any human being at chess for a quarter of a century!  It would be 
>> trivially easy for GPT4 to offload the problem to AlphaZero which can start 
>> with zero knowledge of chess and after an hour or two of thinking about it 
>> play the game at a superhuman level. Then for GPT-4, playing chess (or any 
>> board game) at a superhuman level would be a simple reflex, just as 
>> breathing is a simple reflex for us.
>>  
>>> > * Is it capable of navigating a min-max tree? Of course not, because it 
>>> > lacks recurrence. It cannot possibly win against older-generation AIs*
>> 
>> The discovery of transformer technology in 2017 was enormously important, 
>> but it would be silly to say that is the only technique that an AI is 
>> allowed to use.  
>> 
>>> *>you want to convince me that ChatGPT is the answer to everything.*
>> 
>> Don't be ridiculous!  
>> 
>>> >* **Ok, maybe you are right and I am crazy.*
>> 
>> Yeah maybe.  
>> 
>> John K Clark    See what's on my new list at Extropolis

Re: GPT-4 solving hard riddles

2023-03-20 Thread Jason Resch
The video John shared is worth watching. This is significant. It is now
solving complex math problems which require a long sequence of steps.

Over-fitting is less of an issue here because it's trivial to write a
sentence that's never before been written by any human in history.

You can tweak the parameters of the problem to guarantee it's a problem it
has never before seen, and it can still solve it.

You can choose to wait for the academic write ups to come out a few months
down the line but by then things will have advanced another few levels from
where we are today.

I think it's worth paying attention to the latest results, even if it means
having to watch some YouTube videos.

Jason


On Mon, Mar 20, 2023, 9:19 AM John Clark  wrote:

> On Mon, Mar 20, 2023 at 7:00 AM Telmo Menezes 
> wrote:
>
> >* I want to discuss scientific research and peer-reviewed academic
>> articles, but you want me to get excited about YouTube clickbait instead.
>> What happened to you, John?*
>>
>
> I'll tell you exactly what happened to me, last Tuesday happened to me.
> And by the way, refusing to look at something does not make it go away.
>
> GPT-4 solving hard riddles 
>
> *> You are SUPER EXCITED about ChatGPT but you do not give a shit about the
>> fundamentals of machine learning*
>>
>
> You are absolutely correct. When it comes to judging its intelligence I
> don't give a shit about how GPT4 works, *I CARE ABOUT WHAT GPT4 DOES*
> because behavior is the only way we have of judging the intelligence of our
> fellow human beings, and that is also the only way we have of judging the
> intelligence of a computer program. All I'm saying is that regardless of
> how something works, if it's behaving intelligently then it's intelligent.
> That's true for people and it's also true for computers, and I think it's
> bizarre that some people think that is a controversial statement.
>
> > *Human beings can form coherent memories and are capable of long-term
>> goals, strategy and slow thinking -- the Turing complete kind.*
>>
>
> All computers are Turing Machines so obviously they are also Turing
> complete.
>
> * > I have even seen people now claim  that ChatGPT is good at chess. It
>> is incredibly good at chess given that it is a language model trained with
>> chess books*
>>
>
> Wow, that's a remarkably weak argument, computers have had the ability to
> beat any human being at chess for a quarter of a century!  It would be
> trivially easy for GPT4 to offload the problem to AlphaZero which can start
> with zero knowledge of chess and after an hour or two of thinking about it
> play the game at a superhuman level. Then for GPT-4, playing chess (or any
> board game) at a superhuman level would be a simple reflex, just as
> breathing is a simple reflex for us.
>
>
>> > * Is it capable of navigating a min-max tree? Of course not, because
>> it lacks recurrence. It cannot possibly win against older-generation AIs*
>>
>
> The discovery of transformer technology in 2017 was enormously important,
> but it would be silly to say that is the only technique that an AI is
> allowed to use.
>
> *>you want to convince me that ChatGPT is the answer to everything.*
>>
>
> Don't be ridiculous!
>
> >
>> *Ok, maybe you are right and I am crazy.*
>>
>
> Yeah maybe.
>
> John K Clark    See what's on my new list at Extropolis



Re: GPT-4 solving hard riddles

2023-03-20 Thread John Clark
On Mon, Mar 20, 2023 at 7:00 AM Telmo Menezes 
wrote:

>* I want to discuss scientific research and peer-reviewed academic
> articles, but you want me to get excited about YouTube clickbait instead.
> What happened to you, John?*
>

I'll tell you exactly what happened to me, last Tuesday happened to me. And
by the way, refusing to look at something does not make it go away.

GPT-4 solving hard riddles 

*> You are SUPER EXCITED about ChatGPT but you do not give a shit about the
> fundamentals of machine learning*
>

You are absolutely correct. When it comes to judging its intelligence I
don't give a shit about how GPT4 works, *I CARE ABOUT WHAT GPT4 DOES*
because behavior is the only way we have of judging the intelligence of our
fellow human beings, and that is also the only way we have of judging the
intelligence of a computer program. All I'm saying is that regardless of
how something works, if it's behaving intelligently then it's intelligent.
That's true for people and it's also true for computers, and I think it's
bizarre that some people think that is a controversial statement.

> *Human beings can form coherent memories and are capable of long-term
> goals, strategy and slow thinking -- the Turing complete kind.*
>

All computers are Turing Machines so obviously they are also Turing
complete.

* > I have even seen people now claim  that ChatGPT is good at chess. It is
> incredibly good at chess given that it is a language model trained with
> chess books*
>

Wow, that's a remarkably weak argument, computers have had the ability to
beat any human being at chess for a quarter of a century!  It would be
trivially easy for GPT4 to offload the problem to AlphaZero which can start
with zero knowledge of chess and after an hour or two of thinking about it
play the game at a superhuman level. Then for GPT-4, playing chess (or any
board game) at a superhuman level would be a simple reflex, just as
breathing is a simple reflex for us.
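
A hedged sketch of that kind of offloading; both back-ends below are
hypothetical stubs (a real system would call a language-model API and a chess
engine, respectively):

def llm_answer(query):
    return "[language-model answer to: " + query + "]"  # stand-in for GPT-4

def engine_best_move(position):
    return "[engine move for: " + position + "]"  # stand-in for a game engine

def answer(query, position=None):
    # Route board-game positions to the specialist engine; the language
    # model keeps everything else.
    if position is not None:
        return engine_best_move(position)
    return llm_answer(query)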


> > * Is it capable of navigating a min-max tree? Of course not, because it
> lacks recurrence. It cannot possibly win against older-generation AIs*
>

The discovery of transformer technology in 2017 was enormously important,
but it would be silly to say that is the only technique that an AI is
allowed to use.

*>you want to convince me that ChatGPT is the answer to everything.*
>

Don't be ridiculous!

>
> *Ok, maybe you are right and I am crazy.*
>

Yeah maybe.

   John K Clark    See what's on my new list at Extropolis


>



Re: GPT-4 solving hard riddles

2023-03-20 Thread Telmo Menezes
On Mon, Mar 20, 2023, at 10:44, John Clark wrote:
> On Mon, Mar 20, 2023 at 4:25 AM Telmo Menezes  wrote:
> 
>> >* Are you worried that some of us are not being sufficiently obsequious?*
> 
> No, I'm not worried about that because fortunately GPT-4 has not been 
> behaving like the biblical Yahweh, I have seen no evidence that GPT-4 
> demands, or even would enjoy, constant flattery by humans. All I want is for 
> you to look at this video and then do the rational thing and retract your 
> claim that GPT-4 is "*not even close*" to human intelligence.
> 
> GPT-4 solving hard riddles  

I gave you two meaningful topics of discussion (I will reiterate below) that I 
believe are actually interesting. I want to discuss scientific research and 
peer-reviewed academic articles, but you want me to get excited about YouTube 
clickbait instead. What happened to you, John?

> 
>> *> I don't understand your preoccupation, John.*
> 
> You don't?!  Can you think of anything more important to be preoccupied with? 
>  Can you think of anything that has happened in the world in your lifetime 
> that was more significant than passing the Turing Test with flying colors? I 
> can't.

I will be worried when these models become capable of self-modification and 
self-improvement, but self-modification and self-improvement require recurrent 
connections or some such computational equivalent. ChatGPT does not have that, 
and it is not trivial to add because of the vanishing gradient problem. But 
vanishing gradients are boring and the Turing Test is exciting, even though 
the former is an actual scientific topic and the latter is a pop culture 
topic.
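
A toy illustration of the problem: backpropagating through a recurrence
multiplies the error signal by the effective recurrent gain at every time
step, so with gains below 1 (the value here is an arbitrary assumption) the
gradient shrinks geometrically and the earliest steps stop learning:

w = 0.9          # effective recurrent gain per time step, assumed < 1
gradient = 1.0
for step in range(1, 101):
    gradient *= w                 # one step of backpropagation through time
    if step in (10, 50, 100):
        print("gradient after", step, "steps:", gradient)
# after 100 steps the gradient is about 2.7e-05: effectively no signal left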

> 
>> *> If GPT-4 is indeed close to human intelligence, this will become 
>> undeniable in the next few weeks.*
> 
> It's been undeniable to all rational observers since last Tuesday, but you 
> denied it. 


Meanwhile, back in reality:

(1) Do you understand the importance of testing machine learning algorithms in 
out-of-corpus data? Do you understand the difference between generalization and 
overfitting? This is the bread and butter of machine learning. This is how 
ChatGPT was built. You are SUPER EXCITED about ChatGPT but you do not give a 
shit about the fundamentals of machine learning? You think they no longer 
apply, while at the same time cheerleading for its achievements? It's truly 
bizarre. I approached this topic but you refuse to engage. I actually do 
peer-review of ML papers and there is no way I (or anyone I work with) would 
take in-corpus tests seriously. They often look absurdly good. Will you take my 
trading algorithm offer?

(2) Human beings can form coherent memories and are capable of long-term goals, 
strategy and slow thinking -- the Turing-complete kind. I have even seen people 
now claim that ChatGPT is good at chess. It is incredibly good at chess given 
that it is a language model trained with chess books amongst many other things, 
so it can easily defeat naive players with chess recipes. Is it capable of 
navigating a min-max tree? Of course not, because it lacks recurrence. It 
cannot possibly win against older-generation AIs that do navigate min-max trees 
and do defeat grandmasters. So how do we combine the two types of AI? It looks 
like you don't care about any of this; instead you want to convince me that 
ChatGPT is the answer to everything. Ok, maybe you are right and I am crazy.
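
For reference, the min-max tree navigation in question fits in a few recursive
lines; the game-specific pieces (moves, apply_move, evaluate) are deliberately
left abstract:

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    # Return the min-max value of 'state' searched to 'depth' plies.
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    values = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal]
    return max(values) if maximizing else min(values)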

Telmo


>   John K Clark    See what's on my new list at Extropolis
> 



Re: GPT-4 solving hard riddles

2023-03-20 Thread John Clark
On Mon, Mar 20, 2023 at 4:25 AM Telmo Menezes 
wrote:

>* Are you worried that some of us are not being sufficiently obsequious?*
>

No, I'm not worried about that because fortunately GPT-4 has not been
behaving like the biblical Yahweh, I have seen no evidence that GPT-4
demands, or even would enjoy, constant flattery by humans. All I want is
for you to look at this video and then do the rational thing and retract
your claim that GPT-4 is "*not even close*" to human intelligence.

GPT-4 solving hard riddles 

*> I don't understand your preoccupation, John.*
>

You don't?!  Can you think of anything more important to be preoccupied
with?  Can you think of anything that has happened in the world in your
lifetime that was more significant than passing the Turing Test with flying
colors? I can't.

*> If GPT-4 is indeed close to human intelligence, this will become
> undeniable in the next few weeks.*
>

It's been undeniable to all rational observers since last Tuesday, but you
denied it.

  John K Clark    See what's on my new list at Extropolis



Re: GPT-4 solving hard riddles

2023-03-20 Thread Telmo Menezes
Does GPT-4 demand adoration? Are you worried that some of us are not being 
sufficiently obsequious?

I don't understand your preoccupation, John. If GPT-4 is indeed close to human 
intelligence, this will become undeniable in the next few weeks. Society will 
be completely upended. There will be no need or room for debate.

Telmo

On Sun, Mar 19, 2023, at 15:45, John Clark wrote:
> I challenge anyone to look at this video and then try to make the case that 
> GPT-4 is "not even close" to achieving human intelligence as some have 
> claimed. Not only was it able to solve these riddles using common sense, it 
> was also able to explain the logical process used to find the answer.
> 
> GPT-4 solving hard riddles 
> 
> John K Clark    See what's on my new list at Extropolis
> 
> 
> 
