GPT-4 can have unlimited memory, right?  Just give it access to a query
engine.  Max token context length (input plus output) is 32k in the latest
model; GPT-3.5's is 4,096.

https://openai.com/pricing

Importantly, GPT-4 has built 'world models' as a side effect of its
training.  When it predicts the next token, that token is compatible not
merely with the current context, but with the entirety of its network and
the internal world models it has built.  I think the token embeddings
themselves are roughly 12K-dimensional vectors, and the networks have
billions and billions of parameters.  You can do a lot with that, and
they have.

https://thegradient.pub/othello/
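
Next-token prediction itself is trivial to sketch; the interesting claim in the Othello link is that doing it *well* forces a model to build internal state. For contrast, here is a toy bigram predictor that conditions on only the previous word. Real LLMs condition on the entire context through billions of parameters; the corpus and names here are purely illustrative.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which token follows which: followers["the"] -> Counter({"cat": 2, "mat": 1})
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent token seen after `token` in the corpus."""
    return followers[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

A model like this can only parrot surface statistics; the Othello result suggests transformers trained the same way (predict the next token) end up representing the board itself, which is the 'world model' point above.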

I think there's a lot of 'human consciousness is the center of the
universe' vanity going around, which is why a lot of people have a hard
time accepting that LLMs are enough.

On Sat, Apr 8, 2023 at 7:38 PM Boom <danieldi...@gmail.com> wrote:

> The most recent versions of Stockfish, the best chess engine, combine
> "brute force" (the usual branching search) with a neural network. ChatGPT
> 4.0 (which is actually quite similar to 3.5) uses plugins to be smarter.
> For example, it can invoke Wolfram Alpha when it needs to do
> calculations. This modular approach is quickly becoming more common.
>
>
> Em sáb., 8 de abr. de 2023 às 21:05, Jed Rothwell <jedrothw...@gmail.com>
> escreveu:
>
>> I wrote:
>>
>>
>>> The methods used to program ChatGPT are light years away from anything
>>> like human cognition. As different as what bees do with their brains
>>> compared to what we do.
>>>
>>
>> To take another example, the human brain can add 2 + 2 = 4. A computer
>> ALU can also do this, in binary arithmetic. The brain and the ALU get the
>> same answer, but the methods are COMPLETELY different. Some people claim
>> that ChatGPT is somewhat intelligent. Artificially intelligent. For the
>> sake of argument, let us say this is a form of intelligence. In that case,
>> it is an alien form as different from human intelligence as an ALU. A bee
>> brain is probably closer to ours than ChatGPT. It may be that a future AGI,
>> even a sentient one, has totally different mechanisms than the human brain.
>> As alien as an ALU. In that case, I do not think it will be possible for
>> the AGI to actually emulate a human, although it might be able to imitate
>> one, the way ChatGPT does. I doubt it will ever be able to feel what it is
>> like to be a human. We humans cannot imagine what it feels like to be a
>> bee, or even a more intelligent creature such as a bat, because bats have
>> such a different way of living, and sensing (echolocation). We do know what
>> it is like being a chimpanzee, because we share so much DNA and we have
>> many behaviors in common, such as anger, politics, and grieving over dead
>> children.
>>
>>
>
> --
> Daniel Rocha - RJ
> danieldi...@gmail.com
>
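
The ALU point in the quoted message can be made concrete: a chip adds 2 + 2 with per-bit XOR/AND gates chained into a ripple-carry adder, a mechanism with nothing neuron-like about it. A minimal sketch of that circuit in software, purely for illustration:

```python
def ripple_add(a: int, b: int, width: int = 4) -> int:
    """Add two integers bit by bit, the way a ripple-carry adder circuit does."""
    result, carry = 0, 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry                             # sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))   # carry out
        result |= s << i
    return result

print(ripple_add(2, 2))  # 0b0010 + 0b0010 = 0b0100, i.e. 4
```

Same answer a brain gets, via carries propagating through gates rather than anything resembling cognition, which is exactly the contrast being drawn.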
