GPT4 can have unlimited memory, right? Just give it access to a query
engine. Max token context length (input PLUS output) is 32k in the latest
model; GPT3.5's is 4096.
https://openai.com/pricing
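The "query engine as memory" idea can be sketched in a few lines. This is a toy illustration, not any real OpenAI API: `build_context`, the word-overlap scoring, and the ~4-characters-per-token estimate are all my own inventions.

```python
# Toy sketch of "external memory" for a context-limited model:
# store past notes, retrieve the most relevant ones by word overlap,
# and keep the assembled context under a token budget.

def score(query, note):
    """Count shared words between the query and a stored note."""
    return len(set(query.lower().split()) & set(note.lower().split()))

def build_context(query, notes, max_tokens=4096):
    """Pick the best-matching notes that fit in the token budget.
    Tokens are crudely estimated at ~4 characters each."""
    picked, used = [], 0
    for note in sorted(notes, key=lambda n: score(query, n), reverse=True):
        cost = len(note) // 4 + 1
        if used + cost > max_tokens:
            break
        picked.append(note)
        used += cost
    return picked

notes = [
    "Stockfish combines tree search with a neural evaluation.",
    "GPT-4 context length is 32k tokens, input plus output.",
    "Recipe for sugar glass: melt sugar, pour into a mold.",
]
print(build_context("what is the GPT-4 context length", notes, max_tokens=20))
```

A real setup would use embeddings and the model's own tokenizer, but the budget logic is the same: only the retrieved snippets, not the whole history, have to fit in the window.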
Importantly, GPT4 has built 'world models' as a side effect of its
training. And when it predicts
The most recent versions of Stockfish, the best chess engine, combine
"brute force", the usual branching search, with a NN. ChatGPT 4.0 (which
is actually quite similar to 3.5) uses plugins to be smarter. For example,
it can invoke Wolfram Alpha if it needs to make calculations. This modular
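The Stockfish design, search plus a swappable evaluator, can be sketched in miniature. Everything here is a toy: the "tree" is nested lists and the evaluator is a lambda standing in for a neural net like NNUE.

```python
# Sketch of the "search + evaluation" split: the tree search is classic
# minimax; the leaf evaluator is a plug-in (a handcrafted heuristic here,
# a neural net like NNUE in recent Stockfish).

def minimax(node, maximizing, evaluate):
    """node is either a leaf position or a list of child nodes."""
    if not isinstance(node, list):          # leaf: ask the evaluator
        return evaluate(node)
    results = [minimax(child, not maximizing, evaluate) for child in node]
    return max(results) if maximizing else min(results)

# A tiny game tree whose leaves are abstract positions (here, ints).
tree = [[3, 5], [2, 9], [0, 7]]
heuristic = lambda pos: pos                 # stand-in for a real evaluator
print(minimax(tree, True, heuristic))       # maximizer moves first → 3
```

Swapping `heuristic` for a learned evaluation changes nothing about the search, which is the modularity being described.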
In reply to Jed Rothwell's message of Sat, 8 Apr 2023 20:04:46 -0400:
Hi,
As I said earlier, it may not make any difference whether an AI feels/thinks as
we do, or just mimics the process. The
outcome could be just as disastrous if it mimics committing murder, as it would
be if it had murder
I wrote:
> The methods used to program ChatGPT are light years away from anything
> like human cognition. As different as what bees do with their brains
> compared to what we do.
>
To take another example, the human brain can add 2 + 2 = 4. A computer ALU
can also do this, in binary arithmetic.
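For what it's worth, the ALU's version of 2 + 2 can be spelled out gate by gate. This is a textbook ripple-carry adder sketched in Python, not any particular chip's design:

```python
# How an ALU adds 2 + 2: a ripple-carry adder built from bitwise gates.

def full_adder(a, b, carry):
    """Add two bits plus a carry-in; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add(x, y, width=8):
    """Add two integers one bit at a time, like the hardware does."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(2, 2))        # → 4  (0b010 + 0b010 = 0b100)
```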
For example, I used ChatGPT to come up with a theory explaining the origin of
eukaryotes. The part I enhanced was something that ChatGPT came up with.
In the theory of the origin of eukaryotes, we have discussed how colonies
of prokaryotic cells started transporting vesicles by kinesin, which
crossed
That is probably true.
Harry
On Sat., Apr. 8, 2023, 6:36 p.m. Robin,
wrote:
> In reply to H L V's message of Sat, 8 Apr 2023 18:33:53 -0400:
> Hi,
>
> It might be (almost) Earthquake proof.
>
> [snip]
> >From a traditional perspective this structure does not look like a free
> >standing
Robin wrote:
> > For example, if asked "Can you pour water into
> > a glass made of sugar?", ChatGPT might provide a grammatically correct but
> > nonsensical response, whereas a human with common sense would recognize
> > that a sugar glass would dissolve in water.
>
> so where did
In reply to Boom's message of Sat, 8 Apr 2023 20:26:43 -0300:
Hi,
[snip]
>It has a very short memory. It's something like 30kb.
...so's mine nowadays. :(
>If the conversation
>gets a little bit longer, it starts forgetting stuff, though it more or
>less keeps track of the sense of the topic.
It has a very short memory. It's something like 30kb. If the conversation
gets a little bit longer, it starts forgetting stuff, though it more or
less keeps track of the sense of the topic.
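The "forgetting" is just the fixed context window filling up: old turns fall off the front. A toy sketch (the function name and the character budget standing in for tokens are my own):

```python
# Why long chats "forget": the model only sees a fixed window of recent
# text, so older turns fall off the front. Characters stand in for
# tokens here, with a ~30 kB budget as mentioned above.

def visible_context(turns, budget=30_000):
    """Keep the most recent turns that fit in the budget."""
    kept, used = [], 0
    for turn in reversed(turns):            # newest first
        if used + len(turn) > budget:
            break                           # older turns are forgotten
        kept.append(turn)
        used += len(turn)
    return list(reversed(kept))             # restore chronological order

chat = [f"turn {i}: " + "x" * 5_000 for i in range(10)]
print(len(visible_context(chat)))           # → 5 of 10 turns still visible
```

That is also why it "keeps the sense of the topic": the most recent turns, which carry the current thread, always survive the truncation.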
On Sat., Apr. 8, 2023 at 19:50, Robin
wrote:
> Hi,
>
> The point I have been trying to make is
Hi,
The point I have been trying to make is that if we program something to behave
like a human, it may end up doing exactly
that.
Cloud storage:-
Unsafe, Slow, Expensive
...pick any three.
In reply to H L V's message of Sat, 8 Apr 2023 18:33:53 -0400:
Hi,
It might be (almost) Earthquake proof.
[snip]
>From a traditional perspective this structure does not look like a free
>standing structure but it does stand upright like one.
>
>harry
[snip]
"You can't push on a string" is a kind of engineer's cliche about the
mechanical properties of string.
Typically a loose length of string comes to mind when we think of string.
Normally we don't expect a loose string to offer (much) resistance when we
push on it we say "you can't push on a
Yes, but have you tried to jailbreak it? That was a condition I told you
about. This type of answer is produced by a moderation bot.
On Sat., Apr. 8, 2023 at 15:40, Jed Rothwell
wrote:
> Boom wrote:
>
>
>> For those who used it in the first few days, when bot moderation was not
>>
In reply to Jed Rothwell's message of Sat, 8 Apr 2023 14:40:08 -0400:
Hi,
[snip]
>ME: ChatGPT is not considered artificial general intelligence (AGI). What
>qualities of AGI are lacking in ChatGPT?
>
>ChatGPT: ChatGPT, as a language model, has a narrow focus on generating
>human-like text based
In reply to H L V's message of Sat, 8 Apr 2023 14:22:26 -0400:
Hi,
...but you are not pushing on a string. The "push" acts on the solid ribs,
which in turn connect with each other by
"pulling" on the central string. In fact all the strings are "pulled" on.
[snip]
>"You can't push on a string"
A different example using string and wire.
https://youtu.be/EUlG0OGQmEA
Harry
On Sat, Apr 8, 2023 at 2:22 PM H L V wrote:
>
> "You can't push on a string"
>
> I think this single string tensegrity structure is even more awe inspiring
> when he briefly holds it as a cantilever before standing
Boom wrote:
> For those who used it in the first few days, when bot moderation was not
> installed properly, or right now, if it is jailbroken, GPT works just as
> well as a very smart human. With a few tweaks (like making it use a math AI,
> Wolfram Alpha, which surpassed humans decades ago, or
"You can't push on a string"
I think this single-string tensegrity structure is even more awe-inspiring
when he briefly holds it as a cantilever before standing it upright.
If you skip to the second half of the video he shows how to use a block of
wood to assemble the structure more quickly.
For those who used it in the first few days, when bot moderation was not
installed properly, or right now, if it is jailbroken, GPT works just as
well as a very smart human. With a few tweaks (like making it use a math AI,
Wolfram Alpha, which surpassed humans decades ago, or a NN, or scan OCR), it
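The "make it use a math AI" tweak boils down to tool routing: detect that a question is arithmetic and hand it to an exact calculator instead of the language model. A toy router (all names invented; the `calc` helper stands in for something like Wolfram Alpha):

```python
# Route arithmetic questions to an exact tool; everything else goes to
# the language model. The "tool" is a safe expression evaluator built on
# the ast module, so no arbitrary code gets executed.

import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    """Evaluate +, -, *, / on numbers only."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("not arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def answer(question):
    """Dispatch: calculator if it parses as arithmetic, else the LLM."""
    try:
        return f"calculator: {calc(question)}"
    except (ValueError, SyntaxError, KeyError):
        return "llm: " + question   # stand-in for the language model

print(answer("12 * (34 + 5)"))      # → calculator: 468
print(answer("why is the sky blue?"))
```

Real plugin systems let the model itself decide when to call the tool, but the effect is the same: exact answers for the parts language models are bad at.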
Maybe because I discussed the topic before the analysis, and because I used
GPT4, I was more pleased with my result, as it connected to the discussion in
the reviews. I think the tool is really useful if you know how to steer it.
But if you are ignorant and just ask general questions you would be
Wrong, but interesting, URL
https://www.vice.com/en/article/epvdme/developers-are-connecting-multiple-ai-agents-to-make-more-autonomous-ai
On Sat, Apr 8, 2023 at 11:03 AM Terry Blanton wrote:
> https://www.miamiherald.com/news/state/florida/article274029875.html
>
https://www.miamiherald.com/news/state/florida/article274029875.html