LLMs do not have intrinsic short-term or modifiable long-term memory. Both require supplemental systems: reprompting of recent history, expensive offline fine-tuning, or even more expensive retraining. I think it's fair to say there is no AGI until those are designed in, particularly the ability to actually
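To make the "reprompting of recent history" point concrete, here is a minimal sketch of how chat systems fake short-term memory: the model itself is stateless, so each call just re-sends a sliding window of recent turns as part of the prompt. The function name and the turn format are illustrative assumptions, not any real API.

```python
# Sketch of "short-term memory" by reprompting: nothing is stored in the
# model; we simply prepend the last few turns to every new request.
def build_prompt(history, user_msg, max_turns=6):
    """history is a list of (role, text) pairs; keep only the newest turns."""
    recent = history[-max_turns:]
    lines = [f"{role}: {text}" for role, text in recent]
    lines.append(f"user: {user_msg}")  # the new message goes last
    return "\n".join(lines)

history = [("user", "Hi"), ("assistant", "Hello!")]
prompt = build_prompt(history, "What did I just say?")
```

Everything outside the window is simply forgotten, which is exactly the limitation described above: there is no intrinsic memory, only what gets re-fed each time.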
Indeed, it can. It comes up with fake information. But now it is heavily
moderated to not allow that.
On Mon, Apr 10, 2023 at 16:33, H L V wrote:
> Can it dream?
> Harry
>
> On Mon, Apr 10, 2023 at 11:49 AM Alain Sepeda
> wrote:
>
>> There are works to allow LLM to discuss in order
I may have posted this here before . . . Here is Stephen Wolfram writing
about the new Wolfram plugin for ChatGPT, with examples of how the plugin
enhances ChatGPT's capabilities:
https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
In reply to Alain Sepeda's message of Mon, 10 Apr 2023 17:48:38 +0200:
Hi,
[snip]
>The real difference is that today, AI are not the fruit of a Darwinian
>evolution, with struggle to survive, dominate, eat or be eaten, so it's
>less frightening than people or animals.
The way a neural network
GPT is a tool that has been used in computational linguistics for more than 10 years.
It was just a matter of time until some brainless nerds would use it for
AI...
GPT just analyzes and classifies the texts you give it. So
it's not AI, it's the condensed shit some people want to throw at you.
Can it dream?
Harry
On Mon, Apr 10, 2023 at 11:49 AM Alain Sepeda
wrote:
> There are works to allow LLM to discuss in order to have reflection...
> I've seen reference to an architecture where two GPT instances talk to
> each other, with different roles, one as a searcher, the other as a
>
In reply to Jed Rothwell's message of Mon, 10 Apr 2023 09:33:48 -0400:
Hi,
[snip]
>I hope that an advanced AGI *will* have a concept of the real world, and it
>will know the difference. I do not think that the word "care" applies here,
>but if we tell it not to use a machine gun in the real
I wrote:
> Food is contaminated despite our best efforts to prevent that.
> Contamination is a complex process that we do not fully understand or
> control, although of course we know a lot about it. It seems to me that as
> AI becomes more capable it may become easier to understand, and more
>
What Is ChatGPT Doing ... and Why Does It Work? https://a.co/d/glEBRxd
*The first thing to explain is that what ChatGPT is always fundamentally
trying to do is to produce a “reasonable continuation” of whatever text
it’s got so far, where by “reasonable” we mean “what one might expect
someone to write after seeing what people have written on billions of
webpages,
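Wolfram's "reasonable continuation" idea can be illustrated with a toy bigram model. This is a drastic simplification of what a transformer does (my own illustration, not Wolfram's), but the principle is the same: count what tends to follow each word, then sample a statistically plausible next word.

```python
# Toy "reasonable continuation": learn next-word frequencies from a corpus,
# then extend a prompt by sampling proportionally to those frequencies.
import random
from collections import Counter, defaultdict

def train_bigram(text):
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1  # count how often b follows a
    return model

def continuation(model, start, length=5, seed=0):
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    out = [start]
    for _ in range(length):
        counts = model.get(out[-1])
        if not counts:
            break  # dead end: nothing ever followed this word
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran"
model = train_bigram(corpus)
print(continuation(model, "the", length=4))
```

A real LLM conditions on thousands of preceding tokens rather than one word, but the output is still, in Wolfram's phrase, what one might expect to come next given the training text.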
There is work on letting LLMs discuss with each other in order to achieve reflection...
I've seen a reference to an architecture where two GPT instances talk to each
other, with different roles: one as a searcher, the other as a critic...
Look at this article.
LLMs may just be the building blocks of something.
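The searcher/critic loop described above can be sketched as follows. The `llm` function here is a canned stub standing in for real model calls (I don't know which system the referenced architecture used), so only the control flow is meaningful: one instance proposes, the other critiques, and the loop repeats until the critic accepts or a round limit is hit.

```python
# Hypothetical two-instance reflection loop: searcher proposes, critic judges.
def llm(role, prompt):
    # Stub in place of a real LLM call; returns canned text for illustration.
    if role == "searcher":
        return f"Proposed answer for: {prompt}"
    return "ACCEPT" if "Proposed" in prompt else "REVISE: add sources"

def reflect(question, max_rounds=3):
    draft = llm("searcher", question)
    for _ in range(max_rounds):
        verdict = llm("critic", draft)
        if verdict.startswith("ACCEPT"):
            return draft  # critic is satisfied
        # feed the critique back to the searcher for another attempt
        draft = llm("searcher", f"{question} | critique: {verdict}")
    return draft  # give up after max_rounds and return the latest draft

print(reflect("What limits LLM memory?"))
```

The design point is that neither instance needs new capabilities; the reflection comes purely from the conversation structure wrapped around them.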
Robin wrote:
> As I said earlier, it may not make any difference whether an AI
> feels/thinks as we do, or just mimics the process.
That is certainly true.
> As you pointed out, the AI has no concept of the real world, so it's not
> going to care whether it's shooting people up
> in a video