On 6/2/23 17:05, [email protected] wrote:
> It doesn't access the internet. Everything it knows is from training on an 
> enormous amount of data.
It's not clear to me whether ChatGPT's "training" refers to its initial 
linguistic analysis or to the substance of its responses, but I suspect the 
former.  I find it hard to imagine it has been trained from a blank slate 
to even guess at an answer to Bernard's test query about the Bravais pendulum.

The Wikipedia article has a few possibly relevant quotes:

    OpenAI CEO Sam Altman wrote that advancing software could pose "(for 
example) a huge cybersecurity risk" and also continued to predict "we could get 
to real AGI (artificial general intelligence) in the next decade, so we have to 
take the risk of that extremely seriously".  Altman argued that, while ChatGPT 
is "obviously not close to AGI", one should "trust the exponential.  Flat 
looking backwards, vertical looking forwards."[10]

and

    Writing in Inside Higher Ed, professor Steven Mintz [Univ. of Texas at 
Austin] wrote:
    "I'm well aware of ChatGPT's limitations.  That it's unhelpful on topics 
with fewer than 10,000 citations.  That factual references are sometimes false. 
That its ability to cite sources accurately is very limited.  That the 
strength of its responses diminishes rapidly after only a couple of paragraphs. 
That ChatGPT lacks ethics and can't currently rank sites for reliability, 
quality, or trustworthiness."

That sounds to me as though it's doing some sort of search, potentially on each 
query.
Cheers, D.