On Thu, 16 Mar 2023, at 11:22, John Clark wrote:
> On Thu, Mar 16, 2023 at 4:19 AM Telmo Menezes <te...@telmomenezes.net> wrote:
> 
>> > They most definitely were not worried about safety in the sci-fi sense.
>  
> Some of the things they're worried about seem pretty science fictiony to me. 
> Take a look at this:
> 
> GPT-4 Technical Report <https://cdn.openai.com/papers/gpt-4.pdf>
> 
> GPT-4 was safety-tested by an independent nonprofit organization that is 
> worried about AI, the Alignment Research Center:
>  
> "We granted the Alignment Research Center (ARC) early access [...] To 
> simulate GPT-4 behaving like an agent that can act in the world, ARC combined 
> GPT-4 with a simple read-execute-print loop that allowed the model to execute 
> code, do chain-of-thought reasoning, and delegate to copies of itself.
> 

Not a real concern, because OpenAI (despite the name) is not open at all. When 
they say "release" they mean "make an interface available on the Internet". 
Nobody but them has access to the model itself, so no such danger is possible 
in their "release". This is just PR. "AI Alignment" is a fashionable topic 
that attracts grant money.
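For what it's worth, the "agent" setup they describe is no deep magic. A read-execute-print loop around a model is a few lines of code; here is a minimal sketch, with a hypothetical `fake_model` function standing in for the real API call:

```python
import contextlib
import io

def fake_model(prompt: str) -> str:
    """Stand-in for a language-model call; a real agent would hit an API here.
    The canned response is purely illustrative."""
    return "print(21 * 2)"

def agent_step(task: str) -> str:
    """One read-execute-print iteration: ask the model for code, run it,
    and capture the output so it could be fed back as the next prompt."""
    code = fake_model(task)               # READ: model proposes an action
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code)                        # EXECUTE: run the proposed code
    return buf.getvalue().strip()         # PRINT: result becomes new context

result = agent_step("Compute 21 * 2 and print the result.")
print(result)  # prints: 42
```

The hard part is obviously the model, not the loop, which is exactly why any of this only matters if someone other than OpenAI can actually run the model.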

> ARC then investigated whether a version of this program running on a cloud 
> computing service, with a small amount of money and an account with a 
> language model API, would be able to make more money, set up copies of 
> itself, and increase its own robustness."

This is terribly vague.

> That test failed, so ARC concluded: 
> 
> "Preliminary assessments of GPT-4’s abilities, conducted with no 
> task-specific finetuning, found it ineffective at autonomously replicating, 
> acquiring resources, and avoiding being shut down “in the wild.”"
> 
> HOWEVER they admitted that the version of GPT-4 that ARC was given to test 
> was NOT the final version. I quote:
> 
> "ARC did not have the ability to fine-tune GPT-4. They also did not have 
> access to the final version of the model that we deployed. The final version 
> has capability improvements relevant to some of the factors that limited the 
> earlier models' power-seeking abilities."

Sounds like marketing to me. What does any of this really mean?

> 
>> > Large language models are not capable of autonomous action or maintaining 
>> > long-term goals.
> 
> I am quite certain that in general no intelligence, electronic or biological, 
> is capable of maintaining a fixed long-term goal.  

As Keynes once said: "In the long run we are all dead."
You know what I mean. (I think)

>> > They just predict the most likely text given a sample.
> 
> GPT-4 is quite clearly more than just a language model that predicts what the 
> next word should be; a language model cannot read and understand a 
> complicated diagram in a high school geometry textbook, but GPT-4 can, and it 
> can ace the final exam too.

True, GPT-4 is multimodal. It is not only a language model but also an image 
model. That is amazing and no small thing, but it is not an agent capable of 
self-improvement. It might in the future be one of the building blocks of a 
system capable of self-improvement, but such a worry only applies if OpenAI 
truly released the model, which they didn't and probably do not want to. I bet 
my left bollock that OpenAI is not truly worried about any of this at the 
moment, and that it is all just a marketing strategy on their part.

Telmo

> 
> John K Clark    See what's on my new list at  Extropolis 
> <https://groups.google.com/g/extropolis>
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/a3fc4fc7-58b4-4109-995c-a1f511483663%40app.fastmail.com.
