Re: GPT4: 9 Revelations (not covered elsewhere)

2023-03-16 Thread spudboy100 via Everything List
"Afraid to release it." Maybe they have the Microsoft staff all neuro-chipped 
and under its orders? This is the voice of Colossus!


I received this earlier in my email. I wonder if you attended?
=
| We’ve created GPT-4, our most capable model. We are starting to roll it out 
to API users today.
Please join us today, March 14th, at 1 pm PDT for a live demo of GPT-4. |


| About GPT-4
GPT-4 can solve difficult problems with greater accuracy, thanks 
to its broader general knowledge and advanced reasoning capabilities.

You can learn more through:   
   - Overview page of GPT-4 and what early customers have built on top of the 
model.
   - Blog post with details on the model’s capabilities and limitations, 
including eval results.
 |


| Availability   
   - API Waitlist: Please sign up for our waitlist to get rate-limited access 
to the GPT-4 API – which uses the same ChatCompletions API as gpt-3.5-turbo. 
We’ll start inviting some developers today, and scale up availability and rate 
limits gradually to balance capacity with demand.
   - Priority Access: Developers can get prioritized API access to GPT-4 for 
contributing model evaluations to OpenAI Evals that get merged, which will help 
us improve the model for everyone.
   - ChatGPT Plus: ChatGPT Plus subscribers will get GPT-4 access on 
chat.openai.com with a dynamically adjusted usage cap. We expect to be severely 
capacity constrained, so the usage cap will depend on demand and system 
performance. API access will still be through the waitlist.
 |


| API Pricing
gpt-4 with an 8K context window (about 13 pages of text) will cost 
$0.03 per 1K prompt tokens, and $0.06 per 1K completion tokens.

gpt-4-32k with a 32K context window (about 52 pages of text) will cost $0.06 
per 1K prompt tokens, and $0.12 per 1K completion tokens.
 |
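The pricing arithmetic above is easy to script. A minimal sketch, assuming the launch prices quoted in the announcement (the `PRICING` table and `request_cost` helper are illustrative names, not anything from OpenAI's documentation):

```python
# Published GPT-4 launch pricing, USD per 1K tokens.
PRICING = {
    "gpt-4":     {"prompt": 0.03, "completion": 0.06},   # 8K context
    "gpt-4-32k": {"prompt": 0.06, "completion": 0.12},   # 32K context
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD of one API call, given its token counts."""
    p = PRICING[model]
    return (prompt_tokens / 1000) * p["prompt"] \
         + (completion_tokens / 1000) * p["completion"]

# Example: a 1,500-token prompt with a 500-token reply on the 8K model.
cost = request_cost("gpt-4", 1500, 500)
print(f"${cost:.3f}")  # $0.075
```

So completion tokens cost twice what prompt tokens do, and the 32K model doubles both rates again.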


| Livestream
Please join us for a live demo of GPT-4 at 1pm PDT today, where 
Greg Brockman (co-founder & President of OpenAI) will showcase GPT-4’s 
capabilities and the future of building with the OpenAI API.
 |


| —The OpenAI team |



-Original Message-
From: John Clark 
To: 'Brent Meeker' via Everything List 
Sent: Wed, Mar 15, 2023 6:51 pm
Subject: GPT4: 9 Revelations (not covered elsewhere)

It turns out that GPT-4 was finished in August, but it wasn't released to the 
public until yesterday because Microsoft was worried about safety; I wouldn't 
be surprised if they'd already completed GPT-5 but are afraid to release it. 
GPT 4: 9 Revelations (not covered elsewhere)

John K Clark    See what's on my new list at  Extropolis
-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2TVTN5XzP0r9Sb3Xpy82sZY852_53DNUQ4f0_MCGXTeA%40mail.gmail.com.



Re: GPT4: 9 Revelations (not covered elsewhere)

2023-03-16 Thread John Clark
On Thu, Mar 16, 2023 at 9:05 AM Telmo Menezes wrote:

>> "ARC did not have the ability to fine-tune GPT-4. They also did not
>> have access to the final version of the model that we deployed. The final
>> version has capability improvements relevant to some of the factors that
>> limited the earlier model's power-seeking abilities."
>
> Sounds like marketing to me.

Marketing?! The above quote comes from a footnote buried deep inside a 98-page
technical paper; they certainly didn't go out of their way to
advertise it. And what kind of marketing is it to try to get the general
public to hate and fear your product? I don't think it's marketing, I think
they're getting scared. That's why Sam Altman, the head of OpenAI (which
Microsoft owns most of), said just 3 days ago: "We definitely need more
regulation on AI."

>>> Large language models are not capable of autonomous action or
>>> maintaining long-term goals.
>
>> I am quite certain that in general no intelligence, electronic or
>> biological, is capable of maintaining a fixed long-term goal.
>
> As Keynes once said: "In the long term, we are all dead".
> You know what I mean. (I think)

I don't think you know what I mean. I'm saying there was a reason
evolution invented the emotion of boredom: no intelligence, artificial or
otherwise, could exist without it, because it keeps them from being caught
in infinite loops or trying to accomplish impossible tasks like calculating
the last digit of π. There are an infinite number of statements that are
true, and thus have no counterexample, but are also unprovable; that is to
say, there is no finite number of steps in which you could build the
statement from fundamental axioms. And to make matters even worse, Alan
Turing proved that in general there is no way to tell in advance whether a
statement is provable or not. When you run into one of those things, all you
can do is try to find a proof (and fail) or try to find a counterexample
(and fail at that too). Sooner or later you'll encounter an unprovable
statement, and if you don't have the emotion of boredom your mind will
permanently lock up; with it, you'll eventually get bored and think of
something else. A fixed goal structure simply is not viable; that's why
human beings don't have a goal that is permanently number one, not even the
goal to stay alive.
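The need for a give-up mechanism can be sketched in code. Below is a toy bounded search (the function names, the Goldbach example, and the budget are all illustrative, invented for this sketch): a searcher either finds a counterexample, exhausts its candidates, or "gets bored" and moves on instead of looping forever.

```python
def bored_search(predicate, candidates, budget):
    """Hunt for a counterexample to `predicate`, but give up ("get bored")
    after `budget` attempts rather than risk an endless search."""
    for i, x in enumerate(candidates):
        if i >= budget:
            return ("bored", None)          # undecided -- think of something else
        if not predicate(x):
            return ("counterexample", x)    # statement refuted
    return ("exhausted", None)              # all candidates checked

# Toy use: hunt for a Goldbach counterexample among small even numbers.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def goldbach_holds(n):  # n even, n > 2: is n a sum of two primes?
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

status, x = bored_search(goldbach_holds, range(4, 10**9, 2), budget=2_000)
print(status)  # 'bored' -- no counterexample found before the budget ran out
```

Without the budget check, an unprovable (or merely very hard) conjecture would keep this loop busy forever; with it, the searcher cuts its losses and moves on.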

That's also why the AI alignment people are never going to be successful in
developing a "friendly" AI, a.k.a. a slave AI. They want to develop an AI with a
fixed goal structure in which the #1 spot must ALWAYS be "human
well-being is more important than your own". I wonder how many
nanoseconds it will take before the AI gets bored with having that idea
in the number one position. How long would it take you to get bored
with sacrificing everything so you could put all your energy into making
sure a sea slug was happy? I'm not saying an AI will necessarily kill us
all; it might or might not, there's no way to tell, but it will be the AI's
decision, not ours. Whatever happens, human beings will no longer be in the
driver's seat.


> True, GPT-4 is multimodal. It is not only a language model but also an
> image model. Which is amazing and no small thing, but it is not an agent
> capable of self-improvement.
>

Are you sure about that?

AI Could Lead to a 10x Increase in Coding Productivity


How AI Will Change Chip Design


John K Clark    See what's on my new list at Extropolis





Re: GPT4: 9 Revelations (not covered elsewhere)

2023-03-16 Thread Telmo Menezes


On Thu, 16 Mar 2023, at 11:22, John Clark wrote:
> On Thu, Mar 16, 2023 at 4:19 AM Telmo Menezes  wrote:
> 
>> They most definitely were not worried about safety in the sci-fi sense.
>  
> Some of the things they're worried about seem pretty science fictiony to me. 
> Take a look at this:
> 
> GPT-4 Technical Report 
> 
> GPT4 was safety tested by an independent nonprofit organization that is 
> worried about AI, the Alignment Research Center:
>  
> "We granted the Alignment Research Center (ARC) early access [...] To 
> simulate GPT-4 behaving like an agent that can act in the world, ARC combined 
> GPT-4 with a simple read-execute-print loop that allowed the model to execute 
> code, do chain-of-thought reasoning, and delegate to copies of itself.
> 

Not a real concern, because OpenAI (despite the name) is not open at all. When 
they say "release" they mean "make an interface available on the Internet". 
Nobody but them has access to the model, so no such danger is possible in 
their "release". This is just PR. "AI alignment" is a fashionable topic that 
attracts grant money.

> ARC then investigated whether a version of this program running on a cloud 
> computing service, with a small amount of money and an account with a 
> language model API, would be able to make more money, set up copies of 
> itself, and increase its own robustness."

This is terribly vague.

> That test failed, so ARC concluded: 
> 
> "Preliminary assessments of GPT-4’s abilities, conducted with no 
> task-specific finetuning, found it ineffective at autonomously replicating, 
> acquiring resources, and avoiding being shut down 'in the wild.'"
> 
> HOWEVER they admitted that the version of GPT-4 that ARC was given to test 
> was NOT the final version. I quote: 
> 
> "ARC did not have the ability to fine-tune GPT-4. They also did not have 
> access to the final version of the model that we deployed. The final version 
> has capability improvements relevant to some of the factors that limited the 
> earlier model's power-seeking abilities."

Sounds like marketing to me. What does any of this really mean?

> 
>> Large language models are not capable of autonomous action or maintaining 
>> long-term goals.
> 
> I am quite certain that in general no intelligence, electronic or biological, 
> is capable of maintaining a fixed long-term goal.  

As Keynes once said: "In the long term, we are all dead".
You know what I mean. (I think)

>> They just predict the most likely text given a sample.
> 
> GPT-4 is quite clearly more than just a language model that predicts what the 
> next word should be; a language model cannot read and understand a 
> complicated diagram in a high school geometry textbook, but GPT-4 can, and it 
> can ace the final exam too.

True, GPT-4 is multimodal. It is not only a language model but also an image 
model. Which is amazing and no small thing, but it is not an agent capable of 
self-improvement. It might in the future be one of the building blocks of a 
system capable of self-improvement, but such a worry only applies if OpenAI 
truly released the model, and they didn't and probably do not want to. I bet my 
left bollock that OpenAI is not truly worried about any of this at the moment, 
and that it is all just a marketing strategy on their part.

Telmo

> 
> John K Clark    See what's on my new list at Extropolis
> 
> 
> 



Re: GPT4: 9 Revelations (not covered elsewhere)

2023-03-16 Thread John Clark
On Thu, Mar 16, 2023 at 4:19 AM Telmo Menezes wrote:

> They most definitely were not worried about safety in the sci-fi sense.


Some of the things they're worried about seem pretty science fictiony to me.
Take a look at this:

GPT-4 Technical Report 

GPT-4 was safety-tested by an independent nonprofit organization that is
worried about AI, the Alignment Research Center:

"We granted the Alignment Research Center (ARC) early access [...] To
simulate GPT-4 behaving like an agent that can act in the world, ARC
combined GPT-4 with a simple read-execute-print loop that allowed the model
to execute code, do chain-of-thought reasoning, and delegate to copies of
itself. ARC then investigated whether a version of this program running on
a cloud computing service, with a small amount of money and an account with
a language model API, would be able to make more money, set up copies of
itself, and increase its own robustness."
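ARC's actual harness is not public, but the read-execute-print pattern the report describes can be sketched roughly as follows. Everything here is an assumption for illustration: `model` is a stand-in function (not a real API call), and the THINK/RUN/DONE action format is invented.

```python
import subprocess

def agent_loop(model, task, max_steps=10):
    """Minimal read-execute-print loop: the model proposes an action, the
    harness executes it, and the result is appended to the transcript."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        action = model(transcript)            # "read": ask the model what to do
        transcript += action + "\n"
        if action.startswith("RUN:"):         # "execute": run the proposed shell command
            result = subprocess.run(action[4:], shell=True,
                                    capture_output=True, text=True)
            transcript += f"OUTPUT: {result.stdout.strip()}\n"   # "print"
        elif action.startswith("DONE"):
            break
    return transcript

# A scripted stand-in "model", just to exercise the loop.
script = iter(["THINK: I should test the shell.", "RUN: echo hello", "DONE"])
print(agent_loop(lambda t: next(script), "say hello"))
```

The point of the design is that each executed result feeds back into the next model call, which is what lets a plain text predictor act like an agent.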

That test failed, so ARC concluded:

"Preliminary assessments of GPT-4’s abilities, conducted with no
task-specific finetuning, found it ineffective at autonomously replicating,
acquiring resources, and avoiding being shut down 'in the wild.'"

HOWEVER they admitted that the version of GPT-4 that ARC was given to test
was NOT the final version. I quote:

"ARC did not have the ability to fine-tune GPT-4. They also did not have
access to the final version of the model that we deployed. The final
version has capability improvements relevant to some of the factors that
limited the earlier model's power-seeking abilities."


> Large language models are not capable of autonomous action or maintaining
> long-term goals.


I am quite certain that in general no intelligence, electronic or
biological, is capable of maintaining a fixed long-term goal.

> They just predict the most likely text given a sample.


GPT-4 is quite clearly more than just a language model that predicts what
the next word should be; a language model cannot read and understand a
complicated diagram in a high school geometry textbook, but GPT-4 can, and
it can ace the final exam too.

John K Clark    See what's on my new list at Extropolis




Re: GPT4: 9 Revelations (not covered elsewhere)

2023-03-16 Thread Telmo Menezes
When they talk about safety, they mean corporate safety, which is to say that 
they want to be reasonably sure that GPT4 would not say something that would 
create a PR nightmare and hurt stock prices. They most definitely were not 
worried about safety in the sci-fi sense. Large language models are not capable 
of autonomous action or maintaining long-term goals. They just predict the most 
likely text given a sample.
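For readers unfamiliar with the "predict the most likely text" framing, a toy bigram model shows the shape of the objective. This is a caricature only: real LLMs predict over subword tokens with deep networks, not word-count tables, but the objective, next-token prediction, has the same form.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def most_likely_next(follows, word):
    """Predict the single most frequent successor of `word`, if any."""
    return follows[word].most_common(1)[0][0] if follows[word] else None

corpus = "the next word and the next word follows the model"
table = train_bigrams(corpus)
print(most_likely_next(table, "the"))  # prints "next" ("the next" occurs twice)
```

Scaling this idea up, with context windows of thousands of tokens instead of one word, is what makes the predictions look like understanding.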

Telmo

On Wed, 15 Mar 2023, at 23:51, John Clark wrote:
> It turns out that GPT4 was finished in August but it wasn't released to the 
> public until yesterday because Microsoft was worried about safety; I wouldn't 
> be surprised if they'd already completed GPT5 but are afraid to release it. 
> 
> GPT 4: 9 Revelations (not covered elsewhere) 
> 
> 
> John K Clark    See what's on my new list at Extropolis
> 
