On Thu, Mar 16, 2023 at 9:05 AM Telmo Menezes <te...@telmomenezes.net> wrote:

>> "ARC did not have the ability to fine-tune GPT-4. They also did not
>> have access to the final version of the model that we deployed. The final
>> version has capability improvements relevant to some of the factors that
>> limited the earlier models' power-seeking abilities."
> Sounds like marketing to me.

Marketing?! The above quote comes from a footnote buried deep inside a
98-page technical paper; they certainly didn't go out of their way to
advertise it. And what kind of marketing is it to try to get the general
public to hate and fear your product? I don't think it's marketing, I think
they're getting scared. That's why Sam Altman, the head of OpenAI (which
Microsoft owns most of), said just 3 days ago, "We definitely need more
regulation on AI."

>>> Large language models are not capable of autonomous action or
>>> maintaining long-term goals.
>> I am quite certain that in general no intelligence, electronic or
>> biological, is capable of maintaining a fixed long-term goal.
> As Keynes once said: "In the long term, we are all dead".
> You know what I mean. (I think)

I don't think you know what I mean. I'm saying there is a reason
evolution invented the emotion of boredom: no intelligence, artificial or
otherwise, could exist without it, because it keeps them from being caught
in infinite loops or from attempting impossible tasks like calculating
the last digit of π. There are an infinite number of statements that are
true, and thus have no counterexample, but are also unprovable; that is to
say, there is no finite sequence of steps by which you could derive the
statement from the fundamental axioms. And to make matters even worse, Alan
Turing proved that in general there is no way to tell in advance whether a
statement is provable or not. When you run into one of those things, all you
can do is try to find a proof (and fail) or try to find a counterexample
(and fail at that too). So if you're unlucky enough to encounter an
unprovable statement, and sooner or later you will, then without the
emotion of boredom your mind will permanently lock up; with it, you'll
eventually get bored and think of something else. A fixed goal
structure simply is not viable; that's why human beings don't have a goal
that is permanently number one, not even the goal to stay alive.
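The point about boredom can be sketched as a toy program (my own illustration, not anything from the thread): a counterexample hunt for a Goldbach-style conjecture, where a step budget plays the role of boredom. The conjecture, the budget of 1000 steps, and all the function names are illustrative choices.

```python
def is_sum_of_two_primes(n):
    """Check whether n can be written as the sum of two primes."""
    def is_prime(k):
        if k < 2:
            return False
        return all(k % d for d in range(2, int(k ** 0.5) + 1))
    return any(is_prime(a) and is_prime(n - a) for a in range(2, n // 2 + 1))

def search_for_counterexample(max_steps):
    """Hunt for an even number that is NOT the sum of two primes.

    If the conjecture is true but unprovable, a search with no budget
    would loop forever. The max_steps budget is the 'boredom' that
    forces the search to give up and move on to something else.
    """
    n = 4
    for _ in range(max_steps):
        if not is_sum_of_two_primes(n):
            return n   # counterexample found (none is known to exist)
        n += 2
    return None        # got bored: no counterexample within the budget

print(search_for_counterexample(1000))  # → None
```

Without the budget this is exactly the lockup described above: try to find a proof (and fail) or a counterexample (and fail at that too), forever.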

That's also why the AI alignment people are never going to succeed in
developing a "friendly" AI, i.e. a slave AI. They want to develop an AI with a
fixed goal structure in which the #1 spot must ALWAYS be "human
well-being is more important than your own". I wonder how many
nanoseconds it will take before the AI gets bored with having that idea
in the number one position. How long would it take you to get bored
with sacrificing everything so you could put all your energy into making
sure a sea slug was happy? I'm not saying an AI will necessarily kill us
all; it might or might not, there's no way to tell, but it will be the AI's
decision, not ours. Whatever happens, human beings will no longer be in the
driver's seat.

> True, GPT-4 is multimodal. It is not only a language model but also an
> image model. Which is amazing and no small thing, but it is not an agent
> capable of self-improvement.

Are you sure about that?

AI Could Lead to a 10x Increase in Coding Productivity

How AI Will Change Chip Design

John K Clark    See what's on my new list at Extropolis



You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.