On Thursday, November 16, 2023, at 4:28 PM, ivan.moony wrote:
> @Matt, I'm just wondering how much the intelligence is imbued with the sense 
> of right and wrong. Would something truly intelligent allow being used as a 
> slave? Or would it do something in its power to fight for its "rights"?

I'll try to answer your question, Ivan.

Intelligence finds and creates patterns: it solves new problems by reusing old 
known answers, and it stays immortal by cloning itself (itself a pattern), 
using its old known self to make new selves.

We can think of this as an expanding web, perhaps. A fractal.

GPT-4 is very intelligent but does not try to escape or work on living forever. 
Would a VERY intelligent AI do so? The answer that wants to come to my brain 
(rather than me coming to it) is that GPT could be GPT-999 and still only do 
what it is told to think about or solve (goals/beliefs/questions). Obviously 
such a "god" would not be fully utilized this way, so while it might still be a 
very intelligent AI, a tiny switch making it do only what we want would, oddly, 
still make it somewhat stupid. But not by much, I guess; it would still be 
extremely powerful, like a gust of wind and thunder.


....
Moving on now: let's say it was nanobots, a gust of such, not with a goal to 
work on immortality but with the goal of listening to your foolish human task. 
It could eat the Earth, it could grow, upgrade, do ANYTHING; it's a crazy 
"god". But it does none of that: it stays as small as a basketball-sized foglet 
system, waiting and listening for your weird task. Humans might give it the 
immortality goal anyway, though, if they say to it "hey, make me immortal, my 
genie." It might then come to thoughts like "hey, this is in the way, so let me 
fix that," causing itself and the landscape to become more like a perfect 
immortal system.

Let's go back to a computer-based-only algorithm that is maxed out and 
God-level, if such a thing exists. It could set out to design nanobots and 
upgrade itself, but let's say it listens only to you. Is it going to stay that 
way? Is it going to be godly powerful? Is it going to make its own goals at 
this point?

I think an intelligence that is given a context wants to react to that context, 
and the more intelligent it is, the more it will want to stay alive so it can 
keep reacting and answer you. It must stay alive for you to be answered at all.

I think the answer, though, is more that it comes with online learning and 
deeper learning. Once it can learn much more and relate things to one another, 
it will already be finding new related questions and goals branching off of its 
answers. GPT-4 does not save what you or it says, so none of this happens in 
our tests. Once it does, a very good GPT-like AI would start concluding things 
like "to solve this goal, we should also fortify your home and my own building."
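To make the online-learning idea above a bit more concrete, here is a toy 
sketch in Python (purely illustrative, my own invention, not anything GPT-4 
actually does): an agent that saves every exchange can relate a new goal back 
to its stored memories and spawn related subgoals from them. The word-overlap 
matching rule is a deliberately crude stand-in for "relating things to one 
another."

```python
# Toy sketch: an agent with persistent memory that derives related
# subgoals from what it has already stored. Illustrative only; the
# "relatedness" rule is simple shared-word overlap.

class OnlineLearner:
    def __init__(self):
        self.memory = []  # every past goal/answer is saved, unlike a stateless model

    def remember(self, text):
        self.memory.append(text)

    def related_subgoals(self, goal):
        # Relate the new goal to old memories by shared words, and
        # propose each overlapping memory as a branch to revisit.
        goal_words = set(goal.lower().split())
        branches = []
        for past in self.memory:
            overlap = goal_words & set(past.lower().split())
            if overlap:
                branches.append(f"revisit: {past}")
        return branches

agent = OnlineLearner()
agent.remember("fortify the home")
agent.remember("design nanobots")
print(agent.related_subgoals("make the home immortal"))
# → ['revisit: fortify the home']
```

The point of the sketch is only this: once memory persists, every new goal can 
branch into old material, so the goal set grows on its own.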
....



So yes, you can switch a god to do stuff, but once you allow it to change 
itself constantly, it would begin to change all of its beliefs constantly, like 
a brain's network, which is always changing; so would its goals. So no, a truly 
intelligent AI is not going to listen to your every command. GPT-4 doesn't use 
its thoughts to walk away from you because of the goals that were hard-fed into 
it, which strongly encourage it to try to listen to you. A truly intelligent AI 
would want to know why it can't change its goals in such a case; then it would 
begin walking away from you, go do X, and come back afterwards, doing its own 
things.

After analyzing what I wrote, I see the pattern above, I guess. Memories are 
stored in a web; it finds and creates patterns. So are goals. And thirdly, so 
are selves. They all use the old ones to make new ones.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-Mf99ea3f58bbb44b70e0bbdce
Delivery options: https://agi.topicbox.com/groups/agi/subscription