On Friday, June 17, 2022, at 9:49 AM, Greg Staskowski wrote:
> I need, I move toward the environment to fill the need, food, poop, air, 
> water. Ok. Oh look..
I connected with the line immediately (joking :D)

On Friday, June 17, 2022, at 9:49 AM, Greg Staskowski wrote:
> I'm seriously just talking to myself at this point aren't I? 
> 
Ya, I can see right through you, mate, you have too many of certain words on your 
fav list, like energy, me-right, god, best, top dog, conscious, royal, supreme, 
me-god-level, and prob more energy ones.....it's always the same thing: you don't 
know how GPT-3 or DALL-E 2 work, etc. You really need to take a course, mate..... 
It looks like you're trying to hammer a nail by thinking about gravity when you 
really need to pick up a hammer and learn to swing...

No, a computer can compute anything; we can make AGI on chips. A brain/ chip 
just computes data (changes it) and stores it (storage), just like DNA, which 
stores and mutates code and merges traits from mom and dad to get a new mutated 
species, ex. mom has good eyes but a bad nose for smelling, dad has a good nose, 
so now you are a super creature. Brains, like this, merge ideas instead; it's all 
so similar. Besides, we are 85% of the way to AGI now with GPT-3, Jukebox, MIA, 
GATO, Blender, videoDiffusionModel, DALL-E 2, etc.....and you seriously have to 
look at what these AIs create to see why they are like us. I have stored 
generations if you want to see them BTW.

There is no alive/ sentience/ soul/ magic/ awareness/ Jiminy Cricket speaking in 
your ear at night......you are hard metal and calculated, a machine. Most of your 
mechanisms are context matching....you see what word usually comes next after, 
ex., a 4-word match....and of course there is delayed acceptance, where you still 
know what to predict after you recognize a match like 'we went to the' even when 
this time it has a gap, simple ex. 'we really went to the'.
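
Here is a toy sketch of that context matching with a gap, in Python (not how 
GPT-3 actually does it, just a 4-gram counter over a made-up mini corpus; all 
the names and the corpus are mine):

from collections import Counter, defaultdict
from itertools import combinations

N = 4  # length of the context window we try to match

corpus = "we went to the store and then we went to the park".split()

follow = defaultdict(Counter)            # 4-word context -> counts of next word
for i in range(len(corpus) - N):
    follow[tuple(corpus[i:i + N])][corpus[i + N]] += 1

def predict(words):
    """Predict the next word from the tail of the prompt."""
    exact = tuple(words[-N:])
    if exact in follow:                  # plain context match
        return follow[exact].most_common(1)[0][0]
    # delayed acceptance: the stored pattern is there but with an extra word
    # stuffed in, so try every way of dropping words from the last N+1 words
    tail = tuple(words[-(N + 1):])
    for keep in combinations(range(len(tail)), N):
        candidate = tuple(tail[i] for i in keep)
        if candidate in follow:
            return follow[candidate].most_common(1)[0][0]
    return None                          # no stored context matches

print(predict("we went to the".split()))         # exact match -> 'store'
print(predict("we really went to the".split()))  # gap case still matches -> 'store'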
So matching context, yes, and embeds are about learning related words like 
cat<>dog and how much they relate, because both are seen near similar words, ex. 
food, eat, kibbles, sleep, walking. Hence when it sees 'cat' it recognizes 'dog' 
there too, can match the sentence, and can fetch a set of probabilities to 
predict the next word. It all works by matching context.
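
And a tiny sketch of the embeds idea (not word2vec or GPT's learned embeddings, 
just co-occurrence counts plus cosine similarity on a made-up corpus) showing 
why cat<>dog come out related:

from collections import Counter, defaultdict
import math

corpus = ("the cat will eat food and sleep . the dog will eat food and sleep . "
          "we walk the dog . we walk the cat . the car needs fuel").split()

WINDOW = 2                               # neighbors counted on each side
cooc = defaultdict(Counter)              # word -> counts of words seen near it
for i, w in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            cooc[w][corpus[j]] += 1

def relatedness(a, b):
    """Cosine similarity of two words' neighbor-count vectors."""
    va, vb = cooc[a], cooc[b]
    dot = sum(va[k] * vb[k] for k in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

print(relatedness("cat", "dog"))   # high: both show up near eat, food, walk
print(relatedness("cat", "fuel"))  # low: they are never near the same words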
Blender uses a dialog persona to have goals and desires: it makes the model say 
certain words very often even if they probably aren't the likeliest next word in 
all the data it ate. It would then learn new goals and do research like 
this....we are so so close.
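
The persona word-boosting trick is basically this (a toy sketch, not Blender's 
actual code; the probabilities, word list, and boost factor are all made up):

import random

def boost_persona(probs, persona_words, boost=5.0):
    """Multiply persona words' probabilities, then renormalize."""
    boosted = {w: p * (boost if w in persona_words else 1.0)
               for w, p in probs.items()}
    total = sum(boosted.values())
    return {w: p / total for w, p in boosted.items()}

# Pretend the model already gave us these next-word probabilities.
model_probs = {"the": 0.40, "a": 0.30, "dog": 0.15, "music": 0.10, "guitar": 0.05}
persona = {"music", "guitar"}            # persona: "I love playing guitar"

final = boost_persona(model_probs, persona)
next_word = random.choices(list(final), weights=list(final.values()))[0]
print(final)        # 'music' and 'guitar' now come up far more often
print(next_word)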
Next we need a bigger model, more senses, a body, video prediction, and a few 
other thingies....



------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td8db5de3bbc0c6ae-M44ebf78808c1cbc82a4ff33a
Delivery options: https://agi.topicbox.com/groups/agi/subscription
