This thread sounds more fun now. Ok. But Ben better be watching.

On Saturday, May 23, 2020, at 10:42 AM, Alan Grimes wrote:
> Intelligence requires that there multiple possible futures, otherwise we
> would simply be mechanically unfolding a pre-determined destiny.

First I'll start with the obvious side of the coin. We do know our universe is 
at least somewhat predictable; that's why we can repeat the same lab 
experiments anywhere on Earth, and build neural models of the world to learn 
patterns. The laws of physics make our world at least partially unfold along a 
deterministic path. A computer simulation or calculator is also replay-able: 
it's predictable. We are able to understand things because they are not random.

On the other hand, the word "random", by definition, means an outcome/result 
that varies maximally. So instead of a computer algorithm spitting out the 
number 5 predictably, you may get 1, then 8 next time, then 3, 0, 8, 6.... If 
it wasn't [maximally] random you'd get outputs like 6, 4, 5, 5, 6, 4, 4, 5. So 
random just means wider variation. It doesn't mean the laws of physics are 
disobeyed. It just means the view from which we look at something ignores the 
fine details.
For example, a woman can write down which color dresses she'll show you, so she 
knows the order of colors, but you don't, so to you it appears unpredictable, 
but to her, maybe the dress order is down pat and she remembers it like the 
alphabet. Another example: you fill a glass with milk until it pours over the 
rim, but the side that spills first is different each time. Why? Say the glass 
is perfectly flat at the top. It's the direction the human poured the milk from 
that made it spill over different sides. The human didn't stand in the same 
spot each time! The definition of random that I gave
here, basically equates to: you don't know something, so you output the wrong 
answer. But someone else can know what will happen! Lol. In other words, in the 
physics/laws we have, you can get an algorithm that outputs 5 each time, or 
outputs 4, 6, 5, 5, 6, 4, 4, or outputs 7, 1, 3, 0, 8, 4, 2, 8. And there's an 
actual reason behind it. Not magic. The definition of "random" I gave here, 
therefore, is a lack of information. You don't know what will occur. But 
once you learn the underlying mechanism, you know what will result next time. This
is assuming you can look in a computer or brain to see the stored algorithm. If 
you can't know what's inside, then you don't know if it will output 5 every 
time.
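
This "random = missing information" idea is exactly how pseudo-random number 
generators work, so here's a minimal sketch (a textbook linear congruential 
generator; the single-digit reduction and the seed are my own illustrative 
choices). From the outside the digits look random, but anyone who can see the 
stored algorithm and its state can replay them exactly:

```python
# Linear congruential generator: the outputs look random from outside,
# but are fully determined by the hidden internal state (the seed).
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    state, values = seed, []
    for _ in range(n):
        state = (a * state + c) % m   # deterministic update rule
        values.append(state % 10)     # reduce to one digit, like 4, 6, 5...
    return values

run1 = lcg(seed=42, n=8)
run2 = lcg(seed=42, n=8)
assert run1 == run2   # same hidden state -> same "random" digits, every replay
```

To someone who can't see `state`, the digits are "random"; to someone who can, 
the next output is as predictable as an algorithm that prints 5 every time.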

So, do we have another definition for the word "random"? Yes. I call it True 
Random. It would need to break the laws of physics. For example, an atom or 
particle would be shot into space, travelling, and after 45 minutes it decides 
to change its direction! There's no reason it should have, though. Nothing touched
the system and nothing left the system. Now, we already know our world is at 
least 50% not True Random, but predictable. And in computers there's a thing 
called redundancy that stops errors from popping up. You can run a car 
simulation perfectly each time, the same way each time! You could run a human 
simulation, with no True Randomness! Unless it makes us act the way we do. So, 
True Randomness may exist, and it may be helpful in making more robust 
predictors that handle uncertainty. You could just make your world/borg garage 
larger. Larger systems can avoid errors and damage more than brittle delicate 
small systems. It takes longer for the errors to show up. So the borgs could 
more easily predict where things are at the high level. Now, one could argue 
that if particles acted truly randomly 50% of the time, it would show up in 
computer car simulations! But it doesn't. So the real reason we get errors is 
because there are faults at low levels we don't know about. That's all. Not True 
Randomness. Now, can we solve this? Yes. We already are. Humans produce babies 
without the DNA information disappearing. We can repair cars indefinitely. But
we can't know where every particle in our system is, for to do so would require 
knowing where the particles (that make up our knowing) are, which is 
impossible. You could make everything into solid cubes, but you still can't 
model your world perfectly, only approximately.

The 4th side of the coin is magic orbs from God herself. Unfortunately, if you 
were hoping for this to be a valid thing, you are mistaken. Magic has no place; 
magic has to be either True Randomness, Randomness, or the Laws of physics. There's
no, such, thing, as magic. Either a particle moves as expected based on its 
and/or other surrounding context/conditions -OR- it pops into/out of existence
some "move" or "particle" or "law" that truly is random. Say we had a genie 
ghost waving its hand with Free Will, granting wishes. The way it works is not 
by an existing predictive mechanism, but by popping stuff into existence, and 
it must be non-random stuff. But why non-random? Because otherwise the genie 
would not exist; it'd be illogical soup. But what sort of "dimensional ether" is
remembering or directing non-random creation in real time? We need something 
already existing to do this. A designer who creates a designer who... So it's 
impossible.

On Saturday, May 23, 2020, at 10:42 AM, Alan Grimes wrote:
> Your compression thingy will basically produce something that spews
> language, gibberish actually because there is no world model or
> understanding behind it. Much more importantly, there is no path to
> general problem solving, or even generalized language gibberish spewing,
> just a specific language.

"Your compression thingy"

This shows you lack understanding. Gosh. Lossless Compression is just an 
evaluation metric for the neural net predictor I made. I could use Perplexity 
instead. Same algorithm, just a different test of how good my algorithm is at 
predicting data in the distribution.
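
To make that concrete, here's a minimal sketch of how one predictor scores 
under both evaluations (toy probabilities and function name are my own): the 
ideal lossless-compressed size is the sum of -log2 of the probabilities the 
model assigned to what actually occurred, and perplexity is just 2 raised to 
the average of those bits. Same model, two readouts.

```python
import math

# Two views of the same predictor: bits needed to losslessly encode
# the data (ideal arithmetic coding), or perplexity.
def bits_and_perplexity(predicted_probs):
    # predicted_probs[i] = probability the model assigned to the symbol
    # that actually occurred at position i (toy numbers below)
    total_bits = sum(-math.log2(p) for p in predicted_probs)
    bits_per_symbol = total_bits / len(predicted_probs)
    perplexity = 2 ** bits_per_symbol
    return total_bits, perplexity

probs = [0.5, 0.25, 0.5, 0.125]   # toy model outputs
bits, ppl = bits_and_perplexity(probs)
# a better predictor assigns higher probabilities, which gives
# both fewer bits (smaller compressed file) and lower perplexity
```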

"will basically produce something that spews
language"
"gibberish actually because there is no world model or
understanding behind it."

Again, you're lacking here. Neural networks learn a model of DATA, be it text 
or vision; both are language. Which means they CAN learn PATTERNS. Patterns 
mean frequency, because in a dataset you may see the letter 'z' or the word 
'grommet' appear only rarely! Maybe nothing re-occurs! Maybe the whole 
dataset is tttttttttt. So you can predict/generate the likely future, such as 
the letter 'e' or the word 'the'. Now, because of these re-occurring letters 
and words, words like cat & dog can be found to share the same contexts. Dogs 
eat, dogs jump, cats eat, cats jump. Thank god the word "jump" appears at least 
twice lol. Else no semantics! SO: a neural model can learn that the letter 'e' 
appears very frequently, 'z' appears infrequently, 'cat' is very contextually 
similar to 'dog', and 'cat' is very different from 'jog'. Neural models help 
organisms survive longer in Evolution. Even if you don't believe text data 
mirrors human vision/thought data, you can still trust the algorithm to work 
on ANY dataset by finding patterns. In FACT, the Transformer architecture used 
in GPT-2 works on vision and music datasets.

"or even generalized language gibberish spewing,
just a specific language."

First of all, the algorithm I already coded from scratch can predict the next 
letter of any language, and generate other languages too, like Hindi, French, 
etc. You just feed it such a dataset and it learns the patterns. Currently I 
use enwik8. Now, my future algorithm, and the already existing GPT-2 made by 
OpenAI, can learn cat=dog semantically by shared contexts; cat and dog become 
interchangeable, so it can recognize unseen sentences. It helps that it knows 
what entails a given word or phrase, by looking at many, many similar 
situations from past experience. As well, it can learn hello=bonjour if it is 
fed diverse data that has enough French words! This works for vision too. And 
if you use text + vision, you will need to associate them at the time they are 
shown together.
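
The simplest possible version of a next-letter predictor can be sketched like 
this (a toy bigram counter, my own; the real model is a neural net, but the 
"feed it any language's text, it learns the patterns" loop is the same idea):

```python
from collections import Counter, defaultdict

# Count which letter follows which in the training text (any language).
def train(text):
    follows = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, letter):
    # Predict the most frequently observed follower of `letter`.
    counts = follows.get(letter)
    return counts.most_common(1)[0][0] if counts else None

model = train("the cat and the dog and the hat")
# after 't', the letter 'h' is most common in this toy text
```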

"Much more importantly,there is no path to
general problem solving"

You've literally just asked me how to create AGI. AGI needs to solve many 
different types of Hard Problems. To do so, it needs a large/diverse model,
not just so it can solve various domains, but so it can use all sorts of 
domains when solving a problem in a given domain. It needs to know frequencies 
or IOW Cause > Effect probabilities of our physics (dogs usually breathe, not 
eat) to logically think about paths it COULD take. And it must take a path it 
desires, too, to reach the desired outcome. It must wait at steps until they
are completed. It must update goals through induction/semantics. Food = money = 
jobs = truck = wrenches. It will ask new questions and seek new data from 
specialized sources or questions. It may need to search/mutate answers before 
it mentally generates a good, well-backed/aligned answer. It needs to be told when
you look at 2+2=, it must be a precise answer, not 8, even though it kind of 
answers the question. It needs to be told when you are unsure of the prediction 
for 2+2=, you must look at it a different way or collect more specific data, if 
it is unsure about 573+481= it can look at it a different way (assuming you are 
sure of 2+2=4, etc.). You are told to resort to looking at [5]73[+][4]81[=] so
all you hear is 5+4=9, to carry over numbers and stack the 4 results together 
(must hold onto them therefore) to get 573+481=1054. A good challenge is taking 
requirements and translating them to Python code. Basically AGI needs to look 
at CERTAIN contexts, hold onto them or forget them (ignore/Attention), and 
combine data or probabilities, like a Turing Tape. Doing AND, OR, NOR requires 
enough energy to activate a node in a binary yes/no fashion. These nodes can be 
made in the brain,
like rules, by talking to the AGI. AGI is basically a net holding onto 
energies, triggering semantics or syntactics, developing "rules" for when to 
fire nodes or what features to look for or look at, e.g. word or letter level: 
[567] or [5]67. You could look at everything I just wrote and figure out why I
typed it all. Maybe I'm a GPT-2?
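
The [5]73[+][4]81[=] walkthrough above can be sketched as code (function name 
is my own): look at one column at a time, hold onto that column's digit, and 
carry over to the next column.

```python
# Digit-by-digit addition with carry, as described in the text:
# look at one column, hold onto its result digit, carry over the rest.
def add_by_columns(a, b):
    digits_a = [int(d) for d in str(a)][::-1]   # ones digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        total = da + db + carry
        result.append(total % 10)   # hold onto this column's digit
        carry = total // 10         # carry over to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in result[::-1]))

assert add_by_columns(573, 481) == 1054   # the example from the text
```
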
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td0702c568757f706-Mb70e25ae989645df45a19dbc
Delivery options: https://agi.topicbox.com/groups/agi/subscription
