Finally got around to thinking about it.

>From what I gleaned, GANs comprise two models: one aiming to be an 
>accurate model, the other aiming to be as inaccurate as it can get away 
>with. Both overlap: the truthful model has errors in it, and the false 
>model has some truth inside. During training they go from random nets to 
>that state, and then toward near-identical models (a Nash equilibrium). 
>The evil model tries to show something that looks true to the true model 
>but is maximally wrong; for example, it shows it the answer to a prompt, 
>e.g. "the cat at[z]", or even "the cat [bliz drilled horses]". My 
>conclusion is that this is easier the more naive a model is: a less 
>educated / less intelligent model is less accurate, so the trickster is 
>toying with its poorly sampled "at[z]", where it hasn't seen [e] come up 
>much, only z, e, k, each about 2 times only. I'm sure it can find holes 
>in higher-sampled areas too, but not as many or as powerful ones, unless 
>you use more powerful fooling techniques, which also have a limit / are 
>harder to find.
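The two-player game in that description can be sketched in a few lines. What follows is a toy, assumption-heavy version I'm inventing for illustration, not the real formulation: a one-parameter generator that shifts Gaussian noise, a logistic-regression discriminator, both trained by plain gradient ascent (actual GANs use neural nets for both players):

```python
# Toy GAN game: a 1-parameter generator tries to match real data drawn
# from N(4, 1), while a logistic discriminator tries to tell real from fake.
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    """Probability that sample x is 'real' (logistic regression)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

theta = 0.0        # generator: shifts unit noise; starts far from the real mean 4.0
w, b = 0.1, 0.0    # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)
    fake = rng.normal(0.0, 1.0, size=32) + theta

    # Discriminator ascent: push D(real) -> 1, D(fake) -> 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        grad = label - discriminator(x, w, b)   # d log-likelihood / d logit
        w += lr * np.mean(grad * x)
        b += lr * np.mean(grad)

    # Generator ascent: push D(fake) -> 1, i.e. fool the discriminator.
    p = discriminator(fake, w, b)
    theta += lr * np.mean((1.0 - p) * w)        # chain rule: d logit / d theta = w

# Near the (rough) equilibrium the generator's shift approaches the
# real mean, and the discriminator is pushed back toward outputting 1/2.
```

At that point neither side can improve unilaterally, which is the Nash-equilibrium picture from the quote: the two distributions have become nearly identical.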

Fooling a model requires it to be dumb in some area of its model.

How can this improve PPM? The trickster needs to tell the fooled model 
where something looks true but isn't. The probability of a feature 
existing is based not just on its counts in a dataset, but also on 
whether the given trickster prompt has typos, uses similar words, etc., 
which may still match closely enough, or may not. If you show the true 
model "Boss's feline dropped payloads through my batmobile which fed to 
some slim factory", it may look understood to you, since it looks kind of 
like a real sentence feature, yet it may just be random (it isn't, 
actually). So the trickster is toying with the true model, looking for 
holes. So, instead of rarely seeing "ketchup blew home", it tells the 
model not to change the frequency but just the translation: blew == 
ruined, not so much. Usually when you eat a sample in the dataset you 
learn frequency, semantics, etc., so maybe some of that should be pruned.
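For context, here is roughly how PPM assigns those probabilities: counts per context, with an escape to progressively shorter contexts when the symbol is unseen. The class below is a simplification I'm inventing for illustration (the escape estimate loosely follows the PPMC idea, where the escape weight equals the number of distinct symbols seen in the context; real implementations also use exclusions):

```python
# Minimal PPM-style predictor: order-2 contexts with escape/backoff
# to shorter contexts, simplified "method C" escape estimate.
from collections import defaultdict

class TinyPPM:
    def __init__(self, order=2):
        self.order = order
        # counts[context][symbol] = how often symbol followed context
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, text):
        for i, ch in enumerate(text):
            for k in range(self.order + 1):
                if i >= k:
                    self.counts[text[i - k:i]][ch] += 1

    def prob(self, context, symbol):
        """P(symbol | context), escaping to shorter contexts when unseen."""
        p_escape = 1.0
        for k in range(min(self.order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            seen = self.counts[ctx]
            total = sum(seen.values())
            if total == 0:
                continue
            distinct = len(seen)          # escape weight (PPMC-style)
            if symbol in seen:
                return p_escape * seen[symbol] / (total + distinct)
            p_escape *= distinct / (total + distinct)
        return p_escape / 256.0           # uniform fallback, byte alphabet

model = TinyPPM(order=2)
model.update("the cat ate the canary")
p_t = model.prob("ca", "t")   # seen after "ca" -> high
p_z = model.prob("ca", "z")   # the unseen symbol an adversary would probe
```

The adversary's "holes" in this picture are exactly the contexts where `total` is tiny, so a few planted counts, or a typo that lands in an almost-empty context, move the probabilities a lot.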

I'm still lost; a GAN feels like backpropagation, which is not really how 
intelligence works.

Ok, let's keep thinking. To be fooled means to be dumb, and this means 
the trickster is targeting the dumb areas of the model, trying to improve 
them. To do this, you tell those areas something, by sampling: you add to 
them, or adjust them; you tell them what's right and what's wrong. Hmm.

I think what GANs are doing is doing all sorts of stuff and combining it 
all so it appears true while actually not being probable in reality. It 
adds a few typos, similar words, rarer words, rearranged words, favorite 
words; it plays with the recency expectation, phrases / backoff, and all 
of that, combined, is really wrong, but to your brain it combines into: 
true. For example, you see "boss's game mauled businesses prior covid"; 
it looks like it makes sense because you decode the words a bit, and 
there is some frequency, some rearranged matches, etc., but really, 
maybe it's too far deep into crazy.
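That "stack many small distortions" idea can be demonstrated against a naive bigram count model standing in for the true model. Everything here, corpus and edits alike, is invented for the demo; the point is only that each edit alone retains some statistical support, while all of them combined retain almost none:

```python
# Each small edit (rarer synonym, word swap, typo) alone still partially
# matches a naive bigram model; stacked together they match nothing,
# even though a human can still roughly decode the sentence.
from collections import Counter

CORPUS = ("the cat ate the food . the boss fed the cat . "
          "the cat sat on the mat .").split()
bigrams = Counter(zip(CORPUS, CORPUS[1:]))

def score(words):
    """Average bigram count: higher = looks more 'normal' to the model."""
    pairs = list(zip(words, words[1:]))
    return sum(bigrams[p] for p in pairs) / max(len(pairs), 1)

base = "the cat ate the food".split()
edited = {
    "synonym": "the feline ate the food".split(),  # rarer word
    "swap":    "the cat the ate food".split(),     # rearranged words
    "typo":    "the cat ate the fod".split(),      # small typo
    "all":     "the feline the ate fod".split(),   # everything at once
}
scores = {name: score(ws) for name, ws in edited.items()}
```

Here `score(base)` is 1.5, each single edit keeps a nonzero score, and the `"all"` version scores 0.0: individually plausible, jointly improbable.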

???
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T686c39065965ef71-Mf99fe0e7520ce463785627f4
Delivery options: https://agi.topicbox.com/groups/agi/subscription
