My conclusion for now is that GAN is useless; it probably does something mine already does, just in the wrong way. The reason is simple: while odd sentences may seem barely recognizable, an odd word rearrangement, for example, is down-voted AND merged with other votes/weights, so it gets used in proportion to its support, not overused. So I can't find anything a GAN adds....A GAN attacks the weaker areas of the model more, but there's nothing you can do about that except update the whole model based on data samples. What can a 2nd model of the data do using the idea of "GAN"? Feed the model a model? I give up here for now.

Ok, let's try more. Here's this, there's hope:
https://developers.google.com/machine-learning/gan/generator
Some new information here. Full course:
https://developers.google.com/machine-learning/gan
Processing now, give me time. I think this is going to do it, bets are on.

*So the generator sends its outputs to the discriminator, a binary CLASSIFIER, which also receives real inputs. During generator training, the backprop loss travels from the discriminator's output layer backwards through to the generator and updates only the generator when it fails to fool the discriminator. During discriminator training, the discriminator loss updates only the discriminator when it is fooled, i.e. misclassifies a real instance as fake or a fake instance as real. The generator needs random input, apparently. Training stops once the discriminator is 50% accurate (it can no longer tell real from fake). Avoid mode collapse by averaging over future discriminators. Wasserstein Loss is: Critic Loss = [average critic score on real images] – [average critic score on fake images]. Generator Loss = -[average critic score on fake images].*
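The quoted Wasserstein formulas are easy to check with a tiny sketch. A minimal version in plain NumPy, where the score arrays are made-up stand-ins for critic outputs (not taken from the Google course):

```python
import numpy as np

def critic_loss(real_scores, fake_scores):
    """Wasserstein critic loss as quoted:
    [average critic score on real] - [average critic score on fake].
    The critic tries to MAXIMIZE this gap."""
    return float(np.mean(real_scores) - np.mean(fake_scores))

def generator_loss(fake_scores):
    """Generator loss = -[average critic score on fake].
    Minimizing it pushes the fake scores up, i.e. fools the critic."""
    return float(-np.mean(fake_scores))

# Made-up critic scores; a Wasserstein critic outputs unbounded
# reals, not probabilities.
real = np.array([0.9, 0.8])
fake = np.array([0.1, 0.2])
print(critic_loss(real, fake))   # gap between real/fake averages, ~0.7
print(generator_loss(fake))      # ~-0.15

# Training alternates as described above: some critic steps with the
# generator frozen, then a generator step with the critic frozen.
```

Note the two losses pull in opposite directions on the fake scores, which is the adversarial part; the actual gradient steps through the frozen counterpart are omitted here.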

Do I get it now? Will think about it in bed, not yet but I feel the big click 
coming.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T686c39065965ef71-M3685194ed7ee34615a099410
Delivery options: https://agi.topicbox.com/groups/agi/subscription