Seriously, GANs sound like hogwash to me; I have a feeling they're really doing 
something else. You can't do backprop on backprop like this...

I've now found an even deeper article that finally explains something more 
clearly!! Though it uses loads of text and algebra, and some of the 
explanations aren't even helpful >:\
https://towardsdatascience.com/understanding-generative-adversarial-networks-gans-cd6e4651a29

*"The brilliant idea that rules GANs consists in replacing this direct 
comparison by an indirect one that takes the form of a downstream task over 
these two distributions. The training of the generative network is then done 
with respect to this task such that it forces the generated distribution to get 
closer and closer to the true distribution."*

There's more such interesting material after this quote.

It seems he's saying that we get an accurate model by pitting it against 
another model at the same level of accuracy, but one that is trying to 
maximize the error.
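To make that concrete, here's a toy sketch (mine, not from the article) of the "downstream task" idea. For a fixed generator, the best possible discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)); once the generated distribution matches the data distribution, D* is 0.5 everywhere and the value function bottoms out at -log 4, meaning the discriminator is reduced to coin-flipping:

```python
import math

def optimal_discriminator(p_data, p_g):
    """Pointwise optimal D for the standard GAN value function."""
    return [pd / (pd + pg) if pd + pg > 0 else 0.5
            for pd, pg in zip(p_data, p_g)]

def value(p_data, p_g, D):
    """V(D, G) = E_data[log D(x)] + E_g[log(1 - D(x))]."""
    eps = 1e-12
    return sum(pd * math.log(max(d, eps)) + pg * math.log(max(1 - d, eps))
               for pd, pg, d in zip(p_data, p_g, D))

# Toy discrete "distributions" over 4 bins (made-up numbers).
p_data = [0.1, 0.4, 0.4, 0.1]
p_far  = [0.7, 0.1, 0.1, 0.1]   # generator far from the data
p_near = [0.1, 0.4, 0.4, 0.1]   # generator matching the data exactly

for p_g in (p_far, p_near):
    D = optimal_discriminator(p_data, p_g)
    print([round(d, 2) for d in D], round(value(p_data, p_g, D), 3))
# For p_near, D is 0.5 in every bin and V = -log 4 ≈ -1.386,
# the classic GAN equilibrium value.
```

So "closer and closer to the true distribution" just means the discriminator's best achievable score on the downstream task keeps shrinking toward chance level.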

A decent explanatory video:
https://www.youtube.com/watch?v=6v7lJHFaZZ4

OK... so... none of you apes are going to explain it better than that, I 
bet... so, hmm, let me see,

CONCLUSION:
Directly updating a model makes it predict better; very straightforward. 
Indirect updating improves your generator via feedback from a second model, 
which tries to model everything that is wrong with the generated samples, 
while the generator tries to produce samples that fall within the other 
model's notion of "real" so as to fool it. And vice versa, until they 
converge to a Nash equilibrium. Hmm. Makes much more sense.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T71d305816eb449ff-M6d7a1796695b08f6fdaea54a