I still don't understand: if backprop already knows which weights to update, how is a GAN any better at weight updating?
An article seems to answer this by saying that, after updating its weights, the net is told whether it fooled the other net, and is then updated again to 'fix' the plain non-GAN backprop. But I don't see how that can do anything. If you observe a new example, you update your weights to update your data model, as in the PPM algorithm. How can being told you predicted wrong help you update any further? Unless we are just using the prediction error as a signal for another net that models the first net's inaccuracies... if that even works.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T71d305816eb449ff-Me020dc6065dd42ed46520a6c
Delivery options: https://agi.topicbox.com/groups/agi/subscription
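One way to see why the "fooled or not" feedback is more than a bare yes/no bit: in a GAN the discriminator's verdict is a differentiable score, so the generator receives a full gradient backpropagated *through* the discriminator's weights, telling it in which direction to move its parameters. Here is a minimal 1-D sketch of one such generator update; all names, shapes, and constants are my own illustration, not from any particular paper or library:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup (hypothetical): generator g(z) = b_g + 0.5*z,
# discriminator D(x) = sigmoid(w_d*x + b_d).
b_g = 0.0             # generator parameter (mean of the fake samples)
w_d, b_d = 1.0, -1.5  # a fixed discriminator that prefers larger x

z = rng.normal(size=256)
fake = b_g + 0.5 * z  # generator samples

# Non-saturating generator loss: -log D(fake). The discriminator's
# judgment enters as a smooth score, not a binary fooled/not-fooled flag.
def gen_loss(b):
    return -np.log(sigmoid(w_d * (b + 0.5 * z) + b_d)).mean()

# Backprop through the discriminator: dL/db_g = mean((D(fake) - 1) * w_d).
# Note w_d appears in the gradient -- the signal flows through the other net.
d_fake = sigmoid(w_d * fake + b_d)
grad_b_g = np.mean((d_fake - 1.0) * w_d)

b_g_new = b_g - 0.1 * grad_b_g                  # one gradient step
print(gen_loss(b_g), "->", gen_loss(b_g_new))   # loss drops after the step
```

So the second net is not merely re-flagging errors the first net already saw; it supplies a new, learned loss surface, which is close to the "net modeling the other's inaccuracies" reading at the end of the question.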
