In dcgan.py, the code for updating G is as follows.
print(netD(fake))
with autograd.record():
    output = netD(fake)
    output = output.reshape((-1, 2))
    errG = loss(output, real_label)
    errG.backward()
trainerG.step(opt.batch_size)
print(netD(fake))
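One way to check whether the discriminator's weights really changed is to snapshot netD's parameters before the G update and compare them afterwards. The following is a minimal diagnostic sketch, not code from dcgan.py; it reuses netD, fake, loss, real_label, trainerG, and opt from the snippet above. Note that collect_params() also includes parameters with grad_req='null', such as BatchNorm running statistics, which are updated by the recorded forward pass itself rather than by trainer.step().

# Hypothetical diagnostic: copy every parameter of netD before the G step.
before = {name: p.data().copy() for name, p in netD.collect_params().items()}

with autograd.record():
    output = netD(fake).reshape((-1, 2))
    errG = loss(output, real_label)
    errG.backward()
trainerG.step(opt.batch_size)

# Report any netD parameter whose values differ after trainerG.step().
# BatchNorm running_mean/running_var are included here and can change
# during the recorded forward pass, independently of the optimizer.
for name, p in netD.collect_params().items():
    diff = (p.data() - before[name]).abs().sum().asscalar()
    if diff > 0:
        print(name, 'changed by', diff)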
I printed the output of netD(fake) before and after updating G and found that the two outputs differ. Does that mean netD was also updated? In my understanding, only the parameters of G should be updated when trainerG.step() is executed. So what is the problem?
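For reference, each Trainer in the Gluon DCGAN example is bound to a single network's parameter set, so trainerG.step() can only apply gradients to netG's parameters. A sketch of the assumed construction, following the layout of the example script (the hyperparameter names opt.lr and opt.beta1 are taken from its argument parser):

from mxnet import gluon

# Each Trainer only sees the parameters it was constructed with, so
# trainerG.step() cannot write to netD's weights.
trainerG = gluon.Trainer(netG.collect_params(), 'adam',
                         {'learning_rate': opt.lr, 'beta1': opt.beta1})
trainerD = gluon.Trainer(netD.collect_params(), 'adam',
                         {'learning_rate': opt.lr, 'beta1': opt.beta1})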
[ Full content available at: https://github.com/apache/incubator-mxnet/issues/12582 ]