chrishkchris opened a new issue #578: autograd.cross_entropy may have a problem
URL: https://github.com/apache/singa/issues/578

autograd.cross_entropy may have a problem, which I found while taking part in the review of PR #572.

In examples/autograd/mlp.py (multilayer perceptron), the result is:

```
ubuntu@ip-172-31-26-47:~/singa/examples/autograd$ python3 mlp.py
train_data_shape: (400, 2)
train_label_shape: (400, 2)
training loss = 0.6908062
training loss = 0.5960194
training loss = 0.57797414
training loss = 0.55334115
training loss = 0.48568404
training loss = 0.38458923
training loss = 0.30776194
training loss = 0.24188559
training loss = 0.18657134
training loss = 0.15864176
training loss = 0.13929243
```

However, if I use softmax + cross_entropy instead of softmax_cross_entropy, I get this error:

```
ubuntu@ip-172-31-26-47:~/singa/examples/autograd$ python3 mlp.py
train_data_shape: (400, 2)
train_label_shape: (400, 2)
training loss = 6.682101
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0113 09:20:05.180658 12032 tensor_math_cpp.h:357] Check failed: a > 0.f (-nan vs. 0)
*** Check failure stack trace: ***
Aborted (core dumped)
```

I did not suspect SoftMax, because I had compared its output with PyTorch's during the review of PR #572.
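For context, the change that triggers the crash is presumably along these lines (a hypothetical sketch of the mlp.py loss computation, not the exact diff):

```python
# works: fused operator
loss = autograd.softmax_cross_entropy(x, target)

# crashes after the first iteration: separate softmax, then cross_entropy
p = autograd.softmax(x)
loss = autograd.cross_entropy(p, target)
```

The failing check (`a > 0.f` in tensor_math_cpp.h) guards the input of a log, which is consistent with cross_entropy taking the log of a softmax probability that has underflowed to exactly 0 in float32. A minimal NumPy sketch of that failure mode, and of the log-sum-exp formulation a fused softmax_cross_entropy can use to avoid it (illustrative only, not SINGA code):

```python
import numpy as np

# With a large enough logit gap, float32 softmax underflows to exactly 0
# for the losing class, so a subsequent log() produces -inf / nan.
logits = np.array([[60.0, -60.0]], dtype=np.float32)
shifted = logits - logits.max(axis=1, keepdims=True)
p = np.exp(shifted)
p /= p.sum(axis=1, keepdims=True)
print(p)          # [[1. 0.]]   -- exp(-120) underflows in float32
print(np.log(p))  # [[0. -inf]] -- what a separate cross_entropy would consume

# Fused form: log p_i = x_i - logsumexp(x); never takes log of 0.
log_p = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
print(log_p)      # [[0. -120.]] -- finite, so the loss stays finite
```

If that is the cause, cross_entropy (or its gradient) would need to clip or shift its input, since it cannot assume strictly positive probabilities from an upstream softmax.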
