access2rohit commented on a change in pull request #12637: [MXNET-912]
Refactoring ctc loss operator
URL: https://github.com/apache/incubator-mxnet/pull/12637#discussion_r223160061
##########
File path: tests/python/unittest/test_operator.py
##########
@@ -4619,6 +4647,85 @@ def check_ctc_loss_grad(blank_label): # from tf
label_lens = np.array([5, 4], dtype=np.int32)
loss_truth = np.array([-loss_log_prob_0, -loss_log_prob_1], np.float32)
+ with default_context():
+ data = mx.nd.array(inputs)
+ label = mx.nd.array(labels)
+ data.attach_grad()
+ with mx.autograd.record():
+ l = mx.ndarray.CTCLoss(data, label,
+ use_data_lengths=True,
+ use_label_lengths=True,
+ data_lengths=mx.nd.array(seq_lens),
+ label_lengths=mx.nd.array(label_lens),
+ blank_label=blank_label)
+ l.backward()
+ assert_almost_equal(l.asnumpy(), loss_truth, atol=1e-5, rtol=1e-5)
+ assert_almost_equal(data.grad.asnumpy(), grad_truth, atol=1e-5, rtol=1e-5)
+
+ # check contrib operator for backward compatibility
+ def check_contrib_ctc_loss_grad(blank_label): # from tf
+ vocab_size = 5
+ max_label_len = 5
+ padding_mask = -1 + (blank_label == 'first')
+
+ targets_0 = [0, 1, 2, 1, 0]
+ loss_log_prob_0 = -3.34211
+ input_prob_matrix_0 = np.asarray(
+ [[0.633766, 0.221185, 0.0917319, 0.0129757, 0.0142857, 0.0260553],
+ [0.111121, 0.588392, 0.278779, 0.0055756, 0.00569609, 0.010436],
+ [0.0357786, 0.633813, 0.321418, 0.00249248, 0.00272882, 0.0037688],
+ [0.0663296, 0.643849, 0.280111, 0.00283995, 0.0035545, 0.00331533],
+ [0.458235, 0.396634, 0.123377, 0.00648837, 0.00903441, 0.00623107]],
Review comment:
Is it possible to use a Python library to compute these losses manually and then verify against them?
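
For reference, the CTC forward algorithm is small enough to implement directly in plain NumPy and check against the quoted constants (the hard-coded probabilities and `loss_log_prob_0 = -3.34211` in this hunk come from the same alignment problem). A minimal sketch, assuming probabilities are already normalized per timestep; the function name is illustrative and not part of the test suite:

```python
import numpy as np

def ctc_forward_nll(probs, targets, blank):
    """Negative log likelihood of `targets` under the CTC forward
    algorithm, given per-timestep class probabilities `probs` of
    shape (T, num_classes) and the index of the blank symbol."""
    # Extended label sequence: blanks interleaved around every target.
    ext = [blank]
    for t in targets:
        ext += [t, blank]
    S, T = len(ext), probs.shape[0]

    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]   # start with blank
    alpha[0, 1] = probs[0, ext[1]]   # or with the first label
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            # Skipping a blank is allowed only between distinct labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]

    # Valid paths end on the last label or the trailing blank.
    return -np.log(alpha[T - 1, S - 1] + alpha[T - 1, S - 2])

# The probability matrix and targets quoted in this hunk (blank = 5):
input_prob_matrix_0 = np.asarray(
    [[0.633766, 0.221185, 0.0917319, 0.0129757, 0.0142857, 0.0260553],
     [0.111121, 0.588392, 0.278779, 0.0055756, 0.00569609, 0.010436],
     [0.0357786, 0.633813, 0.321418, 0.00249248, 0.00272882, 0.0037688],
     [0.0663296, 0.643849, 0.280111, 0.00283995, 0.0035545, 0.00331533],
     [0.458235, 0.396634, 0.123377, 0.00648837, 0.00903441, 0.00623107]])
nll = ctc_forward_nll(input_prob_matrix_0, [0, 1, 2, 1, 0], blank=5)
```

`nll` should come out close to 3.34211, matching `-loss_log_prob_0`; the same routine could be used in the test in place of the borrowed constants, at the cost of trusting the reimplementation rather than an independent source.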
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services