chenxiwarm commented on issue #17512: MXNet _LIB.MXGetLastError() when calling .asscalar() on GPU context
URL: https://github.com/apache/incubator-mxnet/issues/17512#issuecomment-581809485
 
 
   Found the cause of the problem: it is not the `.asscalar()` function itself, but the fact that I did not use padding when loading data with the dataloader, so the training samples in the same batch did not all have the same length. The problem is solved by padding in the batchify function as follows:
   ```python
   import gluonnlp as nlp


   def load_data_no_bucket_sample(dataset, dataset_name, batch_size=64, lazy=True, shuffle=True):
       # Pad the data along axis 1 so all samples in a batch share the same
       # length, then stack the bow_vectors and labels into batch tensors.
       batchify_fn = nlp.data.batchify.Tuple(
           nlp.data.batchify.Pad(axis=1, pad_val=0, dtype="float32"),
           nlp.data.batchify.Stack(dtype="float32"),
       )
       dataloader = get_dataloader_for_a_dataset(
           dataset, batch_size, batchify_fn, dataset_name=dataset_name, lazy=lazy, shuffle=shuffle
       )
       return dataloader
   ```
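   For reference, here is a minimal sketch of what the padding does, assuming 1-D token-id sequences paired with scalar labels (the sample data below is made up for illustration): `Pad` extends every sequence in a batch to the length of the longest one, so the batch can be collated into a single tensor.
   ```python
   import gluonnlp as nlp

   # Hypothetical variable-length samples: (token-id sequence, label).
   samples = [([1, 2, 3], 0.0), ([4, 5], 1.0), ([6], 0.0)]

   batchify_fn = nlp.data.batchify.Tuple(
       nlp.data.batchify.Pad(axis=0, pad_val=0, dtype="float32"),  # pad each sequence to the batch maximum
       nlp.data.batchify.Stack(dtype="float32"),                   # stack the scalar labels
   )

   data, labels = batchify_fn(samples)
   print(data.shape)    # (3, 3): every sequence padded to length 3
   print(labels.shape)  # (3,)
   ```
   Note that `Pad`'s `axis` argument refers to an axis within each individual sample, so for 1-D sequences it is `axis=0`; the `axis=1` in the fix above presumably matches 2-D per-sample arrays such as the bow_vectors.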
   
