yaochengji commented on a change in pull request #7772: Use memcpy instead of assigning each individual element
URL: https://github.com/apache/incubator-mxnet/pull/7772#discussion_r137454782
 
 

 ##########
 File path: src/operator/tensor/cast_storage-inl.h
 ##########
 @@ -120,9 +119,7 @@ struct CastStorageRspDnsKernel {
     IType rid = idx[i];
     dim_t dns_offset = rid * row_length;
     dim_t rsp_offset = i * row_length;
-    for (dim_t col = 0; col < row_length; col++) {
-      dns[dns_offset + col] = data[rsp_offset + col];
-    }
+    memcpy(dns + dns_offset, data + rsp_offset, sizeof(DType) * row_length);
 
 Review comment:
   Thanks for your reply. I checked the `cudaMemcpyAsync` [method](http://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html#group__CUDART__MEMORY_1g85073372f776b4c4d5f89f7124b7bf79) in CUDA 8.0 and found that it is no longer limited to device-to-device copies.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
