XiaotaoChen commented on a change in pull request #12926: parallelize NDArray::Copy<cpu, cpu> when data size is large
URL: https://github.com/apache/incubator-mxnet/pull/12926#discussion_r228051759
##########
File path: src/ndarray/ndarray_function.cc
##########
@@ -32,19 +32,34 @@
namespace mxnet {
namespace ndarray {
+template<typename DType>
+void OMPCopy(const TBlob &from, TBlob *to, const index_t size) {
+ DType* dst_dptr = to->dptr<DType>();
+ DType* src_dptr = from.dptr<DType>();
+  #pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
+ for (index_t i = 0; i < size; ++i) {
+ dst_dptr[i] = src_dptr[i];
+ }
+}
+
template<>
void Copy<cpu, cpu>(const TBlob &from, TBlob *to,
Context from_ctx, Context to_ctx,
RunContext ctx) {
MSHADOW_TYPE_SWITCH(to->type_flag_, DType, {
if (to->type_flag_ == from.type_flag_) {
- mshadow::Copy(to->FlatTo1D<cpu, DType>(),
- from.FlatTo1D<cpu, DType>());
+      index_t copy_block_size = dmlc::GetEnv("MXNET_CPU_PARALLEL_COPY_SIZE", 200000);
+ const index_t size = from.Size();
+ if (size >= copy_block_size) {
+ OMPCopy<DType>(from, to, size);
+ } else {
+ mshadow::Copy(to->FlatTo1D<cpu, DType>(), from.FlatTo1D<cpu, DType>());
Review comment:
We tested the performance of the OMP copy running on a single thread against the memcpy called by mshadow::Copy for data sizes up to 200,000; the results are below.

size | 20 | 200 | 2000 | 20000 | 200000
-- | -- | -- | -- | -- | --
memcpy (us) | 0.0422 | 0.038967 | 0.172933 | 2.213567 | 74.105064
omp copy single thread (us) | 0.254033 | 0.2407 | 0.389933 | 2.541833 | 49.168999
speedup | 0.16612015 | 0.16189032 | 0.443494139 | 0.870854616 | 1.507150146
The results show that memcpy outperforms direct element-wise assignment on a single thread when the data size is small. So we want to keep calling mshadow::Copy when the data size is less than MXNET_CPU_PARALLEL_COPY_SIZE. Looking forward to your suggestions. @szha
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services