[GitHub] cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-04 Thread GitBox
cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#discussion_r179326522
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base-inl.h
 ##
 @@ -334,11 +334,104 @@ const mkldnn::memory *GetWeights(const NDArray &arr,
  const mkldnn::memory::primitive_desc &target_pd,
  int num_groups);
 
-mkldnn_memory_format_t GetDefaultFormat(mkldnn::memory::desc desc);
+mkldnn_memory_format_t GetDefaultFormat(const mkldnn::memory::desc &desc);
 mkldnn_memory_format_t GetDefaultFormat(int num_dims);
 mkldnn::memory::primitive_desc GetPrimitiveDesc(mkldnn::memory::primitive_desc pd,
                                                 mkldnn_memory_format_t format);
 
+static inline bool same_shape(const TShape &shape, const mkldnn_dims_t dims, int ndims) {
+  if (shape.ndim() != (size_t)ndims)
+    return false;
+  for (int i = 0; i < ndims; i++)
+    if (shape[i] != dims[i])
+      return false;
+  return true;
+}
+
+static inline bool same_shape(const TShape &shape, int dtype,
 
 Review comment:
   same here


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-04 Thread GitBox
cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#discussion_r179326420
 
 

 ##
 File path: src/operator/nn/lrn-inl.h
 ##
 @@ -58,8 +58,35 @@ struct LRNParam : public dmlc::Parameter<LRNParam> {
 DMLC_DECLARE_FIELD(nsize)
 .describe("normalization window width in elements.");
   }
+
+  bool operator==(const LRNParam& other) const {
+return (fabs(this->alpha - other.alpha) < 1e-6 &&
 
 Review comment:
  it’s better to check the nsize first because it’s a far less expensive check than fabs()




[GitHub] cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-04 Thread GitBox
cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#discussion_r179326176
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base-inl.h
 ##
 @@ -334,11 +334,104 @@ const mkldnn::memory *GetWeights(const NDArray &arr,
  const mkldnn::memory::primitive_desc &target_pd,
  int num_groups);
 
-mkldnn_memory_format_t GetDefaultFormat(mkldnn::memory::desc desc);
+mkldnn_memory_format_t GetDefaultFormat(const mkldnn::memory::desc &desc);
 mkldnn_memory_format_t GetDefaultFormat(int num_dims);
 mkldnn::memory::primitive_desc GetPrimitiveDesc(mkldnn::memory::primitive_desc pd,
                                                 mkldnn_memory_format_t format);
 
+static inline bool same_shape(const TShape &shape, const mkldnn_dims_t dims, int ndims) {
 
 Review comment:
  don’t use static in a header; inline by itself is fine.




[GitHub] cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-04 Thread GitBox
cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#discussion_r179326773
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base-inl.h
 ##
 @@ -334,11 +334,104 @@ const mkldnn::memory *GetWeights(const NDArray &arr,
  const mkldnn::memory::primitive_desc &target_pd,
  int num_groups);
 
-mkldnn_memory_format_t GetDefaultFormat(mkldnn::memory::desc desc);
+mkldnn_memory_format_t GetDefaultFormat(const mkldnn::memory::desc &desc);
 mkldnn_memory_format_t GetDefaultFormat(int num_dims);
 mkldnn::memory::primitive_desc GetPrimitiveDesc(mkldnn::memory::primitive_desc pd,
                                                 mkldnn_memory_format_t format);
 
+static inline bool same_shape(const TShape &shape, const mkldnn_dims_t dims, int ndims) {
+  if (shape.ndim() != (size_t)ndims)
+    return false;
+  for (int i = 0; i < ndims; i++)
+    if (shape[i] != dims[i])
+      return false;
+  return true;
+}
+
+static inline bool same_shape(const TShape &shape, int dtype,
+                              const mkldnn::memory::desc &desc) {
+  return same_shape(shape, desc.data.dims, desc.data.ndims)
+      && get_mkldnn_type(dtype) == desc.data.data_type;
+}
+
+/*
+ * There is a large overhead of getting mkldnn::memory::primitive_desc from
+ * mkldnn::memory. This class is created to cache the metadata of mkldnn memory
+ * to provide a much more lightweight method to access them.
+ */
+class MKLDNNMemory {
+  std::shared_ptr<mkldnn::memory> mem;
+  mkldnn::memory::desc desc;
+  size_t size;  // The number of bytes.
+
+ public:
+  MKLDNNMemory(mkldnn::memory::primitive_desc pd, void *addr): desc(pd.desc()) {
+    mem.reset(new mkldnn::memory(pd, addr));
+    size = pd.get_size();
+  }
+
+  explicit MKLDNNMemory(std::shared_ptr<mkldnn::memory> mem): desc(
+      mem->get_primitive_desc().desc()) {
+    this->mem = mem;
+    auto pd = mem->get_primitive_desc();
 
 Review comment:
   nit: it isn’t clear what auto is here




[GitHub] cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-04 Thread GitBox
cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#discussion_r179326679
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base-inl.h
 ##
 @@ -334,11 +334,104 @@ const mkldnn::memory *GetWeights(const NDArray &arr,
  const mkldnn::memory::primitive_desc &target_pd,
  int num_groups);
 
-mkldnn_memory_format_t GetDefaultFormat(mkldnn::memory::desc desc);
+mkldnn_memory_format_t GetDefaultFormat(const mkldnn::memory::desc &desc);
 mkldnn_memory_format_t GetDefaultFormat(int num_dims);
 mkldnn::memory::primitive_desc GetPrimitiveDesc(mkldnn::memory::primitive_desc pd,
                                                 mkldnn_memory_format_t format);
 
+static inline bool same_shape(const TShape &shape, const mkldnn_dims_t dims, int ndims) {
+  if (shape.ndim() != (size_t)ndims)
+    return false;
+  for (int i = 0; i < ndims; i++)
+    if (shape[i] != dims[i])
+      return false;
+  return true;
+}
+
+static inline bool same_shape(const TShape &shape, int dtype,
+                              const mkldnn::memory::desc &desc) {
+  return same_shape(shape, desc.data.dims, desc.data.ndims)
+      && get_mkldnn_type(dtype) == desc.data.data_type;
+}
+
+/*
+ * There is a large overhead of getting mkldnn::memory::primitive_desc from
+ * mkldnn::memory. This class is created to cache the metadata of mkldnn memory
+ * to provide a much more lightweight method to access them.
+ */
+class MKLDNNMemory {
+  std::shared_ptr<mkldnn::memory> mem;
+  mkldnn::memory::desc desc;
+  size_t size;  // The number of bytes.
+
+ public:
+  MKLDNNMemory(mkldnn::memory::primitive_desc pd, void *addr): desc(pd.desc()) {
+    mem.reset(new mkldnn::memory(pd, addr));
+    size = pd.get_size();
+  }
+
+  explicit MKLDNNMemory(std::shared_ptr<mkldnn::memory> mem): desc(
 
 Review comment:
   can this pointer be passed by reference to reduce the interlocked operation?




[GitHub] cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.

2018-04-04 Thread GitBox
cjolivier01 commented on a change in pull request #10317: [MXNET-264] Improve performance of MKLDNN in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#discussion_r179325697
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -74,6 +74,7 @@ enum NDArrayFormatErr {
   kRSPIdxErr, // indices error for row sparse
 };
 
+class MKLDNNMemory;
 
 Review comment:
  one reason is that they cause annoying compile errors when used in pointer classes, when the compiler decides it needs the complete type in order to generate the destructor code, for instance, or during template instantiation of something that uses the type directly or indirectly. i’m not going to block the PR over it, and if you feel strongly that you want to use it then fine, but it’s not done much in the code base and that’s probably not an accident.

