sxjscience commented on a change in pull request #16104: Faster Transpose 2D
URL: https://github.com/apache/incubator-mxnet/pull/16104#discussion_r332219914
##########
File path: src/operator/tensor/matrix_op-inl.h
##########
@@ -257,6 +257,42 @@ struct TransposeParam : public dmlc::Parameter<TransposeParam> {
}
};
+
+/*!
+ * \brief This function performs transpose operation on a 2D matrix by utilizing the L1 cache
+ * \param in input tensor
+ * \param out output tensor
+ * \param row shape of dim 0 of input
+ * \param col shape of dim 1 of input
+ */
+template<typename DType>
+MSHADOW_XINLINE void Transpose2D(const DType *in, DType *out, index_t row, index_t col) {
+ // ensure cache line hits and prevent cache miss for any configuration
+ // L1 cache size to be utilized = 32kb = 2^15
+ // Largest size of a single unit of any dtype <= 8 byte = 2^3
+ // Number of elements - (2^15/2^3) = 2^12
+ // Block-size - 2^6 v 2^6 (64 v 64)
+
+ // But we could leverage unrolling of for loops (for parallelization)
+ // Block-size - 2^5 v 2^5 (32 v 32) with 4 pragma for loop unrolled
+ // blocksize * blocksize * num_threads = cache_size / dtype_size
+ index_t blocksize = 32;
+
+ for (index_t i = 0; i < row; i += blocksize) {
+ #pragma omp parallel for
+ for (index_t j = 0; j < col; j += blocksize) {
+ // transpose the block
+ #pragma unroll 4
+ for (index_t a = j; a < blocksize && a < col; ++a) {
Review comment:
Should it be `a < j + blocksize`?