reminisce commented on issue #9012: Implementing New Operators
URL: https://github.com/apache/incubator-mxnet/issues/9012#issuecomment-351633685
 
 
   1.
   i-ii. You can take a look at this line: https://github.com/apache/incubator-mxnet/pull/8302/files#diff-8417803f764246e7864dc1d54c256dcfR661. It gives the indices of the aux data within the input data vector, registered in the form `vector<uint32_t>{3, 4}` (see the registration sketch after this list).
   iii. MXNet only supports dynamically allocating memory for sparse tensors in the `row_sparse` and `csr` formats (see the allocation sketch after this list).
   iv. I guess inputs[1] and inputs[2] store the grads of `mean` and `var`. Does that explain why there are 11 inputs to you?
   2.
   i. The weight param in convolution just specifies the shape of the filter; the graph executor will allocate a tensor for it. Once you define the backward function for an input tensor, it becomes learnable (see the input-naming sketch after this list).
   ii. Right now, MXNet only supports allocating temp space once per forward/backward function. You can pre-calculate the total space you need and allocate it all in one request (see the workspace sketch after this list).
   3. Agree that too many macros lower the readability of the code and make tracing with gdb harder. But the ones you listed are mostly the common ones you will see while writing cpp code, and they save you from writing quite a lot of boilerplate (one example follows this list).
   4. I guess so. You just need to write a correct Makefile that includes the headers and links the MXNet lib (see the Makefile sketch after this list).
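
For 1.i-ii, a minimal sketch of how those indices get registered through nnvm's `FMutateInputs` attribute. The op name `MyBatchNorm` and the five-input layout (data, gamma, beta, moving_mean, moving_var) are assumptions patterned after BatchNorm, not the exact registration in that PR:

```cpp
#include <cstdint>
#include <vector>
#include <nnvm/op.h>
#include <nnvm/op_attr_types.h>

// Illustrative registration: FMutateInputs returns the positions of the
// aux data within the input vector, telling the executor those inputs
// are mutated in place rather than treated as read-only arguments.
NNVM_REGISTER_OP(MyBatchNorm)
.set_num_inputs(5)   // data, gamma, beta, moving_mean, moving_var
.set_num_outputs(3)
.set_attr<nnvm::FMutateInputs>("FMutateInputs",
  [](const nnvm::NodeAttrs& attrs) {
    return std::vector<uint32_t>{3, 4};  // aux data indices
  });
```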
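For 1.iii, a sketch of that dynamic allocation for a `row_sparse` NDArray, assuming an FComputeEx-style context where the number of non-zero rows (`nnr`) is only known at runtime; the helper name is made up:

```cpp
#include <dmlc/logging.h>
#include <mshadow/tensor.h>
#include <mxnet/ndarray.h>

// Illustrative helper: CheckAndAlloc takes the aux shapes (for
// row_sparse, the shape of the row-index array) and allocates the
// index and data chunks on demand once nnr is known.
void AllocRowSparseOutput(const mxnet::NDArray& out, mxnet::index_t nnr) {
  CHECK_EQ(out.storage_type(), mxnet::kRowSparseStorage);
  out.CheckAndAlloc({mshadow::Shape1(nnr)});
}
```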
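For 2.i, a sketch of the naming side: listing `weight` among an op's input names is what lets the graph executor allocate a tensor for it, and a registered backward path for that input is what makes it learnable. `MyConvolution` and the three-input layout are illustrative:

```cpp
#include <string>
#include <vector>
#include <nnvm/op.h>
#include <nnvm/op_attr_types.h>

// Illustrative registration: "weight" is just another input tensor whose
// shape is fixed by the op's params; the executor allocates it like any
// other argument.
NNVM_REGISTER_OP(MyConvolution)
.set_num_inputs(3)
.set_num_outputs(1)
.set_attr<nnvm::FListInputNames>("FListInputNames",
  [](const nnvm::NodeAttrs& attrs) {
    return std::vector<std::string>{"data", "weight", "bias"};
  });
```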
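For 2.ii, a sketch of the allocate-once pattern, assuming the op's registration already requests `ResourceRequest::kTempSpace`; the function name, sizes, and buffer names are illustrative:

```cpp
#include <mshadow/tensor.h>
#include <mxnet/op_attr_types.h>  // mxnet::OpContext
#include <mxnet/resource.h>

// Illustrative forward body: request one workspace covering both scratch
// buffers, then carve it up manually, since only a single temp-space
// allocation per forward/backward call is supported.
template <typename xpu, typename DType>
void ForwardWithWorkspace(const mxnet::OpContext& ctx,
                          size_t size_a, size_t size_b) {
  using namespace mshadow;
  Stream<xpu>* s = ctx.get_stream<xpu>();
  Tensor<xpu, 1, DType> workspace =
      ctx.requested[0].get_space_typed<xpu, 1, DType>(
          Shape1(size_a + size_b), s);
  Tensor<xpu, 1, DType> buf_a(workspace.dptr_, Shape1(size_a), s);
  Tensor<xpu, 1, DType> buf_b(workspace.dptr_ + size_a, Shape1(size_b), s);
  // ... use buf_a and buf_b as scratch ...
}
```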
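For 3, one example of the kind of boilerplate such macros remove: mshadow's `MSHADOW_TYPE_SWITCH` (a common one in operator code) expands to a switch over every supported dtype, so the kernel body is written once instead of once per type:

```cpp
#include <mshadow/base.h>
#include <mxnet/tensor_blob.h>

// Without the macro, this would be a hand-written switch over float,
// double, half, uint8, ... each repeating the same loop body.
void FillOnes(const mxnet::TBlob& out) {
  MSHADOW_TYPE_SWITCH(out.type_flag_, DType, {
    DType* ptr = out.dptr<DType>();
    for (size_t i = 0; i < out.Size(); ++i) {
      ptr[i] = DType(1);
    }
  });
}
```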
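And for 4, a minimal Makefile sketch. `MXNET_ROOT`, the submodule include paths, and the target names are assumptions; they depend on your checkout layout and how the lib was built:

```make
# Illustrative only: adjust MXNET_ROOT and the submodule include paths
# (mshadow, dmlc-core, nnvm) to match your checkout.
MXNET_ROOT ?= $(HOME)/incubator-mxnet

CXX      ?= g++
CXXFLAGS += -std=c++11 -O2 \
            -I$(MXNET_ROOT)/include \
            -I$(MXNET_ROOT)/mshadow \
            -I$(MXNET_ROOT)/dmlc-core/include \
            -I$(MXNET_ROOT)/nnvm/include
LDFLAGS  += -L$(MXNET_ROOT)/lib -lmxnet

# Recipe line must start with a tab.
my_op_test: my_op_test.cc
	$(CXX) $(CXXFLAGS) -o $@ $< $(LDFLAGS)
```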
   
