eric-haibin-lin commented on a change in pull request #10025: Language model with Google's billion words dataset
URL: https://github.com/apache/incubator-mxnet/pull/10025#discussion_r173073054
 
 

 ##########
 File path: src/operator/nn/fully_connected.cc
 ##########
 @@ -87,8 +90,16 @@ void FullyConnectedComputeExCPU(const nnvm::NodeAttrs& attrs,
     return;
   }
   FallBackCompute(FullyConnectedCompute<cpu>, attrs, ctx, inputs, req, outputs);
+#else
+  std::vector<TBlob> in_blobs(inputs.size());
+  for (size_t i = 0; i < in_blobs.size(); i++) in_blobs[i] = inputs[i].data();
+  std::vector<TBlob> out_blobs(outputs.size());
+  for (size_t i = 0; i < out_blobs.size(); i++) out_blobs[i] = outputs[i].data();
+  FullyConnectedCompute<cpu>(attrs, ctx, in_blobs, req, out_blobs);
+#endif
 
 Review comment:
   Does MKL support the kFComputeFallback dispatch mode?
   Are you both referring to lines 93-99? `FallBackCompute` is only defined when `USE_MKL=1`. Can I still use it?
   What I need to address is the following case for inference:
   - data = dense
   - weight = rowsparse
   - bias = rowsparse
   - output = dense
   But I don't know how to handle this case efficiently with `USE_MKL=1`.
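   For reference, here is a minimal, self-contained C++ sketch (not MXNet code; `RowSparse`, `ToDense`, and `FullyConnectedDense` are illustrative names) of what the dense fallback amounts to for this case: the row-sparse weight is densified first, and then the ordinary dense fully connected computation `out = data * weight^T + bias` is applied. The bias is shown already densified for brevity; a sparse-aware path would instead touch only the stored rows.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Illustrative row-sparse matrix: only the rows listed in row_idx are
// stored, packed contiguously (row-major) in values.
struct RowSparse {
  size_t num_rows, num_cols;
  std::vector<size_t> row_idx;  // indices of the stored (non-zero) rows
  std::vector<float> values;    // row_idx.size() * num_cols entries
};

// Densify: scatter the stored rows into a full num_rows x num_cols matrix.
std::vector<float> ToDense(const RowSparse& rsp) {
  std::vector<float> dense(rsp.num_rows * rsp.num_cols, 0.0f);
  for (size_t r = 0; r < rsp.row_idx.size(); ++r)
    for (size_t c = 0; c < rsp.num_cols; ++c)
      dense[rsp.row_idx[r] * rsp.num_cols + c] = rsp.values[r * rsp.num_cols + c];
  return dense;
}

// Dense fully connected: out[n][h] = sum_k data[n][k] * weight[h][k] + bias[h].
// data is batch x in_dim, weight is num_hidden x in_dim, bias is num_hidden.
std::vector<float> FullyConnectedDense(const std::vector<float>& data,
                                       size_t batch, size_t in_dim,
                                       const std::vector<float>& weight,
                                       const std::vector<float>& bias,
                                       size_t num_hidden) {
  std::vector<float> out(batch * num_hidden, 0.0f);
  for (size_t n = 0; n < batch; ++n)
    for (size_t h = 0; h < num_hidden; ++h) {
      float acc = bias[h];
      for (size_t k = 0; k < in_dim; ++k)
        acc += data[n * in_dim + k] * weight[h * in_dim + k];
      out[n * num_hidden + h] = acc;
    }
  return out;
}

int main() {
  // Toy example: batch = 1, in_dim = 3, num_hidden = 4; only rows 1 and 3
  // of the weight matrix are stored.
  std::vector<float> data = {1.0f, 2.0f, 3.0f};
  RowSparse weight{4, 3, {1, 3}, {0.1f, 0.2f, 0.3f, 0.4f, 0.5f, 0.6f}};
  std::vector<float> bias = {0.0f, 1.0f, 0.0f, 1.0f};  // shown dense here
  std::vector<float> out =
      FullyConnectedDense(data, 1, 3, ToDense(weight), bias, 4);
  for (float v : out) std::printf("%f\n", v);  // 0, 2.4, 0, 4.2
  return 0;
}
```

   A sparse-aware implementation would skip `ToDense` for the weight and iterate only over the rows in `row_idx`, which is presumably where the efficiency concern above comes from.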
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
