arcadiaphy edited a comment on issue #15576: Multi-threaded inference broken 
with MKLDNN
URL: 
https://github.com/apache/incubator-mxnet/issues/15576#issuecomment-512695364
 
 
   @pengzhao-intel @wuxun-zhang @ZhennanQin 
   
   The difference between `MXPredCreateMultiThread` and create executor 
separately is that the former  shares model parameters. I've tested that if the 
weight sharing is disabled by moving `arg_arrays` and `aux_arrays` creation 
into for loop, then the  result is normal even with MKLDNN.
   
    ```cpp
      for (int i = 0; i < num_threads; i++) {
        // Give each thread its own copies of the parameters instead of
        // sharing the NDArrays across predictors.
        std::vector<NDArray> arg_arrays, aux_arrays;
        for (size_t j = 0; j < arg_shapes.size(); ++j) {
          NDArray nd = NDArray(arg_shapes[j], ctx);
          if (arg_params.count(arg_names[j]) != 0) {
            CopyFromTo(arg_params[arg_names[j]], &nd);
          }
          arg_arrays.push_back(nd);
        }
        for (size_t j = 0; j < aux_shapes.size(); ++j) {
          NDArray nd = NDArray(aux_shapes[j], ctx);
          if (aux_params.count(aux_names[j]) != 0) {
            CopyFromTo(aux_params[aux_names[j]], &nd);
          }
          aux_arrays.push_back(nd);
        }
    
        std::unique_ptr<MXAPIPredictor> ret(new MXAPIPredictor());
        ret->sym = sym;
        ret->ctx = ctx;
        ret->key2arg = key2arg;
        ret->arg_arrays = arg_arrays;
        ret->aux_arrays = aux_arrays;
        ret->out_shapes = out_shapes;
    
        if (!lazy) {
          std::map<std::string, Context> ctx_map;
          std::vector<NDArray> grad_store(arg_arrays.size());
          std::vector<OpReqType> grad_req(arg_arrays.size(), kNullOp);
          ret->exec.reset(Executor::Bind(sym, ctx, ctx_map,
                                         arg_arrays,
                                         grad_store, grad_req,
                                         aux_arrays));
          ret->out_arrays = ret->exec->outputs();
        }
        out[i] = ret.release();
      }
    ```
   
   It's very strange: I would expect the model parameters to be read-only during inference, so why would sharing them affect the MKLDNN path?
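   
   One possible explanation (an assumption on my part, not something I've confirmed in the MXNet source) is that the parameters are not truly read-only: MKLDNN may lazily write a reordered copy of each weight into the shared array on first use, and two executors hitting that lazy write concurrently would race. If so, an alternative to per-thread copies would be to guard the one-time write, e.g. with `std::call_once`. Below is a minimal standalone sketch of that pattern; the `CachedWeight` type and its fields are hypothetical stand-ins, not MXNet code.
   
    ```cpp
    #include <atomic>
    #include <cassert>
    #include <mutex>
    #include <thread>
    #include <vector>
    
    // Hypothetical stand-in for a shared parameter whose reordered copy is
    // produced lazily on first use. Without the once_flag, two threads could
    // write `reordered` concurrently -- a data race on a "read-only" weight.
    struct CachedWeight {
      std::vector<float> data{1.f, 2.f, 3.f};
      std::vector<float> reordered;          // lazily filled cache
      std::once_flag flag;
      std::atomic<int> reorder_count{0};
    
      const std::vector<float>& Reordered() {
        std::call_once(flag, [this] {
          reorder_count.fetch_add(1);
          reordered = data;                  // pretend this is the MKLDNN reorder
        });
        return reordered;
      }
    };
    
    int main() {
      CachedWeight w;
      std::vector<std::thread> threads;
      for (int t = 0; t < 8; ++t)
        threads.emplace_back([&w] { (void)w.Reordered(); });
      for (auto& th : threads) th.join();
      // The cache is filled exactly once, no matter how many threads raced.
      assert(w.reorder_count.load() == 1);
      assert(w.Reordered().size() == 3);
      return 0;
    }
    ```
   
   With a guard like this, sharing the parameters would stay safe; copying them per thread (as in the snippet above) avoids the problem at the cost of extra memory.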
