anirudh2290 commented on a change in pull request #16654: Multithreaded Inference Support
URL: https://github.com/apache/incubator-mxnet/pull/16654#discussion_r365411936
########## File path: docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md ##########
@@ -0,0 +1,199 @@
+---
+layout: page_api
+title: Multi Threaded Inference
+action: Get Started
+action_url: /get_started
+permalink: /api/cpp/docs/tutorials/multi_threaded_inference
+is_tutorial: true
+tag: cpp
+---
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements. See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership. The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License. You may obtain a copy of the License at -->
+
+<!--- http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied. See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+## Multi Threaded Inference API
+
+A long-standing request from MXNet users has been the ability to invoke parallel inference on a model from multiple threads while sharing the parameters.
+With this use case in mind, a thread-safe version of CachedOp was added to give MXNet users a way to run multi-threaded inference.
+This doc attempts to do the following:
+1. Discuss the current state of thread safety in MXNet
+2. Explain how one can use the C API and the thread-safe version of CachedOp, along with the CPP package, to achieve multi-threaded inference. This will be useful for end users as well as frontend developers of different language bindings
+3. Discuss the limitations of the above approach
+4. Future Work
+
+## Current state of Thread Safety in MXNet
+
+Examining the current state of thread safety in MXNet, we can arrive at the following conclusions:
+
+1. The MXNet Dependency Engine is thread safe (except for WaitToRead invoked inside a spawned thread; please see the Limitations section)
+2. The Graph Executor, which is the backend for the Module/Symbolic/C Predict APIs, is not thread safe

Review comment:
   Thanks for the kind words @eric-haibin-lin. For the graph executor, I saw core dumps with memory leaks for graph executor bind, which indicated that Graph Executor Bind was not thread safe. I didn't dig deeper into what exactly in Graph Executor bind causes this; it could be one of the NNVM passes. For CachedOp, the issues were: 1. not using thread-local variables for the intermediate-states buffer (buff) in CachedOp::DynamicForward, and 2. a hang in the accept4 call in the CUDA library when the op push is not serialized in CachedOp::Forward.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

With regards,
Apache Git Services
