anirudh2290 commented on a change in pull request #16654: Multithreaded Inference Support
URL: https://github.com/apache/incubator-mxnet/pull/16654#discussion_r367063730
##########
File path: src/imperative/cached_op_threadsafe.cc
##########

@@ -0,0 +1,329 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include <unordered_set>
+#include <iostream>
+#include "./imperative_utils.h"
+#include "../executor/exec_pass.h"
+#include "./cached_op_threadsafe.h"
+#include "../profiler/profiler.h"
+#include "../operator/operator_common.h"
+#include "../operator/subgraph/common.h"
+
+namespace mxnet {
+
+DMLC_REGISTER_PARAMETER(CachedOpThreadSafeConfig);

Review comment:

> Is it necessary to have another implementation for cached op? What does it take to make the original cached op thread safe? This makes the code hard to maintain.

Hi @eric-haibin-lin, the scope of this project was to allow multithreaded inference with the cached op. This is used in the DJL project (https://github.com/awslabs/djl). There were two options for the implementation:

1. include all these changes behind a different code path inside the existing cached op code, or
2. create a new implementation for a thread-safe cached op.

Between the two, I find the latter easier to maintain.
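The usage pattern at stake here — several worker threads sharing one model for concurrent inference — can be sketched generically. This is only an illustration of the access pattern, not the MXNet API: `predict` below is a hypothetical stand-in for a thread-safe cached op's forward call.

```python
import threading

def predict(x):
    # Hypothetical stand-in for a thread-safe cached op: a pure function of
    # its input, so concurrent calls need no synchronization.
    return x * 2  # placeholder for a real forward pass

NUM_THREADS = 4
results = [None] * NUM_THREADS

def worker(idx):
    # Each thread runs inference against the shared, thread-safe "model".
    results[idx] = predict(idx)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
```

Making the original cached op safe under this pattern would mean auditing every piece of mutable state it touches across all its features, which is the maintenance burden the separate implementation avoids.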
To make the existing cached op thread safe (as @rondogency pointed out), it would require adding training support, bulking support, multi-context support, subgraph API support, and dynamic shape support, while ensuring there are no performance regressions. At the very least, a lot of additional testing would be required.

> Is there a plan of future works of thread_safe cached op?

As for future work on the thread-safe cached op, at this point we plan to continue maintaining the current version, addressing customer feedback, and fixing performance issues.

> Do we plan to merge it back to original cached op?

I am not aware of a plan for this currently.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

With regards,
Apache Git Services
