Thanks for the thoughtful and valuable comments, @arcadiaphy.
> I've deployed many models with scala API, and run them in multiple threads.
> The whole system has run smoothly in production environment for more than 2
> months.
> The backend of inference is graph executor, which is created for
@anirudh2290 Just saw this RFC. Let me share what I've done with multithreaded
inference; I think it's the only viable approach in MXNet right now.
I've deployed many models with the Scala API and run them in multiple threads.
The whole system has run smoothly in a production environment for more than two
months.
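Not part of the thread itself, but the pattern described above can be sketched in a few lines. This is a hedged illustration only: since a graph executor is not thread-safe, each worker thread gets its own executor while the loaded weights are shared read-only. `FakeExecutor`, `forward`, and the toy "weights" below are invented stand-ins, not actual MXNet Scala or C API names.

```python
import threading

class FakeExecutor:
    """Stand-in for a per-thread graph executor bound to shared weights."""
    def __init__(self, weights):
        self.weights = weights                # shared, read-only parameters

    def forward(self, x):
        return [w * x for w in self.weights]  # dummy "inference"

def worker(weights, x, results, i):
    # One executor per thread: no mutable state is shared between threads,
    # so no locking is needed around the forward pass.
    executor = FakeExecutor(weights)
    results[i] = executor.forward(x)

weights = [1, 2, 3]                           # loaded once, shared by all threads
results = [None] * 4
threads = [threading.Thread(target=worker, args=(weights, 2, results, i))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(results[0])                             # [2, 4, 6] from every thread
```

The design choice this models: parallelism comes from replicating the (cheap) executor per thread, not from making a single executor concurrent.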
@ptrendx I am trying to open a PR by Friday. On the status: the two
prerequisite issues https://github.com/dmlc/dmlc-core/pull/573 and
https://github.com/apache/incubator-mxnet/issues/16434 have been better
understood and fixed or worked around. I have made C API and backend changes
and currently
Hi @anirudh2290, what is the status of this proposal? When do you think the
changes will be ready?
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/16431#issuecomment-545681158
Thanks, @marcoabreu!
> Will the new C API functions be thread-safe in general? That is, can I
> invoke them at any point in time from any thread without the need for a
> lock, a sticky thread, or a thread hierarchy? (I'm thinking of the
> thread safety being done at the backend level.)
The issue I
Thanks to @nswamy for his input and the design discussions related to this
project, and to @frankfliu for explaining the requirements and the use case
from the customer perspective.
# Problem Statement
One of the big use cases MXNet does not yet cater for is loading a model and
being able to run parallel