[ 
https://issues.apache.org/jira/browse/MXNET-11?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394829#comment-16394829
 ] 

Patric Zhao commented on MXNET-11:
----------------------------------

[~cjolivier01] Thanks for the information.

I am not very familiar with the Python-to-C interface. But MXNet supports multiple 
frontends, like Python, R, and Scala, so a more general method would be preferable.

Maybe we could create pthreads in the backend from some starting point, and then 
have each pthread execute a different graph, or the same graph with different input data?

 

Some of our test data is below; we just launched several MXNet instances by putting 
& at the end of the command line.

Tested on 1 socket with 28 physical cores. As you can see, the total throughput 
is improved a lot!
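The launch scheme above can be sketched roughly as follows. This is only a sketch, assuming taskset for core pinning and OMP_NUM_THREADS to limit each instance's thread pool; the actual inference command is a placeholder, not a real script name.

```shell
#!/bin/sh
# Sketch: split 28 physical cores into N instances of C cores each,
# pin each instance to its own core range, and run them in parallel.
TOTAL_CORES=28
CORES_PER_INST=4
NUM_INST=$((TOTAL_CORES / CORES_PER_INST))   # 7 instances in this case

i=0
while [ "$i" -lt "$NUM_INST" ]; do
  START=$((i * CORES_PER_INST))
  END=$((START + CORES_PER_INST - 1))
  # Replace 'echo' with the real benchmark command, e.g.:
  #   OMP_NUM_THREADS=$CORES_PER_INST taskset -c $START-$END \
  #       python <your_inference_script> --batch-size 1 &
  echo "OMP_NUM_THREADS=$CORES_PER_INST taskset -c $START-$END <inference command> &"
  i=$((i + 1))
done
wait   # wait for all background instances to finish
```

The key point is that each instance gets a disjoint core range, so the instances do not fight over the same cores.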

*I think this idea is really worth trying!*

 
||Batch size 1, multiple instances: overall throughput (images/sec)||Inception-v3||ResNet-50||SSD-VGG||
|1 core, 28 instances|200.82|203.16|39.46|
|2 cores, 14 instances|189.42|199.72|34.94|
|4 cores, 7 instances|171.25|195.02|29.28|
|7 cores, 4 instances|137.35|168.04|23.06|
|14 cores, 2 instances|95.28|110.09|15.64|
|28 cores, 1 instance|59.71|78.82|9.48|

 

 

> Multithreaded Inference
> -----------------------
>
>                 Key: MXNET-11
>                 URL: https://issues.apache.org/jira/browse/MXNET-11
>             Project: Apache MXNet
>          Issue Type: Epic
>          Components: MXNet Engine
>            Reporter: Chris Olivier
>            Priority: Major
>              Labels: inference
>
> Add the ability to do multithreaded inference without using fork() or using 
> multiple copies of a given model



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
