First of all, when you talk about "acceleration of running deep neural
network", what do you mean: training or inference?
If you are talking about inference, it actually doesn't matter which cluster
management system we use; we only need to run inference in
parallel/distributed fashion, since each input is scored independently.
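To illustrate what I mean by parallel inference being independent of the cluster manager: each sample is scored on its own, so any worker pool will do. A minimal sketch, with a hypothetical predict() stub standing in for a real model's forward pass:

```python
from concurrent.futures import ThreadPoolExecutor

def predict(sample):
    # Hypothetical stub for a real model's forward pass.
    return sum(sample)

def parallel_inference(samples, workers=4):
    # Inference is embarrassingly parallel: samples are independent,
    # so any pool of workers (threads, processes, or cluster nodes)
    # can score shards concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(predict, samples))
```

In a real deployment the worker pool would be cluster nodes rather than local threads, but the scheduling logic is the same.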
If you are talking about training, how are you going to speed it up? By using
distributed training on multiple machines? Or GPUs? Or maybe you just want
to speed up data preprocessing?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
