Hi, our essential goals are as follows: 1. distributed training of the model, and 2. using GPUs in addition to CPUs to accelerate execution. Does TensorFlow on Ignite implement both of the above? How much speed-up do you gain relative to pure TensorFlow?
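For reference, here is a minimal sketch of what both goals look like in plain TensorFlow 2.x (tf.keras), independent of Ignite. The model, data, and hyperparameters are illustrative placeholders, not anything from this thread; the point is only that `MirroredStrategy` replicates training across all visible GPUs on one machine (falling back to CPU with a single replica when no GPU is present), while `MultiWorkerMirroredStrategy` would be the multi-machine variant.

```python
# Hedged sketch: single-machine distributed, GPU-aware training in TensorFlow 2.x.
# All names below (layer sizes, toy data) are illustrative assumptions.
import numpy as np
import tensorflow as tf

# Replicates the model across every visible GPU; with no GPU it runs on CPU
# with one replica. For multiple machines, tf.distribute.MultiWorkerMirroredStrategy
# plays the analogous role.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables created inside the strategy scope are mirrored across replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy stand-in data; a real workload would use a tf.data.Dataset pipeline.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```

Any real speed-up over single-device TensorFlow depends on the model size, batch size, and hardware, so it has to be measured rather than assumed.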
On Tuesday, December 18, 2018, dmitrievanthony <[email protected]> wrote:
> First of all, when you are talking about "acceleration of running deep
> neural network", what do you mean, training or inference?
> If you are talking about inference, it actually doesn't matter what cluster
> management system we use, we need only to run parallel/distributed
> inference.
> If you are talking about training, how are you going to speed it up? Use
> distributed training on multiple machines? Or GPUs? Or maybe you just want
> to speed up data preprocessing?
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
