My dear friend, I had a look at TensorFlow on Apache Ignite. I have the same idea as you, with one addition: besides distributing the job to the local data, I would use the GPU in addition to the CPU cores. Do you agree with the overall idea? Is there anything to debate?
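To make the idea concrete, here is a rough sketch of what "GPU in addition to CPU" could look like with TensorFlow's explicit device placement (this assumes TensorFlow 1.x; `allow_soft_placement` lets the op fall back to CPU on machines without a GPU, so nothing here is specific to the Ignite integration):

```python
import tensorflow as tf

# Sketch only: place the input on CPU and the heavy op on GPU.
# allow_soft_placement=True makes TensorFlow fall back to CPU
# when no GPU device is available.
config = tf.ConfigProto(allow_soft_placement=True)

with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])

with tf.device("/device:GPU:0"):
    b = tf.matmul(a, a)

with tf.Session(config=config) as sess:
    print(sess.run(b))
```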
On Wednesday, December 19, 2018, dmitrievanthony <[email protected]> wrote:
> Yes, in TensorFlow on Apache Ignite we support distributed learning as you
> described it (please see the details in this documentation:
> <https://apacheignite.readme.io/docs/ignite-dataset>).
>
> Speaking about performance, TensorFlow supports distributed learning itself
> (please see the details here:
> <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute>).
> But to start distributed learning in pure TensorFlow you need to set up the
> cluster manually, distribute the training data between cluster nodes by hand,
> and handle node failures yourself.
>
> In TensorFlow on Apache Ignite we do all of that for you automatically. Apache
> Ignite plays the cluster manager role: it starts and maintains the TensorFlow
> cluster with an optimal configuration and handles node failures. At the same
> time, the training itself is still fully performed by TensorFlow. So the
> training performance is exactly the same as with pure TensorFlow on a properly,
> manually configured and started TensorFlow cluster, because we don't
> participate in the training process while the cluster is running properly.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
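For reference, the linked IgniteDataset documentation shows that reading training data straight out of an Ignite cache looks roughly like this (a sketch assuming TensorFlow 1.x with the `tensorflow.contrib.ignite` package and a local Ignite node on the default thin-client port; the cache name `IMAGES` is made up for illustration):

```python
import tensorflow as tf
from tensorflow.contrib.ignite import IgniteDataset

# Stream objects from an Ignite cache as a tf.data pipeline.
# "IMAGES" is a hypothetical cache name; host/port point at a
# local Ignite node listening on the default binary client port.
dataset = IgniteDataset(cache_name="IMAGES", host="localhost", port=10800)

iterator = dataset.make_one_shot_iterator()
next_obj = iterator.get_next()

with tf.Session() as sess:
    # Pull one object from the cache; in a real training job this
    # feeds the model input pipeline instead of being printed.
    print(sess.run(next_obj))
```

Because the dataset is just a regular `tf.data.Dataset`, the usual transformations (`map`, `batch`, `shuffle`) compose with it, which is how the training stays "fully performed by TensorFlow" as described above.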
