absalama commented on issue #11855: Distributed learning with Async update does not work. URL: https://github.com/apache/incubator-mxnet/issues/11855#issuecomment-408604815

I did not set an optimizer, so I think the default one is used. Below is a more detailed log:

```
INFO:root:Starting new image-classification task:, Namespace(batch_norm=False, batch_size=128, builtin_profiler=0, data_dir='', dataset='dummy', dtype='float32', epochs=120, gpus='0', kvstore='dist_async', log_interval=50, lr=0.01, lr_factor=0.1, lr_steps='30,60,90', mode=None, model='alexnet', momentum=0.9, num_workers=4, prefix='', profile=False, resume='', save_frequency=10, seed=123, start_epoch=0, use_pretrained=False, use_thumbnail=False, wd=0.0005)
[17:42:05] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:107: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
terminate called after throwing an instance of 'dmlc::Error'
  what():  [17:42:08] src/kvstore/././kvstore_dist_server.h:294: Check failed: sync_mode_ Updater needs to be set for async mode
```

This is the command I use for training:

```
python3 launch.py -n 4 -s 1 -H ${NODES_DIR}/hosts --launcher slurm alex_train_single_GPU_dist_async.sh
```

and **_alex_train_single_GPU_dist.sh_** is as follows:

```
export USE_CUDA=1
export MXNET_CUDNN_AUTOTUNE_DEFAULT=1
export USE_PROFILER=1
export USE_CUDNN=1
export USE_NVRTC=1
export DMLC_INTERFACE="ib0"
export USE_DIST_KVSTORE=1
python $MXNET_GLUON_ASYNC/image_classification.py --dataset dummy --kvstore dist_async --gpus=0 --model alexnet --batch-size 128 --lr 0.01 --mom 0.9 --wd 0.0005
```
----------------------------------------------------------------
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: [email protected]

With regards,
Apache Git Services
