cyh-ustc opened a new issue #14731: profile result seems mixed up
URL: https://github.com/apache/incubator-mxnet/issues/14731

# profile result seems mixed up

I tried to profile a network, but the result seems to be mixed up at the operator level, e.g.:

| Operators (truncated) | Total time | Self time | Avg time | Count |
| -- | -- | -- | -- | -- |
| [_backward_FullyConnected,_backward_FullyConnected,_backward_Concat,add_n,_backward_mul,_backward_Activation,_backward_broadcas | 34.913 ms | 0.000 ms | 34.913 ms | 1 |
| [_backward_WarpCTC,_backward_Concat,_backward_FullyConnected,_backward_Concat,_backward_mul,_backward_Activation,_backward_broa | 5.815 ms | 0.000 ms | 5.815 ms | 1 |
| [Activation,Activation,elemwise_mul,Concat,FullyConnected,broadcast_mul,FullyConnected,BatchNorm,FullyConnected,elemwise_add,Sl | 238.717 ms | 0.000 ms | 19.893 ms | 12 |
| [Activation,Activation,elemwise_mul,elemwise_add,broadcast_mul,elemwise_add,Activation,Activation,elemwise_mul,Concat,FullyConn | 237.716 ms | 0.000 ms | 19.810 ms | 12 |
| [Activation,Activation,elemwise_mul,FullyConnected,elemwise_add,SliceChannel,elemwise_add,Activation,elemwise_mul,broadcast_mul | 846.575 ms | 0.000 ms | 14.110 ms | 60 |

I wonder why the result is shown in a combined style ([A, FC, bFC, ...]) rather than per operator. Is there some technique in MXNet that executes several operators together to improve performance, or did I do something wrong in building and profiling?

OS: Ubuntu 16.04.6
Compiler: gcc-5.4, g++-5.4, CUDA 10.1
MXNet commit hash: 52a3553fe200214437c717e7b35e6ce39adb59d8

Build config:

```bash
cmake .. \
    -DUSE_CUDNN=OFF \
    -DUSE_CUDA=ON \
    -DUSE_MKLDNN=OFF \
    -DBLAS=Open \
    -DUSE_GPROF=ON
make -j
```

environment: https://github.com/apache/incubator-mxnet/tree/master/example/speech_recognition

edit (around line 137 of https://github.com/apache/incubator-mxnet/blob/master/example/speech_recognition/train.py):

```python
mx.profiler.set_config(profile_all=True, filename='profile_output.json')
while True:
    if n_epoch >= num_epoch:
        break
    loss_metric.reset()
    log.info('---------train---------')
    for nbatch, data_batch in enumerate(data_train):
        mx.profiler.set_state('run')   # prof start
        module.forward_backward(data_batch)
        module.update()
        # mxboard setting
        if (nbatch + 1) % show_every == 0:
            module.update_metric(loss_metric, data_batch.label)
        mx.profiler.set_state('stop')  # prof stop
```

(I also changed num_epoch to 1.)
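One thing I could try, assuming (this is only my guess, not a confirmed diagnosis) that the combined entries come from the executor running several consecutive operators as a single "bulked" engine operation for speed: disable bulk execution through MXNet's documented environment variables before importing mxnet, and also enable `aggregate_stats` so a per-operator summary can be printed with `mx.profiler.dumps()`. A minimal sketch:

```python
# Sketch under the assumption that operator bulking is what merges the entries.
# MXNET_EXEC_BULK_EXEC_TRAIN / MXNET_EXEC_BULK_EXEC_INFERENCE are documented
# MXNet environment variables; setting them to 0 disables bulk execution, so
# each operator should then appear as its own row in the profile.
import os
os.environ['MXNET_EXEC_BULK_EXEC_TRAIN'] = '0'
os.environ['MXNET_EXEC_BULK_EXEC_INFERENCE'] = '0'

import mxnet as mx

# Same profiler setup as in the snippet above, plus aggregate_stats=True so an
# aggregate per-operator summary is collected.
mx.profiler.set_config(profile_all=True,
                       aggregate_stats=True,
                       filename='profile_output.json')

# ... training loop with mx.profiler.set_state('run') / set_state('stop') ...

print(mx.profiler.dumps())  # print the aggregate per-operator statistics
```

If the operators then show up individually, the combined rows were presumably just bulked execution rather than a profiler problem.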
