marcoabreu edited a comment on issue #11120: Address already in use during tutorial test
URL: https://github.com/apache/incubator-mxnet/issues/11120#issuecomment-410216354
 
 
Happened again: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/NightlyTests_onBinaries/detail/NightlyTests_onBinaries/102/pipeline
   
```
+ nosetests-3.4 --with-xunit --xunit-file nosetests_straight_dope_python3_single_gpu.xml test_notebooks_single_gpu.py --nologcapture
.......[01:16:58] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:109: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
.....[01:28:00] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:109: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
[01:28:00] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:109: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
.Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 657, in launch_instance
    app.initialize(argv)
  File "<decorator-gen-123>", line 2, in initialize
  File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
    return method(app, *args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/kernelapp.py", line 456, in initialize
    self.init_sockets()
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/kernelapp.py", line 238, in init_sockets
    self.shell_port = self._bind_socket(self.shell_socket, self.shell_port)
  File "/usr/local/lib/python3.5/dist-packages/ipykernel/kernelapp.py", line 180, in _bind_socket
    s.bind("tcp://%s:%i" % (self.ip, port))
  File "zmq/backend/cython/socket.pyx", line 549, in zmq.backend.cython.socket.Socket.bind
  File "zmq/backend/cython/checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Address already in use
ERROR:root:Kernel died before replying to kernel_info
F...........................[01:48:50] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:109: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
.

======================================================================
FAIL: test_generative_adversarial_networks (test_notebooks_single_gpu.StraightDopeSingleGpuTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/work/mxnet/tests/nightly/straight_dope/test_notebooks_single_gpu.py", line 274, in test_generative_adversarial_networks
    assert _test_notebook('chapter14_generative-adversarial-networks/conditional')
AssertionError

----------------------------------------------------------------------
Ran 42 tests in 2291.515s

FAILED (failures=1)
build.py: 2018-08-03 01:49:31,003 Running of command in container failed (1):
nvidia-docker\
     run\
     --rm\
     -t\
     --shm-size=500m\
     -v\
     /home/jenkins_slave/workspace/straight_dope-single_gpu:/work/mxnet\
     -v\
     /home/jenkins_slave/workspace/straight_dope-single_gpu/build:/work/build\
     -v\
     /tmp/ci_ccache:/work/ccache\
     -u\
     1001:1001\
     -e\
     CCACHE_MAXSIZE=500G\
     -e\
     CCACHE_TEMPDIR=/tmp/ccache\
     -e\
     CCACHE_DIR=/work/ccache\
     -e\
     CCACHE_LOGFILE=/tmp/ccache.log\
     mxnetci/build.ubuntu_nightly_gpu\
     /work/runtime_functions.sh\
     nightly_straight_dope_python3_single_gpu_tests

build.py: 2018-08-03 01:49:31,003 You can get into the container by adding the -i option
Traceback (most recent call last):
  File "ci/build.py", line 408, in <module>
    sys.exit(main())
  File "ci/build.py", line 337, in main
    local_ccache_dir=args.ccache_dir, interactive=args.interactive)
  File "ci/build.py", line 224, in container_run
    raise subprocess.CalledProcessError(ret, cmd)
subprocess.CalledProcessError: Command 'nvidia-docker\
     run\
     --rm\
     -t\
     --shm-size=500m\
     -v\
     /home/jenkins_slave/workspace/straight_dope-single_gpu:/work/mxnet\
     -v\
     /home/jenkins_slave/workspace/straight_dope-single_gpu/build:/work/build\
     -v\
     /tmp/ci_ccache:/work/ccache\
     -u\
     1001:1001\
     -e\
     CCACHE_MAXSIZE=500G\
     -e\
     CCACHE_TEMPDIR=/tmp/ccache\
     -e\
     CCACHE_DIR=/work/ccache\
     -e\
     CCACHE_LOGFILE=/tmp/ccache.log\
     mxnetci/build.ubuntu_nightly_gpu\
     /work/runtime_functions.sh\
     nightly_straight_dope_python3_single_gpu_tests' returned non-zero exit status 1
script returned exit code 1
```
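
For context, the failure mode in the traceback above — `zmq.error.ZMQError: Address already in use` raised from `Socket.bind` — is the OS-level `EADDRINUSE` condition: two processes trying to bind the same TCP port. A minimal sketch of that collision, using only Python's standard `socket` module rather than pyzmq (so the port numbers and sockets here are illustrative, not the kernel's actual shell port):

```python
import errno
import socket

# Bind a listener to an ephemeral port, then try to bind a second
# socket to the same address: the second bind fails with EADDRINUSE,
# the same OS-level error ZMQ reports as "Address already in use".
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
got_eaddrinuse = False
try:
    second.bind(("127.0.0.1", port))    # collides with the first bind
except OSError as exc:
    got_eaddrinuse = exc.errno == errno.EADDRINUSE
finally:
    second.close()
    first.close()

print(got_eaddrinuse)  # prints True
```

In this run the kernel appears to have been started with fixed connection-file ports that a previous or concurrent kernel still held, so `_bind_socket` failed outright instead of falling back to a fresh ephemeral port.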

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
