Coderx7 opened a new issue #4821: OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted and OSError: [WinError 10049] The requested address is not valid in its context
URL: https://github.com/apache/incubator-tvm/issues/4821
 
 
   This issue is a follow-up to #4819. After applying the changes from #4820, running the example [Auto-tuning a convolutional network for x86 CPU](https://docs.tvm.ai/tutorials/autotvm/tune_relay_x86.html#sphx-glr-tutorials-autotvm-tune-relay-x86-py) on Windows produces the error below.
   The error occurs in the last cell:
   ```python 
   def tune_and_evaluate(tuning_opt):
       # extract workloads from relay program
       print("Extract tasks...")
       mod, params, data_shape, out_shape = get_network(model_name, batch_size)
        tasks = autotvm.task.extract_from_program(mod["main"], target=target,
                                                  params=params,
                                                  ops=(relay.op.nn.conv2d,))
   
       # run tuning tasks
       print("Tuning...")
       tune_kernels(tasks, **tuning_opt)
       tune_graph(mod["main"], data_shape, log_file, graph_opt_sch_file)
   
       # compile kernels with graph-level best records
       with autotvm.apply_graph_best(graph_opt_sch_file):
           print("Compile...")
           with relay.build_config(opt_level=3):
               graph, lib, params = relay.build_module.build(
                   mod, target=target, params=params)
   
           # upload parameters to device
           ctx = tvm.cpu()
            data_tvm = tvm.nd.array((np.random.uniform(size=data_shape)).astype(dtype))
           module = runtime.create(graph, lib, ctx)
           module.set_input(input_name, data_tvm)
           module.set_input(**params)
   
           # evaluate
           print("Evaluate inference time cost...")
            ftimer = module.module.time_evaluator("run", ctx, number=100, repeat=3)
            prof_res = np.array(ftimer().results) * 1000  # convert to millisecond
           print("Mean inference time (std dev): %.2f ms (%.2f ms)" %
                 (np.mean(prof_res), np.std(prof_res)))
   
   # We do not run the tuning in our webpage server since it takes too long.
   # Uncomment the following line to run it by yourself.
   
   # tune_and_evaluate(tuning_option)
   ```
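   
   A side note on the tutorial script above: on Windows, `multiprocessing` uses the spawn start method, which re-imports the main module in every child process, so the top-level `tune_and_evaluate(tuning_option)` call has to sit behind a main guard (exactly the idiom the `RuntimeError` in the log below recommends). A minimal sketch of that idiom; `tune_and_evaluate_stub` is a hypothetical stand-in for the tutorial's function:
   
   ```python
   import multiprocessing
   
   def tune_and_evaluate_stub(tuning_opt):
       # hypothetical stand-in for the tutorial's tune_and_evaluate(tuning_option);
       # the real function starts tuner subprocesses, so it must only run under the guard
       return "tuned with %d option(s)" % len(tuning_opt)
   
   if __name__ == "__main__":
       # with spawn, child processes re-import this module; the guard keeps
       # them from re-running the top-level tuning call recursively
       multiprocessing.freeze_support()
       print(tune_and_evaluate_stub({"tuner": "random"}))
   ```
   
   Without the guard, each spawned worker would re-execute the tuning call at import time, which matches the recursive bootstrapping failure shown in the second run of the log.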
   
   Error : 
   
   ```python
   Extract tasks...
   ANTLR runtime and generated code versions disagree: 4.8!=4.7.2
   ANTLR runtime and generated code versions disagree: 4.8!=4.7.2
   Tuning...
    [Task  1/12]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/252) | 0.00 s
    Traceback (most recent call last):

      File "C:\Users\User\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)

      File "C:\Users\User\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\exec\rpc_server.py", line 138, in <module>
        main(args)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\exec\rpc_server.py", line 58, in main
        silent=args.silent)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\server.py", line 389, in __init__
        raise sock_err

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\server.py", line 382, in __init__
        sock.bind((host, my_port))

    OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted

    Exception ignored in: <function Server.__del__ at 0x0000021F5BF3A8B8>
    Traceback (most recent call last):
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\server.py", line 419, in __del__
        self.terminate()
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\server.py", line 414, in terminate
        if self.proc:
    AttributeError: 'Server' object has no attribute 'proc'
    Exception in thread Thread-3:
    Traceback (most recent call last):
      File "C:\Users\User\Anaconda3\lib\threading.py", line 926, in _bootstrap_inner
        self.run()
      File "C:\Users\User\Anaconda3\lib\threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\measure\measure_methods.py", line 572, in _check
        remote = request_remote(device_key, host, port, priority)
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\measure\measure_methods.py", line 539, in request_remote
        tracker = _rpc.connect_tracker(host, port)
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\client.py", line 430, in connect_tracker
        return TrackerSession((url, port))
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\client.py", line 221, in __init__
        self._connect()
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\client.py", line 227, in _connect
        self._sock = base.connect_with_retry(self._addr)
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\base.py", line 171, in connect_with_retry
        raise sock_err
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\base.py", line 167, in connect_with_retry
        sock.connect(addr)
    OSError: [WinError 10049] The requested address is not valid in its context
   
    Traceback (most recent call last):

      File "tune_relay_x86.py", line 225, in <module>
        tune_and_evaluate(tuning_option)

      File "tune_relay_x86.py", line 198, in tune_and_evaluate
        tune_kernels(tasks, **tuning_opt)

      File "tune_relay_x86.py", line 170, in tune_kernels
        autotvm.callback.log_to_file(log_filename)])

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\tuner\tuner.py", line 125, in tune
        results = measure_batch(inputs)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\measure\measure.py", line 260, in measure_batch
        build_results = builder.build(measure_inputs)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\measure\measure_methods.py", line 105, in build
        **self.build_kwargs)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\measure\local_executor.py", line 151, in submit
        process.start()

      File "C:\Users\User\Anaconda3\lib\multiprocessing\process.py", line 112, in start
        self._popen = self._Popen(self)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
        reduction.dump(process_obj, to_child)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)

    AttributeError: Can't pickle local object '_wrap_build_func.<locals>._wrapped'
   
    Done.
   Exception ignored in: <function Server.__del__ at 0x000001EDA5A7BDC8>
   Traceback (most recent call last):
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\server.py", line 419, in __del__
        self.terminate()
      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\server.py", line 411, in terminate
        os.killpg(self.proc.pid, signal.SIGTERM)
    AttributeError: module 'os' has no attribute 'killpg'
   
   D:\Codes\tvm_testbed>Extract tasks...
   ANTLR runtime and generated code versions disagree: 4.8!=4.7.2
   ANTLR runtime and generated code versions disagree: 4.8!=4.7.2
   Tuning...
    [Task  1/12]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (0/252) | 0.00 s
    Traceback (most recent call last):

      File "<string>", line 1, in <module>

      File "C:\Users\User\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
        exitcode = _main(fd)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
        prepare(preparation_data)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
        _fixup_main_from_path(data['init_main_from_path'])

      File "C:\Users\User\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
        run_name="__mp_main__")

      File "C:\Users\User\Anaconda3\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)

      File "C:\Users\User\Anaconda3\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)

      File "C:\Users\User\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)

      File "D:\Codes\tvm_testbed\tune_relay_x86.py", line 225, in <module>
        tune_and_evaluate(tuning_option)

      File "D:\Codes\tvm_testbed\tune_relay_x86.py", line 198, in tune_and_evaluate
        tune_kernels(tasks, **tuning_opt)

      File "D:\Codes\tvm_testbed\tune_relay_x86.py", line 170, in tune_kernels
        autotvm.callback.log_to_file(log_filename)])

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\tuner\tuner.py", line 108, in tune
        measure_batch = create_measure_batch(self.task, measure_option)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\measure\measure.py", line 252, in create_measure_batch
        attach_objects = runner.set_task(task)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\autotvm\measure\measure_methods.py", line 332, in set_task
        tracker = Tracker('0.0.0.0', port=9000, port_end=10000, silent=True)

      File "C:\Users\User\Anaconda3\lib\site-packages\tvm-0.7.dev0-py3.7-win-amd64.egg\tvm\rpc\tracker.py", line 404, in __init__
        self.proc.start()

      File "C:\Users\User\Anaconda3\lib\multiprocessing\process.py", line 112, in start
        self._popen = self._Popen(self)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
        prep_data = spawn.get_preparation_data(process_obj._name)

      File "C:\Users\User\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
        _check_not_importing_main()

      File "C:\Users\User\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
        is not going to be frozen to produce an executable.''')

    RuntimeError:
            An attempt has been made to start a new process before the
            current process has finished its bootstrapping phase.

            This probably means that you are not using fork to start your
            child processes and you have forgotten to use the proper idiom
            in the main module:

                if __name__ == '__main__':
                    freeze_support()
                    ...

            The "freeze_support()" line can be omitted if the program
            is not going to be frozen to produce an executable.

     Done.
   
   
   ```
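   
   The `AttributeError: Can't pickle local object '_wrap_build_func.<locals>._wrapped'` in the first run is a general spawn-mode limitation: spawn sends each process target through `pickle`, and pickle serializes functions by qualified name, so a function defined inside another function cannot be pickled. A minimal reproduction of the mechanism (`_wrap_build_func` here is a hypothetical sketch mirroring the name in the log, not TVM's actual code):
   
   ```python
   import pickle
   
   def _wrap_build_func(build_func):
       # returns a closure; its qualified name is
       # '_wrap_build_func.<locals>._wrapped', which pickle cannot look up
       def _wrapped(*args, **kwargs):
           return build_func(*args, **kwargs)
       return _wrapped
   
   wrapped = _wrap_build_func(print)
   try:
       pickle.dumps(wrapped)
       picklable = True
   except (AttributeError, pickle.PicklingError):
       # pickle refuses local objects, matching the traceback above
       picklable = False
   ```
   
   On Linux this goes unnoticed because fork does not pickle the target; spawn (the only option on Windows) does, which is why the tutorial fails only there.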

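   The WinError 10048 itself is the standard "address already in use" failure: the tracker's port in the 9000-10000 range is grabbed twice. It can be reproduced in isolation with two plain sockets (a hedged sketch, independent of TVM; on POSIX the same condition surfaces as `EADDRINUSE`):
   
   ```python
   import errno
   import socket
   
   # first socket takes an OS-assigned free port and listens on it
   s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s1.bind(("127.0.0.1", 0))
   port = s1.getsockname()[1]
   s1.listen(1)
   
   # second bind to the same address/port fails: WinError 10048 on
   # Windows, errno.EADDRINUSE on POSIX
   s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   try:
       s2.bind(("127.0.0.1", port))
       err = None
   except OSError as e:
       err = e.errno
   finally:
       s2.close()
       s1.close()
   ```
   
   The WinError 10049 from the client thread is the companion symptom: the tracker is bound to the wildcard address '0.0.0.0', and connecting *to* '0.0.0.0' is rejected as invalid on Windows, while Linux quietly treats it as localhost.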