[jira] [Updated] (FLINK-25883) The value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is too large

2022-02-16 Thread Konstantin Knauf (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Knauf updated FLINK-25883:
-
Fix Version/s: 1.13.7
   (was: 1.13.6)

> The value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is too large 
> --
>
> Key: FLINK-25883
> URL: https://issues.apache.org/jira/browse/FLINK-25883
> Project: Flink
>  Issue Type: Bug
> Environment: Windows, Python 3.8
>Reporter: Mikhail
>Assignee: Dian Fu
>Priority: Minor
> Fix For: 1.15.0, 1.12.8, 1.14.4, 1.13.7
>
>
> In [this 
> line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
>  the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
> 315360. This is more than the default value of threading.TIMEOUT_MAX on 
> Windows Python, which is 4294967. As a result, an "OverflowError: timeout
> value is too large" error is raised.
> Full traceback:
> {code:java}
>  File 
> "G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
>  line 218, in run
>   while not self._finished.wait(next_call - time.time()):
>  File "C:\Python38\lib\threading.py", line 558, in wait
>   signaled = self._cond.wait(timeout)
>  File "C:\Python38\lib\threading.py", line 306, in wait
>   gotit = waiter.acquire(True, timeout)
> OverflowError: timeout value is too large{code}
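The failure above can be reproduced outside of PyFlink with a few lines of plain
Python. This is a minimal sketch, not code from the Flink sources: the threshold
value below is only illustrative, and the guard against threading.TIMEOUT_MAX keeps
the snippet from blocking on platforms where the limit is large enough.

{code:python}
# Minimal reproduction sketch (not from the Flink sources): waiting on a
# threading primitive with a timeout larger than threading.TIMEOUT_MAX
# raises the same OverflowError on Windows CPython.
import threading

ILLUSTRATIVE_THRESHOLD_S = 86400 * 365 * 100  # hypothetical "never shut down" value, far above the Windows limit

event = threading.Event()
print("threading.TIMEOUT_MAX =", threading.TIMEOUT_MAX)

if ILLUSTRATIVE_THRESHOLD_S > threading.TIMEOUT_MAX:
    try:
        # On Windows this raises immediately, before blocking, because the
        # timeout cannot be represented by the underlying lock implementation.
        event.wait(ILLUSTRATIVE_THRESHOLD_S)
    except OverflowError as exc:
        print("Reproduced:", exc)  # OverflowError: timeout value is too large
{code}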



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25883) The value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is too large

2022-01-30 Thread Mikhail (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail updated FLINK-25883:

Description: 
In [this 
line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
 the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
315360. This is more than the default value of threading.TIMEOUT_MAX on 
Windows Python. As a result, an "OverflowError: timeout value is too large"
error is raised.

Full traceback:
{code:java}
File 
"G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
 line 218, in run
while not self._finished.wait(next_call - time.time()):
File "C:\Python38\lib\threading.py", line 558, in wait
signaled = self._cond.wait(timeout)
File "C:\Python38\lib\threading.py", line 306, in wait
gotit = waiter.acquire(True, timeout)
OverflowError: timeout value is too large{code}

  was:
In [this 
line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
 the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
315360. This is more than the default value of threading.TIMEOUT_MAX on 
Windows Python. As a result, an "OverflowError: timeout value is too large"
error is raised.

Full traceback:
  File 
"G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
 line 218, in run
while not self._finished.wait(next_call - time.time()):
  File "C:\Python38\lib\threading.py", line 558, in wait
signaled = self._cond.wait(timeout)
  File "C:\Python38\lib\threading.py", line 306, in wait
gotit = waiter.acquire(True, timeout)
OverflowError: timeout value is too large


> The value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is too large 
> --
>
> Key: FLINK-25883
> URL: https://issues.apache.org/jira/browse/FLINK-25883
> Project: Flink
>  Issue Type: Bug
> Environment: Windows, Python 3.8
>Reporter: Mikhail
>Priority: Minor
>
> In [this 
> line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
>  the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
> 315360. This is more than the default value of threading.TIMEOUT_MAX on 
> Windows Python. As a result, an "OverflowError: timeout value is too large"
> error is raised.
> Full traceback:
> {code:java}
> File 
> "G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
>  line 218, in run
> while not self._finished.wait(next_call - time.time()):
> File "C:\Python38\lib\threading.py", line 558, in wait
> signaled = self._cond.wait(timeout)
> File "C:\Python38\lib\threading.py", line 306, in wait
> gotit = waiter.acquire(True, timeout)
> OverflowError: timeout value is too large{code}
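For context not spelled out in this revision of the description:
threading.TIMEOUT_MAX is platform dependent. On Windows CPython it is roughly
4294967 seconds (about 49.7 days, i.e. 2^32 milliseconds), while on POSIX builds
it is on the order of 9.2e9 seconds, which is why the overflow only shows up on
Windows. A trivial check on the current interpreter:

{code:python}
# Print the largest timeout (in seconds) that threading waits accept on this
# interpreter; timeouts above it produce the OverflowError shown above.
import sys
import threading

print(sys.platform, "threading.TIMEOUT_MAX =", threading.TIMEOUT_MAX)
{code}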



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25883) The value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is too large

2022-01-30 Thread Mikhail (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail updated FLINK-25883:

Description: 
In [this 
line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
 the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
315360. This is more than the default value of threading.TIMEOUT_MAX on 
Windows Python, which is 4294967. As a result, an "OverflowError: timeout value
is too large" error is raised.

Full traceback:
{code:java}
 File 
"G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
 line 218, in run
  while not self._finished.wait(next_call - time.time()):
 File "C:\Python38\lib\threading.py", line 558, in wait
  signaled = self._cond.wait(timeout)
 File "C:\Python38\lib\threading.py", line 306, in wait
  gotit = waiter.acquire(True, timeout)
OverflowError: timeout value is too large{code}

  was:
In [this 
line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
 the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
315360. This is more than the default value of threading.TIMEOUT_MAX on 
Windows Python. As a result, an "OverflowError: timeout value is too large"
error is raised.

Full traceback:
{code:java}
 File 
"G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
 line 218, in run
  while not self._finished.wait(next_call - time.time()):
 File "C:\Python38\lib\threading.py", line 558, in wait
  signaled = self._cond.wait(timeout)
 File "C:\Python38\lib\threading.py", line 306, in wait
  gotit = waiter.acquire(True, timeout)
OverflowError: timeout value is too large{code}


> The value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is too large 
> --
>
> Key: FLINK-25883
> URL: https://issues.apache.org/jira/browse/FLINK-25883
> Project: Flink
>  Issue Type: Bug
> Environment: Windows, Python 3.8
>Reporter: Mikhail
>Priority: Minor
>
> In [this 
> line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
>  the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
> 315360. This is more than the default value of threading.TIMEOUT_MAX on 
> Windows Python, which is 4294967. As a result, an "OverflowError: timeout
> value is too large" error is raised.
> Full traceback:
> {code:java}
>  File 
> "G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
>  line 218, in run
>   while not self._finished.wait(next_call - time.time()):
>  File "C:\Python38\lib\threading.py", line 558, in wait
>   signaled = self._cond.wait(timeout)
>  File "C:\Python38\lib\threading.py", line 306, in wait
>   gotit = waiter.acquire(True, timeout)
> OverflowError: timeout value is too large{code}
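The Fix For versions recorded earlier in the thread suggest the value was
adjusted in later releases; the exact change is not shown here. A hedged sketch
of one possible workaround, assuming nothing beyond the standard library, is to
clamp any such threshold to threading.TIMEOUT_MAX before it reaches code that
waits on threading primitives:

{code:python}
# Hedged workaround sketch (an assumption, not necessarily the change that
# shipped in the fix versions): clamp the threshold to threading.TIMEOUT_MAX
# so Windows never sees a timeout it cannot represent.
import threading

def clamp_shutdown_threshold(threshold_s: float) -> float:
    """Return a timeout that is safe to pass to threading waits on any platform."""
    return min(threshold_s, threading.TIMEOUT_MAX)

# Illustrative usage with a hypothetical over-large value:
print(clamp_shutdown_threshold(86400 * 365 * 100))
{code}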



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25883) The value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is too large

2022-01-30 Thread Mikhail (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail updated FLINK-25883:

Description: 
In [this 
line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
 the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
315360. This is more than the default value of threading.TIMEOUT_MAX on 
Windows Python. As a result, an "OverflowError: timeout value is too large"
error is raised.

Full traceback:
{code:java}
 File 
"G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
 line 218, in run
  while not self._finished.wait(next_call - time.time()):
 File "C:\Python38\lib\threading.py", line 558, in wait
  signaled = self._cond.wait(timeout)
 File "C:\Python38\lib\threading.py", line 306, in wait
  gotit = waiter.acquire(True, timeout)
OverflowError: timeout value is too large{code}

  was:
In [this 
line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
 the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
315360. This is more than the default value of threading.TIMEOUT_MAX on 
Windows Python. As a result, an "OverflowError: timeout value is too large"
error is raised.

Full traceback:
{code:java}
File 
"G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
 line 218, in run
while not self._finished.wait(next_call - time.time()):
File "C:\Python38\lib\threading.py", line 558, in wait
signaled = self._cond.wait(timeout)
File "C:\Python38\lib\threading.py", line 306, in wait
gotit = waiter.acquire(True, timeout)
OverflowError: timeout value is too large{code}


> The value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is too large 
> --
>
> Key: FLINK-25883
> URL: https://issues.apache.org/jira/browse/FLINK-25883
> Project: Flink
>  Issue Type: Bug
> Environment: Windows, Python 3.8
>Reporter: Mikhail
>Priority: Minor
>
> In [this 
> line|https://github.com/apache/flink/blob/fb38c99a38c63ba8801e765887f955522072615a/flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py#L30],
>  the value of DEFAULT_BUNDLE_PROCESSOR_CACHE_SHUTDOWN_THRESHOLD_S is set to 
> 315360. This is more than the default value of threading.TIMEOUT_MAX on 
> Windows Python. As a result, an "OverflowError: timeout value is too large"
> error is raised.
> Full traceback:
> {code:java}
>  File 
> "G:\PycharmProjects\PyFlink\venv_from_scratch\lib\site-packages\apache_beam\runners\worker\data_plane.py",
>  line 218, in run
>   while not self._finished.wait(next_call - time.time()):
>  File "C:\Python38\lib\threading.py", line 558, in wait
>   signaled = self._cond.wait(timeout)
>  File "C:\Python38\lib\threading.py", line 306, in wait
>   gotit = waiter.acquire(True, timeout)
> OverflowError: timeout value is too large{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)