[jira] [Closed] (FLINK-34616) python dist doesn't clean when open method construct resource

2024-03-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-34616.

Fix Version/s: 1.19.1
   1.18.2
 Assignee: Jacky Lau
   Resolution: Fixed

Merged into master via 21306f4f5dbcc72a2cde2f15e6c072951aa03f49

Merged into release-1.19 via 75c88fa4f19d3f703e0cce3b917a9aa070eadffe

Merged into release-1.18 via 31f13614c5e1bccbcfc14f31561aac3892b86e85

> python dist doesn't clean when open method construct resource
> -
>
> Key: FLINK-34616
> URL: https://issues.apache.org/jira/browse/FLINK-34616
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.1, 1.18.2, 1.20.0
>
> Attachments: image-2024-03-07-17-58-06-493.png
>
>
> In our environment we found lots of python-dist directories causing the disk to fill up.
> The main cause is
> constructEnvironmentVariables -> constructArchivesDirectory ->
> CompressionUtils.extractFile, which hits a
> ClosedByInterruptException while the root exception is lost. We
> found this with arthas.
> In that case the clean-dir logic never runs.
>  
> 2024-03-07 18:19:34,265 ERROR [[vertex-1]MiniBatchAssigner(interval=[5000ms], 
> mode=[ProcTime]) -> PythonCalc(select=[content, sourc (18/128)#31] 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager [] - Error when 
> create resource.
> java.nio.channels.ClosedByInterruptException: null
> at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
>  ~[?:?]
> at sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:162) ~[?:?]
> at sun.nio.ch.FileChannelImpl.readInternal(FileChannelImpl.java:816) ~[?:?]
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:796) ~[?:?]
> at 
> org.apache.commons.compress.archivers.zip.ZipFile$BoundedFileChannelInputStream.read(ZipFile.java:1420)
>  ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.commons.compress.utils.BoundedArchiveInputStream.read(BoundedArchiveInputStream.java:82)
>  ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:252) ~[?:?]
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:292) ~[?:?]
> at java.io.BufferedInputStream.read(BufferedInputStream.java:351) ~[?:?]
> at java.io.SequenceInputStream.read(SequenceInputStream.java:199) ~[?:?]
> at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:243) ~[?:?]
> at 
> org.apache.commons.compress.archivers.zip.InflaterInputStreamWithStatistics.fill(InflaterInputStreamWithStatistics.java:52)
>  ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:159) ~[?:?]
> at 
> org.apache.commons.compress.archivers.zip.InflaterInputStreamWithStatistics.read(InflaterInputStreamWithStatistics.java:67)
>  ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at java.io.FilterInputStream.read(FilterInputStream.java:107) ~[?:?]
> at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:61) 
> ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:86) 
> ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.util.CompressionUtils.extractZipFileWithPermissions(CompressionUtils.java:223)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.util.CompressionUtils.extractFile(CompressionUtils.java:61)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.constructArchivesDirectory(AbstractPythonEnvironmentManager.java:365)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.constructEnvironmentVariables(AbstractPythonEnvironmentManager.java:178)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.lambda$open$0(AbstractPythonEnvironmentManager.java:126)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager$PythonEnvResources.createResource(AbstractPythonEnvironmentManager.java:468)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> 

[jira] [Updated] (FLINK-34616) python dist doesn't clean when open method construct resource

2024-03-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-34616:
-
Affects Version/s: 1.18.1
   1.19.0
   (was: 1.20.0)

> python dist doesn't clean when open method construct resource
> -
>
> Key: FLINK-34616
> URL: https://issues.apache.org/jira/browse/FLINK-34616
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
> Attachments: image-2024-03-07-17-58-06-493.png
>
>
> In our environment we found lots of python-dist directories causing the disk to fill up.
> The main cause is
> constructEnvironmentVariables -> constructArchivesDirectory ->
> CompressionUtils.extractFile, which hits a
> ClosedByInterruptException while the root exception is lost. We
> found this with arthas.
> In that case the clean-dir logic never runs.
>  
> 2024-03-07 18:19:34,265 ERROR [[vertex-1]MiniBatchAssigner(interval=[5000ms], 
> mode=[ProcTime]) -> PythonCalc(select=[content, sourc (18/128)#31] 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager [] - Error when 
> create resource.
> java.nio.channels.ClosedByInterruptException: null
> at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:199)
>  ~[?:?]
> at sun.nio.ch.FileChannelImpl.endBlocking(FileChannelImpl.java:162) ~[?:?]
> at sun.nio.ch.FileChannelImpl.readInternal(FileChannelImpl.java:816) ~[?:?]
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:796) ~[?:?]
> at 
> org.apache.commons.compress.archivers.zip.ZipFile$BoundedFileChannelInputStream.read(ZipFile.java:1420)
>  ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.commons.compress.utils.BoundedArchiveInputStream.read(BoundedArchiveInputStream.java:82)
>  ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:252) ~[?:?]
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:292) ~[?:?]
> at java.io.BufferedInputStream.read(BufferedInputStream.java:351) ~[?:?]
> at java.io.SequenceInputStream.read(SequenceInputStream.java:199) ~[?:?]
> at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:243) ~[?:?]
> at 
> org.apache.commons.compress.archivers.zip.InflaterInputStreamWithStatistics.fill(InflaterInputStreamWithStatistics.java:52)
>  ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:159) ~[?:?]
> at 
> org.apache.commons.compress.archivers.zip.InflaterInputStreamWithStatistics.read(InflaterInputStreamWithStatistics.java:67)
>  ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at java.io.FilterInputStream.read(FilterInputStream.java:107) ~[?:?]
> at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:61) 
> ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:86) 
> ~[flink-dist_2.12-1.15.2-SNAPSHOT.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.util.CompressionUtils.extractZipFileWithPermissions(CompressionUtils.java:223)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.util.CompressionUtils.extractFile(CompressionUtils.java:61)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.constructArchivesDirectory(AbstractPythonEnvironmentManager.java:365)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.constructEnvironmentVariables(AbstractPythonEnvironmentManager.java:178)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.lambda$open$0(AbstractPythonEnvironmentManager.java:126)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager$PythonEnvResources.createResource(AbstractPythonEnvironmentManager.java:468)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager$PythonEnvResources.getOrAllocateSharedResource(AbstractPythonEnvironmentManager.java:435)
>  
> ~[flink-python_2.12-1.15.2-SNAPSHOT-jar-with-dependencies.jar:1.15.2-SNAPSHOT]
> at 
> 

[jira] [Updated] (FLINK-34582) release build tools lost the newly added py3.11 packages for mac

2024-03-05 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-34582:
-
Fix Version/s: 1.19.0

> release build tools lost the newly added py3.11 packages for mac
> 
>
> Key: FLINK-34582
> URL: https://issues.apache.org/jira/browse/FLINK-34582
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0, 1.20.0
>Reporter: lincoln lee
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> While building the 1.19.0-rc1 binaries via 
> tools/releasing/create_binary_release.sh,
> the two newly added py3.11 packages for mac were lost



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34582) release build tools lost the newly added py3.11 packages for mac

2024-03-05 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-34582.

Resolution: Fixed

Merged into master via a9d9bab47a6b2a9520f7d2b6f3791690df50e214

Merged into release-1.19 via fa738bb09310a0012b5c8341e403c597855079b1

> release build tools lost the newly added py3.11 packages for mac
> 
>
> Key: FLINK-34582
> URL: https://issues.apache.org/jira/browse/FLINK-34582
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0, 1.20.0
>Reporter: lincoln lee
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available
>
> While building the 1.19.0-rc1 binaries via 
> tools/releasing/create_binary_release.sh,
> the two newly added py3.11 packages for mac were lost





[jira] [Commented] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-02-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818269#comment-17818269
 ] 

Xingbo Huang commented on FLINK-34202:
--

I reviewed all the stages where timeouts occurred and found that these stages 
all ran on AlibabaCI001. At the same time, the runtime of all other successful 
stages is consistently around 2 hours and 40 minutes. In the logs, I didn't 
notice any tests being stuck or running overly long, so I think the 
timeout is largely due to AlibabaCI001's performance not being sufficient to 
complete the tests for 4 Python versions within 4 hours. I submitted a PR to have the 
nightly CI randomly select a Python version for testing rather than running all 
4 Python versions. By only running the tests for one Python version, even if the 
machine's performance is poor, it should not exceed 2 hours (the pipeline triggered 
by a PR only runs the latest Python version, which takes about 40 minutes).
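The random-selection idea can be sketched like this (a hypothetical illustration; the version list and function name are invented, not the actual CI matrix or script):

```python
import random

# Illustrative list of Python versions the nightly could choose from;
# the real CI matrix may differ.
SUPPORTED_PYTHONS = ["3.8", "3.9", "3.10", "3.11"]


def pick_nightly_python(rng: random.Random = random) -> str:
    """Pick one Python version per nightly run instead of testing all of
    them, keeping the stage runtime bounded on slow machines."""
    return rng.choice(SUPPORTED_PYTHONS)
```

Over many nightly runs, each version still gets regular coverage, while any single run only pays the cost of one version's test suite.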

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.





[jira] [Assigned] (FLINK-34202) python tests take suspiciously long in some of the cases

2024-02-17 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-34202:


Assignee: Xingbo Huang

> python tests take suspiciously long in some of the cases
> 
>
> Key: FLINK-34202
> URL: https://issues.apache.org/jira/browse/FLINK-34202
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: test-stability
>
> [This release-1.18 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a]
>  has the python stage running into a timeout without any obvious reason. The 
> [python stage run for 
> JDK17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56603=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06]
>  was also getting close to the 4h timeout.
> I'm creating this issue for documentation purposes.





[jira] [Commented] (FLINK-34195) PythonEnvUtils creates python environment instead of python3

2024-01-23 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17809826#comment-17809826
 ] 

Xingbo Huang commented on FLINK-34195:
--

[~mapohl] Thanks for the great work. Since 1.10, the second version of PyFlink, 
it has only supported Python 3. I believe that using `python3` 
rather than `python` is indeed the more standard practice. To find out 
which Python versions PyFlink supports, one option is to check the Flink 
documentation (https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/python/installation/),
 and another is to view the Python package metadata on PyPI 
(https://pypi.org/project/apache-flink/).

> PythonEnvUtils creates python environment instead of python3
> 
>
> Key: FLINK-34195
> URL: https://issues.apache.org/jira/browse/FLINK-34195
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>
> I was looking into the Python installation of the Flink test suite because I 
> was working on updating the CI Docker image from 16.04 (Xenial) to 22.04 
> (FLINK-34194). I noticed that there is test code still relying on the 
> {{python}} command instead of {{python3}}. For Ubuntu 16.04 that meant 
> relying on Python 2. Therefore, we have tests still relying on Python 2 as 
> far as I understand.
> I couldn't find any documentation or mailing list discussion on major Python 
> version support. But AFAIU, we're relying on Python 3 (based on the e2e tests), 
> which makes these tests outdated.
> Additionally, 
> [python.client.executable|https://github.com/apache/flink/blob/50cb4ee8c545cd38d0efee014939df91c2c9c65f/flink-python/src/main/java/org/apache/flink/python/PythonOptions.java#L170]
>  relies on {{python}}.
> Should we make it more explicit in our test code that we're actually 
> expecting python3? Additionally, should that be mentioned somewhere in the 
> docs? Or if it's already mentioned, could you point me to it? (As someone 
> looking into PyFlink for the "first" time) I would have expected something 
> like that being mentioned in the [PyFlink 
> overview|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/python/overview/].
>  Or is it the default assumption nowadays that {{python}} refers to 
> {{python3}}?
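Resolving the interpreter explicitly, rather than assuming what the bare `python` command points to, can be sketched as follows (a hypothetical illustration; the function name and fallback order are invented, and it only assumes that some Python 3 interpreter is on the PATH):

```python
import shutil
import subprocess


def resolve_python3() -> str:
    """Prefer an explicit `python3` binary; fall back to `python` only
    when it actually is a Python 3 interpreter."""
    for candidate in ("python3", "python"):
        path = shutil.which(candidate)
        if path is None:
            continue
        # Probe the interpreter's major version instead of trusting its name.
        probe = subprocess.run(
            [path, "-c",
             "import sys; raise SystemExit(0 if sys.version_info[0] == 3 else 1)"]
        )
        if probe.returncode == 0:
            return path
    raise RuntimeError("no Python 3 interpreter found on PATH")
```

On an Ubuntu 16.04 image, `python` resolves to Python 2, so the probe step is what keeps test scripts from silently running against the wrong major version.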





[jira] [Updated] (FLINK-34077) Python Sphinx version error

2024-01-14 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-34077:
-
Issue Type: Technical Debt  (was: Bug)

> Python Sphinx version error
> ---
>
> Key: FLINK-34077
> URL: https://issues.apache.org/jira/browse/FLINK-34077
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Yunfeng Zhou
>Assignee: Xingbo Huang
>Priority: Major
>  Labels: pull-request-available
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56357=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901]
>  
> {code:java}
> Jan 14 15:49:17 /__w/2/s/flink-python/dev/.conda/bin/sphinx-build -b html -d 
> _build/doctrees -a -W . _build/html
> Jan 14 15:49:17 Running Sphinx v4.5.0
> Jan 14 15:49:17
> Jan 14 15:49:17 Sphinx version error:
> Jan 14 15:49:17 The sphinxcontrib.applehelp extension used by this project 
> needs at least Sphinx v5.0; it therefore cannot be built with this version.   
>  
> Jan 14 15:49:17 Makefile:76: recipe for target 'html' failed
> Jan 14 15:49:17 make: *** [html] Error 2
> Jan 14 15:49:18 ==sphinx checks... [FAILED]=== {code}
>  
>  





[jira] [Updated] (FLINK-34077) Python Sphinx version error

2024-01-14 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-34077:
-
Summary: Python Sphinx version error  (was: Sphinx version needs upgrade)

> Python Sphinx version error
> ---
>
> Key: FLINK-34077
> URL: https://issues.apache.org/jira/browse/FLINK-34077
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Yunfeng Zhou
>Assignee: Xingbo Huang
>Priority: Major
>  Labels: pull-request-available
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56357=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901]
>  
> {code:java}
> Jan 14 15:49:17 /__w/2/s/flink-python/dev/.conda/bin/sphinx-build -b html -d 
> _build/doctrees -a -W . _build/html
> Jan 14 15:49:17 Running Sphinx v4.5.0
> Jan 14 15:49:17
> Jan 14 15:49:17 Sphinx version error:
> Jan 14 15:49:17 The sphinxcontrib.applehelp extension used by this project 
> needs at least Sphinx v5.0; it therefore cannot be built with this version.   
>  
> Jan 14 15:49:17 Makefile:76: recipe for target 'html' failed
> Jan 14 15:49:17 make: *** [html] Error 2
> Jan 14 15:49:18 ==sphinx checks... [FAILED]=== {code}
>  
>  





[jira] [Resolved] (FLINK-34077) Sphinx version needs upgrade

2024-01-14 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang resolved FLINK-34077.
--
Resolution: Fixed

Merged into master via d2fbe464b1a353a7eb35926299d5c048647a3073

> Sphinx version needs upgrade
> 
>
> Key: FLINK-34077
> URL: https://issues.apache.org/jira/browse/FLINK-34077
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Yunfeng Zhou
>Assignee: Xingbo Huang
>Priority: Major
>  Labels: pull-request-available
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56357=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901]
>  
> {code:java}
> Jan 14 15:49:17 /__w/2/s/flink-python/dev/.conda/bin/sphinx-build -b html -d 
> _build/doctrees -a -W . _build/html
> Jan 14 15:49:17 Running Sphinx v4.5.0
> Jan 14 15:49:17
> Jan 14 15:49:17 Sphinx version error:
> Jan 14 15:49:17 The sphinxcontrib.applehelp extension used by this project 
> needs at least Sphinx v5.0; it therefore cannot be built with this version.   
>  
> Jan 14 15:49:17 Makefile:76: recipe for target 'html' failed
> Jan 14 15:49:17 make: *** [html] Error 2
> Jan 14 15:49:18 ==sphinx checks... [FAILED]=== {code}
>  
>  





[jira] [Commented] (FLINK-34077) Sphinx version needs upgrade

2024-01-14 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806595#comment-17806595
 ] 

Xingbo Huang commented on FLINK-34077:
--

Some sphinxcontrib packages (sphinxcontrib-applehelp, sphinxcontrib-devhelp, 
sphinxcontrib-htmlhelp, and so on) have released new versions that are not 
backward compatible, so the documentation build fails. I will hotfix this by limiting 
the versions of these packages. Regarding upgrading the Sphinx version itself, some 
of the current conf configurations would need to be changed, as they are incompatible 
with newer Sphinx. I think that can be done as a separate feature.
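A sketch of such a pin list for the docs build requirements (the exact versions here are illustrative guesses at Sphinx-4-compatible releases, not the actual hotfix):

```
# Hold back the sphinxcontrib helpers that now require Sphinx >= 5.0,
# so the existing Sphinx 4.x build keeps working. Versions are illustrative.
sphinx==4.5.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
```

Pinning the transitive sphinxcontrib packages is what stops a fresh `pip install` from pulling the incompatible new releases.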

> Sphinx version needs upgrade
> 
>
> Key: FLINK-34077
> URL: https://issues.apache.org/jira/browse/FLINK-34077
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Yunfeng Zhou
>Priority: Major
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56357=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901]
>  
> {code:java}
> Jan 14 15:49:17 /__w/2/s/flink-python/dev/.conda/bin/sphinx-build -b html -d 
> _build/doctrees -a -W . _build/html
> Jan 14 15:49:17 Running Sphinx v4.5.0
> Jan 14 15:49:17
> Jan 14 15:49:17 Sphinx version error:
> Jan 14 15:49:17 The sphinxcontrib.applehelp extension used by this project 
> needs at least Sphinx v5.0; it therefore cannot be built with this version.   
>  
> Jan 14 15:49:17 Makefile:76: recipe for target 'html' failed
> Jan 14 15:49:17 make: *** [html] Error 2
> Jan 14 15:49:18 ==sphinx checks... [FAILED]=== {code}
>  
>  





[jira] [Assigned] (FLINK-34077) Sphinx version needs upgrade

2024-01-14 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-34077:


Assignee: Xingbo Huang

> Sphinx version needs upgrade
> 
>
> Key: FLINK-34077
> URL: https://issues.apache.org/jira/browse/FLINK-34077
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Yunfeng Zhou
>Assignee: Xingbo Huang
>Priority: Major
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56357=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901]
>  
> {code:java}
> Jan 14 15:49:17 /__w/2/s/flink-python/dev/.conda/bin/sphinx-build -b html -d 
> _build/doctrees -a -W . _build/html
> Jan 14 15:49:17 Running Sphinx v4.5.0
> Jan 14 15:49:17
> Jan 14 15:49:17 Sphinx version error:
> Jan 14 15:49:17 The sphinxcontrib.applehelp extension used by this project 
> needs at least Sphinx v5.0; it therefore cannot be built with this version.   
>  
> Jan 14 15:49:17 Makefile:76: recipe for target 'html' failed
> Jan 14 15:49:17 make: *** [html] Error 2
> Jan 14 15:49:18 ==sphinx checks... [FAILED]=== {code}
>  
>  





[jira] [Closed] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-34012.

Resolution: Fixed

Merged into master via 639deeca33757c7380d474d43b8a70bacb84dd20

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}





[jira] [Updated] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-34012:
-
Issue Type: Technical Debt  (was: Bug)

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}





[jira] [Assigned] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-34012:


Assignee: Xingbo Huang

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}





[jira] [Updated] (FLINK-33531) Nightly Python fails with NPE at metadataHandlerProvider on AZP (StreamDependencyTests.test_add_python_archive)

2023-11-20 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-33531:
-
Issue Type: Technical Debt  (was: Bug)

> Nightly Python fails with NPE at metadataHandlerProvider on AZP 
> (StreamDependencyTests.test_add_python_archive)
> ---
>
> Key: FLINK-33531
> URL: https://issues.apache.org/jira/browse/FLINK-33531
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> It seems that starting 02.11.2023 every master nightly has failed with this 
> (that's why it is a blocker), for instance:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=54512=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901]
> {noformat}
> 2023-11-12T02:10:24.5082784Z Nov 12 02:10:24 if is_error(answer)[0]:
> 2023-11-12T02:10:24.5083620Z Nov 12 02:10:24 if len(answer) > 1:
> 2023-11-12T02:10:24.5084326Z Nov 12 02:10:24 type = answer[1]
> 2023-11-12T02:10:24.5085164Z Nov 12 02:10:24 value = 
> OUTPUT_CONVERTER[type](answer[2:], gateway_client)
> 2023-11-12T02:10:24.5086061Z Nov 12 02:10:24 if answer[1] == 
> REFERENCE_TYPE:
> 2023-11-12T02:10:24.5086850Z Nov 12 02:10:24 >   raise 
> Py4JJavaError(
> 2023-11-12T02:10:24.5087677Z Nov 12 02:10:24 "An 
> error occurred while calling {0}{1}{2}.\n".
> 2023-11-12T02:10:24.5088538Z Nov 12 02:10:24 
> format(target_id, ".", name), value)
> 2023-11-12T02:10:24.5089551Z Nov 12 02:10:24 E   
> py4j.protocol.Py4JJavaError: An error occurred while calling 
> o3371.executeInsert.
> 2023-11-12T02:10:24.5090832Z Nov 12 02:10:24 E   : 
> java.lang.NullPointerException: metadataHandlerProvider
> 2023-11-12T02:10:24.5091832Z Nov 12 02:10:24 Eat 
> java.util.Objects.requireNonNull(Objects.java:228)
> 2023-11-12T02:10:24.5093399Z Nov 12 02:10:24 Eat 
> org.apache.calcite.rel.metadata.RelMetadataQueryBase.getMetadataHandlerProvider(RelMetadataQueryBase.java:122)
> 2023-11-12T02:10:24.5094480Z Nov 12 02:10:24 Eat 
> org.apache.calcite.rel.metadata.RelMetadataQueryBase.revise(RelMetadataQueryBase.java:118)
> 2023-11-12T02:10:24.5095365Z Nov 12 02:10:24 Eat 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getPulledUpPredicates(RelMetadataQuery.java:844)
> 2023-11-12T02:10:24.5096306Z Nov 12 02:10:24 Eat 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:307)
> 2023-11-12T02:10:24.5097238Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:337)
> 2023-11-12T02:10:24.5098014Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556)
> 2023-11-12T02:10:24.5098753Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:420)
> 2023-11-12T02:10:24.5099517Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepPlanner.executeRuleInstance(HepPlanner.java:243)
> 2023-11-12T02:10:24.5100373Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance$State.execute(HepInstruction.java:178)
> 2023-11-12T02:10:24.5101313Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepPlanner.lambda$executeProgram$0(HepPlanner.java:211)
> 2023-11-12T02:10:24.5102410Z Nov 12 02:10:24 Eat 
> org.apache.flink.calcite.shaded.com.google.common.collect.ImmutableList.forEach(ImmutableList.java:422)
> 2023-11-12T02:10:24.5103343Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:210)
> 2023-11-12T02:10:24.5104105Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepProgram$State.execute(HepProgram.java:118)
> 2023-11-12T02:10:24.5104868Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:205)
> 2023-11-12T02:10:24.5105616Z Nov 12 02:10:24 Eat 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:191)
> 2023-11-12T02:10:24.5106421Z Nov 12 02:10:24 Eat 
> 

[jira] [Resolved] (FLINK-33531) Nightly Python fails with NPE at metadataHandlerProvider on AZP (StreamDependencyTests.test_add_python_archive)

2023-11-20 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang resolved FLINK-33531.
--
Fix Version/s: 1.19.0
   Resolution: Fixed

Merged into master via 942d23636b13df7a118274d307943f5524cd

> Nightly Python fails with NPE at metadataHandlerProvider on AZP 
> (StreamDependencyTests.test_add_python_archive)
> ---
>
> Key: FLINK-33531
> URL: https://issues.apache.org/jira/browse/FLINK-33531
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> It seems starting 02.11.2023 every master nightly fails with this (that's why 
> it is a blocker)
> for instance
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=54512&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=c67e71ed-6451-5d26-8920-5a8cf9651901]
> {noformat}
> 2023-11-12T02:10:24.5082784Z Nov 12 02:10:24 if is_error(answer)[0]:
> 2023-11-12T02:10:24.5083620Z Nov 12 02:10:24 if len(answer) > 1:
> 2023-11-12T02:10:24.5084326Z Nov 12 02:10:24 type = answer[1]
> 2023-11-12T02:10:24.5085164Z Nov 12 02:10:24 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
> 2023-11-12T02:10:24.5086061Z Nov 12 02:10:24 if answer[1] == REFERENCE_TYPE:
> 2023-11-12T02:10:24.5086850Z Nov 12 02:10:24 >   raise Py4JJavaError(
> 2023-11-12T02:10:24.5087677Z Nov 12 02:10:24 "An error occurred while calling {0}{1}{2}.\n".
> 2023-11-12T02:10:24.5088538Z Nov 12 02:10:24 format(target_id, ".", name), value)
> 2023-11-12T02:10:24.5089551Z Nov 12 02:10:24 E   py4j.protocol.Py4JJavaError: An error occurred while calling o3371.executeInsert.
> 2023-11-12T02:10:24.5090832Z Nov 12 02:10:24 E   : java.lang.NullPointerException: metadataHandlerProvider
> 2023-11-12T02:10:24.5091832Z Nov 12 02:10:24 E   at java.util.Objects.requireNonNull(Objects.java:228)
> 2023-11-12T02:10:24.5093399Z Nov 12 02:10:24 E   at org.apache.calcite.rel.metadata.RelMetadataQueryBase.getMetadataHandlerProvider(RelMetadataQueryBase.java:122)
> 2023-11-12T02:10:24.5094480Z Nov 12 02:10:24 E   at org.apache.calcite.rel.metadata.RelMetadataQueryBase.revise(RelMetadataQueryBase.java:118)
> 2023-11-12T02:10:24.5095365Z Nov 12 02:10:24 E   at org.apache.calcite.rel.metadata.RelMetadataQuery.getPulledUpPredicates(RelMetadataQuery.java:844)
> 2023-11-12T02:10:24.5096306Z Nov 12 02:10:24 E   at org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:307)
> 2023-11-12T02:10:24.5097238Z Nov 12 02:10:24 E   at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:337)
> 2023-11-12T02:10:24.5098014Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:556)
> 2023-11-12T02:10:24.5098753Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:420)
> 2023-11-12T02:10:24.5099517Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepPlanner.executeRuleInstance(HepPlanner.java:243)
> 2023-11-12T02:10:24.5100373Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepInstruction$RuleInstance$State.execute(HepInstruction.java:178)
> 2023-11-12T02:10:24.5101313Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepPlanner.lambda$executeProgram$0(HepPlanner.java:211)
> 2023-11-12T02:10:24.5102410Z Nov 12 02:10:24 E   at org.apache.flink.calcite.shaded.com.google.common.collect.ImmutableList.forEach(ImmutableList.java:422)
> 2023-11-12T02:10:24.5103343Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:210)
> 2023-11-12T02:10:24.5104105Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepProgram$State.execute(HepProgram.java:118)
> 2023-11-12T02:10:24.5104868Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:205)
> 2023-11-12T02:10:24.5105616Z Nov 12 02:10:24 E   at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:191)
> 2023-11-12T02:10:24.5106421Z Nov 12 02:10:24 E   

[jira] [Commented] (FLINK-33531) Nightly Python fails with NPE at metadataHandlerProvider on AZP (StreamDependencyTests.test_add_python_archive)

2023-11-19 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17787824#comment-17787824
 ] 

Xingbo Huang commented on FLINK-33531:
--

After doing some experiments, I came to the following conclusions:
1. In a Python 3.9 + Cython 0.29.36 environment, the `test_dependency.py` test 
fails consistently in my private Azure pipeline, although I don't think the 
Python and Cython versions have anything to do with this test failure.
2. Changing the Python or Cython version of this test makes the failing case 
disappear.
3. The problem cannot be reproduced locally using the same versions of all 
packages, such as Python and Cython.
4. After reverting the commit that may have caused the problem, the case still 
fails on Azure. (I didn't revert all the commits, because I don't think they 
are the root cause.)

My preferred solution right now is to upgrade Cython to address the testing 
issues caused by the Azure environment.
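Since the chosen fix is a Cython upgrade, a small version gate in the build tooling can make the requirement explicit. A minimal sketch; the minimum bound below is an assumption for illustration, not the version actually chosen in the fix:

```python
# Hypothetical build-time guard: reject Cython versions observed to misbehave.
def cython_ok(ver: str, minimum=(3, 0, 0)) -> bool:
    """Compare a dotted version string against a (major, minor, patch) minimum."""
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return parts >= minimum

print(cython_ok("0.29.36"))  # False: the version seen failing in the pipeline
print(cython_ok("3.0.5"))    # True
```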

> Nightly Python fails with NPE at metadataHandlerProvider on AZP 
> (StreamDependencyTests.test_add_python_archive)
> ---
>
> Key: FLINK-33531
> URL: https://issues.apache.org/jira/browse/FLINK-33531
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Blocker
>  Labels: test-stability
>

[jira] [Assigned] (FLINK-33531) Nightly Python fails with NPE at metadataHandlerProvider on AZP (StreamDependencyTests.test_add_python_archive)

2023-11-16 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-33531:


Assignee: Xingbo Huang

> Nightly Python fails with NPE at metadataHandlerProvider on AZP 
> (StreamDependencyTests.test_add_python_archive)
> ---
>
> Key: FLINK-33531
> URL: https://issues.apache.org/jira/browse/FLINK-33531
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Blocker
>  Labels: test-stability
>

[jira] [Commented] (FLINK-33531) Nightly Python fails with NPE at metadataHandlerProvider on AZP (StreamDependencyTests.test_add_python_archive)

2023-11-16 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17786676#comment-17786676
 ] 

Xingbo Huang commented on FLINK-33531:
--

I will take a look. This case only fails on Python 3.9, which is very strange, 
because the stack does not seem to have much to do with this case or the Python 
environment, and the other branches are still normal. I will first try to 
confirm which commit caused it.
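Confirming the offending commit is a binary search over the history, which is what `git bisect run` automates. A language-agnostic sketch of that search; the commit ids and the failure predicate are made up:

```python
def first_bad_commit(commits, is_bad):
    """Binary search for the first failing commit.

    Assumes commits are ordered oldest -> newest, commits[0] is known good
    and commits[-1] is known bad -- the same precondition `git bisect` has.
    """
    lo, hi = 0, len(commits) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid  # failure already present: look earlier
        else:
            lo = mid  # still good: look later
    return commits[hi]

history = ["c1", "c2", "c3", "c4", "c5"]  # hypothetical commit ids
print(first_bad_commit(history, lambda c: c >= "c4"))  # c4
```

Each step halves the candidate range, so even a long history needs only a handful of test runs.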

> Nightly Python fails with NPE at metadataHandlerProvider on AZP 
> (StreamDependencyTests.test_add_python_archive)
> ---
>
> Key: FLINK-33531
> URL: https://issues.apache.org/jira/browse/FLINK-33531
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>  Labels: test-stability
>

[jira] [Updated] (FLINK-32110) TM native memory leak when using time window in Pyflink ThreadMode

2023-05-25 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-32110:
-
Affects Version/s: (was: 1.16.1)

> TM native memory leak when using time window in Pyflink ThreadMode
> --
>
> Key: FLINK-32110
> URL: https://issues.apache.org/jira/browse/FLINK-32110
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.0
>Reporter: Yunjun Luo
>Assignee: Yunjun Luo
>Priority: Major
> Fix For: 1.17.2
>
>
> If a job uses a time window in PyFlink thread mode, TM native memory grows 
> slowly while the job is running, until the TM can no longer allocate memory 
> from the operating system.
> The leak rate is likely proportional to the number of distinct keys.
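A leak rate proportional to the key count is characteristic of per-key window state that is allocated on first use but never freed. A toy model of that pattern, in plain Python rather than Flink code:

```python
# Toy model of a per-key state leak: each new key allocates state that is
# never released, so retained memory grows with the number of distinct keys.
class LeakyWindowState:
    def __init__(self):
        self._per_key = {}  # stands in for native per-key allocations

    def on_element(self, key, value):
        self._per_key.setdefault(key, []).append(value)
        # Missing step: releasing self._per_key[key] when the window fires.

state = LeakyWindowState()
for i in range(1000):
    state.on_element(f"key-{i}", i)  # 1000 distinct keys
print(len(state._per_key))           # 1000 entries retained
```

With a bounded key space the footprint plateaus; with ever-new keys it grows without limit, matching the reported behavior.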



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32110) TM native memory leak when using time window in Pyflink ThreadMode

2023-05-25 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-32110.

Fix Version/s: 1.17.2
   Resolution: Fixed

Merged into master via 2f015eb899feb6e60a379a33baab442f93f17ca2

Merged into release-1.17 via 11ccc44b1e7beacc198a78cdc75be6294840a74b

> TM native memory leak when using time window in Pyflink ThreadMode
> --
>
> Key: FLINK-32110
> URL: https://issues.apache.org/jira/browse/FLINK-32110
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.0, 1.16.1
>Reporter: Yunjun Luo
>Assignee: Yunjun Luo
>Priority: Major
> Fix For: 1.17.2
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-32110) TM native memory leak when using time window in Pyflink ThreadMode

2023-05-25 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-32110:


Assignee: Yunjun Luo

> TM native memory leak when using time window in Pyflink ThreadMode
> --
>
> Key: FLINK-32110
> URL: https://issues.apache.org/jira/browse/FLINK-32110
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.0, 1.16.1
>Reporter: Yunjun Luo
>Assignee: Yunjun Luo
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31726) PyFlink module java.base does not "opens java.lang" to unnamed module

2023-04-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-31726.

Resolution: Not A Problem

Flink doesn't support JDK 19; please try JDK 8 or JDK 11.

> PyFlink module java.base does not "opens java.lang" to unnamed module
> -
>
> Key: FLINK-31726
> URL: https://issues.apache.org/jira/browse/FLINK-31726
> Project: Flink
>  Issue Type: Bug
>Reporter: padavan
>Priority: Major
>
> I want to run a simple example from the Flink documentation. After starting 
> it, I got this exception:
> {code:java}
> Unable to make field private final byte[] java.lang.String.value accessible: 
> module java.base does not "opens java.lang" to unnamed module @228575c0{code}
> Installed:
> {code:java}
> Python 3.10.6
> openjdk version "19.0.2" 2023-01-17 
> OpenJDK Runtime Environment (build 19.0.2+7-Ubuntu-0ubuntu322.04) 
> OpenJDK 64-Bit Server VM (build 19.0.2+7-Ubuntu-0ubuntu322.04, mixed mode, 
> sharing){code}
> Simple code from flink site:
> [https://nightlies.apache.org/flink/flink-docs-master/api/python/examples/datastream/word_count.html]
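Since the failure is purely a JDK-version mismatch, a preflight check of the `java -version` banner avoids the confusing reflection error. A sketch; the supported set {8, 11} is taken from the resolution comment above, and parsing of every vendor's banner format is not guaranteed:

```python
import re

def jdk_major(version_line: str) -> int:
    """Extract the major version from a `java -version` banner line."""
    m = re.search(r'version "([^"]+)"', version_line)
    if not m:
        raise ValueError("unrecognized java -version output")
    ver = m.group(1)
    # Pre-9 JDKs report "1.8.0_372"; later ones report "19.0.2" or "17-ea".
    if ver.startswith("1."):
        return int(ver.split(".")[1])
    return int(ver.split(".")[0].split("-")[0])

for line in ('openjdk version "19.0.2" 2023-01-17',
             'openjdk version "1.8.0_372" 2023-04-18'):
    major = jdk_major(line)
    print(major, "supported" if major in (8, 11) else "unsupported")
```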



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29421) Support python 3.10

2023-03-21 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29421:
-
Release Note: PyFlink 1.17 adds support for Python 3.10 and drops support for 
Python 3.6

> Support python 3.10
> ---
>
> Key: FLINK-29421
> URL: https://issues.apache.org/jira/browse/FLINK-29421
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Reporter: Eric Sirianni
>Assignee: Huang Xingbo
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> The {{apache-flink}} package fails to install on Python 3.10 due to inability 
> to compile {{numpy}}
> {noformat}
> numpy/core/src/multiarray/scalartypes.c.src:3242:12: error: too 
> few arguments to function ‘_Py_HashDouble’
>  3242 | return 
> _Py_HashDouble(npy_half_to_double(((PyHalfScalarObject *)obj)->obval));
>   |^~
> In file included from 
> /home/sirianni/.asdf/installs/python/3.10.6/include/python3.10/Python.h:77,
>  from 
> numpy/core/src/multiarray/scalartypes.c.src:3:
> 
> /home/sirianni/.asdf/installs/python/3.10.6/include/python3.10/pyhash.h:10:23:
>  note: declared here
>10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
> {noformat}
> Numpy issue https://github.com/numpy/numpy/issues/19033
> [Mailing list 
> thread|https://lists.apache.org/thread/f4r9hjt1l33xf5ngnswszhnls4cxkk52]
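A preflight check on the interpreter version can fail fast before pip ever tries to build numpy. A sketch, assuming the PyFlink 1.17 support window of Python 3.7 through 3.10 implied by the release note above:

```python
import sys

# Assumed support window for PyFlink 1.17: 3.7 <= version < 3.11.
SUPPORTED_MIN, SUPPORTED_MAX_EXCL = (3, 7), (3, 11)

def python_supported(version=None):
    """Check a (major, minor, ...) version tuple against the support window."""
    v = version or (sys.version_info.major, sys.version_info.minor)
    pair = (v[0], v[1])
    return SUPPORTED_MIN <= pair < SUPPORTED_MAX_EXCL

print(python_supported((3, 10, 6)))  # True: 3.10 support was added in 1.17
print(python_supported((3, 6, 9)))   # False: 3.6 support was dropped
```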



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29421) Support python 3.10

2023-03-21 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17703083#comment-17703083
 ] 

Xingbo Huang commented on FLINK-29421:
--

[~Peter_Howe] Apache Beam doesn't support Python 3.11 yet. Once Beam supports 
Python 3.11, PyFlink can upgrade its dependency to support 3.11.

> Support python 3.10
> ---
>
> Key: FLINK-29421
> URL: https://issues.apache.org/jira/browse/FLINK-29421
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Reporter: Eric Sirianni
>Assignee: Huang Xingbo
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> The {{apache-flink}} package fails to install on Python 3.10 due to inability 
> to compile {{numpy}}
> {noformat}
> numpy/core/src/multiarray/scalartypes.c.src:3242:12: error: too 
> few arguments to function ‘_Py_HashDouble’
>  3242 | return 
> _Py_HashDouble(npy_half_to_double(((PyHalfScalarObject *)obj)->obval));
>   |^~
> In file included from 
> /home/sirianni/.asdf/installs/python/3.10.6/include/python3.10/Python.h:77,
>  from 
> numpy/core/src/multiarray/scalartypes.c.src:3:
> 
> /home/sirianni/.asdf/installs/python/3.10.6/include/python3.10/pyhash.h:10:23:
>  note: declared here
>10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
> {noformat}
> Numpy issue https://github.com/numpy/numpy/issues/19033
> [Mailing list 
> thread|https://lists.apache.org/thread/f4r9hjt1l33xf5ngnswszhnls4cxkk52]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31524) StreamDependencyTests.test_add_python_file failed

2023-03-20 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-31524:
-
Issue Type: Technical Debt  (was: Bug)

> StreamDependencyTests.test_add_python_file failed
> -
>
> Key: FLINK-31524
> URL: https://issues.apache.org/jira/browse/FLINK-31524
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Python
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47318&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=6bb545dd-772d-5d8c-f258-f5085fba3295&l=24508
> Being caused by a "No module named 'test_dependency_manage_lib'" error:
> {code}
> [...]
> Mar 18 05:42:33 E   Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error received from SDK harness for instruction 1: Traceback (most recent call last):
> Mar 18 05:42:33 E     File "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 287, in _execute
> Mar 18 05:42:33 E       response = task()
> Mar 18 05:42:33 E     File "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 360, in 
> Mar 18 05:42:33 E       lambda: self.create_worker().do_instruction(request), request)
> Mar 18 05:42:33 E     File "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 597, in do_instruction
> Mar 18 05:42:33 E       getattr(request, request_type), request.instruction_id)
> Mar 18 05:42:33 E     File "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 634, in process_bundle
> Mar 18 05:42:33 E       bundle_processor.process_bundle(instruction_id))
> Mar 18 05:42:33 E     File "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1004, in process_bundle
> Mar 18 05:42:33 E       element.data)
> Mar 18 05:42:33 E     File "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 227, in process_encoded
> Mar 18 05:42:33 E       self.output(decoded_value)
> Mar 18 05:42:33 E     File "apache_beam/runners/worker/operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output
> Mar 18 05:42:33 E     File "apache_beam/runners/worker/operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output
> Mar 18 05:42:33 E     File "apache_beam/runners/worker/operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
> Mar 18 05:42:33 E     File "apache_beam/runners/worker/operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
> Mar 18 05:42:33 E     File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 169, in pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
> Mar 18 05:42:33 E     File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 196, in pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
> Mar 18 05:42:33 E     File "/__w/1/s/flink-python/pyflink/fn_execution/table/operations.py", line 110, in process_element
> Mar 18 05:42:33 E       return self.func(value)
> Mar 18 05:42:33 E     File "", line 1, in 
> Mar 18 05:42:33 E     File "/__w/1/s/flink-python/pyflink/table/tests/test_dependency.py", line 51, in plus_two
> Mar 18 05:42:33 E       from test_dependency_manage_lib import add_two
> Mar 18 05:42:33 E   ModuleNotFoundError: No module named 'test_dependency_manage_lib'
> Mar 18 05:42:33 E   
> Mar 18 05:42:33 E at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Mar 18 05:42:33 E at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Mar 18 05:42:33 E at org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:61)
> Mar 18 05:42:33 E at 
> 
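The traceback reduces to `test_dependency_manage_lib` not being importable on the worker. What the dependency-shipping step must achieve can be modeled in a few lines of plain Python; the temp directory stands in for the worker's working dir, so this is an illustration of the contract, not PyFlink's actual mechanism:

```python
import os
import sys
import tempfile

# Fake the worker-side setup: materialize the shipped module, then make it
# importable before the UDF body runs -- the step that failed in this test.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "test_dependency_manage_lib.py"), "w") as f:
    f.write("def add_two(x):\n    return x + 2\n")

sys.path.insert(0, workdir)  # without this line: ModuleNotFoundError
from test_dependency_manage_lib import add_two

print(add_two(40))  # 42
```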

[jira] [Assigned] (FLINK-31524) StreamDependencyTests.test_add_python_file failed

2023-03-20 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-31524:


Assignee: Xingbo Huang

> StreamDependencyTests.test_add_python_file failed
> -
>
> Key: FLINK-31524
> URL: https://issues.apache.org/jira/browse/FLINK-31524
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Blocker
>  Labels: test-stability
>
> Mar 18 05:42:33 E File 
> "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 169, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
> Mar 18 05:42:33 E File 
> "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 196, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/pyflink/fn_execution/table/operations.py", line 110, 
> in process_element
> Mar 18 05:42:33 E   return self.func(value)
> Mar 18 05:42:33 E File "", line 1, in 
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/pyflink/table/tests/test_dependency.py", line 51, in 
> plus_two
> Mar 18 05:42:33 E   from test_dependency_manage_lib 
> import add_two
> Mar 18 05:42:33 E   ModuleNotFoundError: No module named 
> 'test_dependency_manage_lib'
> Mar 18 05:42:33 E   
> Mar 18 05:42:33 E at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Mar 18 05:42:33 E at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Mar 18 05:42:33 E at 
> org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:61)
> Mar 18 05:42:33 E at 
> 

[jira] [Updated] (FLINK-31524) StreamDependencyTests.test_add_python_file failed

2023-03-20 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-31524:
-
Priority: Major  (was: Blocker)

> StreamDependencyTests.test_add_python_file failed
> -
>
> Key: FLINK-31524
> URL: https://issues.apache.org/jira/browse/FLINK-31524
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Xingbo Huang
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47318=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=24508
> Being caused by a "No module named 'test_dependency_manage_lib'" error:
> {code}
> [...]
> Mar 18 05:42:33 E   Caused by: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error 
> received from SDK harness for instruction 1: Traceback (most recent call 
> last):
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 287, in _execute
> Mar 18 05:42:33 E   response = task()
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 360, in 
> Mar 18 05:42:33 E   lambda: 
> self.create_worker().do_instruction(request), request)
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 597, in do_instruction
> Mar 18 05:42:33 E   getattr(request, request_type), 
> request.instruction_id)
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 634, in process_bundle
> Mar 18 05:42:33 E   
> bundle_processor.process_bundle(instruction_id))
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py",
>  line 1004, in process_bundle
> Mar 18 05:42:33 E   element.data)
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py",
>  line 227, in process_encoded
> Mar 18 05:42:33 E   self.output(decoded_value)
> Mar 18 05:42:33 E File 
> "apache_beam/runners/worker/operations.py", line 526, in 
> apache_beam.runners.worker.operations.Operation.output
> Mar 18 05:42:33 E File 
> "apache_beam/runners/worker/operations.py", line 528, in 
> apache_beam.runners.worker.operations.Operation.output
> Mar 18 05:42:33 E File 
> "apache_beam/runners/worker/operations.py", line 237, in 
> apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
> Mar 18 05:42:33 E File 
> "apache_beam/runners/worker/operations.py", line 240, in 
> apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
> Mar 18 05:42:33 E File 
> "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 169, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
> Mar 18 05:42:33 E File 
> "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 196, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/pyflink/fn_execution/table/operations.py", line 110, 
> in process_element
> Mar 18 05:42:33 E   return self.func(value)
> Mar 18 05:42:33 E File "", line 1, in 
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/pyflink/table/tests/test_dependency.py", line 51, in 
> plus_two
> Mar 18 05:42:33 E   from test_dependency_manage_lib 
> import add_two
> Mar 18 05:42:33 E   ModuleNotFoundError: No module named 
> 'test_dependency_manage_lib'
> Mar 18 05:42:33 E   
> Mar 18 05:42:33 E at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Mar 18 05:42:33 E at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Mar 18 05:42:33 E at 
> org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:61)
> Mar 18 05:42:33 E at 
> 

[jira] [Commented] (FLINK-31524) StreamDependencyTests.test_add_python_file failed

2023-03-20 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17702985#comment-17702985
 ] 

Xingbo Huang commented on FLINK-31524:
--

[~mapohl] Thanks a lot for reporting this issue. The failure of this test is 
related to the instability of the test environment. This test was first 
introduced in 1.14, and we have not made any relevant changes in 1.17, so I 
will lower its priority for now and keep watching.

> StreamDependencyTests.test_add_python_file failed
> -
>
> Key: FLINK-31524
> URL: https://issues.apache.org/jira/browse/FLINK-31524
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Priority: Blocker
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47318=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=24508
> Being caused by a "No module named 'test_dependency_manage_lib'" error:
> {code}
> [...]
> Mar 18 05:42:33 E   Caused by: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error 
> received from SDK harness for instruction 1: Traceback (most recent call 
> last):
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 287, in _execute
> Mar 18 05:42:33 E   response = task()
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 360, in 
> Mar 18 05:42:33 E   lambda: 
> self.create_worker().do_instruction(request), request)
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 597, in do_instruction
> Mar 18 05:42:33 E   getattr(request, request_type), 
> request.instruction_id)
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
>  line 634, in process_bundle
> Mar 18 05:42:33 E   
> bundle_processor.process_bundle(instruction_id))
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py",
>  line 1004, in process_bundle
> Mar 18 05:42:33 E   element.data)
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/.tox/py37-cython/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py",
>  line 227, in process_encoded
> Mar 18 05:42:33 E   self.output(decoded_value)
> Mar 18 05:42:33 E File 
> "apache_beam/runners/worker/operations.py", line 526, in 
> apache_beam.runners.worker.operations.Operation.output
> Mar 18 05:42:33 E File 
> "apache_beam/runners/worker/operations.py", line 528, in 
> apache_beam.runners.worker.operations.Operation.output
> Mar 18 05:42:33 E File 
> "apache_beam/runners/worker/operations.py", line 237, in 
> apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
> Mar 18 05:42:33 E File 
> "apache_beam/runners/worker/operations.py", line 240, in 
> apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
> Mar 18 05:42:33 E File 
> "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 169, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
> Mar 18 05:42:33 E File 
> "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 196, in 
> pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/pyflink/fn_execution/table/operations.py", line 110, 
> in process_element
> Mar 18 05:42:33 E   return self.func(value)
> Mar 18 05:42:33 E File "", line 1, in 
> Mar 18 05:42:33 E File 
> "/__w/1/s/flink-python/pyflink/table/tests/test_dependency.py", line 51, in 
> plus_two
> Mar 18 05:42:33 E   from test_dependency_manage_lib 
> import add_two
> Mar 18 05:42:33 E   ModuleNotFoundError: No module named 
> 'test_dependency_manage_lib'
> Mar 18 05:42:33 E   
> Mar 18 05:42:33 E at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Mar 18 05:42:33 E at 
> 

[jira] [Updated] (FLINK-30277) Allow PYTHONPATH of Python Worker configurable

2023-02-26 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-30277:
-
Issue Type: New Feature  (was: Improvement)

> Allow PYTHONPATH of Python Worker configurable
> --
>
> Key: FLINK-30277
> URL: https://issues.apache.org/jira/browse/FLINK-30277
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Prabhu Joseph
>Assignee: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Currently, the Python Worker gets its Python Flink dependencies in the 
> following ways:
>  # The Worker Node's system Python path (/usr/local/lib64/python3.7/site-packages).
>  # The client passes the Python dependencies through -pyfs and -pyarch, which 
> are localized into the PYTHONPATH of the Python Worker.
>  # The client passes the requirements through -pyreq, which get installed on 
> the Worker Node and added into the PYTHONPATH of the Python Worker.
> This Jira intends to make the PYTHONPATH of the Python Worker configurable, so 
> that an admin/service provider can install the required Python Flink dependencies 
> on a custom path (/usr/lib/pyflink/lib/python3.7/site-packages) on all Worker 
> Nodes and then set that path in the client machine's flink-conf.yaml. This way 
> it works without any configuration from the application users and without 
> affecting any other components that depend on the system Python path.
>  
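As a rough illustration of the proposed precedence (a sketch only — the actual configuration key, here called `python.pythonpath`, and the plumbing are decided in the linked pull request, not here), the worker would prepend the admin-configured path ahead of any PYTHONPATH it already has:

```python
import os

def build_worker_pythonpath(configured_path, existing_pythonpath=""):
    """Sketch: prepend an admin-configured dependency path (e.g. set via
    flink-conf.yaml) to the PYTHONPATH handed to the Python worker process.
    Empty components are dropped so an unset PYTHONPATH adds no separator."""
    parts = [p for p in (configured_path, existing_pythonpath) if p]
    return os.pathsep.join(parts)

# The custom install path takes precedence over the system site-packages,
# because Python searches sys.path entries in order.
merged = build_worker_pythonpath(
    "/usr/lib/pyflink/lib/python3.7/site-packages",
    "/usr/local/lib64/python3.7/site-packages",
)
print(merged)
```

Because the configured path comes first, a package installed there shadows the same package on the system path without removing it.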



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30277) Allow PYTHONPATH of Python Worker configurable

2023-02-26 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-30277:
-
Affects Version/s: 1.18.0
   (was: 1.16.0)

> Allow PYTHONPATH of Python Worker configurable
> --
>
> Key: FLINK-30277
> URL: https://issues.apache.org/jira/browse/FLINK-30277
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Affects Versions: 1.18.0
>Reporter: Prabhu Joseph
>Assignee: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Currently, the Python Worker gets its Python Flink dependencies in the 
> following ways:
>  # The Worker Node's system Python path (/usr/local/lib64/python3.7/site-packages).
>  # The client passes the Python dependencies through -pyfs and -pyarch, which 
> are localized into the PYTHONPATH of the Python Worker.
>  # The client passes the requirements through -pyreq, which get installed on 
> the Worker Node and added into the PYTHONPATH of the Python Worker.
> This Jira intends to make the PYTHONPATH of the Python Worker configurable, so 
> that an admin/service provider can install the required Python Flink dependencies 
> on a custom path (/usr/lib/pyflink/lib/python3.7/site-packages) on all Worker 
> Nodes and then set that path in the client machine's flink-conf.yaml. This way 
> it works without any configuration from the application users and without 
> affecting any other components that depend on the system Python path.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-30277) Allow PYTHONPATH of Python Worker configurable

2023-02-26 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-30277.

Fix Version/s: 1.18.0
   Resolution: Fixed

Merged into master via 2e0efe4e0723429e26ca04e2f61fcf89884dd077

> Allow PYTHONPATH of Python Worker configurable
> --
>
> Key: FLINK-30277
> URL: https://issues.apache.org/jira/browse/FLINK-30277
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Prabhu Joseph
>Assignee: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Currently, the Python Worker gets its Python Flink dependencies in the 
> following ways:
>  # The Worker Node's system Python path (/usr/local/lib64/python3.7/site-packages).
>  # The client passes the Python dependencies through -pyfs and -pyarch, which 
> are localized into the PYTHONPATH of the Python Worker.
>  # The client passes the requirements through -pyreq, which get installed on 
> the Worker Node and added into the PYTHONPATH of the Python Worker.
> This Jira intends to make the PYTHONPATH of the Python Worker configurable, so 
> that an admin/service provider can install the required Python Flink dependencies 
> on a custom path (/usr/lib/pyflink/lib/python3.7/site-packages) on all Worker 
> Nodes and then set that path in the client machine's flink-conf.yaml. This way 
> it works without any configuration from the application users and without 
> affecting any other components that depend on the system Python path.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30886) Content area is too narrow to show all content

2023-02-06 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17685000#comment-17685000
 ] 

Xingbo Huang commented on FLINK-30886:
--

Content that is too long needs to be viewed by scrolling; we can improve 
this style.

!image-2023-02-07-10-12-09-237.png!

> Content area is too narrow to show all content
> --
>
> Key: FLINK-30886
> URL: https://issues.apache.org/jira/browse/FLINK-30886
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Ari Huttunen
>Priority: Minor
> Attachments: Screenshot 2023-02-02 at 15.13.57.png, 
> image-2023-02-07-10-12-09-237.png
>
>
> If you open a page like this, you notice that the main content is not fully 
> visible.
> [https://nightlies.apache.org/flink/flink-docs-master/api/python/reference/pyflink.table/table_environment.html]
> Here's a screenshot. You can see that the right-most characters are cut off. 
> The screenshot is of Vivaldi, but it looks like that on Safari as well.
> !Screenshot 2023-02-02 at 15.13.57.png|width=1498,height=833!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30886) Content area is too narrow to show all content

2023-02-06 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-30886:
-
Attachment: image-2023-02-07-10-12-09-237.png

> Content area is too narrow to show all content
> --
>
> Key: FLINK-30886
> URL: https://issues.apache.org/jira/browse/FLINK-30886
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Ari Huttunen
>Priority: Minor
> Attachments: Screenshot 2023-02-02 at 15.13.57.png, 
> image-2023-02-07-10-12-09-237.png
>
>
> If you open a page like this, you notice that the main content is not fully 
> visible.
> [https://nightlies.apache.org/flink/flink-docs-master/api/python/reference/pyflink.table/table_environment.html]
> Here's a screenshot. You can see that the right-most characters are cut off. 
> The screenshot is of Vivaldi, but it looks like that on Safari as well.
> !Screenshot 2023-02-02 at 15.13.57.png|width=1498,height=833!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26974) Python EmbeddedThreadDependencyTests.test_add_python_file failed on azure

2023-01-03 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17653944#comment-17653944
 ] 

Xingbo Huang commented on FLINK-26974:
--

[~mapohl] Sure. I will take a look.

> Python EmbeddedThreadDependencyTests.test_add_python_file failed on azure
> -
>
> Key: FLINK-26974
> URL: https://issues.apache.org/jira/browse/FLINK-26974
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: Yun Gao
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> Mar 31 10:49:17 === FAILURES 
> ===
> Mar 31 10:49:17 __ 
> EmbeddedThreadDependencyTests.test_add_python_file __
> Mar 31 10:49:17 
> Mar 31 10:49:17 self = 
>  testMethod=test_add_python_file>
> Mar 31 10:49:17 
> Mar 31 10:49:17 def test_add_python_file(self):
> Mar 31 10:49:17 python_file_dir = os.path.join(self.tempdir, 
> "python_file_dir_" + str(uuid.uuid4()))
> Mar 31 10:49:17 os.mkdir(python_file_dir)
> Mar 31 10:49:17 python_file_path = os.path.join(python_file_dir, 
> "test_dependency_manage_lib.py")
> Mar 31 10:49:17 with open(python_file_path, 'w') as f:
> Mar 31 10:49:17 f.write("def add_two(a):\nraise 
> Exception('This function should not be called!')")
> Mar 31 10:49:17 self.t_env.add_python_file(python_file_path)
> Mar 31 10:49:17 
> Mar 31 10:49:17 python_file_dir_with_higher_priority = os.path.join(
> Mar 31 10:49:17 self.tempdir, "python_file_dir_" + 
> str(uuid.uuid4()))
> Mar 31 10:49:17 os.mkdir(python_file_dir_with_higher_priority)
> Mar 31 10:49:17 python_file_path_higher_priority = 
> os.path.join(python_file_dir_with_higher_priority,
> Mar 31 10:49:17 
> "test_dependency_manage_lib.py")
> Mar 31 10:49:17 with open(python_file_path_higher_priority, 'w') as f:
> Mar 31 10:49:17 f.write("def add_two(a):\nreturn a + 2")
> Mar 31 10:49:17 
> self.t_env.add_python_file(python_file_path_higher_priority)
> Mar 31 10:49:17 
> Mar 31 10:49:17 def plus_two(i):
> Mar 31 10:49:17 from test_dependency_manage_lib import add_two
> Mar 31 10:49:17 return add_two(i)
> Mar 31 10:49:17 
> Mar 31 10:49:17 self.t_env.create_temporary_system_function(
> Mar 31 10:49:17 "add_two", udf(plus_two, DataTypes.BIGINT(), 
> DataTypes.BIGINT()))
> Mar 31 10:49:17 table_sink = source_sink_utils.TestAppendSink(
> Mar 31 10:49:17 ['a', 'b'], [DataTypes.BIGINT(), 
> DataTypes.BIGINT()])
> Mar 31 10:49:17 self.t_env.register_table_sink("Results", table_sink)
> Mar 31 10:49:17 t = self.t_env.from_elements([(1, 2), (2, 5), (3, 
> 1)], ['a', 'b'])
> Mar 31 10:49:17 >   t.select(expr.call("add_two", t.a), 
> t.a).execute_insert("Results").wait()
> Mar 31 10:49:17 
> Mar 31 10:49:17 pyflink/table/tests/test_dependency.py:63: 
> Mar 31 10:49:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ _ _ _ _ _ _ _ _ 
> Mar 31 10:49:17 pyflink/table/table_result.py:76: in wait
> Mar 31 10:49:17 get_method(self._j_table_result, "await")()
> Mar 31 10:49:17 
> .tox/py38-cython/lib/python3.8/site-packages/py4j/java_gateway.py:1321: in 
> __call__
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=34001=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=27239
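The shadowing behavior this test relies on — a later-added file with the same module name taking precedence — can be reproduced with plain `sys.path` ordering, independent of Flink's dependency management (a standalone sketch, not PyFlink's actual localization logic):

```python
import os
import sys
import tempfile
import uuid

def write_module(directory, body):
    # Write a test_dependency_manage_lib.py with the given body into `directory`.
    with open(os.path.join(directory, "test_dependency_manage_lib.py"), "w") as f:
        f.write(body)

tempdir = tempfile.mkdtemp()
low = os.path.join(tempdir, "python_file_dir_" + str(uuid.uuid4()))
high = os.path.join(tempdir, "python_file_dir_" + str(uuid.uuid4()))
os.mkdir(low)
os.mkdir(high)
write_module(low, "def add_two(a):\n    raise Exception('should not be called')\n")
write_module(high, "def add_two(a):\n    return a + 2\n")

# Whichever directory appears first on sys.path wins the import.
sys.path.insert(0, low)
sys.path.insert(0, high)  # higher priority: inserted last, so searched first
from test_dependency_manage_lib import add_two
print(add_two(1))
```

The `ModuleNotFoundError` in the log means neither directory made it onto the worker's search path at all, which points at dependency localization rather than the test logic itself.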



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29461) ProcessDataStreamStreamingTests.test_process_function unstable

2022-12-12 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-29461.

Fix Version/s: 1.17.0
   1.16.1
   Resolution: Fixed

Merged into master via 4df6a398bbe2a9de7c23977176789e54cc0848fa

Merged into release-1.16 via 36c86f1c6cd34482c2eb3cc939d348e08fd08a2b

> ProcessDataStreamStreamingTests.test_process_function unstable
> --
>
> Key: FLINK-29461
> URL: https://issues.apache.org/jira/browse/FLINK-29461
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.17.0, 1.16.1
>
>
> {code:java}
> 2022-09-29T02:10:45.3571648Z Sep 29 02:10:45 self = 
>  testMethod=test_process_function>
> 2022-09-29T02:10:45.3572279Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3572810Z Sep 29 02:10:45 def 
> test_process_function(self):
> 2022-09-29T02:10:45.3573495Z Sep 29 02:10:45 
> self.env.set_parallelism(1)
> 2022-09-29T02:10:45.3574148Z Sep 29 02:10:45 
> self.env.get_config().set_auto_watermark_interval(2000)
> 2022-09-29T02:10:45.3580634Z Sep 29 02:10:45 
> self.env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
> 2022-09-29T02:10:45.3583194Z Sep 29 02:10:45 data_stream = 
> self.env.from_collection([(1, '1603708211000'),
> 2022-09-29T02:10:45.3584515Z Sep 29 02:10:45  
>(2, '1603708224000'),
> 2022-09-29T02:10:45.3585957Z Sep 29 02:10:45  
>(3, '1603708226000'),
> 2022-09-29T02:10:45.3587132Z Sep 29 02:10:45  
>(4, '1603708289000')],
> 2022-09-29T02:10:45.3588094Z Sep 29 02:10:45  
>   type_info=Types.ROW([Types.INT(), Types.STRING()]))
> 2022-09-29T02:10:45.3589090Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3589949Z Sep 29 02:10:45 class 
> MyProcessFunction(ProcessFunction):
> 2022-09-29T02:10:45.3590710Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3591856Z Sep 29 02:10:45 def 
> process_element(self, value, ctx):
> 2022-09-29T02:10:45.3592873Z Sep 29 02:10:45 
> current_timestamp = ctx.timestamp()
> 2022-09-29T02:10:45.3593862Z Sep 29 02:10:45 
> current_watermark = ctx.timer_service().current_watermark()
> 2022-09-29T02:10:45.3594915Z Sep 29 02:10:45 yield "current 
> timestamp: {}, current watermark: {}, current_value: {}"\
> 2022-09-29T02:10:45.3596201Z Sep 29 02:10:45 
> .format(str(current_timestamp), str(current_watermark), str(value))
> 2022-09-29T02:10:45.3597089Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3597942Z Sep 29 02:10:45 watermark_strategy = 
> WatermarkStrategy.for_monotonous_timestamps()\
> 2022-09-29T02:10:45.3599260Z Sep 29 02:10:45 
> .with_timestamp_assigner(SecondColumnTimestampAssigner())
> 2022-09-29T02:10:45.3600611Z Sep 29 02:10:45 
> data_stream.assign_timestamps_and_watermarks(watermark_strategy)\
> 2022-09-29T02:10:45.3601877Z Sep 29 02:10:45 
> .process(MyProcessFunction(), 
> output_type=Types.STRING()).add_sink(self.test_sink)
> 2022-09-29T02:10:45.3603527Z Sep 29 02:10:45 self.env.execute('test 
> process function')
> 2022-09-29T02:10:45.3604445Z Sep 29 02:10:45 results = 
> self.test_sink.get_results()
> 2022-09-29T02:10:45.3605684Z Sep 29 02:10:45 expected = ["current 
> timestamp: 1603708211000, current watermark: "
> 2022-09-29T02:10:45.3607157Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=1, f1='1603708211000')",
> 2022-09-29T02:10:45.3608256Z Sep 29 02:10:45 "current 
> timestamp: 1603708224000, current watermark: "
> 2022-09-29T02:10:45.3609650Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=2, f1='1603708224000')",
> 2022-09-29T02:10:45.3610854Z Sep 29 02:10:45 "current 
> timestamp: 1603708226000, current watermark: "
> 2022-09-29T02:10:45.3612279Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=3, f1='1603708226000')",
> 2022-09-29T02:10:45.3613382Z Sep 29 02:10:45 "current 
> timestamp: 1603708289000, current watermark: "
> 2022-09-29T02:10:45.3615683Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=4, f1='1603708289000')"]
> 2022-09-29T02:10:45.3617687Z Sep 29 02:10:45 >   
> self.assert_equals_sorted(expected, results)
> 2022-09-29T02:10:45.3618620Z Sep 29 02:10:45 
> 

[jira] [Closed] (FLINK-30366) Python Group Agg failed in cleaning the idle state

2022-12-12 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-30366.

Resolution: Fixed

Merged into master via 72a70313b59352736514b4927a1dfadc2e8e4232

Merged into release-1.16 via 041d863552396105d08af097a456ee291263d434

Merged into release-1.15 via d3413718bc5751a10dda6c5a7b4162753c07

> Python Group Agg failed in cleaning the idle state
> --
>
> Key: FLINK-30366
> URL: https://issues.apache.org/jira/browse/FLINK-30366
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.15.3
>Reporter: Xingbo Huang
>Assignee: Xingbo Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.16.1, 1.15.4
>
>
> {code:java}
> # aggregate_fast.pyx
> cpdef void on_timer(self, InternalRow key):
>     if self.state_cleaning_enabled:
>         self.state_backend.set_current_key(key)  # The key must be a list, but it is an InternalRow here.
>         accumulator_state = self.state_backend.get_value_state(
>             "accumulators", self.state_value_coder)
>         accumulator_state.clear()
>         self.aggs_handle.cleanup()
> {code}
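The type mismatch can be shown with a minimal stand-in for the state backend (the class and method names here are illustrative, not PyFlink's actual internal API): a backend that expects the current key as a list fails when handed the row object directly, and the fix is to extract the key's fields first.

```python
class Row:
    """Stand-in for InternalRow: wraps the grouping-key fields."""
    def __init__(self, *fields):
        self.fields = list(fields)

class StateBackend:
    """Stand-in backend that, like the real one, expects a list key."""
    def __init__(self):
        self.current_key = None

    def set_current_key(self, key):
        if not isinstance(key, list):
            raise TypeError("key must be a list, got %s" % type(key).__name__)
        self.current_key = key

backend = StateBackend()
key = Row("user-1")

# Buggy version: passing the row object directly fails with TypeError,
# so the idle-state cleanup in on_timer never runs.
try:
    backend.set_current_key(key)
    buggy_ok = True
except TypeError:
    buggy_ok = False

# Fixed version: convert the row to its list of fields before setting it.
backend.set_current_key(key.fields)
print(buggy_ok, backend.current_key)
```

Because the error happens inside a timer callback, the symptom is not a visible job failure but state that is never cleaned up.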



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29461) ProcessDataStreamStreamingTests.test_process_function unstable

2022-12-12 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17646050#comment-17646050
 ] 

Xingbo Huang commented on FLINK-29461:
--

[~mapohl] I have prepared a PR to make the test more stable.

> ProcessDataStreamStreamingTests.test_process_function unstable
> --
>
> Key: FLINK-29461
> URL: https://issues.apache.org/jira/browse/FLINK-29461
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-09-29T02:10:45.3571648Z Sep 29 02:10:45 self = 
>  testMethod=test_process_function>
> 2022-09-29T02:10:45.3572279Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3572810Z Sep 29 02:10:45 def 
> test_process_function(self):
> 2022-09-29T02:10:45.3573495Z Sep 29 02:10:45 
> self.env.set_parallelism(1)
> 2022-09-29T02:10:45.3574148Z Sep 29 02:10:45 
> self.env.get_config().set_auto_watermark_interval(2000)
> 2022-09-29T02:10:45.3580634Z Sep 29 02:10:45 
> self.env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
> 2022-09-29T02:10:45.3583194Z Sep 29 02:10:45 data_stream = 
> self.env.from_collection([(1, '1603708211000'),
> 2022-09-29T02:10:45.3584515Z Sep 29 02:10:45  
>(2, '1603708224000'),
> 2022-09-29T02:10:45.3585957Z Sep 29 02:10:45  
>(3, '1603708226000'),
> 2022-09-29T02:10:45.3587132Z Sep 29 02:10:45  
>(4, '1603708289000')],
> 2022-09-29T02:10:45.3588094Z Sep 29 02:10:45  
>   type_info=Types.ROW([Types.INT(), Types.STRING()]))
> 2022-09-29T02:10:45.3589090Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3589949Z Sep 29 02:10:45 class 
> MyProcessFunction(ProcessFunction):
> 2022-09-29T02:10:45.3590710Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3591856Z Sep 29 02:10:45 def 
> process_element(self, value, ctx):
> 2022-09-29T02:10:45.3592873Z Sep 29 02:10:45 
> current_timestamp = ctx.timestamp()
> 2022-09-29T02:10:45.3593862Z Sep 29 02:10:45 
> current_watermark = ctx.timer_service().current_watermark()
> 2022-09-29T02:10:45.3594915Z Sep 29 02:10:45 yield "current 
> timestamp: {}, current watermark: {}, current_value: {}"\
> 2022-09-29T02:10:45.3596201Z Sep 29 02:10:45 
> .format(str(current_timestamp), str(current_watermark), str(value))
> 2022-09-29T02:10:45.3597089Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3597942Z Sep 29 02:10:45 watermark_strategy = 
> WatermarkStrategy.for_monotonous_timestamps()\
> 2022-09-29T02:10:45.3599260Z Sep 29 02:10:45 
> .with_timestamp_assigner(SecondColumnTimestampAssigner())
> 2022-09-29T02:10:45.3600611Z Sep 29 02:10:45 
> data_stream.assign_timestamps_and_watermarks(watermark_strategy)\
> 2022-09-29T02:10:45.3601877Z Sep 29 02:10:45 
> .process(MyProcessFunction(), 
> output_type=Types.STRING()).add_sink(self.test_sink)
> 2022-09-29T02:10:45.3603527Z Sep 29 02:10:45 self.env.execute('test 
> process function')
> 2022-09-29T02:10:45.3604445Z Sep 29 02:10:45 results = 
> self.test_sink.get_results()
> 2022-09-29T02:10:45.3605684Z Sep 29 02:10:45 expected = ["current 
> timestamp: 1603708211000, current watermark: "
> 2022-09-29T02:10:45.3607157Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=1, f1='1603708211000')",
> 2022-09-29T02:10:45.3608256Z Sep 29 02:10:45 "current 
> timestamp: 1603708224000, current watermark: "
> 2022-09-29T02:10:45.3609650Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=2, f1='1603708224000')",
> 2022-09-29T02:10:45.3610854Z Sep 29 02:10:45 "current 
> timestamp: 1603708226000, current watermark: "
> 2022-09-29T02:10:45.3612279Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=3, f1='1603708226000')",
> 2022-09-29T02:10:45.3613382Z Sep 29 02:10:45 "current 
> timestamp: 1603708289000, current watermark: "
> 2022-09-29T02:10:45.3615683Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=4, f1='1603708289000')"]
> 2022-09-29T02:10:45.3617687Z Sep 29 02:10:45 >   
> self.assert_equals_sorted(expected, results)
> 2022-09-29T02:10:45.3618620Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3619425Z Sep 29 02:10:45 
> pyflink/datastream/tests/test_data_stream.py:986: 
> 2022-09-29T02:10:45.3620424Z Sep 29 02:10:45 _ _ _ _ _ _ _ _ _ _ 

[jira] [Created] (FLINK-30366) Python Group Agg failed in cleaning the idle state

2022-12-11 Thread Xingbo Huang (Jira)
Xingbo Huang created FLINK-30366:


 Summary: Python Group Agg failed in cleaning the idle state
 Key: FLINK-30366
 URL: https://issues.apache.org/jira/browse/FLINK-30366
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.15.3, 1.16.0
Reporter: Xingbo Huang
Assignee: Xingbo Huang
 Fix For: 1.17.0, 1.16.1, 1.15.4


{code:java}
# aggregate_fast.pyx
cpdef void on_timer(self, InternalRow key):
    if self.state_cleaning_enabled:
        # The key must be a list, but it is an InternalRow here.
        self.state_backend.set_current_key(key)
        accumulator_state = self.state_backend.get_value_state(
            "accumulators", self.state_value_coder)
        accumulator_state.clear()
        self.aggs_handle.cleanup()
{code}
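The snippet above hands the raw InternalRow key straight to the state backend, which expects a list. A minimal sketch of the shape of the fix, converting the key before setting it (the InternalRow class and helper below are illustrative stand-ins, not PyFlink internals):

```python
class InternalRow:
    """Stand-in for PyFlink's internal row type (illustrative only)."""

    def __init__(self, *fields):
        self._fields = list(fields)

    def to_list(self):
        return list(self._fields)


def normalize_state_key(key):
    # The state backend expects the current key as a plain list, so an
    # InternalRow handed to on_timer must be converted before
    # set_current_key is called; lists pass through unchanged.
    if isinstance(key, InternalRow):
        return key.to_list()
    return key
```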



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-21223) Support to specify the input/output types of Python UDFs via string

2022-12-11 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-21223.

Fix Version/s: 1.17.0
   Resolution: Done

Merged into master via 6cc00a707b238facbf5bf88a9fd727c8f9daab89

> Support to specify the input/output types of Python UDFs via string
> ---
>
> Key: FLINK-21223
> URL: https://issues.apache.org/jira/browse/FLINK-21223
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Huang Xingbo
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
> Fix For: 1.17.0
>
>
> Currently, users need to specify the input/output types as following:
> {code}
> {code}
> @udf(result_type=DataTypes.BIGINT())
> def add(i, j):
>     return i + j
> {code}
> [FLIP-65|https://cwiki.apache.org/confluence/display/FLINK/FLIP-65%3A+New+type+inference+for+Table+API+UDFs]
>  makes it possible to support syntaxes as following:
> {code}
> {code}
> @udf(result_type="BIGINT")
> def add(i, j):
>     return i + j
> {code}
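The idea behind the string form can be sketched as a lookup from SQL type-name strings to type objects, so both spellings resolve to the same type (a toy registry; PyFlink's real type inference is more involved and these names are illustrative):

```python
# Toy registry mapping SQL type-name strings to Python type objects, so
# result_type="BIGINT" and result_type=DataTypes.BIGINT() can be treated
# uniformly. Illustrative only, not PyFlink's implementation.
_TYPE_REGISTRY = {"BIGINT": int, "DOUBLE": float, "STRING": str}


def resolve_result_type(result_type):
    # Strings are looked up case-insensitively; non-strings are assumed
    # to already be type objects and pass through unchanged.
    if isinstance(result_type, str):
        return _TYPE_REGISTRY[result_type.strip().upper()]
    return result_type
```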



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29155) Improve default config of grpcServer in Process Mode

2022-11-30 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-29155.

Fix Version/s: 1.17.0
   1.16.1
   1.15.4
   Resolution: Fixed

Merged into master via 37bfd8cedf003c2bd714c8b4e87d1594924c748e

Merged into release-1.16 via 6d66200d5722aa5f33a32de801259eb16f095d15

Merged into release-1.15 via 6659d0049d9270173458b44fa87dd5317e9c6638

> Improve default config of grpcServer in Process Mode
> 
>
> Key: FLINK-29155
> URL: https://issues.apache.org/jira/browse/FLINK-29155
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.14.5, 1.16.0, 1.15.3
>Reporter: Xingbo Huang
>Assignee: Xingbo Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.16.1, 1.15.4
>
>
> The existing grpcServer configuration may cause the channel to disconnect when 
> a large amount of data is transferred. This problem has been troubling some 
> PyFlink users.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29461) ProcessDataStreamStreamingTests.test_process_function unstable

2022-11-29 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17640473#comment-17640473
 ] 

Xingbo Huang commented on FLINK-29461:
--

[~mapohl] Sorry for the late reply; I will look into this issue in the next 
couple of days.

> ProcessDataStreamStreamingTests.test_process_function unstable
> --
>
> Key: FLINK-29461
> URL: https://issues.apache.org/jira/browse/FLINK-29461
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-29T02:10:45.3571648Z Sep 29 02:10:45 self = 
> <ProcessDataStreamStreamingTests testMethod=test_process_function>
> 2022-09-29T02:10:45.3572279Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3572810Z Sep 29 02:10:45 def 
> test_process_function(self):
> 2022-09-29T02:10:45.3573495Z Sep 29 02:10:45 
> self.env.set_parallelism(1)
> 2022-09-29T02:10:45.3574148Z Sep 29 02:10:45 
> self.env.get_config().set_auto_watermark_interval(2000)
> 2022-09-29T02:10:45.3580634Z Sep 29 02:10:45 
> self.env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
> 2022-09-29T02:10:45.3583194Z Sep 29 02:10:45 data_stream = 
> self.env.from_collection([(1, '1603708211000'),
> 2022-09-29T02:10:45.3584515Z Sep 29 02:10:45  
>(2, '1603708224000'),
> 2022-09-29T02:10:45.3585957Z Sep 29 02:10:45  
>(3, '1603708226000'),
> 2022-09-29T02:10:45.3587132Z Sep 29 02:10:45  
>(4, '1603708289000')],
> 2022-09-29T02:10:45.3588094Z Sep 29 02:10:45  
>   type_info=Types.ROW([Types.INT(), Types.STRING()]))
> 2022-09-29T02:10:45.3589090Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3589949Z Sep 29 02:10:45 class 
> MyProcessFunction(ProcessFunction):
> 2022-09-29T02:10:45.3590710Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3591856Z Sep 29 02:10:45 def 
> process_element(self, value, ctx):
> 2022-09-29T02:10:45.3592873Z Sep 29 02:10:45 
> current_timestamp = ctx.timestamp()
> 2022-09-29T02:10:45.3593862Z Sep 29 02:10:45 
> current_watermark = ctx.timer_service().current_watermark()
> 2022-09-29T02:10:45.3594915Z Sep 29 02:10:45 yield "current 
> timestamp: {}, current watermark: {}, current_value: {}"\
> 2022-09-29T02:10:45.3596201Z Sep 29 02:10:45 
> .format(str(current_timestamp), str(current_watermark), str(value))
> 2022-09-29T02:10:45.3597089Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3597942Z Sep 29 02:10:45 watermark_strategy = 
> WatermarkStrategy.for_monotonous_timestamps()\
> 2022-09-29T02:10:45.3599260Z Sep 29 02:10:45 
> .with_timestamp_assigner(SecondColumnTimestampAssigner())
> 2022-09-29T02:10:45.3600611Z Sep 29 02:10:45 
> data_stream.assign_timestamps_and_watermarks(watermark_strategy)\
> 2022-09-29T02:10:45.3601877Z Sep 29 02:10:45 
> .process(MyProcessFunction(), 
> output_type=Types.STRING()).add_sink(self.test_sink)
> 2022-09-29T02:10:45.3603527Z Sep 29 02:10:45 self.env.execute('test 
> process function')
> 2022-09-29T02:10:45.3604445Z Sep 29 02:10:45 results = 
> self.test_sink.get_results()
> 2022-09-29T02:10:45.3605684Z Sep 29 02:10:45 expected = ["current 
> timestamp: 1603708211000, current watermark: "
> 2022-09-29T02:10:45.3607157Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=1, f1='1603708211000')",
> 2022-09-29T02:10:45.3608256Z Sep 29 02:10:45 "current 
> timestamp: 1603708224000, current watermark: "
> 2022-09-29T02:10:45.3609650Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=2, f1='1603708224000')",
> 2022-09-29T02:10:45.3610854Z Sep 29 02:10:45 "current 
> timestamp: 1603708226000, current watermark: "
> 2022-09-29T02:10:45.3612279Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=3, f1='1603708226000')",
> 2022-09-29T02:10:45.3613382Z Sep 29 02:10:45 "current 
> timestamp: 1603708289000, current watermark: "
> 2022-09-29T02:10:45.3615683Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=4, f1='1603708289000')"]
> 2022-09-29T02:10:45.3617687Z Sep 29 02:10:45 >   
> self.assert_equals_sorted(expected, results)
> 2022-09-29T02:10:45.3618620Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3619425Z Sep 29 02:10:45 
> pyflink/datastream/tests/test_data_stream.py:986: 
> 2022-09-29T02:10:45.3620424Z Sep 29 02:10:45 _ _ _ _ _ _ _ 

[jira] [Created] (FLINK-30169) Adds version switcher in PyFlink API doc

2022-11-23 Thread Xingbo Huang (Jira)
Xingbo Huang created FLINK-30169:


 Summary: Adds version switcher in PyFlink API doc
 Key: FLINK-30169
 URL: https://issues.apache.org/jira/browse/FLINK-30169
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python, Documentation
Reporter: Xingbo Huang
Assignee: Xingbo Huang


Adds version switcher in PyFlink API doc



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (FLINK-29833) Improve PyFlink support in Windows

2022-11-22 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reopened FLINK-29833:
--

> Improve PyFlink support in Windows
> --
>
> Key: FLINK-29833
> URL: https://issues.apache.org/jira/browse/FLINK-29833
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.16.0, 1.15.3
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>
> Many users are accustomed to developing PyFlink jobs on Windows, so it is 
> necessary to improve the ease of PyFlink job development on Windows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29833) Improve PyFlink support in Windows

2022-11-21 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-29833.

Resolution: Fixed

> Improve PyFlink support in Windows
> --
>
> Key: FLINK-29833
> URL: https://issues.apache.org/jira/browse/FLINK-29833
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.16.0, 1.15.3
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>
> Many users are accustomed to developing PyFlink jobs on Windows, so it is 
> necessary to improve the ease of PyFlink job development on Windows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29421) Support python 3.10

2022-11-21 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29421:
-
Issue Type: New Feature  (was: Bug)

> Support python 3.10
> ---
>
> Key: FLINK-29421
> URL: https://issues.apache.org/jira/browse/FLINK-29421
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Reporter: Eric Sirianni
>Priority: Minor
>
> The {{apache-flink}} package fails to install on Python 3.10 due to inability 
> to compile {{numpy}}
> {noformat}
> numpy/core/src/multiarray/scalartypes.c.src:3242:12: error: too 
> few arguments to function ‘_Py_HashDouble’
>  3242 | return 
> _Py_HashDouble(npy_half_to_double(((PyHalfScalarObject *)obj)->obval));
>   |^~
> In file included from 
> /home/sirianni/.asdf/installs/python/3.10.6/include/python3.10/Python.h:77,
>  from 
> numpy/core/src/multiarray/scalartypes.c.src:3:
> 
> /home/sirianni/.asdf/installs/python/3.10.6/include/python3.10/pyhash.h:10:23:
>  note: declared here
>10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
> {noformat}
> Numpy issue https://github.com/numpy/numpy/issues/19033
> [Mailing list 
> thread|https://lists.apache.org/thread/f4r9hjt1l33xf5ngnswszhnls4cxkk52]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-30097) CachedDataStream java example in the document is not correct

2022-11-20 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-30097:


Assignee: Xuannan Su

> CachedDataStream java example in the document is not correct
> 
>
> Key: FLINK-30097
> URL: https://issues.apache.org/jira/browse/FLINK-30097
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: Prabhu Joseph
>Assignee: Xuannan Su
>Priority: Minor
>
> CachedDataStream java example in the document is not correct - 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/overview/#datastream-rarr-cacheddatastream]
>  
> {code:java}
> DataStream dataStream = //...
> CachedDataStream cachedDataStream = dataStream.cache();{code}
> The example shows invoking cache() on a DataStream instance, but the DataStream 
> class does not have a cache() method. The correct usage is to call cache() on an 
> instance of DataStreamSource/SideOutputDataStream/SingleOutputStreamOperator. 
>  
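The error can be illustrated with a stripped-down class hierarchy mirroring the shape of the Java API (Python stand-ins; the real classes live in Flink's Java codebase): cache() is declared on the subclasses, not on the DataStream base class, which is why the documented example does not compile.

```python
# Stand-ins mirroring Flink's Java API shape (illustrative only):
# cache() exists on SingleOutputStreamOperator, not on DataStream.
class DataStream:
    pass


class CachedDataStream(DataStream):
    pass


class SingleOutputStreamOperator(DataStream):
    def cache(self):
        # Returns the cached view of this stream.
        return CachedDataStream()
```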



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29479) Support whether using system PythonPath for PyFlink jobs

2022-10-25 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang resolved FLINK-29479.
--
Fix Version/s: 1.15.3
   1.16.1
 Assignee: jackylau
   Resolution: Fixed

Merged into master via 8e16cc8e424e352c5b45b46f1520ecf0edec70be

Merged into release-1.16 via 9213effb32a4e80d8113ba7bf36782f33a5e197c

Merged into release-1.15 via 91ccde95c7eae7f020d68592a7fa76201674724a

> Support whether using system PythonPath for PyFlink jobs
> 
>
> Key: FLINK-29479
> URL: https://issues.apache.org/jira/browse/FLINK-29479
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.15.3, 1.16.1
>
>
> A PYTHONPATH environment variable often already exists in the system (e.g. in 
> YARN/K8s images), and it can sometimes conflict with users' Python dependencies. 
> So I suggest adding a config option that controls whether the system PYTHONPATH is used.
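A hedged sketch of what such a switch could do when the Python worker environment is assembled: either drop the image-provided PYTHONPATH or append it after the job's own dependencies. The function and flag below are hypothetical illustrations, not Flink's actual option.

```python
import os


def build_worker_env(inherit_system_pythonpath, job_paths):
    """Assemble env vars for a Python worker (hypothetical sketch).

    inherit_system_pythonpath: whether the image/system PYTHONPATH is kept.
    job_paths: the job's own dependency directories, always first.
    """
    env = dict(os.environ)
    if not inherit_system_pythonpath:
        # Drop the image-provided path entirely to avoid conflicts.
        env.pop("PYTHONPATH", None)
    existing = env.get("PYTHONPATH")
    parts = list(job_paths) + ([existing] if existing else [])
    env["PYTHONPATH"] = os.pathsep.join(parts)
    return env
```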



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29479) Support whether using system PythonPath for PyFlink jobs

2022-10-25 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29479:
-
Issue Type: Bug  (was: Improvement)

> Support whether using system PythonPath for PyFlink jobs
> 
>
> Key: FLINK-29479
> URL: https://issues.apache.org/jira/browse/FLINK-29479
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> A PYTHONPATH environment variable often already exists in the system (e.g. in 
> YARN/K8s images), and it can sometimes conflict with users' Python dependencies. 
> So I suggest adding a config option that controls whether the system PYTHONPATH is used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29677) Prevent dropping the current catalog

2022-10-25 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29677:
-
Fix Version/s: 1.17.0
   1.16.1

> Prevent dropping the current catalog
> 
>
> Key: FLINK-29677
> URL: https://issues.apache.org/jira/browse/FLINK-29677
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.16.1
>
> Attachments: image-2022-10-18-16-55-38-525.png, 
> image-2022-10-18-17-02-30-318.png
>
>
> h3. Issue Description
> Currently, the drop catalog statement
>  
> {code:java}
> DROP CATALOG my_cat{code}
>  
> does not reset the current catalog. As a result, if a catalog that is in 
> use is dropped, the following statements will yield inconsistent results.
>  
> {code:java}
> SHOW CURRENT CATALOG
> SHOW CATALOGS
> {code}
>  
> h3. How to Reproduce
> !image-2022-10-18-16-55-38-525.png|width=444,height=421!
>  
> h3. Proposed Fix Plan
> The root cause is that `CatalogManager#unregisterCatalog` does not reset 
> `currentCatalogName`. 
> Regarding this issue, I checked MySQL and PG's behavior.
> For MySQL, dropping the database currently in use is allowed, and the 
> current database is then set to NULL.
> !image-2022-10-18-17-02-30-318.png|width=288,height=435!
>  
> For PG, it is not allowed to drop the database currently in use.
>  
> I think both behaviors are reasonable, but for simplicity I suggest 
> adhering to PG's behavior and throwing an exception when dropping the current catalog.
> cc [~jark]  [~fsk119]  
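The PG-style rule proposed here can be sketched in a few lines (class and method names below are illustrative stand-ins for CatalogManager, not Flink's actual API):

```python
class CatalogManager:
    """Toy model of the proposed rule: refuse to drop the catalog
    currently in use (illustrative, not Flink's CatalogManager)."""

    def __init__(self, initial="default_catalog"):
        self._catalogs = {initial: object()}
        self.current_catalog_name = initial

    def register_catalog(self, name):
        self._catalogs[name] = object()

    def unregister_catalog(self, name):
        # Mirrors PG: dropping the database/catalog in use is an error,
        # so the current catalog name never dangles.
        if name == self.current_catalog_name:
            raise ValueError(
                f"Cannot drop catalog '{name}': it is currently in use")
        self._catalogs.pop(name)
```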



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29217) CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.testConcurrentCheckpoint failed with AssertionFailedError

2022-10-24 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29217:
-
Fix Version/s: 1.16.1
   (was: 1.16.0)

> CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.testConcurrentCheckpoint
>  failed with AssertionFailedError
> -
>
> Key: FLINK-29217
> URL: https://issues.apache.org/jira/browse/FLINK-29217
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Xingbo Huang
>Assignee: Yunfeng Zhou
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.1
>
>
> {code:java}
> 2022-09-07T02:00:50.2507464Z Sep 07 02:00:50 [ERROR] 
> org.apache.flink.streaming.runtime.tasks.CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.testConcurrentCheckpoint
>   Time elapsed: 2.137 s  <<< FAILURE!
> 2022-09-07T02:00:50.2508673Z Sep 07 02:00:50 
> org.opentest4j.AssertionFailedError: 
> 2022-09-07T02:00:50.2509309Z Sep 07 02:00:50 
> 2022-09-07T02:00:50.2509945Z Sep 07 02:00:50 Expecting value to be false but 
> was true
> 2022-09-07T02:00:50.2511950Z Sep 07 02:00:50  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 2022-09-07T02:00:50.2513254Z Sep 07 02:00:50  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 2022-09-07T02:00:50.2514621Z Sep 07 02:00:50  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 2022-09-07T02:00:50.2516342Z Sep 07 02:00:50  at 
> org.apache.flink.streaming.runtime.tasks.CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.testConcurrentCheckpoint(CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.java:173)
> 2022-09-07T02:00:50.2517852Z Sep 07 02:00:50  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-07T02:00:50.251Z Sep 07 02:00:50  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-07T02:00:50.2520065Z Sep 07 02:00:50  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-07T02:00:50.2521153Z Sep 07 02:00:50  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-07T02:00:50.2522747Z Sep 07 02:00:50  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-07T02:00:50.2523973Z Sep 07 02:00:50  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-07T02:00:50.2525158Z Sep 07 02:00:50  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-07T02:00:50.2526347Z Sep 07 02:00:50  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-07T02:00:50.2527525Z Sep 07 02:00:50  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-07T02:00:50.2528646Z Sep 07 02:00:50  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-09-07T02:00:50.2529708Z Sep 07 02:00:50  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-09-07T02:00:50.2530744Z Sep 07 02:00:50  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-07T02:00:50.2532008Z Sep 07 02:00:50  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-07T02:00:50.2533137Z Sep 07 02:00:50  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-07T02:00:50.2544265Z Sep 07 02:00:50  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-07T02:00:50.2545595Z Sep 07 02:00:50  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-07T02:00:50.2546782Z Sep 07 02:00:50  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-07T02:00:50.2547810Z Sep 07 02:00:50  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-07T02:00:50.2548890Z Sep 07 02:00:50  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-07T02:00:50.2549932Z Sep 07 02:00:50  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-07T02:00:50.2550933Z Sep 07 02:00:50  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-07T02:00:50.2552325Z Sep 07 02:00:50  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-07T02:00:50.2553660Z Sep 07 02:00:50  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 

[jira] [Updated] (FLINK-29733) Error Flink connector hive Test failing

2022-10-24 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29733:
-
Labels: pull-request-available test-stability  (was: pull-request-available)

> Error Flink connector hive Test failing
> ---
>
> Key: FLINK-29733
> URL: https://issues.apache.org/jira/browse/FLINK-29733
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.17.0
>Reporter: Samrat Deb
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42328=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=d04c9862-880c-52f5-574b-a7a79fef8e0f]
> This is caused by FLINK-29478
> reported by [~hxbks2ks] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29733) Error Flink connector hive Test failing

2022-10-24 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29733:
-
Component/s: Connectors / Hive

> Error Flink connector hive Test failing
> ---
>
> Key: FLINK-29733
> URL: https://issues.apache.org/jira/browse/FLINK-29733
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42328=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=d04c9862-880c-52f5-574b-a7a79fef8e0f]
> This is caused by FLINK-29478
> reported by [~hxbks2ks] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29733) Error Flink connector hive Test failing

2022-10-24 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29733:
-
Affects Version/s: 1.17.0

> Error Flink connector hive Test failing
> ---
>
> Key: FLINK-29733
> URL: https://issues.apache.org/jira/browse/FLINK-29733
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.17.0
>Reporter: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42328=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=d04c9862-880c-52f5-574b-a7a79fef8e0f]
> This is caused by FLINK-29478
> reported by [~hxbks2ks] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29405) InputFormatCacheLoaderTest is unstable

2022-10-24 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17622973#comment-17622973
 ] 

Xingbo Huang commented on FLINK-29405:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42340=logs=f2c100be-250b-5e85-7bbe-176f68fcddc5=05efd11e-5400-54a4-0d27-a4663be008a9=11649

> InputFormatCacheLoaderTest is unstable
> --
>
> Key: FLINK-29405
> URL: https://issues.apache.org/jira/browse/FLINK-29405
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> #testExceptionDuringReload/#testCloseAndInterruptDuringReload fail reliably 
> when run in a loop.
> {code}
> java.lang.AssertionError: 
> Expecting AtomicInteger(0) to have value:
>   0
> but did not.
>   at 
> org.apache.flink.table.runtime.functions.table.fullcache.inputformat.InputFormatCacheLoaderTest.testCloseAndInterruptDuringReload(InputFormatCacheLoaderTest.java:161)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-24119) KafkaITCase.testTimestamps fails due to "Topic xxx already exist"

2022-10-24 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17622971#comment-17622971
 ] 

Xingbo Huang commented on FLINK-24119:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42337=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203=37457

> KafkaITCase.testTimestamps fails due to "Topic xxx already exist"
> -
>
> Key: FLINK-24119
> URL: https://issues.apache.org/jira/browse/FLINK-24119
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0, 1.16.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.16.1
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23328=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7419
> {code}
> Sep 01 15:53:20 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 162.65 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Sep 01 15:53:20 [ERROR] testTimestamps  Time elapsed: 23.237 s  <<< FAILURE!
> Sep 01 15:53:20 java.lang.AssertionError: Create test topic : tstopic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'tstopic' already 
> exists.
> Sep 01 15:53:20   at org.junit.Assert.fail(Assert.java:89)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testTimestamps(KafkaITCase.java:191)
> Sep 01 15:53:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 01 15:53:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 01 15:53:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 01 15:53:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Sep 01 15:53:20   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Sep 01 15:53:20   at java.lang.Thread.run(Thread.java:748)
> {code}
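A common mitigation for this kind of leftover-topic flakiness is to make test-topic creation idempotent: on "already exists", delete the stale topic and retry. The admin client below is a toy stand-in, not Flink's KafkaTestEnvironmentImpl.

```python
class TopicExistsError(Exception):
    pass


class FakeAdmin:
    """Toy admin client standing in for a Kafka admin (illustrative only)."""

    def __init__(self, existing=()):
        self.topics = set(existing)

    def create(self, name):
        if name in self.topics:
            raise TopicExistsError(name)
        self.topics.add(name)

    def delete(self, name):
        self.topics.discard(name)


def create_test_topic(admin, name, retries=3):
    # Treat setup as idempotent: a leftover topic from a previous,
    # not-fully-cleaned-up run is deleted and creation is retried.
    for _ in range(retries):
        try:
            admin.create(name)
            return True
        except TopicExistsError:
            admin.delete(name)
    return False
```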



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29405) InputFormatCacheLoaderTest is unstable

2022-10-24 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17622969#comment-17622969
 ] 

Xingbo Huang commented on FLINK-29405:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42328=logs=f2c100be-250b-5e85-7bbe-176f68fcddc5=05efd11e-5400-54a4-0d27-a4663be008a9

> InputFormatCacheLoaderTest is unstable
> --
>
> Key: FLINK-29405
> URL: https://issues.apache.org/jira/browse/FLINK-29405
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> #testExceptionDuringReload/#testCloseAndInterruptDuringReload fail reliably 
> when run in a loop.
> {code}
> java.lang.AssertionError: 
> Expecting AtomicInteger(0) to have value:
>   0
> but did not.
>   at 
> org.apache.flink.table.runtime.functions.table.fullcache.inputformat.InputFormatCacheLoaderTest.testCloseAndInterruptDuringReload(InputFormatCacheLoaderTest.java:161)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29478) Flink sql Connector hive to support 3.1.3

2022-10-24 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17622967#comment-17622967
 ] 

Xingbo Huang commented on FLINK-29478:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42328=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=d04c9862-880c-52f5-574b-a7a79fef8e0f]
{code:java}
2022-10-22T01:14:14.6319640Z Oct 22 01:14:14 java.lang.AssertionError: Unknown 
test version 3.1.3
2022-10-22T01:14:14.6321027Z Oct 22 01:14:14at 
org.apache.flink.table.module.hive.HiveModuleTest.verifyNumBuiltInFunctions(HiveModuleTest.java:83)
2022-10-22T01:14:14.6322334Z Oct 22 01:14:14at 
org.apache.flink.table.module.hive.HiveModuleTest.testNumberOfBuiltinFunctions(HiveModuleTest.java:54)
 {code}
Some hive3 tests failed due to this patch. 

> Flink sql Connector hive to support 3.1.3 
> --
>
> Key: FLINK-29478
> URL: https://issues.apache.org/jira/browse/FLINK-29478
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Currently, flink-connector-hive supports flink-sql-connector-hive-3.1.2 as the 
> highest version.
> h3. Hive 3.1.3 was released on 08 April 2022
> Proposal: 
> We should consider adding support for 3.1.3. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29387) IntervalJoinITCase.testIntervalJoinSideOutputRightLateData failed with AssertionError

2022-10-24 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17622965#comment-17622965
 ] 

Xingbo Huang commented on FLINK-29387:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42328&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=8069

> IntervalJoinITCase.testIntervalJoinSideOutputRightLateData failed with 
> AssertionError
> -
>
> Key: FLINK-29387
> URL: https://issues.apache.org/jira/browse/FLINK-29387
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.17.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-22T04:40:21.9296331Z Sep 22 04:40:21 [ERROR] 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.testIntervalJoinSideOutputRightLateData
>   Time elapsed: 2.46 s  <<< FAILURE!
> 2022-09-22T04:40:21.9297487Z Sep 22 04:40:21 java.lang.AssertionError: 
> expected:<[(key,2)]> but was:<[]>
> 2022-09-22T04:40:21.9298208Z Sep 22 04:40:21  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-09-22T04:40:21.9298927Z Sep 22 04:40:21  at 
> org.junit.Assert.failNotEquals(Assert.java:835)
> 2022-09-22T04:40:21.9299655Z Sep 22 04:40:21  at 
> org.junit.Assert.assertEquals(Assert.java:120)
> 2022-09-22T04:40:21.9300403Z Sep 22 04:40:21  at 
> org.junit.Assert.assertEquals(Assert.java:146)
> 2022-09-22T04:40:21.9301538Z Sep 22 04:40:21  at 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.expectInAnyOrder(IntervalJoinITCase.java:521)
> 2022-09-22T04:40:21.9302578Z Sep 22 04:40:21  at 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.testIntervalJoinSideOutputRightLateData(IntervalJoinITCase.java:280)
> 2022-09-22T04:40:21.9303641Z Sep 22 04:40:21  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-22T04:40:21.9304472Z Sep 22 04:40:21  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-22T04:40:21.9305371Z Sep 22 04:40:21  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-22T04:40:21.9306195Z Sep 22 04:40:21  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-22T04:40:21.9307011Z Sep 22 04:40:21  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-22T04:40:21.9308077Z Sep 22 04:40:21  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-22T04:40:21.9308968Z Sep 22 04:40:21  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-22T04:40:21.9309849Z Sep 22 04:40:21  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-22T04:40:21.9310704Z Sep 22 04:40:21  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-22T04:40:21.9311533Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-22T04:40:21.9312386Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-22T04:40:21.9313231Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-22T04:40:21.9314985Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-22T04:40:21.9315857Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-22T04:40:21.9316633Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-22T04:40:21.9317450Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-22T04:40:21.9318209Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-22T04:40:21.9318949Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-22T04:40:21.9319680Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-22T04:40:21.9320401Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-22T04:40:21.9321130Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-09-22T04:40:21.9321822Z Sep 22 04:40:21  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-09-22T04:40:21.9322498Z Sep 22 04:40:21  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-09-22T04:40:21.9323248Z Sep 22 04:40:21  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-09-22T04:40:21.9324080Z 

[jira] [Commented] (FLINK-24119) KafkaITCase.testTimestamps fails due to "Topic xxx already exist"

2022-10-24 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17622964#comment-17622964
 ] 

Xingbo Huang commented on FLINK-24119:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42326&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203

> KafkaITCase.testTimestamps fails due to "Topic xxx already exist"
> -
>
> Key: FLINK-24119
> URL: https://issues.apache.org/jira/browse/FLINK-24119
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0, 1.16.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.16.1
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23328&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7419
> {code}
> Sep 01 15:53:20 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 162.65 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Sep 01 15:53:20 [ERROR] testTimestamps  Time elapsed: 23.237 s  <<< FAILURE!
> Sep 01 15:53:20 java.lang.AssertionError: Create test topic : tstopic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'tstopic' already 
> exists.
> Sep 01 15:53:20   at org.junit.Assert.fail(Assert.java:89)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testTimestamps(KafkaITCase.java:191)
> Sep 01 15:53:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 01 15:53:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 01 15:53:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 01 15:53:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Sep 01 15:53:20   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Sep 01 15:53:20   at java.lang.Thread.run(Thread.java:748)
> {code}
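The recurring TopicExistsException suggests a topic left over from a previous (possibly aborted) run. One common hardening is to make test-topic creation idempotent: remove any stale topic of the same name before creating a fresh one. The sketch below uses an in-memory stand-in for the broker; against a real cluster this would go through Kafka's AdminClient (deleteTopics/createTopics), and this is not necessarily how Flink resolved the ticket:

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentTopics {
    // In-memory stand-in for the broker's topic list; purely illustrative.
    private final Set<String> topics = new HashSet<>();

    // Treat "already exists" as a leftover from a previous run: delete the
    // stale topic first, then create it so each test starts from a clean state.
    public void createTestTopic(String name) {
        topics.remove(name); // drop a stale topic from an earlier run, if any
        if (!topics.add(name)) {
            throw new IllegalStateException("Topic '" + name + "' already exists.");
        }
    }

    public boolean exists(String name) {
        return topics.contains(name);
    }

    public static void main(String[] args) {
        IdempotentTopics broker = new IdempotentTopics();
        broker.createTestTopic("tstopic");
        broker.createTestTopic("tstopic"); // a rerun no longer fails
        System.out.println(broker.exists("tstopic"));
    }
}
```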



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28766) UnalignedCheckpointStressITCase.runStressTest failed with NoSuchFileException

2022-10-20 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621470#comment-17621470
 ] 

Xingbo Huang commented on FLINK-28766:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42272&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7&l=10622

> UnalignedCheckpointStressITCase.runStressTest failed with NoSuchFileException
> -
>
> Key: FLINK-28766
> URL: https://issues.apache.org/jira/browse/FLINK-28766
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Assignee: Anton Kalashnikov
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.17.0
>
>
> {code:java}
> 2022-08-01T01:36:16.0563880Z Aug 01 01:36:16 [ERROR] 
> org.apache.flink.test.checkpointing.UnalignedCheckpointStressITCase.runStressTest
>   Time elapsed: 12.579 s  <<< ERROR!
> 2022-08-01T01:36:16.0565407Z Aug 01 01:36:16 java.io.UncheckedIOException: 
> java.nio.file.NoSuchFileException: 
> /tmp/junit1058240190382532303/f0f99754a53d2c4633fed75011da58dd/chk-7/61092e4a-5b9a-4f56-83f7-d9960c53ed3e
> 2022-08-01T01:36:16.0566296Z Aug 01 01:36:16  at 
> java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:88)
> 2022-08-01T01:36:16.0566972Z Aug 01 01:36:16  at 
> java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:104)
> 2022-08-01T01:36:16.0567600Z Aug 01 01:36:16  at 
> java.util.Iterator.forEachRemaining(Iterator.java:115)
> 2022-08-01T01:36:16.0568290Z Aug 01 01:36:16  at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> 2022-08-01T01:36:16.0569172Z Aug 01 01:36:16  at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> 2022-08-01T01:36:16.0569877Z Aug 01 01:36:16  at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> 2022-08-01T01:36:16.0570554Z Aug 01 01:36:16  at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> 2022-08-01T01:36:16.0571371Z Aug 01 01:36:16  at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> 2022-08-01T01:36:16.0572417Z Aug 01 01:36:16  at 
> java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:546)
> 2022-08-01T01:36:16.0573618Z Aug 01 01:36:16  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointStressITCase.discoverRetainedCheckpoint(UnalignedCheckpointStressITCase.java:289)
> 2022-08-01T01:36:16.0575187Z Aug 01 01:36:16  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointStressITCase.runAndTakeExternalCheckpoint(UnalignedCheckpointStressITCase.java:262)
> 2022-08-01T01:36:16.0576540Z Aug 01 01:36:16  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointStressITCase.runStressTest(UnalignedCheckpointStressITCase.java:158)
> 2022-08-01T01:36:16.0577684Z Aug 01 01:36:16  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-08-01T01:36:16.0578546Z Aug 01 01:36:16  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-08-01T01:36:16.0579374Z Aug 01 01:36:16  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-08-01T01:36:16.0580298Z Aug 01 01:36:16  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-08-01T01:36:16.0581243Z Aug 01 01:36:16  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-08-01T01:36:16.0582029Z Aug 01 01:36:16  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-08-01T01:36:16.0582766Z Aug 01 01:36:16  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-08-01T01:36:16.0583488Z Aug 01 01:36:16  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-08-01T01:36:16.0584203Z Aug 01 01:36:16  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-08-01T01:36:16.0585087Z Aug 01 01:36:16  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-08-01T01:36:16.0585778Z Aug 01 01:36:16  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-08-01T01:36:16.0586482Z Aug 01 01:36:16  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-08-01T01:36:16.0587155Z Aug 01 01:36:16  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-08-01T01:36:16.0587809Z Aug 01 01:36:16  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-08-01T01:36:16.0588434Z Aug 01 01:36:16  at 
> 
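The UncheckedIOException above wraps a NoSuchFileException raised while Files.walk iterates a checkpoint directory: a file enumerated by the walk was deleted concurrently (e.g. by checkpoint cleanup) before the iterator reached it. A typical mitigation is to retry the walk when exactly this race is observed. This is an illustrative sketch of that idea, not Flink's actual fix:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ResilientWalk {
    // Files.walk throws UncheckedIOException wrapping NoSuchFileException
    // when a file vanishes mid-iteration; retry the walk in that case.
    static List<Path> listFiles(Path root) throws IOException {
        while (true) {
            try (Stream<Path> stream = Files.walk(root)) {
                return stream.collect(Collectors.toList());
            } catch (UncheckedIOException e) {
                if (!(e.getCause() instanceof NoSuchFileException)) {
                    throw e; // unrelated I/O failure: propagate
                }
                // a file was deleted concurrently; walk again
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("chk");
        Files.createFile(dir.resolve("61092e4a")); // hypothetical checkpoint file
        // The result contains the root directory plus the one file.
        System.out.println(listFiles(dir).size());
    }
}
```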

[jira] [Comment Edited] (FLINK-24119) KafkaITCase.testTimestamps fails due to "Topic xxx already exist"

2022-10-20 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621469#comment-17621469
 ] 

Xingbo Huang edited comment on FLINK-24119 at 10/21/22 3:39 AM:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42266&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]
{code:java}
2022-10-20T08:43:50.6008823Z Oct 20 08:43:50 [ERROR] 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleExactlyOnceITCase.testAssignedToPartitionFailureRecoveryEventTime
  Time elapsed: 92.728 s  <<< FAILURE!
2022-10-20T08:43:50.6010487Z Oct 20 08:43:50 java.lang.AssertionError: Create 
test topic : partition_failure_recovery_EventTime failed, 
org.apache.kafka.common.errors.TopicExistsException: Topic 
'partition_failure_recovery_EventTime' already exists.
2022-10-20T08:43:50.6011552Z Oct 20 08:43:50at 
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:207)
2022-10-20T08:43:50.6012448Z Oct 20 08:43:50at 
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:97)
2022-10-20T08:43:50.6013274Z Oct 20 08:43:50at 
org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:217)
2022-10-20T08:43:50.6014297Z Oct 20 08:43:50at 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleExactlyOnceITCase.testAssignedToPartitionFailureRecovery(KafkaShuffleExactlyOnceITCase.java:158)
2022-10-20T08:43:50.6015529Z Oct 20 08:43:50at 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleExactlyOnceITCase.testAssignedToPartitionFailureRecoveryEventTime(KafkaShuffleExactlyOnceITCase.java:101)
 {code}


was (Author: hxb):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42266&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203

> KafkaITCase.testTimestamps fails due to "Topic xxx already exist"
> -
>
> Key: FLINK-24119
> URL: https://issues.apache.org/jira/browse/FLINK-24119
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0, 1.16.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23328&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7419
> {code}
> Sep 01 15:53:20 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 162.65 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Sep 01 15:53:20 [ERROR] testTimestamps  Time elapsed: 23.237 s  <<< FAILURE!
> Sep 01 15:53:20 java.lang.AssertionError: Create test topic : tstopic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'tstopic' already 
> exists.
> Sep 01 15:53:20   at org.junit.Assert.fail(Assert.java:89)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testTimestamps(KafkaITCase.java:191)
> Sep 01 15:53:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 01 15:53:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 01 15:53:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 01 15:53:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Sep 01 15:53:20   at 
> 

[jira] [Updated] (FLINK-24119) KafkaITCase.testTimestamps fails due to "Topic xxx already exist"

2022-10-20 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-24119:
-
Fix Version/s: 1.16.1
   (was: 1.16.0)

> KafkaITCase.testTimestamps fails due to "Topic xxx already exist"
> -
>
> Key: FLINK-24119
> URL: https://issues.apache.org/jira/browse/FLINK-24119
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0, 1.16.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.16.1
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23328&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7419
> {code}
> Sep 01 15:53:20 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 162.65 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Sep 01 15:53:20 [ERROR] testTimestamps  Time elapsed: 23.237 s  <<< FAILURE!
> Sep 01 15:53:20 java.lang.AssertionError: Create test topic : tstopic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'tstopic' already 
> exists.
> Sep 01 15:53:20   at org.junit.Assert.fail(Assert.java:89)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testTimestamps(KafkaITCase.java:191)
> Sep 01 15:53:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 01 15:53:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 01 15:53:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 01 15:53:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Sep 01 15:53:20   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Sep 01 15:53:20   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-24119) KafkaITCase.testTimestamps fails due to "Topic xxx already exist"

2022-10-20 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621469#comment-17621469
 ] 

Xingbo Huang commented on FLINK-24119:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42266&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203

> KafkaITCase.testTimestamps fails due to "Topic xxx already exist"
> -
>
> Key: FLINK-24119
> URL: https://issues.apache.org/jira/browse/FLINK-24119
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0, 1.16.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23328&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7419
> {code}
> Sep 01 15:53:20 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 162.65 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Sep 01 15:53:20 [ERROR] testTimestamps  Time elapsed: 23.237 s  <<< FAILURE!
> Sep 01 15:53:20 java.lang.AssertionError: Create test topic : tstopic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'tstopic' already 
> exists.
> Sep 01 15:53:20   at org.junit.Assert.fail(Assert.java:89)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testTimestamps(KafkaITCase.java:191)
> Sep 01 15:53:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 01 15:53:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 01 15:53:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 01 15:53:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Sep 01 15:53:20   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Sep 01 15:53:20   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26402) MinioTestContainerTest.testS3EndpointNeedsToBeSpecifiedBeforeInitializingFileSyste failed due to Container startup failed

2022-10-20 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17620783#comment-17620783
 ] 

Xingbo Huang commented on FLINK-26402:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42249&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5&t=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9&l=15998

> MinioTestContainerTest.testS3EndpointNeedsToBeSpecifiedBeforeInitializingFileSyste
>  failed due to Container startup failed
> -
>
> Key: FLINK-26402
> URL: https://issues.apache.org/jira/browse/FLINK-26402
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
>
> {code:java}
> 2022-02-24T02:49:59.3646340Z Feb 24 02:49:59 [ERROR] Tests run: 6, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 49.457 s <<< FAILURE! - in 
> org.apache.flink.fs.s3.common.MinioTestContainerTest
> 2022-02-24T02:49:59.3648027Z Feb 24 02:49:59 [ERROR] 
> org.apache.flink.fs.s3.common.MinioTestContainerTest.testS3EndpointNeedsToBeSpecifiedBeforeInitializingFileSyste
>   Time elapsed: 5.751 s  <<< ERROR!
> 2022-02-24T02:49:59.3648805Z Feb 24 02:49:59 
> org.testcontainers.containers.ContainerLaunchException: Container startup 
> failed
> 2022-02-24T02:49:59.3651640Z Feb 24 02:49:59  at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:336)
> 2022-02-24T02:49:59.3652820Z Feb 24 02:49:59  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:317)
> 2022-02-24T02:49:59.3653619Z Feb 24 02:49:59  at 
> org.apache.flink.core.testutils.TestContainerExtension.instantiateTestContainer(TestContainerExtension.java:59)
> 2022-02-24T02:49:59.3654319Z Feb 24 02:49:59  at 
> org.apache.flink.core.testutils.TestContainerExtension.before(TestContainerExtension.java:70)
> 2022-02-24T02:49:59.3655057Z Feb 24 02:49:59  at 
> org.apache.flink.core.testutils.EachCallbackWrapper.beforeEach(EachCallbackWrapper.java:45)
> 2022-02-24T02:49:59.3656153Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachCallbacks$2(TestMethodTestDescriptor.java:163)
> 2022-02-24T02:49:59.3657088Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:199)
> 2022-02-24T02:49:59.3657905Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-02-24T02:49:59.3659016Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:199)
> 2022-02-24T02:49:59.3660004Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeEachCallbacks(TestMethodTestDescriptor.java:162)
> 2022-02-24T02:49:59.3660997Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:129)
> 2022-02-24T02:49:59.3662153Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
> 2022-02-24T02:49:59.3663189Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
> 2022-02-24T02:49:59.3664211Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-02-24T02:49:59.3664971Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> 2022-02-24T02:49:59.3665623Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> 2022-02-24T02:49:59.3666433Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
> 2022-02-24T02:49:59.3667322Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-02-24T02:49:59.3668024Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> 2022-02-24T02:49:59.3669276Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> 2022-02-24T02:49:59.3669881Z Feb 24 02:49:59  at 
> java.util.ArrayList.forEach(ArrayList.java:1259)
> 2022-02-24T02:49:59.3670715Z Feb 24 02:49:59  at 
> 

[jira] [Updated] (FLINK-29468) Update jackson-bom to 2.13.4

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29468:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Update jackson-bom to 2.13.4
> 
>
> Key: FLINK-29468
> URL: https://issues.apache.org/jira/browse/FLINK-29468
> Project: Flink
>  Issue Type: Technical Debt
>Affects Versions: 1.16.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29638) Update jackson bom because of CVE-2022-42003

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29638:
-
Fix Version/s: (was: 1.16.1)
   1.16.0

> Update jackson bom because of CVE-2022-42003
> 
>
> Key: FLINK-29638
> URL: https://issues.apache.org/jira/browse/FLINK-29638
> Project: Flink
>  Issue Type: Technical Debt
>Affects Versions: 1.16.0, 1.17.0, 1.15.2
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> There is CVE-2022-42003, fixed in 2.13.4.1 and 2.14.0-rc1:
> https://nvd.nist.gov/vuln/detail/CVE-2022-42003
> P.S. It seems there will not be a 2.14.0 release until the end of October, 
> according to 
> https://github.com/FasterXML/jackson-databind/issues/3590#issuecomment-1270363915



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29613) Wrong message size assertion in Pulsar's batch message

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29613:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Wrong message size assertion in Pulsar's batch message
> --
>
> Key: FLINK-29613
> URL: https://issues.apache.org/jira/browse/FLINK-29613
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0, 1.17.0, 1.15.2
>Reporter: qiaomengnan
>Assignee: Yufan Sheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> java.lang.RuntimeException: One or more fetchers have encountered exception
> at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:225)
> at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
> at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:130)
> at 
> org.apache.flink.connector.pulsar.source.reader.source.PulsarOrderedSourceReader.pollNext(PulsarOrderedSourceReader.java:109)
> at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:385)
> at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
> at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753)
> at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
> at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
> at java.lang.Thread.run(Thread.java:750)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 1 received 
> unexpected exception while polling the records
> at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:150)
> at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ... 1 more
> Suppressed: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
> ... 7 more
> Caused by: java.lang.IllegalArgumentException: We only support normal message 
> id currently.
> at org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:138)
> at 
> org.apache.flink.connector.pulsar.source.enumerator.cursor.MessageIdUtils.unwrapMessageId(MessageIdUtils.java:61)
> at 
> org.apache.flink.connector.pulsar.source.enumerator.cursor.MessageIdUtils.nextMessageId(MessageIdUtils.java:43)
> at 
> org.apache.flink.connector.pulsar.source.reader.split.PulsarOrderedPartitionSplitReader.beforeCreatingConsumer(PulsarOrderedPartitionSplitReader.java:94)
> at 
> org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase.handleSplitsChanges(PulsarPartitionSplitReaderBase.java:160)
> at 
> org.apache.flink.connector.pulsar.source.reader.split.PulsarOrderedPartitionSplitReader.handleSplitsChanges(PulsarOrderedPartitionSplitReader.java:52)
> at 
> org.apache.flink.connector.base.source.reader.fetcher.AddSplitsTask.run(AddSplitsTask.java:51)
> at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
> ... 6 more
> Caused by: java.lang.IllegalArgumentException: We only support normal message 
> id currently.
> at org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:138)
> at 
> org.apache.flink.connector.pulsar.source.enumerator.cursor.MessageIdUtils.unwrapMessageId(MessageIdUtils.java:61)
> at 
> org.apache.flink.connector.pulsar.source.enumerator.cursor.MessageIdUtils.nextMessageId(MessageIdUtils.java:43)
> at 
> org.apache.flink.connector.pulsar.source.reader.split.PulsarOrderedPartitionSplitReader.beforeCreatingConsumer(PulsarOrderedPartitionSplitReader.java:94)
> at 
> 

[jira] [Updated] (FLINK-29512) Align SubtaskCommittableManager checkpointId with CheckpointCommittableManagerImpl checkpointId during recovery

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29512:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Align SubtaskCommittableManager checkpointId with 
> CheckpointCommittableManagerImpl checkpointId during recovery
> ---
>
> Key: FLINK-29512
> URL: https://issues.apache.org/jira/browse/FLINK-29512
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.1, 1.16.0, 1.17.0
>Reporter: Fabian Paul
>Assignee: Fabian Paul
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> Similar to the issue described in 
> https://issues.apache.org/jira/browse/FLINK-29509, during the recovery of 
> committables the subtaskCommittables checkpointId is always set to 1 
> [https://github.com/apache/flink/blob/9152af41c2d401e5eacddee1bb10d1b6bea6c61a/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CommittableCollectorSerializer.java#L193]
>  while the holding CheckpointCommittableManager is initialized with the 
> checkpointId that was written into state 
> [https://github.com/apache/flink/blob/9152af41c2d401e5eacddee1bb10d1b6bea6c61a/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CommittableCollectorSerializer.java#L155].
>  
> As a result, during recovery the post-commit topology receives a 
> committable summary carrying the recovered checkpoint id together with 
> multiple `CommittableWithLineage`s carrying the reset checkpointId, leaving 
> orphaned `CommittableWithLineage`s without a matching `CommittableSummary` 
> and failing the job.
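
The mismatch can be shown with a minimal, Flink-independent sketch. The types below are simplified stand-ins for the committable summary and `CommittableWithLineage`, not the real Flink classes: a summary recovered with the checkpointId from state cannot be matched against committables whose checkpointId was reset to 1.

```java
import java.util.List;
import java.util.stream.Collectors;

// Simplified stand-ins for the committable messages; NOT the real Flink types.
class CommittableRecoverySketch {
    record Summary(long checkpointId, int count) {}
    record Committable(long checkpointId, String payload) {}

    // A committer can only match committables whose checkpointId equals the
    // summary's checkpointId; everything else ends up "orphaned".
    static List<Committable> orphaned(Summary summary, List<Committable> committables) {
        return committables.stream()
                .filter(c -> c.checkpointId() != summary.checkpointId())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The recovered summary keeps the checkpointId written into state (42),
        // but deserialization reset each committable's checkpointId to 1.
        Summary recoveredSummary = new Summary(42, 2);
        List<Committable> recovered =
                List.of(new Committable(1, "a"), new Committable(1, "b"));
        System.out.println(orphaned(recoveredSummary, recovered).size()); // prints 2
    }
}
```

Aligning the two ids during deserialization makes `orphaned` return an empty list, which is the behavior the fix restores.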





[jira] [Updated] (FLINK-29509) Set correct subtaskId during recovery of committables

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29509:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Set correct subtaskId during recovery of committables
> -
>
> Key: FLINK-29509
> URL: https://issues.apache.org/jira/browse/FLINK-29509
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.16.0, 1.17.0, 1.15.2
>Reporter: Fabian Paul
>Assignee: Krzysztof Chmielewski
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> When we recover the `CheckpointCommittableManager`, we ignore the subtaskId 
> it is recovered on. 
> [https://github.com/apache/flink/blob/d191bda7e63a2c12416cba56090e5cd75426079b/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CheckpointCommittableManagerImpl.java#L58]
> This becomes a problem when a sink uses a post-commit topology because 
> multiple committer operators might forward committable summaries coming from 
> the same subtaskId.
>  
> It should be possible to use the subtaskId already present in the 
> `CommittableCollector` when creating the `CheckpointCommittableManager`s.
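
The proposed direction can be sketched as follows; the class and method names here are illustrative stand-ins, not Flink's actual API. The recovering collector stamps new checkpoint managers with its own subtaskId instead of the one read from state:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-ins; names do not match Flink's actual classes.
class SubtaskIdRecoverySketch {
    static class CheckpointManager {
        final long checkpointId;
        final int subtaskId;
        CheckpointManager(long checkpointId, int subtaskId) {
            this.checkpointId = checkpointId;
            this.subtaskId = subtaskId;
        }
    }

    static class CommittableCollector {
        final int subtaskId; // the subtask this collector actually runs on
        final Map<Long, CheckpointManager> managers = new HashMap<>();
        CommittableCollector(int subtaskId) { this.subtaskId = subtaskId; }

        // Sketched fix: stamp recovered managers with the collector's own
        // subtaskId and ignore the subtaskId read from state.
        CheckpointManager recover(long checkpointId, int subtaskIdFromState) {
            CheckpointManager m = new CheckpointManager(checkpointId, this.subtaskId);
            managers.put(checkpointId, m);
            return m;
        }
    }

    public static void main(String[] args) {
        CommittableCollector collector = new CommittableCollector(3);
        CheckpointManager m = collector.recover(7, 0); // state claims subtask 0
        System.out.println(m.subtaskId); // prints 3
    }
}
```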





[jira] [Updated] (FLINK-22243) Reactive Mode parallelism changes are not shown in the job graph visualization in the UI

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-22243:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Reactive Mode parallelism changes are not shown in the job graph 
> visualization in the UI
> 
>
> Key: FLINK-22243
> URL: https://issues.apache.org/jira/browse/FLINK-22243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Robert Metzger
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
> Attachments: screenshot-1.png
>
>
> As reported in FLINK-22134, the parallelism in the visual job graph at the top 
> of the detail page is not in sync with the parallelism listed in the task 
> list below when Reactive Mode causes a parallelism change.





[jira] [Updated] (FLINK-29504) Jar upload spec should define a schema

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29504:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Jar upload spec should define a schema
> --
>
> Key: FLINK-29504
> URL: https://issues.apache.org/jira/browse/FLINK-29504
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.15.2
>Reporter: Tiger (Apache) Wang
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: openapi, pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> Install nodejs and run
> {{$ npx --yes @openapitools/openapi-generator-cli generate -i 
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/generated/rest_v1_dispatcher.yml]
>  -g typescript-axios -o .}}
>  
> Then it outputs the error:
> {code}
> Caused by: java.lang.RuntimeException: Request body cannot be null. 
> Possible cause: missing schema in body parameter (OAS v2): class RequestBody {
>     description: null
>     content: class Content {
>         {application/x-java-archive=class MediaType {
>             schema: null
>             examples: null
>             example: null
>             encoding: null
>         }}
>     }
>     required: true
> }
> {code}
> This is because in the YAML the request body declares no schema:
> {code}
> requestBody:
>   content:
>     application/x-java-archive: {}
> {code}
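
One way to satisfy the generator is to declare a schema for the upload body. The fragment below is only an illustration of that pattern (a binary-string schema is the usual OpenAPI idiom for file uploads), not necessarily the exact fix that was merged:

```yaml
requestBody:
  content:
    application/x-java-archive:
      # A schema is required so code generators can type the request body;
      # type: string + format: binary is the OpenAPI idiom for raw file data.
      schema:
        type: string
        format: binary
  required: true
```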





[jira] [Updated] (FLINK-29408) HiveCatalogITCase failed with NPE

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29408:
-
Fix Version/s: 1.16.0
   (was: 1.17.0)
   (was: 1.16.1)

> HiveCatalogITCase failed with NPE
> -
>
> Key: FLINK-29408
> URL: https://issues.apache.org/jira/browse/FLINK-29408
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: luoyuxia
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 2022-09-25T03:41:07.4212129Z Sep 25 03:41:07 [ERROR] 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf  Time 
> elapsed: 0.098 s  <<< ERROR!
> 2022-09-25T03:41:07.4212662Z Sep 25 03:41:07 java.lang.NullPointerException
> 2022-09-25T03:41:07.4213189Z Sep 25 03:41:07  at 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf(HiveCatalogUdfITCase.java:109)
> 2022-09-25T03:41:07.4213753Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-25T03:41:07.4224643Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-25T03:41:07.4225311Z Sep 25 03:41:07  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-25T03:41:07.4225879Z Sep 25 03:41:07  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-25T03:41:07.4226405Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-25T03:41:07.4227201Z Sep 25 03:41:07  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-25T03:41:07.4227807Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-25T03:41:07.4228394Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-25T03:41:07.4228966Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4229514Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4230066Z Sep 25 03:41:07  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-09-25T03:41:07.4230587Z Sep 25 03:41:07  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-09-25T03:41:07.4231258Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4231823Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-25T03:41:07.4232384Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-25T03:41:07.4232930Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-25T03:41:07.4233511Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-25T03:41:07.4234039Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-25T03:41:07.4234546Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-25T03:41:07.4235057Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-25T03:41:07.4235573Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-25T03:41:07.4236087Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-25T03:41:07.4236635Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-25T03:41:07.4237314Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4238211Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4238775Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4239277Z Sep 25 03:41:07  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-09-25T03:41:07.4239769Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4240265Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-09-25T03:41:07.4240731Z Sep 25 03:41:07  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-09-25T03:41:07.4241196Z Sep 25 03:41:07  

[jira] [Updated] (FLINK-29503) Add backpressureLevel field without hyphens

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29503:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Add backpressureLevel field without hyphens
> ---
>
> Key: FLINK-29503
> URL: https://issues.apache.org/jira/browse/FLINK-29503
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.15.2
>Reporter: Tiger (Apache) Wang
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: openapi, pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> Install nodejs and run
> {{$ npx --yes --package openapi-typescript-codegen openapi --input 
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/generated/rest_v1_dispatcher.yml]
>  --output .}}
> {{$ npx --package typescript tsc }}
> The only thing it complains about is:
> {code}
> src/models/JobVertexBackPressureInfo.ts:21:17 - error TS1003: Identifier expected.
> 21     export enum 'backpressure-level' {
> {code}
> This is because TypeScript enum names must be valid identifiers and cannot 
> contain a hyphen.
>  





[jira] [Updated] (FLINK-29395) [Kinesis][EFO] Issue using EFO consumer at timestamp with empty shard

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29395:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> [Kinesis][EFO] Issue using EFO consumer at timestamp with empty shard
> -
>
> Key: FLINK-29395
> URL: https://issues.apache.org/jira/browse/FLINK-29395
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.12.7, 1.13.6, 1.14.5, 1.15.2
>Reporter: Hong Liang Teoh
>Assignee: Hong Liang Teoh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> *Background*
> The consumer fails when an EFO record publisher uses a timestamp sentinel 
> starting position and the first record batch is non-empty but deaggregates 
> to an empty batch. This can happen if the user explicitly specifies the hash 
> key in the KPL and does not ensure that the explicitHashKey of every record 
> in the aggregated batch is the same.
> When resharding occurs, the aggregated record batch can contain records that 
> fall outside the shard's hash key range. Such records are dropped during 
> deaggregation, which can produce this situation: the record batch is not 
> empty, but the deaggregated record batch is.
> The symptom seen is similar to the issue seen in 
> https://issues.apache.org/jira/browse/FLINK-20088.
> See 
> [here|https://github.com/awslabs/kinesis-aggregation/blob/master/potential_data_loss.md]
>  and [here|https://github.com/awslabs/kinesis-aggregation/issues/11] for a 
> more detailed explanation.
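
The dropping behaviour can be modelled with a small self-contained sketch; the types below are simplified stand-ins, not the kinesis-aggregation API. Filtering an aggregated batch against the shard's hash-key range can leave an empty deaggregated batch even though the input batch was non-empty:

```java
import java.math.BigInteger;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative model of KPL deaggregation; simplified stand-in types.
class DeaggregationSketch {
    record ShardRange(BigInteger start, BigInteger end) {
        boolean contains(BigInteger key) {
            return key.compareTo(start) >= 0 && key.compareTo(end) <= 0;
        }
    }
    record UserRecord(BigInteger explicitHashKey, String data) {}

    // Records whose explicitHashKey falls outside the shard's range are
    // dropped, mirroring the behaviour described above.
    static List<UserRecord> deaggregate(ShardRange shard, List<UserRecord> batch) {
        return batch.stream()
                .filter(r -> shard.contains(r.explicitHashKey()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        ShardRange shard = new ShardRange(
                new BigInteger("272225893536750770770699685945414569164"),
                new BigInteger("340282366920938463463374607431768211455"));
        // Both hash keys fall below the shard's StartingHashKey, so a
        // non-empty aggregated batch deaggregates to an empty batch.
        List<UserRecord> aggregated = List.of(
                new UserRecord(new BigInteger("272225893536750770770699685945414569162"), "RECORD_1"),
                new UserRecord(new BigInteger("272225893536750770770699685945414569163"), "RECORD_2"));
        System.out.println(deaggregate(shard, aggregated).isEmpty()); // prints true
    }
}
```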
> *Replicate*
> Get shard information
> {code:java}
> aws kinesis describe-stream --stream-name 
> {
>     "StreamDescription": {
>         "Shards": [
>             ...
>             {
>                 "ShardId": "shardId-0037",
>                 "ParentShardId": "shardId-0027",
>                 "HashKeyRange": {
>                     "StartingHashKey": 
> "272225893536750770770699685945414569164",
>                     "EndingHashKey": "340282366920938463463374607431768211455"
>                 }
> ...
>             },
>             {
>                 "ShardId": "shardId-0038",
>                 "ParentShardId": "shardId-0034",
>                 "AdjacentParentShardId": "shardId-0036",
>                 "HashKeyRange": {
>                     "StartingHashKey": 
> "204169420152563078078024764459060926873",
>                     "EndingHashKey": "272225893536750770770699685945414569163"
>                 }
> ...
>             }
>         ]
> ...
>     }
> }{code}
> Create an aggregate record with two records, each with explicit hash keys 
> belonging to different shards
> {code:java}
> RecordAggregator aggregator = new RecordAggregator();
> String record1 = "RECORD_1";
> String record2 = "RECORD_2";
> aggregator.addUserRecord("pk", "272225893536750770770699685945414569162", 
> record1.getBytes());
> aggregator.addUserRecord("pk", "272225893536750770770699685945414569165", 
> record2.getBytes());
> AmazonKinesis kinesisClient = AmazonKinesisClient.builder()
>.build();
> kinesisClient.putRecord(aggregator.clearAndGet().toPutRecordRequest("EFOStreamTest"));
>  {code}
> Consume from given stream whilst specifying a Timestamp where the only record 
> retrieved is the record above.
> *Error*
> {code:java}
> java.lang.IllegalArgumentException: Unexpected sentinel type: 
> AT_TIMESTAMP_SEQUENCE_NUM
>   at 
> org.apache.flink.streaming.connectors.kinesis.model.StartingPosition.fromSentinelSequenceNumber(StartingPosition.java:115)
>   at 
> org.apache.flink.streaming.connectors.kinesis.model.StartingPosition.fromSequenceNumber(StartingPosition.java:91)
>   at 
> org.apache.flink.streaming.connectors.kinesis.model.StartingPosition.continueFromSequenceNumber(StartingPosition.java:72)
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutRecordPublisher.lambda$run$0(FanOutRecordPublisher.java:120)
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriber.consumeAllRecordsFromKinesisShard(FanOutShardSubscriber.java:356)
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriber.subscribeToShardAndConsumeRecords(FanOutShardSubscriber.java:188)
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutRecordPublisher.runWithBackoff(FanOutRecordPublisher.java:154)
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutRecordPublisher.run(FanOutRecordPublisher.java:123)
>   at 
> 

[jira] [Updated] (FLINK-29562) JM/SQl gateway OpenAPI specs should have different titles

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29562:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> JM/SQl gateway OpenAPI specs should have different titles
> -
>
> Key: FLINK-29562
> URL: https://issues.apache.org/jira/browse/FLINK-29562
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Runtime / REST
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>






[jira] [Updated] (FLINK-29483) flink python udf arrow in thread model bug

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29483:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> flink python udf arrow in thread model bug
> --
>
> Key: FLINK-29483
> URL: https://issues.apache.org/jira/browse/FLINK-29483
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.15.2
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
> Attachments: image-2022-09-30-17-03-05-005.png
>
>
> !image-2022-09-30-17-03-05-005.png!





[jira] [Updated] (FLINK-29532) Update Pulsar dependency to 2.10.1

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29532:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Update Pulsar dependency to 2.10.1
> --
>
> Key: FLINK-29532
> URL: https://issues.apache.org/jira/browse/FLINK-29532
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Pulsar
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> Update the Pulsar dependency to 2.10.1 to benefit from the fixes highlighted at 
> https://github.com/apache/pulsar/releases/tag/v2.10.1





[jira] [Updated] (FLINK-29500) InitializeOnMaster uses wrong parallelism with AdaptiveScheduler

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29500:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> InitializeOnMaster uses wrong parallelism with AdaptiveScheduler
> 
>
> Key: FLINK-29500
> URL: https://issues.apache.org/jira/browse/FLINK-29500
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Runtime / Coordination
>Affects Versions: 1.16.0, 1.15.2, 1.14.6
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> {{InputOutputFormatVertex}} uses {{JobVertex#getParallelism}} to invoke 
> {{InitializeOnMaster#initializeGlobal}}. With the Adaptive Scheduler, however, 
> this parallelism might not be the one actually used to execute the vertex, 
> because the Adaptive Scheduler provides the execution parallelism via the 
> {{VertexParallelismStore}}.
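
A hedged sketch of the intended lookup order follows; the names are illustrative stand-ins, not Flink's actual API. The idea is to prefer the store's decided parallelism and fall back to `JobVertex#getParallelism` only when no decision exists:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.OptionalInt;

// Simplified stand-ins: with the adaptive scheduler the execution parallelism
// comes from a vertex-parallelism store, not from JobVertex#getParallelism.
class ParallelismSketch {
    static class VertexParallelismStore {
        private final Map<String, Integer> decided = new HashMap<>();
        void decide(String vertexId, int parallelism) { decided.put(vertexId, parallelism); }
        OptionalInt parallelismOf(String vertexId) {
            Integer p = decided.get(vertexId);
            return p == null ? OptionalInt.empty() : OptionalInt.of(p);
        }
    }

    // initializeGlobal should be driven by the store's decision when present.
    static int effectiveParallelism(String vertexId, int jobVertexParallelism,
                                    VertexParallelismStore store) {
        return store.parallelismOf(vertexId).orElse(jobVertexParallelism);
    }

    public static void main(String[] args) {
        VertexParallelismStore store = new VertexParallelismStore();
        store.decide("sink", 2); // the adaptive scheduler scaled the sink down
        System.out.println(effectiveParallelism("sink", 8, store)); // prints 2
    }
}
```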





[jira] [Updated] (FLINK-26469) Adaptive job shows error in WebUI when not enough resource are available

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-26469:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Adaptive job shows error in WebUI when not enough resource are available
> 
>
> Key: FLINK-26469
> URL: https://issues.apache.org/jira/browse/FLINK-26469
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Niklas Semmler
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> When no resources are available and the job is in the CREATED state, the job 
> page shows the error: "Job failed during initialization of JobManager". 





[jira] [Updated] (FLINK-29476) Kinesis Connector retry mechanism not applied to EOFException

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29476:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Kinesis Connector retry mechanism not applied to EOFException
> -
>
> Key: FLINK-29476
> URL: https://issues.apache.org/jira/browse/FLINK-29476
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.2
>Reporter: Alexander Fedulov
>Assignee: Alexander Fedulov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
> Attachments: kinesis-exception.log
>
>
> The current retry mechanism in the Kinesis connector only considers 
> _SocketTimeoutException_ recoverable: 
> [KinesisProxy.java#L422|https://github.com/apache/flink/blob/release-1.16.0-rc1/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/KinesisProxy.java#L422].
> However, we observed that communication can also fail with an EOFException: 
> [^kinesis-exception.log]
> This exception should also be considered recoverable and retried.





[jira] [Updated] (FLINK-29552) Fix documentation usage examples for DAYOFYEAR, DAYOFMONTH, and DAYOFWEEK functions

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29552:
-
Fix Version/s: 1.16.0
   (was: 1.16.1)

> Fix documentation usage examples for DAYOFYEAR, DAYOFMONTH, and DAYOFWEEK 
> functions
> ---
>
> Key: FLINK-29552
> URL: https://issues.apache.org/jira/browse/FLINK-29552
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.15.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
> Attachments: image-2022-10-09-11-21-56-033.png
>
>
> !image-2022-10-09-11-21-56-033.png!





[jira] [Commented] (FLINK-29622) KerberosDelegationTokenManager fails to load DelegationTokenProvider due to NoClassDefFoundError in various ITCases

2022-10-19 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17620303#comment-17620303
 ] 

Xingbo Huang commented on FLINK-29622:
--

[~mbalassi] Thanks a lot for the information. After this issue and 
https://issues.apache.org/jira/browse/FLINK-29567 are fixed, I will start to 
prepare rc2.

> KerberosDelegationTokenManager fails to load DelegationTokenProvider due to 
> NoClassDefFoundError in various ITCases
> ---
>
> Key: FLINK-29622
> URL: https://issues.apache.org/jira/browse/FLINK-29622
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Matthias Pohl
>Assignee: Gabor Somogyi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> There are multiple ITCases (e.g. {{EventTimeWindowCheckpointingITCase}}) that 
> print an error when trying to load the {{HadoopFSDelegationTokenProvider}} 
> which is on the classpath through {{flink-runtime}} but the corresponding 
> hadoop dependency seems to be missing:
> {code}
> 186348 02:25:25,492 [main] INFO  
> org.apache.flink.runtime.security.token.KerberosDelegationTokenManager [] - 
> Loading delegation token providers
>  186349 02:25:25,493 [main] ERROR 
> org.apache.flink.runtime.security.token.KerberosDelegationTokenManager [] - 
> Failed to initialize delegation token provider hadoopfs
>  186350 java.lang.NoClassDefFoundError: 
> org/apache/hadoop/hdfs/HdfsConfiguration
>  186351 at 
> org.apache.flink.runtime.security.token.HadoopFSDelegationTokenProvider.init(HadoopFSDelegationTokenProvider.java:68)
>  ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>  186352 at 
> org.apache.flink.runtime.security.token.KerberosDelegationTokenManager.loadProviders(KerberosDelegationTokenManager.java:124)
>  ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>  186353 at 
> org.apache.flink.runtime.security.token.KerberosDelegationTokenManager.(KerberosDelegationTokenManager.java:109)
>  ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>  186354 at 
> org.apache.flink.runtime.security.token.KerberosDelegationTokenManager.(KerberosDelegationTokenManager.java:91)
>  ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>  186355 at 
> org.apache.flink.runtime.security.token.KerberosDelegationTokenManagerFactory.create(KerberosDelegationTokenManagerFactory.java:47)
>  ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>  186356 at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:431) 
> ~[flink-runtime-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>  186357 at 
> org.apache.flink.runtime.testutils.MiniClusterResource.startMiniCluster(MiniClusterResource.java:234)
>  ~[flink-runtime-1.16-SNAPSHOT-tests.jar:1.16-SNAPSHOT]
>  186358 at 
> org.apache.flink.runtime.testutils.MiniClusterResource.before(MiniClusterResource.java:109)
>  ~[flink-runtime-1.16-SNAPSHOT-tests.jar:1.16-SNAPSHOT]
>  186359 at 
> org.apache.flink.test.util.MiniClusterWithClientResource.before(MiniClusterWithClientResource.java:64)
>  ~[flink-test-utils-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>  186360 at 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.setupTestCluster(EventTimeWindowCheckpointingITCase.java:253)
>  ~[test-classes/:?]
>  186361 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) ~[?:1.8.0_292]
> [...]
> {code}
> This error may mislead or confuse people investigating the logs. The error is 
> actually expected, since these tests do not necessarily require Kerberos 
> delegation tokens.
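
One defensive pattern for this situation is to treat a `NoClassDefFoundError` during provider initialization as "optional dependency missing" and skip the provider instead of logging it as an ERROR. The sketch below is purely illustrative and not necessarily how FLINK-29622 was resolved:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch of loading optional providers defensively: a provider
// whose init throws NoClassDefFoundError (missing optional dependency) is
// skipped with a message instead of surfacing as an ERROR log.
class ProviderLoadingSketch {
    interface Provider { String name(); void init(); }

    static List<String> loadAll(List<Supplier<Provider>> factories) {
        List<String> loaded = new ArrayList<>();
        for (Supplier<Provider> factory : factories) {
            try {
                Provider p = factory.get();
                p.init();
                loaded.add(p.name());
            } catch (NoClassDefFoundError e) {
                // Optional classpath dependency missing: expected in many tests.
                System.out.println("Skipping provider: " + e.getMessage());
            }
        }
        return loaded;
    }

    public static void main(String[] args) {
        Supplier<Provider> hadoopFs = () -> new Provider() {
            public String name() { return "hadoopfs"; }
            public void init() {
                throw new NoClassDefFoundError("org/apache/hadoop/hdfs/HdfsConfiguration");
            }
        };
        Supplier<Provider> other = () -> new Provider() {
            public String name() { return "other"; }
            public void init() {}
        };
        System.out.println(loadAll(List.of(hadoopFs, other))); // prints [other]
    }
}
```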





[jira] [Commented] (FLINK-28424) JdbcExactlyOnceSinkE2eTest hangs on Azure

2022-10-19 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17620101#comment-17620101
 ] 

Xingbo Huang commented on FLINK-28424:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42176=logs=075127ba-54d5-54b0-cccf-6a36778b332d=c35a13eb-0df9-505f-29ac-8097029d4d79=17367

> JdbcExactlyOnceSinkE2eTest hangs on Azure
> -
>
> Key: FLINK-28424
> URL: https://issues.apache.org/jira/browse/FLINK-28424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> {code:java}
> 2022-07-06T07:10:57.8133295Z 
> ==
> 2022-07-06T07:10:57.8137200Z === WARNING: This task took already 95% of the 
> available time budget of 232 minutes ===
> 2022-07-06T07:10:57.8140723Z 
> ==
> 2022-07-06T07:10:57.8186584Z 
> ==
> 2022-07-06T07:10:57.8187530Z The following Java processes are running (JPS)
> 2022-07-06T07:10:57.8188571Z 
> ==
> 2022-07-06T07:10:58.2136012Z 825016 Jps
> 2022-07-06T07:10:58.2136438Z 34359 surefirebooter8568713056714319310.jar
> 2022-07-06T07:10:58.2136774Z 525 Launcher
> 2022-07-06T07:10:58.2240260Z 
> ==
> 2022-07-06T07:10:58.2240814Z Printing stack trace of Java process 825016
> 2022-07-06T07:10:58.2241256Z 
> ==
> 2022-07-06T07:10:58.4498109Z 825016: No such process
> 2022-07-06T07:10:58.4524779Z 
> ==
> 2022-07-06T07:10:58.4525272Z Printing stack trace of Java process 34359
> 2022-07-06T07:10:58.4525713Z 
> ==
> 2022-07-06T07:10:58.6399085Z 2022-07-06 07:10:58
> 2022-07-06T07:10:58.6400425Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.292-b10 mixed mode):
> 2022-07-06T07:10:58.7332738Z "Legacy Source Thread - Source: Custom Source -> 
> Map -> Sink: Unnamed (1/4)#44585" #870775 prio=5 os_prio=0 
> tid=0x7fca5c06f800 nid=0xc3c26 waiting on condition [0x7fca503b1000]
> 2022-07-06T07:10:58.786Zjava.lang.Thread.State: WAITING (parking)
> 2022-07-06T07:10:58.7333759Z  at sun.misc.Unsafe.park(Native Method)
> 2022-07-06T07:10:58.7334404Z  - parking to wait for  <0xd5998448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> 2022-07-06T07:10:58.7334943Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-07-06T07:10:58.7335605Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> 2022-07-06T07:10:58.7336392Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> 2022-07-06T07:10:58.7337195Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> 2022-07-06T07:10:58.7337966Z  at 
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> 2022-07-06T07:10:58.7338677Z  at 
> org.apache.flink.connector.jdbc.xa.JdbcExactlyOnceSinkE2eTest$TestEntrySource.waitForConsumers(JdbcExactlyOnceSinkE2eTest.java:314)
> 2022-07-06T07:10:58.7339566Z  at 
> org.apache.flink.connector.jdbc.xa.JdbcExactlyOnceSinkE2eTest$TestEntrySource.run(JdbcExactlyOnceSinkE2eTest.java:300)
> 2022-07-06T07:10:58.7340281Z  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
> 2022-07-06T07:10:58.7340883Z  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
> 2022-07-06T07:10:58.7341583Z  at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:333)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37685=logs=075127ba-54d5-54b0-cccf-6a36778b332d=c35a13eb-0df9-505f-29ac-8097029d4d79=14871
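The thread dump above shows the test source parked on a `CountDownLatch` inside `waitForConsumers`. A minimal Python analogue of that wait-for-consumers pattern (hypothetical `ConsumerLatch` class, not Flink code) shows how a bounded wait makes the hang observable instead of blocking forever:

```python
import threading

# Analogue of the hang above: a source thread blocks on a latch that is
# only released once the expected number of consumers has registered.
# With no timeout the await blocks forever; with a timeout the missing
# registration shows up as a False return value.
class ConsumerLatch:
    def __init__(self, expected):
        self.expected = expected
        self.count = 0
        self.cond = threading.Condition()

    def register(self):
        # Called by each consumer; releases waiters once all have arrived.
        with self.cond:
            self.count += 1
            if self.count >= self.expected:
                self.cond.notify_all()

    def wait_for_consumers(self, timeout=None):
        # Returns True if all consumers registered, False on timeout.
        with self.cond:
            return self.cond.wait_for(
                lambda: self.count >= self.expected, timeout=timeout)

latch = ConsumerLatch(expected=2)
threading.Thread(target=latch.register).start()
# Only one of two expected consumers registers, so the bounded wait
# returns False rather than hanging like the unbounded await in the trace.
print(latch.wait_for_consumers(timeout=0.5))
```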



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29681) Python side-output operator not generated in some cases

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-29681.

Fix Version/s: (was: 1.17.0)
 Assignee: Juntao Hu
   Resolution: Fixed

Merged into master via 93a9c504b882356fdf65ca962c4169ecb1bf66e5

Merged into release-1.16 via 0630952860153dcabe1bd0e74bcde19cf6f5

> Python side-output operator not generated in some cases
> ---
>
> Key: FLINK-29681
> URL: https://issues.apache.org/jira/browse/FLINK-29681
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Juntao Hu
>Assignee: Juntao Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> If a SideOutputDataStream is used in `execute_and_collect`, 
> `from_data_stream`, or `create_temporary_view`, the side-outputting operator 
> will not be properly configured: operator configuration relies on a 
> bottom-up scan of transformations, and in these cases there is no concrete 
> transformation downstream of the SideOutputTransformation. As a result, the 
> error "tuple object has no attribute get_fields_by_names" is raised.
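The bottom-up scan described above can be sketched as follows (hypothetical `Transformation`/`bottom_up_scan` names, not the PyFlink implementation): a walk that starts from the sinks never visits a side-output branch that has no downstream transformation, which is the shape of the bug reported here.

```python
# Sketch of a bottom-up transformation scan: operators are discovered by
# walking inputs starting from the sinks, so a side-output branch with no
# downstream transformation is never visited (and never configured).
class Transformation:
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)

def bottom_up_scan(sinks):
    seen, stack = [], list(sinks)
    while stack:
        t = stack.pop()
        if t.name not in seen:
            seen.append(t.name)
            stack.extend(t.inputs)
    return seen

source = Transformation("source")
process = Transformation("process", [source])
side = Transformation("side-output", [process])  # no sink consumes it
main_sink = Transformation("sink", [process])

# The side-output transformation is missed by the scan, mirroring the
# unconfigured operator behind the reported error.
print("side-output" in bottom_up_scan([main_sink]))  # -> False
```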



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29641) SortMergeResultPartitionReadSchedulerTest.testCreateSubpartitionReader

2022-10-19 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-29641.

Fix Version/s: 1.16.0
   (was: 1.17.0)
   (was: 1.16.1)
   Resolution: Fixed

Merged into master via 959fe97beeac15bbac70f4980cc6a3a110b432a2

Merged into release-1.16 via d233b39be94f330dabba593ac3e709a73eb714d2

> SortMergeResultPartitionReadSchedulerTest.testCreateSubpartitionReader
> --
>
> Key: FLINK-29641
> URL: https://issues.apache.org/jira/browse/FLINK-29641
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Tests
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Matthias Pohl
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> [This 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42011=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=8433]
>  failed (not exclusively) due to 
> {{SortMergeResultPartitionReadSchedulerTest.testCreateSubpartitionReader}}. 
> The assert checking that the {{SortedMergeSubpartitionReader}} is in running 
> state fails.
> My suspicion is that the condition in 
> [SortMergeResultPartitionReadScheduler.mayTriggerReading|https://github.com/apache/flink/blob/87d4f70e49255b551d0106117978b1aa0747358c/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SortMergeResultPartitionReadScheduler.java#L425-L428]
>  (or something related to that condition) needs to be reconsidered since 
> that's the only time {{isRunning}} is actually set to true.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29419) HybridShuffle.testHybridFullExchangesRestart hangs

2022-10-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619963#comment-17619963
 ] 

Xingbo Huang commented on FLINK-29419:
--

Temporarily disabled `HybridShuffleITCase` on master via 
426c39107f434da5805bef0ff84f95912098465b

Temporarily disabled `HybridShuffleITCase` on master via 
20eb0168aa602d1b7a6b8dd116ddd1abeac8dc5e

> HybridShuffle.testHybridFullExchangesRestart hangs
> --
>
> Key: FLINK-29419
> URL: https://issues.apache.org/jira/browse/FLINK-29419
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-09-26T10:56:44.0766792Z Sep 26 10:56:44 "ForkJoinPool-1-worker-25" #27 
> daemon prio=5 os_prio=0 tid=0x7f41a4efa000 nid=0x6d76 waiting on 
> condition [0x7f40ac135000]
> 2022-09-26T10:56:44.0767432Z Sep 26 10:56:44java.lang.Thread.State: 
> WAITING (parking)
> 2022-09-26T10:56:44.0767892Z Sep 26 10:56:44  at sun.misc.Unsafe.park(Native 
> Method)
> 2022-09-26T10:56:44.0768644Z Sep 26 10:56:44  - parking to wait for  
> <0xa0704e18> (a java.util.concurrent.CompletableFuture$Signaller)
> 2022-09-26T10:56:44.0769287Z Sep 26 10:56:44  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-09-26T10:56:44.0769949Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2022-09-26T10:56:44.0770623Z Sep 26 10:56:44  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
> 2022-09-26T10:56:44.0771349Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2022-09-26T10:56:44.0772092Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-09-26T10:56:44.0772777Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:57)
> 2022-09-26T10:56:44.0773534Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:115)
> 2022-09-26T10:56:44.0774333Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridFullExchangesRestart(HybridShuffleITCase.java:59)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41343=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29419) HybridShuffle.testHybridFullExchangesRestart hangs

2022-10-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619963#comment-17619963
 ] 

Xingbo Huang edited comment on FLINK-29419 at 10/19/22 3:23 AM:


Temporarily disabled `HybridShuffleITCase` on master via 
426c39107f434da5805bef0ff84f95912098465b

Temporarily disabled `HybridShuffleITCase` on release-1.16 via 
20eb0168aa602d1b7a6b8dd116ddd1abeac8dc5e


was (Author: hxb):
Temporarily disabled `HybridShuffleITCase` on master via 
426c39107f434da5805bef0ff84f95912098465b

Temporarily disabled `HybridShuffleITCase` on master via 
20eb0168aa602d1b7a6b8dd116ddd1abeac8dc5e

> HybridShuffle.testHybridFullExchangesRestart hangs
> --
>
> Key: FLINK-29419
> URL: https://issues.apache.org/jira/browse/FLINK-29419
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-09-26T10:56:44.0766792Z Sep 26 10:56:44 "ForkJoinPool-1-worker-25" #27 
> daemon prio=5 os_prio=0 tid=0x7f41a4efa000 nid=0x6d76 waiting on 
> condition [0x7f40ac135000]
> 2022-09-26T10:56:44.0767432Z Sep 26 10:56:44java.lang.Thread.State: 
> WAITING (parking)
> 2022-09-26T10:56:44.0767892Z Sep 26 10:56:44  at sun.misc.Unsafe.park(Native 
> Method)
> 2022-09-26T10:56:44.0768644Z Sep 26 10:56:44  - parking to wait for  
> <0xa0704e18> (a java.util.concurrent.CompletableFuture$Signaller)
> 2022-09-26T10:56:44.0769287Z Sep 26 10:56:44  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-09-26T10:56:44.0769949Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2022-09-26T10:56:44.0770623Z Sep 26 10:56:44  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
> 2022-09-26T10:56:44.0771349Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2022-09-26T10:56:44.0772092Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-09-26T10:56:44.0772777Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:57)
> 2022-09-26T10:56:44.0773534Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:115)
> 2022-09-26T10:56:44.0774333Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridFullExchangesRestart(HybridShuffleITCase.java:59)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41343=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29427) LookupJoinITCase failed with classloader problem

2022-10-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619346#comment-17619346
 ] 

Xingbo Huang commented on FLINK-29427:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42117=logs=f2c100be-250b-5e85-7bbe-176f68fcddc5=05efd11e-5400-54a4-0d27-a4663be008a9]

Hi [~smiralex], is there any update on this issue?

> LookupJoinITCase failed with classloader problem
> 
>
> Key: FLINK-29427
> URL: https://issues.apache.org/jira/browse/FLINK-29427
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-27T02:49:20.9501313Z Sep 27 02:49:20 Caused by: 
> org.codehaus.janino.InternalCompilerException: Compiling 
> "KeyProjection$108341": Trying to access closed classloader. Please check if 
> you store classloaders directly or indirectly in static fields. If the 
> stacktrace suggests that the leak occurs in a third party library and cannot 
> be fixed immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> 2022-09-27T02:49:20.9502654Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:382)
> 2022-09-27T02:49:20.9503366Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:237)
> 2022-09-27T02:49:20.9504044Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:465)
> 2022-09-27T02:49:20.9504704Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:216)
> 2022-09-27T02:49:20.9505341Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:207)
> 2022-09-27T02:49:20.9505965Z Sep 27 02:49:20  at 
> org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
> 2022-09-27T02:49:20.9506584Z Sep 27 02:49:20  at 
> org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75)
> 2022-09-27T02:49:20.9507261Z Sep 27 02:49:20  at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:104)
> 2022-09-27T02:49:20.9507883Z Sep 27 02:49:20  ... 30 more
> 2022-09-27T02:49:20.9509266Z Sep 27 02:49:20 Caused by: 
> java.lang.IllegalStateException: Trying to access closed classloader. Please 
> check if you store classloaders directly or indirectly in static fields. If 
> the stacktrace suggests that the leak occurs in a third party library and 
> cannot be fixed immediately, you can disable this check with the 
> configuration 'classloader.check-leaked-classloader'.
> 2022-09-27T02:49:20.9510835Z Sep 27 02:49:20  at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:184)
> 2022-09-27T02:49:20.9511760Z Sep 27 02:49:20  at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:192)
> 2022-09-27T02:49:20.9512456Z Sep 27 02:49:20  at 
> java.lang.Class.forName0(Native Method)
> 2022-09-27T02:49:20.9513014Z Sep 27 02:49:20  at 
> java.lang.Class.forName(Class.java:348)
> 2022-09-27T02:49:20.9513649Z Sep 27 02:49:20  at 
> org.codehaus.janino.ClassLoaderIClassLoader.findIClass(ClassLoaderIClassLoader.java:89)
> 2022-09-27T02:49:20.9514339Z Sep 27 02:49:20  at 
> org.codehaus.janino.IClassLoader.loadIClass(IClassLoader.java:312)
> 2022-09-27T02:49:20.9514990Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.findTypeByName(UnitCompiler.java:8556)
> 2022-09-27T02:49:20.9515659Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6749)
> 2022-09-27T02:49:20.9516337Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6594)
> 2022-09-27T02:49:20.9516989Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6573)
> 2022-09-27T02:49:20.9517632Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.access$13900(UnitCompiler.java:215)
> 2022-09-27T02:49:20.9518319Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6481)
> 2022-09-27T02:49:20.9519018Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6476)
> 2022-09-27T02:49:20.9519680Z Sep 27 02:49:20  at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3928)
> 2022-09-27T02:49:20.9520386Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6476)
> 2022-09-27T02:49:20.9521042Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6469)
> 2022-09-27T02:49:20.9521677Z Sep 27 02:49:20  at 
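The exception text quoted above names its own escape hatch: if the classloader leak cannot be fixed immediately, the check can be disabled via the `classloader.check-leaked-classloader` configuration key, e.g. in flink-conf.yaml (verify the key against your Flink version):

```yaml
# flink-conf.yaml -- disables the leaked-classloader check quoted in the
# exception above; intended only as a temporary workaround, not a fix.
classloader.check-leaked-classloader: false
```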

[jira] [Commented] (FLINK-28440) EventTimeWindowCheckpointingITCase.testSlidingTimeWindow failed with restore

2022-10-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619342#comment-17619342
 ] 

Xingbo Huang commented on FLINK-28440:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42116=logs=baf26b34-3c6a-54e8-f93f-cf269b32f802=8c9d126d-57d2-5a9e-a8c8-ff53f7b35cd9

> EventTimeWindowCheckpointingITCase.testSlidingTimeWindow failed with restore
> 
>
> Key: FLINK-28440
> URL: https://issues.apache.org/jira/browse/FLINK-28440
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.17.0
>
>
> {code:java}
> 2022-07-07T03:27:47.5779102Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-07-07T03:27:47.5779722Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-07-07T03:27:47.5780444Z  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
> 2022-07-07T03:27:47.5781338Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-07-07T03:27:47.5781955Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-07-07T03:27:47.5782587Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-07-07T03:27:47.5783184Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-07-07T03:27:47.5783843Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:268)
> 2022-07-07T03:27:47.5784599Z  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-07-07T03:27:47.5785284Z  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-07-07T03:27:47.5785907Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-07-07T03:27:47.5786528Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-07-07T03:27:47.5787121Z  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1277)
> 2022-07-07T03:27:47.5787874Z  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-07-07T03:27:47.5788498Z  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-07-07T03:27:47.5789265Z  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-07-07T03:27:47.5789968Z  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-07-07T03:27:47.5790582Z  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-07-07T03:27:47.5791198Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-07-07T03:27:47.5791799Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-07-07T03:27:47.5792351Z  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-07-07T03:27:47.5793075Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-07-07T03:27:47.5793572Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-07-07T03:27:47.5794075Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-07-07T03:27:47.5794586Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-07-07T03:27:47.5795094Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-07-07T03:27:47.5795654Z  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-07-07T03:27:47.5796307Z  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-07-07T03:27:47.5796922Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
> 2022-07-07T03:27:47.5797574Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
> 2022-07-07T03:27:47.5798196Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
> 2022-07-07T03:27:47.5798739Z  at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621)
> 2022-07-07T03:27:47.5799255Z  at 
> 

[jira] [Commented] (FLINK-29419) HybridShuffle.testHybridFullExchangesRestart hangs

2022-10-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619341#comment-17619341
 ] 

Xingbo Huang commented on FLINK-29419:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42115=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=b78d9d30-509a-5cea-1fef-db7abaa325ae]

HybridShuffleITCase.testHybridSelectiveExchanges hangs too
{code:java}
3:03:31.9738556Z Oct 18 03:03:31 "ForkJoinPool-1-worker-51" #28 daemon prio=5 
os_prio=0 cpu=5134.15ms elapsed=3370.05s tid=0x7f03f4dad000 nid=0x4f71 
waiting on condition  [0x7f03c8c12000]
2022-10-18T03:03:31.9739287Z Oct 18 03:03:31java.lang.Thread.State: WAITING 
(parking)
2022-10-18T03:03:31.9739815Z Oct 18 03:03:31at 
jdk.internal.misc.Unsafe.park(java.base@11.0.11/Native Method)
2022-10-18T03:03:31.9740662Z Oct 18 03:03:31- parking to wait for  
<0xa2748288> (a java.util.concurrent.CompletableFuture$Signaller)
2022-10-18T03:03:31.9741425Z Oct 18 03:03:31at 
java.util.concurrent.locks.LockSupport.park(java.base@11.0.11/LockSupport.java:194)
2022-10-18T03:03:31.9742584Z Oct 18 03:03:31at 
java.util.concurrent.CompletableFuture$Signaller.block(java.base@11.0.11/CompletableFuture.java:1796)
2022-10-18T03:03:31.9743340Z Oct 18 03:03:31at 
java.util.concurrent.ForkJoinPool.managedBlock(java.base@11.0.11/ForkJoinPool.java:3118)
2022-10-18T03:03:31.9744059Z Oct 18 03:03:31at 
java.util.concurrent.CompletableFuture.waitingGet(java.base@11.0.11/CompletableFuture.java:1823)
2022-10-18T03:03:31.9744783Z Oct 18 03:03:31at 
java.util.concurrent.CompletableFuture.get(java.base@11.0.11/CompletableFuture.java:1998)
2022-10-18T03:03:31.9745501Z Oct 18 03:03:31at 
org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:57)
2022-10-18T03:03:31.9746297Z Oct 18 03:03:31at 
org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:115)
2022-10-18T03:03:31.9747132Z Oct 18 03:03:31at 
org.apache.flink.test.runtime.HybridShuffleITCase.testHybridSelectiveExchanges(HybridShuffleITCase.java:51)
 {code}

> HybridShuffle.testHybridFullExchangesRestart hangs
> --
>
> Key: FLINK-29419
> URL: https://issues.apache.org/jira/browse/FLINK-29419
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-09-26T10:56:44.0766792Z Sep 26 10:56:44 "ForkJoinPool-1-worker-25" #27 
> daemon prio=5 os_prio=0 tid=0x7f41a4efa000 nid=0x6d76 waiting on 
> condition [0x7f40ac135000]
> 2022-09-26T10:56:44.0767432Z Sep 26 10:56:44java.lang.Thread.State: 
> WAITING (parking)
> 2022-09-26T10:56:44.0767892Z Sep 26 10:56:44  at sun.misc.Unsafe.park(Native 
> Method)
> 2022-09-26T10:56:44.0768644Z Sep 26 10:56:44  - parking to wait for  
> <0xa0704e18> (a java.util.concurrent.CompletableFuture$Signaller)
> 2022-09-26T10:56:44.0769287Z Sep 26 10:56:44  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-09-26T10:56:44.0769949Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2022-09-26T10:56:44.0770623Z Sep 26 10:56:44  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
> 2022-09-26T10:56:44.0771349Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2022-09-26T10:56:44.0772092Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-09-26T10:56:44.0772777Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:57)
> 2022-09-26T10:56:44.0773534Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:115)
> 2022-09-26T10:56:44.0774333Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridFullExchangesRestart(HybridShuffleITCase.java:59)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41343=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29405) InputFormatCacheLoaderTest is unstable

2022-10-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619340#comment-17619340
 ] 

Xingbo Huang commented on FLINK-29405:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42115=logs=de826397-1924-5900-0034-51895f69d4b7=f311e913-93a2-5a37-acab-4a63e1328f94

> InputFormatCacheLoaderTest is unstable
> --
>
> Key: FLINK-29405
> URL: https://issues.apache.org/jira/browse/FLINK-29405
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> #testExceptionDuringReload/#testCloseAndInterruptDuringReload fail reliably 
> when run in a loop.
> {code}
> java.lang.AssertionError: 
> Expecting AtomicInteger(0) to have value:
>   0
> but did not.
>   at 
> org.apache.flink.table.runtime.functions.table.fullcache.inputformat.InputFormatCacheLoaderTest.testCloseAndInterruptDuringReload(InputFormatCacheLoaderTest.java:161)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29405) InputFormatCacheLoaderTest is unstable

2022-10-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-29405:
-
Labels: pull-request-available test-stability  (was: pull-request-available)

> InputFormatCacheLoaderTest is unstable
> --
>
> Key: FLINK-29405
> URL: https://issues.apache.org/jira/browse/FLINK-29405
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> #testExceptionDuringReload/#testCloseAndInterruptDuringReload fail reliably 
> when run in a loop.
> {code}
> java.lang.AssertionError: 
> Expecting AtomicInteger(0) to have value:
>   0
> but did not.
>   at 
> org.apache.flink.table.runtime.functions.table.fullcache.inputformat.InputFormatCacheLoaderTest.testCloseAndInterruptDuringReload(InputFormatCacheLoaderTest.java:161)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29387) IntervalJoinITCase.testIntervalJoinSideOutputRightLateData failed with AssertionError

2022-10-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619333#comment-17619333
 ] 

Xingbo Huang commented on FLINK-29387:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42115=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798

> IntervalJoinITCase.testIntervalJoinSideOutputRightLateData failed with 
> AssertionError
> -
>
> Key: FLINK-29387
> URL: https://issues.apache.org/jira/browse/FLINK-29387
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.17.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-22T04:40:21.9296331Z Sep 22 04:40:21 [ERROR] 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.testIntervalJoinSideOutputRightLateData
>   Time elapsed: 2.46 s  <<< FAILURE!
> 2022-09-22T04:40:21.9297487Z Sep 22 04:40:21 java.lang.AssertionError: 
> expected:<[(key,2)]> but was:<[]>
> 2022-09-22T04:40:21.9298208Z Sep 22 04:40:21  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-09-22T04:40:21.9298927Z Sep 22 04:40:21  at 
> org.junit.Assert.failNotEquals(Assert.java:835)
> 2022-09-22T04:40:21.9299655Z Sep 22 04:40:21  at 
> org.junit.Assert.assertEquals(Assert.java:120)
> 2022-09-22T04:40:21.9300403Z Sep 22 04:40:21  at 
> org.junit.Assert.assertEquals(Assert.java:146)
> 2022-09-22T04:40:21.9301538Z Sep 22 04:40:21  at 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.expectInAnyOrder(IntervalJoinITCase.java:521)
> 2022-09-22T04:40:21.9302578Z Sep 22 04:40:21  at 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.testIntervalJoinSideOutputRightLateData(IntervalJoinITCase.java:280)
> 2022-09-22T04:40:21.9303641Z Sep 22 04:40:21  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-22T04:40:21.9304472Z Sep 22 04:40:21  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-22T04:40:21.9305371Z Sep 22 04:40:21  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-22T04:40:21.9306195Z Sep 22 04:40:21  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-22T04:40:21.9307011Z Sep 22 04:40:21  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-22T04:40:21.9308077Z Sep 22 04:40:21  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-22T04:40:21.9308968Z Sep 22 04:40:21  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-22T04:40:21.9309849Z Sep 22 04:40:21  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-22T04:40:21.9310704Z Sep 22 04:40:21  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-22T04:40:21.9311533Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-22T04:40:21.9312386Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-22T04:40:21.9313231Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-22T04:40:21.9314985Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-22T04:40:21.9315857Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-22T04:40:21.9316633Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-22T04:40:21.9317450Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-22T04:40:21.9318209Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-22T04:40:21.9318949Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-22T04:40:21.9319680Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-22T04:40:21.9320401Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-22T04:40:21.9321130Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-09-22T04:40:21.9321822Z Sep 22 04:40:21  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-09-22T04:40:21.9322498Z Sep 22 04:40:21  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-09-22T04:40:21.9323248Z Sep 22 04:40:21  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-09-22T04:40:21.9324080Z Sep 22 
> {code}

[jira] [Commented] (FLINK-28424) JdbcExactlyOnceSinkE2eTest hangs on Azure

2022-10-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17619330#comment-17619330
 ] 

Xingbo Huang commented on FLINK-28424:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42115&view=logs&j=075127ba-54d5-54b0-cccf-6a36778b332d&t=c35a13eb-0df9-505f-29ac-8097029d4d79&l=17787

> JdbcExactlyOnceSinkE2eTest hangs on Azure
> -
>
> Key: FLINK-28424
> URL: https://issues.apache.org/jira/browse/FLINK-28424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> {code:java}
> 2022-07-06T07:10:57.8133295Z 
> ==
> 2022-07-06T07:10:57.8137200Z === WARNING: This task took already 95% of the 
> available time budget of 232 minutes ===
> 2022-07-06T07:10:57.8140723Z 
> ==
> 2022-07-06T07:10:57.8186584Z 
> ==
> 2022-07-06T07:10:57.8187530Z The following Java processes are running (JPS)
> 2022-07-06T07:10:57.8188571Z 
> ==
> 2022-07-06T07:10:58.2136012Z 825016 Jps
> 2022-07-06T07:10:58.2136438Z 34359 surefirebooter8568713056714319310.jar
> 2022-07-06T07:10:58.2136774Z 525 Launcher
> 2022-07-06T07:10:58.2240260Z 
> ==
> 2022-07-06T07:10:58.2240814Z Printing stack trace of Java process 825016
> 2022-07-06T07:10:58.2241256Z 
> ==
> 2022-07-06T07:10:58.4498109Z 825016: No such process
> 2022-07-06T07:10:58.4524779Z 
> ==
> 2022-07-06T07:10:58.4525272Z Printing stack trace of Java process 34359
> 2022-07-06T07:10:58.4525713Z 
> ==
> 2022-07-06T07:10:58.6399085Z 2022-07-06 07:10:58
> 2022-07-06T07:10:58.6400425Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.292-b10 mixed mode):
> 2022-07-06T07:10:58.7332738Z "Legacy Source Thread - Source: Custom Source -> 
> Map -> Sink: Unnamed (1/4)#44585" #870775 prio=5 os_prio=0 
> tid=0x7fca5c06f800 nid=0xc3c26 waiting on condition [0x7fca503b1000]
> 2022-07-06T07:10:58.786Zjava.lang.Thread.State: WAITING (parking)
> 2022-07-06T07:10:58.7333759Z  at sun.misc.Unsafe.park(Native Method)
> 2022-07-06T07:10:58.7334404Z  - parking to wait for  <0xd5998448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> 2022-07-06T07:10:58.7334943Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-07-06T07:10:58.7335605Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> 2022-07-06T07:10:58.7336392Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> 2022-07-06T07:10:58.7337195Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> 2022-07-06T07:10:58.7337966Z  at 
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> 2022-07-06T07:10:58.7338677Z  at 
> org.apache.flink.connector.jdbc.xa.JdbcExactlyOnceSinkE2eTest$TestEntrySource.waitForConsumers(JdbcExactlyOnceSinkE2eTest.java:314)
> 2022-07-06T07:10:58.7339566Z  at 
> org.apache.flink.connector.jdbc.xa.JdbcExactlyOnceSinkE2eTest$TestEntrySource.run(JdbcExactlyOnceSinkE2eTest.java:300)
> 2022-07-06T07:10:58.7340281Z  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
> 2022-07-06T07:10:58.7340883Z  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
> 2022-07-06T07:10:58.7341583Z  at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:333)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37685&view=logs&j=075127ba-54d5-54b0-cccf-6a36778b332d&t=c35a13eb-0df9-505f-29ac-8097029d4d79&l=14871
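The stack trace above shows the legacy source thread parked indefinitely in `CountDownLatch.await()` inside `waitForConsumers`, which is why the whole task hangs until the CI time budget runs out. A minimal sketch of the failure mode and the usual mitigation, using Python's `threading.Event` as a stand-in for Java's `CountDownLatch` (this is an illustration of the pattern, not the test's actual code):

```python
import threading

# An unbounded wait blocks forever if the consumers never signal readiness;
# a bounded wait lets the test fail fast with a clear timeout instead.
def wait_for_consumers(ready: threading.Event, timeout_s: float) -> bool:
    """Return True if consumers signalled readiness within the deadline."""
    return ready.wait(timeout=timeout_s)

ready = threading.Event()
# No consumer ever calls ready.set(): the bounded wait returns False quickly,
# whereas ready.wait() with no timeout would hang, exactly like the stack trace.
timed_out = not wait_for_consumers(ready, timeout_s=0.1)
```

With a timeout the test surfaces a `TimeoutError`-style failure with a usable stack trace rather than hanging the whole Azure build.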



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29658) LocalTime support when converting Table to DataStream in PyFlink

2022-10-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-29658.

  Assignee: Juntao Hu
Resolution: Fixed

Merged into release-1.15 via 5e4da2cc868b28bfdadd689f24bbeb80079d4d5e

> LocalTime support when converting Table to DataStream in PyFlink
> 
>
> Key: FLINK-29658
> URL: https://issues.apache.org/jira/browse/FLINK-29658
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Affects Versions: 1.15.2
>Reporter: Juntao Hu
>Assignee: Juntao Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.3
>
>
> Support for Java LocalDate/Time/DateTime is needed when calling 
> `to_data_stream` on tables containing DATE/TIME/TIMESTAMP fields in PyFlink.
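The requested support amounts to converting Java's `LocalDate`/`LocalTime`/`LocalDateTime` values into their natural Python counterparts. A hypothetical sketch of that mapping using only the standard library (the names and parsing strategy here are illustrative assumptions, not PyFlink's actual converter):

```python
from datetime import date, time, datetime

# Hypothetical converter: map the three SQL temporal types to the Python
# objects a DataStream user would expect after to_data_stream.
def convert_temporal(sql_type: str, value: str):
    """Parse an ISO-formatted DATE/TIME/TIMESTAMP string into a Python object."""
    if sql_type == "DATE":
        return date.fromisoformat(value)        # java.time.LocalDate analogue
    if sql_type == "TIME":
        return time.fromisoformat(value)        # java.time.LocalTime analogue
    if sql_type == "TIMESTAMP":
        return datetime.fromisoformat(value)    # java.time.LocalDateTime analogue
    raise TypeError(f"unsupported SQL type: {sql_type}")
```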





[jira] [Closed] (FLINK-29648) "LocalDateTime not supported" error when retrieving Java TypeInformation from PyFlink

2022-10-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-29648.

Resolution: Fixed

Merged into master via ef0489b5131d03112d8a4ef11962a2d1ba6a9f54

Merged into release-1.16 via 59a276b38813a66f9ad80ed81b3dfcfe26decb7a

> "LocalDateTime not supported" error when retrieving Java TypeInformation from 
> PyFlink
> -
>
> Key: FLINK-29648
> URL: https://issues.apache.org/jira/browse/FLINK-29648
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Juntao Hu
>Assignee: Juntao Hu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The following code raises "TypeError: The java type info: LocalDateTime is 
> not supported in PyFlink currently.":
> {code:java}
> t_env.to_data_stream(t).key_by(...){code}
> However, this works:
> {code:java}
> t_env.to_data_stream(t).map(lambda r: r).key_by(...){code}
> Although Python coders for LocalTimeTypeInfo were added in 1.16, there is no 
> corresponding typeinfo on the Python side. Processing applied immediately 
> after to_data_stream therefore works, since the date/time data has already 
> been converted to Python objects, but key_by fails when it tries to retrieve 
> the typeinfo from the Java TypeInformation.
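The failure mode described above can be pictured as a lookup table with a missing entry. The registry below is purely illustrative (PyFlink's real implementation differs): a coder exists for the data, but `key_by` cannot find a Python typeinfo for the Java `LocalDateTime` type name.

```python
# Hypothetical Java-name -> Python-typeinfo registry; 1.16 shipped a coder
# for LocalDateTime without the matching registry entry, so lookups fail.
JAVA_TO_PYTHON_TYPEINFO = {
    "String": "Types.STRING",
    "Integer": "Types.INT",
    "SqlTimestamp": "Types.SQL_TIMESTAMP",
    # "LocalDateTime" entry missing -> TypeError below
}

def from_java_type_info(java_name: str) -> str:
    """Resolve a Java TypeInformation name, mimicking the reported error."""
    try:
        return JAVA_TO_PYTHON_TYPEINFO[java_name]
    except KeyError:
        raise TypeError(
            f"The java type info: {java_name} is not supported in PyFlink currently."
        )
```

This also explains the `map(lambda r: r)` workaround in the issue: the intermediate map forces PyFlink to pick its own Python typeinfo, so the Java-side lookup is never consulted.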





[jira] [Assigned] (FLINK-29648) "LocalDateTime not supported" error when retrieving Java TypeInformation from PyFlink

2022-10-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-29648:


Assignee: Juntao Hu

> "LocalDateTime not supported" error when retrieving Java TypeInformation from 
> PyFlink
> -
>
> Key: FLINK-29648
> URL: https://issues.apache.org/jira/browse/FLINK-29648
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Juntao Hu
>Assignee: Juntao Hu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The following code raises "TypeError: The java type info: LocalDateTime is 
> not supported in PyFlink currently.":
> {code:java}
> t_env.to_data_stream(t).key_by(...){code}
> However, this works:
> {code:java}
> t_env.to_data_stream(t).map(lambda r: r).key_by(...){code}
> Although Python coders for LocalTimeTypeInfo were added in 1.16, there is no 
> corresponding typeinfo on the Python side. Processing applied immediately 
> after to_data_stream therefore works, since the date/time data has already 
> been converted to Python objects, but key_by fails when it tries to retrieve 
> the typeinfo from the Java TypeInformation.





[jira] [Commented] (FLINK-29419) HybridShuffle.testHybridFullExchangesRestart hangs

2022-10-17 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17618720#comment-17618720
 ] 

Xingbo Huang commented on FLINK-29419:
--

Temporarily disabled on master via fca71d237b2440c0129ef7e6f8266f4091df4884

Temporarily disabled on master via 5d03327d016c6e3250a82566ff5530c65cf5d344

> HybridShuffle.testHybridFullExchangesRestart hangs
> --
>
> Key: FLINK-29419
> URL: https://issues.apache.org/jira/browse/FLINK-29419
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-09-26T10:56:44.0766792Z Sep 26 10:56:44 "ForkJoinPool-1-worker-25" #27 
> daemon prio=5 os_prio=0 tid=0x7f41a4efa000 nid=0x6d76 waiting on 
> condition [0x7f40ac135000]
> 2022-09-26T10:56:44.0767432Z Sep 26 10:56:44java.lang.Thread.State: 
> WAITING (parking)
> 2022-09-26T10:56:44.0767892Z Sep 26 10:56:44  at sun.misc.Unsafe.park(Native 
> Method)
> 2022-09-26T10:56:44.0768644Z Sep 26 10:56:44  - parking to wait for  
> <0xa0704e18> (a java.util.concurrent.CompletableFuture$Signaller)
> 2022-09-26T10:56:44.0769287Z Sep 26 10:56:44  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-09-26T10:56:44.0769949Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2022-09-26T10:56:44.0770623Z Sep 26 10:56:44  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
> 2022-09-26T10:56:44.0771349Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2022-09-26T10:56:44.0772092Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-09-26T10:56:44.0772777Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:57)
> 2022-09-26T10:56:44.0773534Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:115)
> 2022-09-26T10:56:44.0774333Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridFullExchangesRestart(HybridShuffleITCase.java:59)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41343&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7




