[GitHub] [flink] flinkbot edited a comment on pull request #17988: [FLINK-25010][Connectors/Hive] Speed up hive's createMRSplits by multi thread

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17988:
URL: https://github.com/apache/flink/pull/17988#issuecomment-984363654


   
   ## CI report:
   
   * 19a283c42da777c55b3b1e29bbab04edf4db6bd6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28591)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
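
The PR in the message above (FLINK-25010) speeds up Hive's createMRSplits by computing the MapReduce input splits for multiple partitions concurrently instead of one by one. A minimal sketch of that idea, using plain strings as stand-ins for the real Hive partition and Flink split types; the helper and class names here are illustrative, not the PR's actual code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of parallel split creation: submit the per-partition (typically
// I/O-bound) split computation to a fixed thread pool and collect results
// in partition order.
public class ParallelSplitCreation {

    static List<String> createSplitsInParallel(List<String> partitions, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String partition : partitions) {
                // Stand-in for the real per-partition split computation.
                futures.add(pool.submit(() -> "split-of-" + partition));
            }
            List<String> splits = new ArrayList<>();
            for (Future<String> f : futures) {
                try {
                    splits.add(f.get()); // blocks; preserves partition order
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return splits;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(createSplitsInParallel(List.of("p1", "p2", "p3"), 2));
    }
}
```

Because each partition's splits are independent, the fan-out changes only latency, not the resulting split list.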




[jira] [Commented] (FLINK-25292) Azure failed due to unable to fetch some archives

2021-12-24 Thread Yun Gao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465137#comment-17465137 ]

Yun Gao commented on FLINK-25292:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28583&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3&l=31

> Azure failed due to unable to fetch some archives
> -
>
> Key: FLINK-25292
> URL: https://issues.apache.org/jira/browse/FLINK-25292
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Yun Gao
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.6, 1.14.3
>
>
> {code:java}
> /bin/bash --noprofile --norc 
> /__w/_temp/ba0f8961-8595-4ace-b13f-d60e17df8803.sh
> Reading package lists...
> Building dependency tree...
> Reading state information...
> The following additional packages will be installed:
>   libio-pty-perl libipc-run-perl
> Suggested packages:
>   libtime-duration-perl libtimedate-perl
> The following NEW packages will be installed:
>   libio-pty-perl libipc-run-perl moreutils
> 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
> Need to get 177 kB of archives.
> After this operation, 573 kB of additional disk space will be used.
> Err:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libio-pty-perl amd64 
> 1:1.08-1.1build1
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> Err:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libipc-run-perl all 
> 0.94-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> Err:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 moreutils amd64 
> 0.57-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libio-pty-perl/libio-pty-perl_1.08-1.1build1_amd64.deb
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libipc-run-perl/libipc-run-perl_0.94-1_all.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/universe/m/moreutils/moreutils_0.57-1_amd64.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Unable to fetch some archives, maybe run apt-get update or try with 
> --fix-missing?
> Running command './tools/ci/test_controller.sh kafka/gelly' with a timeout of 
> 234 minutes.
> ./tools/azure-pipelines/uploading_watchdog.sh: line 76: ts: command not found
> The STDIO streams did not close within 10 seconds of the exit event from 
> process '/bin/bash'. This may indicate a child process inherited the STDIO 
> streams and has not yet exited.
> ##[error]Bash exited with code '141'.
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28064&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=e424005a-b16e-540f-196d-da062cc19bdf&l=13



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
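
The failure quoted above is a transient mirror timeout while apt-get fetches three packages. The usual CI mitigation is to retry the fetch a few times with a backoff before failing the build; a generic sketch of that retry pattern (the actual fix would live in the pipeline's shell scripts, so this Java helper is only an illustration):

```java
import java.util.function.Supplier;

// Retry an action that can fail transiently (e.g. a network fetch that
// reports "Could not connect to archive.ubuntu.com:80 ... connection timed
// out"), with a simple linear backoff between attempts.
public class RetryingFetch {

    static <T> T retry(Supplier<T> action, int maxAttempts, long backoffMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(backoffMillis * attempt); // linear backoff
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw last;
                    }
                }
            }
        }
        throw last; // all attempts exhausted; rethrow the last failure
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated fetch that times out twice, then succeeds.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("connection timed out");
            return "fetched";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```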


[jira] [Comment Edited] (FLINK-25292) Azure failed due to unable to fetch some archives

2021-12-24 Thread Yun Gao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465137#comment-17465137 ]

Yun Gao edited comment on FLINK-25292 at 12/25/21, 3:38 AM:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28583&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3&l=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28583&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5&l=31


was (Author: gaoyunhaii):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28583&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3&l=31

> Azure failed due to unable to fetch some archives
> -
>
> Key: FLINK-25292
> URL: https://issues.apache.org/jira/browse/FLINK-25292
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Yun Gao
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.6, 1.14.3
>
>
> {code:java}
> /bin/bash --noprofile --norc 
> /__w/_temp/ba0f8961-8595-4ace-b13f-d60e17df8803.sh
> Reading package lists...
> Building dependency tree...
> Reading state information...
> The following additional packages will be installed:
>   libio-pty-perl libipc-run-perl
> Suggested packages:
>   libtime-duration-perl libtimedate-perl
> The following NEW packages will be installed:
>   libio-pty-perl libipc-run-perl moreutils
> 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
> Need to get 177 kB of archives.
> After this operation, 573 kB of additional disk space will be used.
> Err:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libio-pty-perl amd64 
> 1:1.08-1.1build1
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> Err:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libipc-run-perl all 
> 0.94-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> Err:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 moreutils amd64 
> 0.57-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libio-pty-perl/libio-pty-perl_1.08-1.1build1_amd64.deb
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libipc-run-perl/libipc-run-perl_0.94-1_all.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/universe/m/moreutils/moreutils_0.57-1_amd64.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Unable to fetch some archives, maybe run apt-get update or try with 
> --fix-missing?
> Running command './tools/ci/test_controller.sh kafka/gelly' with a timeout of 
> 234 minutes.
> ./tools/azure-pipelines/uploading_watchdog.sh: line 76: ts: command not found
> The STDIO streams did not close within 10 seconds of the exit event from 
> process '/bin/bash'. This may indicate a child process inherited the STDIO 
> streams and has not yet exited.
> ##[error]Bash exited with code '141'.
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28064&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=e424005a-b16e-540f-196d-da062cc19bdf&l=13



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25292) Azure failed due to unable to fetch some archives

2021-12-24 Thread Yun Gao (Jira)


 [ https://issues.apache.org/jira/browse/FLINK-25292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yun Gao updated FLINK-25292:

Priority: Blocker  (was: Critical)

> Azure failed due to unable to fetch some archives
> -
>
> Key: FLINK-25292
> URL: https://issues.apache.org/jira/browse/FLINK-25292
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Yun Gao
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.6, 1.14.3
>
>
> {code:java}
> /bin/bash --noprofile --norc 
> /__w/_temp/ba0f8961-8595-4ace-b13f-d60e17df8803.sh
> Reading package lists...
> Building dependency tree...
> Reading state information...
> The following additional packages will be installed:
>   libio-pty-perl libipc-run-perl
> Suggested packages:
>   libtime-duration-perl libtimedate-perl
> The following NEW packages will be installed:
>   libio-pty-perl libipc-run-perl moreutils
> 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
> Need to get 177 kB of archives.
> After this operation, 573 kB of additional disk space will be used.
> Err:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libio-pty-perl amd64 
> 1:1.08-1.1build1
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> Err:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libipc-run-perl all 
> 0.94-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> Err:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 moreutils amd64 
> 0.57-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libio-pty-perl/libio-pty-perl_1.08-1.1build1_amd64.deb
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libipc-run-perl/libipc-run-perl_0.94-1_all.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/universe/m/moreutils/moreutils_0.57-1_amd64.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Unable to fetch some archives, maybe run apt-get update or try with 
> --fix-missing?
> Running command './tools/ci/test_controller.sh kafka/gelly' with a timeout of 
> 234 minutes.
> ./tools/azure-pipelines/uploading_watchdog.sh: line 76: ts: command not found
> The STDIO streams did not close within 10 seconds of the exit event from 
> process '/bin/bash'. This may indicate a child process inherited the STDIO 
> streams and has not yet exited.
> ##[error]Bash exited with code '141'.
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28064&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=e424005a-b16e-540f-196d-da062cc19bdf&l=13



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25292) Azure failed due to unable to fetch some archives

2021-12-24 Thread Yun Gao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465136#comment-17465136 ]

Yun Gao commented on FLINK-25292:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28566&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=c67e71ed-6451-5d26-8920-5a8cf9651901&l=29
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28566&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28566&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=31


> Azure failed due to unable to fetch some archives
> -
>
> Key: FLINK-25292
> URL: https://issues.apache.org/jira/browse/FLINK-25292
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.6, 1.14.3
>
>
> {code:java}
> /bin/bash --noprofile --norc 
> /__w/_temp/ba0f8961-8595-4ace-b13f-d60e17df8803.sh
> Reading package lists...
> Building dependency tree...
> Reading state information...
> The following additional packages will be installed:
>   libio-pty-perl libipc-run-perl
> Suggested packages:
>   libtime-duration-perl libtimedate-perl
> The following NEW packages will be installed:
>   libio-pty-perl libipc-run-perl moreutils
> 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
> Need to get 177 kB of archives.
> After this operation, 573 kB of additional disk space will be used.
> Err:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libio-pty-perl amd64 
> 1:1.08-1.1build1
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> Err:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libipc-run-perl all 
> 0.94-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> Err:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 moreutils amd64 
> 0.57-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libio-pty-perl/libio-pty-perl_1.08-1.1build1_amd64.deb
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libipc-run-perl/libipc-run-perl_0.94-1_all.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/universe/m/moreutils/moreutils_0.57-1_amd64.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Unable to fetch some archives, maybe run apt-get update or try with 
> --fix-missing?
> Running command './tools/ci/test_controller.sh kafka/gelly' with a timeout of 
> 234 minutes.
> ./tools/azure-pipelines/uploading_watchdog.sh: line 76: ts: command not found
> The STDIO streams did not close within 10 seconds of the exit event from 
> process '/bin/bash'. This may indicate a child process inherited the STDIO 
> streams and has not yet exited.
> ##[error]Bash exited with code '141'.
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28064&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=e424005a-b16e-540f-196d-da062cc19bdf&l=13



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25426) UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on AZP because it cannot allocate enough network buffers

2021-12-24 Thread Yun Gao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465135#comment-17465135 ]

Yun Gao commented on FLINK-25426:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28566&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7&l=16536

> UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on 
> AZP because it cannot allocate enough network buffers
> 
>
> Key: FLINK-25426
> URL: https://issues.apache.org/jira/browse/FLINK-25426
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The test 
> {{UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint}} fails 
> with
> {code}
> 2021-12-23T02:54:46.2862342Z Dec 23 02:54:46 [ERROR] 
> UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint  Time 
> elapsed: 2.992 s  <<< ERROR!
> 2021-12-23T02:54:46.2865774Z Dec 23 02:54:46 java.lang.OutOfMemoryError: 
> Could not allocate enough memory segments for NetworkBufferPool (required 
> (Mb): 64, allocated (Mb): 14, missing (Mb): 50). Cause: Direct buffer memory. 
> The direct out-of-memory error has occurred. This can mean two things: either 
> job(s) require(s) a larger size of JVM direct memory or there is a direct 
> memory leak. The direct memory can be allocated by user code or some of its 
> dependencies. In this case 'taskmanager.memory.task.off-heap.size' 
> configuration option should be increased. Flink framework and its 
> dependencies also consume the direct memory, mostly for network 
> communication. The most of network memory is managed by Flink and should not 
> result in out-of-memory error. In certain special cases, in particular for 
> jobs with high parallelism, the framework may require more direct memory 
> which is not managed by Flink. In this case 
> 'taskmanager.memory.framework.off-heap.size' configuration option should be 
> increased. If the error persists then there is probably a direct memory leak 
> in user code or some of its dependencies which has to be investigated and 
> fixed. The task executor has to be shutdown...
> 2021-12-23T02:54:46.2868239Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.buffer.NetworkBufferPool.(NetworkBufferPool.java:138)
> 2021-12-23T02:54:46.2868975Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createNettyShuffleEnvironment(NettyShuffleServiceFactory.java:140)
> 2021-12-23T02:54:46.2869771Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createNettyShuffleEnvironment(NettyShuffleServiceFactory.java:94)
> 2021-12-23T02:54:46.2870550Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createShuffleEnvironment(NettyShuffleServiceFactory.java:79)
> 2021-12-23T02:54:46.2871312Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createShuffleEnvironment(NettyShuffleServiceFactory.java:58)
> 2021-12-23T02:54:46.2872062Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerServices.createShuffleEnvironment(TaskManagerServices.java:414)
> 2021-12-23T02:54:46.2872767Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerServices.fromConfiguration(TaskManagerServices.java:282)
> 2021-12-23T02:54:46.2873436Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:523)
> 2021-12-23T02:54:46.2877615Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.startTaskManager(MiniCluster.java:645)
> 2021-12-23T02:54:46.2878247Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.startTaskManagers(MiniCluster.java:626)
> 2021-12-23T02:54:46.2878856Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:379)
> 2021-12-23T02:54:46.2879487Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.testutils.MiniClusterResource.startMiniCluster(MiniClusterResource.java:209)
> 2021-12-23T02:54:46.2880152Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.testutils.MiniClusterResource.before(MiniClusterResource.java:95)
> 2021-12-23T02:54:46.2880821Z Dec 23 02:54:46  at 
> org.apache.flink.test.util.MiniClusterWithClientResource.before(MiniClusterWithClientResource.java:64)
> 2021-12-23T02:54:46.2881519Z Dec 23 02:54:46  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:151)
> 2021-12-23T02:54:46.2882310Z Dec 23 02:54:46  at 
> 
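
The OutOfMemoryError above comes from the NetworkBufferPool failing to allocate enough direct memory in the test's MiniCluster, and the error message itself names the configuration options to raise. A sketch of how a test setup might bump them; the option keys are real Flink settings, but the sizes are illustrative guesses, not recommended values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Build the memory-related options a MiniCluster test could pass to its
// Flink Configuration. The pool above needed 64 MB but only got 14 MB, so
// the network memory and framework off-heap budgets are the knobs to raise.
public class NetworkMemoryConfig {

    static Map<String, String> buildConfig() {
        Map<String, String> conf = new LinkedHashMap<>();
        // Memory reserved for Flink's network buffer pool.
        conf.put("taskmanager.memory.network.min", "64mb");
        conf.put("taskmanager.memory.network.max", "64mb");
        // Direct memory used by the framework outside the managed pool.
        conf.put("taskmanager.memory.framework.off-heap.size", "128mb");
        return conf;
    }

    public static void main(String[] args) {
        buildConfig().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```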

[jira] [Comment Edited] (FLINK-18356) Exit code 137 returned from process

2021-12-24 Thread Yun Gao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465130#comment-17465130 ]

Yun Gao edited comment on FLINK-18356 at 12/25/21, 3:35 AM:


Table tests on master: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=9584
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798




was (Author: gaoyunhaii):
Table tests on master: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=9584

> Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Piotr Nowojski
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25292) Azure failed due to unable to fetch some archives

2021-12-24 Thread Yun Gao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465134#comment-17465134 ]

Yun Gao commented on FLINK-25292:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=29
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=961f8f81-6b52-53df-09f6-7291a2e4af6a&t=f53023d8-92c3-5d78-ec7e-70c2bf37be20&l=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=e424005a-b16e-540f-196d-da062cc19bdf&l=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=119bbba7-f5e3-5e08-e72d-09f1529665de&t=7166e71c-cad6-5ec9-ae14-15891ce68128&l=27
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=1fc6e7bf-633c-5081-c32a-9dea24b05730&t=576aba0a-d787-51b6-6a92-cf233f360582&l=28
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=a549b384-c55a-52c0-c451-00e0477ab6db&t=eef5922c-08d9-5ba3-7299-8393476594e7&l=29
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=e92ecf6d-e207-5a42-7ff7-528ff0c5b259&t=40fc352e-9b4c-5fd8-363f-628f24b01ec2&l=29
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=ce3801ad-3bd5-5f06-d165-34d37e757d90&t=5e4d9387-1dcc-5885-a901-90469b7e6d2f&l=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=e9af9cde-9a65-5281-a58e-2c8511d36983&t=c520d2c3-4d17-51f1-813b-4b0b74a0c307&l=31

> Azure failed due to unable to fetch some archives
> -
>
> Key: FLINK-25292
> URL: https://issues.apache.org/jira/browse/FLINK-25292
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.6, 1.14.3
>
>
> {code:java}
> /bin/bash --noprofile --norc 
> /__w/_temp/ba0f8961-8595-4ace-b13f-d60e17df8803.sh
> Reading package lists...
> Building dependency tree...
> Reading state information...
> The following additional packages will be installed:
>   libio-pty-perl libipc-run-perl
> Suggested packages:
>   libtime-duration-perl libtimedate-perl
> The following NEW packages will be installed:
>   libio-pty-perl libipc-run-perl moreutils
> 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
> Need to get 177 kB of archives.
> After this operation, 573 kB of additional disk space will be used.
> Err:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libio-pty-perl amd64 
> 1:1.08-1.1build1
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> Err:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libipc-run-perl all 
> 0.94-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> Err:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 moreutils amd64 
> 0.57-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libio-pty-perl/libio-pty-perl_1.08-1.1build1_amd64.deb
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libipc-run-perl/libipc-run-perl_0.94-1_all.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/universe/m/moreutils/moreutils_0.57-1_amd64.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Unable to fetch some archives, maybe run apt-get update or try with 
> --fix-missing?
> Running command './tools/ci/test_controller.sh kafka/gelly' with a timeout of 
> 234 minutes.
> ./tools/azure-pipelines/uploading_watchdog.sh: line 76: ts: command not found
> The STDIO streams did not close within 10 seconds of the exit event from 
> process '/bin/bash'. This may indicate a child process inherited the STDIO 
> streams and has not yet exited.
> ##[error]Bash exited with code '141'.
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28064&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=e424005a-b16e-540f-196d-da062cc19bdf&l=13



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25427) SavepointITCase.testTriggerSavepointAndResumeWithNoClaim fails on AZP

2021-12-24 Thread Yun Gao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465133#comment-17465133 ]

Yun Gao commented on FLINK-25427:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=9806

> SavepointITCase.testTriggerSavepointAndResumeWithNoClaim fails on AZP
> -
>
> Key: FLINK-25427
> URL: https://issues.apache.org/jira/browse/FLINK-25427
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The test {{SavepointITCase.testTriggerSavepointAndResumeWithNoClaim}} fails 
> on AZP with
> {code}
> 2021-12-23T03:10:26.4240179Z Dec 23 03:10:26 [ERROR] 
> org.apache.flink.test.checkpointing.SavepointITCase.testTriggerSavepointAndResumeWithNoClaim
>   Time elapsed: 62.289 s  <<< ERROR!
> 2021-12-23T03:10:26.4240998Z Dec 23 03:10:26 
> java.util.concurrent.TimeoutException: Condition was not met in given timeout.
> 2021-12-23T03:10:26.4241716Z Dec 23 03:10:26  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:166)
> 2021-12-23T03:10:26.4242643Z Dec 23 03:10:26  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:144)
> 2021-12-23T03:10:26.4243295Z Dec 23 03:10:26  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:136)
> 2021-12-23T03:10:26.4244433Z Dec 23 03:10:26  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning(CommonTestUtils.java:210)
> 2021-12-23T03:10:26.4245166Z Dec 23 03:10:26  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning(CommonTestUtils.java:184)
> 2021-12-23T03:10:26.4245830Z Dec 23 03:10:26  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning(CommonTestUtils.java:172)
> 2021-12-23T03:10:26.4246870Z Dec 23 03:10:26  at 
> org.apache.flink.test.checkpointing.SavepointITCase.testTriggerSavepointAndResumeWithNoClaim(SavepointITCase.java:446)
> 2021-12-23T03:10:26.4247813Z Dec 23 03:10:26  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-12-23T03:10:26.4248808Z Dec 23 03:10:26  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-12-23T03:10:26.4249426Z Dec 23 03:10:26  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-12-23T03:10:26.4250192Z Dec 23 03:10:26  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-12-23T03:10:26.4251196Z Dec 23 03:10:26  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-12-23T03:10:26.4252160Z Dec 23 03:10:26  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-12-23T03:10:26.4252888Z Dec 23 03:10:26  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-12-23T03:10:26.4253547Z Dec 23 03:10:26  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-12-23T03:10:26.4254142Z Dec 23 03:10:26  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-12-23T03:10:26.4254932Z Dec 23 03:10:26  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2021-12-23T03:10:26.4255513Z Dec 23 03:10:26  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2021-12-23T03:10:26.4256091Z Dec 23 03:10:26  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-12-23T03:10:26.4256636Z Dec 23 03:10:26  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-12-23T03:10:26.4257165Z Dec 23 03:10:26  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-12-23T03:10:26.4257744Z Dec 23 03:10:26  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-12-23T03:10:26.4258312Z Dec 23 03:10:26  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-12-23T03:10:26.4258884Z Dec 23 03:10:26  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-12-23T03:10:26.4259488Z Dec 23 03:10:26  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-12-23T03:10:26.4260049Z Dec 23 03:10:26  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2021-12-23T03:10:26.4260579Z Dec 23 03:10:26  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2021-12-23T03:10:26.4261108Z Dec 
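
The TimeoutException in the stack trace above is thrown by a poll-until-true test helper: the test repeatedly evaluates a condition ("are all tasks RUNNING?") and gives up once a deadline passes. A simplified sketch of such a helper; the real CommonTestUtils.waitUntilCondition throws the checked java.util.concurrent.TimeoutException, while this version uses an unchecked exception to keep the sketch minimal:

```java
import java.util.function.BooleanSupplier;

// Poll a condition until it becomes true or a deadline expires. Mirrors the
// shape of Flink's test utility that produced the failure above.
public class WaitUntil {

    static void waitUntilCondition(BooleanSupplier condition, long timeoutMillis,
                                   long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                // The real helper fails with exactly this message.
                throw new IllegalStateException(
                        "Condition was not met in given timeout.");
            }
            try {
                Thread.sleep(pollMillis); // back off between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms, well within the 1 s timeout.
        waitUntilCondition(() -> System.currentTimeMillis() - start > 50, 1000, 10);
        System.out.println("condition met");
    }
}
```

A failure like the one reported usually means the polled state never arrived (e.g. tasks stuck in scheduling), not that the helper itself is broken.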

[jira] [Comment Edited] (FLINK-25426) UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on AZP because it cannot allocate enough network buffers

2021-12-24 Thread Yun Gao (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465131#comment-17465131 ]

Yun Gao edited comment on FLINK-25426 at 12/25/21, 3:35 AM:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=2c3cbe13-dee0-5837-cf47-3053da9a8a78&t=b78d9d30-509a-5cea-1fef-db7abaa325ae&l=14634
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553&view=logs&j=b0a398c0-685b-599c-eb57-c8c2a771138e&t=747432ad-a576-5911-1e2a-68c6bedc248a&l=21020


was (Author: gaoyunhaii):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=b78d9d30-509a-5cea-1fef-db7abaa325ae=14634

> UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on 
> AZP because it cannot allocate enough network buffers
> 
>
> Key: FLINK-25426
> URL: https://issues.apache.org/jira/browse/FLINK-25426
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The test 
> {{UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint}} fails 
> with
> {code}
> 2021-12-23T02:54:46.2862342Z Dec 23 02:54:46 [ERROR] 
> UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint  Time 
> elapsed: 2.992 s  <<< ERROR!
> 2021-12-23T02:54:46.2865774Z Dec 23 02:54:46 java.lang.OutOfMemoryError: 
> Could not allocate enough memory segments for NetworkBufferPool (required 
> (Mb): 64, allocated (Mb): 14, missing (Mb): 50). Cause: Direct buffer memory. 
> The direct out-of-memory error has occurred. This can mean two things: either 
> job(s) require(s) a larger size of JVM direct memory or there is a direct 
> memory leak. The direct memory can be allocated by user code or some of its 
> dependencies. In this case 'taskmanager.memory.task.off-heap.size' 
> configuration option should be increased. Flink framework and its 
> dependencies also consume the direct memory, mostly for network 
> communication. The most of network memory is managed by Flink and should not 
> result in out-of-memory error. In certain special cases, in particular for 
> jobs with high parallelism, the framework may require more direct memory 
> which is not managed by Flink. In this case 
> 'taskmanager.memory.framework.off-heap.size' configuration option should be 
> increased. If the error persists then there is probably a direct memory leak 
> in user code or some of its dependencies which has to be investigated and 
> fixed. The task executor has to be shutdown...
> 2021-12-23T02:54:46.2868239Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.buffer.NetworkBufferPool.<init>(NetworkBufferPool.java:138)
> 2021-12-23T02:54:46.2868975Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createNettyShuffleEnvironment(NettyShuffleServiceFactory.java:140)
> 2021-12-23T02:54:46.2869771Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createNettyShuffleEnvironment(NettyShuffleServiceFactory.java:94)
> 2021-12-23T02:54:46.2870550Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createShuffleEnvironment(NettyShuffleServiceFactory.java:79)
> 2021-12-23T02:54:46.2871312Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createShuffleEnvironment(NettyShuffleServiceFactory.java:58)
> 2021-12-23T02:54:46.2872062Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerServices.createShuffleEnvironment(TaskManagerServices.java:414)
> 2021-12-23T02:54:46.2872767Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerServices.fromConfiguration(TaskManagerServices.java:282)
> 2021-12-23T02:54:46.2873436Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:523)
> 2021-12-23T02:54:46.2877615Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.startTaskManager(MiniCluster.java:645)
> 2021-12-23T02:54:46.2878247Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.startTaskManagers(MiniCluster.java:626)
> 2021-12-23T02:54:46.2878856Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:379)
> 2021-12-23T02:54:46.2879487Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.testutils.MiniClusterResource.startMiniCluster(MiniClusterResource.java:209)
> 2021-12-23T02:54:46.2880152Z Dec 23 02:54:46  at 
> 
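Editor's note on the quoted error above (not part of the original thread): the two configuration keys named in the out-of-memory message are ordinary flink-conf.yaml entries. A hedged sketch of raising them might look like the following; the option names come straight from the error text, but the sizes are purely illustrative and would need tuning per job:

```yaml
# Illustrative values only -- the right sizes depend on the job and cluster.
taskmanager.memory.task.off-heap.size: 128m
taskmanager.memory.framework.off-heap.size: 256m
```

The same options can also be passed as `-D` dynamic properties at job submission instead of editing the config file.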

[jira] [Commented] (FLINK-23047) CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465132#comment-17465132
 ] 

Yun Gao commented on FLINK-23047:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553=logs=c91190b6-40ae-57b2-5999-31b869b0a7c1=41463ccd-0694-5d4d-220d-8f771e7d098b=11527

> CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure
> -
>
> Key: FLINK-23047
> URL: https://issues.apache.org/jira/browse/FLINK-23047
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0, 1.12.4, 1.13.2, 1.15.0
>Reporter: Xintong Song
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19176=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=13995
> {code}
> [ERROR] Tests run: 17, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 157.28 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> [ERROR] 
> testCassandraBatchTupleFormat(org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase)
>   Time elapsed: 12.052 s  <<< ERROR!
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: /127.0.0.1:9042 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: [/127.0.0.1] 
> Timed out waiting for server response))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
>   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:234)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> 

[jira] [Commented] (FLINK-25426) UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on AZP because it cannot allocate enough network buffers

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465131#comment-17465131
 ] 

Yun Gao commented on FLINK-25426:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=b78d9d30-509a-5cea-1fef-db7abaa325ae=14634

> UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on 
> AZP because it cannot allocate enough network buffers
> 
>
> Key: FLINK-25426
> URL: https://issues.apache.org/jira/browse/FLINK-25426
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The test 
> {{UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint}} fails 
> with
> {code}
> 2021-12-23T02:54:46.2862342Z Dec 23 02:54:46 [ERROR] 
> UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint  Time 
> elapsed: 2.992 s  <<< ERROR!
> 2021-12-23T02:54:46.2865774Z Dec 23 02:54:46 java.lang.OutOfMemoryError: 
> Could not allocate enough memory segments for NetworkBufferPool (required 
> (Mb): 64, allocated (Mb): 14, missing (Mb): 50). Cause: Direct buffer memory. 
> The direct out-of-memory error has occurred. This can mean two things: either 
> job(s) require(s) a larger size of JVM direct memory or there is a direct 
> memory leak. The direct memory can be allocated by user code or some of its 
> dependencies. In this case 'taskmanager.memory.task.off-heap.size' 
> configuration option should be increased. Flink framework and its 
> dependencies also consume the direct memory, mostly for network 
> communication. The most of network memory is managed by Flink and should not 
> result in out-of-memory error. In certain special cases, in particular for 
> jobs with high parallelism, the framework may require more direct memory 
> which is not managed by Flink. In this case 
> 'taskmanager.memory.framework.off-heap.size' configuration option should be 
> increased. If the error persists then there is probably a direct memory leak 
> in user code or some of its dependencies which has to be investigated and 
> fixed. The task executor has to be shutdown...
> 2021-12-23T02:54:46.2868239Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.buffer.NetworkBufferPool.<init>(NetworkBufferPool.java:138)
> 2021-12-23T02:54:46.2868975Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createNettyShuffleEnvironment(NettyShuffleServiceFactory.java:140)
> 2021-12-23T02:54:46.2869771Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createNettyShuffleEnvironment(NettyShuffleServiceFactory.java:94)
> 2021-12-23T02:54:46.2870550Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createShuffleEnvironment(NettyShuffleServiceFactory.java:79)
> 2021-12-23T02:54:46.2871312Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createShuffleEnvironment(NettyShuffleServiceFactory.java:58)
> 2021-12-23T02:54:46.2872062Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerServices.createShuffleEnvironment(TaskManagerServices.java:414)
> 2021-12-23T02:54:46.2872767Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerServices.fromConfiguration(TaskManagerServices.java:282)
> 2021-12-23T02:54:46.2873436Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:523)
> 2021-12-23T02:54:46.2877615Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.startTaskManager(MiniCluster.java:645)
> 2021-12-23T02:54:46.2878247Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.startTaskManagers(MiniCluster.java:626)
> 2021-12-23T02:54:46.2878856Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:379)
> 2021-12-23T02:54:46.2879487Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.testutils.MiniClusterResource.startMiniCluster(MiniClusterResource.java:209)
> 2021-12-23T02:54:46.2880152Z Dec 23 02:54:46  at 
> org.apache.flink.runtime.testutils.MiniClusterResource.before(MiniClusterResource.java:95)
> 2021-12-23T02:54:46.2880821Z Dec 23 02:54:46  at 
> org.apache.flink.test.util.MiniClusterWithClientResource.before(MiniClusterWithClientResource.java:64)
> 2021-12-23T02:54:46.2881519Z Dec 23 02:54:46  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:151)
> 2021-12-23T02:54:46.2882310Z Dec 23 02:54:46  at 
> 

[jira] [Commented] (FLINK-18356) Exit code 137 returned from process

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465130#comment-17465130
 ] 

Yun Gao commented on FLINK-18356:
-

Table tests on master: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=9584

> Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Piotr Nowojski
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> {noformat}
> ============================= test session starts ==============================
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
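Editor's illustrative sketch (not from the thread): exit code 137 reported above is 128 + 9, i.e. the process was terminated by SIGKILL, which on a memory-exhausted CI machine is commonly the kernel OOM killer. The convention can be demonstrated with any process:

```shell
# Start a process, kill it with SIGKILL, and observe the 128+9 = 137 status.
sleep 60 &
pid=$!
kill -9 "$pid"

status=0
wait "$pid" || status=$?   # shells report 128 + signal number for killed children
echo "$status"             # prints 137
```

Seeing 137 from a test runner therefore means the process was killed from outside, not that the tests themselves failed.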


[jira] [Commented] (FLINK-23047) CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465129#comment-17465129
 ] 

Yun Gao commented on FLINK-23047:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28553=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=11924

> CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure
> -
>
> Key: FLINK-23047
> URL: https://issues.apache.org/jira/browse/FLINK-23047
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0, 1.12.4, 1.13.2, 1.15.0
>Reporter: Xintong Song
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19176=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=13995
> {code}
> [ERROR] Tests run: 17, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 157.28 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> [ERROR] 
> testCassandraBatchTupleFormat(org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase)
>   Time elapsed: 12.052 s  <<< ERROR!
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: /127.0.0.1:9042 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: [/127.0.0.1] 
> Timed out waiting for server response))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
>   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:234)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> 

[GitHub] [flink] lipusheng commented on pull request #18147: [FLINK-25370][docs] update HBase SQL Connector Options table-name des…

2021-12-24 Thread GitBox


lipusheng commented on pull request #18147:
URL: https://github.com/apache/flink/pull/18147#issuecomment-1000967223


   @fapaul, can you help me merge it? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25292) Azure failed due to unable to fetch some archives

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465128#comment-17465128
 ] 

Yun Gao commented on FLINK-25292:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3=27
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=b2f046ab-ae17-5406-acdc-240be7e870e4=93e5ae06-d194-513d-ba8d-150ef6da1d7c=27
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=5360d54c-8d94-5d85-304e-a89267eb785a=30
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=6bfdaf55-0c08-5e3f-a2d2-2a0285fd41cf=fd9796c3-9ce8-5619-781c-42f873e126a6=30
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=f450c1a5-64b1-5955-e215-49cb1ad5ec88=ea63c80c-957f-50d1-8f67-3671c14686b9=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=d8d26c26-7ec2-5ed2-772e-7a1a1eb8317c=be5fb08e-1ad7-563c-4f1a-a97ad4ce4865=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=d89de3df-4600-5585-dadc-9bbc9a5e661c=19336553-69ec-5b03-471a-791a483cced6=29
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=af0c3dd6-ccea-53d1-d352-344c568905e4=f898bece-d8f3-5fab-10f5-eacbefdb2d1b=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=02c4e775-43bf-5625-d1cc-542b5209e072=e5961b24-88d9-5c77-efd3-955422674c25=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=a549b384-c55a-52c0-c451-00e0477ab6db=81f2da51-a161-54c7-5b84-6001fed26530=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=3b6ec2fd-a816-5e75-c775-06fb87cb6670=2aff8966-346f-518f-e6ce-de64002a5034=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=f66801b3-5d8b-58b4-03aa-cc67e0663d23=1abe556e-1530-599d-b2c7-b8c00d549e53=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=2c7d57b9-7341-5a87-c9af-2cf7cc1a37dc=30
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=c5612577-f1f7-5977-6ff6-7432788526f7=53f6305f-55e6-561c-8f1e-3a1dde2c77df=29
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=a5ef94ef-68c2-57fd-3794-dc108ed1c495=9c1ddabe-d186-5a2c-5fcc-f3cafb3ec699=29

It seems nearly all the CI machines are affected. 


> Azure failed due to unable to fetch some archives
> -
>
> Key: FLINK-25292
> URL: https://issues.apache.org/jira/browse/FLINK-25292
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.6, 1.14.3
>
>
> {code:java}
> /bin/bash --noprofile --norc 
> /__w/_temp/ba0f8961-8595-4ace-b13f-d60e17df8803.sh
> Reading package lists...
> Building dependency tree...
> Reading state information...
> The following additional packages will be installed:
>   libio-pty-perl libipc-run-perl
> Suggested packages:
>   libtime-duration-perl libtimedate-perl
> The following NEW packages will be installed:
>   libio-pty-perl libipc-run-perl moreutils
> 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
> Need to get 177 kB of archives.
> After this operation, 573 kB of additional disk space will be used.
> Err:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libio-pty-perl amd64 
> 1:1.08-1.1build1
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> Err:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libipc-run-perl all 
> 0.94-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> Err:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 moreutils amd64 
> 0.57-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libio-pty-perl/libio-pty-perl_1.08-1.1build1_amd64.deb
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libipc-run-perl/libipc-run-perl_0.94-1_all.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> 
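Editor's illustrative sketch (not from the thread): transient mirror outages like the archive.ubuntu.com timeouts quoted above are often handled by retrying the fetch step. A minimal retry wrapper, assuming a POSIX shell, might look like this (the function name and attempt count are my own, not from the CI scripts):

```shell
# Hypothetical helper: run a command up to 3 times before giving up,
# sleeping briefly between attempts to ride out short network blips.
retry() {
  local n=0 max=3
  until "$@"; do
    n=$((n+1))
    if [ "$n" -ge "$max" ]; then
      return 1   # exhausted all attempts
    fi
    sleep 1
  done
}

retry true && echo "succeeded"   # prints "succeeded"
```

A wrapper like this (or apt's own `-o Acquire::Retries=3` option) would guard the `apt-get install` step against a single timed-out connection failing the whole build.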

[jira] [Commented] (FLINK-23047) CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465127#comment-17465127
 ] 

Yun Gao commented on FLINK-23047:
-

1.13: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=b6c4efed-9c7d-55ea-03a9-9bd7d5b08e4c=13475

> CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure
> -
>
> Key: FLINK-23047
> URL: https://issues.apache.org/jira/browse/FLINK-23047
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0, 1.12.4, 1.13.2, 1.15.0
>Reporter: Xintong Song
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19176=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=13995
> {code}
> [ERROR] Tests run: 17, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 157.28 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> [ERROR] 
> testCassandraBatchTupleFormat(org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase)
>   Time elapsed: 12.052 s  <<< ERROR!
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: /127.0.0.1:9042 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: [/127.0.0.1] 
> Timed out waiting for server response))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
>   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:234)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> 

[jira] [Commented] (FLINK-22306) KafkaITCase.testCollectingSchema failed on AZP

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465126#comment-17465126
 ] 

Yun Gao commented on FLINK-22306:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518=7942

> KafkaITCase.testCollectingSchema failed on AZP
> --
>
> Key: FLINK-22306
> URL: https://issues.apache.org/jira/browse/FLINK-22306
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Fabian Paul
>Priority: Minor
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.3
>
>
> The {{KafkaITCase.testCollectingSchema}} failed on AZP with
> {code}
> 2021-04-15T10:22:06.8263865Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-04-15T10:22:06.8266577Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2021-04-15T10:22:06.8267526Z  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2021-04-15T10:22:06.8268034Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2021-04-15T10:22:06.8268496Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2021-04-15T10:22:06.8269133Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-04-15T10:22:06.8270205Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-04-15T10:22:06.8270698Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
> 2021-04-15T10:22:06.8271192Z  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2021-04-15T10:22:06.8274903Z  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2021-04-15T10:22:06.8275602Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-04-15T10:22:06.8276139Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-04-15T10:22:06.8276589Z  at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081)
> 2021-04-15T10:22:06.8276965Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:264)
> 2021-04-15T10:22:06.8277307Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:261)
> 2021-04-15T10:22:06.8277634Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
> 2021-04-15T10:22:06.8277971Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
> 2021-04-15T10:22:06.8278352Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2021-04-15T10:22:06.8278767Z  at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
> 2021-04-15T10:22:06.8279223Z  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> 2021-04-15T10:22:06.8279743Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> 2021-04-15T10:22:06.8280130Z  at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
> 2021-04-15T10:22:06.8280561Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
> 2021-04-15T10:22:06.8287231Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
> 2021-04-15T10:22:06.8291223Z  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
> 2021-04-15T10:22:06.8291779Z  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
> 2021-04-15T10:22:06.8292745Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2021-04-15T10:22:06.8293335Z  at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
> 2021-04-15T10:22:06.8294000Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
> 2021-04-15T10:22:06.8294702Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
> 2021-04-15T10:22:06.8295281Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
> 2021-04-15T10:22:06.8295905Z  at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> 2021-04-15T10:22:06.8296412Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
> 2021-04-15T10:22:06.8296799Z  at 
> akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> 2021-04-15T10:22:06.8297353Z  at 
> 

[jira] [Updated] (FLINK-22306) KafkaITCase.testCollectingSchema failed on AZP

2021-12-24 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22306:

Labels: test-stability  (was: )


[jira] [Updated] (FLINK-22306) KafkaITCase.testCollectingSchema failed on AZP

2021-12-24 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22306:

Labels:   (was: auto-deprioritized-critical auto-deprioritized-major 
test-stability)


[jira] [Commented] (FLINK-22765) ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465125#comment-17465125
 ] 

Yun Gao commented on FLINK-22765:
-

1.13: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=c2734c79-73b6-521c-e85a-67c7ecae9107=9921

> ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable
> 
>
> Key: FLINK-22765
> URL: https://issues.apache.org/jira/browse/FLINK-22765
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.5
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18292=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> May 25 00:56:38 java.lang.AssertionError: 
> May 25 00:56:38 
> May 25 00:56:38 Expected: is ""
> May 25 00:56:38  but: was "The system is out of resources.\nConsult the 
> following stack trace for details."
> May 25 00:56:38   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:956)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:923)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.run(ExceptionUtilsITCase.java:94)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError(ExceptionUtilsITCase.java:70)
> May 25 00:56:38   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 25 00:56:38   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 25 00:56:38   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 25 00:56:38   at java.lang.reflect.Method.invoke(Method.java:498)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 25 00:56:38   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 25 00:56:38   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 25 00:56:38   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> May 25 00:56:38   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> May 25 00:56:38   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> May 25 00:56:38 
> {code}



--
This message 

[jira] [Reopened] (FLINK-22765) ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable

2021-12-24 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao reopened FLINK-22765:
-




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24261) KafkaSourceITCase.testMultipleSplits fails due to "Cannot create topic"

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465124#comment-17465124
 ] 

Yun Gao commented on FLINK-24261:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=7384

> KafkaSourceITCase.testMultipleSplits fails due to "Cannot create topic"
> ---
>
> Key: FLINK-24261
> URL: https://issues.apache.org/jira/browse/FLINK-24261
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0
>
> Attachments: mvn-1.FLINK-24261-vvp-flink-1.14-stream.log.gz
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23955=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=7119
> {code}
> Sep 13 01:14:27 [ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 180.412 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase
> Sep 13 01:14:27 [ERROR] testMultipleSplits{TestEnvironment, 
> ExternalContext}[1]  Time elapsed: 120.244 s  <<< ERROR!
> Sep 13 01:14:27 java.lang.RuntimeException: Cannot create topic 
> 'kafka-single-topic-7245292146378659602'
> Sep 13 01:14:27   at 
> org.apache.flink.connector.kafka.source.testutils.KafkaSingleTopicExternalContext.createTopic(KafkaSingleTopicExternalContext.java:100)
> Sep 13 01:14:27   at 
> org.apache.flink.connector.kafka.source.testutils.KafkaSingleTopicExternalContext.createSourceSplitDataWriter(KafkaSingleTopicExternalContext.java:142)
> Sep 13 01:14:27   at 
> org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.generateAndWriteTestData(SourceTestSuiteBase.java:301)
> Sep 13 01:14:27   at 
> org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testMultipleSplits(SourceTestSuiteBase.java:142)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-23047) CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465123#comment-17465123
 ] 

Yun Gao commented on FLINK-23047:
-

Many thanks [~echauchot] for taking care of this issue!

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=c520d2c3-4d17-51f1-813b-4b0b74a0c307=14352

> CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure
> -
>
> Key: FLINK-23047
> URL: https://issues.apache.org/jira/browse/FLINK-23047
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0, 1.12.4, 1.13.2, 1.15.0
>Reporter: Xintong Song
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19176=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=13995
> {code}
> [ERROR] Tests run: 17, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 157.28 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> [ERROR] 
> testCassandraBatchTupleFormat(org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase)
>   Time elapsed: 12.052 s  <<< ERROR!
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: /127.0.0.1:9042 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: [/127.0.0.1] 
> Timed out waiting for server response))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
>   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:234)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #17988: [FLINK-25010][Connectors/Hive] Speed up hive's createMRSplits by multi thread

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17988:
URL: https://github.com/apache/flink/pull/17988#issuecomment-984363654


   
   ## CI report:
   
   * e7be85162e0b431d518ab1ffe8b59283338d00b7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28411)
 
   * 19a283c42da777c55b3b1e29bbab04edf4db6bd6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28591)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
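[Editor's note] The PR above parallelizes Hive's MR split creation across a thread pool instead of a sequential loop. The actual PR code is not shown in this digest; the following is a minimal, hypothetical sketch of that general pattern — the names `createSplitsFor`, `ParallelSplitCreation`, and the `String`-based partition/split types are illustrative assumptions, not Flink's API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Illustrative sketch only: compute per-partition splits concurrently on a
 * fixed-size pool, then collect results in submission order so the final
 * split list is deterministic regardless of thread scheduling.
 */
public class ParallelSplitCreation {

    // Placeholder for the (potentially slow, I/O-bound) per-partition work.
    static List<String> createSplitsFor(String partition) {
        return List.of(partition + "-split-0", partition + "-split-1");
    }

    public static List<String> createSplits(List<String> partitions, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            // Submit one task per partition; Futures remember submission order.
            List<Future<List<String>>> futures = new ArrayList<>();
            for (String p : partitions) {
                futures.add(pool.submit(() -> createSplitsFor(p)));
            }
            // Collect in submission order, blocking on each result in turn.
            List<String> all = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                all.addAll(f.get());
            }
            return all;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(createSplits(List.of("p0", "p1"), 2));
    }
}
```

Because `Future.get()` is called in submission order, the speedup comes only from overlapping the per-partition work itself; error handling (a failed task surfaces as an `ExecutionException`) and pool sizing are simplified here.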




[jira] [Created] (FLINK-25442) HBaseConnectorITCase.testTableSink failed on azure

2021-12-24 Thread Yun Gao (Jira)
Yun Gao created FLINK-25442:
---

 Summary: HBaseConnectorITCase.testTableSink failed on azure
 Key: FLINK-25442
 URL: https://issues.apache.org/jira/browse/FLINK-25442
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.14.2
Reporter: Yun Gao



{code:java}
Dec 24 00:48:54 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
Dec 24 00:48:54 OpenJDK 64-Bit Server VM warning: ignoring option 
MaxPermSize=128m; support was removed in 8.0
Dec 24 00:48:54 Running org.apache.flink.connector.hbase2.HBaseConnectorITCase
Dec 24 00:48:59 Formatting using clusterid: testClusterID
Dec 24 00:50:15 java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
Dec 24 00:50:15 Thread[HFileArchiver-8,5,PEWorkerGroup]
Dec 24 00:50:15 Thread[HFileArchiver-9,5,PEWorkerGroup]
Dec 24 00:50:15 Thread[HFileArchiver-10,5,PEWorkerGroup]
Dec 24 00:50:15 Thread[HFileArchiver-11,5,PEWorkerGroup]
Dec 24 00:50:15 Thread[HFileArchiver-12,5,PEWorkerGroup]
Dec 24 00:50:15 Thread[HFileArchiver-13,5,PEWorkerGroup]
Dec 24 00:50:16 Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
82.068 sec <<< FAILURE! - in 
org.apache.flink.connector.hbase2.HBaseConnectorITCase
Dec 24 00:50:16 
testTableSink(org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time 
elapsed: 8.534 sec  <<< FAILURE!
Dec 24 00:50:16 java.lang.AssertionError: expected:<8> but was:<5>
Dec 24 00:50:16 at org.junit.Assert.fail(Assert.java:89)
Dec 24 00:50:16 at org.junit.Assert.failNotEquals(Assert.java:835)
Dec 24 00:50:16 at org.junit.Assert.assertEquals(Assert.java:120)
Dec 24 00:50:16 at org.junit.Assert.assertEquals(Assert.java:146)
Dec 24 00:50:16 at 
org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSink(HBaseConnectorITCase.java:291)
Dec 24 00:50:16 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Dec 24 00:50:16 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Dec 24 00:50:16 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Dec 24 00:50:16 at java.lang.reflect.Method.invoke(Method.java:498)
Dec 24 00:50:16 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Dec 24 00:50:16 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Dec 24 00:50:16 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Dec 24 00:50:16 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Dec 24 00:50:16 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Dec 24 00:50:16 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Dec 24 00:50:16 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Dec 24 00:50:16 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Dec 24 00:50:16 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Dec 24 00:50:16 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Dec 24 00:50:16 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Dec 24 00:50:16 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Dec 24 00:50:16 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Dec 24 00:50:16 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)

{code}




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17988: [FLINK-25010][Connectors/Hive] Speed up hive's createMRSplits by multi thread

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17988:
URL: https://github.com/apache/flink/pull/17988#issuecomment-984363654


   
   ## CI report:
   
   * e7be85162e0b431d518ab1ffe8b59283338d00b7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28411)
 
   * 19a283c42da777c55b3b1e29bbab04edf4db6bd6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] Myracle commented on a change in pull request #17988: [FLINK-25010][Connectors/Hive] Speed up hive's createMRSplits by multi thread

2021-12-24 Thread GitBox


Myracle commented on a change in pull request #17988:
URL: https://github.com/apache/flink/pull/17988#discussion_r775101619



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveOptions.java
##
@@ -51,4 +51,11 @@
 .withDescription(
 "If it is false, using flink native writer to 
write parquet and orc files; "
 + "If it is true, using hadoop mapred 
record writer to write parquet and orc files.");
+
+public static final ConfigOption 
TABLE_EXEC_HIVE_PARTITION_SPLIT_THREAD_NUM =
+key("table.exec.hive.partition-split.thread.num")

Review comment:
   @wuchong Good suggestion! I have modified it. Can you review it again? 
Thank you.








[GitHub] [flink] Myracle commented on a change in pull request #17988: [FLINK-25010][Connectors/Hive] Speed up hive's createMRSplits by multi thread

2021-12-24 Thread GitBox


Myracle commented on a change in pull request #17988:
URL: https://github.com/apache/flink/pull/17988#discussion_r775101619



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveOptions.java
##
@@ -51,4 +51,11 @@
 .withDescription(
 "If it is false, using flink native writer to 
write parquet and orc files; "
 + "If it is true, using hadoop mapred 
record writer to write parquet and orc files.");
+
+public static final ConfigOption 
TABLE_EXEC_HIVE_PARTITION_SPLIT_THREAD_NUM =
+key("table.exec.hive.partition-split.thread.num")

Review comment:
   Good suggestion! I have modified it. Can you review it again? Thank you.








[jira] [Commented] (FLINK-25292) Azure failed due to unable to fetch some archives

2021-12-24 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465122#comment-17465122
 ] 

Yun Gao commented on FLINK-25292:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=56781494-ebb0-5eae-f732-b9c397ec6ede=f34192cb-f912-5aba-c822-2283f32eeb24=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=119bbba7-f5e3-5e08-e72d-09f1529665de=7166e71c-cad6-5ec9-ae14-15891ce68128=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=576aba0a-d787-51b6-6a92-cf233f360582=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=ff2e2ea5-07e3-5521-7b04-a4fc3ad765e9=1ec6382b-bafe-5817-63ae-eda7d4be718e=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=ed934a8e-982d-5d3f-03cf-c751f5bd1b22=972d3f6c-09f6-5149-9cf8-2eaaf718eb08=29
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=51fed01c-4eb0-5511-d479-ed5e8b9a7820=948a1472-716f-5b18-3d4a-33ca0a14a784=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7c1d86e3-35bd-5fd5-3b7c-30c126a78702=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=f2c100be-250b-5e85-7bbe-176f68fcddc5=05efd11e-5400-54a4-0d27-a4663be008a9=31
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28551=logs=a5ef94ef-68c2-57fd-3794-dc108ed1c495=2c68b137-b01d-55c9-e603-3ff3f320364b=31

> Azure failed due to unable to fetch some archives
> -
>
> Key: FLINK-25292
> URL: https://issues.apache.org/jira/browse/FLINK-25292
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.6, 1.14.3
>
>
> {code:java}
> /bin/bash --noprofile --norc 
> /__w/_temp/ba0f8961-8595-4ace-b13f-d60e17df8803.sh
> Reading package lists...
> Building dependency tree...
> Reading state information...
> The following additional packages will be installed:
>   libio-pty-perl libipc-run-perl
> Suggested packages:
>   libtime-duration-perl libtimedate-perl
> The following NEW packages will be installed:
>   libio-pty-perl libipc-run-perl moreutils
> 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
> Need to get 177 kB of archives.
> After this operation, 573 kB of additional disk space will be used.
> Err:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libio-pty-perl amd64 
> 1:1.08-1.1build1
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> Err:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libipc-run-perl all 
> 0.94-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> Err:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 moreutils amd64 
> 0.57-1
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libio-pty-perl/libio-pty-perl_1.08-1.1build1_amd64.deb
>   Could not connect to archive.ubuntu.com:80 (91.189.88.152), connection 
> timed out [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/main/libi/libipc-run-perl/libipc-run-perl_0.94-1_all.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/pool/universe/m/moreutils/moreutils_0.57-1_amd64.deb
>   Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.152 80]
> E: Unable to fetch some archives, maybe run apt-get update or try with 
> --fix-missing?
> Running command './tools/ci/test_controller.sh kafka/gelly' with a timeout of 
> 234 minutes.
> ./tools/azure-pipelines/uploading_watchdog.sh: line 76: ts: command not found
> The STDIO streams did not close within 10 seconds of the exit event from 
> process '/bin/bash'. This may indicate a child process inherited the STDIO 
> streams and has not yet exited.
> ##[error]Bash exited with code '141'.
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28064=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=e424005a-b16e-540f-196d-da062cc19bdf=13





[GitHub] [flink] flinkbot edited a comment on pull request #17873: [FLINK-25009][CLI] Output slotSharingGroup as part of JsonGraph

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17873:
URL: https://github.com/apache/flink/pull/17873#issuecomment-975923151


   
   ## CI report:
   
   * 7a5e46abcb427fbec0eb93842085f8d4d45bc561 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28587)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-11673) add example for streaming operators's broadcast

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11673:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> add example for streaming operators's broadcast
> ---
>
> Key: FLINK-11673
> URL: https://issues.apache.org/jira/browse/FLINK-11673
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shengjk1
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> add an example for streaming operators' broadcast in code





[jira] [Updated] (FLINK-6363) Document precedence rules of Kryo serializer registrations

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6363:
--
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Document precedence rules of Kryo serializer registrations
> --
>
> Key: FLINK-6363
> URL: https://issues.apache.org/jira/browse/FLINK-6363
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System, Documentation
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> Currently, there is no documentation / Javadoc mentioning the precedence 
> rules of Kryo registrations via the register methods in 
> {{StreamExecutionEnvironment}} / {{ExecutionEnvironment}}.
> It is important for the user to be notified of the precedence because the 
> {{KryoSerializer}} applies the configurations in a specific order that is not 
> visible from the public API.
> For example:
> {code}
> env.addDefaultKryoSerializer(SomeClass.class, SerializerA.class);
> env.addDefaultKryoSerializer(SomeClass.class, new SerializerB());
> {code}
> from this API usage, it may seem as if {{SerializerA}} will be used as the 
> default serializer for {{SomeClass}} (or the other way around, depends really 
> on how the user perceives this).
> However, whatever the called order in this example, {{SerializerB}} will 
> always be used because in the case of defining default serializers, due to 
> the ordering that the internal {{KryoSerializer}} applies these 
> configurations, defining default serializer by instance has a higher 
> precedence than defining by class. Since the existence of this precedence is 
> not due to Kryo's behaviour, but due to the applied ordering in 
> {{KryoSerializer}}, users that are familiar with Kryo will be surprised by 
> the unexpected results.
> These methods are also subject to the same issue:
> {code}
> env.registerType(SomeClass.class, SerializerA.class);
> env.registerTypeWithKryoSerializer(SomeClass.class, SerializerA.class);
> env.registerTypeWithKryoSerializer(SomeClass.class, new SerializerB());
> {code}



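[Editorial note] The precedence described in FLINK-6363 above can be modeled in plain Java. This is a hedged sketch of the *behavior* only, not Flink's actual `KryoSerializer` code: it assumes, as the issue states, that instance registrations are applied after class registrations and therefore win for the same target type.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch (NOT Flink's implementation) of the precedence described
// in FLINK-6363: default-serializer registrations made with a serializer
// *instance* are applied after those made with a serializer *class*, so for
// the same target type the instance registration wins, regardless of which
// env.addDefaultKryoSerializer(...) call came first.
public class KryoPrecedenceSketch {

    static String resolve(Map<String, String> byClass,
                          Map<String, String> byInstance,
                          String targetType) {
        // Instance registrations shadow class registrations for the same type.
        return byInstance.getOrDefault(targetType, byClass.get(targetType));
    }

    public static void main(String[] args) {
        Map<String, String> byClass = new HashMap<>();
        Map<String, String> byInstance = new HashMap<>();
        // addDefaultKryoSerializer(SomeClass.class, SerializerA.class)
        byClass.put("SomeClass", "SerializerA");
        // addDefaultKryoSerializer(SomeClass.class, new SerializerB())
        byInstance.put("SomeClass", "SerializerB");
        // SerializerB wins even though SerializerA was registered first.
        System.out.println(resolve(byClass, byInstance, "SomeClass"));
    }
}
```

Swapping the two `put` calls does not change the result, which is exactly the surprise the issue asks to document.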


[jira] [Updated] (FLINK-11639) Provide readSequenceFile for Hadoop new API

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11639:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Provide readSequenceFile for Hadoop new API
> ---
>
> Key: FLINK-11639
> URL: https://issues.apache.org/jira/browse/FLINK-11639
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently {{HadoopInputs}} only provides a {{readSequenceFile}} for 
> {{org.apache.hadoop.mapred.SequenceFileInputFormat}}; it would be better to 
> provide another {{readSequenceFile}} for 
> {{org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat}}.





[jira] [Updated] (FLINK-6713) Document how to allow multiple Kafka consumers / producers to authenticate using different credentials

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6713:
--
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Document how to allow multiple Kafka consumers / producers to authenticate 
> using different credentials
> --
>
> Key: FLINK-6713
> URL: https://issues.apache.org/jira/browse/FLINK-6713
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Documentation
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> The doc improvements should include:
> 1. Clearly state that the built-in JAAS security module in Flink is a JVM 
> process-wide static JAAS file installation (as all static JAAS files are; 
> this is not Flink-specific), and that therefore all Kafka consumers and 
> producers in a single JVM (and hence the whole job, since we do not allow 
> assigning operators to specific slots) can only authenticate as one single 
> user.
> 2. If Kerberos authentication is used: self-ship multiple keytab files, and 
> use Kafka's dynamic JAAS configuration through client properties to point to 
> separate keytabs for each consumer / producer. Note that ticket cache would 
> never work for multiple authentications.
> 3. If plain simple login is used: Kafka's dynamic JAAS configuration should 
> be used (and is the only way to do so).



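[Editorial note] The "dynamic JAAS configuration through client properties" mentioned in points 2 and 3 of FLINK-6713 above refers to Kafka's `sasl.jaas.config` client property, which scopes credentials to one consumer or producer instead of the JVM-wide static JAAS file. A hedged sketch; the keytab paths, principals, and the helper name are placeholders, and a real job would pass these `Properties` to its Kafka consumer/producer:

```java
import java.util.Properties;

// Sketch of per-client Kerberos credentials via Kafka's dynamic JAAS
// configuration (sasl.jaas.config). Because the JAAS text lives in the
// client properties rather than a process-wide JAAS file, two clients in
// the same JVM can authenticate with different keytabs.
public class PerClientJaasSketch {

    static Properties kerberosClientProps(String keytab, String principal) {
        Properties props = new Properties();
        props.setProperty("security.protocol", "SASL_PLAINTEXT");
        props.setProperty("sasl.mechanism", "GSSAPI");
        props.setProperty("sasl.kerberos.service.name", "kafka");
        // Per-client JAAS config, scoped to this one consumer/producer.
        props.setProperty("sasl.jaas.config",
                "com.sun.security.auth.module.Krb5LoginModule required "
                        + "useKeyTab=true storeKey=true "
                        + "keyTab=\"" + keytab + "\" "
                        + "principal=\"" + principal + "\";");
        return props;
    }

    public static void main(String[] args) {
        Properties a = kerberosClientProps("/path/user-a.keytab", "user-a@EXAMPLE.COM");
        Properties b = kerberosClientProps("/path/user-b.keytab", "user-b@EXAMPLE.COM");
        // Two distinct credential sets in one JVM, one per client.
        System.out.println(a.getProperty("sasl.jaas.config"));
        System.out.println(b.getProperty("sasl.jaas.config"));
    }
}
```

As the issue notes, a Kerberos ticket cache cannot provide this, since the cache holds a single identity per process.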


[jira] [Updated] (FLINK-6209) StreamPlanEnvironment always has a parallelism of 1

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6209:
--
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> StreamPlanEnvironment always has a parallelism of 1
> ---
>
> Key: FLINK-6209
> URL: https://issues.apache.org/jira/browse/FLINK-6209
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.2.1
>Reporter: Haohui Mai
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> Thanks [~bill.liu8904] for triaging the issue.
> After FLINK-5808 we saw that Flink jobs uploaded through the UI always have 
> a parallelism of 1, even when the parallelism is explicitly set in the UI.





[jira] [Updated] (FLINK-6224) RemoteStreamEnvironment not resolve ip of JobManager to hostname

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6224:
--
Labels: auto-deprioritized-major auto-unassigned patch stale-minor  (was: 
auto-deprioritized-major auto-unassigned patch)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issues has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> RemoteStreamEnvironment not resolve ip of JobManager to hostname
> 
>
> Key: FLINK-6224
> URL: https://issues.apache.org/jira/browse/FLINK-6224
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Command Line Client
>Affects Versions: 1.2.0
>Reporter: CanBin Zheng
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, patch, 
> stale-minor
>
> I ran two examples from the same client.
> The first uses
> ExecutionEnvironment.createRemoteEnvironment("10.75.203.170", 59551),
> the second uses
> StreamExecutionEnvironment.createRemoteEnvironment("10.75.203.170", 59551).
> The first one runs successfully, but the second fails (the connection to the 
> JobManager times out); for the second one, if I change the host parameter 
> from the IP to the hostname, it works.
> I checked the source code and found that 
> ExecutionEnvironment.createRemoteEnvironment resolves the given address, 
> i.e. it looks up the hostname for the given IP. In contrast, 
> StreamExecutionEnvironment.createRemoteEnvironment does not.
> As Till Rohrmann mentioned, the problem is that with FLINK-2821 [1] we can 
> no longer resolve the hostname on the JobManager, so we should resolve the 
> hostname for the given IP in RemoteStreamEnvironment too.





[jira] [Updated] (FLINK-6343) Migrate from discouraged Java serialization for all sources / sinks

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6343:
--
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Migrate from discouraged Java serialization for all sources / sinks
> ---
>
> Key: FLINK-6343
> URL: https://issues.apache.org/jira/browse/FLINK-6343
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> With FLINK-6324, the Java serialization shortcut for operator state is now 
> deprecated to discourage its usage.
> These sources / sinks are still using this shortcut, and should be migrated 
> to use {{getListState(descriptor)}} instead:
> - {{BucketingSink}}
> - {{ContinuousFileReaderOperator}}
> - {{GenericWriteAheadSink}}
> - {{MessageAcknowledgingSourceBase}}
> - {{RollingSink}}
> - {{FlinkKafkaConsumerBase}} (will be fixed along with the state migration 
> included with FLINK-4022)
> To ease review, I propose to open up subtasks under this JIRA and separate 
> PRs for the migration of each single source / sink.





[jira] [Updated] (FLINK-6370) FileAlreadyExistsException on startup

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6370:
--
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> FileAlreadyExistsException on startup
> -
>
> Key: FLINK-6370
> URL: https://issues.apache.org/jira/browse/FLINK-6370
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.2.0
>Reporter: Andrey
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> Currently static web resources are lazily cached onto disk during first 
> request. However if 2 concurrent requests will be executed, then 
> FileAlreadyExistsException will be in logs.
> {code}
> 2017-04-24 14:00:58,075 ERROR 
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler  - error 
> while responding [nioEventLoopGroup-3-2]
> java.nio.file.FileAlreadyExistsException: 
> /flink/web/flink-web-528f8cb8-dd60-433c-8f6c-df49ad0b79e0/index.html
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>   at 
> java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
>   at java.nio.file.Files.newOutputStream(Files.java:216)
>   at java.nio.file.Files.copy(Files.java:3016)
>   at 
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler.respondAsLeader(StaticFileServerHandler.java:238)
>   at 
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler.channelRead0(StaticFileServerHandler.java:197)
>   at 
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler.channelRead0(StaticFileServerHandler.java:99)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
> {code}
> Expected behavior:
> * extract all static resources at startup, in the main thread and before 
> opening the HTTP port.





[jira] [Updated] (FLINK-6309) Memory consumer weights should be calculated in job vertex level

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6309:
--
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Memory consumer weights should be calculated in job vertex level
> 
>
> Key: FLINK-6309
> URL: https://issues.apache.org/jira/browse/FLINK-6309
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet
>Reporter: Kurt Young
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> Currently, in {{PlanFinalizer}}, we travel all the job vertices to calculate 
> the consumer weights of the memory and then assign the weights for each job 
> vertex. In the case of a large job graph, e.g. with multiple joins, group 
> reduces, the value of consumer weights will be very high and the available 
> memory for each job vertex will be very low.
> I think it makes more sense to calculate the consumer weights of the memory 
> at the job vertex level (after chaining), in order to maximize the usage 
> ratio of the memory.





[jira] [Updated] (FLINK-6605) Allow users to specify a default name for processing time in Table API

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6605:
--
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Allow users to specify a default name for processing time in Table API
> --
>
> Key: FLINK-6605
> URL: https://issues.apache.org/jira/browse/FLINK-6605
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Haohui Mai
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> FLINK-5884 enables users to specify column names for both processing time and 
> event time. FLINK-6595 and FLINK-6584 break, as chained / nested queries will 
> no longer have an attribute of processing time / event time.
> This jira proposes to add a default name for the processing time in order to 
> unbreak FLINK-6595 and FLINK-6584.





[jira] [Updated] (FLINK-6477) The first time to click Taskmanager cannot get the actual data

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6477:
--
Labels: auto-deprioritized-major auto-unassigned pull-request-available 
stale-minor  (was: auto-deprioritized-major auto-unassigned 
pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> The first time to click Taskmanager cannot get the actual data
> --
>
> Key: FLINK-6477
> URL: https://issues.apache.org/jira/browse/FLINK-6477
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.2.0
>Reporter: zhihao chen
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, stale-minor
> Attachments: errDisplay.jpg
>
>
> The first click on Taskmanager in the Flink web UI shows less than the 
> actual data. When the parameter “jobmanager.web.refresh-interval” is set to 
> a larger value, e.g. 180, the correct data is not displayed until you 
> refresh the page manually or wait for the refresh timeout to elapse.





[jira] [Updated] (FLINK-11543) Type mismatch AssertionError in FilterJoinRule

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11543:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Type mismatch AssertionError in FilterJoinRule 
> ---
>
> Key: FLINK-11543
> URL: https://issues.apache.org/jira/browse/FLINK-11543
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.7.1
>Reporter: Timo Walther
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
> Attachments: Test.java
>
>
> The following problem is copied from the user mailing list:
> {code}
> Exception in thread "main" java.lang.AssertionError: mismatched type $5 
> TIMESTAMP(3)
> at 
> org.apache.calcite.rex.RexUtil$FixNullabilityShuttle.visitInputRef(RexUtil.java:2481)
> at 
> org.apache.calcite.rex.RexUtil$FixNullabilityShuttle.visitInputRef(RexUtil.java:2459)
> at org.apache.calcite.rex.RexInputRef.accept(RexInputRef.java:112)
> at org.apache.calcite.rex.RexShuttle.visitList(RexShuttle.java:151)
> at org.apache.calcite.rex.RexShuttle.visitCall(RexShuttle.java:100)
> at org.apache.calcite.rex.RexShuttle.visitCall(RexShuttle.java:34)
> at org.apache.calcite.rex.RexCall.accept(RexCall.java:107)
> at org.apache.calcite.rex.RexShuttle.apply(RexShuttle.java:279)
> at org.apache.calcite.rex.RexShuttle.mutate(RexShuttle.java:241)
> at org.apache.calcite.rex.RexShuttle.apply(RexShuttle.java:259)
> at org.apache.calcite.rex.RexUtil.fixUp(RexUtil.java:1605)
> at 
> org.apache.calcite.rel.rules.FilterJoinRule.perform(FilterJoinRule.java:230)
> at 
> org.apache.calcite.rel.rules.FilterJoinRule$FilterIntoJoinRule.onMatch(FilterJoinRule.java:344)
> at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212)
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:646)
> at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:339)
> at 
> org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:374)
> at 
> org.apache.flink.table.api.TableEnvironment.optimizeLogicalPlan(TableEnvironment.scala:292)
> at 
> org.apache.flink.table.api.StreamTableEnvironment.optimize(StreamTableEnvironment.scala:812)
> at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:860)
> at 
> org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:340)
> at 
> org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:272)
> at test.Test.main(Test.java:78) 
> {code}
> It sounds related to FLINK-10211. A runnable example is attached.
> See also: 
> https://lists.apache.org/thread.html/9a9a979f4344111baf053a51ebfa2f2a0ba31e4d5a70e633dbcae254@%3Cuser.flink.apache.org%3E





[jira] [Updated] (FLINK-6424) Add basic helper functions for map type

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6424:
--
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add basic helper functions for map type
> ---
>
> Key: FLINK-6424
> URL: https://issues.apache.org/jira/browse/FLINK-6424
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> FLINK-6377 introduced the map type for the Table & SQL API. We still need to 
> implement functions around this type:
> - the value constructor in SQL that constructs a map {{MAP ‘[’ key, value [, 
> key, value ]* ‘]’}}
> - the value constructur in Table API {{map(key, value,...)}} (syntax up for 
> discussion)
> - {{ELEMENT, CARDINALITY}} for SQL API
> - {{.at(), .cardinality(), and .element()}} for Table API in Scala & Java





[jira] [Updated] (FLINK-11616) Flink official document has an error

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11616:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Flink official document has an error
> 
>
> Key: FLINK-11616
> URL: https://issues.apache.org/jira/browse/FLINK-11616
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: xulinjie
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
> Attachments: wx20190214-214...@2x.png
>
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/tutorials/flink_on_windows.html]
> The mistake is in paragraph “Installing Flink from Git”.
> “The solution is to adjust the Cygwin settings to deal with the correct line 
> endings by following these three steps:”,
> The sequence of steps you wrote was "1, 2, 1". But I think you might want 
> to write "1, 2, 3".





[jira] [Updated] (FLINK-11619) Make ScheduleMode configurable via user code or configuration file

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11619:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Make ScheduleMode configurable via user code or configuration file 
> ---
>
> Key: FLINK-11619
> URL: https://issues.apache.org/jira/browse/FLINK-11619
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: yuqi
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> Currently, the schedule mode for stream jobs is always EAGER; see 
> StreamingJobGraphGenerator#createJobGraph:
> {code:java}
> // make sure that all vertices start immediately
>   jobGraph.setScheduleMode(ScheduleMode.EAGER);
> {code}
> On this point, we can make ScheduleMode configurable for users so as to 
> adapt to different environments. Users could set this option via 
> env.setScheduleMode() in code, or it could be made optional in the 
> configuration. Anyone's help and suggestions are welcome.





[jira] [Updated] (FLINK-11672) Add example for streaming operators's connect

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11672:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add example for streaming operators's connect  
> ---
>
> Key: FLINK-11672
> URL: https://issues.apache.org/jira/browse/FLINK-11672
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shengjk1
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add an example for streaming operators' connect, such as 
> \{{datastream1.connect(datastream2)}}, in code.





[jira] [Updated] (FLINK-6446) Various improvements to the Web Frontend

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6446:
--
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Various improvements to the Web Frontend
> 
>
> Key: FLINK-6446
> URL: https://issues.apache.org/jira/browse/FLINK-6446
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Stephan Ewen
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> This is the umbrella issue for various improvements to the web frontend.





[jira] [Updated] (FLINK-11698) Add readMapRedTextFile API for HadoopInputs

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11698:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add readMapRedTextFile API for HadoopInputs
> ---
>
> Key: FLINK-11698
> URL: https://issues.apache.org/jira/browse/FLINK-11698
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Considering that {{TextInputFormat}} is a very common {{InputFormat}}, I 
> think it's valuable to provide such a convenient API for users, just like 
> {{readMapRedSequenceFile}}.





[jira] [Updated] (FLINK-11620) table example add kafka

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11620:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> table example add kafka
> ---
>
> Key: FLINK-11620
> URL: https://issues.apache.org/jira/browse/FLINK-11620
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: lining
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>






[jira] [Updated] (FLINK-11697) Add readMapReduceTextFile API for HadoopInputs

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11697:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add readMapReduceTextFile API for HadoopInputs
> --
>
> Key: FLINK-11697
> URL: https://issues.apache.org/jira/browse/FLINK-11697
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Considering that {{TextInputFormat}} is a very common {{InputFormat}}, I 
> think it's valuable to provide such a convenient API for users, just like 
> {{readMapReduceSequenceFile}}.





[jira] [Updated] (FLINK-11695) Make sharedStateDir could create sub-directories to avoid MaxDirectoryItemsExceededException

2021-12-24 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11695:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Make sharedStateDir could create sub-directories to avoid 
> MaxDirectoryItemsExceededException
> 
>
> Key: FLINK-11695
> URL: https://issues.apache.org/jira/browse/FLINK-11695
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Yun Tang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> We have met this annoying problem many times, when the {{sharedStateDir}} 
> in the checkpoint path exceeds the directory item limit due to large 
> checkpoints:
> {code:java}
> org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException:
>  The directory item limit of xxx is exceeded: limit=1048576 items=1048576 
> {code}
> Currently, our solution is to let {{FsCheckpointStorage}} create sub-dirs 
> when calling {{resolveCheckpointStorageLocation}}. The default value for 
> the number of sub-dirs is zero, which keeps backward compatibility with the 
> current situation. The created sub-dirs are named with the integer values 
> in [{{0, num-of-sub-dirs}})
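A minimal, self-contained sketch of that idea (the class and method names below are hypothetical, not the real {{FsCheckpointStorage}} API): hash each shared-state file name into one of {{num-of-sub-dirs}} buckets, with zero sub-dirs keeping today's flat layout:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class SharedStateSharding {

    // Resolve a shared-state file either directly under sharedStateDir
    // (numSubDirs == 0, the backward-compatible flat layout) or under one
    // of numSubDirs sub-directories named "0" .. "numSubDirs - 1".
    static Path resolve(Path sharedStateDir, String fileName, int numSubDirs) {
        if (numSubDirs <= 0) {
            return sharedStateDir.resolve(fileName);
        }
        // floorMod keeps the bucket non-negative even for negative hash codes.
        int bucket = Math.floorMod(fileName.hashCode(), numSubDirs);
        return sharedStateDir.resolve(Integer.toString(bucket)).resolve(fileName);
    }

    public static void main(String[] args) {
        Path base = Paths.get("/checkpoints/shared");
        System.out.println(resolve(base, "state-file-1", 0));   // flat layout
        System.out.println(resolve(base, "state-file-1", 16));  // sharded layout
    }
}
```

Keeping each bucket well below HDFS's default directory item limit (1048576 above) is what avoids the MaxDirectoryItemsExceededException.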





[GitHub] [flink] flinkbot edited a comment on pull request #17873: [FLINK-25009][CLI] Output slotSharingGroup as part of JsonGraph

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17873:
URL: https://github.com/apache/flink/pull/17873#issuecomment-975923151


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 7a5e46abcb427fbec0eb93842085f8d4d45bc561 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28587)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xinbinhuang commented on pull request #17873: [FLINK-25009][CLI] Output slotSharingGroup as part of JsonGraph

2021-12-24 Thread GitBox


xinbinhuang commented on pull request #17873:
URL: https://github.com/apache/flink/pull/17873#issuecomment-1000935339


   @flinkbot run azure






[GitHub] [flink] flinkbot edited a comment on pull request #17873: [FLINK-25009][CLI] Output slotSharingGroup as part of JsonGraph

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17873:
URL: https://github.com/apache/flink/pull/17873#issuecomment-975923151


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 7a5e46abcb427fbec0eb93842085f8d4d45bc561 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Comment Edited] (FLINK-25423) Enable loading state backend via configuration in state processor api

2021-12-24 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465074#comment-17465074
 ] 

Seth Wiesman edited comment on FLINK-25423 at 12/24/21, 7:52 PM:
-

Hi Yuan, 

 

we can get this into 1.15 without any problem. Once FLINK-24912 is merged, it 
should be straightforward to implement.


was (Author: sjwiesman):
Hi Yuan, 

 

we can get this into 1.15 without any problem. Once  FLINK-24921 is merged it 
should be straightforward to implement 

> Enable loading state backend via configuration in state processor api
> -
>
> Key: FLINK-25423
> URL: https://issues.apache.org/jira/browse/FLINK-25423
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor, Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Seth Wiesman
>Priority: Major
> Fix For: 1.15.0
>
>
> Currently, the state processor API loads a savepoint via an explicitly 
> initialized state backend on the client side, similar to 
> {{StreamExecutionEnvironment#setStateBackend(stateBackend)}}:
> {code:java}
> Savepoint.load(bEnv, "hdfs://path/", new HashMapStateBackend());
> {code}
> As we all know, the stream environment also supports loading the state 
> backend via configuration, which provides the flexibility to load state 
> backends, especially customized ones. The state processor API could also 
> benefit from a similar ability.
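As a hedged illustration only (the class and key names below are made up for this sketch, not Flink's actual API), configuration-driven loading boils down to looking up a backend factory by the configured key instead of receiving an instance from the caller:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class BackendFromConfig {

    interface StateBackend {}
    static class HashMapStateBackend implements StateBackend {}
    static class CustomStateBackend implements StateBackend {}

    // Registry mapping the configured name to a backend factory; a real
    // implementation would discover factories via a service loader instead.
    static final Map<String, Supplier<StateBackend>> FACTORIES = new HashMap<>();
    static {
        FACTORIES.put("hashmap", HashMapStateBackend::new);
        FACTORIES.put("custom", CustomStateBackend::new);
    }

    static StateBackend fromConfig(Map<String, String> config) {
        // Fall back to a default backend when nothing is configured.
        String name = config.getOrDefault("state.backend", "hashmap");
        Supplier<StateBackend> factory = FACTORIES.get(name);
        if (factory == null) {
            throw new IllegalArgumentException("Unknown state backend: " + name);
        }
        return factory.get();
    }

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("state.backend", "custom");
        // prints CustomStateBackend
        System.out.println(fromConfig(config).getClass().getSimpleName());
    }
}
```

With such a lookup, a savepoint load call would only need the configuration, not a pre-built backend instance.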





[jira] [Commented] (FLINK-25423) Enable loading state backend via configuration in state processor api

2021-12-24 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17465074#comment-17465074
 ] 

Seth Wiesman commented on FLINK-25423:
--

Hi Yuan, 

 

we can get this into 1.15 without any problem. Once FLINK-24921 is merged, it 
should be straightforward to implement.

> Enable loading state backend via configuration in state processor api
> -
>
> Key: FLINK-25423
> URL: https://issues.apache.org/jira/browse/FLINK-25423
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor, Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Seth Wiesman
>Priority: Major
> Fix For: 1.15.0
>
>
> Currently, the state processor API loads a savepoint via an explicitly 
> initialized state backend on the client side, similar to 
> {{StreamExecutionEnvironment#setStateBackend(stateBackend)}}:
> {code:java}
> Savepoint.load(bEnv, "hdfs://path/", new HashMapStateBackend());
> {code}
> As we all know, the stream environment also supports loading the state 
> backend via configuration, which provides the flexibility to load state 
> backends, especially customized ones. The state processor API could also 
> benefit from a similar ability.





[GitHub] [flink] flinkbot edited a comment on pull request #18145: [FLINK-25368][connectors/kafka] Substitute KafkaConsumer with AdminClient when getting offsets

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18145:
URL: https://github.com/apache/flink/pull/18145#issuecomment-997131565


   
   ## CI report:
   
   * b5804c8b8810086275acfa3272aab1d2d981fd80 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28586)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18145: [FLINK-25368][connectors/kafka] Substitute KafkaConsumer with AdminClient when getting offsets

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18145:
URL: https://github.com/apache/flink/pull/18145#issuecomment-997131565


   
   ## CI report:
   
   * 6d14f13f549a87c5982e9b23de20718bd83e13b6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28559)
 
   * b5804c8b8810086275acfa3272aab1d2d981fd80 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28586)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18145: [FLINK-25368][connectors/kafka] Substitute KafkaConsumer with AdminClient when getting offsets

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18145:
URL: https://github.com/apache/flink/pull/18145#issuecomment-997131565


   
   ## CI report:
   
   * 6d14f13f549a87c5982e9b23de20718bd83e13b6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28559)
 
   * b5804c8b8810086275acfa3272aab1d2d981fd80 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-statefun-playground] igalshilman merged pull request #16: Upgrade playground 3.1.x to version to 3.1.1

2021-12-24 Thread GitBox


igalshilman merged pull request #16:
URL: https://github.com/apache/flink-statefun-playground/pull/16


   






[GitHub] [flink] flinkbot edited a comment on pull request #18184: [hotfix][quickstarts] Fix misprints in DataStreamJob.java and DataStreamJob.scala

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18184:
URL: https://github.com/apache/flink/pull/18184#issuecomment-42453


   
   ## CI report:
   
   * 118bdcb21779306dbc10125fff3882952d42c5b7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28582)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] shouweikun commented on pull request #18199: [FLINK-25183] Optimize changelog normalize for the managed table upse…

2021-12-24 Thread GitBox


shouweikun commented on pull request #18199:
URL: https://github.com/apache/flink/pull/18199#issuecomment-1000863930


   @flinkbot  run azure






[GitHub] [flink-ml] lindong28 commented on pull request #45: [hotfix] Add the NOTICE file

2021-12-24 Thread GitBox


lindong28 commented on pull request #45:
URL: https://github.com/apache/flink-ml/pull/45#issuecomment-1000852081


   @gaoyunhaii Could you help review this PR?






[GitHub] [flink-ml] lindong28 opened a new pull request #45: [hotfix] Add the NOTICE file

2021-12-24 Thread GitBox


lindong28 opened a new pull request #45:
URL: https://github.com/apache/flink-ml/pull/45


   ## What is the purpose of the change
   
   Add the NOTICE file.
   
   ## Brief change log
   
   Added the NOTICE file.
   
   ## Verifying this change
   
   N/A
   
   ## Does this pull request potentially affect one of the following parts:
   
   - Dependencies (does it add or upgrade a dependency): (no)
   - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
   
   ## Documentation
   
   - Does this pull request introduce a new feature? (no)
   - If yes, how is the feature documented? (N/A)
   






[GitHub] [flink] flinkbot edited a comment on pull request #18165: [FLINK-24904][docs] Updated docs to reflect new KDS Sink and deprecat…

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18165:
URL: https://github.com/apache/flink/pull/18165#issuecomment-998902737


   
   ## CI report:
   
   * 9eba4a4d417946aa5e076595aa78492f4bb190ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28585)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17907: [FLINK-24905][connectors/kinesis] Adding support for Kinesis DataStream sink as kinesis table connector sink.

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17907:
URL: https://github.com/apache/flink/pull/17907#issuecomment-979080179


   
   ## CI report:
   
   * 683c8e4796bb01ba4789a1edc1cb465fae8b0547 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28584)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18184: [hotfix][quickstarts] Fix misprints in DataStreamJob.java and DataStreamJob.scala

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18184:
URL: https://github.com/apache/flink/pull/18184#issuecomment-42453


   
   ## CI report:
   
   * 118bdcb21779306dbc10125fff3882952d42c5b7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28582)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] snuyanzin commented on pull request #18116: [hotfix][docs] Fix type name for INTERVAL in Data Type Extraction section of types.md

2021-12-24 Thread GitBox


snuyanzin commented on pull request #18116:
URL: https://github.com/apache/flink/pull/18116#issuecomment-1000842007


   Thanks for your feedback.
   I improved the title, description, and commit message.
   Please let me know in case I missed anything.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] snuyanzin commented on pull request #18184: [hotfix][quickstarts] Fix misprints in DataStreamJob.java and DataStreamJob.scala

2021-12-24 Thread GitBox


snuyanzin commented on pull request #18184:
URL: https://github.com/apache/flink/pull/18184#issuecomment-1000841738


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18199: [FLINK-25183] Optimize changelog normalize for the managed table upse…

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18199:
URL: https://github.com/apache/flink/pull/18199#issuecomment-1000699581


   
   ## CI report:
   
   * 151689d6a8214edc38eae822b0f54c1b23922fe5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28576)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wsry commented on pull request #17814: [FLINK-24899][runtime] Enable data compression for blocking shuffle by default

2021-12-24 Thread GitBox


wsry commented on pull request #17814:
URL: https://github.com/apache/flink/pull/17814#issuecomment-1000828829


   Hi, thanks a lot for the PR and review. I will take a look after finishing 
this discussion: 
https://lists.apache.org/thread/pt2b1f17x2l5rlvggwxs6m265lo4ly7p.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18184: [hotfix][quickstarts] Fix misprints in DataStreamJob.java and DataStreamJob.scala

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18184:
URL: https://github.com/apache/flink/pull/18184#issuecomment-42453


   
   ## CI report:
   
   * 118bdcb21779306dbc10125fff3882952d42c5b7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28582)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18116: [hotfix][docs] Fix type name for INTERVAL in Data Type Extraction section of types.md

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18116:
URL: https://github.com/apache/flink/pull/18116#issuecomment-994626650


   
   ## CI report:
   
   * 3de2d7d7e66b4e94827c35c29f18be8e838603aa Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28581)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on pull request #17814: [FLINK-24899][runtime] Enable data compression for blocking shuffle by default

2021-12-24 Thread GitBox


SteNicholas commented on pull request #17814:
URL: https://github.com/apache/flink/pull/17814#issuecomment-1000826210


   @hililiwei, thanks for your detailed review; I have addressed the above 
comment. IMO, there is no boolean-typed option defined in 
`NettyShuffleEnvironmentOptions`, so I have not added such a definition.
   @wsry, could you please take a look at the changes?
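For readers unfamiliar with how such options are declared: Flink configuration knobs are typed `ConfigOption` constants. Below is a minimal, self-contained sketch of that pattern — the `ConfigOption` class here is an illustrative stand-in, not Flink's real `org.apache.flink.configuration.ConfigOptions` builder, and the option key is an assumption:

```java
import java.util.Map;

// Illustrative stand-in for Flink's typed ConfigOption pattern; the real
// builder lives in org.apache.flink.configuration.ConfigOptions.
final class ConfigOption<T> {
    final String key;
    final T defaultValue;

    ConfigOption(String key, T defaultValue) {
        this.key = key;
        this.defaultValue = defaultValue;
    }

    // Read the option from a raw config map, falling back to the default.
    T from(Map<String, T> config) {
        return config.getOrDefault(key, defaultValue);
    }
}

class ShuffleOptions {
    // Hypothetical boolean-typed option enabling compression by default,
    // mirroring what FLINK-24899 proposes; the key name is an assumption.
    static final ConfigOption<Boolean> BLOCKING_SHUFFLE_COMPRESSION_ENABLED =
            new ConfigOption<>(
                    "taskmanager.network.blocking-shuffle.compression.enabled", true);
}
```

In actual Flink code the constant would be declared roughly as `ConfigOptions.key(...).booleanType().defaultValue(true)` inside `NettyShuffleEnvironmentOptions`.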


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas removed a comment on pull request #17814: [FLINK-24899][runtime] Enable data compression for blocking shuffle by default

2021-12-24 Thread GitBox


SteNicholas removed a comment on pull request #17814:
URL: https://github.com/apache/flink/pull/17814#issuecomment-985186694


   @hililiwei, thanks for your detailed review; I have addressed the above 
comment. IMO, there is no boolean-typed option defined in 
`NettyShuffleEnvironmentOptions`, so I have not added such a definition.
   @wsry, could you please take a look at the changes?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas edited a comment on pull request #17814: [FLINK-24899][runtime] Enable data compression for blocking shuffle by default

2021-12-24 Thread GitBox


SteNicholas edited a comment on pull request #17814:
URL: https://github.com/apache/flink/pull/17814#issuecomment-1000826210


   @hililiwei, thanks for your detailed review; I have addressed the above 
comment. IMO, there is no boolean-typed option defined in 
`NettyShuffleEnvironmentOptions`, so I have not added such a definition.
   @wsry, could you please take a look at the changes?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas edited a comment on pull request #17814: [FLINK-24899][runtime] Enable data compression for blocking shuffle by default

2021-12-24 Thread GitBox


SteNicholas edited a comment on pull request #17814:
URL: https://github.com/apache/flink/pull/17814#issuecomment-985186694


   @hililiwei, thanks for your detailed review; I have addressed the above 
comment. IMO, there is no boolean-typed option defined in 
`NettyShuffleEnvironmentOptions`, so I have not added such a definition.
   @wsry, could you please take a look at the changes?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18201: [FLINK-25167][API / DataStream]Expose StreamOperatorFactory in Connec…

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18201:
URL: https://github.com/apache/flink/pull/18201#issuecomment-1000767373


   
   ## CI report:
   
   * b106d3f7ba04cd518efbb4552de331e80ff9be57 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28580)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18184: [hotfix][quickstarts] Fix misprints in DataStreamJob.java and DataStreamJob.scala

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18184:
URL: https://github.com/apache/flink/pull/18184#issuecomment-42453


   
   ## CI report:
   
   * 1b62110d5f63f65bbd806793c57cb0e470c1b020 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28579)
 
   * 118bdcb21779306dbc10125fff3882952d42c5b7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28582)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouweikun commented on pull request #18114: [FLINK-25173][table][hive] Introduce CatalogLock and implement HiveCatalogLock

2021-12-24 Thread GitBox


shouweikun commented on pull request #18114:
URL: https://github.com/apache/flink/pull/18114#issuecomment-1000806049


   Hi @JingsongLi, I still have some questions. Would you mind explaining it a 
little bit more? thx~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouweikun commented on a change in pull request #18114: [FLINK-25173][table][hive] Introduce CatalogLock and implement HiveCatalogLock

2021-12-24 Thread GitBox


shouweikun commented on a change in pull request #18114:
URL: https://github.com/apache/flink/pull/18114#discussion_r774987762



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogLock.java
##
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog;
+
+import org.apache.flink.annotation.Internal;
+
+import java.io.Closeable;
+import java.io.Serializable;
+import java.util.concurrent.Callable;
+
+/**
+ * An interface that allows source and sink to use global lock to some 
transaction-related things.
+ */
+@Internal
+public interface CatalogLock extends Closeable {
+
+    /** Run with catalog lock. The caller should tell catalog the database and table name. */
+    <T> T runWithLock(String database, String table, Callable<T> callable) throws Exception;

Review comment:
   I have a question about this. Shall we also add `String catalogName` as 
a parameter? Maybe databaseName and tableName alone cannot avoid clashes, e.g. 
catalog_a.db.tb vs catalog_b.db.tb.
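To make the clash concern concrete, here is a minimal, hypothetical in-memory lock — not the PR's `HiveCatalogLock` — whose lock key includes the catalog name, so `catalog_a.db.tb` and `catalog_b.db.tb` map to different locks. (`Supplier` replaces `Callable` here only to keep the sketch free of checked exceptions.)

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Hypothetical sketch, not the PR's implementation: an in-memory lock whose
// key is catalog-qualified, so identically named tables in different catalogs
// never contend for the same lock.
class InMemoryCatalogLock {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    <T> T runWithLock(String catalog, String database, String table, Supplier<T> action) {
        // Fully qualified key: catalog_a.db.tb and catalog_b.db.tb differ.
        String key = catalog + "." + database + "." + table;
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            return action.get();
        } finally {
            lock.unlock();
        }
    }
}
```

With only `database` and `table` in the key, the two fully qualified tables above would collide on the same lock entry.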




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18165: [FLINK-24904][docs] Updated docs to reflect new KDS Sink and deprecat…

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18165:
URL: https://github.com/apache/flink/pull/18165#issuecomment-998902737


   
   ## CI report:
   
   * 2d3f90db1450f2b91f5730cbf4ca4f7edff04fa4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28578)
 
   * 9eba4a4d417946aa5e076595aa78492f4bb190ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28585)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouweikun commented on a change in pull request #18114: [FLINK-25173][table][hive] Introduce CatalogLock and implement HiveCatalogLock

2021-12-24 Thread GitBox


shouweikun commented on a change in pull request #18114:
URL: https://github.com/apache/flink/pull/18114#discussion_r774988544



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveDynamicTableFactory.java
##
@@ -70,13 +72,18 @@ public DynamicTableSink createDynamicTableSink(Context 
context) {
 
 // we don't support temporary hive tables yet
 if (!isHiveTable || context.isTemporary()) {
-return FactoryUtil.createDynamicTableSink(
-null,
-context.getObjectIdentifier(),
-context.getCatalogTable(),
-context.getConfiguration(),
-context.getClassLoader(),
-context.isTemporary());
+DynamicTableSink sink =
+FactoryUtil.createDynamicTableSink(
+null,
+context.getObjectIdentifier(),
+context.getCatalogTable(),
+context.getConfiguration(),
+context.getClassLoader(),
+context.isTemporary());
+if (sink instanceof RequireCatalogLock) {
+((RequireCatalogLock) 
sink).setLockFactory(HiveCatalogLock.createFactory(hiveConf));

Review comment:
   For now we have to check and call `RequireCatalogLock#setLockFactory` 
manually in every `DynamicTableFactory` if necessary. Shall we lift this to 
`Planner`?
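The `instanceof`-and-inject pattern in the diff is a capability-interface check. Sketched with self-contained stand-in types (not Flink's real classes), the factory-side logic that the comment suggests centralizing looks like:

```java
// Self-contained stand-ins illustrating the capability-interface pattern:
// a sink that implements RequireCatalogLock gets a lock factory injected,
// any other sink passes through untouched.
interface CatalogLockFactory {}

interface RequireCatalogLock {
    void setLockFactory(CatalogLockFactory factory);
}

class PlainSink {}

class LockingSink extends PlainSink implements RequireCatalogLock {
    CatalogLockFactory factory;

    @Override
    public void setLockFactory(CatalogLockFactory factory) {
        this.factory = factory;
    }
}

class SinkConfigurer {
    // Performing this check once in a central place (e.g. the planner),
    // rather than in every DynamicTableFactory, is what the review proposes.
    static PlainSink inject(PlainSink sink, CatalogLockFactory factory) {
        if (sink instanceof RequireCatalogLock) {
            ((RequireCatalogLock) sink).setLockFactory(factory);
        }
        return sink;
    }
}
```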

##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogLock.java
##
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog;
+
+import org.apache.flink.annotation.Internal;
+
+import java.io.Closeable;
+import java.io.Serializable;
+import java.util.concurrent.Callable;
+
+/**
+ * An interface that allows source and sink to use global lock to some 
transaction-related things.
+ */
+@Internal
+public interface CatalogLock extends Closeable {
+
+    /** Run with catalog lock. The caller should tell catalog the database and table name. */
+    <T> T runWithLock(String database, String table, Callable<T> callable) throws Exception;

Review comment:
   I have a question about this. Shall we also add `String catalogName` as 
a parameter? Maybe databaseName and tableName alone cannot avoid clashes, e.g. 
catalog_a.db.tb vs catalog_b.db.tb.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18165: [FLINK-24904][docs] Updated docs to reflect new KDS Sink and deprecat…

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18165:
URL: https://github.com/apache/flink/pull/18165#issuecomment-998902737


   
   ## CI report:
   
   * 2d3f90db1450f2b91f5730cbf4ca4f7edff04fa4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28578)
 
   * 9eba4a4d417946aa5e076595aa78492f4bb190ea UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17907: [FLINK-24905][connectors/kinesis] Adding support for Kinesis DataStream sink as kinesis table connector sink.

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17907:
URL: https://github.com/apache/flink/pull/17907#issuecomment-979080179


   
   ## CI report:
   
   * 47adbd8e9553d1532f9ef8eb9a51c07e498a5899 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28541)
 
   * 683c8e4796bb01ba4789a1edc1cb465fae8b0547 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28584)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17907: [FLINK-24905][connectors/kinesis] Adding support for Kinesis DataStream sink as kinesis table connector sink.

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #17907:
URL: https://github.com/apache/flink/pull/17907#issuecomment-979080179


   
   ## CI report:
   
   * 47adbd8e9553d1532f9ef8eb9a51c07e498a5899 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28541)
 
   * 683c8e4796bb01ba4789a1edc1cb465fae8b0547 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18158: [FLINK-25398] Show complete stacktrace when requesting thread dump

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18158:
URL: https://github.com/apache/flink/pull/18158#issuecomment-998496692


   
   ## CI report:
   
   * c510d63b1d0b22542cef930d3d13a2fead70289d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28577)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] alpinegizmo commented on a change in pull request #18055: [docs] Tutorial: Write Your First Flink SQL program

2021-12-24 Thread GitBox


alpinegizmo commented on a change in pull request #18055:
URL: https://github.com/apache/flink/pull/18055#discussion_r774982745



##
File path: docs/content/docs/try-flink/write_flink_program_with_sql.md
##
@@ -0,0 +1,262 @@
+---
+title: 'Write your first Flink program with SQL'
+weight: 2 
+type: docs
+aliases:
+  - /try-flink/write_flink_program_with_sql.html
+---
+
+
+# Write your first Flink program with SQL
+
+## Introduction
+
+Flink features [multiple APIs]({{< ref "docs/concepts/overview" >}}) with 
different levels of abstraction that can be used to develop your streaming 
application. SQL is the highest level of abstraction and is supported by Flink 
as a relational API for batch and stream processing. This means that you can 
write the same queries on both unbounded real-time streams and bounded recorded 
streams and produce the same results. 
+
+SQL on Flink is based on [Apache Calcite](https://calcite.apache.org/) (which 
is based on standard SQL) and is commonly used to ease the process of 
implementing data analytics, data pipelining, and ETL applications.  It is a 
great entryway to writing your first Flink application and requires no Java or 
Python. 
+
+This tutorial will guide you through writing your first Flink program 
leveraging SQL alone. Through this exercise you will learn and understand the 
ease and speed with which you can analyze streaming data in Flink! 
+
+
+## Goals
+
+This tutorial will teach you how to:
+
+- use the Flink SQL client to submit queries 
+- consume a data source with Flink SQL
+- run a continuous query on a stream of data
+- use Flink SQL to write out results to persistent storage 
+
+
+## Prerequisites 
+
+You only need to have basic knowledge of SQL to follow along.
+
+
+## Step 1: Start the Flink SQL client 
+
+The [SQL Client]({{< ref "docs/dev/table/sqlClient" >}}) is bundled in the 
regular Flink distribution and can be run out-of-the-box. It requires only a 
running Flink cluster where table programs can be executed (since Flink SQL is 
a thin abstraction over the Table API). 
+
+There are many ways to set up Flink but you will run it locally for the 
purpose of this tutorial. [Download Flink]({{< ref 
"docs/try-flink/local_installation#downloading-flink" >}}) and [start a local 
cluster]({{< ref 
"docs/try-flink/local_installation#starting-and-stopping-a-local-cluster" >}}) 
with one worker (the TaskManager).  
+
+The scripts for the SQL client are located in the `/bin` directory of Flink. 
You can start the client by executing:
+
+```sh
+./bin/sql-client.sh
+```
+
+You should see something like this:
+
+{{< img src="/fig/try-flink/flink-sql.png" alt="Flink SQL client" >}}
+
+
+## Step 2: Set up a data source with flink-faker

Review comment:
   An alternative would be to switch over to using a file now, and rework 
the tutorial to be batch only until 1.15, after which it could be expanded to 
show both batch and streaming.
   
   But I'm still unsure about the logistics of using a file. Ideally such a 
file should be hosted somewhere under the control of the project -- so maybe in 
some github repo. However, having a large file in a github repo makes that repo 
unwieldy to work with. Would a small file be sufficient?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18133: [FLINK-24718][avro] update AVRO dependency to latest version

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18133:
URL: https://github.com/apache/flink/pull/18133#issuecomment-995786550


   
   ## CI report:
   
   * 635596e288832ab3df3434b7d6b153415f2711a0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28575)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Resolved] (FLINK-25437) Build wheels failed

2021-12-24 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo resolved FLINK-25437.
--
Fix Version/s: 1.12.8
   1.13.6
   Resolution: Fixed

Merged into release-1.13 via 5afe88a648cfde306b9cc14e681c6226bf629a3f
Merged into release-1.12 bd2b763408e5011acf0935dd4cb2d6b3deff23d1

> Build wheels failed
> ---
>
> Key: FLINK-25437
> URL: https://issues.apache.org/jira/browse/FLINK-25437
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.8, 1.13.6
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.8, 1.13.6
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=bf344275-d244-5694-d05a-7ad127794669



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] HuangXingBo closed pull request #18198: [FLINK-25437][python] Correct grpcio dependency version in dev-requirenment.txt

2021-12-24 Thread GitBox


HuangXingBo closed pull request #18198:
URL: https://github.com/apache/flink/pull/18198


   






[GitHub] [flink] HuangXingBo closed pull request #18197: [FLINK-25437][python] Correct grpcio dependency version in dev-requirenment.txt

2021-12-24 Thread GitBox


HuangXingBo closed pull request #18197:
URL: https://github.com/apache/flink/pull/18197


   






[GitHub] [flink] flinkbot edited a comment on pull request #18180: [hotfix][doc] fix typo of 'note that' and 'now that'

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18180:
URL: https://github.com/apache/flink/pull/18180#issuecomment-999581101


   
   ## CI report:
   
   * b3736ea54f18c499c3c4d8bbf24bbce17e02adbc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28574)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] HuangXingBo commented on pull request #18197: [FLINK-25437][python] Correct grpcio dependency version in dev-requirenment.txt

2021-12-24 Thread GitBox


HuangXingBo commented on pull request #18197:
URL: https://github.com/apache/flink/pull/18197#issuecomment-1000789125


   
https://dev.azure.com/hxbks2ks/FLINK-TEST/_build/results?buildId=1692=results






[GitHub] [flink] HuangXingBo removed a comment on pull request #18198: [FLINK-25437][python] Correct grpcio dependency version in dev-requirenment.txt

2021-12-24 Thread GitBox


HuangXingBo removed a comment on pull request #18198:
URL: https://github.com/apache/flink/pull/18198#issuecomment-1000788561


   
https://dev.azure.com/hxbks2ks/FLINK-TEST/_build/results?buildId=1692=results
 






[GitHub] [flink] HuangXingBo commented on pull request #18198: [FLINK-25437][python] Correct grpcio dependency version in dev-requirenment.txt

2021-12-24 Thread GitBox


HuangXingBo commented on pull request #18198:
URL: https://github.com/apache/flink/pull/18198#issuecomment-1000788561


   
https://dev.azure.com/hxbks2ks/FLINK-TEST/_build/results?buildId=1692=results
 






[GitHub] [flink] flinkbot edited a comment on pull request #18197: [FLINK-25437][python] Correct grpcio dependency version in dev-requirenment.txt

2021-12-24 Thread GitBox


flinkbot edited a comment on pull request #18197:
URL: https://github.com/apache/flink/pull/18197#issuecomment-1000691490


   
   ## CI report:
   
   * 1b53f876faefbbf99353632447af0f5e5099fc58 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28569)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





