[GitHub] [flink] curcur opened a new pull request #16474: [hotfix] Finalize SeqNo for Change Log

2021-07-12 Thread GitBox


curcur opened a new pull request #16474:
URL: https://github.com/apache/flink/pull/16474


   Without this PR, the current log will always start from seq # 1,
   while the `materilizedTo` is initialized from 0; the returned stream
   therefore actually runs from 0 -> the current position.
   
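The mismatch described above can be sketched in a few lines. This is a hypothetical illustration only; the class and method names below are invented for the sketch and are not the actual Flink changelog code.

```java
// Minimal sketch of the off-by-one: the log's first entry carries seq 1,
// while the materialized watermark starts at 0, so the replayed range
// begins one position before anything was ever logged.
public class SeqNoSketch {
    /** First sequence number the returned stream would cover. */
    static int replayFrom(int materializedTo) {
        return materializedTo; // stream is returned from materializedTo onward
    }

    public static void main(String[] args) {
        int firstLogSeqNo = 1;   // before the fix: log always starts at 1
        int materializedTo = 0;  // but materializedTo is initialized to 0
        // Replay starts at 0, one entry before the first logged seq number:
        System.out.println(replayFrom(materializedTo) == firstLogSeqNo - 1); // prints true
    }
}
```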


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16473: [FLINK-23365][filesystems]flink-azure-fs-hadoop compile error because of json-smart

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16473:
URL: https://github.com/apache/flink/pull/16473#issuecomment-878768290


   
   ## CI report:
   
   * ecc9389b111d5726f1cb4b4b532280864a4a5e75 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20355)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16465: [FLINK-22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16465:
URL: https://github.com/apache/flink/pull/16465#issuecomment-878076127


   
   ## CI report:
   
   * d42a23c9138222f06f65b879e2449da3b5f0102e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20329)
 
   * 952176efe045c811f8648d2e97ad5149eb462975 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20354)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16457: [hotfix][runtime] Log slot pool status if unable to fulfill job requirements

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16457:
URL: https://github.com/apache/flink/pull/16457#issuecomment-877915805


   
   ## CI report:
   
   * 61bd9db42a4990df50beca81555e1091eb880f2d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20351)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16184: [FLINK-21089] Skip the execution of the fully finished operators after recovery

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16184:
URL: https://github.com/apache/flink/pull/16184#issuecomment-863123325


   
   ## CI report:
   
   * 70da1a72321f3d0a6dea7cec29ed28404dd06882 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20306)
 
   * 9051008979b8c996ee3d8e88f6ae2e1da438f972 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #15322: [FLINK-21353][state] Add DFS-based StateChangelog (TM-owned state)

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #15322:
URL: https://github.com/apache/flink/pull/15322#issuecomment-804015738


   
   ## CI report:
   
   * 7f22bef11e25a9ec615c97dbb8f070c5f072c641 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20350)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-23367) testKeyGroupedInternalPriorityQueue does not dispose ChangelogDelegateEmbeddedRocksDB properly, and fails the test

2021-07-12 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-23367:
-
Summary: testKeyGroupedInternalPriorityQueue does not dispose 
ChangelogDelegateEmbeddedRocksDB properly, and fails the test  (was: 
testKeyGroupedInternalPriorityQueue does not dispose rocksdb properly, and 
fails the test)

> testKeyGroupedInternalPriorityQueue does not dispose 
> ChangelogDelegateEmbeddedRocksDB properly, and fails the test
> --
>
> Key: FLINK-23367
> URL: https://issues.apache.org/jira/browse/FLINK-23367
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Yuan Mei
>Priority: Major
>
> The set of `testKeyGroupedInternalPriorityQueue` tests for 
> `ChangelogDelegateEmbeddedRocksDBStateBackendTest` does not dispose RocksDB 
> properly. 
> It seems one more column family than needed is created in the 
> `ChangelogDelegateEmbeddedRocksDBStateBackendTest` case compared to 
> `EmbeddedRocksDBStateBackend`:
>  // ... continue with the ones created by Flink...
> for (RocksDbKvStateInfo kvStateInfo : 
> kvStateInformation.values()) {
> RocksDBOperationUtils.addColumnFamilyOptionsToCloseLater(
> columnFamilyOptions, kvStateInfo.columnFamilyHandle);
> IOUtils.closeQuietly(kvStateInfo.columnFamilyHandle);
> }
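The quoted cleanup loop is a close-quietly pass over all registered column family handles. A self-contained sketch of that pattern follows; `Handle` is a stand-in type, not the real RocksDB `ColumnFamilyHandle`, and the sketch is illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in sketch of the close-quietly cleanup pattern from the quoted
// snippet. One handle failing to close must not keep the rest open.
public class CleanupSketch {
    static class Handle implements AutoCloseable {
        boolean closed;
        @Override public void close() { closed = true; }
    }

    /** Closes every handle, swallowing exceptions so a single failure
     *  does not prevent the remaining handles from being closed
     *  (mirroring what IOUtils.closeQuietly does in the quoted code). */
    static void closeAllQuietly(List<Handle> handles) {
        for (Handle h : handles) {
            try {
                h.close();
            } catch (Exception ignored) {
                // best-effort cleanup; keep going
            }
        }
    }

    public static void main(String[] args) {
        List<Handle> handles = new ArrayList<>();
        handles.add(new Handle());
        handles.add(new Handle());
        closeAllQuietly(handles);
        System.out.println(handles.get(0).closed && handles.get(1).closed); // prints true
    }
}
```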



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23367) testKeyGroupedInternalPriorityQueue does not dispose rocksdb properly, and fails the test

2021-07-12 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-23367:
-
Parent: FLINK-21352
Issue Type: Sub-task  (was: Bug)

> testKeyGroupedInternalPriorityQueue does not dispose rocksdb properly, and 
> fails the test
> -
>
> Key: FLINK-23367
> URL: https://issues.apache.org/jira/browse/FLINK-23367
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Yuan Mei
>Priority: Major
>
> The set of `testKeyGroupedInternalPriorityQueue` tests for 
> `ChangelogDelegateEmbeddedRocksDBStateBackendTest` does not dispose RocksDB 
> properly. 
> It seems one more column family than needed is created in the 
> `ChangelogDelegateEmbeddedRocksDBStateBackendTest` case compared to 
> `EmbeddedRocksDBStateBackend`:
>  // ... continue with the ones created by Flink...
> for (RocksDbKvStateInfo kvStateInfo : 
> kvStateInformation.values()) {
> RocksDBOperationUtils.addColumnFamilyOptionsToCloseLater(
> columnFamilyOptions, kvStateInfo.columnFamilyHandle);
> IOUtils.closeQuietly(kvStateInfo.columnFamilyHandle);
> }





[jira] [Commented] (FLINK-11627) Translate the "JobManager High Availability (HA)" page into Chinese

2021-07-12 Thread Shen Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17379579#comment-17379579
 ] 

Shen Zhu commented on FLINK-11627:
--

Hey [~jark] , in the latest version, 
[https://github.com/apache/flink/blob/master/docs/ops/jobmanager_high_availability.md]
 has been replaced by 
[https://github.com/apache/flink/blob/master/docs/content/docs/deployment/ha/overview.md],
 and there's already a page for the Chinese translation: 
[https://github.com/apache/flink/blob/master/docs/content.zh/docs/deployment/ha/overview.md],
 so perhaps this ticket could be closed.

> Translate the "JobManager High Availability (HA)" page into Chinese
> ---
>
> Key: FLINK-11627
> URL: https://issues.apache.org/jira/browse/FLINK-11627
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Hui Zhao
>Assignee: Shen Zhu
>Priority: Major
>  Labels: auto-unassigned
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/ops/jobmanager_high_availability.html
> The markdown file is located in 
> https://github.com/apache/flink/blob/master/docs/ops/jobmanager_high_availability.md
> The markdown file will be created once FLINK-11529 is merged.
> You can reference the translation from : 
> https://github.com/flink-china/1.6.0/blob/master/ops/jobmanager_high_availability.md





[jira] [Updated] (FLINK-23367) testKeyGroupedInternalPriorityQueue does not dispose rocksdb properly, and fails the test

2021-07-12 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-23367:
-
Description: 
The set of `testKeyGroupedInternalPriorityQueue` tests for 
`ChangelogDelegateEmbeddedRocksDBStateBackendTest` does not dispose RocksDB 
properly. 

It seems one more column family than needed is created in the 
`ChangelogDelegateEmbeddedRocksDBStateBackendTest` case compared to 
`EmbeddedRocksDBStateBackend`:

 // ... continue with the ones created by Flink...
for (RocksDbKvStateInfo kvStateInfo : kvStateInformation.values()) {
RocksDBOperationUtils.addColumnFamilyOptionsToCloseLater(
columnFamilyOptions, kvStateInfo.columnFamilyHandle);
IOUtils.closeQuietly(kvStateInfo.columnFamilyHandle);
}

  was:The set of `testKeyGroupedInternalPriorityQueue` for 
`ChangelogDelegateEmbeddedRocksDBStateBackendTest` does not dispose rocksdb 
properly. 


> testKeyGroupedInternalPriorityQueue does not dispose rocksdb properly, and 
> fails the test
> -
>
> Key: FLINK-23367
> URL: https://issues.apache.org/jira/browse/FLINK-23367
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Yuan Mei
>Priority: Major
>
> The set of `testKeyGroupedInternalPriorityQueue` tests for 
> `ChangelogDelegateEmbeddedRocksDBStateBackendTest` does not dispose RocksDB 
> properly. 
> It seems one more column family than needed is created in the 
> `ChangelogDelegateEmbeddedRocksDBStateBackendTest` case compared to 
> `EmbeddedRocksDBStateBackend`:
>  // ... continue with the ones created by Flink...
> for (RocksDbKvStateInfo kvStateInfo : 
> kvStateInformation.values()) {
> RocksDBOperationUtils.addColumnFamilyOptionsToCloseLater(
> columnFamilyOptions, kvStateInfo.columnFamilyHandle);
> IOUtils.closeQuietly(kvStateInfo.columnFamilyHandle);
> }





[jira] [Created] (FLINK-23367) testKeyGroupedInternalPriorityQueue does not dispose rocksdb properly, and fails the test

2021-07-12 Thread Yuan Mei (Jira)
Yuan Mei created FLINK-23367:


 Summary: testKeyGroupedInternalPriorityQueue does not dispose 
rocksdb properly, and fails the test
 Key: FLINK-23367
 URL: https://issues.apache.org/jira/browse/FLINK-23367
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Reporter: Yuan Mei


The set of `testKeyGroupedInternalPriorityQueue` tests for 
`ChangelogDelegateEmbeddedRocksDBStateBackendTest` does not dispose RocksDB 
properly. 





[jira] [Updated] (FLINK-22910) Refine ShuffleMaster lifecycle management for pluggable shuffle service framework

2021-07-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-22910:
---
Labels: pull-request-available  (was: )

> Refine ShuffleMaster lifecycle management for pluggable shuffle service 
> framework
> -
>
> Key: FLINK-22910
> URL: https://issues.apache.org/jira/browse/FLINK-22910
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> The current _ShuffleMaster_ has an unclear lifecycle which is inconsistent 
> with the _ShuffleEnvironment_ on the _TM_ side. Besides, it is hard to 
> implement some important capabilities for a remote shuffle service, for 
> example: 1) releasing external resources when a job finishes; 2) stopping or 
> starting the tracking of some partitions depending on the status of the 
> external service or system.
> We drafted a document[1] which proposes some simple changes to solve these 
> issues. The document is not fully complete yet. We will start a discussion 
> once it is finished.
>  
> [1] 
> https://docs.google.com/document/d/1_cHoapNbx_fJ7ZNraSqw4ZK1hMRiWWJDITuSZrdMDDs/edit?usp=sharing





[GitHub] [flink] flinkbot commented on pull request #16473: [FLINK-23365][filesystems]flink-azure-fs-hadoop compile error because of json-smart

2021-07-12 Thread GitBox


flinkbot commented on pull request #16473:
URL: https://github.com/apache/flink/pull/16473#issuecomment-878768290


   
   ## CI report:
   
   * ecc9389b111d5726f1cb4b4b532280864a4a5e75 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16465: [FLINK-22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16465:
URL: https://github.com/apache/flink/pull/16465#issuecomment-878076127


   
   ## CI report:
   
   * d42a23c9138222f06f65b879e2449da3b5f0102e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20329)
 
   * 952176efe045c811f8648d2e97ad5149eb462975 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] gaoyunhaii commented on pull request #16184: [FLINK-21089] Skip the execution of the fully finished operators after recovery

2021-07-12 Thread GitBox


gaoyunhaii commented on pull request #16184:
URL: https://github.com/apache/flink/pull/16184#issuecomment-878766517


   Many thanks @dawidwys for the review! I updated the PR according to the 
current comments~






[GitHub] [flink] gaoyunhaii commented on a change in pull request #16184: [FLINK-21089] Skip the execution of the fully finished operators after recovery

2021-07-12 Thread GitBox


gaoyunhaii commented on a change in pull request #16184:
URL: https://github.com/apache/flink/pull/16184#discussion_r668415862



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTask.java
##
@@ -265,6 +265,14 @@ protected void declineCheckpoint(long checkpointId) {
 
 @Override
 public void run() {
+if (operatorChain.isFinishedOnRestore()) {

Review comment:
   I also think it would be better to directly skip the startup of the source 
thread. With this change the callback of `CompletionFuture` would be executed 
in the main thread, but it should also be correct. I changed it to not start 
the thread~ 








[jira] [Updated] (FLINK-23361) when cleanup ZooKeeper Paths, high-availability.zookeeper.path.root doesn't work

2021-07-12 Thread shouzuo meng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shouzuo meng updated FLINK-23361:
-
Description: 
When I use "high-availability.zookeeper.path.root" and execute 
tryDeleteEmptyParentZNodes, it doesn't work. It is possible that there is a 
problem in 
org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServices#isRootPath

 
{code:java}
private void tryDeleteEmptyParentZNodes() throws Exception {
    // try to delete the parent znodes if they are empty
    String remainingPath =
            getParentPath(getNormalizedPath(client.getNamespace()));
    final CuratorFramework nonNamespaceClient = client.usingNamespace(null);

    while (!isRootPath(remainingPath)) {
        try {
            nonNamespaceClient.delete().forPath(remainingPath);
        } catch (KeeperException.NotEmptyException ignored) {
            // We can only delete empty znodes
            break;
        }
        remainingPath = getParentPath(remainingPath);
    }
}
{code}
 
{code:java}
private static boolean isRootPath(String remainingPath) {
    return ZKPaths.PATH_SEPARATOR.equals(remainingPath);
}
{code}
When I use "high-availability.zookeeper.path.root", remainingPath should 
equal the root path that I specified.
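One possible direction, sketched here as a standalone method: treat the configured HA root as a terminating path in addition to "/". This is a hypothetical sketch of the reporter's suggestion, not a committed fix, and it is not the actual ZooKeeperHaServices code.

```java
// Hypothetical sketch: stop deleting parent znodes once the configured
// HA root is reached, not only when the ZooKeeper root "/" is reached.
public class RootPathSketch {
    static boolean isRootPath(String remainingPath, String configuredRoot) {
        return "/".equals(remainingPath) || remainingPath.equals(configuredRoot);
    }

    public static void main(String[] args) {
        // With high-availability.zookeeper.path.root = /flink:
        System.out.println(isRootPath("/flink", "/flink"));         // prints true
        System.out.println(isRootPath("/flink/cluster", "/flink")); // prints false
        System.out.println(isRootPath("/", "/flink"));              // prints true
    }
}
```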

  was:
When  I use "high-availability.zookeeper.path.root",

but execute HA, it doesn't work,It is possible that there is a problem in 
org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServices#isRootPath
{code:java}
private static boolean isRootPath(String remainingPath) {
     return ZKPaths.PATH_SEPARATOR.equals(remainingPath);
 }{code}
when i use "high-availability.zookeeper.path.root" , remainingPath should 
equals the root path that I specified

Summary: when cleanup ZooKeeper Paths, 
high-availability.zookeeper.path.root doesn't work  (was: After ZooKeeper sets 
authority, HA does not work)

> when cleanup ZooKeeper Paths, high-availability.zookeeper.path.root doesn't 
> work
> 
>
> Key: FLINK-23361
> URL: https://issues.apache.org/jira/browse/FLINK-23361
> Project: Flink
>  Issue Type: Bug
>  Components: flink-contrib
>Affects Versions: 1.13.1
> Environment: Flink 1.13.1
>Reporter: shouzuo meng
>Priority: Major
>  Labels: security
> Fix For: 1.13.1
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> When I use "high-availability.zookeeper.path.root" and execute 
> tryDeleteEmptyParentZNodes, it doesn't work. It is possible that there is a 
> problem in 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServices#isRootPath
>  
> {code:java}
> private void tryDeleteEmptyParentZNodes() throws Exception {
>     // try to delete the parent znodes if they are empty
>     String remainingPath =
>             getParentPath(getNormalizedPath(client.getNamespace()));
>     final CuratorFramework nonNamespaceClient = client.usingNamespace(null);
>
>     while (!isRootPath(remainingPath)) {
>         try {
>             nonNamespaceClient.delete().forPath(remainingPath);
>         } catch (KeeperException.NotEmptyException ignored) {
>             // We can only delete empty znodes
>             break;
>         }
>         remainingPath = getParentPath(remainingPath);
>     }
> }
> {code}
>  
> {code:java}
> private static boolean isRootPath(String remainingPath) {
>     return ZKPaths.PATH_SEPARATOR.equals(remainingPath);
> }
> {code}
> When I use "high-availability.zookeeper.path.root", remainingPath should 
> equal the root path that I specified.





[jira] [Closed] (FLINK-23366) AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor fails on azure

2021-07-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-23366.

Resolution: Duplicate

Closing as a duplicate of FLINK-23318.

> AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor 
> fails on azure
> -
>
> Key: FLINK-23366
> URL: https://issues.apache.org/jira/browse/FLINK-23366
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20343&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0&l=7388
> {code:java}
> Jul 12 18:40:57 [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 0.208 s <<< FAILURE! - in 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest
> Jul 12 18:40:57 [ERROR] 
> testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest)
>   Time elapsed: 0.013 s  <<< FAILURE!
> Jul 12 18:40:57 java.lang.AssertionError: 
> Jul 12 18:40:57 
> Jul 12 18:40:57 Expected: is 
> Jul 12 18:40:57  but: was 
> Jul 12 18:40:57   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Jul 12 18:40:57   at org.junit.Assert.assertThat(Assert.java:964)
> Jul 12 18:40:57   at org.junit.Assert.assertThat(Assert.java:930)
> Jul 12 18:40:57   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest.testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(AkkaRpcActorTest.java:375)
> Jul 12 18:40:57   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 12 18:40:57   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 12 18:40:57   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 12 18:40:57   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 12 18:40:57   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 12 18:40:57   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 12 18:40:57   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 12 18:40:57   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Jul 12 

[GitHub] [flink-statefun] evans-ye commented on pull request #242: [FLINK-23126] Refactor smoke-e2e into smoke-e2e-common and smoke-e2e-embedded

2021-07-12 Thread GitBox


evans-ye commented on pull request #242:
URL: https://github.com/apache/flink-statefun/pull/242#issuecomment-878756030


   Hmmm. There's a build error:
   ```
   Error:  Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.1:testCompile 
(default-testCompile) on project statefun-smoke-e2e-embedded: Compilation 
failure
   Error:  
/home/runner/work/flink-statefun/flink-statefun/statefun-e2e-tests/statefun-smoke-e2e-embedded/src/test/java/org/apache/flink/statefun/e2e/smoke/CommandInterpreterTest.java:[49,24]
 org.apache.flink.statefun.e2e.smoke.CommandInterpreterTest.MockContext is not 
abstract and does not override abstract method 
cancelDelayedMessage(java.lang.String) in org.apache.flink.statefun.sdk.Context
   ```
   
   Let me try to fix it.
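The compile error above is the usual shape of failure when an SDK interface gains a method: every test double must override the new method as well. A generic sketch of the fix follows; the types here are invented for illustration and are not the real `org.apache.flink.statefun.sdk.Context`.

```java
// Generic sketch: when an interface gains a method (here,
// cancelDelayedMessage), mocks implementing it must override it too,
// or they become abstract and fail to compile.
public class MockOverrideSketch {
    interface Context {
        void send(String target, Object message);
        void cancelDelayedMessage(String token); // newly added method
    }

    static class MockContext implements Context {
        String lastCancelled;
        @Override public void send(String target, Object message) {}
        @Override public void cancelDelayedMessage(String token) {
            lastCancelled = token; // recorded so tests can assert on it
        }
    }

    public static void main(String[] args) {
        MockContext ctx = new MockContext();
        ctx.cancelDelayedMessage("token-1");
        System.out.println("token-1".equals(ctx.lastCancelled)); // prints true
    }
}
```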






[jira] [Comment Edited] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread longwang0616 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17379552#comment-17379552
 ] 

longwang0616 edited comment on FLINK-22969 at 7/13/21, 3:43 AM:


[~luoyuxia]

I found through debugging that when KafkaSourceFunction executes, it calls 
the getAllPartitionsForTopics method to get topic info, and in 
getAllPartitionsForTopics it calls kafkaConsumer.partitionsFor(topic); if the 
topic is "", it reports an 
org.apache.kafka.common.errors.InvalidTopicException: Topic '' is invalid 
exception.

I can't reproduce the IndexOutBoundException.
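The improvement this ticket asks for is an eager check at construction time. A minimal sketch of such a validation follows; this is a hypothetical helper, not the actual connector code, and the method name is invented.

```java
// Hypothetical sketch of the validation FLINK-22969 proposes: fail fast
// with a clear message when the topic is null or empty, instead of
// surfacing Kafka's InvalidTopicException later at runtime.
public class TopicValidationSketch {
    static String validateTopic(String topic) {
        if (topic == null || topic.trim().isEmpty()) {
            throw new IllegalArgumentException(
                    "Kafka topic must not be null or an empty string");
        }
        return topic;
    }

    public static void main(String[] args) {
        System.out.println(validateTopic("orders")); // prints orders
        try {
            validateTopic("");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected empty topic"); // prints rejected empty topic
        }
    }
}
```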

 

!image-2021-07-13-11-02-37-451.png!

 

!image-2021-07-13-11-03-30-740.png!

  !image-2021-07-13-11-39-20-010.png!

KafkaSinkFunction will also call producer.partitionsFor(topic) and report 
org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: []

!image-2021-07-13-11-15-47-392.png!

exception

!image-2021-07-13-11-20-22-134.png!

!image-2021-07-13-11-37-27-870.png!

 

 


was (Author: longwang0616):
[~luoyuxia]

I found through debug that when he executes KafkaSourceFunction, he will call 
the getAllPartitionsForTopics method to get topicinfo, and in 
getAllPartitionsForTopics, he will call kafkaConsumer.partitionsFor(topic); if 
topic is "", it will report 
org.apache.kafka.common.errors.InvalidTopicException: Topic '' is invalid 
exception, but the exception was not thrown in the idea's console, and the 
program ended directly。

Can't reproduce IndexOutBoundException

 

!image-2021-07-13-11-02-37-451.png!

 

!image-2021-07-13-11-03-30-740.png!

 

KafkaSinkFunction will also call producer.partitionsFor(topic) and report 
org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: []

!image-2021-07-13-11-15-47-392.png!

exception

!image-2021-07-13-11-20-22-134.png!

!image-2021-07-13-11-37-27-870.png!

 

 

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png, 
> image-2021-07-13-11-02-37-451.png, image-2021-07-13-11-03-30-740.png, 
> image-2021-07-13-11-15-06-977.png, image-2021-07-13-11-15-47-392.png, 
> image-2021-07-13-11-20-22-134.png, image-2021-07-13-11-37-27-870.png, 
> image-2021-07-13-11-39-20-010.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"

[jira] [Comment Edited] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread longwang0616 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17379552#comment-17379552
 ] 

longwang0616 edited comment on FLINK-22969 at 7/13/21, 3:41 AM:


[~luoyuxia]

I found through debugging that when the KafkaSourceFunction executes, it calls 
the getAllPartitionsForTopics method to get the topic info, and 
getAllPartitionsForTopics in turn calls kafkaConsumer.partitionsFor(topic); if 
the topic is "", it reports an 
org.apache.kafka.common.errors.InvalidTopicException: Topic '' is invalid 
exception, but the exception is not thrown in the IDEA console, and the 
program ends directly.

Can't reproduce the IndexOutOfBoundsException

 

!image-2021-07-13-11-02-37-451.png!

 

!image-2021-07-13-11-03-30-740.png!

 

KafkaSinkFunction will also call producer.partitionsFor(topic) and report 
org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: []

!image-2021-07-13-11-15-47-392.png!

exception

!image-2021-07-13-11-20-22-134.png!

!image-2021-07-13-11-37-27-870.png!

 

 


was (Author: longwang0616):
[~luoyuxia]

I found through debugging that when the KafkaSourceFunction executes, it calls 
the getAllPartitionsForTopics method to get the topic info, and 
getAllPartitionsForTopics in turn calls kafkaConsumer.partitionsFor(topic); if 
the topic is "", it reports an 
org.apache.kafka.common.errors.InvalidTopicException: Topic '' is invalid 
exception, but the exception is not thrown in the IDEA console, and the 
program ends directly.

Can't reproduce the IndexOutOfBoundsException

 

!image-2021-07-13-11-02-37-451.png!

 

!image-2021-07-13-11-03-30-740.png!

 

KafkaSinkFunction will also call producer.partitionsFor(topic) and report 
org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: []

!image-2021-07-13-11-15-47-392.png!

exception

!image-2021-07-13-11-20-22-134.png!

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png, 
> image-2021-07-13-11-02-37-451.png, image-2021-07-13-11-03-30-740.png, 
> image-2021-07-13-11-15-06-977.png, image-2021-07-13-11-15-47-392.png, 
> image-2021-07-13-11-20-22-134.png, image-2021-07-13-11-37-27-870.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
>  
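
The fix requested in FLINK-22969 — rejecting a null or empty topic up front instead of letting Kafka fail later inside partitionsFor — could be sketched as follows. This is a hypothetical helper for illustration, not the actual Flink connector code; the class and method names are assumptions:

```java
// Hypothetical sketch of the validation FLINK-22969 asks for: fail fast with a
// clear message when the configured Kafka topic is null or an empty string,
// instead of surfacing InvalidTopicException from partitionsFor() at runtime.
public class TopicValidation {

    static String validateTopic(String topic) {
        if (topic == null || topic.trim().isEmpty()) {
            throw new IllegalArgumentException(
                    "The Kafka topic must not be null or an empty string.");
        }
        return topic;
    }

    public static void main(String[] args) {
        // A valid topic passes through unchanged.
        System.out.println(validateTopic("key_partial_value_topic_json"));
        // An empty topic is rejected immediately with a descriptive error.
        try {
            validateTopic("");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Calling such a check when the source/sink function is created would surface the misconfiguration at job submission time rather than as a silent failure in the consumer thread.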

[GitHub] [flink] flinkbot commented on pull request #16473: [FLINK-23365][filesystems]flink-azure-fs-hadoop compile error because of json-smart

2021-07-12 Thread GitBox


flinkbot commented on pull request #16473:
URL: https://github.com/apache/flink/pull/16473#issuecomment-878752116


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ecc9389b111d5726f1cb4b4b532280864a4a5e75 (Tue Jul 13 
03:37:37 UTC 2021)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-23365).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23366) AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor fails on azure

2021-07-12 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17379556#comment-17379556
 ] 

Yun Gao commented on FLINK-23366:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20337&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=5360d54c-8d94-5d85-304e-a89267eb785a&l=6024

> AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor 
> fails on azure
> -
>
> Key: FLINK-23366
> URL: https://issues.apache.org/jira/browse/FLINK-23366
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20343&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0&l=7388
> {code:java}
> Jul 12 18:40:57 [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 0.208 s <<< FAILURE! - in 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest
> Jul 12 18:40:57 [ERROR] 
> testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest)
>   Time elapsed: 0.013 s  <<< FAILURE!
> Jul 12 18:40:57 java.lang.AssertionError: 
> Jul 12 18:40:57 
> Jul 12 18:40:57 Expected: is 
> Jul 12 18:40:57  but: was 
> Jul 12 18:40:57   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Jul 12 18:40:57   at org.junit.Assert.assertThat(Assert.java:964)
> Jul 12 18:40:57   at org.junit.Assert.assertThat(Assert.java:930)
> Jul 12 18:40:57   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest.testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(AkkaRpcActorTest.java:375)
> Jul 12 18:40:57   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 12 18:40:57   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 12 18:40:57   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 12 18:40:57   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 12 18:40:57   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 12 18:40:57   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 12 18:40:57   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 12 18:40:57   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Jul 12 18:40:57   at 
> 

[jira] [Commented] (FLINK-23366) AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor fails on azure

2021-07-12 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17379555#comment-17379555
 ] 

Yun Gao commented on FLINK-23366:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20337&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0&l=7650

> AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor 
> fails on azure
> -
>
> Key: FLINK-23366
> URL: https://issues.apache.org/jira/browse/FLINK-23366
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20343&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0&l=7388
> {code:java}
> Jul 12 18:40:57 [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 0.208 s <<< FAILURE! - in 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest
> Jul 12 18:40:57 [ERROR] 
> testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest)
>   Time elapsed: 0.013 s  <<< FAILURE!
> Jul 12 18:40:57 java.lang.AssertionError: 
> Jul 12 18:40:57 
> Jul 12 18:40:57 Expected: is 
> Jul 12 18:40:57  but: was 
> Jul 12 18:40:57   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Jul 12 18:40:57   at org.junit.Assert.assertThat(Assert.java:964)
> Jul 12 18:40:57   at org.junit.Assert.assertThat(Assert.java:930)
> Jul 12 18:40:57   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest.testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(AkkaRpcActorTest.java:375)
> Jul 12 18:40:57   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 12 18:40:57   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 12 18:40:57   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 12 18:40:57   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 12 18:40:57   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 12 18:40:57   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 12 18:40:57   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 12 18:40:57   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Jul 12 18:40:57   at 
> 

[jira] [Updated] (FLINK-23365) flink-azure-fs-hadoop compile error because of json-smart

2021-07-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23365:
---
Labels: pull-request-available  (was: )

> flink-azure-fs-hadoop  compile error because of  json-smart 
> 
>
> Key: FLINK-23365
> URL: https://issues.apache.org/jira/browse/FLINK-23365
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
> Environment: windows
>Reporter: kevinsun
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> The flink-azure-fs-hadoop module fails to compile with Maven because of the 
> json-smart jar



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] kevinsun2010 opened a new pull request #16473: [FLINK-23365][filesystems]flink-azure-fs-hadoop compile error because of json-smart

2021-07-12 Thread GitBox


kevinsun2010 opened a new pull request #16473:
URL: https://github.com/apache/flink/pull/16473


   
   ## What is the purpose of the change
   
   This pull request fixes the issue described in FLINK-23365. 
   
   
   ## Brief change log
   
   update flink-fs-hadoop-shaded/pom.xml to exclude json-smart
   
   
   ## Verifying this change

   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**yes** / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
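
The change this PR describes — excluding json-smart from the shaded Hadoop dependency in flink-fs-hadoop-shaded/pom.xml — would typically look like the following Maven exclusion. This is a sketch for illustration only; the enclosing dependency (hadoop-azure) is an assumption, not taken from the actual diff:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-azure</artifactId>
  <exclusions>
    <!-- Exclude json-smart to avoid the compile error described in FLINK-23365 -->
    <exclusion>
      <groupId>net.minidev</groupId>
      <artifactId>json-smart</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

With the transitive json-smart jar excluded, the shaded module no longer pulls in the conflicting classes at compile time.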
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] guoweiM commented on a change in pull request #16465: [FLINK-22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework

2021-07-12 Thread GitBox


guoweiM commented on a change in pull request #16465:
URL: https://github.com/apache/flink/pull/16465#discussion_r668398056



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMasterContext.java
##
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.shuffle;
+
+import org.apache.flink.configuration.Configuration;
+
+/**
+ * Shuffle context used to create {@link ShuffleMaster}. It can work as a 
proxy to other cluster

Review comment:
   Could you explain more about the "It can work as a proxy to other 
cluster components and hide these components from users"?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23366) AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor fails on azure

2021-07-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23366:

Description: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20343&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0&l=7388
{code:java}
Jul 12 18:40:57 [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 0.208 s <<< FAILURE! - in 
org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest
Jul 12 18:40:57 [ERROR] 
testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest)
  Time elapsed: 0.013 s  <<< FAILURE!
Jul 12 18:40:57 java.lang.AssertionError: 
Jul 12 18:40:57 
Jul 12 18:40:57 Expected: is 
Jul 12 18:40:57  but: was 
Jul 12 18:40:57 at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
Jul 12 18:40:57 at org.junit.Assert.assertThat(Assert.java:964)
Jul 12 18:40:57 at org.junit.Assert.assertThat(Assert.java:930)
Jul 12 18:40:57 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest.testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(AkkaRpcActorTest.java:375)
Jul 12 18:40:57 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Jul 12 18:40:57 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Jul 12 18:40:57 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jul 12 18:40:57 at java.lang.reflect.Method.invoke(Method.java:498)
Jul 12 18:40:57 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Jul 12 18:40:57 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Jul 12 18:40:57 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Jul 12 18:40:57 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Jul 12 18:40:57 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Jul 12 18:40:57 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Jul 12 18:40:57 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Jul 12 18:40:57 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Jul 12 18:40:57 
Jul 12 18:40:57 [INFO] Running 
org.apache.flink.runtime.rpc.akka.TimeoutCallStackTest
Jul 12 18:40:58 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 0.076 s - in org.apache.flink.runtime.rpc.akka.TimeoutCallStackTest
Jul 12 18:40:58 [INFO] Running 
org.apache.flink.runtime.rpc.akka.AkkaRpcActorOversizedResponseMessageTest
Jul 12 18:40:58 [INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 0.587 s - in 

[jira] [Updated] (FLINK-23366) AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor fails on azure

2021-07-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23366:

Description: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20343&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0&l=7388
{code:java}
Jul 12 18:40:57 [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 0.208 s <<< FAILURE! - in 
org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest
Jul 12 18:40:57 [ERROR] 
testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest)
  Time elapsed: 0.013 s  <<< FAILURE!
Jul 12 18:40:57 java.lang.AssertionError: 
Jul 12 18:40:57 
Jul 12 18:40:57 Expected: is 
Jul 12 18:40:57  but: was 
Jul 12 18:40:57 at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
Jul 12 18:40:57 at org.junit.Assert.assertThat(Assert.java:964)
Jul 12 18:40:57 at org.junit.Assert.assertThat(Assert.java:930)
Jul 12 18:40:57 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActorTest.testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor(AkkaRpcActorTest.java:375)
Jul 12 18:40:57 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Jul 12 18:40:57 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Jul 12 18:40:57 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jul 12 18:40:57 at java.lang.reflect.Method.invoke(Method.java:498)
Jul 12 18:40:57 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Jul 12 18:40:57 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Jul 12 18:40:57 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Jul 12 18:40:57 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Jul 12 18:40:57 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Jul 12 18:40:57 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Jul 12 18:40:57 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Jul 12 18:40:57 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)

{code}

  was:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20343&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0&l=7388
{code:java}

Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Jul 12 18:40:57

[jira] [Updated] (FLINK-23366) AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor fails on azure

2021-07-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23366:

Labels: test-stability  (was: )

> AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor 
> fails on azure
> -
>
> Key: FLINK-23366
> URL: https://issues.apache.org/jira/browse/FLINK-23366
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20343&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0&l=7388
> {code:java}
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Jul 12 18:40:57   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 12 18:40:57   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jul 12 18:40:57   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> Jul 12 18:40:57   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Jul 12 18:40:57 
> Jul 12 18:40:57 [INFO] Running 
> org.apache.flink.runtime.rpc.akka.TimeoutCallStackTest
> Jul 12 18:40:58 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 0.076 s - in org.apache.flink.runtime.rpc.akka.TimeoutCallStackTest
> Jul 12 18:40:58 [INFO] Running 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActorOversizedResponseMessageTest
> Jul 12 18:40:58 [INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, 
> Time elapsed: 0.587 s - in 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest
> Jul 12 18:40:58 [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 0.159 s - in 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActorOversizedResponseMessageTest
> Jul 12 18:40:58 [INFO] 
> Jul 12 18:40:58 [INFO] Results:
> Jul 12 18:40:58 [INFO] 
> Jul 12 18:40:58 [ERROR] Failures: 
> Jul 12 18:40:58 [ERROR]   
> AkkaRpcActorTest.testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor:375 
> Jul 12 18:40:58 Expected: is 
> Jul 12 18:40:58  but: was 
> Jul 12 18:40:58 [INFO] 
> Jul 12 18:40:58 [ERROR] Tests run: 85, Failures: 1, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23366) AkkaRpcActorTest. testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor fails on azure

2021-07-12 Thread Yun Gao (Jira)
Yun Gao created FLINK-23366:
---

 Summary: AkkaRpcActorTest. 
testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor fails on azure
 Key: FLINK-23366
 URL: https://issues.apache.org/jira/browse/FLINK-23366
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.14.0
Reporter: Yun Gao


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20343=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0=7388
{code:java}

Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Jul 12 18:40:57 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Jul 12 18:40:57 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Jul 12 18:40:57 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Jul 12 18:40:57 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
Jul 12 18:40:57 at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
Jul 12 18:40:57 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Jul 12 18:40:57 
Jul 12 18:40:57 [INFO] Running 
org.apache.flink.runtime.rpc.akka.TimeoutCallStackTest
Jul 12 18:40:58 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 0.076 s - in org.apache.flink.runtime.rpc.akka.TimeoutCallStackTest
Jul 12 18:40:58 [INFO] Running 
org.apache.flink.runtime.rpc.akka.AkkaRpcActorOversizedResponseMessageTest
Jul 12 18:40:58 [INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 0.587 s - in org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest
Jul 12 18:40:58 [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 0.159 s - in 
org.apache.flink.runtime.rpc.akka.AkkaRpcActorOversizedResponseMessageTest
Jul 12 18:40:58 [INFO] 
Jul 12 18:40:58 [INFO] Results:
Jul 12 18:40:58 [INFO] 
Jul 12 18:40:58 [ERROR] Failures: 
Jul 12 18:40:58 [ERROR]   
AkkaRpcActorTest.testOnStopFutureCompletionDirectlyTerminatesAkkaRpcActor:375 
Jul 12 18:40:58 Expected: is 
Jul 12 18:40:58  but: was 
Jul 12 18:40:58 [INFO] 
Jul 12 18:40:58 [ERROR] Tests run: 85, Failures: 1, Errors: 0, Skipped: 0
{code}





[jira] [Updated] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread longwang0616 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

longwang0616 updated FLINK-22969:
-
Attachment: image-2021-07-13-11-20-22-134.png

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png, 
> image-2021-07-13-11-02-37-451.png, image-2021-07-13-11-03-30-740.png, 
> image-2021-07-13-11-15-06-977.png, image-2021-07-13-11-15-47-392.png, 
> image-2021-07-13-11-20-22-134.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
>

[jira] [Commented] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread longwang0616 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379552#comment-17379552
 ] 

longwang0616 commented on FLINK-22969:
--

[~luoyuxia]

I found through debugging that when KafkaSourceFunction runs, it calls the 
getAllPartitionsForTopics method to fetch the topic info, and 
getAllPartitionsForTopics in turn calls kafkaConsumer.partitionsFor(topic); if 
the topic is "", this raises 
org.apache.kafka.common.errors.InvalidTopicException: Topic '' is invalid, but 
the exception was not surfaced in the IDEA console, and the program simply 
ended.

I can't reproduce the IndexOutOfBoundsException.

 

!image-2021-07-13-11-02-37-451.png!

 

!image-2021-07-13-11-03-30-740.png!

 

KafkaSinkFunction also calls producer.partitionsFor(topic) and reports 
org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: []

!image-2021-07-13-11-15-47-392.png!

exception

!image-2021-07-13-11-20-22-134.png!
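A minimal, framework-independent sketch of the eager validation this ticket asks for (plain JDK only; the `TopicValidation` class and its method name are illustrative, not Flink's actual API): fail fast at table-creation time instead of letting InvalidTopicException surface deep inside the Kafka client at runtime.

```java
import java.util.Arrays;
import java.util.List;

// Hedged sketch: reject null/empty topic configuration up front, so the
// error points at the table definition rather than the Kafka client.
public final class TopicValidation {

    private TopicValidation() {}

    /** Throws IllegalArgumentException if the list is null/empty or any topic name is blank. */
    public static void validateTopics(List<String> topics) {
        if (topics == null || topics.isEmpty()) {
            throw new IllegalArgumentException("At least one topic must be configured.");
        }
        for (String topic : topics) {
            if (topic == null || topic.trim().isEmpty()) {
                throw new IllegalArgumentException(
                        "Topic names must be non-empty, but got: [" + topic + "]");
            }
        }
    }

    public static void main(String[] args) {
        validateTopics(Arrays.asList("key_partial_value_topic_json")); // passes
        try {
            validateTopics(Arrays.asList(""));                        // rejected eagerly
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

With a check like this in the table factory, the empty-topic case above would fail during `executeSql(createTable)` with a clear message instead of at job runtime.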

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png, 
> image-2021-07-13-11-02-37-451.png, image-2021-07-13-11-03-30-740.png, 
> image-2021-07-13-11-15-06-977.png, image-2021-07-13-11-15-47-392.png, 
> image-2021-07-13-11-20-22-134.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #16432: [FLINK-23233][runtime] Failing checkpoints before failover for failed events in OperatorCoordinator

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16432:
URL: https://github.com/apache/flink/pull/16432#issuecomment-876475935


   
   ## CI report:
   
   * eb00c919c45cf86258863260adf5ba8066cca0f0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20343)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20353)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] guoweiM commented on a change in pull request #16465: [Flink 22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework

2021-07-12 Thread GitBox


guoweiM commented on a change in pull request #16465:
URL: https://github.com/apache/flink/pull/16465#discussion_r668398056



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMasterContext.java
##
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.shuffle;
+
+import org.apache.flink.configuration.Configuration;
+
+/**
+ * Shuffle context used to create {@link ShuffleMaster}. It can work as a 
proxy to other cluster

Review comment:
   Could you explain more about the "It can work as a proxy to other 
cluster components and hide these components from users"?
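One way to picture the "context as proxy" idea under discussion (a hypothetical sketch with plain JDK types, not the PR's actual interface): the shuffle plugin sees only a narrow context surface, while the concrete implementation forwards to cluster components such as a fatal-error handler without ever exposing them.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: the plugin programs against this narrow interface,
// so the cluster components behind it stay hidden from shuffle implementations.
interface ShuffleMasterContext {
    Map<String, String> configuration();   // read-only cluster configuration
    void onFatalError(Throwable cause);    // forwarded to the cluster's fatal-error handler
}

final class ForwardingShuffleMasterContext implements ShuffleMasterContext {
    private final Map<String, String> conf;
    private final Consumer<Throwable> fatalErrorHandler; // the hidden cluster component

    ForwardingShuffleMasterContext(Map<String, String> conf, Consumer<Throwable> handler) {
        this.conf = new HashMap<>(conf);
        this.fatalErrorHandler = handler;
    }

    @Override
    public Map<String, String> configuration() {
        return conf;
    }

    @Override
    public void onFatalError(Throwable cause) {
        fatalErrorHandler.accept(cause); // the plugin never touches the handler directly
    }
}
```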








[GitHub] [flink-statefun] evans-ye commented on pull request #242: [FLINK-23126] Refactor smoke-e2e into smoke-e2e-common and smoke-e2e-embedded

2021-07-12 Thread GitBox


evans-ye commented on pull request #242:
URL: https://github.com/apache/flink-statefun/pull/242#issuecomment-878746659


   Yes sure.






[jira] [Updated] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread longwang0616 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

longwang0616 updated FLINK-22969:
-
Attachment: image-2021-07-13-11-15-47-392.png

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png, 
> image-2021-07-13-11-02-37-451.png, image-2021-07-13-11-03-30-740.png, 
> image-2021-07-13-11-15-06-977.png, image-2021-07-13-11-15-47-392.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
>  

[jira] [Updated] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread longwang0616 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

longwang0616 updated FLINK-22969:
-
Attachment: image-2021-07-13-11-15-06-977.png

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png, 
> image-2021-07-13-11-02-37-451.png, image-2021-07-13-11-03-30-740.png, 
> image-2021-07-13-11-15-06-977.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> 

[GitHub] [flink] JingsongLi commented on a change in pull request #16434: [FLINK-23107][table-runtime] Separate implementation of deduplicate r…

2021-07-12 Thread GitBox


JingsongLi commented on a change in pull request #16434:
URL: https://github.com/apache/flink/pull/16434#discussion_r668394192



##
File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/utils/RankUtil.scala
##
@@ -295,6 +295,11 @@ object RankUtil {
 literals.head
   }
 
+  def isTop1(rankRange: RankRange): Boolean = rankRange match {
+case crg: ConstantRankRange => crg.getRankEnd == 1L

Review comment:
   start == 1 too?

##
File path: 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/rank/FastTop1Function.java
##
@@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.rank;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.state.StateTtlConfig;
+import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.state.FunctionInitializationContext;
+import org.apache.flink.runtime.state.FunctionSnapshotContext;
+import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.runtime.generated.GeneratedRecordComparator;
+import org.apache.flink.table.runtime.keyselector.RowDataKeySelector;
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;
+import org.apache.flink.shaded.guava18.com.google.common.cache.RemovalCause;
+import org.apache.flink.shaded.guava18.com.google.common.cache.RemovalListener;
+import 
org.apache.flink.shaded.guava18.com.google.common.cache.RemovalNotification;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * A more concise implementation for {@link AppendOnlyTopNFunction} and {@link
+ * UpdatableTopNFunction} when only Top-1 is desired.

Review comment:
   Why can this cover the updatable case as well? We should add more comments.
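The Top-1 semantics can be sketched without any Flink types; the map below stands in for the per-key ValueState and the comparator for the generated record comparator (names are illustrative, not the PR's actual code):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of Top-1: for each key, retain only the highest-ranked
// record seen so far and surface it whenever it changes.
class Top1Tracker<K, V> {
    private final Map<K, V> topPerKey = new HashMap<>(); // stand-in for keyed ValueState<V>
    private final Comparator<V> comparator;              // smaller value ranks higher

    Top1Tracker(Comparator<V> comparator) {
        this.comparator = comparator;
    }

    /** Returns the new top-1 record if the input replaced the old one, otherwise empty. */
    Optional<V> process(K key, V record) {
        V current = topPerKey.get(key);
        if (current == null || comparator.compare(record, current) < 0) {
            topPerKey.put(key, record);
            return Optional.of(record); // downstream: an insert or an update of the previous top-1
        }
        return Optional.empty();
    }
}
```

Because only one record per key is ever retained, both the append-only and the updatable inputs reduce to the same compare-and-replace step, which is presumably why a single function can cover both; the actual PR should still document this explicitly.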

##
File path: 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/rank/FastTop1Function.java
##
@@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.rank;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.state.StateTtlConfig;
+import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.state.FunctionInitializationContext;
+import org.apache.flink.runtime.state.FunctionSnapshotContext;
+import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.runtime.generated.GeneratedRecordComparator;
+import org.apache.flink.table.runtime.keyselector.RowDataKeySelector;
+import 

[jira] [Commented] (FLINK-16012) Reduce the default number of exclusive buffers from 2 to 1 on receiver side

2021-07-12 Thread Yingjie Cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379549#comment-17379549
 ] 

Yingjie Cao commented on FLINK-16012:
-

Hi [~pnowojski], do we still need this? I am asking because it has some 
influence on the micro-benchmark performance. (The performance loss can be 
compensated for by increasing the number of floating buffers.)
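For reference, the exclusive/floating buffer split discussed here maps to configuration options along these lines (key names as found in recent Flink versions; treat the exact values as an assumption and verify against the network-memory docs):

```yaml
# flink-conf.yaml (sketch): reduce exclusive buffers per remote input channel
# from the default 2 to 1, and compensate with more floating buffers per gate.
taskmanager.network.memory.buffers-per-channel: 1
taskmanager.network.memory.floating-buffers-per-gate: 16
```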

> Reduce the default number of exclusive buffers from 2 to 1 on receiver side
> ---
>
> Key: FLINK-16012
> URL: https://issues.apache.org/jira/browse/FLINK-16012
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Reporter: Zhijiang
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to reduce the inflight buffers for checkpoint in the case of back 
> pressure, we can reduce the number of exclusive buffers for remote input 
> channel from default 2 to 1 as the first step. Besides that, the total 
> required buffers are also reduced as a result. We can further verify the 
> performance effect via various of benchmarks.





[jira] [Commented] (FLINK-23365) flink-azure-fs-hadoop compile error because of json-smart

2021-07-12 Thread kevinsun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379548#comment-17379548
 ] 

kevinsun commented on FLINK-23365:
--

I will fix it.

> flink-azure-fs-hadoop  compile error because of  json-smart 
> 
>
> Key: FLINK-23365
> URL: https://issues.apache.org/jira/browse/FLINK-23365
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
> Environment: windows
>Reporter: kevinsun
>Priority: Major
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> The flink-azure-fs-hadoop module fails to compile with Maven because of the 
> json-smart jar.





[jira] [Created] (FLINK-23365) flink-azure-fs-hadoop compile error because of json-smart

2021-07-12 Thread kevinsun (Jira)
kevinsun created FLINK-23365:


 Summary: flink-azure-fs-hadoop  compile error because of  
json-smart 
 Key: FLINK-23365
 URL: https://issues.apache.org/jira/browse/FLINK-23365
 Project: Flink
  Issue Type: Bug
  Components: FileSystems
 Environment: windows
Reporter: kevinsun


Maven compilation of the flink-azure-fs-hadoop module fails because of the 
json-smart jar.





[jira] [Updated] (FLINK-22367) JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished times out

2021-07-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22367:
-
Priority: Major  (was: Minor)

> JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished
>  times out
> --
>
> Key: FLINK-22367
> URL: https://issues.apache.org/jira/browse/FLINK-22367
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, auto-unassigned, 
> test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=2c7d57b9-7341-5a87-c9af-2cf7cc1a37dc=3844
> {code}
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 13.135 s <<< FAILURE! - in 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase
> Apr 19 22:28:44 [ERROR] 
> terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase)
>   Time elapsed: 10.237 s  <<< ERROR!
> Apr 19 22:28:44 java.util.concurrent.ExecutionException: 
> java.util.concurrent.TimeoutException: Invocation of public default 
> java.util.concurrent.CompletableFuture 
> org.apache.flink.runtime.webmonitor.RestfulGateway.stopWithSavepoint(org.apache.flink.api.common.JobID,java.lang.String,boolean,org.apache.flink.api.common.time.Time)
>  timed out.
> Apr 19 22:28:44   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> Apr 19 22:28:44   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> Apr 19 22:28:44   at 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:123)
> Apr 19 22:28:44   at 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:111)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 19 22:28:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Apr 19 22:28:44   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 19 22:28:44   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Apr 19 22:28:44   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Apr 19 22:28:44   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Apr 19 22:28:44   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Apr 19 22:28:44   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Apr 19 22:28:44   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 19 22:28:44   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 19 22:28:44   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Apr 19 22:28:44   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Apr 19 22:28:44   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Apr 19 22:28:44   at 
> 

[jira] [Commented] (FLINK-22367) JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished times out

2021-07-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379545#comment-17379545
 ] 

Xintong Song commented on FLINK-22367:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20348=logs=219e462f-e75e-506c-3671-5017d866ccf6=4c5dc768-5c82-5ab0-660d-086cb90b76a0=4913

> JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished
>  times out
> --
>
> Key: FLINK-22367
> URL: https://issues.apache.org/jira/browse/FLINK-22367
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-unassigned, 
> test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=2c7d57b9-7341-5a87-c9af-2cf7cc1a37dc=3844
> {code}
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 13.135 s <<< FAILURE! - in 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase
> Apr 19 22:28:44 [ERROR] 
> terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase)
>   Time elapsed: 10.237 s  <<< ERROR!
> Apr 19 22:28:44 java.util.concurrent.ExecutionException: 
> java.util.concurrent.TimeoutException: Invocation of public default 
> java.util.concurrent.CompletableFuture 
> org.apache.flink.runtime.webmonitor.RestfulGateway.stopWithSavepoint(org.apache.flink.api.common.JobID,java.lang.String,boolean,org.apache.flink.api.common.time.Time)
>  timed out.
> Apr 19 22:28:44   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> Apr 19 22:28:44   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> Apr 19 22:28:44   at 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.stopWithSavepointNormalExecutionHelper(JobMasterStopWithSavepointITCase.java:123)
> Apr 19 22:28:44   at 
> org.apache.flink.runtime.jobmaster.JobMasterStopWithSavepointITCase.terminateWithSavepointWithoutComplicationsShouldSucceedAndLeadJobToFinished(JobMasterStopWithSavepointITCase.java:111)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 19 22:28:44   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 19 22:28:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Apr 19 22:28:44   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 19 22:28:44   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 19 22:28:44   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Apr 19 22:28:44   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Apr 19 22:28:44   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Apr 19 22:28:44   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Apr 19 22:28:44   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Apr 19 22:28:44   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 19 22:28:44   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 19 22:28:44   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 19 22:28:44   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> Apr 19 22:28:44   at 
> 

[jira] [Updated] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread longwang0616 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

longwang0616 updated FLINK-22969:
-
Attachment: image-2021-07-13-11-03-30-740.png

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png, 
> image-2021-07-13-11-02-37-451.png, image-2021-07-13-11-03-30-740.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> "+I",
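
The test above creates the table with an empty topic string (the `""` passed to `String.format`), and today that fails only deep inside the Kafka client. A minimal sketch of the kind of up-front check the issue asks for (hypothetical helper names, not the actual Flink connector code):

```java
import java.util.Arrays;
import java.util.List;

public class TopicValidation {

    /** Rejects a null/empty topic list and null/blank topic names at table creation time. */
    public static void validateTopics(List<String> topics) {
        if (topics == null || topics.isEmpty()) {
            throw new IllegalArgumentException(
                    "Option 'topic' must contain at least one topic.");
        }
        for (String topic : topics) {
            if (topic == null || topic.trim().isEmpty()) {
                throw new IllegalArgumentException(
                        "Option 'topic' must not contain a null or empty topic name.");
            }
        }
    }

    public static void main(String[] args) {
        validateTopics(Arrays.asList("orders"));   // valid topic name passes
        try {
            validateTopics(Arrays.asList(""));     // empty topic is rejected early
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Failing fast here would surface a clear error message when the DDL is executed instead of a late Kafka-client failure at runtime.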

[jira] [Updated] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread longwang0616 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

longwang0616 updated FLINK-22969:
-
Attachment: image-2021-07-13-11-02-37-451.png

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png, 
> image-2021-07-13-11-02-37-451.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> "+I",
> 

[GitHub] [flink] gaoyunhaii commented on pull request #16432: [FLINK-23233][runtime] Failing checkpoints before failover for failed events in OperatorCoordinator

2021-07-12 Thread GitBox


gaoyunhaii commented on pull request #16432:
URL: https://github.com/apache/flink/pull/16432#issuecomment-878739640


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-23235) SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode fails on azure

2021-07-12 Thread Guowei Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma resolved FLINK-23235.
---
  Assignee: Guowei Ma
Resolution: Fixed

> SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode fails 
> on azure
> 
>
> Key: FLINK-23235
> URL: https://issues.apache.org/jira/browse/FLINK-23235
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.13.1
>Reporter: Xintong Song
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.13.2
>
> Attachments: screenshot-1.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19867=logs=02c4e775-43bf-5625-d1cc-542b5209e072=e5961b24-88d9-5c77-efd3-955422674c25=9972
> {code}
> Jul 03 23:57:29 [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 5.53 s <<< FAILURE! - in 
> org.apache.flink.test.streaming.runtime.SinkITCase
> Jul 03 23:57:29 [ERROR] 
> writerAndCommitterAndGlobalCommitterExecuteInStreamingMode(org.apache.flink.test.streaming.runtime.SinkITCase)
>   Time elapsed: 0.68 s  <<< FAILURE!
> Jul 03 23:57:29 java.lang.AssertionError: 
> Jul 03 23:57:29 
> Jul 03 23:57:29 Expected: iterable over ["(895,null,-9223372036854775808)", 
> "(895,null,-9223372036854775808)", "(127,null,-9223372036854775808)", 
> "(127,null,-9223372036854775808)", "(148,null,-9223372036854775808)", 
> "(148,null,-9223372036854775808)", "(161,null,-9223372036854775808)", 
> "(161,null,-9223372036854775808)", "(148,null,-9223372036854775808)", 
> "(148,null,-9223372036854775808)", "(662,null,-9223372036854775808)", 
> "(662,null,-9223372036854775808)", "(822,null,-9223372036854775808)", 
> "(822,null,-9223372036854775808)", "(491,null,-9223372036854775808)", 
> "(491,null,-9223372036854775808)", "(275,null,-9223372036854775808)", 
> "(275,null,-9223372036854775808)", "(122,null,-9223372036854775808)", 
> "(122,null,-9223372036854775808)", "(850,null,-9223372036854775808)", 
> "(850,null,-9223372036854775808)", "(630,null,-9223372036854775808)", 
> "(630,null,-9223372036854775808)", "(682,null,-9223372036854775808)", 
> "(682,null,-9223372036854775808)", "(765,null,-9223372036854775808)", 
> "(765,null,-9223372036854775808)", "(434,null,-9223372036854775808)", 
> "(434,null,-9223372036854775808)", "(970,null,-9223372036854775808)", 
> "(970,null,-9223372036854775808)", "(714,null,-9223372036854775808)", 
> "(714,null,-9223372036854775808)", "(795,null,-9223372036854775808)", 
> "(795,null,-9223372036854775808)", "(288,null,-9223372036854775808)", 
> "(288,null,-9223372036854775808)", "(422,null,-9223372036854775808)", 
> "(422,null,-9223372036854775808)"] in any order
> Jul 03 23:57:29  but: Not matched: "end of input"
> Jul 03 23:57:29   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Jul 03 23:57:29   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> Jul 03 23:57:29   at 
> org.apache.flink.test.streaming.runtime.SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode(SinkITCase.java:139)
> Jul 03 23:57:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 03 23:57:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 03 23:57:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 03 23:57:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 03 23:57:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 03 23:57:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 03 23:57:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 03 23:57:29   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jul 03 23:57:29   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jul 03 23:57:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jul 03 23:57:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jul 03 23:57:29   at 
> 

[jira] [Comment Edited] (FLINK-23235) SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode fails on azure

2021-07-12 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379068#comment-17379068
 ] 

Guowei Ma edited comment on FLINK-23235 at 7/13/21, 2:58 AM:
-

Fix in release-1.13: 92398d42498e61554566ab2dcfd8a5fa8f2a64a2
Fix in master: 2268baf211f1b367e56c8f8d7cd8ee8dee355cab


was (Author: maguowei):
Fix in release-1.13: 92398d42498e61554566ab2dcfd8a5fa8f2a64a2

> SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode fails 
> on azure
> 
>
> Key: FLINK-23235
> URL: https://issues.apache.org/jira/browse/FLINK-23235
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.13.1
>Reporter: Xintong Song
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.2
>
> Attachments: screenshot-1.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19867=logs=02c4e775-43bf-5625-d1cc-542b5209e072=e5961b24-88d9-5c77-efd3-955422674c25=9972
> {code}
> Jul 03 23:57:29 [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 5.53 s <<< FAILURE! - in 
> org.apache.flink.test.streaming.runtime.SinkITCase
> Jul 03 23:57:29 [ERROR] 
> writerAndCommitterAndGlobalCommitterExecuteInStreamingMode(org.apache.flink.test.streaming.runtime.SinkITCase)
>   Time elapsed: 0.68 s  <<< FAILURE!
> Jul 03 23:57:29 java.lang.AssertionError: 
> Jul 03 23:57:29 
> Jul 03 23:57:29 Expected: iterable over ["(895,null,-9223372036854775808)", 
> "(895,null,-9223372036854775808)", "(127,null,-9223372036854775808)", 
> "(127,null,-9223372036854775808)", "(148,null,-9223372036854775808)", 
> "(148,null,-9223372036854775808)", "(161,null,-9223372036854775808)", 
> "(161,null,-9223372036854775808)", "(148,null,-9223372036854775808)", 
> "(148,null,-9223372036854775808)", "(662,null,-9223372036854775808)", 
> "(662,null,-9223372036854775808)", "(822,null,-9223372036854775808)", 
> "(822,null,-9223372036854775808)", "(491,null,-9223372036854775808)", 
> "(491,null,-9223372036854775808)", "(275,null,-9223372036854775808)", 
> "(275,null,-9223372036854775808)", "(122,null,-9223372036854775808)", 
> "(122,null,-9223372036854775808)", "(850,null,-9223372036854775808)", 
> "(850,null,-9223372036854775808)", "(630,null,-9223372036854775808)", 
> "(630,null,-9223372036854775808)", "(682,null,-9223372036854775808)", 
> "(682,null,-9223372036854775808)", "(765,null,-9223372036854775808)", 
> "(765,null,-9223372036854775808)", "(434,null,-9223372036854775808)", 
> "(434,null,-9223372036854775808)", "(970,null,-9223372036854775808)", 
> "(970,null,-9223372036854775808)", "(714,null,-9223372036854775808)", 
> "(714,null,-9223372036854775808)", "(795,null,-9223372036854775808)", 
> "(795,null,-9223372036854775808)", "(288,null,-9223372036854775808)", 
> "(288,null,-9223372036854775808)", "(422,null,-9223372036854775808)", 
> "(422,null,-9223372036854775808)"] in any order
> Jul 03 23:57:29  but: Not matched: "end of input"
> Jul 03 23:57:29   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Jul 03 23:57:29   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> Jul 03 23:57:29   at 
> org.apache.flink.test.streaming.runtime.SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode(SinkITCase.java:139)
> Jul 03 23:57:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 03 23:57:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 03 23:57:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 03 23:57:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 03 23:57:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 03 23:57:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 03 23:57:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 03 23:57:29   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jul 03 23:57:29   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jul 03 23:57:29   at 
> 

[jira] [Updated] (FLINK-23235) SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode fails on azure

2021-07-12 Thread Guowei Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma updated FLINK-23235:
--
Fix Version/s: 1.14.0

> SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode fails 
> on azure
> 
>
> Key: FLINK-23235
> URL: https://issues.apache.org/jira/browse/FLINK-23235
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.13.1
>Reporter: Xintong Song
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.13.2
>
> Attachments: screenshot-1.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19867=logs=02c4e775-43bf-5625-d1cc-542b5209e072=e5961b24-88d9-5c77-efd3-955422674c25=9972
> {code}
> Jul 03 23:57:29 [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 5.53 s <<< FAILURE! - in 
> org.apache.flink.test.streaming.runtime.SinkITCase
> Jul 03 23:57:29 [ERROR] 
> writerAndCommitterAndGlobalCommitterExecuteInStreamingMode(org.apache.flink.test.streaming.runtime.SinkITCase)
>   Time elapsed: 0.68 s  <<< FAILURE!
> Jul 03 23:57:29 java.lang.AssertionError: 
> Jul 03 23:57:29 
> Jul 03 23:57:29 Expected: iterable over ["(895,null,-9223372036854775808)", 
> "(895,null,-9223372036854775808)", "(127,null,-9223372036854775808)", 
> "(127,null,-9223372036854775808)", "(148,null,-9223372036854775808)", 
> "(148,null,-9223372036854775808)", "(161,null,-9223372036854775808)", 
> "(161,null,-9223372036854775808)", "(148,null,-9223372036854775808)", 
> "(148,null,-9223372036854775808)", "(662,null,-9223372036854775808)", 
> "(662,null,-9223372036854775808)", "(822,null,-9223372036854775808)", 
> "(822,null,-9223372036854775808)", "(491,null,-9223372036854775808)", 
> "(491,null,-9223372036854775808)", "(275,null,-9223372036854775808)", 
> "(275,null,-9223372036854775808)", "(122,null,-9223372036854775808)", 
> "(122,null,-9223372036854775808)", "(850,null,-9223372036854775808)", 
> "(850,null,-9223372036854775808)", "(630,null,-9223372036854775808)", 
> "(630,null,-9223372036854775808)", "(682,null,-9223372036854775808)", 
> "(682,null,-9223372036854775808)", "(765,null,-9223372036854775808)", 
> "(765,null,-9223372036854775808)", "(434,null,-9223372036854775808)", 
> "(434,null,-9223372036854775808)", "(970,null,-9223372036854775808)", 
> "(970,null,-9223372036854775808)", "(714,null,-9223372036854775808)", 
> "(714,null,-9223372036854775808)", "(795,null,-9223372036854775808)", 
> "(795,null,-9223372036854775808)", "(288,null,-9223372036854775808)", 
> "(288,null,-9223372036854775808)", "(422,null,-9223372036854775808)", 
> "(422,null,-9223372036854775808)"] in any order
> Jul 03 23:57:29  but: Not matched: "end of input"
> Jul 03 23:57:29   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Jul 03 23:57:29   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> Jul 03 23:57:29   at 
> org.apache.flink.test.streaming.runtime.SinkITCase.writerAndCommitterAndGlobalCommitterExecuteInStreamingMode(SinkITCase.java:139)
> Jul 03 23:57:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 03 23:57:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 03 23:57:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 03 23:57:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 03 23:57:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 03 23:57:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 03 23:57:29   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 03 23:57:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 03 23:57:29   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jul 03 23:57:29   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jul 03 23:57:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jul 03 23:57:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jul 03 23:57:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jul 

[jira] [Commented] (FLINK-19916) Hadoop3 ShutdownHookManager visit closed ClassLoader

2021-07-12 Thread Dongwon Kim (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379539#comment-17379539
 ] 

Dongwon Kim commented on FLINK-19916:
-

After upgrading from 1.12.1 to 1.13.1, I'm seeing the same exception on Hadoop 
3.1.1.3.1.4.0-315. I've never seen this exception before the upgrade.

 
{code:bash}
Setting HBASE_CONF_DIR=/etc/hbase/conf because no HBASE_CONF_DIR was set.
2021-07-13 11:25:47,762 INFO  org.apache.hadoop.yarn.client.AHSProxy
   [] - Connecting to Application History server at 
mobdata-flink-nm02.dakao.io/10.92.42.215:10200
2021-07-13 11:25:47,773 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - No path for the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2021-07-13 11:25:47,894 INFO  
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing 
over to rm2
2021-07-13 11:25:47,961 INFO  org.apache.hadoop.conf.Configuration  
   [] - found resource resource-types.xml at 
file:/etc/hadoop/3.1.4.0-315/0/resource-types.xml
2021-07-13 11:25:47,986 WARN  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment 
variable is set. The Flink YARN Client needs one of these to be set to properly 
load the Hadoop configuration for accessing YARN.
2021-07-13 11:25:48,034 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - The configured JobManager memory is 1600 MB. YARN will allocate 
2048 MB to make up an integer multiple of its minimum allocation memory (1024 
MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 448 MB 
may not be used by Flink.
2021-07-13 11:25:48,034 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - Cluster specification: 
ClusterSpecification{masterMemoryMB=1600, taskManagerMemoryMB=10240, 
slotsPerTaskManager=4}
2021-07-13 11:25:48,620 WARN  
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory  [] - The 
short-circuit local reads feature cannot be used because libhadoop cannot be 
loaded.
2021-07-13 11:25:49,941 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - Submitting application master application_1622612076031_0129
2021-07-13 11:25:49,984 INFO  
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl[] - Submitted 
application application_1622612076031_0129
2021-07-13 11:25:49,984 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - Waiting for the cluster to be allocated
2021-07-13 11:25:49,990 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - Deploying cluster, current state ACCEPTED
2021-07-13 11:25:56,760 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - YARN application has been deployed successfully.
2021-07-13 11:25:56,761 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - The Flink YARN session cluster has been started in detached 
mode. In order to stop Flink gracefully, use the following command:
$ echo "stop" | ./bin/yarn-session.sh -id application_1622612076031_0129
If this should not be possible, then you can also kill Flink via YARN's web 
interface or via:
$ yarn application -kill application_1622612076031_0129
Note that killing Flink might not clean up all job artifacts and temporary 
files.
2021-07-13 11:25:56,762 INFO  org.apache.flink.yarn.YarnClusterDescriptor   
   [] - Found Web Interface mobdata-flink-dn04.dakao.io:46709 of 
application 'application_1622612076031_0129'.
Job has been submitted with JobID 37afdf9b6067e28f079a45c80cdc322f
Exception in thread "Thread-6" java.lang.IllegalStateException: Trying to 
access closed classloader. Please check if you store classloaders directly or 
indirectly in static fields. If the stacktrace suggests that the leak occurs in 
a third party library and cannot be fixed immediately, you can disable this 
check with the configuration 'classloader.check-leaked-classloader'.
at 
org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:164)
at 
org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResource(FlinkUserCodeClassLoaders.java:183)
at 
org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2739)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2953)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2927)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2807)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1201)
at 
org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1789)
at 
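The exception message in the log above names a configuration workaround; a minimal flink-conf.yaml sketch of it (this only disables the leak check, it does not fix the underlying classloader leak in Hadoop's shutdown hook):

```yaml
# flink-conf.yaml -- disable the closed-classloader check, as suggested by the
# exception message; this hides the symptom, it does not fix the leak
classloader.check-leaked-classloader: false
```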

[jira] [Created] (FLINK-23364) Check all GeneratedClass to make sure they can be split by JavaCodeSplitter

2021-07-12 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-23364:


 Summary: Check all  GeneratedClass to make sure they can be split 
by JavaCodeSplitter
 Key: FLINK-23364
 URL: https://issues.apache.org/jira/browse/FLINK-23364
 Project: Flink
  Issue Type: Sub-task
Reporter: Jingsong Lee






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20975) HiveTableSourceITCase.testPartitionFilter fails on AZP

2021-07-12 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li reassigned FLINK-20975:
--

Assignee: Rui Li

> HiveTableSourceITCase.testPartitionFilter fails on AZP
> --
>
> Key: FLINK-20975
> URL: https://issues.apache.org/jira/browse/FLINK-20975
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Assignee: Rui Li
>Priority: Major
>  Labels: auto-deprioritized-critical, auto-unassigned, 
> test-stability
> Fix For: 1.13.2
>
>
> The test {{HiveTableSourceITCase.testPartitionFilter}} fails on AZP with the 
> following exception:
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testPartitionFilter(HiveTableSourceITCase.java:278)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}





[jira] [Closed] (FLINK-23150) Clean up the old implementation of code splitting

2021-07-12 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-23150.

Fix Version/s: 1.14.0
 Assignee: Caizhi Weng
   Resolution: Implemented

Implemented via:

master: 83c5e710673157afc03e6e9ecc123774d2dc008e

> Clean up the old implementation of code splitting
> -
>
> Key: FLINK-23150
> URL: https://issues.apache.org/jira/browse/FLINK-23150
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> In the third step we clean up the old implementation of code splitting.





[GitHub] [flink] JingsongLi merged pull request #16460: [FLINK-23150][table-planner] Remove the old code split implementation

2021-07-12 Thread GitBox


JingsongLi merged pull request #16460:
URL: https://github.com/apache/flink/pull/16460


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-23363) JavaCodeSplitter can not split the code from ProjectionCodeGenerator

2021-07-12 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-23363:


 Summary: JavaCodeSplitter can not split the code from 
ProjectionCodeGenerator
 Key: FLINK-23363
 URL: https://issues.apache.org/jira/browse/FLINK-23363
 Project: Flink
  Issue Type: Sub-task
Reporter: Jingsong Lee
 Fix For: 1.14.0


- JavaCodeSplitter cannot split a method that has a return value. We should 
add a comment in JavaCodeSplitter documenting this limitation.

- ProjectionCodeGenerator needs to generate a method without a return value.
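As an illustration, a hypothetical sketch (not Flink's actual generated code) of why a return value blocks splitting: JavaCodeSplitter rewrites a long method body into calls to smaller methods, which only works when the body does not have to end in a single return expression, so the generated "result" has to live in a field instead.

```java
// Hypothetical sketch of the limitation described above (class and method
// names are assumptions, not Flink's actual generated code).
public class ProjectionSketch {
    private String result; // split-friendly: the "return value" lives in a field

    // Not splittable: the whole computation must stay in one method body
    // so it can be returned in a single expression.
    public String projectWithReturn(String in) {
        return "projected:" + in;
    }

    // Splittable: a void method body can be cut into projectPart1(in);
    // projectPart2(in); ... each writing into the field.
    public void project(String in) {
        result = "projected:" + in;
    }

    public String getResult() {
        return result;
    }

    public static void main(String[] args) {
        ProjectionSketch s = new ProjectionSketch();
        s.project("row1");
        System.out.println(s.getResult()); // projected:row1
    }
}
```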





[jira] [Commented] (FLINK-22766) Report metrics of KafkaConsumer in Kafka new source

2021-07-12 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379537#comment-17379537
 ] 

Jiangjie Qin commented on FLINK-22766:
--

Merged to release-1.13: 2c455f324b9ec7ef053253cf4904413b1e5f7a98

> Report metrics of KafkaConsumer in Kafka new source
> ---
>
> Key: FLINK-22766
> URL: https://issues.apache.org/jira/browse/FLINK-22766
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.2
>
>
> Currently Kafka new source doesn't register metrics of KafkaConsumer in 
> KafkaPartitionSplitReader. These metrics should be added for debugging and 
> monitoring purpose. 





[GitHub] [flink] becketqin merged pull request #16379: [BP-1.13][FLINK-22722 / FLINK-22766][connector/kafka] Add docs and metrics for Kafka new source

2021-07-12 Thread GitBox


becketqin merged pull request #16379:
URL: https://github.com/apache/flink/pull/16379


   






[jira] [Commented] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-12 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379534#comment-17379534
 ] 

luoyuxia commented on FLINK-22969:
--

[~longwang0616]

Yeah, the description of the issue is not clear. It says that an 
IndexOutOfBoundsException will be thrown when topic = '' is set in the Kafka 
options, but I can't reproduce the exception.

Let's be patient for now. After [~fsk119] explains this issue clearly, we can 
keep moving on it.
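For illustration, a minimal sketch of the validation this ticket asks for; the class name, method name, and error message here are assumptions, not Flink's actual code:

```java
import java.util.Objects;

// Hypothetical sketch: reject a null or empty 'topic' option up front when
// building the Kafka source/sink, instead of failing later with an obscure
// exception.
public class TopicValidation {
    public static String validateTopic(String topic) {
        Objects.requireNonNull(topic, "Option 'topic' must not be null.");
        if (topic.trim().isEmpty()) {
            throw new IllegalArgumentException(
                    "Option 'topic' must be a non-empty string.");
        }
        return topic;
    }

    public static void main(String[] args) {
        System.out.println(validateTopic("key_partial_value_topic_json"));
        try {
            validateTopic("");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected empty topic");
        }
    }
}
```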

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> 

[jira] [Closed] (FLINK-23359) Fix the number of available slots in testResourceCanBeAllocatedForDifferentJobAfterFree

2021-07-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-23359.

Resolution: Fixed

Fixed via
- master (1.14): 2e374954b9bfa69e30624dfb27ff3762749da725
- release-1.13: 4d865341e16f668899e4295e9d85cc5258145e24

> Fix the number of available slots in 
> testResourceCanBeAllocatedForDifferentJobAfterFree
> ---
>
> Key: FLINK-23359
> URL: https://issues.apache.org/jira/browse/FLINK-23359
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.13.0
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.2
>
>
> In 
> AbstractFineGrainedSlotManagerITCase#testResourceCanBeAllocatedForDifferentJobAfterFree,
>  we need only 1 default slot to exist in the cluster. However, we currently 
> register a TaskManager with 2 default slots. 





[GitHub] [flink] xintongsong closed pull request #16469: [FLINK-23359][test] Fix the number of available slots in testResource…

2021-07-12 Thread GitBox


xintongsong closed pull request #16469:
URL: https://github.com/apache/flink/pull/16469


   






[GitHub] [flink-training] alpinegizmo commented on pull request #21: [FLINK-23332] update gradle to 7.1

2021-07-12 Thread GitBox


alpinegizmo commented on pull request #21:
URL: https://github.com/apache/flink-training/pull/21#issuecomment-878723029


   I upgraded my system Gradle to 7.1.1, but the tests fail when I use it. They 
seem to pass when I use ./gradlew, but I'm not confident they are actually being run.






[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-07-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379516#comment-17379516
 ] 

Xintong Song commented on FLINK-20329:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20347=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361=12948

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  

[jira] [Created] (FLINK-23362) ClientTest.testSimpleRequests fails due to timeout on azure

2021-07-12 Thread Xintong Song (Jira)
Xintong Song created FLINK-23362:


 Summary: ClientTest.testSimpleRequests fails due to timeout on 
azure
 Key: FLINK-23362
 URL: https://issues.apache.org/jira/browse/FLINK-23362
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Queryable State
Affects Versions: 1.12.4
Reporter: Xintong Song
 Fix For: 1.12.5


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20347=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=14440

{code}
[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 22.994 
s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest
[ERROR] testSimpleRequests(org.apache.flink.queryablestate.network.ClientTest)  
Time elapsed: 20.055 s  <<< FAILURE!
java.lang.AssertionError: Receive timed out
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertNotNull(Assert.java:712)
at 
org.apache.flink.queryablestate.network.ClientTest.testSimpleRequests(ClientTest.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}





[GitHub] [flink-training] alpinegizmo commented on pull request #21: [FLINK-23332] update gradle to 7.1

2021-07-12 Thread GitBox


alpinegizmo commented on pull request #21:
URL: https://github.com/apache/flink-training/pull/21#issuecomment-878721210


   The problem I'm having testing this is that I don't know how to get 
./gradlew to clear the cache, so I'm not succeeding in seeing this do very much 
work. I'm frustrated that ./gradlew clean doesn't seem to work.






[GitHub] [flink] flinkbot edited a comment on pull request #14861: [FLINK-21088][runtime][checkpoint] Pass the flag about whether a task is finished on restore on recovery

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #14861:
URL: https://github.com/apache/flink/pull/14861#issuecomment-773094467


   
   ## CI report:
   
   * 8f283864d747feaef2b55b02c1b436f02cd7f9a0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20349)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20337)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] guoweiM closed pull request #16468: [FLINK-23235][connector] Fix SinkITCase instability

2021-07-12 Thread GitBox


guoweiM closed pull request #16468:
URL: https://github.com/apache/flink/pull/16468


   






[jira] [Commented] (FLINK-23219) temproary join ttl configruation does not take effect

2021-07-12 Thread waywtdcc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379514#comment-17379514
 ] 

waywtdcc commented on FLINK-23219:
--

Yes it is. We need to delete the outdated dimension table data here. The 
dimension table we have here is an extended table, which has a relatively large 
amount of data and requires the latest data. I think this is just like a left 
interval join, which also has a storage time range.
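The retention being discussed amounts to time-based eviction of idle state. A toy stdlib sketch of the idea follows; this is an illustration only, not Flink's state TTL implementation, and all names in it are made up:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy illustration of idle-state retention: each key remembers when it was
// last accessed, and keys idle longer than the TTL are evicted on the next
// sweep -- roughly what the reporter expects table.exec.state.ttl to do.
public class TtlCache {
    private final long ttlMillis;
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, Long> lastAccess = new HashMap<>();

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(String key, String value, long nowMillis) {
        values.put(key, value);
        lastAccess.put(key, nowMillis);
    }

    public String get(String key, long nowMillis) {
        Long seen = lastAccess.get(key);
        if (seen == null || nowMillis - seen > ttlMillis) {
            return null; // expired or never written
        }
        lastAccess.put(key, nowMillis); // access refreshes the idle timer
        return values.get(key);
    }

    public void sweep(long nowMillis) {
        Iterator<Map.Entry<String, Long>> it = lastAccess.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (nowMillis - e.getValue() > ttlMillis) {
                values.remove(e.getKey());
                it.remove();
            }
        }
    }

    public int size() {
        return values.size();
    }

    public static void main(String[] args) {
        TtlCache cache = new TtlCache(1000);
        cache.put("user-1", "extended-row", 0);
        System.out.println(cache.get("user-1", 500)); // extended-row
        cache.sweep(5000);
        System.out.println(cache.size()); // 0
    }
}
```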

> temproary join ttl configruation does not take effect
> -
>
> Key: FLINK-23219
> URL: https://issues.apache.org/jira/browse/FLINK-23219
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Runtime
>Affects Versions: 1.12.2
>Reporter: waywtdcc
>Priority: Major
>  Labels: flink, pull-request-available, sql
> Attachments: image-2021-07-02-16-29-40-310.png
>
>
> * version: flink 1.12.2
> *  problem: I run a job that temporal left joins table A with table B, and set 
> the table.exec.state.ttl configuration
>  to 3 hours or 2 seconds for testing. But the task state keeps growing for more 
> than 7 days.
>  *  code
> {code:java}
> tableEnv.getConfig().setIdleStateRetention(Duration.ofSeconds(2));
> tableEnv.executeSql("drop table if exists persons_table_kafka2");
>  String kafka_source_sql = "CREATE TABLE persons_table_kafka2 (\n" +
>  " `id` BIGINT,\n" +
>  " `name` STRING,\n" +
>  " `age` INT,\n" +
>  " proctime as PROCTIME(),\n" +
>  " `ts` TIMESTAMP(3),\n" +
>  " WATERMARK FOR ts AS ts\n" +
>  ") WITH (\n" +
>  " 'connector' = 'kafka',\n" +
>  " 'topic' = 'persons_test_auto',\n" +
>  " 'properties.bootstrap.servers' = 'node2:6667',\n" +
>  " 'properties.group.id' = 'testGrodsu1765',\n" +
>  " 'scan.startup.mode' = 'group-offsets',\n" +
>  " 'format' = 'json'\n" +
>  ")";
>  tableEnv.executeSql(kafka_source_sql);
>  tableEnv.executeSql("drop table if exists persons_message_table_kafka2");
>  String kafka_source_sql2 = "CREATE TABLE persons_message_table_kafka2 (\n" +
>  " `id` BIGINT,\n" +
>  " `name` STRING,\n" +
>  " `message` STRING,\n" +
>  " `ts` TIMESTAMP(3) ," +
> // " WATERMARK FOR ts AS ts - INTERVAL '5' SECOND\n" +
>  " WATERMARK FOR ts AS ts\n" +
>  ") WITH (\n" +
>  " 'connector' = 'kafka',\n" +
>  " 'topic' = 'persons_extra_message_auto',\n" +
>  " 'properties.bootstrap.servers' = 'node2:6667',\n" +
>  " 'properties.group.id' = 'testGroud125313',\n" +
>  " 'scan.startup.mode' = 'group-offsets',\n" +
>  " 'format' = 'json'\n" +
>  ")";
>  tableEnv.executeSql(kafka_source_sql2);
>  tableEnv.executeSql(
>  "CREATE TEMPORARY VIEW persons_message_table22 AS \n" +
>  "SELECT id, name, message,ts \n" +
>  " FROM (\n" +
>  " SELECT *,\n" +
>  " ROW_NUMBER() OVER (PARTITION BY name \n" +
>  " ORDER BY ts DESC) AS rowNum \n" +
>  " FROM persons_message_table_kafka2 " +
>  " )\n" +
>  "WHERE rowNum = 1");
>  tableEnv.executeSql("" +
>  "CREATE TEMPORARY VIEW result_data_view " +
>  " as " +
>  " select " +
>  " t1.name, t1.age,t1.ts as ts1,t2.message, cast (t2.ts as string) as ts2 " +
>  " from persons_table_kafka2 t1 " +
>  " left join persons_message_table22 FOR SYSTEM_TIME AS OF t1.ts AS t2 on 
> t1.name = t2.name "
>  );
> Table resultTable = tableEnv.from("result_data_view");
> DataStream rowDataDataStream = tableEnv.toAppendStream(resultTable, 
> RowData.class);
> rowDataDataStream.print();
> env.execute("test_it");
> {code}
>  * the result looks like   !image-2021-07-02-16-29-40-310.png!
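The retention behavior the reporter expected from `table.exec.state.ttl` / `setIdleStateRetention` can be sketched outside Flink in plain Java (hypothetical class and method names, for illustration only; this is not Flink's actual state-TTL implementation):

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/** Idle-state-retention sketch: entries not accessed for longer than the
 *  TTL are evicted, which is what the reporter expected to happen to the
 *  join state. */
class IdleStateSketch {
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final Map<String, String> state = new HashMap<>();
    private final long ttlMillis;

    IdleStateSketch(Duration ttl) {
        this.ttlMillis = ttl.toMillis();
    }

    void put(String key, String value, long nowMillis) {
        state.put(key, value);
        lastAccess.put(key, nowMillis);
    }

    /** Evict every entry that has been idle longer than the TTL. */
    void expire(long nowMillis) {
        Iterator<Map.Entry<String, Long>> it = lastAccess.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (nowMillis - e.getValue() > ttlMillis) {
                state.remove(e.getKey());
                it.remove();
            }
        }
    }

    int size() {
        return state.size();
    }
}
```

If the temporal-join operator's state were covered by the configured TTL, old entries would be dropped like this instead of accumulating for days.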



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-training] alpinegizmo commented on pull request #22: [FLINK-23334] let the subprojects decide whether they implement an application

2021-07-12 Thread GitBox


alpinegizmo commented on pull request #22:
URL: https://github.com/apache/flink-training/pull/22#issuecomment-878713224


   You can ignore the comment that I deleted. I realized what was going on.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-training] alpinegizmo commented on pull request #23: [FLINK-23335] add a separate 'runSolution' task

2021-07-12 Thread GitBox


alpinegizmo commented on pull request #23:
URL: https://github.com/apache/flink-training/pull/23#issuecomment-878712844


   The one question I have about this is its Java-centricness. I'm wondering 
whether we ought to be completely ignoring Scala, as this does. 






[jira] [Commented] (FLINK-22198) KafkaTableITCase hang.

2021-07-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379502#comment-17379502
 ] 

Xintong Song commented on FLINK-22198:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20346=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=e8fcc430-213e-5cce-59d4-6942acf09121=6564

> KafkaTableITCase hang.
> --
>
> Key: FLINK-22198
> URL: https://issues.apache.org/jira/browse/FLINK-22198
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.4
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625
> There is no any artifacts.





[GitHub] [flink-training] alpinegizmo removed a comment on pull request #21: [FLINK-23332] update gradle to 7.1

2021-07-12 Thread GitBox


alpinegizmo removed a comment on pull request #21:
URL: https://github.com/apache/flink-training/pull/21#issuecomment-878510470


   After approving this, I also tried
   
   ```
   $ gradle --no-build-cache test
   Starting a Gradle Daemon (subsequent builds will be faster)
   
   FAILURE: Build failed with an exception.
   
   * Where:
   Build file '/Users/david/stuff/flink-training/build.gradle' line: 32
   
   * What went wrong:
   A problem occurred evaluating root project 'flink-training'.
   > Failed to apply plugin [class 
'com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin']
  > This version of Shadow supports Gradle 7.0+ only. Please upgrade.
   
   * Try:
   Run with --stacktrace option to get the stack trace. Run with --info or 
--debug option to get more log output. Run with --scan to get full insights.
   
   * Get more help at https://help.gradle.org
   
   BUILD FAILED in 14s
   
   ```
   
   Not sure if I should be concerned.






[GitHub] [flink] flinkbot edited a comment on pull request #16383: [FLINK-22677][runtime] DefaultScheduler supports async registration of produced partitions

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16383:
URL: https://github.com/apache/flink/pull/16383#issuecomment-874451793


   
   ## CI report:
   
   * 4681d5b7ad64e7b121bc8b6fedb07834a50c0e9c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20215)
 
   * 887ba068c23e3e80e88de2db3a3c6b4e12017218 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20352)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-statefun] tzulitai commented on pull request #242: [FLINK-23126] Refactor smoke-e2e into smoke-e2e-common and smoke-e2e-embedded

2021-07-12 Thread GitBox


tzulitai commented on pull request #242:
URL: https://github.com/apache/flink-statefun/pull/242#issuecomment-878701752


   @evans-ye there seems to be a conflict with `master`. Could you do another 
rebase on `master` and resolve the conflict, and squash all the commits into 
one? Thank you!






[GitHub] [flink] flinkbot edited a comment on pull request #16383: [FLINK-22677][runtime] DefaultScheduler supports async registration of produced partitions

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16383:
URL: https://github.com/apache/flink/pull/16383#issuecomment-874451793


   
   ## CI report:
   
   * 4681d5b7ad64e7b121bc8b6fedb07834a50c0e9c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20215)
 
   * 887ba068c23e3e80e88de2db3a3c6b4e12017218 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] zhuzhurk commented on a change in pull request #16383: [FLINK-22677][runtime] DefaultScheduler supports async registration of produced partitions

2021-07-12 Thread GitBox


zhuzhurk commented on a change in pull request #16383:
URL: https://github.com/apache/flink/pull/16383#discussion_r668342937



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultScheduler.java
##
@@ -470,36 +482,56 @@ private static void propagateIfNonNull(final Throwable 
throwable) {
 }
 }
 
-private BiFunction 
assignResourceOrHandleError(
+private BiFunction assignResource(
 final DeploymentHandle deploymentHandle) {
 final ExecutionVertexVersion requiredVertexVersion =
 deploymentHandle.getRequiredVertexVersion();
 final ExecutionVertexID executionVertexId = 
deploymentHandle.getExecutionVertexId();
 
 return (logicalSlot, throwable) -> {
 if (executionVertexVersioner.isModified(requiredVertexVersion)) {
-log.debug(
-"Refusing to assign slot to execution vertex {} 
because this deployment was "
-+ "superseded by another deployment",
-executionVertexId);
-releaseSlotIfPresent(logicalSlot);
+if (throwable == null) {
+log.debug(
+"Refusing to assign slot to execution vertex {} 
because this deployment was "
++ "superseded by another deployment",
+executionVertexId);
+releaseSlotIfPresent(logicalSlot);
+}

Review comment:
   if `throwable != null`, `logicalSlot` will be null and there is no need 
to release a `null` slot.
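The invariant behind this review comment — in a (result, throwable) callback, the slot is null exactly when the throwable is non-null, so only a real slot must ever be released — can be sketched with a plain `CompletableFuture` (hypothetical names, not Flink's actual scheduler classes):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

class SlotHandlerSketch {
    static final AtomicInteger released = new AtomicInteger();

    /** In handle(), the result is null exactly when the throwable is
     *  non-null, so a release is only needed on the success path. */
    static CompletableFuture<String> assign(CompletableFuture<String> slotFuture,
                                            boolean superseded) {
        return slotFuture.handle((slot, throwable) -> {
            if (superseded) {
                if (throwable == null) {
                    released.incrementAndGet(); // release only a real slot
                }
                return null; // deployment skipped either way
            }
            if (throwable != null) {
                return null; // allocation failed; nothing to release
            }
            return slot; // success path: keep the slot
        });
    }
}
```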








[GitHub] [flink] zhuzhurk commented on a change in pull request #16436: [FLINK-22017][coordination] Allow BLOCKING result partition to be individually consumable

2021-07-12 Thread GitBox


zhuzhurk commented on a change in pull request #16436:
URL: https://github.com/apache/flink/pull/16436#discussion_r668312184



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/IntermediateResultPartition.java
##
@@ -72,6 +75,13 @@ public ResultPartitionType getResultType() {
 return 
getEdgeManager().getConsumerVertexGroupsForPartition(partitionId);
 }
 
+public List getConsumedPartitionGroups() {
+if (consumedPartitionGroups == null) {
+consumedPartitionGroups = 
getEdgeManager().getConsumedPartitionGroupsById(partitionId);

Review comment:
   I think we do not need this field `consumedPartitionGroups`.
   Using it without initialization may even cause problems.
   In the next commit I can see that `getConsumedPartitionGroups()` is used 
instead of `consumedPartitionGroups`.

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/EdgeManagerBuildUtil.java
##
@@ -88,6 +88,7 @@ private static void connectAllToAll(
 Arrays.stream(intermediateResult.getPartitions())
 
.map(IntermediateResultPartition::getPartitionId)
 .collect(Collectors.toList()));
+registerConsumedPartitionGroupToEdgeManager(intermediateResult, 
consumedPartitions);

Review comment:
   maybe add a method 
`createAndRegisterConsumedPartitionGroup(IntermediateResultPartition... 
partitions)`, in case some `ConsumedPartitionGroup` is created but not 
registered?

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphConstructionTest.java
##
@@ -485,4 +491,39 @@ public void testMoreThanOneConsumerForIntermediateResult() 
{
 fail(e.getMessage());
 }
 }
+
+@Test
+public void testRegisterConsumedPartitionGroupToEdgeManager() throws 
Exception {
+JobVertex v1 = new JobVertex("source");
+JobVertex v2 = new JobVertex("sink");
+
+v1.setParallelism(2);
+v2.setParallelism(2);
+
+v2.connectNewDataSetAsInput(
+v1, DistributionPattern.ALL_TO_ALL, 
ResultPartitionType.BLOCKING);
+
+List ordered = new ArrayList<>(Arrays.asList(v1, v2));
+ExecutionGraph eg = createDefaultExecutionGraph(ordered);
+eg.attachJobGraph(ordered);
+
+IntermediateResult result =
+
Objects.requireNonNull(eg.getJobVertex(v1.getID())).getProducedDataSets()[0];
+
+IntermediateResultPartition partition1 = result.getPartitions()[0];
+IntermediateResultPartition partition2 = result.getPartitions()[1];
+
+assertEquals(
+partition1.getConsumedPartitionGroups().get(0),
+partition2.getConsumedPartitionGroups().get(0));
+
+ConsumedPartitionGroup consumedPartitionGroup =
+partition1.getConsumedPartitionGroups().get(0);
+Set partitionIds = new HashSet<>();
+for (IntermediateResultPartitionID partitionId : 
consumedPartitionGroup) {
+partitionIds.add(partitionId);
+}
+assertTrue(partitionIds.contains(partition1.getPartitionId()));

Review comment:
   can be `assertThat(partitionIds, 
containsInAnyOrder(partition1.getPartitionId(), partition2.getPartitionId()))`

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/EdgeManager.java
##
@@ -89,4 +92,17 @@ public void connectVertexWithConsumedPartitionGroup(
 return Collections.unmodifiableList(
 
getConsumedPartitionGroupsForVertexInternal(executionVertexId));
 }
+
+public void registerConsumedPartitionGroup(ConsumedPartitionGroup group) {
+for (IntermediateResultPartitionID partitionId : group) {
+consumedPartitionsById
+.computeIfAbsent(partitionId, ignore -> new ArrayList<>())
+.add(group);
+}
+}
+
+public List getConsumedPartitionGroupsById(
+IntermediateResultPartitionID id) {
+return Collections.unmodifiableList(consumedPartitionsById.get(id));

Review comment:
   maybe `consumedPartitionsById.get(id)` -> 
`consumedPartitionsById.computeIfAbsent(id, id -> new ArrayList<>())`?
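The pattern suggested here — registering groups into a multimap with `computeIfAbsent` and returning an unmodifiable (possibly empty) view instead of risking a null from `get(id)` — can be sketched with hypothetical names:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class GroupRegistrySketch {
    private final Map<String, List<String>> groupsById = new HashMap<>();

    void register(String id, String group) {
        // computeIfAbsent avoids the get()-check-null-put dance
        groupsById.computeIfAbsent(id, ignored -> new ArrayList<>()).add(group);
    }

    /** Returns an empty unmodifiable list for unknown ids instead of
     *  passing null to Collections.unmodifiableList, which is the fix
     *  the review suggests for the plain get(id) call. */
    List<String> groupsFor(String id) {
        return Collections.unmodifiableList(
                groupsById.computeIfAbsent(id, ignored -> new ArrayList<>()));
    }
}
```

Note that the inline lambda in the review, `computeIfAbsent(id, id -> new ArrayList<>())`, would shadow the method parameter `id` and not compile as written; a differently named lambda parameter (here `ignored`) avoids that.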

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adapter/DefaultResultPartitionTest.java
##
@@ -34,30 +46,71 @@
 /** Unit tests for {@link DefaultResultPartition}. */
 public class DefaultResultPartitionTest extends TestLogger {
 
-private static final TestResultPartitionStateSupplier resultPartitionState 
=
-new TestResultPartitionStateSupplier();
-
-private final IntermediateResultPartitionID resultPartitionId =
-new IntermediateResultPartitionID();
-private final IntermediateDataSetID intermediateResultId = new 
IntermediateDataSetID();
+@Test
+public 

[jira] [Commented] (FLINK-23357) jobmanager metaspace oom

2021-07-12 Thread Borland Won (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379476#comment-17379476
 ] 

Borland Won commented on FLINK-23357:
-

[~chesnay] I'm really sorry, it is forbidden... 

> jobmanager metaspace oom
> 
>
> Key: FLINK-23357
> URL: https://issues.apache.org/jira/browse/FLINK-23357
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.2
>Reporter: Borland Won
>Priority: Major
> Attachments: image-2021-07-12-16-57-55-256.png, 
> image-2021-07-12-17-06-17-218.png, image-2021-07-12-19-20-13-742.png, 
> image-2021-07-12-19-30-38-245.png, path to gc roots.png
>
>
> *Flink Version: 1.12.2*
> Hi. I created a standalone HA cluster (with 3 TaskManagers and 2 JobManagers), and 
> repeatedly submitted new jobs to the cluster and cancelled old jobs via the REST 
> API. The JobManager leader then showed steadily increasing metaspace usage.
>   !image-2021-07-12-16-57-55-256.png!
> Soon it will OOM and get the exception below:
> 2021-06-21 15:44:06,637 ERROR 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Unhandled 
> exception.2021-06-21 15:44:06,637 ERROR 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Unhandled 
> exception.java.util.concurrent.CompletionException: 
> org.apache.flink.client.program.ProgramInvocationException: The program's 
> entry point class 'xxx.xxx.xxx.XXXBootstrap' caused an exception during 
> initialization: Metaspace. The metaspace out-of-memory error has occurred. 
> This can mean two things: either Flink Master requires a larger size of JVM 
> metaspace to load classes or there is a class loading leak. In the first case 
> 'jobmanager.memory.jvm-metaspace.size' configuration option should be 
> increased. If the error persists (usually in cluster after several job 
> (re-)submissions) then there is probably a class loading leak in user code or 
> some of its dependencies which has to be investigated and fixed. The Flink 
> Master has to be shutdown... at 
> org.apache.flink.runtime.webmonitor.handlers.utils.JarHandlerUtils$JarHandlerContext.toPackagedProgram(JarHandlerUtils.java:184)
>  ~[flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.webmonitor.handlers.utils.JarHandlerUtils$JarHandlerContext.applyToConfiguration(JarHandlerUtils.java:141)
>  ~[flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.handleRequest(JarRunHandler.java:95)
>  ~[flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.handleRequest(JarRunHandler.java:53)
>  ~[flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.rest.handler.AbstractRestHandler.respondToRequest(AbstractRestHandler.java:83)
>  ~[flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:195)
>  ~[flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:83)
>  ~[flink-dist_2.11-1.12.2.jar:1.12.2] at 
> java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_292] at 
> org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:45) 
> [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:80)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:49)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:115)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:94)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:55)
>  [flink-dist_2.11-1.12.2.jar:1.12.2] at 
> org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
>  
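The class-loading leak that this error message describes — per-job user-code class loaders accumulating in metaspace across repeated submissions — can be sketched in plain Java (hypothetical names; this is not Flink's actual classloading code):

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;

/** Each submission gets a fresh user-code class loader. If the loader is
 *  never closed, or something (a static field, a thread local, a lingering
 *  thread) still references it after the job is cancelled, its classes
 *  cannot be unloaded and metaspace keeps growing. */
class LoaderLifecycleSketch {
    static URLClassLoader newUserCodeLoader() {
        // In a real setup the URL array would point at the job jar.
        return new URLClassLoader(new URL[0],
                LoaderLifecycleSketch.class.getClassLoader());
    }

    /** Closing the loader on job disposal is what allows class unloading. */
    static boolean disposeJob(URLClassLoader loader) {
        try {
            loader.close();
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

Inspecting a heap dump's "path to GC roots" for the leaked class loader, as in the attachment on this ticket, is the standard way to find which reference keeps it alive.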

[GitHub] [flink] flinkbot edited a comment on pull request #16457: [hotfix][runtime] Log slot pool status if unable to fulfill job requirements

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16457:
URL: https://github.com/apache/flink/pull/16457#issuecomment-877915805


   
   ## CI report:
   
   * 94ee2c10a64e7106de5d9dee069d5425bd77f030 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20328)
 
   * 61bd9db42a4990df50beca81555e1091eb880f2d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20351)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-23302) Precondition failed building CheckpointMetrics.

2021-07-12 Thread Kyle Weaver (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379459#comment-17379459
 ] 

Kyle Weaver commented on FLINK-23302:
-

We've observed this failure on Flink 1.12.4 and 1.13.1. It does indeed look 
like a dupe of FLINK-23201, thanks for checking.

> Precondition failed building CheckpointMetrics.
> ---
>
> Key: FLINK-23302
> URL: https://issues.apache.org/jira/browse/FLINK-23302
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Kyle Weaver
>Priority: Major
>
> Beam has a flaky test using Flink savepoints. It looks like 
> alignmentDurationNanos is less than -1, which shouldn't be possible. As far 
> as I know clients (like Beam) don't have any control over this value, so my 
> best guess is that it's a bug in Flink.
> See 
> https://issues.apache.org/jira/browse/BEAM-10955?focusedCommentId=17376928=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17376928
>  for context.
> The failing test is here: 
> [https://github.com/apache/beam/blob/b401d23dfc2a487ae5775164a7834952391ff4fa/runners/flink/src/test/java/org/apache/beam/runners/flink/FlinkSavepointTest.java#L146]
>  





[jira] [Updated] (FLINK-22955) lookup join filter push down result to mismatch function signature

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22955:
---
  Labels: auto-deprioritized-critical  (was: stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> lookup join filter push down result to mismatch function signature
> --
>
> Key: FLINK-22955
> URL: https://issues.apache.org/jira/browse/FLINK-22955
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.3, 1.13.1, 1.12.4
> Environment: Flink 1.13.1
> how to reproduce: patch file attached
>Reporter: Cooper Luan
>Priority: Major
>  Labels: auto-deprioritized-critical
> Fix For: 1.11.4, 1.12.6, 1.13.3
>
> Attachments: 
> 0001-try-to-produce-lookup-join-filter-pushdown-expensive.patch
>
>
> A SQL statement like the following may result in a lookup function signature 
> mismatch exception when the SQL is explained:
> {code:sql}
> CREATE TEMPORARY VIEW v_vvv AS
> SELECT * FROM MyTable AS T
> JOIN LookupTableAsync1 FOR SYSTEM_TIME AS OF T.proctime AS D
> ON T.a = D.id;
> SELECT a,b,id,name
> FROM v_vvv
> WHERE age = 10;{code}
> the lookup function is
> {code:scala}
> class AsyncTableFunction1 extends AsyncTableFunction[RowData] {
>   def eval(resultFuture: CompletableFuture[JCollection[RowData]], a: 
> Integer): Unit = {
>   }
> }{code}
> exec plan is
> {code:java}
> LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], 
> fields=[a, b, id, name])
> +- LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], 
> joinType=[InnerJoin], async=[true], lookup=[age=10, id=a], where=[(age = 
> 10)], select=[a, b, id, name])
>+- Calc(select=[a, b])
>   +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], 
> fields=[a, b, c, proctime, rowtime])
> {code}
> the "lookup=[age=10, id=a]" results in the signature mismatch
>  
> but if I add 1 more insert, it works well
> {code:sql}
> SELECT a,b,id,name
> FROM v_vvv
> WHERE age = 30
> {code}
> exec plan is
> {code:java}
> == Optimized Execution Plan ==
> LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], 
> joinType=[InnerJoin], async=[true], lookup=[id=a], select=[a, b, c, proctime, 
> rowtime, id, name, age, ts])(reuse_id=[1])
> +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], 
> fields=[a, b, c, proctime, 
> rowtime])LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`],
>  fields=[a, b, id, name])
> +- Calc(select=[a, b, id, name], where=[(age = 10)])
>+- 
> Reused(reference_id=[1])LegacySink(name=[`default_catalog`.`default_database`.`appendSink2`],
>  fields=[a, b, id, name])
> +- Calc(select=[a, b, id, name], where=[(age = 30)])
>+- Reused(reference_id=[1])
> {code}
>  the LookupJoin node uses "lookup=[id=a]" (right), not "lookup=[age=10, id=a]" 
> (wrong)
>  
> So in the "multi insert" case the planner works correctly, while in the "single 
> insert" case the planner throws an exception.





[jira] [Updated] (FLINK-14078) Introduce more JDBCDialect implementations

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14078:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Introduce more JDBCDialect implementations
> --
>
> Key: FLINK-14078
> URL: https://issues.apache.org/jira/browse/FLINK-14078
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Canbin Zheng
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
>  MySQL, Derby and Postgres JDBCDialect are available now, maybe we can 
> introduce more JDBCDialect implementations, such as OracleDialect, 
> SqlServerDialect, DB2Dialect, etc.





[jira] [Updated] (FLINK-12786) Implement local aggregation in Flink

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-12786:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Implement local aggregation in Flink
> 
>
> Key: FLINK-12786
> URL: https://issues.apache.org/jira/browse/FLINK-12786
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: vinoyang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Currently, keyed streams are widely used to perform aggregating operations 
> (e.g., reduce, sum and window) on the elements that have the same key. When 
> executed at runtime, the elements with the same key will be sent to and 
> aggregated by the same task.
>  
> The performance of these aggregating operations is very sensitive to the 
> distribution of keys. In cases where the distribution of keys follows a power 
> law, performance will be significantly degraded. Worse, increasing the degree 
> of parallelism does not help when a task is overloaded by a single key.
>  
> Local aggregation is a widely-adopted method to reduce the performance 
> degraded by data skew. We can decompose the aggregating operations into two 
> phases. In the first phase, we aggregate the elements of the same key at the 
> sender side to obtain partial results. Then at the second phase, these 
> partial results are sent to receivers according to their keys and are 
> combined to obtain the final result. Since the number of partial results 
> received by each receiver is limited by the number of senders, the imbalance 
> among receivers can be reduced. Besides, by reducing the amount of 
> transferred data the performance can be further improved.
> The design documentation is here: 
> [https://docs.google.com/document/d/1gizbbFPVtkPZPRS8AIuH8596BmgkfEa7NRwR6n3pQes/edit?usp=sharing]
> The discussion thread is here: 
> [http://mail-archives.apache.org/mod_mbox/flink-dev/201906.mbox/%3CCAA_=o7dvtv8zjcxknxyoyy7y_ktvgexrvb4zhxjwzuhsulz...@mail.gmail.com%3E]
>  
>  
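The two-phase scheme described above — partial aggregation at the sender, final combination at the receiver — can be sketched in plain Java (hypothetical names; not the proposed DataStream API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Two-phase aggregation sketch: each "sender" pre-aggregates its records
 *  locally, then the partial sums are merged per key at the "receiver".
 *  Each receiver sees at most one partial result per sender and key. */
class LocalAggSketch {
    /** Phase 1: local pre-aggregation at the sender side. */
    static Map<String, Integer> localAggregate(List<Map.Entry<String, Integer>> records) {
        Map<String, Integer> partial = new HashMap<>();
        for (Map.Entry<String, Integer> r : records) {
            partial.merge(r.getKey(), r.getValue(), Integer::sum);
        }
        return partial;
    }

    /** Phase 2: combine the partial results at the receiver side. */
    static Map<String, Integer> merge(List<Map<String, Integer>> partials) {
        Map<String, Integer> result = new HashMap<>();
        for (Map<String, Integer> p : partials) {
            p.forEach((k, v) -> result.merge(k, v, Integer::sum));
        }
        return result;
    }
}
```

Because the merge function is associative, the final result is the same as aggregating all records directly, while the data shuffled per key is bounded by the number of senders.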





[jira] [Updated] (FLINK-10211) Time indicators are not correctly materialized for LogicalJoin

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10211:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Time indicators are not correctly materialized for LogicalJoin
> --
>
> Key: FLINK-10211
> URL: https://issues.apache.org/jira/browse/FLINK-10211
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.6.0
>Reporter: Piotr Nowojski
>Assignee: JING ZHANG
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Currently 
> {{org.apache.flink.table.calcite.RelTimeIndicatorConverter#visit(LogicalJoin)}}
>  correctly handles only windowed joins. Output of non windowed joins 
> shouldn't contain any time indicators.
> A symptom of this is the exception:
> {code}
> Rowtime attributes must not be in the input rows of a regular join. As a 
> workaround you can cast the time attributes of input tables to TIMESTAMP 
> before.
> {code}
> Or this exception:
> {code}
> org.apache.flink.table.api.TableException: Found more than one rowtime field: 
> [orderTime, payTime] in the table that should be converted to a DataStream. 
> Please select the rowtime field that should be used as event-time timestamp 
> for the DataStream by casting all other fields to TIMESTAMP. at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:906)
> {code}
> A long-term solution would be:
> The root cause of this issue is the early phase in which 
> {{RelTimeIndicatorConverter}} is called. Due to lack of information (since 
> the join condition might not have been pushed into the join node), we can not 
> differentiate between a window and non-window join. Thus, we cannot perform 
> the time indicator materialization more fine grained. A solution would be to 
> perform the materialization later after the logical optimization and before 
> the physical translation, this would also make sense from a semantic 
> perspective because time indicators are more a physical characteristic.





[jira] [Updated] (FLINK-15532) Enable strict capacity limit for memory usage for RocksDB

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15532:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Enable strict capacity limit for memory usage for RocksDB
> -
>
> Key: FLINK-15532
> URL: https://issues.apache.org/jira/browse/FLINK-15532
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Attachments: image-2020-10-23-14-39-45-997.png, 
> image-2020-10-23-14-41-10-584.png, image-2020-10-23-14-43-18-739.png, 
> image-2020-10-23-14-55-08-120.png
>
>
> Currently, due to a limitation of RocksDB (see 
> [issue-6247|https://github.com/facebook/rocksdb/issues/6247]), we cannot 
> create a strict-capacity-limit LRUCache which is shared among RocksDB 
> instance(s).
> This issue tracks that problem and offers the ability to use strict mode once 
> we can enable this feature.
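For illustration, "strict capacity" means the cache evicts (or rejects) entries rather than ever exceeding its budget, as opposed to a soft limit that may temporarily overshoot. A minimal, self-contained Java sketch of such an eviction policy (purely hypothetical; this is not RocksDB's actual LRUCache API):

```java
import java.util.LinkedHashMap;

/**
 * Hypothetical sketch of a strict-capacity LRU cache: the entry count
 * never exceeds the configured limit, because eviction happens eagerly
 * on every insert. Not RocksDB's native LRUCache; illustration only.
 */
class StrictLruCache<K, V> {
    private final int capacity;
    private final LinkedHashMap<K, V> map;

    StrictLruCache(int capacity) {
        this.capacity = capacity;
        // accessOrder=true makes iteration order least-recently-used first
        this.map = new LinkedHashMap<>(16, 0.75f, true);
    }

    V get(K key) {
        return map.get(key);
    }

    void put(K key, V value) {
        map.put(key, value);
        // strict mode: evict least-recently-used entries until within capacity
        while (map.size() > capacity) {
            K eldest = map.keySet().iterator().next();
            map.remove(eldest);
        }
    }

    int size() {
        return map.size();
    }
}
```

In strict mode an insert can never push the cache over its budget, which is exactly the guarantee the RocksDB issue above asks for.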





[jira] [Updated] (FLINK-12619) Support TERMINATE/SUSPEND Job with Checkpoint

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-12619:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support TERMINATE/SUSPEND Job with Checkpoint
> -
>
> Key: FLINK-12619
> URL: https://issues.apache.org/jira/browse/FLINK-12619
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / State Backends
>Reporter: Congxian Qiu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Inspired by the idea of FLINK-11458, we propose to support terminating/suspending 
> a job with a checkpoint. This improvement cooperates with the incremental and 
> external checkpoint features: if checkpoints are retained and this feature 
> is configured, we will trigger a checkpoint before the job stops. It could 
> accelerate job recovery a lot since:
> 1. No source rewinding is required any more.
> 2. It's much faster than taking a savepoint when incremental checkpointing is 
> enabled.
> Please note that conceptually savepoints differ from checkpoints in a 
> similar way that backups differ from recovery logs in traditional 
> database systems. So we suggest using this feature only for job recovery, 
> while sticking with FLINK-11458 for the 
> upgrading/cross-cluster-job-migration/state-backend-switch cases.





[jira] [Updated] (FLINK-15376) support "CREATE TABLE AS" in Flink SQL

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15376:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> support "CREATE TABLE AS" in Flink SQL
> --
>
> Key: FLINK-15376
> URL: https://issues.apache.org/jira/browse/FLINK-15376
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>






[jira] [Updated] (FLINK-22870) Grouping sets + case when + constant string throws AssertionError

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22870:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Grouping sets + case when + constant string throws AssertionError
> -
>
> Key: FLINK-22870
> URL: https://issues.apache.org/jira/browse/FLINK-22870
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Caizhi Weng
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Add the following case to 
> {{org.apache.flink.table.api.TableEnvironmentITCase}} to reproduce this issue.
> {code:scala}
> @Test
> def myTest2(): Unit = {
>   tEnv.executeSql(
> """
>   |create temporary table my_source(
>   |  a INT
>   |) WITH (
>   |  'connector' = 'values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql(
> """
>   |create temporary view my_view as select a, 'test' as b from my_source
>   |""".stripMargin)
>   tEnv.executeSql(
> """
>   |create temporary view my_view2 as select
>   |  a,
>   |  case when GROUPING(b) = 1 then 'test2' else b end as b
>   |from my_view
>   |group by grouping sets(
>   |  (),
>   |  (a),
>   |  (b),
>   |  (a, b)
>   |)
>   |""".stripMargin)
>   System.out.println(tEnv.explainSql(
> """
>   |select a, b from my_view2
>   |""".stripMargin))
> }
> {code}
> The exception stack is
> {code}
> java.lang.AssertionError: Conversion to relational algebra failed to preserve 
> datatypes:
> validated type:
> RecordType(INTEGER a, VARCHAR(5) CHARACTER SET "UTF-16LE" NOT NULL b) NOT NULL
> converted type:
> RecordType(INTEGER a, VARCHAR(5) CHARACTER SET "UTF-16LE" b) NOT NULL
> rel:
> LogicalProject(a=[$0], b=[CASE(=($2, 1), _UTF-16LE'test2':VARCHAR(5) 
> CHARACTER SET "UTF-16LE", CAST($1):VARCHAR(5) CHARACTER SET "UTF-16LE")])
>   LogicalAggregate(group=[{0, 1}], groups=[[{0, 1}, {0}, {1}, {}]], 
> agg#0=[GROUPING($1)])
> LogicalProject(a=[$0], b=[_UTF-16LE'test'])
>   LogicalTableScan(table=[[default_catalog, default_database, my_source]])
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.checkConvertedType(SqlToRelConverter.java:467)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:582)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:177)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:169)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1048)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertViewQuery(SqlToOperationConverter.java:897)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertCreateView(SqlToOperationConverter.java:864)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:259)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)
>   at 
> org.apache.flink.table.api.TableEnvironmentITCase.myTest2(TableEnvironmentITCase.scala:148)
> {code}





[jira] [Updated] (FLINK-17860) Recursively remove channel state directories

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17860:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Recursively remove channel state directories
> 
>
> Key: FLINK-17860
> URL: https://issues.apache.org/jira/browse/FLINK-17860
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> With a high degree of parallelism, we end up with n*s files in each 
> checkpoint (n = parallelism, s = stages). Writing them is fast (from many 
> subtasks), but removing them is slow (from the JM).
> This can't be mitigated by state.backend.fs.memory-threshold because most 
> states are tens to hundreds of MB.
>  
> Instead of going through them one by one, we could remove the directory 
> recursively.
>  
> The easiest way is to remove channelStateHandle.discard() calls and use 
> isRecursive=true  in 
> FsCompletedCheckpointStorageLocation.disposeStorageLocation.
> Note: with the current isRecursive=false there will be an exception if there 
> are any files left under that folder.
>  
> This can be extended to other state handles in future as well.
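The recursive removal described above can be sketched with plain java.nio for a local file system (illustration only; the actual change would go through Flink's checkpoint storage, e.g. the `isRecursive` flag mentioned above):

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

/**
 * Sketch: delete a checkpoint directory and everything under it in one
 * recursive walk, instead of discarding each state file individually.
 */
final class RecursiveDelete {

    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return; // nothing to do
        }
        Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path d, IOException exc)
                    throws IOException {
                Files.delete(d); // directory is empty once its files are gone
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
```

The point of the ticket is exactly this shape of change: one recursive call issued by the JM rather than n*s individual deletions.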





[jira] [Updated] (FLINK-18892) Upgrade Guava to version 28.2-jre

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18892:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Upgrade Guava to version 28.2-jre
> -
>
> Key: FLINK-18892
> URL: https://issues.apache.org/jira/browse/FLINK-18892
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.11.1
>Reporter: Igor Dvorzhak
>Priority: Minor
>  Labels: auto-deprioritized-major
> Attachments: FLINK-18892.patch
>
>






[jira] [Updated] (FLINK-13251) Add bandwidth throttling for checkpoint uploading/downloading

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13251:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add bandwidth throttling for checkpoint uploading/downloading
> -
>
> Key: FLINK-13251
> URL: https://issues.apache.org/jira/browse/FLINK-13251
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing
>Reporter: Yu Li
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> As 
> [reported|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Bandwidth-throttling-of-checkpoints-uploading-to-s3-tt28735.html]
>  on our user mailing list, checkpoint uploading may put a high load on the 
> network. In contrast to accelerating checkpoint downloading/uploading as 
> introduced by FLINK-10461/FLINK-11008, I think it also makes sense to add a 
> feature that allows bandwidth throttling, and this JIRA aims at introducing 
> that feature.





[jira] [Updated] (FLINK-22872) Remove usages of legacy planner test utilities in Python

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22872:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Remove usages of legacy planner test utilities in Python
> 
>
> Key: FLINK-22872
> URL: https://issues.apache.org/jira/browse/FLINK-22872
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Python
>Reporter: Timo Walther
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> The tests of the Python module rely on a couple of testing classes from the 
> legacy {{flink-table-planner}} test jar. We should remove references to:
> {code}
> org.apache.flink.table.utils.TableFunc1
> org.apache.flink.table.descriptors.RowtimeTest$CustomExtractor
> org.apache.flink.table.descriptors.RowtimeTest$CustomAssigner
> org.apache.flink.table.functions.aggfunctions.ByteMaxAggFunction
> org.apache.flink.table.expressions.utils.RichFunc0
> org.apache.flink.table.runtime.stream.table.TestAppendSink
> org.apache.flink.table.runtime.stream.table.TestRetractSink
> org.apache.flink.table.runtime.stream.table.TestUpsertSink
> org.apache.flink.table.runtime.stream.table.RowCollector
> TestCollectionTableFactory
> {code}
> A temporary fix will be provided in FLINK-22849.





[jira] [Updated] (FLINK-15674) Let Java and Scala Type Extraction go through the same stack

2021-07-12 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15674:
---
  Labels: auto-deprioritized-major auto-unassigned usability  (was: 
auto-unassigned stale-major usability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Let Java and Scala Type Extraction go through the same stack
> 
>
> Key: FLINK-15674
> URL: https://issues.apache.org/jira/browse/FLINK-15674
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Stephan Ewen
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, usability
>
> Currently, the Java and Scala Type Extraction stacks are completely different.
> * Java uses the {{TypeExtractor}}
> * Scala uses the type extraction macros.
> As a result, the same class can be extracted as different types in the 
> different stacks, which can lead to very confusing results. In particular, 
> when you use the TypeExtractor on Scala classes, you always get a 
> {{GenericType}}.
> *Suggestion for New Design*
> There should be one type extraction stack, based on the TypeExtractor.
> * The TypeExtractor should be extensible and load additions through service 
> loaders, similar as it currently loads Avro as an extension.
> * The Scala Type Extraction logic should be such an extension.
> * The Scala macros would only capture the {{Type}} (as in Java type), meaning 
> {{Class}}, {{ParameterizedType}}, or {{Array}} (etc.), and delegate this to 
> the TypeExtractor.





[GitHub] [flink] flinkbot edited a comment on pull request #16457: [hotfix][runtime] Log slot pool status if unable to fulfill job requirements

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16457:
URL: https://github.com/apache/flink/pull/16457#issuecomment-877915805


   
   ## CI report:
   
   * 94ee2c10a64e7106de5d9dee069d5425bd77f030 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20328)
 
   * 61bd9db42a4990df50beca81555e1091eb880f2d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15322: [FLINK-21353][state] Add DFS-based StateChangelog (TM-owned state)

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #15322:
URL: https://github.com/apache/flink/pull/15322#issuecomment-804015738


   
   ## CI report:
   
   * c6ae187f7949bb09b629fa51375dd9becd406b28 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19799)
 
   * 7f22bef11e25a9ec615c97dbb8f070c5f072c641 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20350)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #16472: [FLINK-23183][connectors/rabbitmq] Fix ACKs for redelivered messages in RMQSource

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16472:
URL: https://github.com/apache/flink/pull/16472#issuecomment-878495810


   
   ## CI report:
   
   * f65706a9b880ae3621de55a7cd62fe19edbab9cb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20345)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #16464: [FLINK-21928][clients] JobManager failover should success, when tryin…

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16464:
URL: https://github.com/apache/flink/pull/16464#issuecomment-878076020


   
   ## CI report:
   
   * 1ac8f31fe640225f8104131df85061d9026fe663 UNKNOWN
   * 46fc9bfbf03164aec78dc83323db8c547ee9ec21 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20344)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #16432: [FLINK-23233][runtime] Failing checkpoints before failover for failed events in OperatorCoordinator

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #16432:
URL: https://github.com/apache/flink/pull/16432#issuecomment-876475935


   
   ## CI report:
   
   * eb00c919c45cf86258863260adf5ba8066cca0f0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20343)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15322: [FLINK-21353][state] Add DFS-based StateChangelog (TM-owned state)

2021-07-12 Thread GitBox


flinkbot edited a comment on pull request #15322:
URL: https://github.com/apache/flink/pull/15322#issuecomment-804015738


   
   ## CI report:
   
   * c6ae187f7949bb09b629fa51375dd9becd406b28 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19799)
 
   * 7f22bef11e25a9ec615c97dbb8f070c5f072c641 UNKNOWN
   
   
   






[GitHub] [flink] rkhachatryan commented on a change in pull request #16457: [hotfix][runtime] Log slot pool status if unable to fulfill job requirements

2021-07-12 Thread GitBox


rkhachatryan commented on a change in pull request #16457:
URL: https://github.com/apache/flink/pull/16457#discussion_r668284397



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DeclarativeSlotManager.java
##
@@ -647,7 +647,12 @@ private ResourceCounter 
tryFulfillRequirementsWithPendingSlots(
 pendingSlots = allocationResult.getNewAvailableResources();
 if (!allocationResult.isSuccessfulAllocating()
 && sendNotEnoughResourceNotifications) {
-LOG.warn("Could not fulfill resource requirements of 
job {}.", jobId);
+// TODO (review): free slots are logged as zero here, 
but non-zero in
+// JobMaster.slotPoolService

Review comment:
   I see, I'll remove the TODO then.
   Thanks for clarifying.







