[jira] [Commented] (FLINK-16795) End to end tests timeout on Azure

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136330#comment-17136330
 ] 

Robert Metzger commented on FLINK-16795:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5

> End to end tests timeout on Azure
> ---------------------------------
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}
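
The 200-minute limit quoted above is the Azure Pipelines per-job timeout. If the e2e suite legitimately needs longer, the cap can be raised in the pipeline definition; a minimal sketch follows (the job name and script path are hypothetical, not Flink's actual azure-pipelines.yml):

```yaml
# Hypothetical azure-pipelines.yml fragment: raise the per-job timeout
# so long-running e2e tests are not cancelled at the default cap.
jobs:
  - job: e2e_tests            # hypothetical job name
    timeoutInMinutes: 300     # default on hosted agents is lower; 0 = pool maximum
    steps:
      - script: ./tools/ci/run-e2e-tests.sh   # hypothetical script path
```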



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18088) Umbrella for testing features in release-1.11.0

2020-06-15 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-18088:
-
Priority: Critical  (was: Blocker)

> Umbrella for testing features in release-1.11.0 
> 
>
> Key: FLINK-18088
> URL: https://issues.apache.org/jira/browse/FLINK-18088
> Project: Flink
>  Issue Type: Test
>Affects Versions: 1.11.0
>Reporter: Zhijiang
>Assignee: Zhijiang
>Priority: Critical
>  Labels: release-testing
> Fix For: 1.11.0
>
>
> This is the umbrella issue for tracking the testing progress of all the
> related features in release-1.11.0, whether via e2e tests or manual testing
> on a cluster, to confirm that the features work in practice with good quality.





[GitHub] [flink] zhijiangW commented on pull request #12510: [FLINK-18039]

2020-06-15 Thread GitBox


zhijiangW commented on pull request #12510:
URL: https://github.com/apache/flink/pull/12510#issuecomment-644543716


   @becketqin FYI: this PR now has conflicts that need to be resolved.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12655: [FLINK-18300][sql-client] SQL Client doesn't support ALTER VIEW

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12655:
URL: https://github.com/apache/flink/pull/12655#issuecomment-644085772


   
   ## CI report:
   
   * e9f9d2e5bd744d8443b78e9e3712bd718efa6b0d Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3511)
 
   * 35f7ea66eefa4f2931461df26624511847d39a8a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440597301



##
File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##

@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() throws Exception {
 		assertEquals(0, subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
 	}
 
+	@Test
+	public void testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() throws Exception {
+		final OneInputStreamTaskTestHarness<String, String> testHarness =
+			new OneInputStreamTaskTestHarness<>(
+				OneInputStreamTask::new,
+				1, 1,
+				BasicTypeInfo.STRING_TYPE_INFO,
+				BasicTypeInfo.STRING_TYPE_INFO);
+
+		testHarness.setupOutputForSingletonOperatorChain();
+		StreamConfig streamConfig = testHarness.getStreamConfig();
+		streamConfig.setStreamOperator(new MapOperator());
+
+		testHarness.invoke();
+		testHarness.waitForTaskRunning();
+
+		TestTaskStateManager stateManager = new TestTaskStateManager();
+		MockEnvironment mockEnvironment = MockEnvironment.builder().setTaskStateManager(stateManager).build();
+		SubtaskCheckpointCoordinatorImpl subtaskCheckpointCoordinator = (SubtaskCheckpointCoordinatorImpl) new MockSubtaskCheckpointCoordinatorBuilder()
+			.setEnvironment(mockEnvironment)
+			.setUnalignedCheckpointEnabled(true)
+			.build();
+
+		final TestPooledBufferProvider bufferProvider = new TestPooledBufferProvider(Integer.MAX_VALUE, 4096);
+		ArrayList<Object> recordOrEvents = new ArrayList<>();
+		StreamElementSerializer<String> stringStreamElementSerializer = new StreamElementSerializer<>(StringSerializer.INSTANCE);
+		RecordOrEventCollectingResultPartitionWriter<StreamElement> resultPartitionWriter = new RecordOrEventCollectingResultPartitionWriter<>(recordOrEvents, bufferProvider, stringStreamElementSerializer);
+		mockEnvironment.addOutputs(Collections.singletonList(resultPartitionWriter));
+
+		OneInputStreamTask<String, String> task = testHarness.getTask();
+		final OperatorChain<String, OneInputStreamOperator<String, String>> operatorChain = new OperatorChain<>(task, StreamTask.createRecordWriterDelegate(streamConfig, mockEnvironment));
+		long checkpointId = 42L;
+		// notify checkpoint aborted before execution.
+		subtaskCheckpointCoordinator.notifyCheckpointAborted(checkpointId, operatorChain, () -> true);
+		subtaskCheckpointCoordinator.getChannelStateWriter().start(checkpointId, CheckpointOptions.forCheckpointWithDefaultLocation());
+		subtaskCheckpointCoordinator.checkpointState(
+			new CheckpointMetaData(checkpointId, System.currentTimeMillis()),
+			CheckpointOptions.forCheckpointWithDefaultLocation(),
+			new CheckpointMetrics(),
+			operatorChain,
+			() -> true);
+
+		assertEquals(1, recordOrEvents.size());
+		Object recordOrEvent = recordOrEvents.get(0);
+		// ensure CancelCheckpointMarker is broadcast downstream.
+		assertTrue(recordOrEvent instanceof CancelCheckpointMarker);
+		assertEquals(checkpointId, ((CancelCheckpointMarker) recordOrEvent).getCheckpointId());
+	}

Review comment:
   Should we make sure the internal task thread inside `StreamTaskTestHarness` exits at the end, to avoid a lingering thread remaining after the test finishes?
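
The cleanup pattern the reviewer asks for can be illustrated with a plain-Java sketch (no Flink classes; the harness's shutdown is approximated here by interrupt-and-join, and the class name is hypothetical):

```java
// Self-contained sketch of the pattern: a test that starts a worker thread
// must stop and join it in a finally block, so no thread outlives the test.
public class HarnessCleanupSketch {

    /** Starts a worker thread, then guarantees it has exited before returning. */
    public static boolean runAndCleanup() throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000); // stands in for the harness's task loop
            } catch (InterruptedException e) {
                // interrupt is the shutdown signal; just exit the run loop
            }
        });
        worker.start();
        try {
            // ... the test body with its assertions would run here ...
        } finally {
            worker.interrupt();  // ask the worker to stop
            worker.join(5_000);  // wait for it, even if an assertion threw
        }
        return !worker.isAlive(); // true once the worker has fully terminated
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker terminated: " + runAndCleanup());
    }
}
```

Running the cleanup in `finally` is the important part: a failing assertion in the test body would otherwise skip the join and leak the thread into later tests.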









[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:28 AM:
--

[~appleyuchi]

Please note:
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not a static file; its location and name are determined dynamically by your npm and Node versions. This means the build will always fail if you have installed a *mismatched* Node or npm version yourself. That is why *it works fine* in everyone else's environment and in CI, but not in yours. (Imagine installing many different versions of Maven and expecting them all to work together on the same command line, which is impossible.)
 # The Node and npm versions are managed by frontend-maven-plugin in flink-runtime-web/pom.xml, in isolated folders (flink-runtime-web/web-dashboard/node and flink-runtime-web/web-dashboard/node_modules). If you want to manage them yourself (e.g. by running npm ci or npm install manually), *DON'T DO THIS* unless you fully understand npm and Node packaging; it would mess up the isolated Node/npm environment.
 # Please try removing your `flink-runtime-web/web-dashboard/node` and `flink-runtime-web/web-dashboard/node_modules` caches in flink-runtime-web/web-dashboard and try again.
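
The cache cleanup in item 3 amounts to the following commands (a sketch; `FLINK_SRC` is a hypothetical variable pointing at your Flink checkout):

```shell
# Remove the isolated node/npm install so the build re-provisions it
# with the versions pinned in flink-runtime-web/pom.xml.
FLINK_SRC=${FLINK_SRC:-.}
rm -rf "$FLINK_SRC/flink-runtime-web/web-dashboard/node" \
       "$FLINK_SRC/flink-runtime-web/web-dashboard/node_modules"
# Then rebuild the web UI without -Dskip.npm, e.g.:
#   mvn -pl flink-runtime-web clean install -DskipTests
```

Note that the reported build command used `-Dskip.npm`, which skips the web-dashboard build entirely; the rebuild must run without that flag for the UI to be packaged.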

 



> WEB UI failure in Flink1.12
> ---------------------------
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  





[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:28 AM:
--

[~appleyuchi]

plz note that :
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not the static file, the 
file position and name is determined by your npm and node version dynamically, 
which means that it will always have an error if you have installed *unmatched* 
node or npm version yourself. That is why it works fine in every other' and ci 
environment(just imagine that you have installed many different versions of 
maven and want them to work together in the same command line, which is 
impossible.)
 # the node and npm version are managed by maven-frontend-plugin in the 
flink-runtime-web/pom.xml in isolated 
folders(flink-runtime-web/web-dashboard/node and 
flink-runtime-web/web-dashboard/node_modules), if you want to manage it 
yourself(try to run npm ci or npm install yourself), *DON'T DO THIS* until you 
have full knowledge of npm and node package, it would mess up the isolated env 
of node and npm. 
 # plz have a try to remove all your `flink-runtime-web/web-dashboard/node` and 
`flink-runtime-web/web-dashboard/node_modules` caches in the 
flink-runtime-web/web-dashboard and have a try again.

 








[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:27 AM:
--

[~appleyuchi]

plz note that :
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not the static file, the 
file position and name is determined by your npm and node version dynamically, 
which means that it will always have an error if you have installed *unmatched* 
node or npm version yourself. That is why it works fine in every other' and ci 
environment(just imagine that you have installed many different versions of 
maven and want them to work together in the same command line, which is 
impossible.)
 # the node and npm version are managed by maven-frontend-plugin in the 
flink-runtime-web/pom.xml in isolated 
folders(flink-runtime-web/web-dashboard/node and 
flink-runtime-web/web-dashboard/node_modules), if you want to manage it 
yourself(try to run npm ci or npm install yourself), *DON'T DO THIS* until you 
have full knowledge of npm and node package, it would mess up the isolated env 
of node and npm. 
 # plz have a try to remove all your node and node_modules caches in the 
flink-runtime-web/web-dashboard and have a try again.

 


was (Author: vthinkxie):
[~appleyuchi]

plz note that :
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not the static file, the 
file position and name is determined by your npm and node version dynamically, 
which means that it will always have an error if you have installed *unmatched* 
node or npm version yourself. That is why it works fine in every other' and ci 
environment(just imagine that you have installed many different versions of 
maven and want them to work together in the same command line, which is 
impossible.)
 # the node and npm version are managed by maven-frontend-plugin in the 
flink-runtime-web/pom.xml in isolated 
folders(flink-runtime-web/web-dashboard/node and 
flink-runtime-web/web-dashboard/node_modules), if you want to manage it 
yourself(try to run npm ci or npm install yourself), *DON't DO THIS* until you 
have full knowledge of npm and node package.
 # plz have a try to remove all your node and node_modules caches in the 
flink-runtime-web/web-dashboard and have a try again.

 






[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:25 AM:
--

[~appleyuchi]

plz note that :
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not the static file, the 
file position and name is determined by your npm and node version dynamically, 
which means that it will always have an error if you have installed *unmatched* 
node or npm version yourself. That is why it works fine in every other' and ci 
environment(just imagine that you have installed many different versions of 
maven and want them to work together in the same command line, which is 
impossible.)
 # the node and npm version are managed by maven-frontend-plugin in the 
flink-runtime-web/pom.xml in isolated 
folders(flink-runtime-web/web-dashboard/node and 
flink-runtime-web/web-dashboard/node_modules), if you want to manage it 
yourself(try to run npm ci or npm install yourself), *DON't DO THIS* until you 
have full knowledge of npm and node package.
 # plz have a try to remove all your node and node_modules caches in the 
flink-runtime-web/web-dashboard and have a try again.

 








[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:24 AM:
--

[~appleyuchi]

plz note that :
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not the static file, the 
file position and name is determined by your npm and node version dynamically, 
which means that it will always have an error if you have installed *unmatched* 
node or npm version yourselves. That is why it works fine in every others' and 
ci environment(just imagine that you have installed many different versions of 
maven and want them to work together in the same command line, which is 
impossible.)
 # the node and npm version is managed by maven-frontend-plugin in the 
flink-runtime-web/pom.xml in isolated 
folders(flink-runtime-web/web-dashboard/node and 
flink-runtime-web/web-dashboard/node_modules), if you want to manage it 
yourself(try to run npm ci or npm install yourselves), *DON't DO THIS* until 
you have full knowledge of npm and node package.
 # plz have a try to remove all your node and node_modules caches in the 
flink-runtime-web/web-dashboard and have a try again.

 








[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:21 AM:
--

[~appleyuchi]

plz note that :
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not the static file, the 
file position and name is determined by your npm and node version dynamically, 
which means that it will always have an error if you have installed *unmatched* 
node or npm version yourselves. That is why it works fine in every others' and 
ci environment(just imagine that you have installed many different versions of 
maven and want them to work together in the same command line, which is 
impossible.)
 # the node and npm version is managed by maven-frontend-plugin in the 
flink-runtime-web/pom.xml, if you want to manage it yourself, make sure you 
have full knowledge of npm and node package.
 # plz have a try to remove all your node and node_modules caches in the 
flink-runtime-web/web-dashboard and have a try again.

 








[GitHub] [flink] flinkbot edited a comment on pull request #12661: [FLINK-18299][Formats(Json)]Add option in json format to parse timestamp in different standard

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12661:
URL: https://github.com/apache/flink/pull/12661#issuecomment-644211738


   
   ## CI report:
   
   * c1314ccf8e399484f78f3cea57d0229cb6a5d79b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3551)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12665: [FLINK-17886][docs-zh] Update Chinese documentation for new Watermark…

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12665:
URL: https://github.com/apache/flink/pull/12665#issuecomment-644514775


   
   ## CI report:
   
   * 539b338979344b960cbed6ab152eb32c117e121c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3552)
 
   
   







[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:19 AM:
--

[~appleyuchi]

plz note that :
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not the static file, the 
file position and name is determined by your npm and node version dynamically, 
which means that it will always have an error if you have installed *unmatched* 
node or npm version yourselves. (just imagine that you have installed many 
different versions of maven and want them to work together in the same command 
line, which is impossible.)
 # the node and npm version is managed by maven-frontend-plugin in the 
flink-runtime-web/pom.xml, if you want to manage it yourself, make sure you 
have full knowledge of npm and node package.
 # plz have a try to remove all your node and node_modules caches in the 
flink-runtime-web/web-dashboard and have a try again.

 








[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:19 AM:
--

[~appleyuchi]

plz note that :
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not the static file, the 
file position and name is determined by your npm and node version dynamically, 
which means that it will always have an error if you have installed *unmatched* 
node or npm version yourselves. (just imagine that you have installed many 
different versions of maven and want them to work together, which is 
impossible.)
 # the node and npm version is managed by maven-frontend-plugin in the 
flink-runtime-web/pom.xml, if you want to manage it yourself, make sure you 
have full knowledge of npm and node package.
 # plz have a try to remove all your node and node_modules caches in the 
flink-runtime-web/web-dashboard and have a try again.

 



> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on pull request #12655: [FLINK-18300][sql-client] SQL Client doesn't support ALTER VIEW

2020-06-15 Thread GitBox


JingsongLi commented on pull request #12655:
URL: https://github.com/apache/flink/pull/12655#issuecomment-644536811


   Can you re-trigger azure testing?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136307#comment-17136307
 ] 

Yadong Xie commented on FLINK-18288:


[~appleyuchi]

please note that:
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not a static file; its location and name are determined dynamically by your npm and Node versions. This means the build will always fail if you have installed an *unmatched* Node or npm version yourself.
 # the Node and npm versions are managed by the frontend-maven-plugin in the flink-runtime-web/pom.xml; if you want to manage them yourself, make sure you fully understand npm and Node packaging.
 # please try removing all Node and node_modules caches in flink-runtime-web/web-dashboard and building again.

 

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #12651: [FLINK-18272][table-runtime-blink] Add retry logic to FileSystemLooku…

2020-06-15 Thread GitBox


JingsongLi commented on a change in pull request #12651:
URL: https://github.com/apache/flink/pull/12651#discussion_r440590714



##
File path: flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/filesystem/FileSystemLookupFunction.java
##
@@ -143,26 +148,41 @@ private void checkCacheReload() {
		} else {
			LOG.info("Populating lookup join cache");
		}
-		cache.clear();
-		try {
-			T[] inputSplits = inputFormat.createInputSplits(1);
-			GenericRowData reuse = new GenericRowData(producedNames.length);
-			long count = 0;
-			for (T split : inputSplits) {
-				inputFormat.open(split);
-				while (!inputFormat.reachedEnd()) {
-					RowData row = inputFormat.nextRecord(reuse);
-					count++;
-					Row key = extractKey(row);
-					List<RowData> rows = cache.computeIfAbsent(key, k -> new ArrayList<>());
-					rows.add(serializer.copy(row));
+		int numRetry = 0;
+		while (true) {
+			cache.clear();
+			try {
+				T[] inputSplits = inputFormat.createInputSplits(1);
+				GenericRowData reuse = new GenericRowData(producedNames.length);
+				long count = 0;
+				for (T split : inputSplits) {
+					inputFormat.open(split);
+					while (!inputFormat.reachedEnd()) {
+						RowData row = inputFormat.nextRecord(reuse);
+						count++;
+						Row key = extractKey(row);
+						List<RowData> rows = cache.computeIfAbsent(key, k -> new ArrayList<>());
+						rows.add(serializer.copy(row));
+					}
+					inputFormat.close();
+				}
+				nextLoadTime = System.currentTimeMillis() + getCacheTTL().toMillis();
+				LOG.info("Loaded {} row(s) into lookup join cache", count);
+				return;
+			} catch (IOException e) {
+				if (numRetry >= MAX_RETRIES) {
+					throw new FlinkRuntimeException(
+						String.format("Failed to load table into cache after %d retries", numRetry), e);
+				}
+				long toSleep = ++numRetry * RETRY_INTERVAL.toMillis();

Review comment:
   Nit: increment `numRetry` on its own line.
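The bounded-retry loop discussed in this review (capped attempts, linearly growing sleep, rethrow once the cap is hit) can be sketched in isolation. This is a minimal illustration, not the Flink code itself: `Loader`, `loadWithRetry`, and the tiny retry interval are invented for the example, and the increment is placed on its own line as the reviewer suggests.

```java
import java.io.IOException;
import java.time.Duration;

public class RetrySketch {

    static final int MAX_RETRIES = 3;
    // Kept tiny so the example runs quickly; the PR discussion is about 1-10 s.
    static final Duration RETRY_INTERVAL = Duration.ofMillis(10);

    interface Loader {
        long load() throws IOException;
    }

    /** Retries on IOException with a linearly growing sleep, rethrowing after MAX_RETRIES. */
    static long loadWithRetry(Loader loader) {
        int numRetry = 0;
        while (true) {
            try {
                return loader.load();
            } catch (IOException e) {
                if (numRetry >= MAX_RETRIES) {
                    throw new RuntimeException(
                            String.format("Failed to load table into cache after %d retries", numRetry), e);
                }
                // Increment on its own line, then derive the linear backoff from it.
                numRetry++;
                long toSleep = numRetry * RETRY_INTERVAL.toMillis();
                try {
                    Thread.sleep(toSleep);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(ie);
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] attempts = {0};
        // Fails twice, then succeeds on the third attempt.
        long rows = loadWithRetry(() -> {
            if (++attempts[0] < 3) {
                throw new IOException("transient read failure");
            }
            return 42L;
        });
        System.out.println("rows=" + rows + " attempts=" + attempts[0]);
    }
}
```

If every attempt fails, the loop gives up after `MAX_RETRIES` retries and surfaces a `RuntimeException` wrapping the last `IOException`, matching the behavior of the diff above.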





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #12651: [FLINK-18272][table-runtime-blink] Add retry logic to FileSystemLooku…

2020-06-15 Thread GitBox


JingsongLi commented on a change in pull request #12651:
URL: https://github.com/apache/flink/pull/12651#discussion_r440590313



##
File path: flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/filesystem/FileSystemLookupFunction.java
##
@@ -59,6 +59,11 @@
 
	private static final Logger LOG = LoggerFactory.getLogger(FileSystemLookupFunction.class);
 
+	// the max number of retries before throwing exception, in case of failure to load the table into cache
+	private static final int MAX_RETRIES = 3;
+	// interval between retries
+	private static final Duration RETRY_INTERVAL = Duration.ofSeconds(1);

Review comment:
   10 sec?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12256: [FLINK-17018][runtime] Allocates slots in bulks for pipelined region scheduling

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12256:
URL: https://github.com/apache/flink/pull/12256#issuecomment-631025695


   
   ## CI report:
   
   * f5939316d96975bea8e80acb52bc17683087 UNKNOWN
   * 1253ab0db340f4d3aac238a3c2bc8f8f4abb6941 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3530)
 
   * a637f6415f4f46bc9cc576e0a170cab37266c17a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3549)
 
   * 3e1f4485f91ab4c5537c5b6f3170391f25258f9a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3559)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12313: [FLINK-17005][docs] Translate the CREATE TABLE ... LIKE syntax documentation to Chinese

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12313:
URL: https://github.com/apache/flink/pull/12313#issuecomment-633389597


   
   ## CI report:
   
   * 05c952fdab2c5cf05b70fa26bd7049a827d06149 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2603)
 
   * e7dd7fa0fbbab91117bea70ac0c042e08d0637a8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3555)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12657: [FLINK-18086][tests][e2e][kafka] Migrate SQLClientKafkaITCase to use DDL and new options to create tables

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12657:
URL: https://github.com/apache/flink/pull/12657#issuecomment-644140243


   
   ## CI report:
   
   * 14cdb5d641588f9dfa7f8536fe92e3e57abd8e14 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3515)
 
   * 50108ca39ff114f9c85ae5f2b5cc72eb3dd4357a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3558)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12256: [FLINK-17018][runtime] Allocates slots in bulks for pipelined region scheduling

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12256:
URL: https://github.com/apache/flink/pull/12256#issuecomment-631025695


   
   ## CI report:
   
   * f5939316d96975bea8e80acb52bc17683087 UNKNOWN
   * 1253ab0db340f4d3aac238a3c2bc8f8f4abb6941 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3530)
 
   * a637f6415f4f46bc9cc576e0a170cab37266c17a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3549)
 
   * 3e1f4485f91ab4c5537c5b6f3170391f25258f9a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12656: [FLINK-17666][table-planner-blink] Insert into partitioned table can …

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12656:
URL: https://github.com/apache/flink/pull/12656#issuecomment-644099955


   
   ## CI report:
   
   * e35de4a21f5cf007341a71a1461b796b286c9948 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3512)
 
   * 35454e38388a139ba65943e1c876cbcfb9d9e87c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3548)
 
   * 8cec6fea4e3f8fe1edab5aab8484b8d315feae11 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3557)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12642: [FLINK-18282][docs-zh] retranslate the documentation home page

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12642:
URL: https://github.com/apache/flink/pull/12642#issuecomment-643624564


   
   ## CI report:
   
   * d13e1778d93d1aa4f365057bda967ed708555962 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3486)
 
   * 2885aeb9578583a92bc995535af537f927662fbc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3556)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440582367



##
File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() throws Exception {
		assertEquals(0, subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
	}
 
+	@Test
+	public void testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() throws Exception {

Review comment:
   testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase -> testBroadcastCancelCheckpointMarkerOnAbortingFromCoordinator?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440581662



##
File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() throws Exception {
		assertEquals(0, subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
	}
 
+	@Test
+	public void testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() throws Exception {
+		final OneInputStreamTaskTestHarness<String, String> testHarness =
+			new OneInputStreamTaskTestHarness<>(
+				OneInputStreamTask::new,
+				1, 1,
+				BasicTypeInfo.STRING_TYPE_INFO,
+				BasicTypeInfo.STRING_TYPE_INFO);
+
+		testHarness.setupOutputForSingletonOperatorChain();
+		StreamConfig streamConfig = testHarness.getStreamConfig();
+		streamConfig.setStreamOperator(new MapOperator());
+
+		testHarness.invoke();
+		testHarness.waitForTaskRunning();
+
+		TestTaskStateManager stateManager = new TestTaskStateManager();
+		MockEnvironment mockEnvironment = MockEnvironment.builder().setTaskStateManager(stateManager).build();
+		SubtaskCheckpointCoordinatorImpl subtaskCheckpointCoordinator = (SubtaskCheckpointCoordinatorImpl) new MockSubtaskCheckpointCoordinatorBuilder()
+			.setEnvironment(mockEnvironment)
+			.setUnalignedCheckpointEnabled(true)
+			.build();
+
+		final TestPooledBufferProvider bufferProvider = new TestPooledBufferProvider(Integer.MAX_VALUE, 4096);

Review comment:
   nit: only 1 buffer is enough for this case. `Integer.MAX_VALUE` might bring some memory concerns if we change the implementation of `TestPooledBufferProvider` in the future, e.g. if we allocate the buffers eagerly in the constructor based on this size.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18242) Custom OptionsFactory settings seem to have no effect on RocksDB

2020-06-15 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136294#comment-17136294
 ] 

Yu Li commented on FLINK-18242:
---

bq. For 1.11 / 1.12 what do you think about dropping the OptionsFactory and 
only go ahead with the RocksDBOptionsFactory?

+1, let me prepare the PRs separately.

> Custom OptionsFactory settings seem to have no effect on RocksDB
> 
>
> Key: FLINK-18242
> URL: https://issues.apache.org/jira/browse/FLINK-18242
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.10.0, 1.10.1, 1.11.0
>Reporter: Nico Kruber
>Assignee: Yu Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: DefaultConfigurableOptionsFactoryWithLog.java
>
>
>  When I configure a custom {{OptionsFactory}} for RocksDB like this 
> (similarly by specifying it via the {{state.backend.rocksdb.options-factory}} 
> configuration):
> {code:java}
> Configuration globalConfig = GlobalConfiguration.loadConfiguration();
> String checkpointDataUri = 
> globalConfig.getString(CheckpointingOptions.CHECKPOINTS_DIRECTORY);
> RocksDBStateBackend stateBackend = new RocksDBStateBackend(checkpointDataUri);
> stateBackend.setOptions(new DefaultConfigurableOptionsFactoryWithLog());
> env.setStateBackend((StateBackend) stateBackend);{code}
> it seems to be loaded
> {code:java}
> 2020-06-10 12:54:20,720 INFO  
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend  - Using 
> predefined options: DEFAULT.
> 2020-06-10 12:54:20,721 INFO  
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend  - Using 
> application-defined options factory: 
> DefaultConfigurableOptionsFactoryWithLog{DefaultConfigurableOptionsFactory{configuredOptions={}}}.
>  {code}
> but it seems like none of the options defined in there is actually used. Just 
> as an example, my factory does set the info log level to {{INFO_LEVEL}} but 
> this is what you will see in the created RocksDB instance:
> {code:java}
> > cat /tmp/flink-io-c95e8f48-0daa-4fb9-a9a7-0e4fb42e9135/*/db/OPTIONS*|grep 
> > info_log_level
>   info_log_level=HEADER_LEVEL
>   info_log_level=HEADER_LEVEL{code}
> Together with the bug from FLINK-18241, it seems I cannot re-activate the 
> RocksDB log that we disabled in FLINK-15068. FLINK-15747 was aiming at 
> changing that particular configuration, but the problem seems broader since 
> {{setDbLogDir()}} was actually also ignored and Flink itself does not change 
> that setting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18316) Add a dynamic state registration primitive for Stateful Functions

2020-06-15 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-18316:
---

 Summary: Add a dynamic state registration primitive for Stateful 
Functions
 Key: FLINK-18316
 URL: https://issues.apache.org/jira/browse/FLINK-18316
 Project: Flink
  Issue Type: New Feature
  Components: Stateful Functions
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai


Currently, using the {{PersistedValue}} / {{PersistedTable}} / 
{{PersistedAppendingBuffer}} primitives, the user can only eagerly define 
states prior to function instance activation using the {{Persisted}} field 
annotation.

We propose to add a primitive that allows users to register state dynamically 
after activation (i.e. during runtime), along the lines of:
{code}
public class MyStateFn implements StatefulFunction {

    @Persisted
    private final PersistedStateProvider provider = new PersistedStateProvider();

    public MyStateFn() {
        PersistedValue valueState = provider.getValue(...);
    }

    void invoke(Object input) {
        PersistedValue anotherValueState = provider.getValue(...);
    }
}
{code}

Note how you can register state during instantiation (in the constructor) and 
in the invoke method. Both registrations should be picked up by the runtime and 
bound to Flink state.

This will be useful for a few scenarios:
- Could enable us to get rid of eager state spec definitions in the YAML 
modules for remote functions in the future.
- Will allow new state to be registered in remote functions, without shutting 
down the StateFun cluster.
- Moreover, this approach allows us to differentiate which functions have 
dynamic state and which ones have only eager state, which might be handy in the 
future in case there is a need to differentiate.
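The proposed provider can be mocked up in plain Java to show the two registration paths (eager in the constructor, dynamic inside invoke). Everything below is an illustrative stand-in, not StateFun runtime code: `PersistedValue` and `PersistedStateProvider` here are simplified local classes, and a map takes the place of the actual binding to Flink state.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DynamicStateSketch {

    /** Simplified stand-in for the StateFun PersistedValue primitive. */
    static final class PersistedValue<T> {
        private T value;
        T get() { return value; }
        void set(T v) { value = v; }
    }

    /** Records every registration so a runtime could bind each one to Flink state. */
    static final class PersistedStateProvider {
        final Map<String, PersistedValue<?>> registered = new LinkedHashMap<>();

        @SuppressWarnings("unchecked")
        <T> PersistedValue<T> getValue(String name) {
            // Works both before activation (constructor) and during runtime (invoke).
            return (PersistedValue<T>) registered.computeIfAbsent(name, k -> new PersistedValue<T>());
        }
    }

    static final class MyStateFn {
        final PersistedStateProvider provider = new PersistedStateProvider();
        private final PersistedValue<Integer> seen;

        MyStateFn() {
            // Eager registration at instantiation time.
            seen = provider.getValue("seen");
        }

        void invoke(Object input) {
            seen.set(seen.get() == null ? 1 : seen.get() + 1);
            // Dynamic registration during runtime, picked up on first use.
            PersistedValue<Object> last = provider.getValue("last-input");
            last.set(input);
        }
    }

    public static void main(String[] args) {
        MyStateFn fn = new MyStateFn();
        fn.invoke("a");
        fn.invoke("b");
        // Both the eager and the dynamic registration are visible to the provider.
        System.out.println(fn.provider.registered.keySet());  // prints [seen, last-input]
    }
}
```

The useful property for the scenarios listed above is that `registered` ends up containing both `seen` and `last-input`, so a runtime inspecting the provider after activation still learns about state that was only registered dynamically.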




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440578921



##
File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() throws Exception {
		assertEquals(0, subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
	}
 
+	@Test
+	public void testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() throws Exception {
+		final OneInputStreamTaskTestHarness<String, String> testHarness =
+			new OneInputStreamTaskTestHarness<>(
+				OneInputStreamTask::new,
+				1, 1,
+				BasicTypeInfo.STRING_TYPE_INFO,
+				BasicTypeInfo.STRING_TYPE_INFO);
+
+		testHarness.setupOutputForSingletonOperatorChain();
+		StreamConfig streamConfig = testHarness.getStreamConfig();
+		streamConfig.setStreamOperator(new MapOperator());
+
+		testHarness.invoke();
+		testHarness.waitForTaskRunning();
+
+		TestTaskStateManager stateManager = new TestTaskStateManager();
+		MockEnvironment mockEnvironment = MockEnvironment.builder().setTaskStateManager(stateManager).build();
+		SubtaskCheckpointCoordinatorImpl subtaskCheckpointCoordinator = (SubtaskCheckpointCoordinatorImpl) new MockSubtaskCheckpointCoordinatorBuilder()
+			.setEnvironment(mockEnvironment)
+			.setUnalignedCheckpointEnabled(true)
+			.build();
+
+		final TestPooledBufferProvider bufferProvider = new TestPooledBufferProvider(Integer.MAX_VALUE, 4096);
+		ArrayList<Object> recordOrEvents = new ArrayList<>();
+		StreamElementSerializer<String> stringStreamElementSerializer = new StreamElementSerializer<>(StringSerializer.INSTANCE);
+		RecordOrEventCollectingResultPartitionWriter<StreamElement> resultPartitionWriter = new RecordOrEventCollectingResultPartitionWriter<>(recordOrEvents, bufferProvider, stringStreamElementSerializer);
+		mockEnvironment.addOutputs(Collections.singletonList(resultPartitionWriter));
+
+		OneInputStreamTask<String, String> task = testHarness.getTask();
+		final OperatorChain<String, OneInputStreamOperator<String, String>> operatorChain = new OperatorChain<>(task, StreamTask.createRecordWriterDelegate(streamConfig, mockEnvironment));

Review comment:
   It seems a bit inconsistent that only some variables are declared `final`; better to unify it and either use `final` everywhere or not at all.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sdlcwangsong commented on a change in pull request #12642: [FLINK-18282][docs-zh] retranslate the documentation home page

2020-06-15 Thread GitBox


sdlcwangsong commented on a change in pull request #12642:
URL: https://github.com/apache/flink/pull/12642#discussion_r440578667



##
File path: docs/index.zh.md
##
@@ -23,53 +23,71 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+
+Apache Flink 是一个在无界和有界数据流上进行状态计算的框架和分布式处理引擎。 Flink 已经可以在所有常见的集群环境中运行,并以 
in-memory 的速度和任意的规模进行计算。
+
 
-本文档适用于 Apache Flink {{ site.version_title}} 版本。本页面最近更新于 {% build_time %}.
+
+
 
-Apache Flink 是一个分布式流批一体化的开源平台。Flink 的核心是一个提供数据分发、通信以及自动容错的流计算引擎。Flink 
在流计算之上构建批处理,并且原生的支持迭代计算,内存管理以及程序优化。
+### 试用 Flink
 
-## 初步印象
+如果您有兴趣使用 Flink, 可以试试我们的教程:
 
-* **代码练习**: 跟随分步指南通过 Flink API 实现简单应用或查询。
-  * [实现 DataStream 应用]({% link try-flink/datastream_api.zh.md %})
-  * [书写 Table API 查询]({% link try-flink/table_api.zh.md %})
+* [DataStream API 进行欺诈检测]({% link try-flink/datastream_api.zh.md %})
+* [Table API 构建实时报表]({% link try-flink/table_api.zh.md %})
+* [Python API 教程]({% link try-flink/python_table_api.zh.md %})
+* [Flink 游乐场]({% link try-flink/flink-operations-playground.zh.md %})
 
-* **Docker 游乐场**: 你只需花几分钟搭建 Flink 沙盒环境,就可以探索和使用 Flink 了。
-  * [运行与管理 Flink 流处理应用]({% link try-flink/flink-operations-playground.zh.md %})
+### 学习 Flink
 
-* **概念**: 学习 Flink 的基本概念能更好地理解文档。
-  * [有状态流处理](concepts/stateful-stream-processing.html)
-  * [实时流处理](concepts/timely-stream-processing.html)
-  * [Flink 架构](concepts/flink-architecture.html)
-  * [术语表](concepts/glossary.html)
+* [操作培训]({% link learn-flink/index.zh.md %}) 包含了一系列的课程和练习,提供了对Flink的逐一介绍。

Review comment:
   @libenchao, thanks for your review





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440578551



##
File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() throws Exception {
		assertEquals(0, subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
	}
 
+	@Test
+	public void testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() throws Exception {
+		final OneInputStreamTaskTestHarness<String, String> testHarness =
+			new OneInputStreamTaskTestHarness<>(
+				OneInputStreamTask::new,
+				1, 1,

Review comment:
   nit: separate line for every argument





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135831#comment-17135831
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 4:24 AM:
--

runtime-web has only two build commands:

*npm ci* and *npm run build*

The former needs the node-sass v4.11 binding linux-x64-72_binding.node.

 

If you run *npm ci*, you'll find it is not downloadable:

node-sass/v4.11.0/linux-x64-72_binding.node

has already been deleted by the author of node-sass.

 

*The reason why you can build it successfully is that*

*you have a cached copy of node-sass/v4.11.0/linux-x64-72_binding.node.*

 

*If you use Ubuntu 19.10/20.04, the build will fail.*


was (Author: appleyuchi):
runtime-web has only two commands:

*npm ci* and *npm run build*

the former need node-sass  v4.11 linux-x64-72_binding.node

 

if you input *npm ci* you'll find it not downloadable.

node-sass/v4.11.0/linux-x64-72_binding.node

has already been deleted by the author of node-sass.

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12656: [FLINK-17666][table-planner-blink] Insert into partitioned table can …

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12656:
URL: https://github.com/apache/flink/pull/12656#issuecomment-644099955


   
   ## CI report:
   
   * e35de4a21f5cf007341a71a1461b796b286c9948 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3512)
 
   * 35454e38388a139ba65943e1c876cbcfb9d9e87c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3548)
 
   * 8cec6fea4e3f8fe1edab5aab8484b8d315feae11 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12661: [FLINK-18299][Formats(Json)]Add option in json format to parse timestamp in different standard

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12661:
URL: https://github.com/apache/flink/pull/12661#issuecomment-644211738


   
   ## CI report:
   
   * 204bbb584e74633c7cd82fb0358e5570d5d74563 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3527)
 
   * c1314ccf8e399484f78f3cea57d0229cb6a5d79b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3551)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12657: [FLINK-18086][tests][e2e][kafka] Migrate SQLClientKafkaITCase to use DDL and new options to create tables

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12657:
URL: https://github.com/apache/flink/pull/12657#issuecomment-644140243


   
   ## CI report:
   
   * 14cdb5d641588f9dfa7f8536fe92e3e57abd8e14 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3515)
 
   * 50108ca39ff114f9c85ae5f2b5cc72eb3dd4357a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12665: [FLINK-17886][docs-zh] Update Chinese documentation for new Watermark…

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12665:
URL: https://github.com/apache/flink/pull/12665#issuecomment-644514775


   
   ## CI report:
   
   * 539b338979344b960cbed6ab152eb32c117e121c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3552)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12642: [FLINK-18282][docs-zh] retranslate the documentation home page

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12642:
URL: https://github.com/apache/flink/pull/12642#issuecomment-643624564


   
   ## CI report:
   
   * d13e1778d93d1aa4f365057bda967ed708555962 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3486)
 
   * 2885aeb9578583a92bc995535af537f927662fbc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on pull request #12657: [FLINK-18086][tests][e2e][kafka] Migrate SQLClientKafkaITCase to use DDL and new options to create tables

2020-06-15 Thread GitBox


JingsongLi commented on pull request #12657:
URL: https://github.com/apache/flink/pull/12657#issuecomment-644522570


   50108ca39ff114f9c85ae5f2b5cc72eb3dd4357a Looks good to me.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136281#comment-17136281
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 4:22 AM:
--

*----- some basics -----*

*npm -> package.json/package-lock.json*

is just like

*mvn -> pom.xml*

 

*npm -> node.js*

is just like

*mvn -> java*

*----- npm dependency file relation -----*

npm ci --cache-max=0 --no-save (package-lock.json)

npm run build (package.json)

*----- isolated node.js environment -----*

*npm builds an isolated node.js environment in Flink*

the node.js version is v10.9.0

 

*maybe you will ask: why don't you compile it separately?*

Because my Ubuntu Desktop is 19.10, nvm with node.js (v10.9.0) is incompatible with the Angular version needed.

*----- the structure of the Flink build -----*

There are two kinds of build in the building of Flink: they mix the *npm build* into the *mvn build*.

 

 

 

 

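The two npm steps the comment refers to can be sketched as shell commands (the directory path is an assumption based on Flink's source layout, and note that `--cache-max` and `--no-save` each take a double dash):

```shell
# Hypothetical sketch of Flink's web UI build, which Maven normally drives
# through its frontend plugin rather than being run by hand.
cd flink-runtime-web/web-dashboard   # assumed location of package.json
npm ci --cache-max=0 --no-save       # reproducible install pinned by package-lock.json
npm run build                        # runs the "build" script defined in package.json
```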


> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12313: [FLINK-17005][docs] Translate the CREATE TABLE ... LIKE syntax documentation to Chinese

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12313:
URL: https://github.com/apache/flink/pull/12313#issuecomment-633389597


   
   ## CI report:
   
   * 05c952fdab2c5cf05b70fa26bd7049a827d06149 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2603)
 
   * e7dd7fa0fbbab91117bea70ac0c042e08d0637a8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Commented] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136281#comment-17136281
 ] 

appleyuchi commented on FLINK-18288:


*----- some basics -----*

*npm -> package.json/package-lock.json*

is just like

*mvn -> pom.xml*

 

*npm -> node.js*

is just like

*mvn -> java*

*----- npm dependency file relation -----*

npm ci --cache-max=0 --no-save (package-lock.json)

npm run build (package.json)

*----- isolated node.js environment -----*

*npm builds an isolated node.js environment in Flink*

the node.js version is v10.9.0

 

*maybe you will ask: why don't you compile it separately?*

Because my Ubuntu Desktop is 19.10, nvm with node.js (v4.11 needs v10.9.0) is incompatible with the Angular version needed.

*----- the structure of the Flink build -----*

There are two kinds of build in the building of Flink: they mix the *npm build* into the *mvn build*.

 

 

 

 

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18268) Correct Table API in Temporal table docs

2020-06-15 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18268.

Resolution: Fixed

master: fea20adef1ab5f722e418c1a89600e0786208469

release-1.11: 9f1053fdac461b0b7ec3597c5cd6b8262bcdfc9c

> Correct Table API in Temporal table docs
> 
>
> Key: FLINK-18268
> URL: https://issues.apache.org/jira/browse/FLINK-18268
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> see user's feedback[1]:
> The *getTableEnvironment* method has been dropped, but the documentation 
> still uses it
> {code:java}
>  val tEnv = TableEnvironment.getTableEnvironment(env)
> {code}
> [1][http://apache-flink.147419.n8.nabble.com/flink-TableEnvironment-can-not-call-getTableEnvironment-api-tt3871.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #12630: [FLINK-18268][docs] Correct Table API in Temporal table docs

2020-06-15 Thread GitBox


JingsongLi merged pull request #12630:
URL: https://github.com/apache/flink/pull/12630


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-18315) Insert into partitioned table can fail with values

2020-06-15 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-18315:


Assignee: Danny Chen

> Insert into partitioned table can fail with values
> --
>
> Key: FLINK-18315
> URL: https://issues.apache.org/jira/browse/FLINK-18315
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17135831#comment-17135831
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 4:11 AM:
--

runtime-web has only two build commands:

*npm ci* and *npm run build*

the former needs node-sass v4.11's linux-x64-72_binding.node

 

if you run *npm ci* you'll find it is not downloadable:

node-sass/v4.11.0/linux-x64-72_binding.node

has already been deleted by the author of node-sass.
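A hedged way to check the claim above from the command line; the URL follows node-sass's usual GitHub release-asset layout, so treat it as an assumption:

```shell
# node-sass fetches a prebuilt binding per platform/Node ABI at install time.
# If the asset was removed from the v4.11.0 release, this HEAD request fails,
# which would match the "not downloadable" symptom described above.
curl -fsI https://github.com/sass/node-sass/releases/download/v4.11.0/linux-x64-72_binding.node \
  || echo "binding not downloadable"
```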


was (Author: appleyuchi):
the runtime-web build has only two commands:

npm ci and npm run build

the former needs node-sass, which in turn needs a v4.11 dependency package, linux-x64-72_binding.node

if you run npm ci you'll find it cannot be downloaded.

node-sass/v4.11.0/linux-x64-72_binding.node has already been deleted by its GitHub author, *so I can only assume a historical backup of this file exists in your past environments.*

 

You obviously modified something before compiling.

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] klion26 commented on pull request #12313: [FLINK-17005][docs] Translate the CREATE TABLE ... LIKE syntax documentation to Chinese

2020-06-15 Thread GitBox


klion26 commented on pull request #12313:
URL: https://github.com/apache/flink/pull/12313#issuecomment-644519136


   @yangyichao-mango thanks for the work, could you please use `git rebase` 
instead of `git merge` to resolve the conflict. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136274#comment-17136274
 ] 

Yang Wang commented on FLINK-18288:
---

I am not familiar with npm. cc [~vthinkxie], could you please have a look?

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18315) Insert into partitioned table can fail with values

2020-06-15 Thread Danny Chen (Jira)
Danny Chen created FLINK-18315:
--

 Summary: Insert into partitioned table can fail with values
 Key: FLINK-18315
 URL: https://issues.apache.org/jira/browse/FLINK-18315
 Project: Flink
  Issue Type: Task
  Components: Table SQL / API
Affects Versions: 1.10.0
Reporter: Danny Chen
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #12630: [FLINK-18268][docs] Correct Table API in Temporal table docs

2020-06-15 Thread GitBox


JingsongLi commented on a change in pull request #12630:
URL: https://github.com/apache/flink/pull/12630#discussion_r440572767



##
File path: docs/dev/table/streaming/temporal_tables.md
##
@@ -260,32 +260,45 @@ See also the page about [joins for continuous 
queries](joins.html) for more info
 {% highlight java %}
 // Get the stream and table environments.
 StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
-StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
-
-// Create an HBaseTableSource as a temporal table which implements 
LookableTableSource
-// In the real setup, you should replace this with your own table.
-HBaseTableSource rates = new HBaseTableSource(conf, "Rates");
-rates.setRowKey("currency", String.class);   // currency as the primary key
-rates.addColumn("fam1", "rate", Double.class);
-
-// register the temporal table into environment, then we can query it in sql
-tEnv.registerTableSource("Rates", rates);
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();

Review comment:
   Drop `.useBlinkPlanner().inStreamingMode()`? Because it is the default value.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #12630: [FLINK-18268][docs] Correct Table API in Temporal table docs

2020-06-15 Thread GitBox


JingsongLi commented on a change in pull request #12630:
URL: https://github.com/apache/flink/pull/12630#discussion_r440572767



##
File path: docs/dev/table/streaming/temporal_tables.md
##
@@ -260,32 +260,45 @@ See also the page about [joins for continuous 
queries](joins.html) for more info
 {% highlight java %}
 // Get the stream and table environments.
 StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
-StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
-
-// Create an HBaseTableSource as a temporal table which implements 
LookableTableSource
-// In the real setup, you should replace this with your own table.
-HBaseTableSource rates = new HBaseTableSource(conf, "Rates");
-rates.setRowKey("currency", String.class);   // currency as the primary key
-rates.addColumn("fam1", "rate", Double.class);
-
-// register the temporal table into environment, then we can query it in sql
-tEnv.registerTableSource("Rates", rates);
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();

Review comment:
   Drop `.useBlinkPlanner().inStreamingMode()`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] DashShen edited a comment on pull request #12369: [FLINK-17678][Connectors/HBase]Support fink-sql-connector-hbase

2020-06-15 Thread GitBox


DashShen edited a comment on pull request #12369:
URL: https://github.com/apache/flink/pull/12369#issuecomment-644514878


   @wuchong Sorry, due to personal health reasons, I don't have enough time to 
finish this work recently. You can assign this to others to finish. I'll get 
involved in community work again once my back pain recovers.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] DashShen edited a comment on pull request #12369: [FLINK-17678][Connectors/HBase]Support fink-sql-connector-hbase

2020-06-15 Thread GitBox


DashShen edited a comment on pull request #12369:
URL: https://github.com/apache/flink/pull/12369#issuecomment-644514878


   @wuchong Sorry, due to personal health reasons, I don't have enough time to 
finish this work recently. You can assign this to others to finish. I'll get 
involved in community work again once my back pain recovers.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440570913



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() 
throws Exception {
assertEquals(0, 
subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
}
 
+   @Test
+   public void 
testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() 
throws Exception {
+   final OneInputStreamTaskTestHarness testHarness 
=
+   new OneInputStreamTaskTestHarness<>(
+   OneInputStreamTask::new,
+   1, 1,
+   BasicTypeInfo.STRING_TYPE_INFO,
+   BasicTypeInfo.STRING_TYPE_INFO);
+
+   testHarness.setupOutputForSingletonOperatorChain();
+   StreamConfig streamConfig = testHarness.getStreamConfig();
+   streamConfig.setStreamOperator(new MapOperator());
+
+   testHarness.invoke();
+   testHarness.waitForTaskRunning();
+
+   TestTaskStateManager stateManager = new TestTaskStateManager();
+   MockEnvironment mockEnvironment = 
MockEnvironment.builder().setTaskStateManager(stateManager).build();
+   SubtaskCheckpointCoordinatorImpl subtaskCheckpointCoordinator = 
(SubtaskCheckpointCoordinatorImpl) new MockSubtaskCheckpointCoordinatorBuilder()
+   .setEnvironment(mockEnvironment)
+   .setUnalignedCheckpointEnabled(true)
+   .build();
+
+   final TestPooledBufferProvider bufferProvider = new 
TestPooledBufferProvider(Integer.MAX_VALUE, 4096);
+   ArrayList recordOrEvents = new ArrayList<>();
+   StreamElementSerializer stringStreamElementSerializer = 
new StreamElementSerializer<>(StringSerializer.INSTANCE);
+   RecordOrEventCollectingResultPartitionWriter 
resultPartitionWriter = new 
RecordOrEventCollectingResultPartitionWriter<>(recordOrEvents, bufferProvider, 
stringStreamElementSerializer);
+   
mockEnvironment.addOutputs(Collections.singletonList(resultPartitionWriter));
+
+   OneInputStreamTask task = testHarness.getTask();
+   final OperatorChain> operatorChain = new OperatorChain<>(task, 
StreamTask.createRecordWriterDelegate(streamConfig, mockEnvironment));
+   long checkpointId = 42L;
+   // notify checkpoint aborted before execution.
+   
subtaskCheckpointCoordinator.notifyCheckpointAborted(checkpointId, 
operatorChain, () -> true);
+   
subtaskCheckpointCoordinator.getChannelStateWriter().start(checkpointId, 
CheckpointOptions.forCheckpointWithDefaultLocation());

Review comment:
   There is no need to call `getChannelStateWriter().start`, since the 
checkpoint will never actually be executed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on a change in pull request #12657: [FLINK-18086][tests][e2e][kafka] Migrate SQLClientKafkaITCase to use DDL and new options to create tables

2020-06-15 Thread GitBox


godfreyhe commented on a change in pull request #12657:
URL: https://github.com/apache/flink/pull/12657#discussion_r440569344



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java
##
@@ -462,7 +462,7 @@ public ResolvedExpression parseSqlExpression(String 
sqlExpression, TableSchema i
@Override
public ResultDescriptor executeQuery(String sessionId, String query) 
throws SqlExecutionException {
final ExecutionContext context = 
getExecutionContext(sessionId);
-   return executeQueryInternal(sessionId, context, query);
+   return context.wrapClassLoader(() -> 
executeQueryInternal(sessionId, context, query));

Review comment:
   Most operations in the `executeQueryInternal` and `executeUpdateInternal` 
methods are already wrapped in the user classloader; only `deployer.deploy()` 
needs the wrapping. Otherwise, we had better remove those wrappers. Btw, could 
you add some tests in the sql client to verify the fix?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440570454



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() 
throws Exception {
assertEquals(0, 
subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
}
 
+   @Test
+   public void 
testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() 
throws Exception {
+   final OneInputStreamTaskTestHarness testHarness 
=
+   new OneInputStreamTaskTestHarness<>(
+   OneInputStreamTask::new,
+   1, 1,
+   BasicTypeInfo.STRING_TYPE_INFO,
+   BasicTypeInfo.STRING_TYPE_INFO);
+
+   testHarness.setupOutputForSingletonOperatorChain();
+   StreamConfig streamConfig = testHarness.getStreamConfig();
+   streamConfig.setStreamOperator(new MapOperator());
+
+   testHarness.invoke();
+   testHarness.waitForTaskRunning();
+
+   TestTaskStateManager stateManager = new TestTaskStateManager();
+   MockEnvironment mockEnvironment = 
MockEnvironment.builder().setTaskStateManager(stateManager).build();
+   SubtaskCheckpointCoordinatorImpl subtaskCheckpointCoordinator = 
(SubtaskCheckpointCoordinatorImpl) new MockSubtaskCheckpointCoordinatorBuilder()
+   .setEnvironment(mockEnvironment)
+   .setUnalignedCheckpointEnabled(true)
+   .build();
+
+   final TestPooledBufferProvider bufferProvider = new 
TestPooledBufferProvider(Integer.MAX_VALUE, 4096);
+   ArrayList recordOrEvents = new ArrayList<>();
+   StreamElementSerializer stringStreamElementSerializer = 
new StreamElementSerializer<>(StringSerializer.INSTANCE);
+   RecordOrEventCollectingResultPartitionWriter 
resultPartitionWriter = new 
RecordOrEventCollectingResultPartitionWriter<>(recordOrEvents, bufferProvider, 
stringStreamElementSerializer);

Review comment:
   Declare this as a `ResultPartitionWriter` instead of 
`RecordOrEventCollectingResultPartitionWriter` for simplicity; it would also be 
better to split the arguments onto separate lines, because this line seems too 
long.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] DashShen commented on pull request #12369: [FLINK-17678][Connectors/HBase]Support fink-sql-connector-hbase

2020-06-15 Thread GitBox


DashShen commented on pull request #12369:
URL: https://github.com/apache/flink/pull/12369#issuecomment-644514878


   @wuchong Sorry, due to personal health reasons, I don't have enough time to 
finish this work recently. You can assign this to others to finish.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12661: [FLINK-18299][Formats(Json)]Add option in json format to parse timestamp in different standard

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12661:
URL: https://github.com/apache/flink/pull/12661#issuecomment-644211738


   
   ## CI report:
   
   * 204bbb584e74633c7cd82fb0358e5570d5d74563 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3527)
 
   * c1314ccf8e399484f78f3cea57d0229cb6a5d79b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12663: [hotfix][docs] Using a savepoint prevents data loss

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12663:
URL: https://github.com/apache/flink/pull/12663#issuecomment-644317639


   
   ## CI report:
   
   * babfbece617757adcd5f1c424610d6a83937d549 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3545)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12665: [FLINK-17886][docs-zh] Update Chinese documentation for new Watermark…

2020-06-15 Thread GitBox


flinkbot commented on pull request #12665:
URL: https://github.com/apache/flink/pull/12665#issuecomment-644514775


   
   ## CI report:
   
   * 539b338979344b960cbed6ab152eb32c117e121c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12630: [FLINK-18268][docs] Correct Table API in Temporal table docs

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12630:
URL: https://github.com/apache/flink/pull/12630#issuecomment-643173834


   
   ## CI report:
   
   * 535ba0fc8ca6ea67ded71fd9a80a24762b381a88 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3390)
 
   * ed3f09a47cd103818f3563d519da24b22da65798 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440569809



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() 
throws Exception {
assertEquals(0, 
subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
}
 
+   @Test
+   public void 
testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() 
throws Exception {
+   final OneInputStreamTaskTestHarness testHarness 
=
+   new OneInputStreamTaskTestHarness<>(
+   OneInputStreamTask::new,
+   1, 1,
+   BasicTypeInfo.STRING_TYPE_INFO,
+   BasicTypeInfo.STRING_TYPE_INFO);
+
+   testHarness.setupOutputForSingletonOperatorChain();
+   StreamConfig streamConfig = testHarness.getStreamConfig();
+   streamConfig.setStreamOperator(new MapOperator());
+
+   testHarness.invoke();
+   testHarness.waitForTaskRunning();
+
+   TestTaskStateManager stateManager = new TestTaskStateManager();
+   MockEnvironment mockEnvironment = 
MockEnvironment.builder().setTaskStateManager(stateManager).build();
+   SubtaskCheckpointCoordinatorImpl subtaskCheckpointCoordinator = 
(SubtaskCheckpointCoordinatorImpl) new MockSubtaskCheckpointCoordinatorBuilder()
+   .setEnvironment(mockEnvironment)
+   .setUnalignedCheckpointEnabled(true)

Review comment:
   No need for unaligned mode; the previous deadlock was actually found in 
aligned mode.









[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440569953



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() 
throws Exception {
assertEquals(0, 
subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
}
 
+   @Test
+   public void 
testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() 
throws Exception {
+   final OneInputStreamTaskTestHarness testHarness 
=
+   new OneInputStreamTaskTestHarness<>(
+   OneInputStreamTask::new,
+   1, 1,
+   BasicTypeInfo.STRING_TYPE_INFO,
+   BasicTypeInfo.STRING_TYPE_INFO);
+
+   testHarness.setupOutputForSingletonOperatorChain();
+   StreamConfig streamConfig = testHarness.getStreamConfig();
+   streamConfig.setStreamOperator(new MapOperator());
+
+   testHarness.invoke();
+   testHarness.waitForTaskRunning();
+
+   TestTaskStateManager stateManager = new TestTaskStateManager();
+   MockEnvironment mockEnvironment = 
MockEnvironment.builder().setTaskStateManager(stateManager).build();

Review comment:
   I guess we do not need to set `stateManager` for the environment.









[GitHub] [flink] flinkbot edited a comment on pull request #12256: [FLINK-17018][runtime] Allocates slots in bulks for pipelined region scheduling

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12256:
URL: https://github.com/apache/flink/pull/12256#issuecomment-631025695


   
   ## CI report:
   
   * f5939316d96975bea8e80acb52bc17683087 UNKNOWN
   * 1253ab0db340f4d3aac238a3c2bc8f8f4abb6941 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3530)
 
   * a637f6415f4f46bc9cc576e0a170cab37266c17a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3549)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440569676



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() 
throws Exception {
assertEquals(0, 
subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
}
 
+   @Test
+   public void 
testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() 
throws Exception {
+   final OneInputStreamTaskTestHarness testHarness 
=
+   new OneInputStreamTaskTestHarness<>(
+   OneInputStreamTask::new,
+   1, 1,
+   BasicTypeInfo.STRING_TYPE_INFO,
+   BasicTypeInfo.STRING_TYPE_INFO);
+
+   testHarness.setupOutputForSingletonOperatorChain();
+   StreamConfig streamConfig = testHarness.getStreamConfig();
+   streamConfig.setStreamOperator(new MapOperator());
+
+   testHarness.invoke();
+   testHarness.waitForTaskRunning();
+
+   TestTaskStateManager stateManager = new TestTaskStateManager();
+   MockEnvironment mockEnvironment = 
MockEnvironment.builder().setTaskStateManager(stateManager).build();
+   SubtaskCheckpointCoordinatorImpl subtaskCheckpointCoordinator = 
(SubtaskCheckpointCoordinatorImpl) new MockSubtaskCheckpointCoordinatorBuilder()

Review comment:
   SubtaskCheckpointCoordinatorImpl -> SubtaskCheckpointCoordinator, so we 
do not need the cast.
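A minimal, self-contained sketch of the suggestion above (the types below are hypothetical stand-ins for the Flink classes, not the real ones): declaring the variable with the interface type removes the need for the cast, since the test only calls interface methods.

```java
public class InterfaceTypeExample {
    // Stand-ins for the real Flink types; only the shape matters here.
    interface SubtaskCheckpointCoordinator {
        int getAsyncCheckpointRunnableSize();
    }

    static class SubtaskCheckpointCoordinatorImpl implements SubtaskCheckpointCoordinator {
        @Override
        public int getAsyncCheckpointRunnableSize() {
            return 0;
        }
    }

    // A builder that, like the mock builder in the test, returns the interface type.
    static SubtaskCheckpointCoordinator build() {
        return new SubtaskCheckpointCoordinatorImpl();
    }

    public static void main(String[] args) {
        // Before: SubtaskCheckpointCoordinatorImpl c = (SubtaskCheckpointCoordinatorImpl) build();
        // After: no cast needed, because only interface methods are used.
        SubtaskCheckpointCoordinator coordinator = build();
        System.out.println(coordinator.getAsyncCheckpointRunnableSize());
    }
}
```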









[GitHub] [flink] yangyichao-mango commented on pull request #12313: [FLINK-17005][docs] Translate the CREATE TABLE ... LIKE syntax documentation to Chinese

2020-06-15 Thread GitBox


yangyichao-mango commented on pull request #12313:
URL: https://github.com/apache/flink/pull/12313#issuecomment-644513011


   > @yangyichao-mango Seems there are some conflicts need to be resolved, 
could you please rebase the new master to resolve them?
   
   Thx a lot. I've resolved those conflicts.







[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440568203



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java
##
@@ -275,7 +277,7 @@ public void checkpointState(
@Override
public void notifyCheckpointComplete(long checkpointId, 
OperatorChain operatorChain, Supplier isRunning) throws 
Exception {
if (isRunning.get()) {
-   LOG.debug("Notification of complete checkpoint for task 
{}", taskName);
+   LOG.debug("Notification of complete checkpoint {} for 
task {}", checkpointId, taskName);

Review comment:
   nit: it should be a separate hotfix commit.

##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java
##
@@ -291,7 +293,7 @@ public void notifyCheckpointAborted(long checkpointId, 
OperatorChain opera
 
Exception previousException = null;
if (isRunning.get()) {
-   LOG.debug("Notification of aborted checkpoint for task 
{}", taskName);
+   LOG.debug("Notification of aborted checkpoint {} for 
task {}", checkpointId, taskName);

Review comment:
   ditto:
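For illustration, a self-contained sketch of SLF4J-style `{}` substitution (the `format` helper below is a hypothetical stand-in, not the SLF4J implementation), showing how adding `checkpointId` as an extra argument enriches the debug message:

```java
public class LogPlaceholderExample {
    // Minimal stand-in for SLF4J-style "{}" substitution: placeholders are
    // filled left to right with the given arguments.
    static String format(String template, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = template.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(template, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        return sb.append(template.substring(from)).toString();
    }

    public static void main(String[] args) {
        // Before the fix, the message only contained the task name; adding the
        // checkpoint id makes the notification traceable to a specific checkpoint.
        System.out.println(
            format("Notification of complete checkpoint {} for task {}", 42L, "map-1"));
    }
}
```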









[GitHub] [flink] flinkbot commented on pull request #12665: [FLINK-17886][docs-zh] Update Chinese documentation for new Watermark…

2020-06-15 Thread GitBox


flinkbot commented on pull request #12665:
URL: https://github.com/apache/flink/pull/12665#issuecomment-644512331


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 539b338979344b960cbed6ab152eb32c117e121c (Tue Jun 16 
03:39:13 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] libenchao commented on a change in pull request #12642: [FLINK-18282][docs-zh] retranslate the documentation home page

2020-06-15 Thread GitBox


libenchao commented on a change in pull request #12642:
URL: https://github.com/apache/flink/pull/12642#discussion_r440568132



##
File path: docs/index.zh.md
##
@@ -23,53 +23,71 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+
+Apache Flink 是一个在无界和有界数据流上进行状态计算的框架和分布式处理引擎。 Flink 已经可以在所有常见的集群环境中运行,并以 
in-memory 的速度和任意的规模进行计算。
+
 
-本文档适用于 Apache Flink {{ site.version_title}} 版本。本页面最近更新于 {% build_time %}.
+
+
 
-Apache Flink 是一个分布式流批一体化的开源平台。Flink 的核心是一个提供数据分发、通信以及自动容错的流计算引擎。Flink 
在流计算之上构建批处理,并且原生的支持迭代计算,内存管理以及程序优化。
+### 试用 Flink
 
-## 初步印象
+如果您有兴趣使用 Flink, 可以试试我们的教程:
 
-* **代码练习**: 跟随分步指南通过 Flink API 实现简单应用或查询。
-  * [实现 DataStream 应用]({% link try-flink/datastream_api.zh.md %})
-  * [书写 Table API 查询]({% link try-flink/table_api.zh.md %})
+* [DataStream API 进行欺诈检测]({% link try-flink/datastream_api.zh.md %})
+* [Table API 构建实时报表]({% link try-flink/table_api.zh.md %})
+* [Python API 教程]({% link try-flink/python_table_api.zh.md %})
+* [Flink 游乐场]({% link try-flink/flink-operations-playground.zh.md %})
 
-* **Docker 游乐场**: 你只需花几分钟搭建 Flink 沙盒环境,就可以探索和使用 Flink 了。
-  * [运行与管理 Flink 流处理应用]({% link try-flink/flink-operations-playground.zh.md %})
+### 学习 Flink
 
-* **概念**: 学习 Flink 的基本概念能更好地理解文档。
-  * [有状态流处理](concepts/stateful-stream-processing.html)
-  * [实时流处理](concepts/timely-stream-processing.html)
-  * [Flink 架构](concepts/flink-architecture.html)
-  * [术语表](concepts/glossary.html)
+* [操作培训]({% link learn-flink/index.zh.md %}) 包含了一系列的课程和练习,提供了对Flink的逐一介绍。

Review comment:
   ```suggestion
   * [操作培训]({% link learn-flink/index.zh.md %}) 包含了一系列的课程和练习,提供了对 Flink 的逐一介绍。
   ```









[jira] [Updated] (FLINK-17886) Update Chinese documentation for new WatermarkGenerator/WatermarkStrategies

2020-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17886:
---
Labels: pull-request-available  (was: )

> Update Chinese documentation for new WatermarkGenerator/WatermarkStrategies
> ---
>
> Key: FLINK-17886
> URL: https://issues.apache.org/jira/browse/FLINK-17886
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Reporter: Aljoscha Krettek
>Assignee: Yichao Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> We need to update the Chinese documentation according to FLINK-17773.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yangyichao-mango opened a new pull request #12665: [FLINK-17886][docs-zh] Update Chinese documentation for new Watermark…

2020-06-15 Thread GitBox


yangyichao-mango opened a new pull request #12665:
URL: https://github.com/apache/flink/pull/12665


   
   
   ## What is the purpose of the change
   
   *Update Chinese documentation for new WatermarkGenerator/WatermarkStrategies*
   
   
   ## Brief change log
   
 - *Update Chinese documentation dev/event_time.zh.md, 
dev/event_timestamp_extractors.zh.md, dev/event_timestamps_watermarks.zh.md*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] danny0405 commented on a change in pull request #12632: [FLINK-18134][FLINK-18135][docs] Add documentation for Debezium and Canal formats

2020-06-15 Thread GitBox


danny0405 commented on a change in pull request #12632:
URL: https://github.com/apache/flink/pull/12632#discussion_r440564919



##
File path: docs/dev/table/connectors/formats/canal.md
##
@@ -0,0 +1,175 @@
+---
+title: "Canal Format"
+nav-title: Canal
+nav-parent_id: sql-formats
+nav-pos: 5
+---
+
+
+Changelog-Data-Capture Format
+Format: Deserialization Schema
+
+* This will be replaced by the TOC
+{:toc}
+
+[Canal](https://github.com/alibaba/canal/wiki) is a CDC (Changelog Data 
Capture) tool that can stream changes in real time from MySQL into other 
systems. Canal provides a unified format schema for changelogs and supports 
serializing messages using JSON and 
[protobuf](https://developers.google.com/protocol-buffers).
+
+Flink supports interpreting Canal JSON messages as INSERT/UPDATE/DELETE 
messages in the Flink SQL system. This feature is useful in many cases, such 
as synchronizing incremental data from databases to other systems, auditing 
logs, materializing views on databases, temporal joins on the changing 
history of a database table, and so on.

Review comment:
   The Wikipedia page on change data capture seems good to link here: 
https://en.wikipedia.org/wiki/Change_data_capture
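As a rough illustration of the interpretation described in the paragraph above, a self-contained sketch (the `type` values follow Canal's JSON; the row-kind strings and mapping are an assumption for illustration, not Flink's actual implementation) of translating a changelog operation into changelog row kinds:

```java
public class CanalTypeExample {
    // Map a Canal operation type to the row kinds a CDC format might emit.
    // An UPDATE is split into a retraction of the old row and an insertion
    // of the new one.
    static String[] toRowKinds(String canalType) {
        switch (canalType) {
            case "INSERT": return new String[] {"+I"};
            case "UPDATE": return new String[] {"-U", "+U"};
            case "DELETE": return new String[] {"-D"};
            default: throw new IllegalArgumentException("unknown type: " + canalType);
        }
    }

    public static void main(String[] args) {
        System.out.println(String.join(",", toRowKinds("UPDATE")));
    }
}
```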









[GitHub] [flink] jiasheng55 closed pull request #12359: [FLINK-15448][yarn] Add host and port info to yarn ResourceID

2020-06-15 Thread GitBox


jiasheng55 closed pull request #12359:
URL: https://github.com/apache/flink/pull/12359


   







[GitHub] [flink] leonardBang commented on a change in pull request #12630: [FLINK-18268][docs] Correct Table API in Temporal table docs

2020-06-15 Thread GitBox


leonardBang commented on a change in pull request #12630:
URL: https://github.com/apache/flink/pull/12630#discussion_r440563526



##
File path: docs/dev/table/streaming/temporal_tables.md
##
@@ -260,32 +260,43 @@ See also the page about [joins for continuous 
queries](joins.html) for more info
 {% highlight java %}
 // Get the stream and table environments.
 StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
-StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
-
-// Create an HBaseTableSource as a temporal table which implements 
LookableTableSource
-// In the real setup, you should replace this with your own table.
-HBaseTableSource rates = new HBaseTableSource(conf, "Rates");
-rates.setRowKey("currency", String.class);   // currency as the primary key
-rates.addColumn("fam1", "rate", Double.class);
-
-// register the temporal table into environment, then we can query it in sql
-tEnv.registerTableSource("Rates", rates);
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
+// or TableEnvironment tEnv = TableEnvironment.create(settings);
+
+// Define an HBase table with DDL, then we can use it as a temporal table in 
sql

Review comment:
   The pronunciation of the 'H' in 'HBase' is 'eɪtʃ'; the first sound is a 
vowel, so we should use 'an HBase'.









[jira] [Reopened] (FLINK-18314) There are some problems in docs

2020-06-15 Thread jinxin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jinxin reopened FLINK-18314:


Sorry, I changed it to the wrong state. I mean it's not fixed yet.
 

> There are some problems in docs
> ---
>
> Key: FLINK-18314
> URL: https://issues.apache.org/jira/browse/FLINK-18314
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0, 1.12.0
>Reporter: jinxin
>Priority: Major
>
> In this page 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html.|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html]
> the Maven dependency should be flink-connector-kafka-0.11 instead of 
> flink-connector-kafka-011, which is missing a `.`.
> flink-connector-kafka-010_2.11 has the same problem.
> I read the source code; the content of kafka.md is wrong.
> On the same page, the DDL option should be 
> {{properties.bootstrap.servers}} instead of {{properties.bootstrap.server}}.
> When I used {{properties.bootstrap.server}}, I got an exception:
> Caused by: org.apache.flink.table.api.ValidationException: One or more 
> required options are missing.
> Missing required options are:
> properties.bootstrap.servers
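The validation failure can be illustrated with a small self-contained sketch (a hypothetical helper, not Flink's actual factory/validation code): a misspelled key simply never satisfies the required-options check, so it is reported as missing.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class OptionsValidationExample {
    // Collect every required option that is absent from the given options.
    static List<String> missingOptions(Set<String> required, Map<String, String> given) {
        List<String> missing = new ArrayList<>();
        for (String key : required) {
            if (!given.containsKey(key)) {
                missing.add(key);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        Set<String> required = Set.of("properties.bootstrap.servers");
        Map<String, String> given = Map.of(
            "connector", "kafka",
            "topic", "user_behavior",
            "properties.bootstrap.server", "localhost:9092"); // note: missing the trailing 's'
        // The misspelled key does not match, so the required one is reported missing.
        System.out.println(missingOptions(required, given));
    }
}
```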





[jira] [Resolved] (FLINK-18314) There are some problems in docs

2020-06-15 Thread jinxin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jinxin resolved FLINK-18314.

Resolution: Fixed

> There are some problems in docs
> ---
>
> Key: FLINK-18314
> URL: https://issues.apache.org/jira/browse/FLINK-18314
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0, 1.12.0
>Reporter: jinxin
>Priority: Major
>
> In this page 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html.|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html]
> the Maven dependency should be flink-connector-kafka-0.11 instead of 
> flink-connector-kafka-011, which is missing a `.`.
> flink-connector-kafka-010_2.11 has the same problem.
> I read the source code; the content of kafka.md is wrong.
> On the same page, the DDL option should be 
> {{properties.bootstrap.servers}} instead of {{properties.bootstrap.server}}.
> When I used {{properties.bootstrap.server}}, I got an exception:
> Caused by: org.apache.flink.table.api.ValidationException: One or more 
> required options are missing.
> Missing required options are:
> properties.bootstrap.servers





[jira] [Updated] (FLINK-18314) There are some problems in docs

2020-06-15 Thread jinxin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jinxin updated FLINK-18314:
---
Affects Version/s: 1.12.0

> There are some problems in docs
> ---
>
> Key: FLINK-18314
> URL: https://issues.apache.org/jira/browse/FLINK-18314
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0, 1.12.0
>Reporter: jinxin
>Priority: Major
>
> In this page 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html.|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html]
> the Maven dependency should be flink-connector-kafka-0.11 instead of 
> flink-connector-kafka-011, which is missing a `.`.
> flink-connector-kafka-010_2.11 has the same problem.
> I read the source code; the content of kafka.md is wrong.
> On the same page, the DDL option should be 
> {{properties.bootstrap.servers}} instead of {{properties.bootstrap.server}}.
> When I used {{properties.bootstrap.server}}, I got an exception:
> Caused by: org.apache.flink.table.api.ValidationException: One or more 
> required options are missing.
> Missing required options are:
> properties.bootstrap.servers





[jira] [Updated] (FLINK-18314) There are some problems in docs

2020-06-15 Thread jinxin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jinxin updated FLINK-18314:
---
Description: 
In this page 
[https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html.|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html]

the Maven dependency should be flink-connector-kafka-0.11 instead of 
flink-connector-kafka-011, which is missing a `.`.

flink-connector-kafka-010_2.11 has the same problem.

I read the source code; the content of kafka.md is wrong.

On the same page, the DDL option should be 
{{properties.bootstrap.servers}} instead of {{properties.bootstrap.server}}.

When I used {{properties.bootstrap.server}}, I got an exception:

Caused by: org.apache.flink.table.api.ValidationException: One or more required 
options are missing.

Missing required options are:

properties.bootstrap.servers

  was:
In this page 
[https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html.|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html]

Maven dependency should be flink-connector-kafka-0.11 instead of 
flink-connector-kafka-011, which is missing `.` 

flink-connector-kafka-010_2.11 has the same problem.

I read the source code, the content of kafka.md is wrong.

In the same page,DDL should be 

{{`properties.bootstrap.servers` instead of }}{{properties.bootstrap.server.}}

{{when i used}} {{properties.bootstrap.server,i got a exception :}}

{{}}

Caused by: org.apache.flink.table.api.ValidationException: One or more required 
options are missing.

{{}}

Missing required options are:

{{}}

properties.bootstrap.servers

{{}}


> There are some problems in docs
> ---
>
> Key: FLINK-18314
> URL: https://issues.apache.org/jira/browse/FLINK-18314
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0
>Reporter: jinxin
>Priority: Major
>
> In this page 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html.|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html]
> the Maven dependency should be flink-connector-kafka-0.11 instead of 
> flink-connector-kafka-011, which is missing a `.`.
> flink-connector-kafka-010_2.11 has the same problem.
> I read the source code; the content of kafka.md is wrong.
> On the same page, the DDL option should be 
> {{properties.bootstrap.servers}} instead of {{properties.bootstrap.server}}.
> When I used {{properties.bootstrap.server}}, I got an exception:
> Caused by: org.apache.flink.table.api.ValidationException: One or more 
> required options are missing.
> Missing required options are:
> properties.bootstrap.servers





[jira] [Created] (FLINK-18314) There are some problems in docs

2020-06-15 Thread jinxin (Jira)
jinxin created FLINK-18314:
--

 Summary: There are some problems in docs
 Key: FLINK-18314
 URL: https://issues.apache.org/jira/browse/FLINK-18314
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.11.0
Reporter: jinxin


In this page 
[https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html.|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html]

the Maven dependency should be flink-connector-kafka-0.11 instead of 
flink-connector-kafka-011, which is missing a `.`.

flink-connector-kafka-010_2.11 has the same problem.

I read the source code; the content of kafka.md is wrong.

On the same page, the DDL option should be 
{{properties.bootstrap.servers}} instead of {{properties.bootstrap.server}}.

When I used {{properties.bootstrap.server}}, I got an exception:

Caused by: org.apache.flink.table.api.ValidationException: One or more required 
options are missing.

Missing required options are:

properties.bootstrap.servers





[jira] [Commented] (FLINK-18312) SavepointStatusHandler and StaticFileServerHandler not redirect

2020-06-15 Thread Yu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136258#comment-17136258
 ] 

Yu Wang commented on FLINK-18312:
-

I think there is an issue in "AbstractAsynchronousOperationHandlers": this 
handler keeps a local in-memory cache, "completedOperationCache", that stores 
the pending savepoint operation before redirecting the request to the leader 
JobManager, and this cache does not seem to be synced between all the 
JobManagers. As a result, only the JobManager that received the savepoint 
trigger request can look up the status of the savepoint, while the others can 
only return 404.
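A self-contained sketch of the behavior described above (the classes and method names below are hypothetical stand-ins, not the actual Flink handler code): because each JobManager's handler keeps its own cache, only the instance that received the trigger request can answer the status query.

```java
import java.util.HashMap;
import java.util.Map;

public class LocalCacheExample {
    // Each JobManager's REST handler owns a private operation cache,
    // mimicking the per-instance completedOperationCache.
    static class SavepointHandler {
        private final Map<String, String> completedOperationCache = new HashMap<>();

        void trigger(String savepointId) {
            completedOperationCache.put(savepointId, "IN_PROGRESS");
        }

        // 200 if this instance knows the operation, 404 otherwise.
        int statusCode(String savepointId) {
            return completedOperationCache.containsKey(savepointId) ? 200 : 404;
        }
    }

    public static void main(String[] args) {
        SavepointHandler leader = new SavepointHandler();
        SavepointHandler standby = new SavepointHandler();

        leader.trigger("savepoint-1");

        System.out.println(leader.statusCode("savepoint-1"));  // the leader can answer
        System.out.println(standby.statusCode("savepoint-1")); // the standby's cache is empty
    }
}
```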

> SavepointStatusHandler and StaticFileServerHandler not redirect 
> 
>
> Key: FLINK-18312
> URL: https://issues.apache.org/jira/browse/FLINK-18312
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.8.0, 1.9.0, 1.10.0
> Environment: 1. Deploy the Flink cluster in standalone mode on Kubernetes 
> and use two JobManagers for HA.
> 2. Deploy a Kubernetes service for the two JobManagers to provide a unified 
> url.
>Reporter: Yu Wang
>Priority: Major
>
> Savepoint:
> 1. Deploy our Flink cluster in standalone mode on Kubernetes and use two 
> JobManagers for HA.
> 2. Deploy a Kubernetes service for the two JobManagers to provide a unified 
> url.
> 3. Send a savepoint trigger request to the leader JobManager.
> 4. Query the savepoint status from the leader JobManager; a correct response 
> is returned.
> 5. Query the savepoint status from the standby JobManager; the response is 
> 404.
> JobManager log:
> 1. Query the log from the leader JobManager; the leader log is returned.
> 2. Query the log from the standby JobManager; the standby log is returned.
>  
> Both of these requests were redirected to the leader in 1.7.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18312) SavepointStatusHandler and StaticFileServerHandler not redirect

2020-06-15 Thread Yu Wang (Jira)
Yu Wang created FLINK-18312:
---

 Summary: SavepointStatusHandler and StaticFileServerHandler not 
redirect 
 Key: FLINK-18312
 URL: https://issues.apache.org/jira/browse/FLINK-18312
 Project: Flink
  Issue Type: Bug
  Components: Runtime / REST
Affects Versions: 1.10.0, 1.9.0, 1.8.0
 Environment: 1. Deploy the Flink cluster in standalone mode on Kubernetes 
and use two JobManagers for HA.
2. Deploy a Kubernetes service for the two JobManagers to provide a unified url.
Reporter: Yu Wang


Savepoint:

1. Deploy our Flink cluster in standalone mode on Kubernetes and use two 
JobManagers for HA.

2. Deploy a Kubernetes service for the two JobManagers to provide a unified url.

3. Send a savepoint trigger request to the leader JobManager.

4. Query the savepoint status from the leader JobManager; a correct response is 
returned.

5. Query the savepoint status from the standby JobManager; the response is 404.

JobManager log:

1. Query the log from the leader JobManager; the leader log is returned.

2. Query the log from the standby JobManager; the standby log is returned.

 

Both of these requests were redirected to the leader in 1.7.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18313) Hive dialect doc should mention that views created in Flink cannot be used in Hive

2020-06-15 Thread Rui Li (Jira)
Rui Li created FLINK-18313:
--

 Summary: Hive dialect doc should mention that views created in 
Flink cannot be used in Hive
 Key: FLINK-18313
 URL: https://issues.apache.org/jira/browse/FLINK-18313
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive, Documentation
Reporter: Rui Li
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] prosscode commented on pull request #7820: [FLINK-11742][Metrics]Push metrics to Pushgateway without "instance"

2020-06-15 Thread GitBox


prosscode commented on pull request #7820:
URL: https://github.com/apache/flink/pull/7820#issuecomment-644503913


   I encountered this problem; it has been resolved by applying your modified code.
   
   Following @Draczech's description, we can improve the 
**randomJobNameSuffix** parameter: if it is enabled, the `instance` value can be 
set to the id of each component, e.g. `tm1`, `tm2`, and `jm`.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12256: [FLINK-17018][runtime] Allocates slots in bulks for pipelined region scheduling

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12256:
URL: https://github.com/apache/flink/pull/12256#issuecomment-631025695


   
   ## CI report:
   
   * f5939316d96975bea8e80acb52bc17683087 UNKNOWN
   * 1253ab0db340f4d3aac238a3c2bc8f8f4abb6941 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3530)
 
   * a637f6415f4f46bc9cc576e0a170cab37266c17a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] libenchao commented on a change in pull request #12642: [FLINK-18282][docs-zh] retranslate the documentation home page

2020-06-15 Thread GitBox


libenchao commented on a change in pull request #12642:
URL: https://github.com/apache/flink/pull/12642#discussion_r440557123



##
File path: docs/index.zh.md
##
@@ -23,53 +23,71 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+
+Apache Flink 是一个在无界和有界数据流上进行状态计算的框架和分布式处理引擎。 Flink 已经可以在所有常见的集群环境中运行,并以 
in-memory 的速度和任意的规模进行计算。
+
 
-本文档适用于 Apache Flink {{ site.version_title}} 版本。本页面最近更新于 {% build_time %}.
+
+
 
-Apache Flink 是一个分布式流批一体化的开源平台。Flink 的核心是一个提供数据分发、通信以及自动容错的流计算引擎。Flink 
在流计算之上构建批处理,并且原生的支持迭代计算,内存管理以及程序优化。
+### 试用 Flink
 
-## 初步印象
+如果您有兴趣使用 Flink, 可以试试我们的教程:
 
-* **代码练习**: 跟随分步指南通过 Flink API 实现简单应用或查询。
-  * [实现 DataStream 应用]({% link try-flink/datastream_api.zh.md %})
-  * [书写 Table API 查询]({% link try-flink/table_api.zh.md %})
+* [DataStream API 进行欺诈检测]({% link try-flink/datastream_api.md %})
+* [Table API 构建实时报表]({% link try-flink/table_api.md %})
+* [Python API 教程]({% link try-flink/python_table_api.md %})
+* [Flink 游乐场]({% link try-flink/flink-operations-playground.md %})
 
-* **Docker 游乐场**: 你只需花几分钟搭建 Flink 沙盒环境,就可以探索和使用 Flink 了。
-  * [运行与管理 Flink 流处理应用]({% link try-flink/flink-operations-playground.zh.md %})
+### 学习 Flink
 
-* **概念**: 学习 Flink 的基本概念能更好地理解文档。
-  * [有状态流处理](concepts/stateful-stream-processing.html)
-  * [实时流处理](concepts/timely-stream-processing.html)
-  * [Flink 架构](concepts/flink-architecture.html)
-  * [术语表](concepts/glossary.html)
+* [操作培训]({% link learn-flink/index.md %}) 包含了一系列的课程和练习,逐步介绍了,帮助你深入学习 Flink。
 
-## API 参考
+* [概念透析]({% link concepts/index.md %}) 介绍了在浏览参考文档之前你需要了解的 Flink 知识。
 
-API 参考列举并解释了 Flink API 的所有功能。
+### 获取 Flink 帮助
 
-* [DataStream API](dev/datastream_api.html)
-* [DataSet API](dev/batch/index.html)
-* [Table API  SQL](dev/table/index.html)
+如果你被困住了, 可以在 [社区](https://flink.apache.org/community.html)寻求帮助。 值得一提的是,Apache 
Flink 的用户邮件列表一直是最活跃的 Apache 项目之一,也是一个快速获得帮助的好途径。

Review comment:
   I think you may have misunderstood my comment here. Others LGTM.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk removed a comment on pull request #12256: [FLINK-17018][runtime] Allocates slots in bulks for pipelined region scheduling

2020-06-15 Thread GitBox


zhuzhurk removed a comment on pull request #12256:
URL: https://github.com/apache/flink/pull/12256#issuecomment-644476328


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12656: [FLINK-17666][table-planner-blink] Insert into partitioned table can …

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12656:
URL: https://github.com/apache/flink/pull/12656#issuecomment-644099955


   
   ## CI report:
   
   * e35de4a21f5cf007341a71a1461b796b286c9948 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3512)
 
   * 35454e38388a139ba65943e1c876cbcfb9d9e87c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3548)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18236) flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* verify it not right

2020-06-15 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136245#comment-17136245
 ] 

jackylau commented on FLINK-18236:
--

Hi [~dwysakowicz], could you please also review this issue and PR?

> flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* 
> verify it not right
> ---
>
> Key: FLINK-18236
> URL: https://issues.apache.org/jira/browse/FLINK-18236
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.10.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.12.0
>
>
> We can see there are different tests:
> runElasticsearchSinkTest
> runElasticsearchSinkCborTest
> runElasticsearchSinkSmileTest
> runElasticSearchSinkTest
> etc.
> They use SourceSinkDataTestKit.verifyProducedSinkData(client, index) to ensure 
> the correctness of the results, but all of them use the same index.
> That is to say, even if the second unit test's sink doesn't send data 
> successfully, verifyProducedSinkData will still consider the results equal.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18311) StreamingKafkaITCase stalls indefinitely

2020-06-15 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18311:

Labels: test-stability  (was: )

> StreamingKafkaITCase stalls indefinitely
> 
>
> Key: FLINK-18311
> URL: https://issues.apache.org/jira/browse/FLINK-18311
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> CI: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3537=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c88eea3b-64a0-564d-0031-9fdcd7b8abee]
> {code}
> 020-06-15T21:01:59.0792207Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.10_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0793580Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.11_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0794931Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka_2.11:1.11-SNAPSHOT:jar already 
> exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0795686Z [INFO] 
> 2020-06-15T21:01:59.0796403Z [INFO] --- maven-surefire-plugin:2.22.1:test 
> (end-to-end-tests) @ flink-end-to-end-tests-common-kafka ---
> 2020-06-15T21:01:59.0869911Z [INFO] 
> 2020-06-15T21:01:59.0871981Z [INFO] 
> ---
> 2020-06-15T21:01:59.0874203Z [INFO]  T E S T S
> 2020-06-15T21:01:59.0875086Z [INFO] 
> ---
> 2020-06-15T21:02:00.0134000Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-06-15T21:45:33.4889677Z ##[error]The operation was canceled.
> 2020-06-15T21:45:33.4902658Z ##[section]Finishing: Run e2e tests
> 2020-06-15T21:45:33.5058601Z ##[section]Starting: Cache Maven local repo
> 2020-06-15T21:45:33.5164621Z 
> ==
> 2020-06-15T21:45:33.5164972Z Task : Cache
> 2020-06-15T21:45:33.5165250Z Description  : Cache files between runs
> 2020-06-15T21:45:33.5165497Z Version  : 2.0.1
> 2020-06-15T21:45:33.5165769Z Author   : Microsoft Corporation
> 2020-06-15T21:45:33.5166079Z Help : 
> https://aka.ms/pipeline-caching-docs
> 2020-06-15T21:45:33.5166442Z 
> ==
> 2020-06-15T21:45:34.0475096Z ##[section]Finishing: Cache Maven local repo
> 2020-06-15T21:45:34.0502436Z ##[section]Starting: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.0506976Z 
> ==
> 2020-06-15T21:45:34.0507297Z Task : Get sources
> 2020-06-15T21:45:34.0507642Z Description  : Get sources from a repository. 
> Supports Git, TfsVC, and SVN repositories.
> 2020-06-15T21:45:34.0507965Z Version  : 1.0.0
> 2020-06-15T21:45:34.0508198Z Author   : Microsoft
> 2020-06-15T21:45:34.0508559Z Help : [More 
> Information](https://go.microsoft.com/fwlink/?LinkId=798199)
> 2020-06-15T21:45:34.0508934Z 
> ==
> 2020-06-15T21:45:34.3924966Z Cleaning any cached credential from repository: 
> flink-ci/flink-mirror (GitHub)
> 2020-06-15T21:45:34.3990430Z ##[section]Finishing: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.4049857Z ##[section]Starting: Finalize Job
> 2020-06-15T21:45:34.4086754Z Cleaning up task key
> 2020-06-15T21:45:34.4087951Z Start cleaning up orphan processes.
> 2020-06-15T21:45:34.4481307Z Terminate orphan process: pid (11772) (java)
> 2020-06-15T21:45:34.4548480Z Terminate orphan process: pid (12132) (java)
> 2020-06-15T21:45:34.4632331Z Terminate orphan process: pid (30726) (bash)
> 2020-06-15T21:45:34.4660351Z Terminate orphan process: pid (30728) (bash)
> 2020-06-15T21:45:34.4710124Z Terminate orphan process: pid (68958) (java)
> 2020-06-15T21:45:34.4751577Z Terminate orphan process: pid (119102) (java)
> 2020-06-15T21:45:34.4800161Z Terminate orphan process: pid (129546) (sh)
> 2020-06-15T21:45:34.4830588Z Terminate orphan process: pid (129548) (java)
> 2020-06-15T21:45:34.4833955Z ##[section]Finishing: Finalize Job
> 2020-06-15T21:45:34.4877321Z ##[section]Finishing: e2e_ci
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18311) StreamingKafkaITCase stalls indefinitely

2020-06-15 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136236#comment-17136236
 ] 

Dian Fu edited comment on FLINK-18311 at 6/16/20, 2:27 AM:
---

It seems to be caused by the PR: [https://github.com/apache/flink/pull/12589]

cc [~aljoscha]


was (Author: dian.fu):
It seems caused by the change: 
[https://github.com/flink-ci/flink-mirror/commit/b038e718ea2265957b98cfc6a7a79391e7150dc4]

cc [~aljoscha]

> StreamingKafkaITCase stalls indefinitely
> 
>
> Key: FLINK-18311
> URL: https://issues.apache.org/jira/browse/FLINK-18311
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Critical
> Fix For: 1.11.0
>
>
> CI: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3537=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c88eea3b-64a0-564d-0031-9fdcd7b8abee]
> {code}
> 020-06-15T21:01:59.0792207Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.10_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0793580Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.11_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0794931Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka_2.11:1.11-SNAPSHOT:jar already 
> exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0795686Z [INFO] 
> 2020-06-15T21:01:59.0796403Z [INFO] --- maven-surefire-plugin:2.22.1:test 
> (end-to-end-tests) @ flink-end-to-end-tests-common-kafka ---
> 2020-06-15T21:01:59.0869911Z [INFO] 
> 2020-06-15T21:01:59.0871981Z [INFO] 
> ---
> 2020-06-15T21:01:59.0874203Z [INFO]  T E S T S
> 2020-06-15T21:01:59.0875086Z [INFO] 
> ---
> 2020-06-15T21:02:00.0134000Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-06-15T21:45:33.4889677Z ##[error]The operation was canceled.
> 2020-06-15T21:45:33.4902658Z ##[section]Finishing: Run e2e tests
> 2020-06-15T21:45:33.5058601Z ##[section]Starting: Cache Maven local repo
> 2020-06-15T21:45:33.5164621Z 
> ==
> 2020-06-15T21:45:33.5164972Z Task : Cache
> 2020-06-15T21:45:33.5165250Z Description  : Cache files between runs
> 2020-06-15T21:45:33.5165497Z Version  : 2.0.1
> 2020-06-15T21:45:33.5165769Z Author   : Microsoft Corporation
> 2020-06-15T21:45:33.5166079Z Help : 
> https://aka.ms/pipeline-caching-docs
> 2020-06-15T21:45:33.5166442Z 
> ==
> 2020-06-15T21:45:34.0475096Z ##[section]Finishing: Cache Maven local repo
> 2020-06-15T21:45:34.0502436Z ##[section]Starting: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.0506976Z 
> ==
> 2020-06-15T21:45:34.0507297Z Task : Get sources
> 2020-06-15T21:45:34.0507642Z Description  : Get sources from a repository. 
> Supports Git, TfsVC, and SVN repositories.
> 2020-06-15T21:45:34.0507965Z Version  : 1.0.0
> 2020-06-15T21:45:34.0508198Z Author   : Microsoft
> 2020-06-15T21:45:34.0508559Z Help : [More 
> Information](https://go.microsoft.com/fwlink/?LinkId=798199)
> 2020-06-15T21:45:34.0508934Z 
> ==
> 2020-06-15T21:45:34.3924966Z Cleaning any cached credential from repository: 
> flink-ci/flink-mirror (GitHub)
> 2020-06-15T21:45:34.3990430Z ##[section]Finishing: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.4049857Z ##[section]Starting: Finalize Job
> 2020-06-15T21:45:34.4086754Z Cleaning up task key
> 2020-06-15T21:45:34.4087951Z Start cleaning up orphan processes.
> 2020-06-15T21:45:34.4481307Z Terminate orphan process: pid (11772) (java)
> 2020-06-15T21:45:34.4548480Z Terminate orphan process: pid (12132) (java)
> 2020-06-15T21:45:34.4632331Z Terminate orphan process: pid (30726) (bash)
> 2020-06-15T21:45:34.4660351Z Terminate orphan process: pid (30728) (bash)
> 2020-06-15T21:45:34.4710124Z Terminate orphan process: pid (68958) (java)
> 2020-06-15T21:45:34.4751577Z Terminate orphan process: pid (119102) (java)
> 2020-06-15T21:45:34.4800161Z Terminate orphan process: pid (129546) (sh)
> 2020-06-15T21:45:34.4830588Z Terminate orphan process: pid (129548) (java)
> 

[GitHub] [flink] flinkbot edited a comment on pull request #12656: [FLINK-17666][table-planner-blink] Insert into partitioned table can …

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12656:
URL: https://github.com/apache/flink/pull/12656#issuecomment-644099955


   
   ## CI report:
   
   * e35de4a21f5cf007341a71a1461b796b286c9948 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3512)
 
   * 35454e38388a139ba65943e1c876cbcfb9d9e87c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sdlcwangsong commented on pull request #12642: [FLINK-18282][docs-zh] retranslate the documentation home page

2020-06-15 Thread GitBox


sdlcwangsong commented on pull request #12642:
URL: https://github.com/apache/flink/pull/12642#issuecomment-644491027


   Hi @libenchao, I have finished it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on a change in pull request #12632: [FLINK-18134][FLINK-18135][docs] Add documentation for Debezium and Canal formats

2020-06-15 Thread GitBox


leonardBang commented on a change in pull request #12632:
URL: https://github.com/apache/flink/pull/12632#discussion_r440548891



##
File path: docs/dev/table/connectors/formats/canal.zh.md
##
@@ -0,0 +1,175 @@
+---
+title: "Canal Format"
+nav-title: Canal
+nav-parent_id: sql-formats
+nav-pos: 5
+---
+
+
+Changelog-Data-Capture Format
+Format: Deserialization Schema
+
+* This will be replaced by the TOC
+{:toc}
+
+[Canal](https://github.com/alibaba/canal/wiki) is a CDC (Changelog Data 
Capture) tool that can stream changes in real-time from MySQL into other 
systems. Canal provides an unified format schema for changelog and supports to 
serialize messages using JSON and 
[protobuf](https://developers.google.com/protocol-buffers).
+
+Flink supports to interpret Canal JSON messages as INSERT/UPDATE/DELETE 
messages into Flink SQL system. This is useful in many cases to leverage this 
feature, such as synchronizing incremental data from databases to other 
systems, auditing logs, materialized views on databases, temporal join changing 
history of a database table and so on.
+
+Note: Support for interpreting Canal protobuf messages and emitting Canal 
messages is on the roadmap.
+
+Dependencies
+
+
+In order to setup the Canal format, the following table provides dependency 
information for both projects using a build automation tool (such as Maven or 
SBT) and SQL Client with SQL JAR bundles.
+
+| Maven dependency   | SQL Client JAR |
+| :- | :--|
+| `flink-json`   | Built-in   |
+
+*Note: please refer to [Canal 
documentation](https://github.com/alibaba/canal/wiki) about how to deploy Canal 
to synchronize changelog to message queues.*
+
+
+How to use Canal format
+
+
+Canal provides an unified format for changelog, here is a simple example for 
an update operation captured from a MySQL `products` table:
+
+```json
+{
+  "data": [
+{
+  "id": "111",
+  "name": "scooter",
+  "description": "Big 2-wheel scooter",
+  "weight": "5.18"
+}
+  ],
+  "database": "inventory",
+  "es": 158937356,
+  "id": 9,
+  "isDdl": false,
+  "mysqlType": {
+"id": "INTEGER",
+"name": "VARCHAR(255)",
+"description": "VARCHAR(512)",
+"weight": "FLOAT"
+  },
+  "old": [
+{
+  "weight": "5.15"
+}
+  ],
+  "pkNames": [
+"id"
+  ],
+  "sql": "",
+  "sqlType": {
+"id": 4,
+"name": 12,
+"description": 12,
+"weight": 7
+  },
+  "table": "products",
+  "ts": 1589373560798,
+  "type": "UPDATE"
+}
+```
+
+*Note: please refer to [Canal 
documentation](https://github.com/alibaba/canal/wiki) about the meaning of each 
fields.*
+
+The MySQL `products` table has 4 columns (`id`, `name`, `description` and 
`weight`). The above JSON message is an update change event on the `products` 
table where the `weight` value of the row with `id = 111` is changed from 
`5.18` to `5.15`.
+Assuming this messages is synchronized to Kafka topic `products_binlog`, then 
we can use the following DDL to consume this topic and interpret the change 
events.
+
+
+
+{% highlight sql %}
+CREATE TABLE topic_products (
+  -- schema is totally the same to the MySQL "products" table
+  id BIGINT,
+  name STRING,
+  description STRING,
+  weight DECIMAL(10, 2)
+) WITH (
+ 'connector' = 'kafka',
+ 'topic' = 'products_binlog',
+ 'properties.bootstrap.servers' = 'localhost:9092',
+ 'properties.group.id' = 'testGroup',
+ 'format' = 'canal-json'  -- using canal-json as the format

Review comment:
   No restriction, just a style preference; I'm fine with the current one. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18303) Filesystem connector doesn't flush part files after rolling interval

2020-06-15 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136241#comment-17136241
 ] 

Jingsong Lee commented on FLINK-18303:
--

First, I don't think this is a technical bug, but it can be a user-facing bug.

The options that control when users "see the part file" for csv/json are:
 # rolling-policy.time-interval
 # bucket check interval
 # checkpoint interval

If any one of these three is not set to a small value, the desired effect will 
not be achieved.
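As a sketch, a sink definition with all three intervals set small might look like the following (the `sink.rolling-policy.check-interval` key and the path are assumptions on my part; the checkpoint interval is configured in the execution configuration rather than in the DDL):

```sql
-- (3) checkpoint interval, set outside the DDL, e.g. in flink-conf.yaml
--     or the SQL client configuration:
--     execution.checkpointing.interval: 5s
CREATE TABLE fs_sink (
  `user` STRING,
  message STRING
) WITH (
  'connector' = 'filesystem',
  'path' = '/tmp/fs_sink',                      -- illustrative path
  'format' = 'csv',
  'sink.rolling-policy.time-interval' = '2s',   -- (1) rolling interval
  'sink.rolling-policy.check-interval' = '1s'   -- (2) bucket check interval (assumed key)
);
```

With any of the three left at its default (e.g. a 60s check interval), part files can take much longer to appear than `sink.rolling-policy.time-interval` alone suggests.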

> Filesystem connector doesn't flush part files after rolling interval
> 
>
> Key: FLINK-18303
> URL: https://issues.apache.org/jira/browse/FLINK-18303
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.11.0
>
>
> I have set "execution.checkpointing.interval" to "5s" and 
> "sink.rolling-policy.time-interval" to "2s". However, it still take 60 
> seconds to see the first part file. 
> This can be reproduced by the following code in SQL CLI:
> {code:sql}
> CREATE TABLE CsvTable (
>   event_timestamp STRING,
>   `user` STRING,
>   message STRING,
>   duplicate_count BIGINT,
>   constant STRING
> ) WITH (
>   'connector' = 'filesystem',
>   'path' = '$RESULT',
>   'format' = 'csv',
>   'sink.rolling-policy.time-interval' = '2s'
> );
> INSERT INTO CsvTable -- read from Kafka Avro, and write into Filesystem Csv
> SELECT AvroTable.*, RegReplace('Test constant folding.', 'Test', 'Success') 
> AS constant
> FROM AvroTable;
> {code}
> This is found when I migrate SQLClientKafkaITCase to use DDL (FLINK-18086).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] prosscode removed a comment on pull request #7820: [FLINK-11742][Metrics]Push metrics to Pushgateway without "instance"

2020-06-15 Thread GitBox


prosscode removed a comment on pull request #7820:
URL: https://github.com/apache/flink/pull/7820#issuecomment-644485100


   I encountered this problem; it has been resolved by applying your modified 
code. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18261) flink-orc and flink-parquet have invalid NOTICE file

2020-06-15 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18261.

Resolution: Fixed

master: 6388336b0c56589c3a77e38f8fd16f582e2d947c

release-1.11: 040de969fbf30072fc2ef2f0e6eac3e89570f625

> flink-orc and flink-parquet have invalid NOTICE file
> 
>
> Key: FLINK-18261
> URL: https://issues.apache.org/jira/browse/FLINK-18261
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Jingsong Lee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> flink-orc provides a {{-jar-with-dependencies.jar}} variant which ships 
> binaries.
> However, these binaries are not documented in {{META-INF/NOTICE}}.
> There are two similar files in that directory (NOTICE from force-shading and 
> NOTICE.txt from Commons Lang). 
> There is a NOTICE file that looks valid, but it is in {{META-INF/services}}.
> I assume this has been introduced in FLINK-17460.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18311) StreamingKafkaITCase stalls indefinitely

2020-06-15 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18311:

Fix Version/s: 1.11.0

> StreamingKafkaITCase stalls indefinitely
> 
>
> Key: FLINK-18311
> URL: https://issues.apache.org/jira/browse/FLINK-18311
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Critical
> Fix For: 1.11.0
>
>
> CI: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3537=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c88eea3b-64a0-564d-0031-9fdcd7b8abee]
> {code}
> 020-06-15T21:01:59.0792207Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.10_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0793580Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.11_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0794931Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka_2.11:1.11-SNAPSHOT:jar already 
> exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0795686Z [INFO] 
> 2020-06-15T21:01:59.0796403Z [INFO] --- maven-surefire-plugin:2.22.1:test 
> (end-to-end-tests) @ flink-end-to-end-tests-common-kafka ---
> 2020-06-15T21:01:59.0869911Z [INFO] 
> 2020-06-15T21:01:59.0871981Z [INFO] 
> ---
> 2020-06-15T21:01:59.0874203Z [INFO]  T E S T S
> 2020-06-15T21:01:59.0875086Z [INFO] 
> ---
> 2020-06-15T21:02:00.0134000Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-06-15T21:45:33.4889677Z ##[error]The operation was canceled.
> 2020-06-15T21:45:33.4902658Z ##[section]Finishing: Run e2e tests
> 2020-06-15T21:45:33.5058601Z ##[section]Starting: Cache Maven local repo
> 2020-06-15T21:45:33.5164621Z 
> ==
> 2020-06-15T21:45:33.5164972Z Task : Cache
> 2020-06-15T21:45:33.5165250Z Description  : Cache files between runs
> 2020-06-15T21:45:33.5165497Z Version  : 2.0.1
> 2020-06-15T21:45:33.5165769Z Author   : Microsoft Corporation
> 2020-06-15T21:45:33.5166079Z Help : 
> https://aka.ms/pipeline-caching-docs
> 2020-06-15T21:45:33.5166442Z 
> ==
> 2020-06-15T21:45:34.0475096Z ##[section]Finishing: Cache Maven local repo
> 2020-06-15T21:45:34.0502436Z ##[section]Starting: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.0506976Z 
> ==
> 2020-06-15T21:45:34.0507297Z Task : Get sources
> 2020-06-15T21:45:34.0507642Z Description  : Get sources from a repository. 
> Supports Git, TfsVC, and SVN repositories.
> 2020-06-15T21:45:34.0507965Z Version  : 1.0.0
> 2020-06-15T21:45:34.0508198Z Author   : Microsoft
> 2020-06-15T21:45:34.0508559Z Help : [More 
> Information](https://go.microsoft.com/fwlink/?LinkId=798199)
> 2020-06-15T21:45:34.0508934Z 
> ==
> 2020-06-15T21:45:34.3924966Z Cleaning any cached credential from repository: 
> flink-ci/flink-mirror (GitHub)
> 2020-06-15T21:45:34.3990430Z ##[section]Finishing: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.4049857Z ##[section]Starting: Finalize Job
> 2020-06-15T21:45:34.4086754Z Cleaning up task key
> 2020-06-15T21:45:34.4087951Z Start cleaning up orphan processes.
> 2020-06-15T21:45:34.4481307Z Terminate orphan process: pid (11772) (java)
> 2020-06-15T21:45:34.4548480Z Terminate orphan process: pid (12132) (java)
> 2020-06-15T21:45:34.4632331Z Terminate orphan process: pid (30726) (bash)
> 2020-06-15T21:45:34.4660351Z Terminate orphan process: pid (30728) (bash)
> 2020-06-15T21:45:34.4710124Z Terminate orphan process: pid (68958) (java)
> 2020-06-15T21:45:34.4751577Z Terminate orphan process: pid (119102) (java)
> 2020-06-15T21:45:34.4800161Z Terminate orphan process: pid (129546) (sh)
> 2020-06-15T21:45:34.4830588Z Terminate orphan process: pid (129548) (java)
> 2020-06-15T21:45:34.4833955Z ##[section]Finishing: Finalize Job
> 2020-06-15T21:45:34.4877321Z ##[section]Finishing: e2e_ci
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18311) StreamingKafkaITCase stalls indefinitely

2020-06-15 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17136236#comment-17136236
 ] 

Dian Fu commented on FLINK-18311:
-

It seems caused by the change: 
[https://github.com/flink-ci/flink-mirror/commit/b038e718ea2265957b98cfc6a7a79391e7150dc4]

cc [~aljoscha]

> StreamingKafkaITCase stalls indefinitely
> 
>
> Key: FLINK-18311
> URL: https://issues.apache.org/jira/browse/FLINK-18311
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>
> CI: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3537=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c88eea3b-64a0-564d-0031-9fdcd7b8abee]
> {code}
> 2020-06-15T21:01:59.0792207Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.10_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0793580Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.11_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0794931Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka_2.11:1.11-SNAPSHOT:jar already 
> exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0795686Z [INFO] 
> 2020-06-15T21:01:59.0796403Z [INFO] --- maven-surefire-plugin:2.22.1:test 
> (end-to-end-tests) @ flink-end-to-end-tests-common-kafka ---
> 2020-06-15T21:01:59.0869911Z [INFO] 
> 2020-06-15T21:01:59.0871981Z [INFO] 
> ---
> 2020-06-15T21:01:59.0874203Z [INFO]  T E S T S
> 2020-06-15T21:01:59.0875086Z [INFO] 
> ---
> 2020-06-15T21:02:00.0134000Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-06-15T21:45:33.4889677Z ##[error]The operation was canceled.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   4   5   6   7   >