[GitHub] [flink] flinkbot edited a comment on issue #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11168: [FLINK-16140] [docs-zh] Translate 
Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#issuecomment-589576910
 
 
   
   ## CI report:
   
   * 76146c2111a47b68765168064b4d1dd90448789c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151304506) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5800)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11233: [FLINK-16194][k8s] Refactor the 
Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#issuecomment-591843120
 
 
   
   ## CI report:
   
   * 0017b8a8930024d0e9ce3355d050565a5007831d UNKNOWN
   * 978c6f00810a66a87325af6454e49e2f084e45be UNKNOWN
   * abfef134eb5940d7831dab4c794d654c2fa70263 UNKNOWN
   * d78c1b1b282a3d7fcfc797bf909dca2b5f2264e4 UNKNOWN
   * 78750a20196de5f01b01b59b1cd15c752bdee5c0 UNKNOWN
   * 08f07deb2243554d24e2e4171e6cb23a5f934cc8 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151292878) Azure: 
[CANCELED](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5791)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386237138
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
 		}
 	}
 
+	@Test
+	public void testReceivedBufferForRemovedChannel() throws Exception {
+		final int bufferSize = 1024;
+
+		NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
+		inputGate.assignExclusiveSegments();
+
+		CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+		handler.addInputChannel(inputChannel);
+
+		try {
+			Buffer buffer = TestBufferFactory.createBuffer(bufferSize);
+			BufferResponse bufferResponse = createBufferResponse(
+				buffer,
+				0,
+				inputChannel.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
+
+			handler.removeInputChannel(inputChannel);
+			handler.channelRead(null, bufferResponse);
+
+			assertNotNull(bufferResponse.getBuffer());
+			assertTrue(bufferResponse.getBuffer().isRecycled());
+		} finally {
+			releaseResource(inputGate, networkBufferPool);
+		}
+	}
+
+	@Test
+	public void testReceivedBufferForReleasedChannel() throws Exception {
+		final int bufferSize = 1024;
+
+		NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
+		inputGate.assignExclusiveSegments();
+
+		CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+		handler.addInputChannel(inputChannel);
+
+		try {
+			Buffer buffer = TestBufferFactory.createBuffer(bufferSize);
+			BufferResponse bufferResponse = createBufferResponse(
+				buffer,
+				0,
+				inputChannel.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
+
+			inputGate.close();
 
 Review comment:
   Could you check whether we already have a case that releases the channel 
before `createBufferResponse`? Then we could verify that the created 
`BufferResponse` has a `null` data buffer.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16365) awaitTermination() result is not checked

2020-03-01 Thread Roman Leventov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048848#comment-17048848
 ] 

Roman Leventov commented on FLINK-16365:


To find all these places, one can use the `$x$.awaitTermination($y$, $z$);` 
structural search pattern in IntelliJ IDEA.

> awaitTermination() result is not checked
> 
>
> Key: FLINK-16365
> URL: https://issues.apache.org/jira/browse/FLINK-16365
> Project: Flink
>  Issue Type: Improvement
>Reporter: Roman Leventov
>Priority: Minor
>
> There are three places in production code where the awaitTermination() result 
> is not checked: BlockingGrpcPubSubSubscriber (io.grpc.ManagedChannel), 
> PubSubSink (ManagedChannel), and FileCache (ExecutorService).
> Calling awaitTermination() without checking the result seems to make little 
> sense to me.
> If it's genuinely important to await termination, e.g. for concurrency 
> reasons, or because we are awaiting the release of a heavy resource that 
> would otherwise leak, then it seems reasonable to at least check the result 
> of awaitTermination() and log a warning if it returns false, making it 
> possible to debug potential problems in the future.
> Otherwise, if we don't really care about awaiting termination, then maybe 
> it's better not to call awaitTermination() at all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386236936
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
 		}
 	}
 
+	@Test
+	public void testReceivedBufferForRemovedChannel() throws Exception {
+		final int bufferSize = 1024;
+
+		NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
+		inputGate.assignExclusiveSegments();
+
+		CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+		handler.addInputChannel(inputChannel);
+
+		try {
+			Buffer buffer = TestBufferFactory.createBuffer(bufferSize);
+			BufferResponse bufferResponse = createBufferResponse(
+				buffer,
+				0,
+				inputChannel.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
+
+			handler.removeInputChannel(inputChannel);
+			handler.channelRead(null, bufferResponse);
+
+			assertNotNull(bufferResponse.getBuffer());
+			assertTrue(bufferResponse.getBuffer().isRecycled());
+		} finally {
+			releaseResource(inputGate, networkBufferPool);
+		}
+	}
+
+	@Test
+	public void testReceivedBufferForReleasedChannel() throws Exception {
+		final int bufferSize = 1024;
 
 Review comment:
   ditto: final


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11205) Task Manager Metaspace Memory Leak

2020-03-01 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048845#comment-17048845
 ] 

Guowei Ma commented on FLINK-11205:
---

[~fwiffo] I have a question about how the LogFactory caching the class loader 
leads to the class leak.

As far as I know, Flink does not use Apache Commons Logging, so I assume that 
the Apache Commons Logging jar comes from the application. For a failover 
restart of only a job:
 # If the Apache Commons Logging jar and the user jar are both loaded by the 
system class loader, I think there might be no class leak, because all classes 
are loaded by the system class loader. (Only the user class loader object 
leaks.)
 # If the Apache Commons Logging jar and the user jar are both loaded by the 
user class loader, I think there might also be no class leak; GC would release 
all the classes.
 # If the Apache Commons Logging jar is loaded by the system class loader and 
the user jar is loaded by the user class loader, I think there might be class 
leaks if we do not call LogFactory.release when closing.

Do you mean the third scenario? Why do you rule out the other two scenarios? 
Correct me if I misunderstand something.

> Task Manager Metaspace Memory Leak 
> ---
>
> Key: FLINK-11205
> URL: https://issues.apache.org/jira/browse/FLINK-11205
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.5, 1.6.2, 1.7.0
>Reporter: NS
>Priority: Critical
> Attachments: Screenshot 2018-12-18 at 12.14.11.png, Screenshot 
> 2018-12-18 at 15.47.55.png
>
>
> Job restarts cause the task manager to dynamically load duplicate classes. 
> Metaspace is unbounded and grows with every restart. YARN aggressively kills 
> such containers, but the effect is immediately seen on a different task 
> manager, which results in a death spiral.
> Task Manager uses dynamic loader as described in 
> [https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/debugging_classloading.html]
> {quote}
> *YARN*
> YARN classloading differs between single job deployments and sessions:
>  * When submitting a Flink job/application directly to YARN (via {{bin/flink 
> run -m yarn-cluster ...}}), dedicated TaskManagers and JobManagers are 
> started for that job. Those JVMs have both Flink framework classes and user 
> code classes in the Java classpath. That means that there is _no dynamic 
> classloading_ involved in that case.
>  * When starting a YARN session, the JobManagers and TaskManagers are started 
> with the Flink framework classes in the classpath. The classes from all jobs 
> that are submitted against the session are loaded dynamically.
> {quote}
> The above is not entirely true, especially when you set {{-yD 
> classloader.resolve-order=parent-first}}. We also observed the above 
> behaviour when submitting a Flink job/application directly to YARN (via 
> {{bin/flink run -m yarn-cluster ...}}).
> !Screenshot 2018-12-18 at 12.14.11.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16365) awaitTermination() result is not checked

2020-03-01 Thread Roman Leventov (Jira)
Roman Leventov created FLINK-16365:
--

 Summary: awaitTermination() result is not checked
 Key: FLINK-16365
 URL: https://issues.apache.org/jira/browse/FLINK-16365
 Project: Flink
  Issue Type: Improvement
Reporter: Roman Leventov


There are three places in production code where the awaitTermination() result 
is not checked: BlockingGrpcPubSubSubscriber (io.grpc.ManagedChannel), 
PubSubSink (ManagedChannel), and FileCache (ExecutorService).

Calling awaitTermination() without checking the result seems to make little 
sense to me.

If it's genuinely important to await termination, e.g. for concurrency 
reasons, or because we are awaiting the release of a heavy resource that would 
otherwise leak, then it seems reasonable to at least check the result of 
awaitTermination() and log a warning if it returns false, making it possible 
to debug potential problems in the future.

Otherwise, if we don't really care about awaiting termination, then maybe it's 
better not to call awaitTermination() at all.
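
(Illustrative only: a minimal sketch, assuming a hypothetical helper class and 
an SLF4J logger, of what checking the result could look like.)

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical shutdown helper: instead of silently discarding the boolean
// returned by awaitTermination(), log a warning when the timeout elapses.
final class ExecutorShutdownUtil {

	private static final Logger LOG = LoggerFactory.getLogger(ExecutorShutdownUtil.class);

	static void shutdownAndWait(ExecutorService executor) throws InterruptedException {
		executor.shutdown();
		if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
			LOG.warn("Executor did not terminate within 10 seconds; a resource may leak.");
		}
	}
}
{code}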



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386235708
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
 		}
 	}
 
+	@Test
+	public void testReceivedBufferForRemovedChannel() throws Exception {
+		final int bufferSize = 1024;
 
 Review comment:
   nit: remove `final` to keep this test consistent.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386235368
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
 		}
 	}
 
+	@Test
+	public void testReceivedBufferForRemovedChannel() throws Exception {
+		final int bufferSize = 1024;
+
+		NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
 
 Review comment:
   How about using this instead? Then we do not rely on a `null` 
`PartitionRequestClient`; I guess `createRemoteInputChannel` mainly exists to 
indicate the required `PartitionRequestClient`.
   ```
   InputChannelBuilder.newBuilder()
   	.setMemorySegmentProvider(networkBufferPool)
   	.buildRemoteAndSetToGate(inputGate);
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386233930
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
 		}
 	}
 
+	@Test
+	public void testReceivedBufferForRemovedChannel() throws Exception {
+		final int bufferSize = 1024;
+
+		NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
+		inputGate.assignExclusiveSegments();
+
+		CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+		handler.addInputChannel(inputChannel);
+
+		try {
+			Buffer buffer = TestBufferFactory.createBuffer(bufferSize);
+			BufferResponse bufferResponse = createBufferResponse(
+				buffer,
+				0,
+				inputChannel.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
+
+			handler.removeInputChannel(inputChannel);
+			handler.channelRead(null, bufferResponse);
+
+			assertNotNull(bufferResponse.getBuffer());
+			assertTrue(bufferResponse.getBuffer().isRecycled());
 
 Review comment:
   add this verification: `assertEquals(0, inputChannel.getNumberOfQueuedBuffers())`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11278: [hotfix] [docs] fix the mismatch between java and scala examples

2020-03-01 Thread GitBox
flinkbot commented on issue #11278: [hotfix] [docs] fix the mismatch between 
java and scala examples
URL: https://github.com/apache/flink/pull/11278#issuecomment-593259962
 
 
   
   ## CI report:
   
   * d51ad7d8b1d185562d1e431e2db01621fc9dfa50 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * b8d51c94a0b93fdbfa4b167e0b4c630f791fba10 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/150973416) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5720)
 
   * 70e8ca9774fc4247657f5d6aecc43459229ba9bb Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151307447) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5801)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11277: [FLINK-16360][orc] Flink STRING data type should map to ORC STRING type

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11277: [FLINK-16360][orc] Flink STRING data 
type should map to ORC STRING type
URL: https://github.com/apache/flink/pull/11277#issuecomment-593254512
 
 
   
   ## CI report:
   
   * 8fb0e375ba23258bb6425dee43640bffc8bf75c0 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151307463) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5802)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386230247
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -339,7 +368,11 @@ public void testNotifyCreditAvailable() throws Exception {
 
 			// Trigger notify credits availability via buffer response on the condition of an un-writable channel
 			final BufferResponse bufferResponse3 = createBufferResponse(
-				TestBufferFactory.createBuffer(32), 1, inputChannel1.getInputChannelId(), 1);
+				TestBufferFactory.createBuffer(32),
+				1,
+				inputChannel1.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
 
 Review comment:
   ditto: reuse the previous `NetworkBufferAllocator`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386230094
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -312,9 +333,17 @@ public void testNotifyCreditAvailable() throws Exception {
 			// The buffer response will take one available buffer from input channel, and it will trigger
 			// requesting (backlog + numExclusiveBuffers - numAvailableBuffers) floating buffers
 			final BufferResponse bufferResponse1 = createBufferResponse(
-				TestBufferFactory.createBuffer(32), 0, inputChannel1.getInputChannelId(), 1);
+				TestBufferFactory.createBuffer(32),
+				0,
+				inputChannel1.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
 			final BufferResponse bufferResponse2 = createBufferResponse(
-				TestBufferFactory.createBuffer(32), 0, inputChannel2.getInputChannelId(), 1);
+				TestBufferFactory.createBuffer(32),
+				0,
+				inputChannel2.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
 
 Review comment:
   nit: create the `NetworkBufferAllocator` once, before the `try` block; then 
all three `BufferResponse` instances can reuse it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16364) Deprecate the methods in TableEnvironment proposed by FLIP-84

2020-03-01 Thread godfrey he (Jira)
godfrey he created FLINK-16364:
--

 Summary: Deprecate the methods in TableEnvironment proposed by 
FLIP-84
 Key: FLINK-16364
 URL: https://issues.apache.org/jira/browse/FLINK-16364
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: godfrey he
 Fix For: 1.11.0


In 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment],
 we propose to deprecate the following methods in TableEnvironment: 
{code:java}
void sqlUpdate(String sql)
void insertInto(String targetPath, Table table)
void execute(String jobName)
String explain(boolean extended)
Table fromTableSource(TableSource source)
{code}
This issue aims to deprecate them.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11278: [hotfix] [docs] fix the mismatch between java and scala examples

2020-03-01 Thread GitBox
flinkbot commented on issue #11278: [hotfix] [docs] fix the mismatch between 
java and scala examples
URL: https://github.com/apache/flink/pull/11278#issuecomment-593256899
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d51ad7d8b1d185562d1e431e2db01621fc9dfa50 (Mon Mar 02 
07:20:29 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15249) Improve PipelinedRegions calculation with Union Set

2020-03-01 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048840#comment-17048840
 ] 

Zhu Zhu commented on FLINK-15249:
-

[~nppoly] sorry for the late response. I just checked the PR and ran the test 
again.
It looks to me that this change targets improving the region-building 
performance for a specific kind of topology that is rare in production. 
However, the performance for the most common topologies becomes worse (I 
tested a 4000x4000 ALL-to-ALL pipelined connected topology, and with the new 
change it is much slower: 1570 ms vs. 929 ms).

I think we should not regress the common cases to improve a corner case, so I 
would say we should not make this change.

I should also mention that the set-merging cost is not the critical part of 
region building when there are ALL-to-ALL connections, since the edge-iteration 
complexity is much larger (V^2 compared to V). If there is no ALL-to-ALL 
connection, the region-building time is usually low and not a problem.
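
(For context: a generic sketch of the disjoint-set (union-find) structure 
under discussion, with path compression and union by rank giving near-O(1) 
amortized merges; this is the textbook structure, not the attached patch.)

{code:java}
// Minimal disjoint-set (union-find): near-O(1) amortized union/find,
// versus the O(N) set merging of the current region-building code.
final class UnionFind {
	private final int[] parent;
	private final int[] rank;

	UnionFind(int n) {
		parent = new int[n];
		rank = new int[n];
		for (int i = 0; i < n; i++) {
			parent[i] = i;
		}
	}

	int find(int x) {
		while (parent[x] != x) {
			parent[x] = parent[parent[x]]; // path compression (halving)
			x = parent[x];
		}
		return x;
	}

	void union(int a, int b) {
		int ra = find(a);
		int rb = find(b);
		if (ra == rb) {
			return;
		}
		if (rank[ra] < rank[rb]) {
			int tmp = ra;
			ra = rb;
			rb = tmp;
		}
		parent[rb] = ra; // union by rank: attach the shallower root
		if (rank[ra] == rank[rb]) {
			rank[ra]++;
		}
	}
}
{code}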

> Improve PipelinedRegions calculation with Union Set
> ---
>
> Key: FLINK-15249
> URL: https://issues.apache.org/jira/browse/FLINK-15249
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Chongchen Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: PipelinedRegionComputeUtil.diff, 
> RegionFailoverPerfTest.java, new.diff
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A union set's merge cost is O(1); the current implementation is O(N). The 
> attachment is the patch.
> [Disjoint Set Data 
> Structure|https://en.wikipedia.org/wiki/Disjoint-set_data_structure]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16362) Remove deprecated method in StreamTableSink

2020-03-01 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16362:
---
Summary: Remove deprecated method in StreamTableSink  (was: remove 
deprecated method in StreamTableSink)

> Remove deprecated method in StreamTableSink
> ---
>
> Key: FLINK-16362
> URL: https://issues.apache.org/jira/browse/FLINK-16362
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.11.0
>
>
> [FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
>  proposes to unify the behavior of {{TableEnvironment}} and 
> {{StreamTableEnvironment}}, and requires that {{StreamTableSink}} always 
> return a {{DataStream}}. However, {{StreamTableSink.emitDataStream}} returns 
> nothing and has been deprecated since Flink 1.9, so we will remove it.
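
(For reference: roughly what the interface looks like as of Flink 1.9/1.10. 
This is a hedged sketch from memory, not the authoritative source; the exact 
signatures may differ slightly.)

{code:java}
public interface StreamTableSink<T> extends TableSink<T> {

	/** Deprecated since Flink 1.9: returns nothing, so the planner cannot
	 * keep building on the sink transformation. */
	@Deprecated
	void emitDataStream(DataStream<T> dataStream);

	/** Replacement: consumes the DataStream and returns the sink transformation. */
	DataStreamSink<?> consumeDataStream(DataStream<T> dataStream);
}
{code}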



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] shuai-xu opened a new pull request #11278: [hotfix] [docs] fix the mismatch between java and scala examples

2020-03-01 Thread GitBox
shuai-xu opened a new pull request #11278: [hotfix] [docs] fix the mismatch 
between java and scala examples
URL: https://github.com/apache/flink/pull/11278
 
 
   The Java examples do not match the Scala ones.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11277: [FLINK-16360][orc] Flink STRING data type should map to ORC STRING type

2020-03-01 Thread GitBox
JingsongLi commented on a change in pull request #11277: [FLINK-16360][orc] 
Flink STRING data type should map to ORC STRING type
URL: https://github.com/apache/flink/pull/11277#discussion_r386226261
 
 

 ##
 File path: 
flink-formats/flink-orc/src/main/java/org/apache/flink/orc/OrcSplitReaderUtil.java
 ##
 @@ -140,7 +140,12 @@ static TypeDescription logicalTypeToOrcType(LogicalType type) {
 			case CHAR:
 				return TypeDescription.createChar().withMaxLength(((CharType) type).getLength());
 			case VARCHAR:
-				return TypeDescription.createVarchar().withMaxLength(((VarCharType) type).getLength());
+				int len = ((VarCharType) type).getLength();
+				if (len == VarCharType.MAX_LENGTH) {
+					return TypeDescription.createString();
 
 Review comment:
   Newer ORC versions support schema evolution, so `VARCHAR(2147483647)` is 
supported.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11277: [FLINK-16360][orc] Flink STRING data type should map to ORC STRING type

2020-03-01 Thread GitBox
JingsongLi commented on a change in pull request #11277: [FLINK-16360][orc] 
Flink STRING data type should map to ORC STRING type
URL: https://github.com/apache/flink/pull/11277#discussion_r386226094
 
 

 ##
 File path: 
flink-formats/flink-orc/src/main/java/org/apache/flink/orc/OrcSplitReaderUtil.java
 ##
 @@ -140,7 +140,12 @@ static TypeDescription logicalTypeToOrcType(LogicalType type) {
 			case CHAR:
 				return TypeDescription.createChar().withMaxLength(((CharType) type).getLength());
 			case VARCHAR:
-				return TypeDescription.createVarchar().withMaxLength(((VarCharType) type).getLength());
+				int len = ((VarCharType) type).getLength();
+				if (len == VarCharType.MAX_LENGTH) {
+					return TypeDescription.createString();
 
 Review comment:
   `VARCHAR(2147483647)` is `STRING` in Flink.
   We don't need to support a real `VARCHAR(2147483647)` in ORC for Hive 2.0; 
Hive doesn't have this type.
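
   (The quoted hunk is truncated after the `createString()` branch; presumably 
the bounded-length case keeps the previous mapping, roughly like this sketch.)
   ```java
   case VARCHAR:
   	int len = ((VarCharType) type).getLength();
   	if (len == VarCharType.MAX_LENGTH) {
   		// Flink STRING (VARCHAR of max length) maps to ORC STRING.
   		return TypeDescription.createString();
   	} else {
   		// A bounded VARCHAR keeps the ORC VARCHAR mapping.
   		return TypeDescription.createVarchar().withMaxLength(len);
   	}
   ```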


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * b8d51c94a0b93fdbfa4b167e0b4c630f791fba10 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150973416) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5720)
 
   * 70e8ca9774fc4247657f5d6aecc43459229ba9bb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16363) Correct the execution behavior of TableEnvironment and StreamTableEnvironment

2020-03-01 Thread godfrey he (Jira)
godfrey he created FLINK-16363:
--

 Summary: Correct the execution behavior of TableEnvironment and 
StreamTableEnvironment
 Key: FLINK-16363
 URL: https://issues.apache.org/jira/browse/FLINK-16363
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: godfrey he
 Fix For: 1.11.0


Both {{TableEnvironment.execute()}} and {{StreamExecutionEnvironment.execute}} 
can trigger the execution of a Flink table program. However, if you use 
{{TableEnvironment}} to build a Flink table program, you must use 
{{TableEnvironment.execute()}} to trigger execution, because you can’t get the 
{{StreamExecutionEnvironment}} instance. If you use {{StreamTableEnvironment}} 
to build a Flink table program, you can use both to trigger execution. If you 
convert a table program to a {{DataStream}} program (using 
{{StreamTableEnvironment.toAppendStream/toRetractStream}}), you can also use 
both to trigger execution. So it’s hard to explain which `execute` method 
should be used.

To correct the current messy trigger point, we propose the following: for 
{{TableEnvironment}} and {{StreamTableEnvironment}}, you must use 
{{TableEnvironment.execute()}} to trigger table program execution; once you 
convert the table program to a {{DataStream}} program (through the 
{{toAppendStream}} or {{toRetractStream}} method), you must use 
{{StreamExecutionEnvironment.execute}} to trigger the {{DataStream}} program.

Please refer to 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 for more details.
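
(Illustrative only: a minimal sketch of the proposed rule, assuming 
hypothetical `source` and `sink` tables are already registered.)

{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class TriggerPointExample {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
		// ... register hypothetical `source` and `sink` tables here ...

		// A pure table program: TableEnvironment.execute() triggers it.
		tEnv.sqlUpdate("INSERT INTO sink SELECT * FROM source");
		tEnv.execute("table job");

		// Once converted to a DataStream program,
		// StreamExecutionEnvironment.execute() triggers it instead.
		DataStream<Row> stream = tEnv.toAppendStream(tEnv.sqlQuery("SELECT * FROM source"), Row.class);
		stream.print();
		env.execute("datastream job");
	}
}
{code}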




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11277: [FLINK-16360][orc] Flink STRING data type should map to ORC STRING type

2020-03-01 Thread GitBox
flinkbot commented on issue #11277: [FLINK-16360][orc] Flink STRING data type 
should map to ORC STRING type
URL: https://github.com/apache/flink/pull/11277#issuecomment-593254512
 
 
   
   ## CI report:
   
   * 8fb0e375ba23258bb6425dee43640bffc8bf75c0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on a change in pull request #11277: [FLINK-16360][orc] Flink STRING data type should map to ORC STRING type

2020-03-01 Thread GitBox
lirui-apache commented on a change in pull request #11277: [FLINK-16360][orc] 
Flink STRING data type should map to ORC STRING type
URL: https://github.com/apache/flink/pull/11277#discussion_r386225122
 
 

 ##
 File path: 
flink-formats/flink-orc/src/main/java/org/apache/flink/orc/OrcSplitReaderUtil.java
 ##
 @@ -140,7 +140,12 @@ static TypeDescription logicalTypeToOrcType(LogicalType type) {
 			case CHAR:
 				return TypeDescription.createChar().withMaxLength(((CharType) type).getLength());
 			case VARCHAR:
-				return TypeDescription.createVarchar().withMaxLength(((VarCharType) type).getLength());
+				int len = ((VarCharType) type).getLength();
+				if (len == VarCharType.MAX_LENGTH) {
+					return TypeDescription.createString();
 
 Review comment:
   What if a user specifies `VARCHAR(2147483647)` in the schema and we convert 
it to `string` here? Will there be a problem?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16362) remove deprecated method in StreamTableSink

2020-03-01 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16362:
---
Parent: FLINK-16361
Issue Type: Sub-task  (was: Improvement)

> remove deprecated method in StreamTableSink
> ---
>
> Key: FLINK-16362
> URL: https://issues.apache.org/jira/browse/FLINK-16362
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.11.0
>
>
> [FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
>  proposes to unify the behavior of {{TableEnvironment}} and 
> {{StreamTableEnvironment}}, and requires that {{StreamTableSink}} always 
> return a {{DataStream}}. However, {{StreamTableSink.emitDataStream}} returns 
> nothing and has been deprecated since Flink 1.9, so we will remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16362) remove deprecated method in StreamTableSink

2020-03-01 Thread godfrey he (Jira)
godfrey he created FLINK-16362:
--

 Summary: remove deprecated method in StreamTableSink
 Key: FLINK-16362
 URL: https://issues.apache.org/jira/browse/FLINK-16362
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: godfrey he
 Fix For: 1.11.0


[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 proposes to unify the behavior of {{TableEnvironment}} and 
{{StreamTableEnvironment}}, and requires that {{StreamTableSink}} always 
return a {{DataStream}}. However, {{StreamTableSink.emitDataStream}} returns 
nothing and has been deprecated since Flink 1.9, so we will remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11277: [FLINK-16360][orc] Flink STRING data type should map to ORC STRING type

2020-03-01 Thread GitBox
flinkbot commented on issue #11277: [FLINK-16360][orc] Flink STRING data type 
should map to ORC STRING type
URL: https://github.com/apache/flink/pull/11277#issuecomment-593250524
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 8fb0e375ba23258bb6425dee43640bffc8bf75c0 (Mon Mar 02 
06:59:38 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16360) connector on hive 2.0.1 don't support type conversion from STRING to VARCHAR

2020-03-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16360:
---
Labels: pull-request-available  (was: )

>  connector on hive 2.0.1 don't  support type conversion from STRING to VARCHAR
> --
>
> Key: FLINK-16360
> URL: https://issues.apache.org/jira/browse/FLINK-16360
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
> Environment: os:centos
> java: 1.8.0_92
> flink :1.10.0
> hadoop: 2.7.2
> hive:2.0.1
>  
>Reporter: wgcn
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.1
>
> Attachments: exceptionstack
>
>
>  it threw an exception when we queried hive 2.0.1 with flink 1.10.0
>  Exception stack:
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=50, 
> backoffTimeMS=1)
>  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
>  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
>  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484)
>  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
>  at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194)
>  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>  at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>  at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>  at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>  at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>  at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>  at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>  at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>  at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>  at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>  at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>  at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:76)
>  at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:123)
>  at org.apache.flink.orc.OrcSplitReader.(OrcSplitReader.java:73)
>  at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.(OrcColumnarRowSplitReader.java:55)
>  at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:96)
>  at 
> org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.(HiveVectorizedOrcSplitReader.java:65)
>  at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:117)
>  at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:56)
>  at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:85)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>  at 
> 

[GitHub] [flink] JingsongLi opened a new pull request #11277: [FLINK-16360][orc] Flink STRING data type should map to ORC STRING type

2020-03-01 Thread GitBox
JingsongLi opened a new pull request #11277: [FLINK-16360][orc] Flink STRING 
data type should map to ORC STRING type
URL: https://github.com/apache/flink/pull/11277
 
 
   
   ## What is the purpose of the change
   
   Hive 2.0 ORC does not support schema evolution from STRING to VARCHAR.
   We need to produce STRING in ORC for VarcharType(MAX_LENGTH) in Flink.
   
   ## Brief change log
   
   Flink STRING data type should map to ORC STRING type in `OrcSplitReaderUtil`
   
   ## Verifying this change
   
   `OrcSplitReaderUtilTest`
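   
   (A hypothetical test sketch of what that verification could assert; the 
real coverage is in `OrcSplitReaderUtilTest`, and `TypeDescription.toString()` 
rendering ORC types as `string` / `varchar(n)` is an assumption here.)
   ```java
   @Test
   public void testStringMapsToOrcString() {
   	// Flink STRING, i.e. VARCHAR(Integer.MAX_VALUE), should become ORC "string",
   	// while a bounded VARCHAR should stay "varchar(n)".
   	assertEquals("string",
   		OrcSplitReaderUtil.logicalTypeToOrcType(DataTypes.STRING().getLogicalType()).toString());
   	assertEquals("varchar(10)",
   		OrcSplitReaderUtil.logicalTypeToOrcType(DataTypes.VARCHAR(10).getLogicalType()).toString());
   }
   ```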
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11242: [FLINK-16007][table-planner][table-planner-blink][python] Add PythonCorrelateSplitRule to push down the Java Calls contained in Python Corre

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11242: 
[FLINK-16007][table-planner][table-planner-blink][python] Add 
PythonCorrelateSplitRule to push down the Java Calls contained in Python 
Correlate node
URL: https://github.com/apache/flink/pull/11242#issuecomment-591981740
 
 
   
   ## CI report:
   
   * ad21752d6e9ac8c6d2dc1c0d2d824f86d77d69c0 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151299714) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5796)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11248: [FLINK-16299] Release containers recovered from previous attempt in w…

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11248: [FLINK-16299] Release containers 
recovered from previous attempt in w…
URL: https://github.com/apache/flink/pull/11248#issuecomment-592302068
 
 
   
   ## CI report:
   
   * b61c045eddf32b77b81238ed06cbd961351f2e3b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151300896) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5797)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * b8d51c94a0b93fdbfa4b167e0b4c630f791fba10 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150973416) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5720)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16361) FLIP-84: Improve & Refactor API of TableEnvironment

2020-03-01 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16361:
---
Description: 
as the 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 document described, 

We propose to deprecate the following methods in TableEnvironment:
{code:java}
void sqlUpdate(String sql)
void insertInto(String targetPath, Table table)
void execute(String jobName)
String explain(boolean extended)
Table fromTableSource(TableSource source)
{code}

Meanwhile, we propose to introduce the following new methods in 
TableEnvironment:

{code:java}
// synchronously execute the given single statement immediately, and return the 
execution result.
ResultTable executeStatement(String statement) 

public interface ResultTable {
    TableSchema getResultSchema();
    Iterable<Row> getResultRows();
}

// create a DmlBatch instance which can add dml statements or Tables to the 
batch and explain or execute them as a batch.
DmlBatch createDmlBatch()

interface DmlBatch {
    void addInsert(String insert);
    void addInsert(String targetPath, Table table);
    ResultTable execute() throws Exception;
    String explain(boolean extended);
}
{code}


We unify the Flink table program trigger behavior, and propose that: for 
{{TableEnvironment}} and {{StreamTableEnvironment}}, you must use 
{{TableEnvironment.execute()}} to trigger table program execution, once you 
convert the table program to a {{DataStream}} program (through 
{{toAppendStream}} or {{toRetractStream}} method), you must use 
{{StreamExecutionEnvironment.execute}} to trigger the {{DataStream}} program.

  was:
as the 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 document described, 

We propose to deprecate the following methods in TableEnvironment:
{code:java}
void sqlUpdate(String sql)
void insertInto(String targetPath, Table table)
void execute(String jobName)
String explain(boolean extended)
Table fromTableSource(TableSource source)
{code}

meanwhile, we propose to introduce the following new methods in 
TableEnvironment:

{code:java}
// synchronously execute the given single statement immediately, and return the 
execution result.
ResultTable executeStatement(String statement) 

public interface ResultTable {
    TableSchema getResultSchema();
    Iterable<Row> getResultRows();
}

// create a DmlBatch instance which can add dml statements or Tables to the 
batch and explain or execute them as a batch.
DmlBatch createDmlBatch()

interface DmlBatch {
    void addInsert(String insert);
    void addInsert(String targetPath, Table table);
    ResultTable execute() throws Exception;
    String explain(boolean extended);
}
{code}


We unify the Flink table program trigger behavior, and propose that: for 
{{TableEnvironment}} and {{StreamTableEnvironment}}, you must use 
{{TableEnvironment.execute()}} to trigger table program execution, once you 
convert the table program to a {{DataStream }}program (through 
{{toAppendStream}} or {{toRetractStream}} method), you must use 
{{StreamExecutionEnvironment.execute}} to trigger the {{DataStream}} program.


> FLIP-84: Improve & Refactor API of TableEnvironment
> ---
>
> Key: FLINK-16361
> URL: https://issues.apache.org/jira/browse/FLINK-16361
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.11.0
>
>
> As the 
> [FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
>  document describes, we propose to deprecate the following methods in 
> TableEnvironment:
> {code:java}
> void sqlUpdate(String sql)
> void insertInto(String targetPath, Table table)
> void execute(String jobName)
> String explain(boolean extended)
> Table fromTableSource(TableSource source)
> {code}
> Meanwhile, we propose to introduce the following new methods in 
> TableEnvironment:
> {code:java}
> // Synchronously execute the given single statement immediately and return
> // the execution result.
> ResultTable executeStatement(String statement)
> public interface ResultTable {
>     TableSchema getResultSchema();
>     Iterable<Row> getResultRows();
> }
> // Create a DmlBatch instance which can add DML statements or Tables to the
> // batch and explain or execute them as a batch.
> DmlBatch createDmlBatch()
> interface DmlBatch {
>     void addInsert(String insert);
>     void addInsert(String targetPath, Table table);
>     ResultTable execute() throws Exception;
>     String explain(boolean extended);
> }
> {code}
> We unify the Flink table program trigger behavior, and propose that: for 
> {{TableEnvironment}} and {{StreamTableEnvironment}}, 

[GitHub] [flink] flinkbot edited a comment on issue #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11168: [FLINK-16140] [docs-zh] Translate 
Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#issuecomment-589576910
 
 
   
   ## CI report:
   
   * 7546b4bef354ec3acb52245f867c3338107d0995 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151296073) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5794)
 
   * 76146c2111a47b68765168064b4d1dd90448789c Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151304506) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5800)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-16361) FLIP-84: Improve & Refactor API of TableEnvironment

2020-03-01 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16361:
---
Description: 
As the 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 document describes, we propose to deprecate the following methods in 
TableEnvironment:
{code:java}
void sqlUpdate(String sql)
void insertInto(String targetPath, Table table)
void execute(String jobName)
String explain(boolean extended)
Table fromTableSource(TableSource source)
{code}

Meanwhile, we propose to introduce the following new methods in 
TableEnvironment:

{code:java}
// Synchronously execute the given single statement immediately and return
// the execution result.
ResultTable executeStatement(String statement)

public interface ResultTable {
    TableSchema getResultSchema();
    Iterable<Row> getResultRows();
}

// Create a DmlBatch instance which can add DML statements or Tables to the
// batch and explain or execute them as a batch.
DmlBatch createDmlBatch()

interface DmlBatch {
    void addInsert(String insert);
    void addInsert(String targetPath, Table table);
    ResultTable execute() throws Exception;
    String explain(boolean extended);
}
{code}


We unify the Flink table program trigger behavior, and propose that: for 
{{TableEnvironment}} and {{StreamTableEnvironment}}, you must use 
{{TableEnvironment.execute()}} to trigger table program execution, once you 
convert the table program to a {{DataStream}} program (through 
{{toAppendStream}} or {{toRetractStream}} method), you must use 
{{StreamExecutionEnvironment.execute}} to trigger the {{DataStream}} program.

  was:
As the 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 document describes, we propose to deprecate the following methods in 
TableEnvironment:
{code:java}
void sqlUpdate(String sql)
void insertInto(String targetPath, Table table)
void execute(String jobName)
String explain(boolean extended)
Table fromTableSource(TableSource source)
{code}

Meanwhile, we propose to introduce the following new methods in 
TableEnvironment:

{code:java}
// Synchronously execute the given single statement immediately and return
// the execution result.
ResultTable executeStatement(String statement)

public interface ResultTable {
    TableSchema getResultSchema();
    Iterable<Row> getResultRows();
}

// Create a DmlBatch instance which can add DML statements or Tables to the
// batch and explain or execute them as a batch.
DmlBatch createDmlBatch()

interface DmlBatch {
    void addInsert(String insert);
    void addInsert(String targetPath, Table table);
    ResultTable execute() throws Exception;
    String explain(boolean extended);
}
{code}


We unify the Flink table program trigger behavior, and propose that: for 
{{TableEnvironment}} and {{StreamTableEnvironment}}, you must use 
{{TableEnvironment.execute()}} to trigger table program execution, once you 
convert the table program to a {{DataStream}}program (through 
{{toAppendStream}} or {{toRetractStream}} method), you must use 
{{StreamExecutionEnvironment.execute}} to trigger the {{DataStream}} program.


> FLIP-84: Improve & Refactor API of TableEnvironment
> ---
>
> Key: FLINK-16361
> URL: https://issues.apache.org/jira/browse/FLINK-16361
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.11.0
>
>
> As the 
> [FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
>  document describes, we propose to deprecate the following methods in 
> TableEnvironment:
> {code:java}
> void sqlUpdate(String sql)
> void insertInto(String targetPath, Table table)
> void execute(String jobName)
> String explain(boolean extended)
> Table fromTableSource(TableSource source)
> {code}
> Meanwhile, we propose to introduce the following new methods in 
> TableEnvironment:
> {code:java}
> // Synchronously execute the given single statement immediately and return
> // the execution result.
> ResultTable executeStatement(String statement)
> public interface ResultTable {
>     TableSchema getResultSchema();
>     Iterable<Row> getResultRows();
> }
> // Create a DmlBatch instance which can add DML statements or Tables to the
> // batch and explain or execute them as a batch.
> DmlBatch createDmlBatch()
> interface DmlBatch {
>     void addInsert(String insert);
>     void addInsert(String targetPath, Table table);
>     ResultTable execute() throws Exception;
>     String explain(boolean extended);
> }
> {code}
> We unify the Flink table program trigger behavior, and propose that: for 
> {{TableEnvironment}} and {{StreamTableEnvironment}}, 

[jira] [Updated] (FLINK-16361) FLIP-84: Improve & Refactor API of TableEnvironment

2020-03-01 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16361:
---
Description: 
As the 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 document describes, we propose to deprecate the following methods in 
TableEnvironment:
{code:java}
void sqlUpdate(String sql)
void insertInto(String targetPath, Table table)
void execute(String jobName)
String explain(boolean extended)
Table fromTableSource(TableSource source)
{code}

Meanwhile, we propose to introduce the following new methods in 
TableEnvironment:

{code:java}
// Synchronously execute the given single statement immediately and return
// the execution result.
ResultTable executeStatement(String statement)

public interface ResultTable {
    TableSchema getResultSchema();
    Iterable<Row> getResultRows();
}

// Create a DmlBatch instance which can add DML statements or Tables to the
// batch and explain or execute them as a batch.
DmlBatch createDmlBatch()

interface DmlBatch {
    void addInsert(String insert);
    void addInsert(String targetPath, Table table);
    ResultTable execute() throws Exception;
    String explain(boolean extended);
}
{code}


We unify the Flink table program trigger behavior, and propose that: for 
{{TableEnvironment}} and {{StreamTableEnvironment}}, you must use 
{{TableEnvironment.execute()}} to trigger table program execution, once you 
convert the table program to a {{DataStream }}program (through 
{{toAppendStream}} or {{toRetractStream}} method), you must use 
{{StreamExecutionEnvironment.execute}} to trigger the {{DataStream}} program.

  was:
As the 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 document describes, we propose to deprecate the following methods in 
TableEnvironment:
{code:java}
void sqlUpdate(String sql)
void insertInto(String targetPath, Table table)
void execute(String jobName)
String explain(boolean extended)
Table fromTableSource(TableSource source)
{code}

Meanwhile, we propose to introduce the following new methods in 
TableEnvironment:

{code:java}
// Synchronously execute the given single statement immediately and return
// the execution result.
ResultTable executeStatement(String statement)

public interface ResultTable {
    TableSchema getResultSchema();
    Iterable<Row> getResultRows();
}

// Create a DmlBatch instance which can add DML statements or Tables to the
// batch and explain or execute them as a batch.
DmlBatch createDmlBatch()

interface DmlBatch {
    void addInsert(String insert);
    void addInsert(String targetPath, Table table);
    ResultTable execute() throws Exception;
    String explain(boolean extended);
}
{code}


We unify the Flink table program trigger point, and propose that: for 
TableEnvironment and StreamTableEnvironment, you must use 
`TableEnvironment.execute()` to trigger table program execution, once you 
convert the table program to a DataStream program (through `toAppendStream` or 
`toRetractStream` method), you must use `StreamExecutionEnvironment.execute` to 
trigger the DataStream program.


> FLIP-84: Improve & Refactor API of TableEnvironment
> ---
>
> Key: FLINK-16361
> URL: https://issues.apache.org/jira/browse/FLINK-16361
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.11.0
>
>
> As the 
> [FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
>  document describes, we propose to deprecate the following methods in 
> TableEnvironment:
> {code:java}
> void sqlUpdate(String sql)
> void insertInto(String targetPath, Table table)
> void execute(String jobName)
> String explain(boolean extended)
> Table fromTableSource(TableSource source)
> {code}
> Meanwhile, we propose to introduce the following new methods in 
> TableEnvironment:
> {code:java}
> // Synchronously execute the given single statement immediately and return
> // the execution result.
> ResultTable executeStatement(String statement)
> public interface ResultTable {
>     TableSchema getResultSchema();
>     Iterable<Row> getResultRows();
> }
> // Create a DmlBatch instance which can add DML statements or Tables to the
> // batch and explain or execute them as a batch.
> DmlBatch createDmlBatch()
> interface DmlBatch {
>     void addInsert(String insert);
>     void addInsert(String targetPath, Table table);
>     ResultTable execute() throws Exception;
>     String explain(boolean extended);
> }
> {code}
> We unify the Flink table program trigger behavior, and propose that: for 
> {{TableEnvironment}} and {{StreamTableEnvironment}}, you must use 
> 

[jira] [Created] (FLINK-16361) FLIP-84: Improve & Refactor API of TableEnvironment

2020-03-01 Thread godfrey he (Jira)
godfrey he created FLINK-16361:
--

 Summary: FLIP-84: Improve & Refactor API of TableEnvironment
 Key: FLINK-16361
 URL: https://issues.apache.org/jira/browse/FLINK-16361
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: godfrey he
 Fix For: 1.11.0


As the 
[FLIP-84|https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment]
 document describes, we propose to deprecate the following methods in 
TableEnvironment:
{code:java}
void sqlUpdate(String sql)
void insertInto(String targetPath, Table table)
void execute(String jobName)
String explain(boolean extended)
Table fromTableSource(TableSource source)
{code}

Meanwhile, we propose to introduce the following new methods in 
TableEnvironment:

{code:java}
// Synchronously execute the given single statement immediately and return
// the execution result.
ResultTable executeStatement(String statement)

public interface ResultTable {
    TableSchema getResultSchema();
    Iterable<Row> getResultRows();
}

// Create a DmlBatch instance which can add DML statements or Tables to the
// batch and explain or execute them as a batch.
DmlBatch createDmlBatch()

interface DmlBatch {
    void addInsert(String insert);
    void addInsert(String targetPath, Table table);
    ResultTable execute() throws Exception;
    String explain(boolean extended);
}
{code}


We unify the Flink table program trigger point, and propose that: for 
TableEnvironment and StreamTableEnvironment, you must use 
`TableEnvironment.execute()` to trigger table program execution, once you 
convert the table program to a DataStream program (through `toAppendStream` or 
`toRetractStream` method), you must use `StreamExecutionEnvironment.execute` to 
trigger the DataStream program.
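
For illustration only, a minimal usage sketch of the API proposed above, 
assuming the draft ResultTable/DmlBatch interfaces and an existing 
TableEnvironment instance (the table names src, sink1 and sink2 are 
hypothetical; this is not final Flink code):

{code:java}
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

// Hypothetical usage of the proposed API; assumes the draft interfaces above.
public class Flip84UsageSketch {
    public static void run(TableEnvironment tEnv) throws Exception {
        // Single statement: executed synchronously, result returned immediately.
        ResultTable showResult = tEnv.executeStatement("SHOW TABLES");
        showResult.getResultRows().forEach(System.out::println);

        // Several DML statements batched together and submitted as one job.
        DmlBatch batch = tEnv.createDmlBatch();
        batch.addInsert("INSERT INTO sink1 SELECT * FROM src WHERE a > 0");
        Table projected = tEnv.sqlQuery("SELECT b FROM src");
        batch.addInsert("sink2", projected);
        System.out.println(batch.explain(false)); // plan, without extended details
        ResultTable result = batch.execute();     // triggers the batch as one job
    }
}
{code}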





[GitHub] [flink] wuchong commented on a change in pull request #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-01 Thread GitBox
wuchong commented on a change in pull request #11174: [FLINK-16199][sql] 
Support IS JSON predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#discussion_r386220827
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FunctionGenerator.scala
 ##
 @@ -753,6 +753,54 @@ object FunctionGenerator {
 Seq(FLOAT, INTEGER),
 BuiltInMethods.TRUNCATE_FLOAT)
 
+  addSqlFunctionMethod(
+IS_JSON_VALUE,
+Seq(CHAR),
 
 Review comment:
   `FunctionGenerator` is not perfect now; we may need to declare different 
combinations of types, and we will improve that in the future. But I think 
`VARCHAR` can suit both CHAR and VARCHAR here, so let's reduce the 
combinations. 




[GitHub] [flink] wuchong commented on a change in pull request #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-01 Thread GitBox
wuchong commented on a change in pull request #11174: [FLINK-16199][sql] 
Support IS JSON predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#discussion_r386220447
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FunctionGenerator.scala
 ##
 @@ -753,6 +753,54 @@ object FunctionGenerator {
 Seq(FLOAT, INTEGER),
 BuiltInMethods.TRUNCATE_FLOAT)
 
+  addSqlFunctionMethod(
+IS_JSON_VALUE,
+Seq(CHAR),
 
 Review comment:
   Yes. 




[jira] [Updated] (FLINK-16360) connector on hive 2.0.1 don't support type conversion from STRING to VARCHAR

2020-03-01 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-16360:
-
Fix Version/s: 1.10.1

>  connector on hive 2.0.1 don't  support type conversion from STRING to VARCHAR
> --
>
> Key: FLINK-16360
> URL: https://issues.apache.org/jira/browse/FLINK-16360
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
> Environment: os:centos
> java: 1.8.0_92
> flink :1.10.0
> hadoop: 2.7.2
> hive:2.0.1
>  
>Reporter: wgcn
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.10.1
>
> Attachments: exceptionstack
>
>
> It threw an exception when we queried Hive 2.0.1 from Flink 1.10.0.
> Exception stack:
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=50, 
> backoffTimeMS=1)
>  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
>  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
>  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484)
>  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
>  at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194)
>  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>  at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>  at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>  at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>  at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>  at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>  at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>  at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>  at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>  at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>  at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>  at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:76)
>  at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:123)
>  at org.apache.flink.orc.OrcSplitReader.<init>(OrcSplitReader.java:73)
>  at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.<init>(OrcColumnarRowSplitReader.java:55)
>  at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:96)
>  at 
> org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.<init>(HiveVectorizedOrcSplitReader.java:65)
>  at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:117)
>  at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:56)
>  at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:85)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>  at 
> 

[GitHub] [flink] wangyang0918 commented on issue #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-01 Thread GitBox
wangyang0918 commented on issue #11233: [FLINK-16194][k8s] Refactor the 
Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#issuecomment-593246987
 
 
   ```
   echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=flink-native-k8s-session-1 -Dexecution.attached=true
   ```
   The command above fails with the exception below, probably because we do 
not include `io.fabric8:zjsonpatch` in the pom.xml of the `flink-kubernetes` 
module.
   
   ```
   2020-03-02 14:30:32,386 INFO  
org.apache.flink.kubernetes.KubernetesClusterDescriptor  [] - Retrieve 
flink cluster flink-native-k8s-session-1 successfully, JobManager Web 
Interface: http://11.164.91.5:31318
   Exception in thread "main" java.lang.NoClassDefFoundError: 
io/fabric8/zjsonpatch/JsonDiff
at 
io.fabric8.kubernetes.client.dsl.base.OperationSupport.handlePatch(OperationSupport.java:297)
at 
io.fabric8.kubernetes.client.dsl.base.BaseOperation.handlePatch(BaseOperation.java:808)
at 
io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.lambda$patch$2(HasMetadataOperation.java:145)
at 
io.fabric8.kubernetes.api.model.apps.DoneableDeployment.done(DoneableDeployment.java:27)
at 
io.fabric8.kubernetes.api.model.apps.DoneableDeployment.done(DoneableDeployment.java:6)
at 
io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.patch(HasMetadataOperation.java:151)
at 
io.fabric8.kubernetes.client.dsl.internal.RollableScalableResourceOperation.patch(RollableScalableResourceOperation.java:167)
at 
io.fabric8.kubernetes.client.dsl.internal.DeploymentOperationsImpl.patch(DeploymentOperationsImpl.java:113)
at 
io.fabric8.kubernetes.client.dsl.internal.DeploymentOperationsImpl.patch(DeploymentOperationsImpl.java:45)
at 
io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.lambda$edit$0(HasMetadataOperation.java:53)
at 
io.fabric8.kubernetes.api.model.apps.DoneableDeployment.done(DoneableDeployment.java:27)
at 
io.fabric8.kubernetes.client.dsl.internal.DeploymentOperationsImpl$DeploymentReaper.reap(DeploymentOperationsImpl.java:245)
at 
io.fabric8.kubernetes.client.dsl.base.BaseOperation.delete(BaseOperation.java:642)
at 
io.fabric8.kubernetes.client.dsl.base.BaseOperation.delete(BaseOperation.java:63)
at 
org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.stopAndCleanupCluster(Fabric8FlinkKubeClient.java:182)
at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.killCluster(KubernetesClusterDescriptor.java:193)
at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.killCluster(KubernetesClusterDescriptor.java:59)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.run(KubernetesSessionCli.java:125)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.lambda$main$0(KubernetesSessionCli.java:185)
at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.main(KubernetesSessionCli.java:185)
   Caused by: java.lang.ClassNotFoundException: io.fabric8.zjsonpatch.JsonDiff
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 21 more
   ```




[GitHub] [flink] jinglining commented on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-01 Thread GitBox
jinglining commented on issue #11250: [FLINK-16302][rest]add log list and read 
log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-593246398
 
 
   @flinkbot run azure




[GitHub] [flink] jinglining commented on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-01 Thread GitBox
jinglining commented on issue #11250: [FLINK-16302][rest]add log list and read 
log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-593246362
 
 
   @flinkbot run travis




[jira] [Commented] (FLINK-16357) Extend Checkpoint Coordinator to differentiate between "regional restore" and "full restore".

2020-03-01 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048828#comment-17048828
 ] 

Zhu Zhu commented on FLINK-16357:
-

Is OperatorCoordinator#resetToCheckpoint(...) expected to be invoked in 
CheckpointCoordinator#restoreLatestCheckpointedState(...)? If so, there seems 
to be no need to tell the CheckpointCoordinator whether it is a global failure 
or a regional failure; we can just pass the set of execution vertices affected 
by the failure, namely changing the param {{tasks}} of 
CheckpointCoordinator#restoreLatestCheckpointedState(...) from 
Set<ExecutionJobVertex> to Set<ExecutionVertex>.

In the new scheduler (DefaultScheduler), the logic of global failure recovery 
and regional failure recovery is almost the same except for how the set of 
ExecutionVertex to restart is calculated. So it does not differentiate between 
global and regional failures in the stage that restores task states and 
reschedules the tasks, and there is always a set of ExecutionVertex to restart 
which can be passed to CheckpointCoordinator#restoreLatestCheckpointedState(...).
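
For illustration, a self-contained sketch of the suggested signature change 
(the types below are stand-ins rather than the actual Flink runtime classes, 
and the signature is an assumption based on the discussion above):

{code:java}
import java.util.Set;

// Stand-in for org.apache.flink.runtime.executiongraph.ExecutionVertex.
class ExecutionVertex { }

// Hypothetical sketch: instead of a global/regional flag, the caller passes
// exactly the execution vertices affected by the failure; a global failover
// simply passes all vertices of the graph.
interface CheckpointRestoreSketch {
    boolean restoreLatestCheckpointedState(
            Set<ExecutionVertex> verticesToRestore,
            boolean errorIfNoCheckpoint,
            boolean allowNonRestoredState) throws Exception;
}
{code}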

> Extend Checkpoint Coordinator to differentiate between "regional restore" and 
> "full restore".
> -
>
> Key: FLINK-16357
> URL: https://issues.apache.org/jira/browse/FLINK-16357
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Stephan Ewen
>Priority: Major
> Fix For: 1.11.0
>
>
> The {{ExecutionGraph}} has the notion of "global failure" (failing the entire 
> execution graph) and "regional failure" (recover a region with transient 
> pipelined data exchanges).
> The latter one is for common failover, the former one is a safety net to 
> handle unexpected failures or inconsistencies (full reset of ExecutionGraph 
> recovers most inconsistencies).
> The OperatorCoordinators should only be reset to a checkpoint in the "global 
> failover" case. In the "regional failover" case, they are only notified of 
> the tasks that are reset and keep their internal state and adjust it for the 
> failed tasks.
> To implement that, the ExecutionGraph needs to forward the information about 
> whether we are restoring from a "regional failure" or from a "global failure".





[GitHub] [flink] shuai-xu commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-03-01 Thread GitBox
shuai-xu commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r386216554
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -23,23 +23,20 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-FlinkCEP is the Complex Event Processing (CEP) library implemented on top of 
Flink.
-It allows you to detect event patterns in an endless stream of events, giving 
you the opportunity to get hold of what's important in your
-data.
+FlinkCEP是在Flink上层实现的复杂事件处理库。
+它可以让你在无限事件流中检测出特定的事件模型,有机会掌握数据中重要的那部分。
 
-This page describes the API calls available in Flink CEP. We start by 
presenting the [Pattern API](#the-pattern-api),
-which allows you to specify the patterns that you want to detect in your 
stream, before presenting how you can
-[detect and act upon matching event sequences](#detecting-patterns). We then 
present the assumptions the CEP
-library makes when [dealing with lateness](#handling-lateness-in-event-time) 
in event time and how you can
-[migrate your job](#migrating-from-an-older-flink-versionpre-13) from an older 
Flink version to Flink-1.3.
+本页讲述了Flink 
CEP中可用的API,我们首先讲述[模式API](#模式api),它可以让你指定想在数据流中检测的模式,然后讲述如何[检测匹配的事件序列并进行处理](#检测模式)。
+再然后我们讲述Flink在按照事件时间[处理迟到事件](#按照事件时间处理晚到事件)时的假设,
+以及如何从旧版本的Flink向1.3之后的版本[迁移作业](#从旧版本迁移13之前)。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Getting Started
+## 开始
 
-If you want to jump right in, [set up a Flink program]({{ site.baseurl 
}}/dev/projectsetup/dependencies.html) and
-add the FlinkCEP dependency to the `pom.xml` of your project.
+如果你想现在开始尝试,[创建一个Flink程序]({{ site.baseurl 
}}/dev/projectsetup/dependencies.html),
 
 Review comment:
   done




[GitHub] [flink] shuai-xu commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-03-01 Thread GitBox
shuai-xu commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r386216591
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -63,13 +60,12 @@ add the FlinkCEP dependency to the `pom.xml` of your 
project.
 
 
 
-{% info %} FlinkCEP is not part of the binary distribution. See how to link 
with it for cluster execution 
[here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
+{% info %} 
FlinkCEP不是二进制分发的一部分。在集群上执行如何链接它可以看[这里]({{site.baseurl}}/dev/projectsetup/dependencies.html)。
 
-Now you can start writing your first CEP program using the Pattern API.
+现在可以开始使用Pattern API写你的第一个CEP程序了。
 
-{% warn Attention %} The events in the `DataStream` to which
-you want to apply pattern matching must implement proper `equals()` and 
`hashCode()` methods
-because FlinkCEP uses them for comparing and matching events.
+{% warn Attention %} `DataStream`中的事件,如果你想在上面进行模式匹配的话,必须实现合适的 
`equals()`和`hashCode()`方法,
 
 Review comment:
   done




[GitHub] [flink] flinkbot edited a comment on issue #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11276: [FLINK-16029][table-planner-blink] 
Remove register source and sink in test cases of blink planner
URL: https://github.com/apache/flink/pull/11276#issuecomment-593234780
 
 
   
   ## CI report:
   
   * 70303310b9062e83705e8d3536660784bf963cca Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151302138) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] wangyang0918 commented on a change in pull request #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-01 Thread GitBox
wangyang0918 commented on a change in pull request #11233: [FLINK-16194][k8s] 
Refactor the Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#discussion_r386216470
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/Fabric8FlinkKubeClient.java
 ##
 @@ -66,8 +70,52 @@ public Fabric8FlinkKubeClient(Configuration flinkConfig, 
KubernetesClient client
}
 
@Override
-   public void createTaskManagerPod(TaskManagerPodParameter parameter) {
-   // todo
+   public void createJobManagerComponent(KubernetesJobManagerSpecification 
kubernetesJMSpec) {
+   final Deployment deployment = kubernetesJMSpec.getDeployment();
+   final List accompanyingResources = 
kubernetesJMSpec.getAccompanyingResources();
+
+   // create Deployment
+   LOG.debug("Start to create deployment with spec {}", 
deployment.getSpec().toString());
+   final Deployment createdDeployment = this.internalClient
+   .apps()
+   .deployments()
+   .inNamespace(this.nameSpace)
+   .create(deployment);
+
+   // Note, we should use the server-side uid of the created 
Deployment for the OwnerReference.
+   setOwnerReference(createdDeployment, accompanyingResources);
 
 Review comment:
   The e2e test  `test_kubernetes_session.sh` also needs to be updated.




[GitHub] [flink] shuai-xu commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-03-01 Thread GitBox
shuai-xu commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r386216484
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -136,140 +132,143 @@ val result: DataStream[Alert] = patternStream.process(
 
 
 
-## The Pattern API
+## 模式API
 
-The pattern API allows you to define complex pattern sequences that you want 
to extract from your input stream.
+模式API可以让你定义想从输入流中抽取的复杂模式序列。
 
-Each complex pattern sequence consists of multiple simple patterns, i.e. 
patterns looking for individual events with the same properties. From now on, 
we will call these simple patterns **patterns**, and the final complex pattern 
sequence we are searching for in the stream, the **pattern sequence**. You can 
see a pattern sequence as a graph of such patterns, where transitions from one 
pattern to the next occur based on user-specified
-*conditions*, e.g. `event.getName().equals("end")`. A **match** is a sequence 
of input events which visits all
-patterns of the complex pattern graph, through a sequence of valid pattern 
transitions.
+每个复杂的模式序列包括多个简单的模式,比如,寻找拥有相同属性事件序列的模式。从现在开始,我们把这些简单的模式称作**模式**,
+把我们在数据流中最终寻找的复杂模式序列称作**模式序列**,你可以把模式序列看作是这样的模式构成的图,
+这些模式基于用户指定的**条件**从一个转换到另外一个,比如 `event.getName().equals("end")`。
+一个**匹配**是输入事件的一个序列,这些事件通过一系列有效的模式转换,能够访问到复杂模式图中的所有模式。
 
-{% warn Attention %} Each pattern must have a unique name, which you use later 
to identify the matched events.
+{% warn Attention %} 每个模式必须有一个独一无二的名字,你可以在后面使用它来识别匹配到的事件。
 
-{% warn Attention %} Pattern names **CANNOT** contain the character `":"`.
+{% warn Attention %} 模式的名字不能包含字符`":"`.
 
-In the rest of this section we will first describe how to define [Individual 
Patterns](#individual-patterns), and then how you can combine individual 
patterns into [Complex Patterns](#combining-patterns).
+这一节的剩余部分我们会先讲述如何定义[单个模式](#单个模式),然后讲如何将单个模式组合成[复杂模式](#组合模式)。
 
-### Individual Patterns
+### 单个模式
 
-A **Pattern** can be either a *singleton* or a *looping* pattern. Singleton 
patterns accept a single
-event, while looping patterns can accept more than one. In pattern matching 
symbols, the pattern `"a b+ c? d"` (or `"a"`, followed by *one or more* 
`"b"`'s, optionally followed by a `"c"`, followed by a `"d"`), `a`, `c?`, and 
`d` are
-singleton patterns, while `b+` is a looping one. By default, a pattern is a 
singleton pattern and you can transform
-it to a looping one by using [Quantifiers](#quantifiers). Each pattern can 
have one or more
-[Conditions](#conditions) based on which it accepts events.
+一个**模式**可以是一个**单例**或者**循环**模式。单例模式只接受一个事件,循环模式可以接受多个事件。
+在模式匹配表达式中,模式`"a b+ c? 
d"`(或者`"a"`,后面跟着一个或者多个`"b"`,再往后可选择的跟着一个`"c"`,最后跟着一个`"d"`),
+`a`,`c?`,和 `d`都是单例模式,`b+`是一个循环模式。默认情况下,模式都是单例的,你可以通过使用[量词](#量词)把它们转换成循环模式。
+每个模式可以有一个或者多个[条件](#条件)来决定它接受哪些事件。
 
- Quantifiers
+ 量词
 
-In FlinkCEP, you can specify looping patterns using these methods: 
`pattern.oneOrMore()`, for patterns that expect one or more occurrences of a 
given event (e.g. the `b+` mentioned before); and `pattern.times(#ofTimes)`, 
for patterns that
-expect a specific number of occurrences of a given type of event, e.g. 4 
`a`'s; and `pattern.times(#fromTimes, #toTimes)`, for patterns that expect a 
specific minimum number of occurrences and a maximum number of occurrences of a 
given type of event, e.g. 2-4 `a`s.
+在FlinkCEP中,你可以通过这些方法指定循环模式:`pattern.oneOrMore()`,指定期望一个给定事件出现一次或者多次的模式(例如前面提到的`b+`模式);
+`pattern.times(#ofTimes)`,指定期望一个给定事件出现特定次数的模式,例如出现4次`a`;
+`pattern.times(#fromTimes, 
#toTimes)`,指定期望一个给定事件出现次数在一个最小值和最大值中间的模式,比如出现2-4次`a`。
 
-You can make looping patterns greedy using the `pattern.greedy()` method, but 
you cannot yet make group patterns greedy. You can make all patterns, looping 
or not, optional using the `pattern.optional()` method.
+你可以使用`pattern.greedy()`方法让循环模式变成贪心的,但现在还不能让模式组贪心。
+你可以使用`pattern.optional()`方法让所有的模式变成可选的,不管是否是循环模式。
 
-For a pattern named `start`, the following are valid quantifiers:
+对一个命名为`start`的模式,以下量词是有效的:
 
  
  
  {% highlight java %}
- // expecting 4 occurrences
+ // 期望出现4次
  start.times(4);
 
- // expecting 0 or 4 occurrences
+ // 期望出现0或者4次
  start.times(4).optional();
 
- // expecting 2, 3 or 4 occurrences
+ // 期望出现2、3或者4次
  start.times(2, 4);
 
- // expecting 2, 3 or 4 occurrences and repeating as many as possible
+ // 期望出现2、3或者4次,并且尽可能的重复次数多
  start.times(2, 4).greedy();
 
- // expecting 0, 2, 3 or 4 occurrences
+ // 期望出现0、2、3或者4次
  start.times(2, 4).optional();
 
- // expecting 0, 2, 3 or 4 occurrences and repeating as many as possible
+ // 期望出现0、2、3或者4次,并且尽可能的重复次数多
  start.times(2, 4).optional().greedy();
 
- // expecting 1 or more occurrences
+ // 期望出现1到多次
  start.oneOrMore();
 
- // expecting 1 or more occurrences and repeating as many as possible
+ // 期望出现1到多次,并且尽可能的重复次数多
  start.oneOrMore().greedy();
 
- // expecting 0 or more occurrences
+ // 期望出现0到多次
  start.oneOrMore().optional();
 
- // expecting 0 

[GitHub] [flink] flinkbot edited a comment on issue #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11168: [FLINK-16140] [docs-zh] Translate 
Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#issuecomment-589576910
 
 
   
   ## CI report:
   
   * 7546b4bef354ec3acb52245f867c3338107d0995 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151296073) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5794)
 
   * 76146c2111a47b68765168064b4d1dd90448789c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] shuai-xu commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-03-01 Thread GitBox
shuai-xu commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r386216501
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -665,12 +647,11 @@ pattern.oneOrMore().greedy()
 
 
 
-### Combining Patterns
+### 组合模式
 
-Now that you've seen what an individual pattern can look like, it is time to 
see how to combine them
-into a full pattern sequence.
+现在你已经看到单个的模式是什么样的了,改取看看如何把它们连接起来组成一个完整的模式序列。
 
 Review comment:
   done




[GitHub] [flink] flinkbot edited a comment on issue #11242: [FLINK-16007][table-planner][table-planner-blink][python] Add PythonCorrelateSplitRule to push down the Java Calls contained in Python Corre

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11242: 
[FLINK-16007][table-planner][table-planner-blink][python] Add 
PythonCorrelateSplitRule to push down the Java Calls contained in Python 
Correlate node
URL: https://github.com/apache/flink/pull/11242#issuecomment-591981740
 
 
   
   ## CI report:
   
   * ad21752d6e9ac8c6d2dc1c0d2d824f86d77d69c0 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151299714) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5796)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-01 Thread GitBox
TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] 
Support IS JSON predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#discussion_r386215574
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/NegativeCallGen.scala
 ##
 @@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.codegen.calls
+
+import 
org.apache.flink.table.planner.codegen.GenerateUtils.generateCallIfArgsNotNull
+import org.apache.flink.table.planner.codegen.{CodeGeneratorContext, 
GeneratedExpression}
+import org.apache.flink.table.types.logical.{BooleanType, LogicalType}
+
+/**
+ * Inverts the boolean value of a [[CallGenerator]] result.
+ */
+class NegativeCallGen(callGenerator: CallGenerator) extends CallGenerator {
+
+  override def generate(
+ctx: CodeGeneratorContext,
+operands: Seq[GeneratedExpression],
+returnType: LogicalType
+  ): GeneratedExpression = {
+assert(returnType.isInstanceOf[BooleanType])
+
+val expr = callGenerator.generate(ctx, operands, returnType)
+generateCallIfArgsNotNull(ctx, returnType, Seq(expr), 
returnType.isNullable) {
+  originalTerms =>
+assert(originalTerms.size == 1)
+
+s"!${originalTerms.head}"
+}
 
 Review comment:
   Thanks for your pointing out!




[GitHub] [flink] TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-01 Thread GitBox
TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] 
Support IS JSON predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#discussion_r386215651
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/ScalarFunctionsTest.scala
 ##
 @@ -4195,4 +4195,24 @@ class ScalarFunctionsTest extends ScalarTypesTestBase {
   "f55=f57",
   "true")
   }
+
+  @Test
+  def testIsJSONPredicates(): Unit = {
 
 Review comment:
   Make sense.




[GitHub] [flink] TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-01 Thread GitBox
TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] 
Support IS JSON predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#discussion_r386215084
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/NegativeCallGen.scala
 ##
 @@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.codegen.calls
+
+import 
org.apache.flink.table.planner.codegen.GenerateUtils.generateCallIfArgsNotNull
+import org.apache.flink.table.planner.codegen.{CodeGeneratorContext, 
GeneratedExpression}
+import org.apache.flink.table.types.logical.{BooleanType, LogicalType}
+
+/**
+ * Inverts the boolean value of a [[CallGenerator]] result.
+ */
+class NegativeCallGen(callGenerator: CallGenerator) extends CallGenerator {
 
 Review comment:
   Renamed.




[GitHub] [flink] TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-01 Thread GitBox
TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] 
Support IS JSON predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#discussion_r386215020
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FunctionGenerator.scala
 ##
 @@ -753,6 +753,54 @@ object FunctionGenerator {
 Seq(FLOAT, INTEGER),
 BuiltInMethods.TRUNCATE_FLOAT)
 
+  addSqlFunctionMethod(
+IS_JSON_VALUE,
+Seq(CHAR),
 
 Review comment:
   Thanks for your suggestion. Does the same apply to the rest of the `CHAR` 
declarations here?




[GitHub] [flink] wangyang0918 commented on a change in pull request #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-01 Thread GitBox
wangyang0918 commented on a change in pull request #11233: [FLINK-16194][k8s] 
Refactor the Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#discussion_r386210683
 
 

 ##
 File path: 
flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/factory/KubernetesJobManagerFactoryTest.java
 ##
 @@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient.factory;
+
+import org.apache.flink.kubernetes.KubernetesTestUtils;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import 
org.apache.flink.kubernetes.configuration.KubernetesConfigOptionsInternal;
+import 
org.apache.flink.kubernetes.entrypoint.KubernetesSessionClusterEntrypoint;
+import 
org.apache.flink.kubernetes.kubeclient.KubernetesJobManagerSpecification;
+import org.apache.flink.kubernetes.kubeclient.KubernetesJobManagerTestBase;
+import 
org.apache.flink.kubernetes.kubeclient.parameters.KubernetesJobManagerParameters;
+import org.apache.flink.kubernetes.utils.Constants;
+import org.apache.flink.kubernetes.utils.KubernetesUtils;
+
+import io.fabric8.kubernetes.api.model.ConfigMap;
+import io.fabric8.kubernetes.api.model.Container;
+import io.fabric8.kubernetes.api.model.HasMetadata;
+import io.fabric8.kubernetes.api.model.PodSpec;
+import io.fabric8.kubernetes.api.model.Quantity;
+import io.fabric8.kubernetes.api.model.Service;
+import io.fabric8.kubernetes.api.model.apps.Deployment;
+import io.fabric8.kubernetes.api.model.apps.DeploymentSpec;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import static 
org.apache.flink.configuration.GlobalConfiguration.FLINK_CONF_FILENAME;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * General tests for the {@link KubernetesJobManagerFactory}.
+ */
+public class KubernetesJobManagerFactoryTest extends 
KubernetesJobManagerTestBase {
+
+   private static final String SERVICE_ACCOUNT_NAME = "service-test";
+   private static final String ENTRY_POINT_CLASS = 
KubernetesSessionClusterEntrypoint.class.getCanonicalName();
+
+   private KubernetesJobManagerSpecification 
kubernetesJobManagerSpecification;
+
+   @Before
+   public void setup() throws Exception {
+   super.setup();
+
+   KubernetesTestUtils.createTemporyFile("some data", 
flinkConfDir, "logback.xml");
+   KubernetesTestUtils.createTemporyFile("some data", 
flinkConfDir, "log4j.properties");
+
+   
flinkConfig.set(KubernetesConfigOptionsInternal.ENTRY_POINT_CLASS, 
ENTRY_POINT_CLASS);
+   
flinkConfig.set(KubernetesConfigOptions.JOB_MANAGER_SERVICE_ACCOUNT, 
SERVICE_ACCOUNT_NAME);
+
+   this.kubernetesJobManagerSpecification =
+   
KubernetesJobManagerFactory.createJobManagerComponent(kubernetesJobManagerParameters);
+   }
+
+   @Test
+   public void testDeploymentMetadata() {
+   final Deployment resultDeployment = 
this.kubernetesJobManagerSpecification.getDeployment();
+   assertEquals(Constants.APPS_API_VERSION, 
resultDeployment.getApiVersion());
+   assertEquals(KubernetesUtils.getDeploymentName(CLUSTER_ID), 
resultDeployment.getMetadata().getName());
+   final Map expectedLabels = getCommonLabels();
+   expectedLabels.put(Constants.LABEL_COMPONENT_KEY, 
Constants.LABEL_COMPONENT_JOB_MANAGER);
+   assertEquals(expectedLabels, 
resultDeployment.getMetadata().getLabels());
+   }
+
+   @Test
+   public void testDeploymentSpec() {
+   final DeploymentSpec resultDeploymentSpec = 
this.kubernetesJobManagerSpecification.getDeployment().getSpec();
+   assertEquals(1, resultDeploymentSpec.getReplicas().intValue());
+
+   final Map expectedLabels =  new 
HashMap<>(getCommonLabels());
+   expectedLabels.put(Constants.LABEL_COMPONENT_KEY, 
Constants.LABEL_COMPONENT_JOB_MANAGER);
+
+   

[GitHub] [flink] wangyang0918 commented on a change in pull request #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-01 Thread GitBox
wangyang0918 commented on a change in pull request #11233: [FLINK-16194][k8s] 
Refactor the Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#discussion_r386198450
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/AbstractServiceDecorator.java
 ##
 @@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient.decorators;
+
+import org.apache.flink.configuration.RestOptions;
+import 
org.apache.flink.kubernetes.kubeclient.parameters.KubernetesJobManagerParameters;
+import org.apache.flink.kubernetes.utils.Constants;
+
+import io.fabric8.kubernetes.api.model.HasMetadata;
+import io.fabric8.kubernetes.api.model.Service;
+import io.fabric8.kubernetes.api.model.ServiceBuilder;
+import io.fabric8.kubernetes.api.model.ServicePort;
+import io.fabric8.kubernetes.api.model.ServicePortBuilder;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * An abstract class containing some common implementations for the internal/external Services.
+ */
+public abstract class AbstractServiceDecorator extends AbstractKubernetesStepDecorator {
+
+   protected final KubernetesJobManagerParameters kubernetesJobManagerParameters;
+
+   public AbstractServiceDecorator(KubernetesJobManagerParameters kubernetesJobManagerParameters) {
+   this.kubernetesJobManagerParameters = checkNotNull(kubernetesJobManagerParameters);
+   }
+
+   @Override
+   public List<HasMetadata> buildAccompanyingKubernetesResources() throws IOException {
+   final Service service = new ServiceBuilder()
+   .withApiVersion(Constants.API_VERSION)
+   .withNewMetadata()
+   .withName(getServiceName())
+   .withLabels(kubernetesJobManagerParameters.getCommonLabels())
+   .endMetadata()
+   .withNewSpec()
+   .withType(getServiceType())
+   .withPorts(getServicePorts())
+   .withSelector(kubernetesJobManagerParameters.getLabels())
+   .endSpec()
+   .build();
+
+   return Collections.singletonList(service);
+   }
+
+   protected abstract String getServiceType();
+
+   protected abstract String getServiceName();
+
+   protected List<ServicePort> getServicePorts() {
 
 Review comment:
   Just like @TisonKun said, maybe we could also make `getServicePorts()` 
abstract and leave the different implementations to the derived classes.
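   For illustration, a rough sketch of that shape (the class names and the port
   value below are assumptions, not the PR's actual code):

   import io.fabric8.kubernetes.api.model.ServicePort;
   import io.fabric8.kubernetes.api.model.ServicePortBuilder;

   import java.util.Collections;
   import java.util.List;

   abstract class ServiceDecoratorSketch {
       // Each concrete Service decorator supplies its own ports.
       protected abstract List<ServicePort> getServicePorts();
   }

   class RestServiceDecoratorSketch extends ServiceDecoratorSketch {
       @Override
       protected List<ServicePort> getServicePorts() {
           // Expose only the REST port; 8081 is an assumed default here.
           return Collections.singletonList(
               new ServicePortBuilder()
                   .withName("rest")
                   .withPort(8081)
                   .build());
       }
   }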


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangyang0918 commented on a change in pull request #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-01 Thread GitBox
wangyang0918 commented on a change in pull request #11233: [FLINK-16194][k8s] 
Refactor the Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#discussion_r386202186
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/JavaCmdTaskManagerDecorator.java
 ##
 @@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient.decorators;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.kubernetes.kubeclient.FlinkPod;
+import org.apache.flink.kubernetes.kubeclient.parameters.KubernetesTaskManagerParameters;
+import org.apache.flink.kubernetes.taskmanager.KubernetesTaskExecutorRunner;
+import org.apache.flink.kubernetes.utils.KubernetesUtils;
+import org.apache.flink.runtime.clusterframework.ContaineredTaskManagerParameters;
+import org.apache.flink.runtime.clusterframework.TaskExecutorProcessSpec;
+import org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils;
+import org.apache.flink.runtime.entrypoint.parser.CommandLineOptions;
+
+import io.fabric8.kubernetes.api.model.Container;
+import io.fabric8.kubernetes.api.model.ContainerBuilder;
+
+import javax.annotation.Nullable;
+
+import java.util.Arrays;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Attach the command and args to the main container for running the TaskManager code.
+ */
+public class JavaCmdTaskManagerDecorator extends AbstractKubernetesStepDecorator {
+
+   private final KubernetesTaskManagerParameters kubernetesTaskManagerParameters;
+
+   public JavaCmdTaskManagerDecorator(KubernetesTaskManagerParameters kubernetesTaskManagerParameters) {
+   this.kubernetesTaskManagerParameters = checkNotNull(kubernetesTaskManagerParameters);
+   }
+
+   @Override
+   public FlinkPod decorateFlinkPod(FlinkPod flinkPod) {
+   final Container mainContainerWithStartCmd = new ContainerBuilder(flinkPod.getMainContainer())
+   .withCommand(kubernetesTaskManagerParameters.getContainerEntrypoint())
+   .withArgs(Arrays.asList("/bin/bash", "-c", getTaskManagerStartCommand()))
+   .build();
+
+   return new FlinkPod.Builder(flinkPod)
+   .withMainContainer(mainContainerWithStartCmd)
+   .build();
+   }
+
+   private String getTaskManagerStartCommand() {
+   final String confDirInPod = kubernetesTaskManagerParameters.getFlinkConfDirInPod();
+
+   final String logDirInPod = kubernetesTaskManagerParameters.getFlinkLogDirInPod();
+
+   final String mainClassArgs = "--" + CommandLineOptions.CONFIG_DIR_OPTION.getLongOpt() + " " +
+   confDirInPod + " " + kubernetesTaskManagerParameters.getDynamicProperties();
+
+   return getTaskManagerStartCommand(
+   kubernetesTaskManagerParameters.getFlinkConfiguration(),
+   kubernetesTaskManagerParameters.getContaineredTaskManagerParameters(),
+   confDirInPod,
+   logDirInPod,
+   kubernetesTaskManagerParameters.hasLogback(),
+   kubernetesTaskManagerParameters.hasLog4j(),
+   KubernetesTaskExecutorRunner.class.getCanonicalName(),
+   mainClassArgs);
+   }
+
+   private static String getTaskManagerStartCommand(
+   Configuration flinkConfig,
+   ContaineredTaskManagerParameters tmParams,
+   String configDirectory,
+   String logDirectory,
+   boolean hasLogback,
+   boolean hasLog4j,
+   String mainClass,
+   @Nullable String mainArgs) {
 
 Review comment:
   `@Nullable` could be removed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the 

[jira] [Closed] (FLINK-16358) Failed to execute when using rowtime or proctime and table keywords

2020-03-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-16358.
---
Resolution: Duplicate

It sounds like a duplicate of FLINK-16068.

> Failed to execute when using rowtime or proctime and table keywords
> ---
>
> Key: FLINK-16358
> URL: https://issues.apache.org/jira/browse/FLINK-16358
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: backbook
>Priority: Major
>
> CREATE TABLE ${topicName} (
>  `table` STRING,
>  `proctime` PROCTIME(),
>  `data` ARRAY>,
>  `old` ARRAY>
> ){--kafka config}
>  
> With the CREATE TABLE statement above: if you delete the proctime column, the 
> DDL works fine; with it, the SQL fails. I think it's a bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16353) Issue when flink upload a job with stream sql query

2020-03-01 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048818#comment-17048818
 ] 

Jark Wu commented on FLINK-16353:
-

Hi [~miguelangel], please have a look at the user mailing list thread above; it 
should be a similar problem. Please make sure {{flink-table-planner}} and 
{{flink-table-planner-blink}} are not packaged into the user jar. 

> Issue when flink upload a job with stream sql query
> ---
>
> Key: FLINK-16353
> URL: https://issues.apache.org/jira/browse/FLINK-16353
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
> Environment: This is my code
> {code:java}
> class TestQueries extends Serializable{
>   def testQuery(): Unit = {
> // Enable settings
> val settings = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val tableEnv = StreamTableEnvironment.create(env, settings)
> 
> //Consumer kafka topic
> //... topic_consumer
> val stream: DataStream[String] = env.addSource(topic_consumer)
> 
> // Convert stream to DataStream[Row]
> val result: DataStream[Row] = stream.map(str => desJson(str))(rowType)
> // desJson is a function to return Row values from deserialize json topic
> // rowType is a rowTypeInfo with (fieldTypes, fieldNames). fieldTypes are 
> Strings and fieldNames ("user", "name", "lastName")
> // Register table
> tableEnv.createTemporaryView("table", result)
> //Queries
> val first_query = tableEnv.sqlQuery("SELECT * from table WHERE name = 
> 'Sansa'")
> val second_query = tableEnv.sqlQuery("SELECT * from table WHERE lastName 
> = 'Stark'")
> //In the following two lines is where the exception occurs
> val first_row: DataStream[Row] = tableEnv.toAppendStream[Row](first_query)
> val second_row: DataStream[Row] = 
> tableEnv.toAppendStream[Row](second_query)
> //Elasticsearch
> // Sending data to Elasticsearch
> env.execute("Test Queries")
>   }
> {code}
>Reporter: Miguel Angel
>Priority: Major
>
> I used the latest flink version (1.10.0) and sbt (1.3.7). I get this 
> exception when uploading a job with a streaming SQL query:
>  Caused by: java.lang.ClassCastException: class 
> org.codehaus.janino.CompilerFactory cannot be cast to class 
> org.codehaus.commons.compiler.ICompilerFactory 
> (org.codehaus.janino.CompilerFactory is in unnamed module of loader 
> org.apache.flink.util.ChildFirstClassLoader @3270d194; 
> org.codehaus.commons.compiler.ICompilerFactory is in unnamed module of loader 
> 'app')
>  at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
>  at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
>  at 
> org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:432)
>  
> When I run the main class with *sbt run* it works perfectly.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11273: [FLINK-12814][sql-client] Support tableau result format

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11273: [FLINK-12814][sql-client] Support 
tableau result format
URL: https://github.com/apache/flink/pull/11273#issuecomment-593096244
 
 
   
   ## CI report:
   
   * 87fcf8d5a1309a85519e0a09df0191acd515c15d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151296114) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5795)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
TisonKun commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386213079
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/ByteBufUtils.java
 ##
 @@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+
+/**
+ * Utility routines to process Netty ByteBuf.
+ */
+public class ByteBufUtils {
+
+   /**
+* Cumulates data from the source buffer to the target buffer.
+*
+* @param cumulationBuf The target buffer.
+* @param src The source buffer.
+* @param expectedSize The expected length to cumulate.
+*
+* @return The ByteBuf containing cumulated data or null if not enough data has been cumulated.
+*/
+   public static ByteBuf cumulate(ByteBuf cumulationBuf, ByteBuf src, int expectedSize) {
+   // If the cumulation buffer is empty and src has enough bytes,
+   // the user can read from src directly without cumulation.
+   if (cumulationBuf.readerIndex() == 0
+   && cumulationBuf.writerIndex() == 0
+   && src.readableBytes() >= expectedSize) {
+
+   return src;
+   }
+
+   int copyLength = Math.min(src.readableBytes(), expectedSize - cumulationBuf.readableBytes());
+
+   if (copyLength > 0) {
+   cumulationBuf.writeBytes(src, copyLength);
+   }
+
+   if (cumulationBuf.readableBytes() == expectedSize) {
+   return cumulationBuf;
+   }
+
+   return null;
 
 Review comment:
   I think an `Optional` wrapper is made for exactly this case: the 
`if (toDecode != null)` check would then become a `.map(...)`/`.ifPresent(...)` call.
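   For illustration, a sketch of the Optional-returning variant (the helper
   class name is made up; the buffer logic mirrors the method above):

   import java.util.Optional;

   import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;

   final class CumulationSketch {
       static Optional<ByteBuf> cumulate(ByteBuf cumulationBuf, ByteBuf src, int expectedSize) {
           // Empty cumulation buffer and enough readable bytes: hand back src directly.
           if (cumulationBuf.readerIndex() == 0
                   && cumulationBuf.writerIndex() == 0
                   && src.readableBytes() >= expectedSize) {
               return Optional.of(src);
           }
           int copyLength = Math.min(src.readableBytes(), expectedSize - cumulationBuf.readableBytes());
           if (copyLength > 0) {
               cumulationBuf.writeBytes(src, copyLength);
           }
           // Present only once the expected number of bytes has been cumulated.
           return cumulationBuf.readableBytes() == expectedSize
               ? Optional.of(cumulationBuf)
               : Optional.empty();
       }
   }

   The call site then reads e.g. cumulate(buf, src, size).ifPresent(this::decode)
   instead of a null check.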


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11276: [FLINK-16029][table-planner-blink] 
Remove register source and sink in test cases of blink planner
URL: https://github.com/apache/flink/pull/11276#issuecomment-593234780
 
 
   
   ## CI report:
   
   * 70303310b9062e83705e8d3536660784bf963cca Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151302138) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11275: [FLINK-16359][table-runtime] 
Introduce WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#issuecomment-593230700
 
 
   
   ## CI report:
   
   * 2df6f64a7d1a62836521bd02ec0ce77fe2b14efb Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151300922) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5798)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-16360) connector on hive 2.0.1 don't support type conversion from STRING to VARCHAR

2020-03-01 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-16360:


Assignee: Jingsong Lee

>  connector on hive 2.0.1 don't  support type conversion from STRING to VARCHAR
> --
>
> Key: FLINK-16360
> URL: https://issues.apache.org/jira/browse/FLINK-16360
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
> Environment: os:centos
> java: 1.8.0_92
> flink :1.10.0
> hadoop: 2.7.2
> hive:2.0.1
>  
>Reporter: wgcn
>Assignee: Jingsong Lee
>Priority: Major
> Attachments: exceptionstack
>
>
>  It threw an exception when we queried hive 2.0.1 with flink 1.10.0.
>  Exception stack:
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=50, 
> backoffTimeMS=1)
>  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
>  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
>  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484)
>  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
>  at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194)
>  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>  at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>  at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>  at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>  at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>  at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>  at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>  at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>  at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>  at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>  at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>  at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:76)
>  at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:123)
>  at org.apache.flink.orc.OrcSplitReader.<init>(OrcSplitReader.java:73)
>  at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.<init>(OrcColumnarRowSplitReader.java:55)
>  at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:96)
>  at 
> org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.<init>(HiveVectorizedOrcSplitReader.java:65)
>  at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:117)
>  at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:56)
>  at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:85)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>  at 
> 

[jira] [Commented] (FLINK-16360) connector on hive 2.0.1 don't support type conversion from STRING to VARCHAR

2020-03-01 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048816#comment-17048816
 ] 

Jingsong Lee commented on FLINK-16360:
--

Thanks [~wgcn] for reporting. Hive 2.0 ORC does not support schema evolution from 
STRING to VARCHAR.

We need to produce STRING in ORC for VarcharType(MAX_LENGTH) in Flink.
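For illustration, a minimal sketch of that mapping using the ORC TypeDescription
API (the surrounding method is hypothetical, not Flink's actual OrcSplitReaderUtil
code):

import org.apache.flink.table.types.logical.VarCharType;

import org.apache.orc.TypeDescription;

final class OrcTypeSketch {
    static TypeDescription toOrcType(VarCharType varchar) {
        // VarcharType(MAX_LENGTH) maps to STRING in ORC, so older Hive/ORC
        // versions never need a STRING -> VARCHAR schema evolution.
        if (varchar.getLength() == VarCharType.MAX_LENGTH) {
            return TypeDescription.createString();
        }
        return TypeDescription.createVarchar().withMaxLength(varchar.getLength());
    }
}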

>  connector on hive 2.0.1 don't  support type conversion from STRING to VARCHAR
> --
>
> Key: FLINK-16360
> URL: https://issues.apache.org/jira/browse/FLINK-16360
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
> Environment: os:centos
> java: 1.8.0_92
> flink :1.10.0
> hadoop: 2.7.2
> hive:2.0.1
>  
>Reporter: wgcn
>Priority: Major
> Attachments: exceptionstack
>
>
>  It threw an exception when we queried hive 2.0.1 with flink 1.10.0.
>  Exception stack:
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=50, 
> backoffTimeMS=1)
>  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
>  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
>  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
>  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484)
>  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
>  at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194)
>  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>  at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>  at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>  at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>  at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>  at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>  at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>  at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>  at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>  at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>  at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>  at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:76)
>  at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:123)
>  at org.apache.flink.orc.OrcSplitReader.<init>(OrcSplitReader.java:73)
>  at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.<init>(OrcColumnarRowSplitReader.java:55)
>  at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:96)
>  at 
> org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.<init>(HiveVectorizedOrcSplitReader.java:65)
>  at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:117)
>  at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:56)
>  at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:85)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>  at 
> 

[jira] [Assigned] (FLINK-16352) Changing HashMap to LinkedHashMap for deterministic iterations in ExpressionTest

2020-03-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16352:
---

Assignee: testfixer0

> Changing HashMap to LinkedHashMap for deterministic iterations in 
> ExpressionTest
> 
>
> Key: FLINK-16352
> URL: https://issues.apache.org/jira/browse/FLINK-16352
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: testfixer0
>Assignee: testfixer0
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test `testValueLiteralString` in `ExpressionTest` may fail if 
> `HashMap` iterates in a different order. The final variable `map` is a 
> `HashMap`. However, `HashMap` does not guarantee any specific order of 
> entries. Thus, the test can fail due to a different iteration order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16352) Changing HashMap to LinkedHashMap for deterministic iterations in ExpressionTest

2020-03-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16352:

Summary: Changing HashMap to LinkedHashMap for deterministic iterations in 
ExpressionTest  (was: Use LinkedHashMap for deterministic iterations)

> Changing HashMap to LinkedHashMap for deterministic iterations in 
> ExpressionTest
> 
>
> Key: FLINK-16352
> URL: https://issues.apache.org/jira/browse/FLINK-16352
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test `testValueLiteralString` in `ExpressionTest` may fail if 
> `HashMap` iterates in a different order. The final variable `map` is a 
> `HashMap`. However, `HashMap` does not guarantee any specific order of 
> entries. Thus, the test can fail due to a different iteration order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-16352) Use LinkedHashMap for deterministic iterations

2020-03-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-16352.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed in master(1.11.0): da7a6888cbee26f3e7ebc4957ea8d9993c0b53f8

> Use LinkedHashMap for deterministic iterations
> --
>
> Key: FLINK-16352
> URL: https://issues.apache.org/jira/browse/FLINK-16352
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test `testValueLiteralString` in `ExpressionTest` may fail if 
> `HashMap` iterates in a different order. The final variable `map` is a 
> `HashMap`. However, `HashMap` does not guarantee any specific order of 
> entries. Thus, the test can fail due to a different iteration order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16352) Use LinkedHashMap for deterministic iterations

2020-03-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16352:

Affects Version/s: (was: 1.10.0)

> Use LinkedHashMap for deterministic iterations
> --
>
> Key: FLINK-16352
> URL: https://issues.apache.org/jira/browse/FLINK-16352
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test `testValueLiteralString` in `ExpressionTest` may fail if 
> `HashMap` iterates in a different order. The final variable `map` is a 
> `HashMap`. However, `HashMap` does not guarantee any specific order of 
> entries. Thus, the test can fail due to a different iteration order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on issue #11269: [FLINK-16352][flink-table/flink-table-common]Use LinkedHashMap for deterministic iterations

2020-03-01 Thread GitBox
wuchong commented on issue #11269: 
[FLINK-16352][flink-table/flink-table-common]Use LinkedHashMap for 
deterministic iterations
URL: https://github.com/apache/flink/pull/11269#issuecomment-593237977
 
 
   LGTM. Merged.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong merged pull request #11269: [FLINK-16352][flink-table/flink-table-common]Use LinkedHashMap for deterministic iterations

2020-03-01 Thread GitBox
wuchong merged pull request #11269: 
[FLINK-16352][flink-table/flink-table-common]Use LinkedHashMap for 
deterministic iterations
URL: https://github.com/apache/flink/pull/11269
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16352) Use LinkedHashMap for deterministic iterations

2020-03-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16352:

Component/s: (was: Table SQL / Client)
 Table SQL / API

> Use LinkedHashMap for deterministic iterations
> --
>
> Key: FLINK-16352
> URL: https://issues.apache.org/jira/browse/FLINK-16352
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test `testValueLiteralString` in `ExpressionTest` may fail if 
> `HashMap` iterates in a different order. The final variable `map` is a 
> `HashMap`. However, `HashMap` does not guarantee any specific order of 
> entries. Thus, the test can fail due to a different iteration order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16360) connector on hive 2.0.1 don't support type conversion from STRING to VARCHAR

2020-03-01 Thread wgcn (Jira)
wgcn created FLINK-16360:


 Summary:  connector on hive 2.0.1 don't  support type conversion 
from STRING to VARCHAR
 Key: FLINK-16360
 URL: https://issues.apache.org/jira/browse/FLINK-16360
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.10.0
 Environment: os:centos
java: 1.8.0_92

flink :1.10.0

hadoop: 2.7.2

hive:2.0.1

 
Reporter: wgcn
 Attachments: exceptionstack

 It threw an exception when we queried hive 2.0.1 with flink 1.10.0.

 Exception stack:

org.apache.flink.runtime.JobException: Recovery is suppressed by 
FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=50, 
backoffTimeMS=1)
 at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
 at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
 at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484)
 at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
 at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194)
 at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
 at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
 at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
 at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
 at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
 at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
 at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
 at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
 at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
 at akka.actor.ActorCell.invoke(ActorCell.scala:561)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
 at akka.dispatch.Mailbox.run(Mailbox.scala:225)
 at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
 at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
 at 
org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:76)
 at 
org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:123)
 at org.apache.flink.orc.OrcSplitReader.<init>(OrcSplitReader.java:73)
 at 
org.apache.flink.orc.OrcColumnarRowSplitReader.<init>(OrcColumnarRowSplitReader.java:55)
 at 
org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:96)
 at 
org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.<init>(HiveVectorizedOrcSplitReader.java:65)
 at 
org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:117)
 at 
org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:56)
 at 
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:85)
 at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
 at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
 at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.commons.lang3.reflect.MethodUtils.invokeExactMethod(MethodUtils.java:204)
 at 

[jira] [Commented] (FLINK-14991) Export `FLINK_HOME` environment variable to all the entrypoint

2020-03-01 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048811#comment-17048811
 ] 

Guowei Ma commented on FLINK-14991:
---

[~zjffdu] sorry for the late response. 

[~azagrebin] I agree. I think we should be compatible with the previous API.

 

> Export `FLINK_HOME` environment variable to all the entrypoint
> --
>
> Key: FLINK-14991
> URL: https://issues.apache.org/jira/browse/FLINK-14991
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client, Deployment / Scripts
>Reporter: Guowei Ma
>Priority: Minor
>
>  Currently, Flink depends on 6 types of files: configuration files, system 
> jar files, script files, library jar files, plugin jar files, and user jar 
> files. These files are in different directories. 
> Flink exports 5 environment variables to locate these different file types: 
> `FLINK_CONF_DIR`, `FLINK_LIB_DIR`, `FLINK_OPT_DIR`, `FLINK_PLUGIN_DIR`, `FLINK_BIN_DIR`.
> Exporting an environment variable for every type of file is not a good style.
> So this jira proposes to export the `FLINK_HOME` environment variable to all 
> the entrypoints, deriving the directory of each file type from the 
> `FLINK_HOME` environment variable, with every file type having a fixed 
> directory name.
>  This also has the benefit of implying that the directory structure is the 
> same in all situations.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16351) Use LinkedHashMap for deterministic iterations

2020-03-01 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048810#comment-17048810
 ] 

Jark Wu commented on FLINK-16351:
-

Hi [~testfixer], we shouldn't change {{MapBundleOperatorTest}} to use 
LinkedHashMap, because it will affect performance, and it's not necessary to 
output an insert-order result for {{MapBundleOperatorTest}}.

You can just update the {{TestMapBundleFunction}} to collect output into 
{{HashMap}} and verify the {{HashMap}}.
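For illustration, a sketch of such a verification (it assumes the outputs look
like "k1=v1,v2", as in the assertion quoted in the issue description; the helper
below is made up):

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// outputs stands in for func.getOutputs() from the existing test.
static void verifyOutputsIgnoringOrder(List<String> outputs) {
    Map<String, String> expected = new HashMap<>();
    expected.put("k1", "v1,v2");
    expected.put("k2", "v3");

    Map<String, String> actual = new HashMap<>();
    for (String output : outputs) {
        String[] kv = output.split("=", 2);
        actual.put(kv[0], kv[1]);
    }

    // Map equality ignores iteration order, so the assertion no longer depends
    // on the order in which the bundle's HashMap is iterated.
    assertEquals(expected, actual);
}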

> Use LinkedHashMap for deterministic iterations
> --
>
> Key: FLINK-16351
> URL: https://issues.apache.org/jira/browse/FLINK-16351
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test `testSimple` in `MapBundleOperatorTest` may fail if `HashMap` 
> iterates in a different order. Specifically, 
> `assertThat(Arrays.asList("k1=v1,v2", "k2=v3"), is(func.getOutputs()))` may 
> fail. `testSimple` depends on `open` in class `AbstractMapBundleOperator`. 
> The field `bundle` is a `HashMap`. However, `HashMap` does not guarantee any 
> specific order of entries. Thus, the test can fail due to a different 
> iteration order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16351) Use LinkedHashMap for deterministic iterations

2020-03-01 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048810#comment-17048810
 ] 

Jark Wu edited comment on FLINK-16351 at 3/2/20 6:04 AM:
-

Hi [~testfixer], we shouldn't change {{MapBundleOperator}} to use 
LinkedHashMap, because it will affect performance, and it's not necessary to 
output an insert-order result for {{MapBundleOperator}}.

You can just update the {{TestMapBundleFunction}} to collect output into 
{{HashMap}} and verify the {{HashMap}}.


was (Author: jark):
Hi [~testfixer], we shouldn't change {{MapBundleOperatorTest}} to use 
LinkedHashMap, because it will affect performance, and it's not necessary to 
output an insert-order result for {{MapBundleOperatorTest}}.

You can just update the {{TestMapBundleFunction}} to collect output into 
{{HashMap}} and verify the {{HashMap}}.

> Use LinkedHashMap for deterministic iterations
> --
>
> Key: FLINK-16351
> URL: https://issues.apache.org/jira/browse/FLINK-16351
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: testfixer0
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test `testSimple` in `MapBundleOperatorTest` may fail if `HashMap` 
> iterates in a different order. Specifically, 
> `assertThat(Arrays.asList("k1=v1,v2", "k2=v3"), is(func.getOutputs()))` may 
> fail. `testSimple` depends on `open` in class `AbstractMapBundleOperator`. 
> The field `bundle` is a `HashMap`. However, `HashMap` does not guarantee any 
> specific order of entries. Thus, the test can fail due to a different 
> iteration order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner

2020-03-01 Thread GitBox
flinkbot commented on issue #11276: [FLINK-16029][table-planner-blink] Remove 
register source and sink in test cases of blink planner
URL: https://github.com/apache/flink/pull/11276#issuecomment-593234780
 
 
   
   ## CI report:
   
   * 70303310b9062e83705e8d3536660784bf963cca UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15911) Flink does not work over NAT

2020-03-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048809#comment-17048809
 ] 

Xintong Song commented on FLINK-15911:
--

Some updates on this ticket.

I've decoupled bind-address/bind-port from the address/port of JM/TM RPC 
services, verified successfully with a docker-based e2e test with the default 
parallelism 1. But I ran into problems when increasing the parallelism to have 
multiple TMs, because TMs failed to find each other's Netty shuffle 
address/port.

I talked to [~zjwang] offline. He confirmed that Netty shuffle service uses TM 
address in two ways:
 * The address passed into NettyShuffleEnvironment is used for binding to the 
local address. It should use the bind-address.
 * The address wrapped in TaskManagerLocation will be sent to JobMaster, which 
will be used by tasks for accessing the TM's shuffle service.

I will continue trying to resolve the address/port problem of the Netty shuffle service.
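To illustrate the bind/advertised address split in general terms (this is not
Flink code; the host name and ports below are made up):

import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class NatAwareServerSketch {
    public static void main(String[] args) throws Exception {
        // Address/port the process actually binds to, inside the container.
        InetSocketAddress bindAddress = new InetSocketAddress("0.0.0.0", 6121);
        // Address/port advertised to remote peers, outside the NAT.
        InetSocketAddress advertisedAddress = new InetSocketAddress("host.example.com", 36121);

        ServerSocketChannel server = ServerSocketChannel.open().bind(bindAddress);
        System.out.println("bound to " + server.getLocalAddress()
            + ", advertising " + advertisedAddress);
    }
}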

In addition, the address/port and bind-address/bind-port of the following 
services may also need to separated. I would like to exclude them from the 
scope of this ticket, to keep a minimum set of changes in this ticket for 
getting Flink work over NAT.
 * Blob Server on JM. This is only needed if we want to submit jobs from 
outside of NAT to a Flink session cluster whose JM runs behind NAT. I will try 
to address this in FLINK-15154.
 * KvStateService on TM. This is only used for queryable state, and I'm not 
sure how many use cases we have for it. Also, I'm not familiar with how the 
KvStateService works. If we want to get it working over NAT, I would need help 
from someone familiar with it.

> Flink does not work over NAT
> 
>
> Key: FLINK-15911
> URL: https://issues.apache.org/jira/browse/FLINK-15911
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: usability
> Fix For: 1.11.0
>
>
> Currently, it is not possible to run Flink over network address translation. 
> The problem is that the Flink processes do not allow to specify separate bind 
> and external ports. Moreover, the {{TaskManager}} tries to resolve the given 
> {{taskmanager.host}} which might not be resolvable. This breaks NAT or docker 
> setups where the external address is not resolvable from within the 
> container/internal network.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11275: [FLINK-16359][table-runtime] 
Introduce WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#issuecomment-593230700
 
 
   
   ## CI report:
   
   * 2df6f64a7d1a62836521bd02ec0ce77fe2b14efb Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151300922) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5798)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11168: [FLINK-16140] [docs-zh] Translate 
Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#issuecomment-589576910
 
 
   
   ## CI report:
   
   * 7546b4bef354ec3acb52245f867c3338107d0995 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151296073) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5794)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11248: [FLINK-16299] Release containers recovered from previous attempt in w…

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11248: [FLINK-16299] Release containers 
recovered from previous attempt in w…
URL: https://github.com/apache/flink/pull/11248#issuecomment-592302068
 
 
   
   ## CI report:
   
   * 9742a2c085266460d7b2bcde20ae46de8d8c72d5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151295147) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5793)
 
   * b61c045eddf32b77b81238ed06cbd961351f2e3b Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151300896) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5797)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16345) Computed column can not refer time attribute column

2020-03-01 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048808#comment-17048808
 ] 

Jark Wu commented on FLINK-16345:
-

{{standard_ts}} is not a rowtime attribute; I think the bug is that somewhere 
{{standard_ts}} is regarded as a rowtime attribute. 

> Computed column can not refer time attribute column 
> 
>
> Key: FLINK-16345
> URL: https://issues.apache.org/jira/browse/FLINK-16345
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> If a computed column refers to a time attribute column, the computed column 
> will lose the time attribute and cause validation to fail.
> {code:java}
> CREATE TABLE orders (
>   order_id STRING,
>   order_time TIMESTAMP(3),
>   amount DOUBLE,
>   amount_kg as amount * 1000,
>   // cannot select the computed column standard_ts, which is derived from 
> column order_time that is used as WATERMARK
>   standard_ts as order_time + INTERVAL '8' HOUR,
>   WATERMARK FOR order_time AS order_time
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = '0.10',
>   'connector.topic' = 'flink_orders',
>   'connector.properties.zookeeper.connect' = 'localhost:2181',
>   'connector.properties.bootstrap.servers' = 'localhost:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'json',
>   'format.derive-schema' = 'true'
> );
> {code}
> The query `select amount_kg from orders` runs normally, while 
> the query `select standard_ts from orders` throws a validation exception 
> with the following message:
> {noformat}
> [ERROR] Could not execute SQL statement. Reason:
>  java.lang.AssertionError: Conversion to relational algebra failed to 
> preserve datatypes:
>  validated type:
>  RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" order_id, TIME 
> ATTRIBUTE(ROWTIME) order_time, DOUBLE amount, DOUBLE amount_kg, TIMESTAMP(3) 
> ts) NOT NULL
>  converted type:
>  RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" order_id, TIME 
> ATTRIBUTE(ROWTIME) order_time, DOUBLE amount, DOUBLE amount_kg, TIME 
> ATTRIBUTE(ROWTIME) ts) NOT NULL
>  rel:
>  LogicalProject(order_id=[$0], order_time=[$1], amount=[$2], amount_kg=[$3], 
> ts=[$4])
>  LogicalWatermarkAssigner(rowtime=[order_time], watermark=[$1])
>  LogicalProject(order_id=[$0], order_time=[$1], amount=[$2], amount_kg=[*($2, 
> 1000)], ts=[+($1, 2880:INTERVAL HOUR)])
>  LogicalTableScan(table=[[default_catalog, default_database, orders, source: 
> [Kafka010TableSource(order_id, order_time, amount)]]])
>  {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] carp84 commented on a change in pull request #10768: [FLINK-15012][checkpoint] Introduce shutdown to CheckpointStorageCoordinatorView to clean up checkpoint directory.

2020-03-01 Thread GitBox
carp84 commented on a change in pull request #10768: [FLINK-15012][checkpoint] 
Introduce shutdown to CheckpointStorageCoordinatorView to clean up checkpoint 
directory.
URL: https://github.com/apache/flink/pull/10768#discussion_r386172856
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageCoordinatorView.java
 ##
 @@ -124,5 +117,5 @@ CheckpointStorageLocation initializeLocationForSavepoint(
 *
 * @throws IOException Thrown if the storage cannot be shut down well 
due to an I/O exception.
 */
-   void shutDown(JobStatus jobStatus) throws IOException;
+   void shutDown(JobStatus jobStatus, CheckpointProperties checkpointProperties) throws IOException;
 
 Review comment:
   Please add javadoc for the newly added parameter.
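   For example, one possible wording (a suggestion only, not the PR's final text):

   /**
    * Shuts down the checkpoint storage.
    *
    * @param jobStatus The terminal status of the job.
    * @param checkpointProperties The properties of the latest checkpoint, used to
    *        decide whether the checkpoint directory can be cleaned up on shutdown.
    *
    * @throws IOException Thrown if the storage cannot be shut down well due to an I/O exception.
    */
   void shutDown(JobStatus jobStatus, CheckpointProperties checkpointProperties) throws IOException;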


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16262) Class loader problem with FlinkKafkaProducer.Semantic.EXACTLY_ONCE and usrlib directory

2020-03-01 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048804#comment-17048804
 ] 

Guowei Ma commented on FLINK-16262:
---

Thanks for [~gjy]'s explanation. This also reminds me of one thing: currently, in 
the Yarn/Mesos per-job mode the user class loader is not enabled by default. I think 
maybe we should keep the same behavior in per-job clusters. For example, we 
could provide an argument --with-usrlib to build.sh, and copy the user jar to the 
usrlib/ directory only if the user passes this parameter to build.sh.

What do you think [~gjy]?

> Class loader problem with FlinkKafkaProducer.Semantic.EXACTLY_ONCE and usrlib 
> directory
> ---
>
> Key: FLINK-16262
> URL: https://issues.apache.org/jira/browse/FLINK-16262
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
> Environment: openjdk:11-jre with a slightly modified Flink 1.10.0 
> build (nothing changed regarding Kafka and/or class loading).
>Reporter: Jürgen Kreileder
>Assignee: Guowei Ma
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We're using Docker images modeled after 
> [https://github.com/apache/flink/blob/master/flink-container/docker/Dockerfile]
>  (using Java 11)
> When I try to switch a Kafka producer from AT_LEAST_ONCE to EXACTLY_ONCE, the 
> taskmanager startup fails with:
> {code:java}
> 2020-02-24 18:25:16.389 INFO  o.a.f.r.t.Task                           Create 
> Case Fixer -> Sink: Findings local-krei04-kba-digitalweb-uc1 (1/1) 
> (72f7764c6f6c614e5355562ed3d27209) switched from RUNNING to FAILED.
> org.apache.kafka.common.config.ConfigException: Invalid value 
> org.apache.kafka.common.serialization.ByteArraySerializer for configuration 
> key.serializer: Class 
> org.apache.kafka.common.serialization.ByteArraySerializer could not be found.
>  at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:718)
>  at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:471)
>  at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464)
>  at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
>  at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
>  at 
> org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:396)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:326)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer.<init>(FlinkKafkaInternalProducer.java:76)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.lambda$abortTransactions$2(FlinkKafkaProducer.java:1107)
>  at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(Unknown 
> Source)
>  at java.base/java.util.HashMap$KeySpliterator.forEachRemaining(Unknown 
> Source)
>  at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
>  at java.base/java.util.stream.ForEachOps$ForEachTask.compute(Unknown Source)
>  at java.base/java.util.concurrent.CountedCompleter.exec(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
>  at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown 
> Source)
>  at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown 
> Source){code}
> This looks like a class loading issue: If I copy our JAR to FLINK_LIB_DIR 
> instead of FLINK_USR_LIB_DIR, everything works fine.
> (AT_LEAST_ONCE producers work fine with the JAR in FLINK_USR_LIB_DIR)
>  
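A plausible reading of the stack trace: abortTransactions() uses a parallel stream, so the producer is constructed on ForkJoinPool common-pool threads whose context class loader does not include jars from usrlib/, which is why ByteArraySerializer cannot be resolved. The following is only a minimal sketch of classloader-sensitive construction under that assumption, not the actual Flink fix:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;

public final class ContextClassLoaderSketch {

	private ContextClassLoaderSketch() {
	}

	// Construct the producer with the class loader that loaded this class (the
	// user-code class loader when the connector ships in usrlib/), so Kafka can
	// resolve e.g. ByteArraySerializer even on ForkJoinPool worker threads.
	public static KafkaProducer<byte[], byte[]> createProducer(Properties props) {
		Thread thread = Thread.currentThread();
		ClassLoader original = thread.getContextClassLoader();
		try {
			thread.setContextClassLoader(ContextClassLoaderSketch.class.getClassLoader());
			return new KafkaProducer<>(props);
		} finally {
			thread.setContextClassLoader(original);
		}
	}
}
{code}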



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16345) Computed column can not refer time attribute column

2020-03-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16345:

Fix Version/s: 1.10.1

> Computed column can not refer time attribute column 
> 
>
> Key: FLINK-16345
> URL: https://issues.apache.org/jira/browse/FLINK-16345
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.10.1
>
>
> If a computed column refers to a time attribute column, the computed column will
>  lose the time attribute and cause validation to fail.
> {code:java}
> CREATE TABLE orders (
>   order_id STRING,
>   order_time TIMESTAMP(3),
>   amount DOUBLE,
>   amount_kg as amount * 1000,
>   -- cannot select the computed column standard_ts, which is derived from the
>   -- column order_time that is used as the WATERMARK
>   standard_ts as order_time + INTERVAL '8' HOUR,
>   WATERMARK FOR order_time AS order_time
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = '0.10',
>   'connector.topic' = 'flink_orders',
>   'connector.properties.zookeeper.connect' = 'localhost:2181',
>   'connector.properties.bootstrap.servers' = 'localhost:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'json',
>   'format.derive-schema' = 'true'
> );
> {code}
> The query `select amount_kg from orders` runs normally, while the query
> `select standard_ts from orders` throws a validation exception with the
> following message:
> {noformat}
> [ERROR] Could not execute SQL statement. Reason:
>  java.lang.AssertionError: Conversion to relational algebra failed to 
> preserve datatypes:
>  validated type:
>  RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" order_id, TIME 
> ATTRIBUTE(ROWTIME) order_time, DOUBLE amount, DOUBLE amount_kg, TIMESTAMP(3) 
> ts) NOT NULL
>  converted type:
>  RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" order_id, TIME 
> ATTRIBUTE(ROWTIME) order_time, DOUBLE amount, DOUBLE amount_kg, TIME 
> ATTRIBUTE(ROWTIME) ts) NOT NULL
>  rel:
>  LogicalProject(order_id=[$0], order_time=[$1], amount=[$2], amount_kg=[$3], 
> ts=[$4])
>  LogicalWatermarkAssigner(rowtime=[order_time], watermark=[$1])
>  LogicalProject(order_id=[$0], order_time=[$1], amount=[$2], amount_kg=[*($2, 
> 1000)], ts=[+($1, 2880:INTERVAL HOUR)])
>  LogicalTableScan(table=[[default_catalog, default_database, orders, source: 
> [Kafka010TableSource(order_id, order_time, amount)]]])
>  {noformat}
>  
>  
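One untested workaround sketch (not from the report): materialize the rowtime attribute with an explicit CAST, so that the computed column is derived as a plain TIMESTAMP(3) instead of inheriting TIME ATTRIBUTE(ROWTIME):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class RowtimeCastSketch {
	public static void main(String[] args) {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		StreamTableEnvironment tEnv = StreamTableEnvironment.create(
			env, EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
		// CAST materializes the time attribute, so standard_ts no longer carries
		// TIME ATTRIBUTE(ROWTIME) and the validated/converted types should agree.
		tEnv.sqlUpdate(
			"CREATE TABLE orders (" +
			"  order_id STRING," +
			"  order_time TIMESTAMP(3)," +
			"  amount DOUBLE," +
			"  amount_kg AS amount * 1000," +
			"  standard_ts AS CAST(order_time + INTERVAL '8' HOUR AS TIMESTAMP(3))," +
			"  WATERMARK FOR order_time AS order_time" +
			") WITH (" +
			"  'connector.type' = 'kafka'," +
			"  'connector.version' = '0.10'," +
			"  'connector.topic' = 'flink_orders'," +
			"  'connector.properties.zookeeper.connect' = 'localhost:2181'," +
			"  'connector.properties.bootstrap.servers' = 'localhost:9092'," +
			"  'connector.properties.group.id' = 'testGroup'," +
			"  'connector.startup-mode' = 'earliest-offset'," +
			"  'format.type' = 'json'," +
			"  'format.derive-schema' = 'true'" +
			")");
	}
}
{code}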



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16345) Computed column can not refer time attribute column

2020-03-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16345:

Fix Version/s: 1.11.0

> Computed column can not refer time attribute column 
> 
>
> Key: FLINK-16345
> URL: https://issues.apache.org/jira/browse/FLINK-16345
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> If a computed column refers to a time attribute column, the computed column will
>  lose the time attribute and cause validation to fail.
> {code:java}
> CREATE TABLE orders (
>   order_id STRING,
>   order_time TIMESTAMP(3),
>   amount DOUBLE,
>   amount_kg as amount * 1000,
>   -- cannot select the computed column standard_ts, which is derived from the
>   -- column order_time that is used as the WATERMARK
>   standard_ts as order_time + INTERVAL '8' HOUR,
>   WATERMARK FOR order_time AS order_time
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = '0.10',
>   'connector.topic' = 'flink_orders',
>   'connector.properties.zookeeper.connect' = 'localhost:2181',
>   'connector.properties.bootstrap.servers' = 'localhost:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'json',
>   'format.derive-schema' = 'true'
> );
> {code}
> The query `select amount_kg from orders` runs normally, while the query
> `select standard_ts from orders` throws a validation exception with the
> following message:
> {noformat}
> [ERROR] Could not execute SQL statement. Reason:
>  java.lang.AssertionError: Conversion to relational algebra failed to 
> preserve datatypes:
>  validated type:
>  RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" order_id, TIME 
> ATTRIBUTE(ROWTIME) order_time, DOUBLE amount, DOUBLE amount_kg, TIMESTAMP(3) 
> ts) NOT NULL
>  converted type:
>  RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" order_id, TIME 
> ATTRIBUTE(ROWTIME) order_time, DOUBLE amount, DOUBLE amount_kg, TIME 
> ATTRIBUTE(ROWTIME) ts) NOT NULL
>  rel:
>  LogicalProject(order_id=[$0], order_time=[$1], amount=[$2], amount_kg=[$3], 
> ts=[$4])
>  LogicalWatermarkAssigner(rowtime=[order_time], watermark=[$1])
>  LogicalProject(order_id=[$0], order_time=[$1], amount=[$2], amount_kg=[*($2, 
> 1000)], ts=[+($1, 2880:INTERVAL HOUR)])
>  LogicalTableScan(table=[[default_catalog, default_database, orders, source: 
> [Kafka010TableSource(order_id, order_time, amount)]]])
>  {noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner

2020-03-01 Thread GitBox
flinkbot commented on issue #11276: [FLINK-16029][table-planner-blink] Remove 
register source and sink in test cases of blink planner
URL: https://github.com/apache/flink/pull/11276#issuecomment-593232346
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 70303310b9062e83705e8d3536660784bf963cca (Mon Mar 02 
05:50:39 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-16029).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete opened a new pull request #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner

2020-03-01 Thread GitBox
docete opened a new pull request #11276: [FLINK-16029][table-planner-blink] 
Remove register source and sink in test cases of blink planner
URL: https://github.com/apache/flink/pull/11276
 
 
   ## What is the purpose of the change
   Many test cases of the planner use TableEnvironment.registerTableSource() and 
registerTableSink(), which should be avoided. We want to refactor these cases via 
TableEnvironment.connect() (see the sketch after this list). This PR 
   removes most of these calls, except in two situations:
   1) table sources that implement DefinedRowtimeAttributes or 
DefinedProctimeAttribute, which will be handled by 
https://issues.apache.org/jira/browse/FLINK-16160
   2) TableTestBase#addTableSource, which will be handled by 
https://issues.apache.org/jira/browse/FLINK-16117
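   
   A minimal sketch of the connect()-based registration this refactoring moves toward (the file-system connector, schema, and path below are illustrative assumptions, not code from this PR):
   
   ```java
   import org.apache.flink.table.api.DataTypes;
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   import org.apache.flink.table.api.TableSchema;
   import org.apache.flink.table.descriptors.FileSystem;
   import org.apache.flink.table.descriptors.OldCsv;
   import org.apache.flink.table.descriptors.Schema;
   
   public class ConnectInsteadOfRegister {
       public static void main(String[] args) {
           TableEnvironment tEnv = TableEnvironment.create(
               EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());
           TableSchema schema = TableSchema.builder()
               .field("order_id", DataTypes.STRING())
               .field("amount", DataTypes.DOUBLE())
               .build();
           // Instead of tEnv.registerTableSource("orders", someTestTableSource):
           tEnv.connect(new FileSystem().path("/tmp/orders.csv")) // illustrative path
               .withFormat(new OldCsv().schema(schema))
               .withSchema(new Schema().schema(schema))
               .createTemporaryTable("orders");
       }
   }
   ```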
   
   ## Brief change log
   
   - b53f6b1 Port CustomConnectorDescriptor to flink-table-common module
   - f70d4bc-7030331 Replace TableEnvironment.registerTableSource/Sink() by 
TableEnvironment.connect()
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16029) Remove register source and sink in test cases of blink planner

2020-03-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16029:
---
Labels: pull-request-available  (was: )

> Remove register source and sink in test cases of blink planner
> --
>
> Key: FLINK-16029
> URL: https://issues.apache.org/jira/browse/FLINK-16029
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Tests
>Reporter: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
>
> Many test cases of the planner use TableEnvironment.registerTableSource() and 
> registerTableSink(), which should be avoided. We want to refactor these cases via 
> TableEnvironment.connect().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-01 Thread GitBox
flinkbot commented on issue #11275: [FLINK-16359][table-runtime] Introduce 
WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#issuecomment-593230700
 
 
   
   ## CI report:
   
   * 2df6f64a7d1a62836521bd02ec0ce77fe2b14efb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #11273: [FLINK-12814][sql-client] Support tableau result format

2020-03-01 Thread GitBox
KurtYoung commented on a change in pull request #11273: 
[FLINK-12814][sql-client] Support tableau result format
URL: https://github.com/apache/flink/pull/11273#discussion_r386205852
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java
 ##
 @@ -0,0 +1,378 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.table.api.TableColumn;
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.ResultDescriptor;
+import org.apache.flink.table.client.gateway.SqlExecutionException;
+import org.apache.flink.table.client.gateway.TypedResult;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LocalZonedTimestampType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.SmallIntType;
+import org.apache.flink.table.types.logical.TimeType;
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.logical.TinyIntType;
+import org.apache.flink.types.Row;
+
+import org.apache.commons.lang3.StringUtils;
+import org.jline.terminal.Terminal;
+
+import java.io.Closeable;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.Stream;
+
+import static org.apache.flink.table.client.cli.CliUtils.rowToString;
+
+/**
+ * Print result in tableau mode.
+ */
+public class CliTableauResultView implements Closeable {
+
+   private static final int NULL_COLUMN_WIDTH = CliStrings.NULL_COLUMN.length();
+   private static final int MAX_COLUMN_WIDTH = 30;
+   private static final int DEFAULT_COLUMN_WIDTH = 20;
+   private static final String COLUMN_TRUNCATED_FLAG = "...";
+   private static final String CHANGEFLAG_COLUMN_NAME = "+/-";
+
+   private final Terminal terminal;
+   private final Executor sqlExecutor;
+   private final String sessionId;
+   private final ResultDescriptor resultDescriptor;
+   private final ExecutorService displayResultExecutorService;
+
+   private volatile boolean cleanUpQuery;
+
+   public CliTableauResultView(
+   final Terminal terminal,
+   final Executor sqlExecutor,
+   final String sessionId,
+   final ResultDescriptor resultDescriptor) {
+   this.terminal = terminal;
+   this.sqlExecutor = sqlExecutor;
+   this.sessionId = sessionId;
+   this.resultDescriptor = resultDescriptor;
+   this.displayResultExecutorService = Executors.newSingleThreadExecutor();
+   }
+
+   public void displayStreamResults() throws SqlExecutionException {
+   final AtomicInteger receivedRowCount = new AtomicInteger(0);
+   Future<?> resultFuture = displayResultExecutorService.submit(() -> {
+   printStreamResults(receivedRowCount);
+   });
+
+   // capture CTRL-C
+   terminal.handle(Terminal.Signal.INT, signal -> {
+   resultFuture.cancel(true);
+   });
+
+   cleanUpQuery = true;
+   try {
+   resultFuture.get();
+   cleanUpQuery = false; // job finished successfully
+   } catch (CancellationException e) {
+   terminal.writer().println("Query terminated, received a total of " + receivedRowCount.get() + " rows");
+   terminal.flush();
+   } catch (ExecutionException e) {
+   if (e.getCause() instanceof 

[GitHub] [flink] flinkbot edited a comment on issue #11248: [FLINK-16299] Release containers recovered from previous attempt in w…

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11248: [FLINK-16299] Release containers 
recovered from previous attempt in w…
URL: https://github.com/apache/flink/pull/11248#issuecomment-592302068
 
 
   
   ## CI report:
   
   * 9742a2c085266460d7b2bcde20ae46de8d8c72d5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151295147) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5793)
 
   * b61c045eddf32b77b81238ed06cbd961351f2e3b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11242: [FLINK-16007][table-planner][table-planner-blink][python] Add PythonCorrelateSplitRule to push down the Java Calls contained in Python Corre

2020-03-01 Thread GitBox
flinkbot edited a comment on issue #11242: 
[FLINK-16007][table-planner][table-planner-blink][python] Add 
PythonCorrelateSplitRule to push down the Java Calls contained in Python 
Correlate node
URL: https://github.com/apache/flink/pull/11242#issuecomment-591981740
 
 
   
   ## CI report:
   
   * 2de60095d8046648dd942cf0915a18c8b4a3a854 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150841822) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5694)
 
   * ad21752d6e9ac8c6d2dc1c0d2d824f86d77d69c0 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151299714) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5796)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KarmaGYZ commented on issue #11248: [FLINK-16299] Release containers recovered from previous attempt in w…

2020-03-01 Thread GitBox
KarmaGYZ commented on issue #11248: [FLINK-16299] Release containers recovered 
from previous attempt in w…
URL: https://github.com/apache/flink/pull/11248#issuecomment-593230211
 
 
   FYI: Added an empty `increaseContainerResourceAsync` method in 
`TestingNMClientAsync` in case Hadoop 2.8.0+ is used (sketch below).
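   A minimal sketch of what that stub in `TestingNMClientAsync` might look like, assuming the `increaseContainerResourceAsync(Container)` signature that Hadoop 2.8.0 added to `NMClientAsync`:
   
   ```java
   @Override
   public void increaseContainerResourceAsync(Container container) {
       // Intentionally empty: the testing stub only needs to satisfy the
       // abstract method introduced in Hadoop 2.8.0.
   }
   ```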


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-01 Thread GitBox
flinkbot commented on issue #11275: [FLINK-16359][table-runtime] Introduce 
WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#issuecomment-593229703
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2df6f64a7d1a62836521bd02ec0ce77fe2b14efb (Mon Mar 02 
05:40:22 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16359) Introduce WritableVectors for abstract writing

2020-03-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16359:
---
Labels: pull-request-available  (was: )

> Introduce WritableVectors for abstract writing
> --
>
> Key: FLINK-16359
> URL: https://issues.apache.org/jira/browse/FLINK-16359
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> In FLINK-11899 , we need write vectors from parquet input streams.
> We need abstract vector writing, in future, we can provide OffHeapVectors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

