[jira] [Updated] (FLINK-15660) Redundant AllocationID verification for allocateSlot in TaskSlotTable

2020-01-18 Thread xiajun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiajun updated FLINK-15660:
---
Description: 
!image-2020-01-19-15-46-42-664.png!

 

In TaskSlotTable::allocateSlot, we first check whether the allocationId 
already exists and refuse the allocation if it does; this check was 
introduced by https://issues.apache.org/jira/browse/FLINK-14589. But in 
https://issues.apache.org/jira/browse/FLINK-14189, an existing allocationId 
is treated as valid, which contradicts the first check.

 The relevant code is here:

[https://github.com/apache/flink/blob/310452e800355f0dcc4bc9dd26e9cecba263f3d6/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotTable.java#L261]
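As a rough illustration of the reported contradiction, here is a minimal, hypothetical sketch (class, field, and method names are invented; the actual logic lives in TaskSlotTable.allocateSlot at the link above):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified sketch of the two conflicting checks described
// above; it does not reproduce the real TaskSlotTable implementation.
class TaskSlotTableSketch {

    // allocationId -> slot index
    private final Map<String, Integer> allocatedSlots = new HashMap<>();

    boolean allocateSlot(int index, String allocationId) {
        // Check in the spirit of FLINK-14589: an already-known allocationId
        // causes the allocation to be refused.
        if (allocatedSlots.containsKey(allocationId)) {
            return false;
        }
        // Check in the spirit of FLINK-14189: re-allocating the same
        // allocationId on the same slot is treated as valid. With the check
        // above in place, this branch can never be taken, which is the
        // contradiction the issue describes.
        if (allocatedSlots.containsKey(allocationId)
                && allocatedSlots.get(allocationId) == index) {
            return true;
        }
        allocatedSlots.put(allocationId, index);
        return true;
    }
}
```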

  was:
!image-2020-01-19-15-46-42-664.png!

 

In function TaskSlotTable::allocateSlot, first we will check whether 
allocationId is exist, when exist we will refused this allocation, this was 
introduced by https://issues.apache.org/jira/browse/FLINK-14589.

But in https://issues.apache.org/jira/browse/FLINK-14189, 

 

https://github.com/apache/flink/blob/310452e800355f0dcc4bc9dd26e9cecba263f3d6/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotTable.java#L261


> Redundant AllocationID verification for allocateSlot in TaskSlotTable
> -
>
> Key: FLINK-15660
> URL: https://issues.apache.org/jira/browse/FLINK-15660
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: xiajun
>Priority: Major
> Attachments: image-2020-01-19-15-46-42-664.png
>
>
> !image-2020-01-19-15-46-42-664.png!
>  
> In TaskSlotTable::allocateSlot, we first check whether the allocationId 
> already exists and refuse the allocation if it does; this check was 
> introduced by https://issues.apache.org/jira/browse/FLINK-14589. But in 
> https://issues.apache.org/jira/browse/FLINK-14189, an existing allocationId 
> is treated as valid, which contradicts the first check.
>  The relevant code is here:
> [https://github.com/apache/flink/blob/310452e800355f0dcc4bc9dd26e9cecba263f3d6/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotTable.java#L261]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10897: [FLINK-15657][python][doc] Fix the python table api doc link in Python API tutorial

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10897: [FLINK-15657][python][doc] Fix the 
python table api doc link in Python API tutorial
URL: https://github.com/apache/flink/pull/10897#issuecomment-575975169
 
 
   
   ## CI report:
   
   * a7ecf46073a9f7c25258848b119462bb900a6631 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/145098739) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4467)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client 
requires both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#issuecomment-575493814
 
 
   
   ## CI report:
   
   * 6ce81e21780883796248af5c87d2ec5f1dc5e0ef UNKNOWN
   * 47ab573d50ed82018632ed1106b9deb79e83d820 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144866957) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4419)
 
   * 82ad91185396e2ff3b6be078e7e933d0acbfabe4 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145097683) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4464)
 
   
   


[GitHub] [flink] ifndef-SleePy commented on a change in pull request #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-18 Thread GitBox
ifndef-SleePy commented on a change in pull request #10332: 
[FLINK-13905][checkpointing] Separate checkpoint triggering into several 
asynchronous stages
URL: https://github.com/apache/flink/pull/10332#discussion_r368273421
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTriggeringTest.java
 ##
 @@ -283,31 +300,494 @@ public void testStopPeriodicScheduler() throws 
Exception {
failureManager);
 
// Periodic
+   final CompletableFuture 
onCompletionPromise1 = coord.triggerCheckpoint(
+   System.currentTimeMillis(),
+   
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
+   null,
+   true,
+   false);
+   manuallyTriggeredScheduledExecutor.triggerAll();
try {
-   coord.triggerCheckpoint(
-   System.currentTimeMillis(),
-   
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
-   null,
-   true,
-   false);
-   manuallyTriggeredScheduledExecutor.triggerAll();
+   onCompletionPromise1.get();
fail("The triggerCheckpoint call expected an 
exception");
-   } catch (CheckpointException e) {
-   
assertEquals(CheckpointFailureReason.PERIODIC_SCHEDULER_SHUTDOWN, 
e.getCheckpointFailureReason());
+   } catch (ExecutionException e) {
+   final Optional 
checkpointExceptionOptional =
+   ExceptionUtils.findThrowable(e, 
CheckpointException.class);
+   assertTrue(checkpointExceptionOptional.isPresent());
+   
assertEquals(CheckpointFailureReason.PERIODIC_SCHEDULER_SHUTDOWN,
+   
checkpointExceptionOptional.get().getCheckpointFailureReason());
}
 
// Not periodic
+   final CompletableFuture 
onCompletionPromise2 = coord.triggerCheckpoint(
+   System.currentTimeMillis(),
+   
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
+   null,
+   false,
+   false);
+   manuallyTriggeredScheduledExecutor.triggerAll();
+   assertFalse(onCompletionPromise2.isCompletedExceptionally());
+   }
+
+   @Test
+   public void testTriggerCheckpointWithShuttingDownCoordinator() throws 
Exception {
+   // create some mock Execution vertices that receive the 
checkpoint trigger messages
+   final ExecutionAttemptID attemptID1 = new ExecutionAttemptID();
+   ExecutionVertex vertex1 = mockExecutionVertex(attemptID1);
+
+   // set up the coordinator and validate the initial state
+   CheckpointCoordinatorConfiguration chkConfig = new 
CheckpointCoordinatorConfiguration(
+   60,
+   60,
+   0,
+   Integer.MAX_VALUE,
+   
CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION,
+   true,
+   false,
+   0);
+   CheckpointCoordinator coord = new CheckpointCoordinator(
+   new JobID(),
+   chkConfig,
+   new ExecutionVertex[] { vertex1 },
+   new ExecutionVertex[] { vertex1 },
+   new ExecutionVertex[] { vertex1 },
+   new StandaloneCheckpointIDCounter(),
+   new StandaloneCompletedCheckpointStore(1),
+   new MemoryStateBackend(),
+   Executors.directExecutor(),
+   manuallyTriggeredScheduledExecutor,
+   SharedStateRegistry.DEFAULT_FACTORY,
+   failureManager);
+
+   coord.startCheckpointScheduler();
+   // Periodic
+   final CompletableFuture 
onCompletionPromise = coord.triggerCheckpoint(
+   System.currentTimeMillis(),
+   
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
+   null,
+   true,
+   false);
+
+   coord.shutdown(JobStatus.FAILED);
+   manuallyTriggeredScheduledExecutor.triggerAll();
try {
-   coord.triggerCheckpoint(
-   System.currentTimeMillis(),
-

[jira] [Updated] (FLINK-15660) Redundant AllocationID verification for allocateSlot in TaskSlotTable

2020-01-18 Thread xiajun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiajun updated FLINK-15660:
---
Description: 
!image-2020-01-19-15-46-42-664.png!

 

In TaskSlotTable::allocateSlot, we first check whether the allocationId 
already exists and refuse the allocation if it does; this check was 
introduced by https://issues.apache.org/jira/browse/FLINK-14589.

But in https://issues.apache.org/jira/browse/FLINK-14189, 

 

https://github.com/apache/flink/blob/310452e800355f0dcc4bc9dd26e9cecba263f3d6/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotTable.java#L261

> Redundant AllocationID verification for allocateSlot in TaskSlotTable
> -
>
> Key: FLINK-15660
> URL: https://issues.apache.org/jira/browse/FLINK-15660
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: xiajun
>Priority: Major
> Attachments: image-2020-01-19-15-46-42-664.png
>
>
> !image-2020-01-19-15-46-42-664.png!
>  
> In TaskSlotTable::allocateSlot, we first check whether the allocationId 
> already exists and refuse the allocation if it does; this check was 
> introduced by https://issues.apache.org/jira/browse/FLINK-14589.
> But in https://issues.apache.org/jira/browse/FLINK-14189, 
>  
> https://github.com/apache/flink/blob/310452e800355f0dcc4bc9dd26e9cecba263f3d6/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/slot/TaskSlotTable.java#L261





[GitHub] [flink] ifndef-SleePy commented on a change in pull request #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-18 Thread GitBox
ifndef-SleePy commented on a change in pull request #10332: 
[FLINK-13905][checkpointing] Separate checkpoint triggering into several 
asynchronous stages
URL: https://github.com/apache/flink/pull/10332#discussion_r368273302
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTriggeringTest.java
 ##
 @@ -283,31 +300,494 @@ public void testStopPeriodicScheduler() throws 
Exception {
failureManager);
 
// Periodic
+   final CompletableFuture 
onCompletionPromise1 = coord.triggerCheckpoint(
+   System.currentTimeMillis(),
+   
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
+   null,
+   true,
+   false);
+   manuallyTriggeredScheduledExecutor.triggerAll();
try {
-   coord.triggerCheckpoint(
-   System.currentTimeMillis(),
-   
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
-   null,
-   true,
-   false);
-   manuallyTriggeredScheduledExecutor.triggerAll();
+   onCompletionPromise1.get();
fail("The triggerCheckpoint call expected an 
exception");
-   } catch (CheckpointException e) {
-   
assertEquals(CheckpointFailureReason.PERIODIC_SCHEDULER_SHUTDOWN, 
e.getCheckpointFailureReason());
+   } catch (ExecutionException e) {
+   final Optional 
checkpointExceptionOptional =
+   ExceptionUtils.findThrowable(e, 
CheckpointException.class);
+   assertTrue(checkpointExceptionOptional.isPresent());
+   
assertEquals(CheckpointFailureReason.PERIODIC_SCHEDULER_SHUTDOWN,
+   
checkpointExceptionOptional.get().getCheckpointFailureReason());
}
 
// Not periodic
+   final CompletableFuture 
onCompletionPromise2 = coord.triggerCheckpoint(
+   System.currentTimeMillis(),
+   
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
+   null,
+   false,
+   false);
+   manuallyTriggeredScheduledExecutor.triggerAll();
+   assertFalse(onCompletionPromise2.isCompletedExceptionally());
+   }
+
+   @Test
+   public void testTriggerCheckpointWithShuttingDownCoordinator() throws 
Exception {
+   // create some mock Execution vertices that receive the 
checkpoint trigger messages
+   final ExecutionAttemptID attemptID1 = new ExecutionAttemptID();
+   ExecutionVertex vertex1 = mockExecutionVertex(attemptID1);
+
+   // set up the coordinator and validate the initial state
+   CheckpointCoordinatorConfiguration chkConfig = new 
CheckpointCoordinatorConfiguration(
+   60,
+   60,
+   0,
+   Integer.MAX_VALUE,
+   
CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION,
+   true,
+   false,
+   0);
+   CheckpointCoordinator coord = new CheckpointCoordinator(
+   new JobID(),
+   chkConfig,
+   new ExecutionVertex[] { vertex1 },
+   new ExecutionVertex[] { vertex1 },
+   new ExecutionVertex[] { vertex1 },
+   new StandaloneCheckpointIDCounter(),
+   new StandaloneCompletedCheckpointStore(1),
+   new MemoryStateBackend(),
+   Executors.directExecutor(),
+   manuallyTriggeredScheduledExecutor,
+   SharedStateRegistry.DEFAULT_FACTORY,
+   failureManager);
+
+   coord.startCheckpointScheduler();
+   // Periodic
+   final CompletableFuture 
onCompletionPromise = coord.triggerCheckpoint(
+   System.currentTimeMillis(),
+   
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
+   null,
+   true,
+   false);
+
+   coord.shutdown(JobStatus.FAILED);
+   manuallyTriggeredScheduledExecutor.triggerAll();
try {
-   coord.triggerCheckpoint(
-   System.currentTimeMillis(),
-

[jira] [Commented] (FLINK-15639) Support to set toleration for jobmanager and taskmanger

2020-01-18 Thread yuzhaojing (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018836#comment-17018836
 ] 

yuzhaojing commented on FLINK-15639:


Could you assign this ticket to me?

[~fly_in_gis]

> Support to set toleration for jobmanager and taskmanger
> ---
>
> Key: FLINK-15639
> URL: https://issues.apache.org/jira/browse/FLINK-15639
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> Taints and tolerations work together to ensure that pods are not scheduled 
> onto inappropriate nodes. See the [Kubernetes 
> documentation|https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/] 
> for more information.
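For reference, a standard Kubernetes toleration in a pod spec looks like the following (the key/value pair is illustrative; the Flink configuration keys for exposing this are not defined by this ticket):

```yaml
# Illustrative toleration: allows the pod to be scheduled onto nodes
# tainted with dedicated=flink:NoSchedule.
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "flink"
  effect: "NoSchedule"
```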





[jira] [Closed] (FLINK-15630) Improve the environment requirement documentation of the Python API

2020-01-18 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-15630.
---
Resolution: Resolved

> Improve the environment requirement documentation of the Python API
> ---
>
> Key: FLINK-15630
> URL: https://issues.apache.org/jira/browse/FLINK-15630
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Wei Zhong
>Assignee: Wei Zhong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current Python API documentation is not very clear about the environment 
> requirements. It should be described in more detail.





[jira] [Comment Edited] (FLINK-14460) Active Kubernetes integration phase2 - Advanced Features

2020-01-18 Thread yuzhaojing (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018831#comment-17018831
 ] 

yuzhaojing edited comment on FLINK-14460 at 1/19/20 7:46 AM:
-

Could you assign https://issues.apache.org/jira/browse/FLINK-15639 to me?
 I would also like to add support for pullSecrets, ingress, scheduler, dnsPolicy, and 
dnsConfig in Flink. Could you create a ticket for this and assign it to me?


was (Author: yuzhaojing):
Can assign https://issues.apache.org/jira/browse/FLINK-15639 for me?
And I want to support pullSecrets, ingress, scheduler, and dnsPolicy and 
dnsConfig to flink,Can Create this and assign for me?

> Active Kubernetes integration phase2 - Advanced Features
> 
>
> Key: FLINK-14460
> URL: https://issues.apache.org/jira/browse/FLINK-14460
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> This is phase 2 of the active Kubernetes integration. It is an umbrella Jira 
> ticket to track all the advanced features needed to make Flink on Kubernetes 
> production-ready.





[jira] [Updated] (FLINK-15660) Redundant AllocationID verification for allocateSlot in TaskSlotTable

2020-01-18 Thread xiajun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiajun updated FLINK-15660:
---
Attachment: image-2020-01-19-15-46-42-664.png

> Redundant AllocationID verification for allocateSlot in TaskSlotTable
> -
>
> Key: FLINK-15660
> URL: https://issues.apache.org/jira/browse/FLINK-15660
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: xiajun
>Priority: Major
> Attachments: image-2020-01-19-15-46-42-664.png
>
>






[jira] [Created] (FLINK-15660) Redundant AllocationID verification for allocateSlot in TaskSlotTable

2020-01-18 Thread xiajun (Jira)
xiajun created FLINK-15660:
--

 Summary: Redundant AllocationID verification for allocateSlot in 
TaskSlotTable
 Key: FLINK-15660
 URL: https://issues.apache.org/jira/browse/FLINK-15660
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.10.0
Reporter: xiajun








[jira] [Commented] (FLINK-14460) Active Kubernetes integration phase2 - Advanced Features

2020-01-18 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018831#comment-17018831
 ] 

喻兆靖 commented on FLINK-14460:
-

Could you assign https://issues.apache.org/jira/browse/FLINK-15639 to me?
I would also like to add support for pullSecrets, ingress, scheduler, dnsPolicy, and 
dnsConfig in Flink. Could you create a ticket for this and assign it to me?

> Active Kubernetes integration phase2 - Advanced Features
> 
>
> Key: FLINK-14460
> URL: https://issues.apache.org/jira/browse/FLINK-14460
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> This is phase 2 of the active Kubernetes integration. It is an umbrella Jira 
> ticket to track all the advanced features needed to make Flink on Kubernetes 
> production-ready.





[jira] [Commented] (FLINK-15630) Improve the environment requirement documentation of the Python API

2020-01-18 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018830#comment-17018830
 ] 

Hequn Cheng commented on FLINK-15630:
-

Resolved in
1.11.0 via 48547baaada33a69edb582483f9044dd4ac960db
1.10.0 via 13c34b98722e87be9bba077d6987987700f499c3

> Improve the environment requirement documentation of the Python API
> ---
>
> Key: FLINK-15630
> URL: https://issues.apache.org/jira/browse/FLINK-15630
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Wei Zhong
>Assignee: Wei Zhong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current Python API documentation is not very clear about the environment 
> requirements. It should be described in more detail.





[jira] [Commented] (FLINK-15632) Zookeeper HA service could not work for active kubernetes integration

2020-01-18 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018827#comment-17018827
 ] 

Zhu Zhu commented on FLINK-15632:
-

Thanks for reporting this issue [~fly_in_gis].
I have assigned it to you.

> Zookeeper HA service could not work for active kubernetes integration
> -
>
> Key: FLINK-15632
> URL: https://issues.apache.org/jira/browse/FLINK-15632
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
> Fix For: 1.10.0
>
>
> Things will be somewhat different if we want to support HA for the active 
> Kubernetes integration.
>  # The K8s service is designed for accessing the jobmanager from outside the 
> K8s cluster, so the Flink client will not use the HA service to retrieve the 
> jobmanager address. Instead, it always uses the Kubernetes service to contact 
> the jobmanager via the REST client.
>  # Kubernetes DNS creates A and SRV records only for Services; it doesn't 
> generate A records for pods. So the IP address, not the hostname, will be 
> used as the jobmanager address.
>  
> All other behavior is the same as ZooKeeper HA for standalone and YARN 
> deployments.
> To fix this problem, we just need some minor changes to 
> {{KubernetesClusterDescriptor}} and {{KubernetesSessionEntrypoint}}.





[jira] [Updated] (FLINK-15632) Zookeeper HA service could not work for active kubernetes integration

2020-01-18 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-15632:

Priority: Critical  (was: Major)

> Zookeeper HA service could not work for active kubernetes integration
> -
>
> Key: FLINK-15632
> URL: https://issues.apache.org/jira/browse/FLINK-15632
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
> Fix For: 1.10.0
>
>
> Things will be somewhat different if we want to support HA for the active 
> Kubernetes integration.
>  # The K8s service is designed for accessing the jobmanager from outside the 
> K8s cluster, so the Flink client will not use the HA service to retrieve the 
> jobmanager address. Instead, it always uses the Kubernetes service to contact 
> the jobmanager via the REST client.
>  # Kubernetes DNS creates A and SRV records only for Services; it doesn't 
> generate A records for pods. So the IP address, not the hostname, will be 
> used as the jobmanager address.
>  
> All other behavior is the same as ZooKeeper HA for standalone and YARN 
> deployments.
> To fix this problem, we just need some minor changes to 
> {{KubernetesClusterDescriptor}} and {{KubernetesSessionEntrypoint}}.





[jira] [Assigned] (FLINK-15632) Zookeeper HA service could not work for active kubernetes integration

2020-01-18 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-15632:
---

Assignee: Yang Wang

> Zookeeper HA service could not work for active kubernetes integration
> -
>
> Key: FLINK-15632
> URL: https://issues.apache.org/jira/browse/FLINK-15632
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Fix For: 1.10.0
>
>
> Things will be somewhat different if we want to support HA for the active 
> Kubernetes integration.
>  # The K8s service is designed for accessing the jobmanager from outside the 
> K8s cluster, so the Flink client will not use the HA service to retrieve the 
> jobmanager address. Instead, it always uses the Kubernetes service to contact 
> the jobmanager via the REST client.
>  # Kubernetes DNS creates A and SRV records only for Services; it doesn't 
> generate A records for pods. So the IP address, not the hostname, will be 
> used as the jobmanager address.
>  
> All other behavior is the same as ZooKeeper HA for standalone and YARN 
> deployments.
> To fix this problem, we just need some minor changes to 
> {{KubernetesClusterDescriptor}} and {{KubernetesSessionEntrypoint}}.





[GitHub] [flink] flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list for Hive built-in functions

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list 
for Hive built-in functions
URL: https://github.com/apache/flink/pull/10894#issuecomment-575966596
 
 
   
   ## CI report:
   
   * a72e031a0f93df6cf782968234f59a9ba0c40821 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145094605) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4459)
 
   * 5adc6ab35ccc1a53562983afb90d7212dcf66a1c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145096925) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4462)
 
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10896: [FLINK-15631][table-planner-blink] Fix equals code generation for raw and timestamp type

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10896: [FLINK-15631][table-planner-blink] 
Fix equals code generation for raw and timestamp type
URL: https://github.com/apache/flink/pull/10896#issuecomment-575971337
 
 
   
   ## CI report:
   
   * f765a3816b97657b93895347980f6899a45e95b9 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145096929) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4463)
 
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10893: [FLINK-15637][state backends] Make RocksDB the default store for timers when using RocksDBStateBackend

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10893: [FLINK-15637][state backends] Make 
RocksDB the default store for timers when using RocksDBStateBackend
URL: https://github.com/apache/flink/pull/10893#issuecomment-575859797
 
 
   
   ## CI report:
   
   * 2a07b12e4f943499075f42960fc84dd445df6f3c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145041173) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4454)
 
   * d644b42865da92de5b84b807f373f53adbd7945b UNKNOWN
   * 3c71f512cb2a8adfdfd094d794085cf883b39de9 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145091925) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4456)
 
   * d55ffe03a45c1cf272e0e912450b9f391df73bc7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145096915) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4461)
 
   
   


[GitHub] [flink] flinkbot commented on issue #10897: [FLINK-15657][python][doc] Fix the python table api doc link in Python API tutorial

2020-01-18 Thread GitBox
flinkbot commented on issue #10897: [FLINK-15657][python][doc] Fix the python 
table api doc link in Python API tutorial
URL: https://github.com/apache/flink/pull/10897#issuecomment-575975169
 
 
   
   ## CI report:
   
   * a7ecf46073a9f7c25258848b119462bb900a6631 UNKNOWN
   
   


[jira] [Created] (FLINK-15659) Introduce a client-side StatusWatcher for JM Pods

2020-01-18 Thread Canbin Zheng (Jira)
Canbin Zheng created FLINK-15659:


 Summary: Introduce a client-side StatusWatcher for JM Pods
 Key: FLINK-15659
 URL: https://issues.apache.org/jira/browse/FLINK-15659
 Project: Flink
  Issue Type: New Feature
  Components: Deployment / Kubernetes
Reporter: Canbin Zheng


It's quite useful to introduce a client-side StatusWatcher that tracks and logs the 
status of the JobManager pods, so that users who deploy applications on K8s via the 
CLI can conveniently follow the deployment progress.

The status logging could occur on every state change, as well as at a regular interval.
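A minimal sketch of the idea, with invented names (a real implementation would hook into the Kubernetes client's pod watch API, which this sketch does not attempt to model):

```java
import java.util.function.Consumer;

// Hypothetical sketch: report the JobManager pod phase on every state change,
// suppressing consecutive duplicates so that interval-based polling does not
// spam the log.
class PodStatusWatcherSketch {

    private final Consumer<String> log;
    private String lastPhase;

    PodStatusWatcherSketch(Consumer<String> log) {
        this.log = log;
    }

    // Invoked on every watch event, or on each polling tick.
    void onPhase(String phase) {
        if (!phase.equals(lastPhase)) {
            log.accept("JobManager pod is now: " + phase);
            lastPhase = phase;
        }
    }
}
```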





[GitHub] [flink] flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve 
the environment requirement documentation of the Python API.
URL: https://github.com/apache/flink/pull/10884#issuecomment-575556115
 
 
   
   ## CI report:
   
   * 6656e341e1fc6759bcdc547daad7252a85913404 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/144906427) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4430)
 
   * 4c85362fa2e7f102ca152d00bfa7ca4674a90858 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144911942) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4434)
 
   * 1fd3dd87276855f8ec05eba6f6d2f88fc3b22f8f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145095505) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4460)
 
   * 140ac90e152dc6621ce8a78d0bb94ddade21d40e Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/145097689) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4465)
 
   * 7e0619ae7e3e156b11c986141c8d18f1df590098 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client 
requires both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#issuecomment-575493814
 
 
   
   ## CI report:
   
   * 6ce81e21780883796248af5c87d2ec5f1dc5e0ef UNKNOWN
   * 47ab573d50ed82018632ed1106b9deb79e83d820 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144866957) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4419)
 
   * 82ad91185396e2ff3b6be078e7e933d0acbfabe4 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/145097683) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4464)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Assigned] (FLINK-15630) Improve the environment requirement documentation of the Python API

2020-01-18 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng reassigned FLINK-15630:
---

Assignee: Wei Zhong

> Improve the environment requirement documentation of the Python API
> ---
>
> Key: FLINK-15630
> URL: https://issues.apache.org/jira/browse/FLINK-15630
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Wei Zhong
>Assignee: Wei Zhong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current Python API documentation is not very clear about the environment 
> requirements. It should be described in more detail.



--


[GitHub] [flink] hequn8128 merged pull request #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
hequn8128 merged pull request #10884: [FLINK-15630][python][doc] Improve the 
environment requirement documentation of the Python API.
URL: https://github.com/apache/flink/pull/10884
 
 
   




[GitHub] [flink] flinkbot commented on issue #10897: [FLINK-15657][python][doc] Fix the python table api doc link in Python API tutorial

2020-01-18 Thread GitBox
flinkbot commented on issue #10897: [FLINK-15657][python][doc] Fix the python 
table api doc link in Python API tutorial
URL: https://github.com/apache/flink/pull/10897#issuecomment-575974555
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a7ecf46073a9f7c25258848b119462bb900a6631 (Sun Jan 19 
07:12:23 UTC 2020)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15657).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15657) Fix the python table api doc link in Python API tutorial

2020-01-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15657:
---
Labels: pull-request-available  (was: )

> Fix the python table api doc link in Python API tutorial
> 
>
> Key: FLINK-15657
> URL: https://issues.apache.org/jira/browse/FLINK-15657
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
>
> Fix the python table api doc link



--


[GitHub] [flink] HuangXingBo opened a new pull request #10897: [FLINK-15657][python][doc] Fix the python table api doc link in Python API tutorial

2020-01-18 Thread GitBox
HuangXingBo opened a new pull request #10897: [FLINK-15657][python][doc] Fix 
the python table api doc link in Python API tutorial
URL: https://github.com/apache/flink/pull/10897
 
 
   ## What is the purpose of the change
   
   *This pull request fixes the python table api doc link in Python API 
tutorial*
   
   
   ## Brief change log
   
 - *correct the python table api doc link in Python API tutorial*
   
   ## Verifying this change
   
 - *This change is a trivial rework / code cleanup without any test 
coverage.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (docs)
   




[jira] [Updated] (FLINK-15657) Fix the python table api doc link in Python API tutorial

2020-01-18 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-15657:
-
Summary: Fix the python table api doc link in Python API tutorial  (was: 
Fix the python table api doc link)

> Fix the python table api doc link in Python API tutorial
> 
>
> Key: FLINK-15657
> URL: https://issues.apache.org/jira/browse/FLINK-15657
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Huang Xingbo
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> Fix the python table api doc link



--


[jira] [Updated] (FLINK-15609) Add blink built-in functions from FlinkSqlOperatorTable to BuiltInFunctionDefinitions

2020-01-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15609:
-
Fix Version/s: (was: 1.10.0)
   1.11.0

> Add blink built-in functions from FlinkSqlOperatorTable to 
> BuiltInFunctionDefinitions
> -
>
> Key: FLINK-15609
> URL: https://issues.apache.org/jira/browse/FLINK-15609
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> In FLINK-15595, CoreModule should contain all functions in 
> FlinkSqlOperatorTable; otherwise, the resolution order is chaotic. I think it 
> is time to align the blink built-in functions with BuiltInFunctionDefinitions.
> Impact on the legacy planner: a user can no longer directly use the name of a 
> catalog function that he defined with the same name as a blink built-in 
> function. I think this is reasonable, since he will migrate his job to the 
> blink planner.
> What do you think? [~twalthr] [~dwysakowicz]



--


[GitHub] [flink] hequn8128 commented on a change in pull request #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
hequn8128 commented on a change in pull request #10884: 
[FLINK-15630][python][doc] Improve the environment requirement documentation of 
the Python API.
URL: https://github.com/apache/flink/pull/10884#discussion_r368270104
 
 

 ##
 File path: docs/ops/python_shell.zh.md
 ##
 @@ -24,6 +24,21 @@ under the License.
 
 Flink附带了一个集成的交互式Python Shell。
 它既能够运行在本地启动的local模式,也能够运行在集群启动的cluster模式下。
+本地安装Flink,请看[本地安装](../getting-started/tutorials/local_setup.html)页面。
 
 Review comment:
   The link here is invalid, change the link to `deployment/local.html`




[jira] [Updated] (FLINK-15658) The same sql run in a streaming environment producing a Exception, but a batch env can run normally.

2020-01-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15658:
---
Description: 
*summary:*
The same SQL runs normally in a batch environment, but in a streaming 
environment it fails with an exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;
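All three result columns end up named `f1`, which is what the streaming validator rejects. One possible (untested) workaround is to alias the duplicated columns in the outer projection; the sketch below illustrates the aliased query shape using Python's built-in sqlite3 rather than Flink, with the five-row `int4_tbl` sample from this issue:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE int4_tbl (f1 INTEGER)")
conn.executemany("INSERT INTO int4_tbl VALUES (?)",
                 [(0,), (123456,), (-123456,), (2147483647,), (-2147483647,)])

# Alias the duplicated output columns so every result field name is unique.
cur = conn.execute("""
    SELECT a.f1 AS a_f1, b.f1 AS b_f1
    FROM (SELECT SUM(f1) + 1 AS f1 FROM int4_tbl) a,
         (SELECT SUM(f1)     AS f1 FROM int4_tbl) b
""")
rows = cur.fetchall()
names = [d[0] for d in cur.description]
print(names, rows)  # ['a_f1', 'b_f1'] [(1, 0)]
```

Whether the aliased form also passes the blink streaming planner in 1.10.0 would still need to be verified.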







  was:
*summary:
*
The same sql can run in a batch environment normally,  but in a streaming 
environment there will be a exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;








> The same sql run in a streaming environment producing a Exception, but a 
> batch env can run normally.
> 
>
> Key: FLINK-15658
> URL: https://issues.apache.org/jira/browse/FLINK-15658
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: *Input data:
> *
> tenk1 is:
> 4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
> 4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
> 6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
> 6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
> 429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
> 5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
> 1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
> 2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
> 0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
> 2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx
> int4_tbl is:
> 0
> 123456
> -123456
> 2147483647
> -2147483647
> *The sql-client configuration is :
> *
> execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *summary:*
> The same sql can run in a batch environment normally,  but in a streaming 
> environment there will be a exception like this:
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Field 

[jira] [Updated] (FLINK-15658) The same sql run in a streaming environment producing a Exception, but a batch env can run normally.

2020-01-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15658:
---
Environment: 
*Input data:*
tenk1 is:
4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx

int4_tbl is:
0
123456
-123456
2147483647
-2147483647

*The sql-client configuration is :*

execution:
  planner: blink
  type: batch

  was:
*Input data:
*
tenk1 is:
4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx

int4_tbl is:
0
123456
-123456
2147483647
-2147483647

*The sql-client configuration is :
*

execution:
  planner: blink
  type: batch


> The same sql run in a streaming environment producing a Exception, but a 
> batch env can run normally.
> 
>
> Key: FLINK-15658
> URL: https://issues.apache.org/jira/browse/FLINK-15658
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: *Input data:*
> tenk1 is:
> 4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
> 4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
> 6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
> 6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
> 429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
> 5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
> 1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
> 2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
> 0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
> 2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx
> int4_tbl is:
> 0
> 123456
> -123456
> 2147483647
> -2147483647
> *The sql-client configuration is :*
> execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *summary:*
> The same sql can run in a batch environment normally,  but in a streaming 
> environment there will be a exception like this:
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Field names must be unique. 
> Found duplicates: [f1]
> *The sql is:*
> CREATE TABLE `tenk1` (
>   unique1 int,
>   unique2 int,
>   two int,
>   four int,
>   ten int,
>   twenty int,
>   hundred int,
>   thousand int,
>   twothousand int,
>   fivethous int,
>   tenthous int,
>   odd int,
>   even int,
>   stringu1 varchar,
>   stringu2 varchar,
>   string4 varchar
> ) WITH (
>   
> 'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> CREATE TABLE `int4_tbl` (
>   f1 INT
> ) WITH (
>   
> 'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> select a.f1, b.f1, t.thousand, t.tenthous from
>   tenk1 t,
>   (select sum(f1)+1 as f1 from int4_tbl i4a) a,
>   (select sum(f1) as f1 from int4_tbl i4b) b
> where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;



--


[jira] [Updated] (FLINK-15658) The same sql run in a streaming environment producing a Exception, but a batch env can run normally.

2020-01-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15658:
---
Description: 
*summary:
*
The same sql can run in a batch environment normally,  but in a streaming 
environment there will be a exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;







  was:
The same sql can run in a batch environment normally,  but in a streaming 
environment there will be a exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;








> The same sql run in a streaming environment producing a Exception, but a 
> batch env can run normally.
> 
>
> Key: FLINK-15658
> URL: https://issues.apache.org/jira/browse/FLINK-15658
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: *Input data:
> *
> tenk1 is:
> 4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
> 4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
> 6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
> 6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
> 429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
> 5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
> 1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
> 2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
> 0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
> 2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx
> int4_tbl is:
> 0
> 123456
> -123456
> 2147483647
> -2147483647
> *The sql-client configuration is :
> *
> execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *summary:
> *
> The same sql can run in a batch environment normally,  but in a streaming 
> environment there will be a exception like this:
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Field names must 

[jira] [Created] (FLINK-15658) The same sql run in a streaming environment producing a Exception, but a batch env can run normally.

2020-01-18 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15658:
--

 Summary: The same sql run in a streaming environment producing a 
Exception, but a batch env can run normally.
 Key: FLINK-15658
 URL: https://issues.apache.org/jira/browse/FLINK-15658
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
 Environment: *Input data:
*
tenk1 is:
4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx

int4_tbl is:
0
123456
-123456
2147483647
-2147483647

*The sql-client configuration is :
*

execution:
  planner: blink
  type: batch
Reporter: xiaojin.wy
 Fix For: 1.10.0


The same SQL runs normally in a batch environment, but in a streaming 
environment it fails with an exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;









--


[jira] [Resolved] (FLINK-15537) Type of keys should be `BinaryRow` when manipulating map state with `BaseRow` as key type.

2020-01-18 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15537.
-
Resolution: Fixed

1.11.0: ac5e288524991d340309d118fa9fa29109e7427f
1.10.0: 791b93d451e371e88ed1d05a91b418fee364f0c5

> Type of keys should be `BinaryRow` when manipulating map state with `BaseRow` 
> as key type.
> --
>
> Key: FLINK-15537
> URL: https://issues.apache.org/jira/browse/FLINK-15537
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: Shuo Cheng
>Assignee: Shuo Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> `BaseRow` is serialized and deserialized as `BinaryRow` by default, so when 
> the key type of the map state is `BaseRow`, we should construct map keys of 
> type `BinaryRow` to get values from the map state; otherwise, you would 
> always get null...
> Try it with following SQL:
> {code:java}
> // (b: Int, c: String)
> SELECT 
>   b, listagg(DISTINCT c, '#')
> FROM MyTable
> GROUP BY b
> {code}
>  
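The failure mode described above is generic: if a map is populated with keys in one canonical representation but probed with keys in another, every lookup misses. A minimal Python illustration (the class names are hypothetical stand-ins for `BinaryRow` and a generic `BaseRow`, not Flink APIs):

```python
class BinaryKey:
    """Canonical (serialized) key form, analogous to BinaryRow."""
    def __init__(self, values):
        self.values = tuple(values)
    def __eq__(self, other):
        return isinstance(other, BinaryKey) and self.values == other.values
    def __hash__(self):
        return hash(self.values)

class GenericKey:
    """Non-canonical key form, analogous to a generic BaseRow.

    Uses default identity-based equality, so it never matches stored keys.
    """
    def __init__(self, values):
        self.values = tuple(values)

state = {BinaryKey(["b"]): 1}                # state populated with canonical keys
assert state.get(GenericKey(["b"])) is None  # probing with the other form misses
assert state.get(BinaryKey(["b"])) == 1      # normalizing the key first succeeds
```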



--


[jira] [Created] (FLINK-15657) Fix the python table api doc link

2020-01-18 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-15657:


 Summary: Fix the python table api doc link
 Key: FLINK-15657
 URL: https://issues.apache.org/jira/browse/FLINK-15657
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Documentation
Reporter: Huang Xingbo
 Fix For: 1.9.2, 1.10.0


Fix the python table api doc link



--


[GitHub] [flink] wuchong merged pull request #10815: [FLINK-15537][table-planner-blink] Type of keys should be `BinaryRow`…

2020-01-18 Thread GitBox
wuchong merged pull request #10815: [FLINK-15537][table-planner-blink] Type of 
keys should be `BinaryRow`…
URL: https://github.com/apache/flink/pull/10815
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #10896: [FLINK-15631][table-planner-blink] Fix equals code generation for raw and timestamp type

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10896: [FLINK-15631][table-planner-blink] 
Fix equals code generation for raw and timestamp type
URL: https://github.com/apache/flink/pull/10896#issuecomment-575971337
 
 
   
   ## CI report:
   
   * f765a3816b97657b93895347980f6899a45e95b9 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145096929) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4463)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list for Hive built-in functions

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list 
for Hive built-in functions
URL: https://github.com/apache/flink/pull/10894#issuecomment-575966596
 
 
   
   ## CI report:
   
   * a72e031a0f93df6cf782968234f59a9ba0c40821 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145094605) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4459)
 
   * 5adc6ab35ccc1a53562983afb90d7212dcf66a1c Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/145096925) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4462)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve 
the environment requirement documentation of the Python API.
URL: https://github.com/apache/flink/pull/10884#issuecomment-575556115
 
 
   
   ## CI report:
   
   * 6656e341e1fc6759bcdc547daad7252a85913404 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/144906427) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4430)
 
   * 4c85362fa2e7f102ca152d00bfa7ca4674a90858 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144911942) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4434)
 
   * 1fd3dd87276855f8ec05eba6f6d2f88fc3b22f8f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145095505) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4460)
 
   * 140ac90e152dc6621ce8a78d0bb94ddade21d40e UNKNOWN
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10893: [FLINK-15637][state backends] Make RocksDB the default store for timers when using RocksDBStateBackend

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10893: [FLINK-15637][state backends] Make 
RocksDB the default store for timers when using RocksDBStateBackend
URL: https://github.com/apache/flink/pull/10893#issuecomment-575859797
 
 
   
   ## CI report:
   
   * 2a07b12e4f943499075f42960fc84dd445df6f3c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145041173) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4454)
 
   * d644b42865da92de5b84b807f373f53adbd7945b UNKNOWN
   * 3c71f512cb2a8adfdfd094d794085cf883b39de9 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145091925) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4456)
 
   * d55ffe03a45c1cf272e0e912450b9f391df73bc7 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/145096915) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4461)
 
   
   
   




[GitHub] [flink] hequn8128 commented on a change in pull request #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
hequn8128 commented on a change in pull request #10884: 
[FLINK-15630][python][doc] Improve the environment requirement documentation of 
the Python API.
URL: https://github.com/apache/flink/pull/10884#discussion_r368270104
 
 

 ##
 File path: docs/ops/python_shell.zh.md
 ##
 @@ -24,6 +24,21 @@ under the License.
 
 Flink附带了一个集成的交互式Python Shell。
 它既能够运行在本地启动的local模式,也能够运行在集群启动的cluster模式下。
+本地安装Flink,请看[本地安装](../getting-started/tutorials/local_setup.html)页面。
 
 Review comment:
   The link here is invalid; change it to `../ops/deployment/local.html`.




[GitHub] [flink] hequn8128 commented on a change in pull request #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
hequn8128 commented on a change in pull request #10884: 
[FLINK-15630][python][doc] Improve the environment requirement documentation of 
the Python API.
URL: https://github.com/apache/flink/pull/10884#discussion_r368269152
 
 

 ##
 File path: docs/ops/python_shell.md
 ##
 @@ -24,6 +24,21 @@ under the License.
 
 Flink comes with an integrated interactive Python Shell.
 It can be used in a local setup as well as in a cluster setup.
+See the [local setup page](../getting-started/tutorials/local_setup.html) for 
more information about how to setup a local Flink.
+You can also [build a local setup from source](../flinkDev/building.html).
+
+Note The Python Shell will run the 
command “python”. Please run following command to confirm that the command 
“python” in current environment points to Python 3.5+:
+
+{% highlight bash %}
+$ python --version
+# the version printed here must be 3.5+
+{% endhighlight %}
+
+Note Using Python UDF in Python Shell 
requires apache-beam 2.15.0. Run following command to confirm that it meets the 
requirements before run the Shell in local mode:
 
 Review comment:
   Run **the** following command to confirm that it meets the requirements 
before **running** the Shell in local mode
   
   Same for other places.




[GitHub] [flink] flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client 
requires both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#issuecomment-575493814
 
 
   
   ## CI report:
   
   * 6ce81e21780883796248af5c87d2ec5f1dc5e0ef UNKNOWN
   * 47ab573d50ed82018632ed1106b9deb79e83d820 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144866957) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4419)
 
   * 82ad91185396e2ff3b6be078e7e933d0acbfabe4 UNKNOWN
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10815: [FLINK-15537][table-planner-blink] Type of keys should be `BinaryRow`…

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10815: [FLINK-15537][table-planner-blink] 
Type of keys should be `BinaryRow`…
URL: https://github.com/apache/flink/pull/10815#issuecomment-572566376
 
 
   
   ## CI report:
   
   * 19a4290f709495491fe460037c8c31d106984ea8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143732723) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4229)
 
   * c3ef5ea345a343170806de8112163edb7df31f69 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144110200) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4284)
 
   * 941a5d4725dee3317ca05f8ab16eb103f61d3fcb Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144255612) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4312)
 
   * c60878a8c878ddfd03bade488d688068d30bd1d5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145094603) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4458)
 
   
   
   




[GitHub] [flink] wuchong commented on issue #10815: [FLINK-15537][table-planner-blink] Type of keys should be `BinaryRow`…

2020-01-18 Thread GitBox
wuchong commented on issue #10815: [FLINK-15537][table-planner-blink] Type of 
keys should be `BinaryRow`…
URL: https://github.com/apache/flink/pull/10815#issuecomment-575972770
 
 
   The failed case `TaskExecutorITCase.teardown` is not related to this PR; it 
seems to be tracked by FLINK-15247.
   Merging...




[jira] [Commented] (FLINK-15247) Closing (Testing)MiniCluster may cause ConcurrentModificationException

2020-01-18 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018811#comment-17018811
 ] 

Jark Wu commented on FLINK-15247:
-

Maybe another instance: https://api.travis-ci.com/v3/job/277276310/log.txt

{code:java}
TaskExecutorITCase.teardown:86 » Flink Could not close resource.

[ERROR] 
testJobReExecutionAfterTaskExecutorTermination(org.apache.flink.runtime.taskexecutor.TaskExecutorITCase)
  Time elapsed: 0.326 s  <<< ERROR!
org.apache.flink.util.FlinkException: Could not close resource.
at 
org.apache.flink.runtime.taskexecutor.TaskExecutorITCase.teardown(TaskExecutorITCase.java:86)
Caused by: org.apache.flink.util.FlinkException: Error while shutting the 
TaskExecutor down.
Caused by: org.apache.flink.util.FlinkException: Could not properly shut down 
the TaskManager services.
Caused by: java.util.ConcurrentModificationException
{code}
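For illustration, here is a minimal, hypothetical Java sketch (not the actual TaskManager shutdown code) of how a `ConcurrentModificationException` like the one above can arise: a live for-each iterator over an `ArrayList` fails fast when the list is structurally modified mid-iteration. The service names below are invented stand-ins.

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {

    // Hypothetical stand-in for a list of TaskManager services being shut down.
    static final List<String> services = new ArrayList<>(
            List.of("taskSlotTable", "jobLeaderService", "stateManager"));

    static boolean triggersCme() {
        try {
            for (String s : services) {
                // Structural modification while the for-each iterator is live:
                // the iterator's fail-fast check throws on the next step.
                services.remove(s);
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(triggersCme()); // prints "true"
    }
}
```

A common remedy is to iterate over a defensive copy, or to remove elements via `Iterator.remove()` instead of mutating the list directly.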


> Closing (Testing)MiniCluster may cause ConcurrentModificationException
> --
>
> Key: FLINK-15247
> URL: https://issues.apache.org/jira/browse/FLINK-15247
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Gary Yao
>Assignee: Andrey Zagrebin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {noformat}
> Test 
> operatorsBecomeBackPressured(org.apache.flink.test.streaming.runtime.BackPressureITCase)
>  failed with:
> org.apache.flink.util.FlinkException: Could not close resource.
> at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:42)org.apache.flink.test.streaming.runtime.BackPressureITCase.tearDown(BackPressureITCase.java:165)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at 
> 

[jira] [Assigned] (FLINK-15631) Cannot use generic types as the result of an AggregateFunction in Blink planner

2020-01-18 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15631:
---

Assignee: Jingsong Lee

> Cannot use generic types as the result of an AggregateFunction in Blink 
> planner
> ---
>
> Key: FLINK-15631
> URL: https://issues.apache.org/jira/browse/FLINK-15631
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Dawid Wysakowicz
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It is not possible to use a GenericTypeInfo for a result type of an 
> {{AggregateFunction}} in a retract mode with state cleaning disabled.
> {code}
>   @Test
>   def testGenericTypes(): Unit = {
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val setting = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
> val tEnv = StreamTableEnvironment.create(env, setting)
> val t = env.fromElements(1, 2, 3).toTable(tEnv, 'a)
> val results = t
>   .select(new GenericAggregateFunction()('a))
>   .toRetractStream[Row]
> val sink = new TestingRetractSink
> results.addSink(sink).setParallelism(1)
> env.execute()
>   }
> class RandomClass(var i: Int)
> class GenericAggregateFunction extends AggregateFunction[java.lang.Integer, 
> RandomClass] {
>   override def getValue(accumulator: RandomClass): java.lang.Integer = 
> accumulator.i
>   override def createAccumulator(): RandomClass = new RandomClass(0)
>   override def getResultType: TypeInformation[java.lang.Integer] = new 
> GenericTypeInfo[Integer](classOf[Integer])
>   override def getAccumulatorType: TypeInformation[RandomClass] = new 
> GenericTypeInfo[RandomClass](
> classOf[RandomClass])
>   def accumulate(acc: RandomClass, value: Int): Unit = {
> acc.i = value
>   }
>   def retract(acc: RandomClass, value: Int): Unit = {
> acc.i = value
>   }
>   def resetAccumulator(acc: RandomClass): Unit = {
> acc.i = 0
>   }
> }
> {code}
> The code above fails with:
> {code}
> Caused by: java.lang.UnsupportedOperationException: BinaryGeneric cannot be 
> compared
>   at 
> org.apache.flink.table.dataformat.BinaryGeneric.equals(BinaryGeneric.java:77)
>   at GroupAggValueEqualiser$17.equalsWithoutHeader(Unknown Source)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:177)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>   at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:170)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> This is related to FLINK-13702
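As a hedged illustration of the root cause (simplified, hypothetical types, not Flink's actual `BinaryGeneric`): a lazily serialized wrapper cannot implement a meaningful `equals` over its raw form alone, so a generated equaliser must either materialize the Java object or compare a canonical serialized form.

```java
import java.util.Arrays;

public class LazyGenericDemo {

    /** Hypothetical simplified stand-in for a lazily serialized generic value. */
    static final class LazyBinary<T> {
        final byte[] bytes; // serialized form; the Java object may never be materialized

        LazyBinary(byte[] bytes) {
            this.bytes = bytes;
        }

        @Override
        public boolean equals(Object o) {
            // Mirrors the reported failure: the raw wrapper refuses comparison.
            throw new UnsupportedOperationException("LazyBinary cannot be compared");
        }

        /** What a fixed equaliser could do instead: compare the serialized form. */
        boolean binaryEquals(LazyBinary<T> other) {
            return Arrays.equals(bytes, other.bytes);
        }
    }

    public static void main(String[] args) {
        LazyBinary<Integer> a = new LazyBinary<>(new byte[]{1, 2});
        LazyBinary<Integer> b = new LazyBinary<>(new byte[]{1, 2});
        System.out.println(a.binaryEquals(b)); // prints "true"
    }
}
```

Note that comparing serialized bytes is only sound when serialization is deterministic, which is presumably why the fix targets the generated equals code rather than the wrapper's own `equals`.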



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15632) Zookeeper HA service could not work for active kubernetes integration

2020-01-18 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018809#comment-17018809
 ] 

Yang Wang commented on FLINK-15632:
---

[~zhuzh] Could you assign this ticket to me? I will try to provide a fix and 
hope to get it into release 1.10.

> Zookeeper HA service could not work for active kubernetes integration
> -
>
> Key: FLINK-15632
> URL: https://issues.apache.org/jira/browse/FLINK-15632
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.10.0
>
>
> Things will be somewhat different if we want to support HA for the active 
> Kubernetes integration.
>  # The K8s service is designed for accessing the jobmanager from outside the 
> K8s cluster. So the Flink client will not use the HA service to retrieve the 
> jobmanager address. Instead, it always uses the Kubernetes service to contact 
> the jobmanager via the rest client. 
>  # The Kubernetes DNS creates A and SRV records only for Services; it doesn't 
> generate A records for pods. So the IP address, not the hostname, will be used 
> as the jobmanager address.
>  
> All other behaviors will be the same as Zookeeper HA for standalone and Yarn.
> To fix this problem, we just need some minor changes to 
> {{KubernetesClusterDescriptor}} and {{KubernetesSessionEntrypoint}}.



--


[jira] [Updated] (FLINK-15632) Zookeeper HA service could not work for active kubernetes integration

2020-01-18 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-15632:
--
Fix Version/s: 1.10.0

> Zookeeper HA service could not work for active kubernetes integration
> -
>
> Key: FLINK-15632
> URL: https://issues.apache.org/jira/browse/FLINK-15632
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.10.0
>
>
> Things will be somewhat different if we want to support HA for the active 
> Kubernetes integration.
>  # The K8s service is designed for accessing the jobmanager from outside the 
> K8s cluster. So the Flink client will not use the HA service to retrieve the 
> jobmanager address. Instead, it always uses the Kubernetes service to contact 
> the jobmanager via the rest client. 
>  # The Kubernetes DNS creates A and SRV records only for Services; it doesn't 
> generate A records for pods. So the IP address, not the hostname, will be used 
> as the jobmanager address.
>  
> All other behaviors will be the same as Zookeeper HA for standalone and Yarn.
> To fix this problem, we just need some minor changes to 
> {{KubernetesClusterDescriptor}} and {{KubernetesSessionEntrypoint}}.



--


[jira] [Created] (FLINK-15656) Support user-specified pod templates

2020-01-18 Thread Canbin Zheng (Jira)
Canbin Zheng created FLINK-15656:


 Summary: Support user-specified pod templates
 Key: FLINK-15656
 URL: https://issues.apache.org/jira/browse/FLINK-15656
 Project: Flink
  Issue Type: New Feature
  Components: Deployment / Kubernetes
Reporter: Canbin Zheng


The current approach of introducing a new configuration option for each aspect 
of the pod specification a user might wish to customize is becoming unwieldy: we 
have to maintain more and more Kubernetes configuration options on the Flink 
side, and users have to learn the gap between the declarative model used by 
Kubernetes and the configuration model used by Flink. Allowing users to specify 
pod templates as a central place for all customization needs of the jobmanager 
and taskmanager pods would be a great improvement.



--


[GitHub] [flink] JingsongLi commented on a change in pull request #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-18 Thread GitBox
JingsongLi commented on a change in pull request #10878: [FLINK-15599][table] 
SQL client requires both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#discussion_r368268966
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/FunctionService.java
 ##
 @@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.descriptors.ClassInstanceValidator;
+import org.apache.flink.table.descriptors.DescriptorProperties;
+import org.apache.flink.table.descriptors.FunctionDescriptor;
+import org.apache.flink.table.descriptors.FunctionDescriptorValidator;
+import org.apache.flink.table.descriptors.HierarchyDescriptorValidator;
+import org.apache.flink.table.descriptors.LiteralValueValidator;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.lang.reflect.Constructor;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Service for creating configured instances of {@link UserDefinedFunction} 
using a
+ * {@link FunctionDescriptor}.
+ */
+public class FunctionService {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(FunctionService.class);
+
+   /**
+* Creates a user-defined function with the given properties and the 
current thread's
+* context class loader.
+*
+* @param descriptor the descriptor that describes a function
+* @return the generated user-defined function
+*/
+   public static UserDefinedFunction createFunction(FunctionDescriptor 
descriptor) {
+   return createFunction(descriptor, 
Thread.currentThread().getContextClassLoader());
+   }
+
+   /**
+* Creates a user-defined function with the given properties.
+*
+* @param descriptor the descriptor that describes a function
+* @param classLoader the class loader to load the function and its 
parameter's classes
+* @return the generated user-defined function
+*/
+   public static UserDefinedFunction createFunction(
+   FunctionDescriptor descriptor,
+   ClassLoader classLoader) {
+   return createFunction(descriptor, classLoader, true);
+   }
+
+   /**
+* Creates a user-defined function with the given properties.
+*
+* @param descriptor the descriptor that describes a function
+* @param classLoader the class loader to load the function and its 
parameter's classes
+* @param performValidation whether or not the descriptor should be 
validated
+* @return the generated user-defined function
+*/
+   public static UserDefinedFunction createFunction(
+   FunctionDescriptor descriptor,
+   ClassLoader classLoader,
+   boolean performValidation) {
+
+   DescriptorProperties properties = new 
DescriptorProperties(true);
+   properties.putProperties(descriptor.toProperties());
+
+   // validate
+   if (performValidation) {
+   new FunctionDescriptorValidator().validate(properties);
+   }
+
+   // instantiate
+   Tuple2<Class<?>, Object> tuple2 = generateInstance(
+   HierarchyDescriptorValidator.EMPTY_PREFIX,
+   properties,
+   classLoader);
+
+   if (!UserDefinedFunction.class.isAssignableFrom(tuple2.f0)) {
+   throw new ValidationException(
+   String.format("Instantiated class '%s' 
is not a user-defined function.", tuple2.f0.getName()));
+   }
+   return (UserDefinedFunction) tuple2.f1;
+   }
+
+   /**
+* Recursively generate an instance of a class according to the given 
properties.
+*
+* @param keyPrefix the prefix to fetch properties

[GitHub] [flink] flinkbot commented on issue #10896: [FLINK-15631][table-planner-blink] Fix equals code generation for raw and timestamp type

2020-01-18 Thread GitBox
flinkbot commented on issue #10896: [FLINK-15631][table-planner-blink] Fix 
equals code generation for raw and timestamp type
URL: https://github.com/apache/flink/pull/10896#issuecomment-575971337
 
 
   
   ## CI report:
   
   * f765a3816b97657b93895347980f6899a45e95b9 UNKNOWN
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list for Hive built-in functions

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list 
for Hive built-in functions
URL: https://github.com/apache/flink/pull/10894#issuecomment-575966596
 
 
   
   ## CI report:
   
   * a72e031a0f93df6cf782968234f59a9ba0c40821 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145094605) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4459)
 
   * 5adc6ab35ccc1a53562983afb90d7212dcf66a1c UNKNOWN
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10893: [FLINK-15637][state backends] Make RocksDB the default store for timers when using RocksDBStateBackend

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10893: [FLINK-15637][state backends] Make 
RocksDB the default store for timers when using RocksDBStateBackend
URL: https://github.com/apache/flink/pull/10893#issuecomment-575859797
 
 
   
   ## CI report:
   
   * 2a07b12e4f943499075f42960fc84dd445df6f3c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145041173) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4454)
 
   * d644b42865da92de5b84b807f373f53adbd7945b UNKNOWN
   * 3c71f512cb2a8adfdfd094d794085cf883b39de9 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145091925) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4456)
 
   * d55ffe03a45c1cf272e0e912450b9f391df73bc7 UNKNOWN
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve 
the environment requirement documentation of the Python API.
URL: https://github.com/apache/flink/pull/10884#issuecomment-575556115
 
 
   
   ## CI report:
   
   * 6656e341e1fc6759bcdc547daad7252a85913404 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/144906427) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4430)
 
   * 4c85362fa2e7f102ca152d00bfa7ca4674a90858 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144911942) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4434)
 
   * 1fd3dd87276855f8ec05eba6f6d2f88fc3b22f8f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145095505) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4460)
 
   
   
   




[GitHub] [flink] JingsongLi commented on a change in pull request #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-18 Thread GitBox
JingsongLi commented on a change in pull request #10878: [FLINK-15599][table] 
SQL client requires both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#discussion_r368268653
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java
 ##
 @@ -269,12 +272,11 @@ public Pipeline createPipeline(String name, 
Configuration flinkConfig) {
if (streamExecEnv != null) {
// special case for Blink planner to apply batch 
optimizations
// note: it also modifies the ExecutionConfig!
-   if (executor instanceof ExecutorBase) {
+   if (isBlinkPlanner) {
 
 Review comment:
   Good point! Catching `NoClassDefFoundError` looks very good to me.
   I've been struggling with this for a long time.




[jira] [Updated] (FLINK-15595) Entirely implement resolution order as FLIP-68 concept

2020-01-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15595:
-
Priority: Major  (was: Critical)

> Entirely implement resolution order as FLIP-68 concept
> --
>
> Key: FLINK-15595
> URL: https://issues.apache.org/jira/browse/FLINK-15595
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> First of all, the implementation is problematic. CoreModule returns 
> BuiltinFunctionDefinition, which cannot be resolved in 
> FunctionCatalogOperatorTable, so resolution falls back to FlinkSqlOperatorTable.
> Second, the set of functions defined by CoreModule is seriously incomplete; 
> compared with FunctionCatalogOperatorTable, it contains far fewer entries. As a 
> result, some functions are resolved with the priority of CoreModule, while 
> others are resolved only after all modules. This is confusing, and not what 
> we want to define in FLIP-68. 
> We should:
>  * Resolve BuiltinFunctionDefinition correctly in 
> FunctionCatalogOperatorTable.
>  * Make CoreModule contain all functions in FlinkSqlOperatorTable; a 
> simple way would be to provide a Calcite wrapper that wraps all functions.
>  * Ensure PlannerContext.getBuiltinSqlOperatorTable does not contain 
> FlinkSqlOperatorTable, and use a single 
> FunctionCatalogOperatorTable. Otherwise, there will be a lot of confusion.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15595) Entirely implement resolution order as FLIP-68 concept

2020-01-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018805#comment-17018805
 ] 

Jingsong Lee commented on FLINK-15595:
--

[~phoenixjiangnan] PR created, but I am not sure whether to merge it. Considering 
the uncertainty, I'll set this to Major instead of Critical.

> Entirely implement resolution order as FLIP-68 concept
> --
>
> Key: FLINK-15595
> URL: https://issues.apache.org/jira/browse/FLINK-15595
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> First of all, the implementation is problematic. CoreModule returns 
> BuiltinFunctionDefinition, which cannot be resolved in 
> FunctionCatalogOperatorTable, so resolution falls back to FlinkSqlOperatorTable.
> Second, the set of functions defined by CoreModule is seriously incomplete; 
> compared with FunctionCatalogOperatorTable, it contains far fewer entries. As a 
> result, some functions are resolved with the priority of CoreModule, while 
> others are resolved only after all modules. This is confusing, and not what 
> we want to define in FLIP-68. 
> We should:
>  * Resolve BuiltinFunctionDefinition correctly in 
> FunctionCatalogOperatorTable.
>  * Make CoreModule contain all functions in FlinkSqlOperatorTable; a 
> simple way would be to provide a Calcite wrapper that wraps all functions.
>  * Ensure PlannerContext.getBuiltinSqlOperatorTable does not contain 
> FlinkSqlOperatorTable, and use a single 
> FunctionCatalogOperatorTable. Otherwise, there will be a lot of confusion.





[jira] [Commented] (FLINK-15631) Cannot use generic types as the result of an AggregateFunction in Blink planner

2020-01-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018804#comment-17018804
 ] 

Jingsong Lee commented on FLINK-15631:
--

Created FLINK-15655 to refactor the current confusing code.

> Cannot use generic types as the result of an AggregateFunction in Blink 
> planner
> ---
>
> Key: FLINK-15631
> URL: https://issues.apache.org/jira/browse/FLINK-15631
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It is not possible to use a GenericTypeInfo for a result type of an 
> {{AggregateFunction}} in a retract mode with state cleaning disabled.
> {code}
>   @Test
>   def testGenericTypes(): Unit = {
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val setting = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
> val tEnv = StreamTableEnvironment.create(env, setting)
> val t = env.fromElements(1, 2, 3).toTable(tEnv, 'a)
> val results = t
>   .select(new GenericAggregateFunction()('a))
>   .toRetractStream[Row]
> val sink = new TestingRetractSink
> results.addSink(sink).setParallelism(1)
> env.execute()
>   }
> class RandomClass(var i: Int)
> class GenericAggregateFunction extends AggregateFunction[java.lang.Integer, 
> RandomClass] {
>   override def getValue(accumulator: RandomClass): java.lang.Integer = 
> accumulator.i
>   override def createAccumulator(): RandomClass = new RandomClass(0)
>   override def getResultType: TypeInformation[java.lang.Integer] = new 
> GenericTypeInfo[Integer](classOf[Integer])
>   override def getAccumulatorType: TypeInformation[RandomClass] = new 
> GenericTypeInfo[RandomClass](
> classOf[RandomClass])
>   def accumulate(acc: RandomClass, value: Int): Unit = {
> acc.i = value
>   }
>   def retract(acc: RandomClass, value: Int): Unit = {
> acc.i = value
>   }
>   def resetAccumulator(acc: RandomClass): Unit = {
> acc.i = 0
>   }
> }
> {code}
> The code above fails with:
> {code}
> Caused by: java.lang.UnsupportedOperationException: BinaryGeneric cannot be 
> compared
>   at 
> org.apache.flink.table.dataformat.BinaryGeneric.equals(BinaryGeneric.java:77)
>   at GroupAggValueEqualiser$17.equalsWithoutHeader(Unknown Source)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:177)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>   at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:170)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> This is related to FLINK-13702





[jira] [Created] (FLINK-15655) Refactor equals code generation in blink

2020-01-18 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-15655:


 Summary: Refactor equals code generation in blink
 Key: FLINK-15655
 URL: https://issues.apache.org/jira/browse/FLINK-15655
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Jingsong Lee
 Fix For: 1.11.0


Now we have {{EqualiserCodeGenerator}}, {{ScalarOperatorGens.generateEquals}}, 
and {{ScalarOperatorGens.generateNotEquals}}; we should merge them so that all 
types are supported and bugs are avoided.
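The idea behind merging the generators is a single place that dispatches on the logical type when emitting an equality expression. The sketch below is a hypothetical, heavily simplified illustration of that dispatch; the class, enum, and method names are invented for this example and are not Flink's actual code generators:

```java
public class EqualsCodeGen {
    // Simplified stand-in for a handful of Flink logical types.
    enum LogicalType { INT, STRING, TIMESTAMP, RAW }

    // Returns a Java expression comparing two terms of the given type,
    // mimicking what a unified equaliser generator might emit.
    static String generateEquals(LogicalType type, String left, String right) {
        switch (type) {
            case INT:
                // Primitives compare with ==.
                return left + " == " + right;
            case STRING:
            case TIMESTAMP:
                // Object types compare with equals().
                return left + ".equals(" + right + ")";
            case RAW:
                // Raw/generic values cannot be compared by reference; they must
                // go through a serializer-based comparison (name is invented).
                return "serializer.toBytesEquals(" + left + ", " + right + ")";
            default:
                throw new IllegalArgumentException("Unsupported type: " + type);
        }
    }

    public static void main(String[] args) {
        System.out.println(generateEquals(LogicalType.INT, "a", "b")); // prints a == b
    }
}
```

Centralizing this dispatch is what prevents errors like the `BinaryGeneric cannot be compared` failure from FLINK-15631, where one generator lacked a case for raw types.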





[GitHub] [flink] klion26 commented on issue #10893: [FLINK-15637][state backends] Make RocksDB the default store for timers when using RocksDBStateBackend

2020-01-18 Thread GitBox
klion26 commented on issue #10893: [FLINK-15637][state backends] Make RocksDB 
the default store for timers when using RocksDBStateBackend
URL: https://github.com/apache/flink/pull/10893#issuecomment-575969969
 
 
   Rebased onto master to sync some doc generation changes.




[jira] [Comment Edited] (FLINK-15640) Support to set label and node selector

2020-01-18 Thread hippo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018802#comment-17018802
 ] 

hippo edited comment on FLINK-15640 at 1/19/20 5:47 AM:


Could you assign this ticket to me?

[~fly_in_gis]


was (Author: hippo):
Could you assign https://issues.apache.org/jira/browse/FLINK-15640 to me?

[~fly_in_gis]

> Support to set label and node selector
> --
>
> Key: FLINK-15640
> URL: https://issues.apache.org/jira/browse/FLINK-15640
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> Navigate to [Kubernetes 
> doc|https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/]
>  for more information.
> {code:java}
> public static final ConfigOption JOB_MANAGER_USER_LABELS =
>  key("kubernetes.jobmanager.user.labels")
>   .noDefaultValue()
>   .withDescription("The labels to be set for JobManager replica controller, 
> service and pods. " +
>"Specified as key:value pairs separated by commas. such as 
> version:alphav1,deploy:test.");
> public static final ConfigOption TASK_MANAGER_USER_LABELS =
>  key("kubernetes.taskmanager.user.labels")
>   .noDefaultValue()
>   .withDescription("The labels to be set for TaskManager pods. " +
>"Specified as key:value pairs separated by commas. such as 
> version:alphav1,deploy:test.");
> public static final ConfigOption JOB_MANAGER_NODE_SELECTOR =
>  key("kubernetes.jobmanager.node-selector")
>   .noDefaultValue()
>   .withDescription("The node-selector to be set for JobManager pod. " +
>"Specified as key:value pairs separated by commas. such as 
> environment:dev,tier:frontend.");
> public static final ConfigOption TASK_MANAGER_NODE_SELECTOR =
>  key("kubernetes.taskmanager.node-selector")
>   .noDefaultValue()
>   .withDescription("The node-selector to be set for TaskManager pods. " +
>"Specified as key:value pairs separated by commas. such as 
> environment:dev,tier:frontend.");
> {code}
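The proposed options encode labels and node selectors as comma-separated `key:value` pairs. A minimal sketch of how such a string could be parsed into a map follows; the `LabelParser` class and `parseLabels` helper are invented for illustration and are not Flink's actual parsing code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LabelParser {
    // Parses a spec like "version:alphav1,deploy:test" into an ordered map.
    // Hypothetical helper for illustration; not the actual Flink parser.
    static Map<String, String> parseLabels(String spec) {
        Map<String, String> labels = new LinkedHashMap<>();
        if (spec == null || spec.trim().isEmpty()) {
            return labels;
        }
        for (String pair : spec.split(",")) {
            // Split on the first ':' only, so values may contain colons.
            String[] kv = pair.split(":", 2);
            if (kv.length == 2) {
                labels.put(kv[0].trim(), kv[1].trim());
            }
        }
        return labels;
    }

    public static void main(String[] args) {
        Map<String, String> labels = parseLabels("version:alphav1,deploy:test");
        System.out.println(labels); // prints {version=alphav1, deploy=test}
    }
}
```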





[jira] [Commented] (FLINK-15640) Support to set label and node selector

2020-01-18 Thread hippo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018802#comment-17018802
 ] 

hippo commented on FLINK-15640:
---

Could you assign https://issues.apache.org/jira/browse/FLINK-15640 to me?

[~fly_in_gis]

> Support to set label and node selector
> --
>
> Key: FLINK-15640
> URL: https://issues.apache.org/jira/browse/FLINK-15640
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> Navigate to [Kubernetes 
> doc|https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/]
>  for more information.
> {code:java}
> public static final ConfigOption JOB_MANAGER_USER_LABELS =
>  key("kubernetes.jobmanager.user.labels")
>   .noDefaultValue()
>   .withDescription("The labels to be set for JobManager replica controller, 
> service and pods. " +
>"Specified as key:value pairs separated by commas. such as 
> version:alphav1,deploy:test.");
> public static final ConfigOption TASK_MANAGER_USER_LABELS =
>  key("kubernetes.taskmanager.user.labels")
>   .noDefaultValue()
>   .withDescription("The labels to be set for TaskManager pods. " +
>"Specified as key:value pairs separated by commas. such as 
> version:alphav1,deploy:test.");
> public static final ConfigOption JOB_MANAGER_NODE_SELECTOR =
>  key("kubernetes.jobmanager.node-selector")
>   .noDefaultValue()
>   .withDescription("The node-selector to be set for JobManager pod. " +
>"Specified as key:value pairs separated by commas. such as 
> environment:dev,tier:frontend.");
> public static final ConfigOption TASK_MANAGER_NODE_SELECTOR =
>  key("kubernetes.taskmanager.node-selector")
>   .noDefaultValue()
>   .withDescription("The node-selector to be set for TaskManager pods. " +
>"Specified as key:value pairs separated by commas. such as 
> environment:dev,tier:frontend.");
> {code}





[jira] [Comment Edited] (FLINK-15654) Expose podIP for Containers by Environment Variables

2020-01-18 Thread duchen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018795#comment-17018795
 ] 

duchen edited comment on FLINK-15654 at 1/19/20 5:47 AM:
-

Could you assign this ticket to me? [~felixzheng]


was (Author: duchen):
Could you assign this ticket to me?

> Expose podIP for Containers by Environment Variables
> 
>
> Key: FLINK-15654
> URL: https://issues.apache.org/jira/browse/FLINK-15654
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Priority: Major
>
> Expose IP information of a Pod for its containers to use.
>  
> {code:java}
> new ContainerBuilder()
> .addNewEnv()
>   .withName(ENV_JOBMANAGER_BIND_ADDRESS)
>   .withValueFrom(new EnvVarSourceBuilder()
> .withNewFieldRef("v1", "status.podIP")
> .build())
>   .endEnv()
> {code}
>  
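Inside the container, a process could then resolve its bind address from the injected environment variable, with a fallback when it is absent. The sketch below is hypothetical: the snippet above references the variable only via the `ENV_JOBMANAGER_BIND_ADDRESS` constant, whose concrete string value is not shown, so the name used here is an assumption:

```java
import java.util.Map;

public class BindAddressResolver {
    // Assumed env-var name; the snippet above refers to it only through the
    // ENV_JOBMANAGER_BIND_ADDRESS constant, whose actual value is not shown.
    static final String BIND_ADDRESS_ENV = "JOBMANAGER_BIND_ADDRESS";

    // Resolves the bind address from the given environment, falling back
    // to a default when the pod IP was not injected.
    static String resolveBindAddress(Map<String, String> env, String fallback) {
        String podIp = env.get(BIND_ADDRESS_ENV);
        return (podIp == null || podIp.isEmpty()) ? fallback : podIp;
    }

    public static void main(String[] args) {
        // In a real container, System.getenv() would contain the injected pod IP.
        String address = resolveBindAddress(System.getenv(), "0.0.0.0");
        System.out.println("Binding JobManager to " + address);
    }
}
```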





[jira] [Comment Edited] (FLINK-15642) Support to set JobManager liveness check

2020-01-18 Thread duchen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018793#comment-17018793
 ] 

duchen edited comment on FLINK-15642 at 1/19/20 5:46 AM:
-

Could you assign https://issues.apache.org/jira/browse/FLINK-15642 to me? I'm 
very interested!

[~fly_in_gis]


was (Author: duchen):
Could you assign https://issues.apache.org/jira/browse/FLINK-15642 to me? I'm 
very interested!

 

> Support to set JobManager liveness check
> 
>
> Key: FLINK-15642
> URL: https://issues.apache.org/jira/browse/FLINK-15642
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> The liveness of a TaskManager is controlled by the Flink Master: when it 
> fails or times out, a new pod is started to replace it. We need to add a 
> liveness check for the JobManager.
>  
> It is just like what we could do in the YAML:
> {code:java}
> ...
> livenessProbe:
>   tcpSocket:
> port: 6123
>   initialDelaySeconds: 30
>   periodSeconds: 60
> ...{code}
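A `tcpSocket` probe only checks that the port accepts TCP connections. The equivalent check can be sketched in plain Java; port 6123 comes from the YAML above, while the class and method names here are illustrative (this mirrors what the kubelet does, it is not Flink code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpLivenessCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs,
    // mirroring what a Kubernetes tcpSocket liveness probe verifies.
    static boolean isAlive(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 6123 is the JobManager RPC port from the YAML snippet above.
        System.out.println("JobManager alive: " + isAlive("localhost", 6123, 1000));
    }
}
```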





[GitHub] [flink] flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list for Hive built-in functions

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list 
for Hive built-in functions
URL: https://github.com/apache/flink/pull/10894#issuecomment-575966596
 
 
   
   ## CI report:
   
   * a72e031a0f93df6cf782968234f59a9ba0c40821 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145094605) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4459)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10896: [FLINK-15631][table-planner-blink] Fix equals code generation for raw and timestamp type

2020-01-18 Thread GitBox
flinkbot commented on issue #10896: [FLINK-15631][table-planner-blink] Fix 
equals code generation for raw and timestamp type
URL: https://github.com/apache/flink/pull/10896#issuecomment-575969660
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5e14a59953462a613d2f736450654cdf5023b875 (Sun Jan 19 
05:43:33 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15631).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15631) Cannot use generic types as the result of an AggregateFunction in Blink planner

2020-01-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15631:
---
Labels: pull-request-available  (was: )

> Cannot use generic types as the result of an AggregateFunction in Blink 
> planner
> ---
>
> Key: FLINK-15631
> URL: https://issues.apache.org/jira/browse/FLINK-15631
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> It is not possible to use a GenericTypeInfo for a result type of an 
> {{AggregateFunction}} in a retract mode with state cleaning disabled.
> {code}
>   @Test
>   def testGenericTypes(): Unit = {
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val setting = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
> val tEnv = StreamTableEnvironment.create(env, setting)
> val t = env.fromElements(1, 2, 3).toTable(tEnv, 'a)
> val results = t
>   .select(new GenericAggregateFunction()('a))
>   .toRetractStream[Row]
> val sink = new TestingRetractSink
> results.addSink(sink).setParallelism(1)
> env.execute()
>   }
> class RandomClass(var i: Int)
> class GenericAggregateFunction extends AggregateFunction[java.lang.Integer, 
> RandomClass] {
>   override def getValue(accumulator: RandomClass): java.lang.Integer = 
> accumulator.i
>   override def createAccumulator(): RandomClass = new RandomClass(0)
>   override def getResultType: TypeInformation[java.lang.Integer] = new 
> GenericTypeInfo[Integer](classOf[Integer])
>   override def getAccumulatorType: TypeInformation[RandomClass] = new 
> GenericTypeInfo[RandomClass](
> classOf[RandomClass])
>   def accumulate(acc: RandomClass, value: Int): Unit = {
> acc.i = value
>   }
>   def retract(acc: RandomClass, value: Int): Unit = {
> acc.i = value
>   }
>   def resetAccumulator(acc: RandomClass): Unit = {
> acc.i = 0
>   }
> }
> {code}
> The code above fails with:
> {code}
> Caused by: java.lang.UnsupportedOperationException: BinaryGeneric cannot be 
> compared
>   at 
> org.apache.flink.table.dataformat.BinaryGeneric.equals(BinaryGeneric.java:77)
>   at GroupAggValueEqualiser$17.equalsWithoutHeader(Unknown Source)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:177)
>   at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>   at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:170)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> This is related to FLINK-13702





[GitHub] [flink] flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve 
the environment requirement documentation of the Python API.
URL: https://github.com/apache/flink/pull/10884#issuecomment-575556115
 
 
   
   ## CI report:
   
   * 6656e341e1fc6759bcdc547daad7252a85913404 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/144906427) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4430)
 
   * 4c85362fa2e7f102ca152d00bfa7ca4674a90858 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144911942) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4434)
 
   * 1fd3dd87276855f8ec05eba6f6d2f88fc3b22f8f Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/145095505) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4460)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] JingsongLi opened a new pull request #10896: [FLINK-15631][table-planner-blink] Fix equals code generation for raw and timestamp type

2020-01-18 Thread GitBox
JingsongLi opened a new pull request #10896: [FLINK-15631][table-planner-blink] 
Fix equals code generation for raw and timestamp type
URL: https://github.com/apache/flink/pull/10896
 
 
   
   ## What is the purpose of the change
   
   - equals does not work for generic types
   - with idle state cleaning disabled, aggregation does not work for generic 
and timestamp types
   
   ## Brief change log
   
   - Fix equals code generation for raw type in ScalarOperatorGens
   - Fix raw and timestamp type in EqualiserCodeGenerator
   
   ## Verifying this change
   
   - ScalarFunctionsTest.testEquality
   - EqualiserCodeGeneratorTest
   - AggregateITCase.testGenericTypesWithoutStateClean
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no




[GitHub] [flink] flinkbot edited a comment on issue #10815: [FLINK-15537][table-planner-blink] Type of keys should be `BinaryRow`…

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10815: [FLINK-15537][table-planner-blink] 
Type of keys should be `BinaryRow`…
URL: https://github.com/apache/flink/pull/10815#issuecomment-572566376
 
 
   
   ## CI report:
   
   * 19a4290f709495491fe460037c8c31d106984ea8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143732723) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4229)
 
   * c3ef5ea345a343170806de8112163edb7df31f69 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144110200) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4284)
 
   * 941a5d4725dee3317ca05f8ab16eb103f61d3fcb Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144255612) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4312)
 
   * c60878a8c878ddfd03bade488d688068d30bd1d5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145094603) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4458)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15350) develop JDBC catalogs to connect to relational databases

2020-01-18 Thread wgcn (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018800#comment-17018800
 ] 

wgcn commented on FLINK-15350:
--

Hi [~phoenixjiangnan], I would like to know in which scenarios the JDBC catalog 
will apply in stream/batch mode.

> develop JDBC catalogs to connect to relational databases
> 
>
> Key: FLINK-15350
> URL: https://issues.apache.org/jira/browse/FLINK-15350
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>
> Introduce AbstractJDBCCatalog and a set of JDBC catalog implementations to 
> connect Flink to all relational databases.
> Class hierarchy:
> {code:java}
> Catalog API 
> |
> AbstractJDBCCatalog
> |
> PostgresJDBCCatalog, MySqlJDBCCatalog, OracleJDBCCatalog, ...
>  
> {code}
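The hierarchy above can be sketched as a small class tree where the abstract base holds the shared connection details and each dialect supplies its specifics. This is a hypothetical illustration only; the field, method, and constructor signatures are invented and are not the eventual Flink API:

```java
// Hypothetical sketch of the proposed catalog hierarchy; names are illustrative.
abstract class AbstractJDBCCatalog {
    protected final String baseUrl;

    protected AbstractJDBCCatalog(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Each dialect supplies its own JDBC driver class.
    abstract String driverClassName();
}

class PostgresJDBCCatalog extends AbstractJDBCCatalog {
    PostgresJDBCCatalog(String baseUrl) {
        super(baseUrl);
    }

    @Override
    String driverClassName() {
        return "org.postgresql.Driver";
    }
}

public class CatalogHierarchyDemo {
    public static void main(String[] args) {
        AbstractJDBCCatalog catalog =
                new PostgresJDBCCatalog("jdbc:postgresql://localhost:5432/mydb");
        System.out.println(catalog.driverClassName()); // prints org.postgresql.Driver
    }
}
```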





[GitHub] [flink] flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list for Hive built-in functions

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10894: [FLINK-15592][hive] Add black list 
for Hive built-in functions
URL: https://github.com/apache/flink/pull/10894#issuecomment-575966596
 
 
   
   ## CI report:
   
   * a72e031a0f93df6cf782968234f59a9ba0c40821 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145094605) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4459)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve the environment requirement documentation of the Python API.

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10884: [FLINK-15630][python][doc] Improve 
the environment requirement documentation of the Python API.
URL: https://github.com/apache/flink/pull/10884#issuecomment-575556115
 
 
   
   ## CI report:
   
   * 6656e341e1fc6759bcdc547daad7252a85913404 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/144906427) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4430)
 
   * 4c85362fa2e7f102ca152d00bfa7ca4674a90858 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144911942) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4434)
 
   * 1fd3dd87276855f8ec05eba6f6d2f88fc3b22f8f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10815: [FLINK-15537][table-planner-blink] Type of keys should be `BinaryRow`…

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10815: [FLINK-15537][table-planner-blink] 
Type of keys should be `BinaryRow`…
URL: https://github.com/apache/flink/pull/10815#issuecomment-572566376
 
 
   
   ## CI report:
   
   * 19a4290f709495491fe460037c8c31d106984ea8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143732723) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4229)
 
   * c3ef5ea345a343170806de8112163edb7df31f69 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144110200) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4284)
 
   * 941a5d4725dee3317ca05f8ab16eb103f61d3fcb Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144255612) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4312)
 
   * c60878a8c878ddfd03bade488d688068d30bd1d5 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/145094603) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4458)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-14460) Active Kubernetes integration phase2 - Advanced Features

2020-01-18 Thread duchen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018799#comment-17018799
 ] 

duchen commented on FLINK-14460:


Could you assign https://issues.apache.org/jira/browse/FLINK-15642 to me?

> Active Kubernetes integration phase2 - Advanced Features
> 
>
> Key: FLINK-14460
> URL: https://issues.apache.org/jira/browse/FLINK-14460
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> This is phase 2 of the active Kubernetes integration. It is an umbrella Jira to 
> track all the advanced features and make Flink on Kubernetes production-ready.





[jira] [Commented] (FLINK-13758) failed to submit JobGraph when registered hdfs file in DistributedCache

2020-01-18 Thread luoguohao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018798#comment-17018798
 ] 

luoguohao commented on FLINK-13758:
---

I'm OK with transferring this ticket to you if you have more time to handle 
it. [~fly_in_gis]

> failed to submit JobGraph when registered hdfs file in DistributedCache 
> 
>
> Key: FLINK-13758
> URL: https://issues.apache.org/jira/browse/FLINK-13758
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0
>Reporter: luoguohao
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When using HDFS files in the DistributedCache, submitting the JobGraph fails 
> and exception stack traces appear in the log file after a while; if the 
> DistributedCache file is a local file, everything works fine.





[jira] [Commented] (FLINK-15650) Improve the udfs documentation of the Python API

2020-01-18 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018797#comment-17018797
 ] 

Hequn Cheng commented on FLINK-15650:
-

Resolved in 
1.11.0 via ee3101a075f681501fbc8c7cc4119476d497e5f3
1.10.0 via 459705b80fd3db61524abc42faed72ec4525f568

> Improve the udfs documentation of the Python API
> 
>
> Key: FLINK-15650
> URL: https://issues.apache.org/jira/browse/FLINK-15650
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current UDFs documentation of the Python API is not very clear. It should 
> be described in more detail.





[jira] [Closed] (FLINK-15650) Improve the udfs documentation of the Python API

2020-01-18 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-15650.
---
Resolution: Resolved

> Improve the udfs documentation of the Python API
> 
>
> Key: FLINK-15650
> URL: https://issues.apache.org/jira/browse/FLINK-15650
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The current UDFs documentation of the Python API is not very clear. It should 
> be described in more detail.





[GitHub] [flink] hequn8128 merged pull request #10895: [FLINK-15650][python][doc] Improves the UDFs documentation of Python API

2020-01-18 Thread GitBox
hequn8128 merged pull request #10895: [FLINK-15650][python][doc] Improves the 
UDFs documentation of Python API
URL: https://github.com/apache/flink/pull/10895
 
 
   




[jira] [Assigned] (FLINK-15650) Improve the udfs documentation of the Python API

2020-01-18 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng reassigned FLINK-15650:
---

Assignee: Huang Xingbo

> Improve the udfs documentation of the Python API
> 
>
> Key: FLINK-15650
> URL: https://issues.apache.org/jira/browse/FLINK-15650
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The current UDFs documentation of the Python API is not very clear. It should 
> be described in more detail.





[jira] [Updated] (FLINK-15654) Expose podIP for Containers by Environment Variables

2020-01-18 Thread Canbin Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Canbin Zheng updated FLINK-15654:
-
Environment: (was: {code:java}
// code placeholder
{code})

> Expose podIP for Containers by Environment Variables
> 
>
> Key: FLINK-15654
> URL: https://issues.apache.org/jira/browse/FLINK-15654
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Priority: Major
>
> Expose IP information of a Pod for its containers to use.
>  
> {code:java}
> new ContainerBuilder()
> .addNewEnv()
>   .withName(ENV_JOBMANAGER_BIND_ADDRESS)
>   .withValueFrom(new EnvVarSourceBuilder()
> .withNewFieldRef("v1", "status.podIP")
> .build())
>   .endEnv()
> {code}
>  
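Once the pod IP is injected as an environment variable, a container process could pick it up at startup. A minimal hypothetical sketch follows; the environment-variable name and the wildcard fallback are assumptions for illustration, not actual Flink code.

```java
// Hypothetical helper: prefer the pod IP injected via the Kubernetes
// downward API (as in the snippet above); fall back to the wildcard
// address when the variable is absent (e.g. when running outside
// Kubernetes).
public class BindAddressResolver {
    static String resolveBindAddress(String podIp) {
        return (podIp != null && !podIp.isEmpty()) ? podIp : "0.0.0.0";
    }

    public static void main(String[] args) {
        // The env var name is assumed for illustration only.
        System.out.println(
                resolveBindAddress(System.getenv("JOBMANAGER_BIND_ADDRESS")));
    }
}
```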





[jira] [Commented] (FLINK-15654) Expose podIP for Containers by Environment Variables

2020-01-18 Thread duchen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018795#comment-17018795
 ] 

duchen commented on FLINK-15654:


Could you assign this ticket to me?

> Expose podIP for Containers by Environment Variables
> 
>
> Key: FLINK-15654
> URL: https://issues.apache.org/jira/browse/FLINK-15654
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
> Environment: {code:java}
> // code placeholder
> {code}
>Reporter: Canbin Zheng
>Priority: Major
>
> Expose IP information of a Pod for its containers to use.
>  
> {code:java}
> new ContainerBuilder()
> .addNewEnv()
>   .withName(ENV_JOBMANAGER_BIND_ADDRESS)
>   .withValueFrom(new EnvVarSourceBuilder()
> .withNewFieldRef("v1", "status.podIP")
> .build())
>   .endEnv()
> {code}
>  





[GitHub] [flink] flinkbot commented on issue #10894: [FLINK-15592][hive] Add black list for Hive built-in functions

2020-01-18 Thread GitBox
flinkbot commented on issue #10894: [FLINK-15592][hive] Add black list for Hive 
built-in functions
URL: https://github.com/apache/flink/pull/10894#issuecomment-575966596
 
 
   
   ## CI report:
   
   * a72e031a0f93df6cf782968234f59a9ba0c40821 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10893: [FLINK-15637][state backends] Make RocksDB the default store for timers when using RocksDBStateBackend

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10893: [FLINK-15637][state backends] Make 
RocksDB the default store for timers when using RocksDBStateBackend
URL: https://github.com/apache/flink/pull/10893#issuecomment-575859797
 
 
   
   ## CI report:
   
   * 2a07b12e4f943499075f42960fc84dd445df6f3c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145041173) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4454)
 
   * d644b42865da92de5b84b807f373f53adbd7945b UNKNOWN
   * 3c71f512cb2a8adfdfd094d794085cf883b39de9 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145091925) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4456)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10895: [FLINK-15650][python][doc] Improve the udfs documentation of the Python API

2020-01-18 Thread GitBox
flinkbot commented on issue #10895: [FLINK-15650][python][doc] Improve the udfs 
documentation of the Python API
URL: https://github.com/apache/flink/pull/10895#issuecomment-575966561
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit dc8f2bac3c2b41de413a2634de7f755968b71ad3 (Sun Jan 19 
04:38:01 UTC 2020)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15650).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #10879: [FLINK-15595][table] Remove CoreMudule in ModuleManager

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10879: [FLINK-15595][table] Remove 
CoreMudule in ModuleManager
URL: https://github.com/apache/flink/pull/10879#issuecomment-575502614
 
 
   
   ## CI report:
   
   * dea387f141fd1d91c2082e39d9a10bab4890747d Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144889679) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4421)
 
   * add34123c391041252f5255e4e1c99f13b85f842 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145091913) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4455)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] lirui-apache commented on a change in pull request #10879: [FLINK-15595][table] Remove CoreMudule in ModuleManager

2020-01-18 Thread GitBox
lirui-apache commented on a change in pull request #10879: [FLINK-15595][table] 
Remove CoreMudule in ModuleManager
URL: https://github.com/apache/flink/pull/10879#discussion_r368265163
 
 

 ##
 File path: docs/dev/table/functions/index.md
 ##
 @@ -84,11 +84,13 @@ The resolution order is:
 1. Temporary catalog function
 2. Catalog function
 
-## Ambiguous Function Reference
+### Ambiguous Function Reference
 
 The resolution order is:
 
 1. Temporary system function
-2. System function
-3. Temporary catalog function, in the current catalog and current database of 
the session
-4. Catalog function, in the current catalog and current database of the session
+2. Temporary catalog function, in the current catalog and current database of 
the session
 
 Review comment:
   I'm also wary about this change. IMHO function resolution order change can 
break user programs, so we probably need to have a discussion on the mailing 
list. Besides, I don't see how Module plays its part in the resolution. Is it 
treated as the `System function`?




[GitHub] [flink] flinkbot edited a comment on issue #10858: [FLINK-15582][tests] Enable batch scheduling tests for both LegacyScheduler and DefaultScheduler

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10858: [FLINK-15582][tests] Enable batch 
scheduling tests for both LegacyScheduler and DefaultScheduler
URL: https://github.com/apache/flink/pull/10858#issuecomment-574561465
 
 
   
   ## CI report:
   
   * cf716ecc2f23aa89683be8096f16725b1f1f8d26 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144472558) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4359)
 
   * a0f5bacf182ee5fc8711e18d2c47edaffa8d1948 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144666495) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4383)
 
   * 9414cbf1065aad51211ed0b91da5adc4b2a597e3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144902055) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4425)
 
   * 4ef73203456ba0bbf55eab48b8727f248b12dbd6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145092807) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4457)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15650) Improve the udfs documentation of the Python API

2020-01-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15650:
---
Labels: pull-request-available  (was: )

> Improve the udfs documentation of the Python API
> 
>
> Key: FLINK-15650
> URL: https://issues.apache.org/jira/browse/FLINK-15650
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> The current UDFs documentation of the Python API is not very clear. It should 
> be described in more detail.





[GitHub] [flink] HuangXingBo opened a new pull request #10895: [FLINK-15650][python][doc] Improve the udfs documentation of the Python API

2020-01-18 Thread GitBox
HuangXingBo opened a new pull request #10895: [FLINK-15650][python][doc] 
Improve the udfs documentation of the Python API
URL: https://github.com/apache/flink/pull/10895
 
 
   ## What is the purpose of the change
   
   *This pull request improves the udfs documentation of the Python API.*
   
   ## Brief change log
   
- *Improve the UDF demo and the dependency APIs table in `udfs.md`*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (docs)
   




[GitHub] [flink] flinkbot edited a comment on issue #10815: [FLINK-15537][table-planner-blink] Type of keys should be `BinaryRow`…

2020-01-18 Thread GitBox
flinkbot edited a comment on issue #10815: [FLINK-15537][table-planner-blink] 
Type of keys should be `BinaryRow`…
URL: https://github.com/apache/flink/pull/10815#issuecomment-572566376
 
 
   
   ## CI report:
   
   * 19a4290f709495491fe460037c8c31d106984ea8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143732723) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4229)
 
   * c3ef5ea345a343170806de8112163edb7df31f69 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144110200) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4284)
 
   * 941a5d4725dee3317ca05f8ab16eb103f61d3fcb Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144255612) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4312)
 
   * c60878a8c878ddfd03bade488d688068d30bd1d5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15642) Support to set JobManager liveness check

2020-01-18 Thread duchen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018793#comment-17018793
 ] 

duchen commented on FLINK-15642:


Could you assign https://issues.apache.org/jira/browse/FLINK-15642 to me? I'm 
very interested!

 

> Support to set JobManager liveness check
> 
>
> Key: FLINK-15642
> URL: https://issues.apache.org/jira/browse/FLINK-15642
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Yang Wang
>Priority: Major
>
> The liveness of the TaskManager is controlled by the Flink Master: when it 
> fails or times out, a new pod is started to replace it. We need to add a 
> similar liveness check for the JobManager.
>  
> It just like what we could do in the yaml.
> {code:java}
> ...
> livenessProbe:
>   tcpSocket:
> port: 6123
>   initialDelaySeconds: 30
>   periodSeconds: 60
> ...{code}
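A {{tcpSocket}} probe is essentially a TCP connect attempt against the given port. A minimal sketch of the equivalent check in plain Java follows; it is a hypothetical helper for illustration, not Kubernetes or Flink code.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpLivenessCheck {
    // Returns true if a TCP connection to host:port succeeds within the
    // timeout, mirroring what the kubelet's tcpSocket probe does.
    static boolean isAlive(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Probe the JobManager RPC port from the yaml above.
        System.out.println(isAlive("localhost", 6123, 1000));
    }
}
```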





[jira] [Commented] (FLINK-15447) Change "java.io.tmpdir" of JM/TM on Yarn to "{{PWD}}/tmp"

2020-01-18 Thread Victor Wong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018792#comment-17018792
 ] 

Victor Wong commented on FLINK-15447:
-

[~fly_in_gis] the owner of the working directory of Yarn container should be 
the same as the owner of the JM/TM process, so I think there should be no 
permission issue, right?

> Change "java.io.tmpdir"  of JM/TM on Yarn to "{{PWD}}/tmp" 
> ---
>
> Key: FLINK-15447
> URL: https://issues.apache.org/jira/browse/FLINK-15447
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Priority: Major
>
> Currently, when running Flink on Yarn, the "java.io.tmpdir" property is set 
> to the default value, which is "/tmp". 
>  
> Sometimes we ran into exceptions caused by a full "/tmp" directory, which 
> would not be cleaned automatically after applications finished.
> I think we can set "java.io.tmpdir" to "{{PWD}}/tmp" directory, or 
> something similar. "{{PWD}}" will be replaced with the true working 
> directory of JM/TM by Yarn, which will be cleaned automatically.
>  





[jira] [Comment Edited] (FLINK-15447) Change "java.io.tmpdir" of JM/TM on Yarn to "{{PWD}}/tmp"

2020-01-18 Thread Victor Wong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018790#comment-17018790
 ] 

Victor Wong edited comment on FLINK-15447 at 1/19/20 4:24 AM:
--

[~rongr] for me, #2 is closest to the main concern. We do not want to share the 
tmp directory with others, not for fine-grained control over disk resources, but 
because a shared location is very prone to filling the disk.

There are some good points in HADOOP-2735:

_Can we add -Djava.io.tmpdir="./tmp" somewhere ?_
 _so that,_
 _1) Tasks can utilize all disks when using tmp_
 _2) Any undeleted tmp files will be deleted by the tasktracker when task(job?) 
is done._


was (Author: victor-wong):
[~rongr] for me #2 is most close to the main concern, but we do not want to 
share with others not for fine-grain control on disk resource, but for a shared 
location is very prone to be disk full.

There are some good points in HADOOP-2735:

_Can we add -Djava.io.tmpdir="./tmp" somewhere ?_
 _so that,_
 _1) Tasks can utilize all disks when using tmp_
 _2) Any undeleted tmp files will be deleted by the tasktracker when task(job?) 
is done._

> Change "java.io.tmpdir"  of JM/TM on Yarn to "{{PWD}}/tmp" 
> ---
>
> Key: FLINK-15447
> URL: https://issues.apache.org/jira/browse/FLINK-15447
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Priority: Major
>
> Currently, when running Flink on Yarn, the "java.io.tmpdir" property is set 
> to the default value, which is "/tmp". 
>  
> Sometimes we ran into exceptions caused by a full "/tmp" directory, which 
> would not be cleaned automatically after applications finished.
> I think we can set "java.io.tmpdir" to "{{PWD}}/tmp" directory, or 
> something similar. "{{PWD}}" will be replaced with the true working 
> directory of JM/TM by Yarn, which will be cleaned automatically.
>  





[jira] [Comment Edited] (FLINK-15447) Change "java.io.tmpdir" of JM/TM on Yarn to "{{PWD}}/tmp"

2020-01-18 Thread Victor Wong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018790#comment-17018790
 ] 

Victor Wong edited comment on FLINK-15447 at 1/19/20 4:23 AM:
--

[~rongr] for me #2 is most close to the main concern, but we do not want to 
share with others not for fine-grain control on disk resource, but for a shared 
location is very prone to be disk full.

There are some good points in HADOOP-2735:

_Can we add -Djava.io.tmpdir="./tmp" somewhere ?_
 _so that,_
 _1) Tasks can utilize all disks when using tmp_
 _2) Any undeleted tmp files will be deleted by the tasktracker when task(job?) 
is done._


was (Author: victor-wong):
[~rongr] for me #2 is most close to the main concern, but we do not want to 
share with others not for fine-grain control on disk resource, but for a shared 
location is very prone to be disk full.

There are some good points in 
[HADOOP-2735|https://issues.apache.org/jira/browse/HADOOP-2735]:

_Can we add -Djava.io.tmpdir="./tmp" somewhere ?
so that,
1) Tasks can utilize all disks when using tmp
2) Any undeleted tmp files will be deleted by the tasktracker when task(job?) 
is done._

> Change "java.io.tmpdir"  of JM/TM on Yarn to "{{PWD}}/tmp" 
> ---
>
> Key: FLINK-15447
> URL: https://issues.apache.org/jira/browse/FLINK-15447
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Priority: Major
>
> Currently, when running Flink on Yarn, the "java.io.tmpdir" property is set 
> to the default value, which is "/tmp". 
>  
> Sometimes we ran into exceptions caused by a full "/tmp" directory, which 
> would not be cleaned automatically after applications finished.
> I think we can set "java.io.tmpdir" to "{{PWD}}/tmp" directory, or 
> something similar. "{{PWD}}" will be replaced with the true working 
> directory of JM/TM by Yarn, which will be cleaned automatically.
>  





[jira] [Commented] (FLINK-15447) Change "java.io.tmpdir" of JM/TM on Yarn to "{{PWD}}/tmp"

2020-01-18 Thread Victor Wong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17018790#comment-17018790
 ] 

Victor Wong commented on FLINK-15447:
-

[~rongr] for me, #2 is closest to the main concern. We do not want to share the 
tmp directory with others, not for fine-grained control over disk resources, but 
because a shared location is very prone to filling the disk.

There are some good points in 
[HADOOP-2735|https://issues.apache.org/jira/browse/HADOOP-2735]:

_Can we add -Djava.io.tmpdir="./tmp" somewhere ?
so that,
1) Tasks can utilize all disks when using tmp
2) Any undeleted tmp files will be deleted by the tasktracker when task(job?) 
is done._
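The effect of pointing temp files at a job-local directory can be sketched in plain Java. The helper below is illustrative only: it creates the scratch file in an explicit directory rather than relying on the java.io.tmpdir property (which the JVM reads once at first use), but it mirrors the same per-container layout the ticket proposes.

```java
import java.io.File;
import java.io.IOException;

public class JobLocalTmpDemo {
    // Create a scratch file under <workDir>/tmp instead of the shared /tmp,
    // mirroring what -Djava.io.tmpdir={{PWD}}/tmp would achieve process-wide
    // on Yarn. The directory disappears with the container's working dir.
    static File createScratchFile(File workDir) throws IOException {
        File tmp = new File(workDir, "tmp");
        tmp.mkdirs();                               // ensure <workDir>/tmp exists
        File scratch = File.createTempFile("flink-", ".scratch", tmp);
        scratch.deleteOnExit();
        return scratch;
    }

    public static void main(String[] args) throws IOException {
        File f = createScratchFile(new File("."));
        System.out.println(f.getParentFile().getName());
    }
}
```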

> Change "java.io.tmpdir"  of JM/TM on Yarn to "{{PWD}}/tmp" 
> ---
>
> Key: FLINK-15447
> URL: https://issues.apache.org/jira/browse/FLINK-15447
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Priority: Major
>
> Currently, when running Flink on Yarn, the "java.io.tmpdir" property is set 
> to the default value, which is "/tmp". 
>  
> Sometimes we ran into exceptions caused by a full "/tmp" directory, which 
> would not be cleaned automatically after applications finished.
> I think we can set "java.io.tmpdir" to "PWD/tmp" directory, or 
> something similar. "PWD" will be replaced with the true working 
> directory of JM/TM by Yarn, which will be cleaned automatically.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10894: [FLINK-15592][hive] Add black list for Hive built-in functions

2020-01-18 Thread GitBox
flinkbot commented on issue #10894: [FLINK-15592][hive] Add black list for Hive 
built-in functions
URL: https://github.com/apache/flink/pull/10894#issuecomment-575965721
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a72e031a0f93df6cf782968234f59a9ba0c40821 (Sun Jan 19 
04:19:25 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

