[jira] [Commented] (FLINK-18366) Track E2E test durations centrally

2020-09-14 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195888#comment-17195888
 ] 

Robert Metzger commented on FLINK-18366:


Slow run: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6494&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529

> Track E2E test durations centrally
> --
>
> Key: FLINK-18366
> URL: https://issues.apache.org/jira/browse/FLINK-18366
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>
> Every now and then, our E2E tests start timing out (see FLINK-16795), because 
> they hit the currently configured time-limit.
> To better understand the expected E2E time, and to catch potential performance 
> regressions, we should track the test execution duration centrally.
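One way the duration data could be collected is straight from the JUnit/surefire XML reports before shipping it to a central store. A minimal, hedged sketch (the class name, report format, and regex are assumptions, not Flink's actual tooling):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract the per-test "time" attribute from a
// surefire-style XML report line so durations can be aggregated centrally.
final class E2eDurationParser {
    private static final Pattern TIME_ATTR = Pattern.compile("time=\"([0-9.]+)\"");

    static double parseDurationSeconds(String reportLine) {
        Matcher m = TIME_ATTR.matcher(reportLine);
        return m.find() ? Double.parseDouble(m.group(1)) : 0.0;
    }
}
```

Aggregating these values per test run would make regressions like the slow build linked above visible as a trend rather than a timeout.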



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13389: [FLINK-19158][e2e] Increase download timeouts

2020-09-14 Thread GitBox


flinkbot commented on pull request #13389:
URL: https://github.com/apache/flink/pull/13389#issuecomment-692483165


   
   ## CI report:
   
   * c6fd8544c7108eb05c21a24f4b897193219187a9 UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13387: [Hotfix][typos]Modify debug log to print the compiling code

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13387:
URL: https://github.com/apache/flink/pull/13387#issuecomment-692407926


   
   ## CI report:
   
   * 5ae3077ffa0d5f79358008be1515a7985706c062 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6505)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13390: [FLINK-19037] Forward ioExecutor from ClusterEntrypoint to Dispatcher

2020-09-14 Thread GitBox


flinkbot commented on pull request #13390:
URL: https://github.com/apache/flink/pull/13390#issuecomment-692482079


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit fd16dde2164920786908bee57d113c918fc3abca (Tue Sep 15 
05:54:27 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] rmetzger opened a new pull request #13390: [FLINK-19037] Forward ioExecutor from ClusterEntrypoint to Dispatcher

2020-09-14 Thread GitBox


rmetzger opened a new pull request #13390:
URL: https://github.com/apache/flink/pull/13390


   ## What is the purpose of the change
   
   Before this change, the dispatcher was using the executor of the RPC service 
for executing IO or heavy tasks.
   Now, we forward the ioExecutor of the ClusterEntrypoint to the Dispatcher.
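A minimal sketch of the idea (class and method names here are hypothetical, not the actual Flink code): the dispatcher-like component receives a dedicated executor from its entrypoint and schedules heavy IO there, leaving the RPC threads free.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: the IO executor is forwarded in, not borrowed from
// the RPC service, so blocking work never occupies RPC threads.
class DispatcherSketch {
    private final Executor ioExecutor;

    DispatcherSketch(Executor ioExecutor) {
        this.ioExecutor = ioExecutor; // forwarded from the entrypoint
    }

    CompletableFuture<String> loadJobGraph(String jobId) {
        // heavy or blocking IO runs on the dedicated pool
        return CompletableFuture.supplyAsync(() -> "graph-of-" + jobId, ioExecutor);
    }
}
```

The design benefit is isolation: a slow disk or network call slows only the IO pool, not RPC message dispatch.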
   
   
   







[jira] [Updated] (FLINK-19037) Introduce proper IO executor in Dispatcher

2020-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19037:
---
Labels: pull-request-available  (was: )

> Introduce proper IO executor in Dispatcher
> --
>
> Key: FLINK-19037
> URL: https://issues.apache.org/jira/browse/FLINK-19037
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, IO operations in the {{Dispatcher}} are scheduled on the 
> {{rpcService.getExecutor()}}.
> We should introduce a separate executor for IO operations.





[GitHub] [flink] flinkbot commented on pull request #13389: [FLINK-19158][e2e] Increase download timeouts

2020-09-14 Thread GitBox


flinkbot commented on pull request #13389:
URL: https://github.com/apache/flink/pull/13389#issuecomment-692480858


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c6fd8544c7108eb05c21a24f4b897193219187a9 (Tue Sep 15 
05:51:06 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-19158) Revisit java e2e download timeouts

2020-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19158:
---
Labels: pull-request-available test-stability  (was: test-stability)

> Revisit java e2e download timeouts
> --
>
> Key: FLINK-19158
> URL: https://issues.apache.org/jira/browse/FLINK-19158
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> Consider this failed test case
> {code}
> Test testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) is 
> running.
> 
> 09:38:38,719 [main] INFO  
> org.apache.flink.tests.util.cache.PersistingDownloadCache[] - Downloading 
> https://archive.apache.org/dist/hbase/1.4.3/hbase-1.4.3-bin.tar.gz.
> 09:40:38,732 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) failed 
> with:
> java.io.IOException: Process ([wget, -q, -P, 
> /home/vsts/work/1/e2e_cache/downloads/1598516010, 
> https://archive.apache.org/dist/hbase/1.4.3/hbase-1.4.3-bin.tar.gz]) exceeded 
> timeout (12) or number of retries (3).
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:148)
>   at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127)
>   at 
> org.apache.flink.tests.util.cache.PersistingDownloadCache.getOrDownload(PersistingDownloadCache.java:36)
>   at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.setupHBaseDist(LocalStandaloneHBaseResource.java:76)
>   at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.before(LocalStandaloneHBaseResource.java:70)
>   at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46)
> {code}
> It seems that the download has not been retried; the download might be stuck. 
> I would propose setting a timeout per try and increasing the total time from 2 
> to 5 minutes.
> This example is from: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6267&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529
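A per-try timeout as proposed above could look roughly like this (hedged sketch; Flink's actual `AutoClosableProcess` retry utility differs in detail). Each attempt gets its own deadline, so one stuck download cannot consume the whole retry budget:

```java
import java.time.Duration;
import java.util.concurrent.*;
import java.util.function.Supplier;

// Hypothetical sketch of "timeout per try": every attempt is bounded
// individually, and a timed-out attempt is cancelled before retrying.
final class RetryWithPerTryTimeout {
    static <T> T run(Supplier<T> attempt, int retries, Duration perTry) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            RuntimeException last = new RuntimeException("no attempts made");
            for (int i = 0; i < retries; i++) {
                Future<T> f = pool.submit(attempt::get);
                try {
                    return f.get(perTry.toMillis(), TimeUnit.MILLISECONDS);
                } catch (TimeoutException | InterruptedException | ExecutionException e) {
                    f.cancel(true); // a stuck attempt no longer blocks the next one
                    last = new RuntimeException("attempt " + (i + 1) + " failed", e);
                }
            }
            throw last;
        } finally {
            pool.shutdownNow();
        }
    }
}
```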





[GitHub] [flink] rmetzger opened a new pull request #13389: [FLINK-19158][e2e] Increase download timeouts

2020-09-14 Thread GitBox


rmetzger opened a new pull request #13389:
URL: https://github.com/apache/flink/pull/13389


   ## What is the purpose of the change
   
   Increase build stability: some Java e2e tests were failing due to timeout 
errors. 
   With this PR, we are more generous with the timeouts.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13225: [FLINK-18974][docs-zh]Translate the 'User-Defined Functions' page of "Application Development's DataStream API" into Chinese

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13225:
URL: https://github.com/apache/flink/pull/13225#issuecomment-678953566


   
   ## CI report:
   
   * 313c6760e7d491540ab8d782b87ee2c637d9b5ab Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6508)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] KarmaGYZ commented on a change in pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-14 Thread GitBox


KarmaGYZ commented on a change in pull request #13311:
URL: https://github.com/apache/flink/pull/13311#discussion_r488391778



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/TaskExecutorProcessSpec.java
##
@@ -141,6 +143,24 @@ public MemorySize getManagedMemorySize() {
return getFlinkMemory().getManaged();
}
 
+   @Override
+   public boolean equals(Object obj) {
+   if (obj == this) {
+   return true;
+   } else if (obj instanceof TaskExecutorProcessSpec) {

Review comment:
   I think your concern is probably valid. We could add a class-equal check 
to `CommonProcessMemorySpec#equals`.
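For illustration, a class-equal check of the kind being discussed might look like this (hedged sketch; `ProcessMemorySpec` here is a stand-in, not the real `CommonProcessMemorySpec`):

```java
// Hypothetical sketch: with a getClass() comparison in the base class,
// a subclass instance can never accidentally compare equal to a base
// instance, unlike with an instanceof check.
class ProcessMemorySpec {
    private final long managedBytes;

    ProcessMemorySpec(long managedBytes) {
        this.managedBytes = managedBytes;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == this) {
            return true;
        }
        if (obj == null || obj.getClass() != getClass()) {
            return false; // the class-equal check
        }
        return ((ProcessMemorySpec) obj).managedBytes == managedBytes;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(managedBytes);
    }
}
```

The trade-off is symmetry: `instanceof`-based equals can be asymmetric between a class and its subclass, while the `getClass()` check keeps `a.equals(b) == b.equals(a)` by construction.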









[jira] [Commented] (FLINK-17137) Support mini batch for WindowOperator in blink planner

2020-09-14 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195873#comment-17195873
 ] 

Benchao Li commented on FLINK-17137:


[~tartarus] Would you like to contribute it back to the community?

> Support mini batch for WindowOperator in blink planner
> --
>
> Key: FLINK-17137
> URL: https://issues.apache.org/jira/browse/FLINK-17137
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: Benchao Li
>Priority: Major
>
> Currently only regular aggregate and deduplicate support mini batch. 
> WindowOperator is a very frequently used operator in Flink, it's very helpful 
> to support mini batch for it.
> Design document:  
> https://docs.google.com/document/d/1GYlrg8dkYcw5fuq1HptdA3lygXrS_VRbnI8NXoEtZCg/edit?usp=sharing
> cc [~jark]





[GitHub] [flink] flinkbot edited a comment on pull request #13386: [FLINK-19226][Kinesis] Updated FullJitterBackoff defaults for describeStream and describeStreamConsumer

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13386:
URL: https://github.com/apache/flink/pull/13386#issuecomment-692379768


   
   ## CI report:
   
   * 61a8bc9d3ddb52628c042b240829ef977b1fe6f2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6504)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13225: [FLINK-18974][docs-zh]Translate the 'User-Defined Functions' page of "Application Development's DataStream API" into Chinese

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13225:
URL: https://github.com/apache/flink/pull/13225#issuecomment-678953566


   
   ## CI report:
   
   * 9d52b2bde1c0a87d20aac1d7057f231b433d353e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6481)
 
   * 313c6760e7d491540ab8d782b87ee2c637d9b5ab Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6508)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13225: [FLINK-18974][docs-zh]Translate the 'User-Defined Functions' page of "Application Development's DataStream API" into Chinese

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13225:
URL: https://github.com/apache/flink/pull/13225#issuecomment-678953566


   
   ## CI report:
   
   * 9d52b2bde1c0a87d20aac1d7057f231b433d353e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6481)
 
   * 313c6760e7d491540ab8d782b87ee2c637d9b5ab UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] KarmaGYZ commented on a change in pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-14 Thread GitBox


KarmaGYZ commented on a change in pull request #13311:
URL: https://github.com/apache/flink/pull/13311#discussion_r488366730



##
File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManagerDriver.java
##
@@ -508,52 +371,103 @@ private void 
removeContainerRequest(AMRMClient.ContainerRequest pendingContainer
return matchingContainerRequests;
}
 
-   @Override
-   public void onShutdownRequest() {
-   onFatalError(new 
ResourceManagerException(ERROR_MASSAGE_ON_SHUTDOWN_REQUEST));
-   }
+   private ContainerLaunchContext createTaskExecutorLaunchContext(
+   ResourceID containerId,
+   String host,
+   TaskExecutorProcessSpec taskExecutorProcessSpec) throws 
Exception {
 
-   @Override
-   public void onNodesUpdated(List list) {
-   // We are not interested in node updates
-   }
+   // init the ContainerLaunchContext
+   final String currDir = configuration.getCurrentDir();
 
-   @Override
-   public void onError(Throwable error) {
-   onFatalError(error);
-   }
+   final ContaineredTaskManagerParameters taskManagerParameters =
+   ContaineredTaskManagerParameters.create(flinkConfig, 
taskExecutorProcessSpec);
 
-   // 

-   //  NMClientAsync CallbackHandler methods
-   // 

-   @Override
-   public void onContainerStarted(ContainerId containerId, Map map) {
-   log.debug("Succeeded to call YARN Node Manager to start 
container {}.", containerId);
-   }
+   log.info("TaskExecutor {} will be started on {} with {}.",
+   containerId.getStringWithMetadata(),
+   host,
+   taskExecutorProcessSpec);
 
-   @Override
-   public void onContainerStatusReceived(ContainerId containerId, 
ContainerStatus containerStatus) {
-   // We are not interested in getting container status
+   final Configuration taskManagerConfig = 
BootstrapTools.cloneConfiguration(flinkConfig);
+   
taskManagerConfig.set(TaskManagerOptions.TASK_MANAGER_RESOURCE_ID, 
containerId.getResourceIdString());
+   
taskManagerConfig.set(TaskManagerOptionsInternal.TASK_MANAGER_RESOURCE_ID_METADATA,
 containerId.getMetadata());
+
+   final String taskManagerDynamicProperties =
+   
BootstrapTools.getDynamicPropertiesAsString(flinkClientConfig, 
taskManagerConfig);
+
+   log.debug("TaskManager configuration: {}", taskManagerConfig);
+
+   final ContainerLaunchContext taskExecutorLaunchContext = 
Utils.createTaskExecutorContext(
+   flinkConfig,
+   yarnConfig,
+   configuration,
+   taskManagerParameters,
+   taskManagerDynamicProperties,
+   currDir,
+   YarnTaskExecutorRunner.class,
+   log);
+
+   taskExecutorLaunchContext.getEnvironment()
+   .put(ENV_FLINK_NODE_ID, host);
+   return taskExecutorLaunchContext;
}
 
-   @Override
-   public void onContainerStopped(ContainerId containerId) {
-   log.debug("Succeeded to call YARN Node Manager to stop 
container {}.", containerId);
+   @VisibleForTesting
+   Optional getContainerResource(TaskExecutorProcessSpec 
taskExecutorProcessSpec) {
+   return 
taskExecutorProcessSpecContainerResourceAdapter.tryComputeContainerResource(taskExecutorProcessSpec);
}
 
-   @Override
-   public void onStartContainerError(ContainerId containerId, Throwable t) 
{
-   runAsync(() -> 
releaseFailedContainerAndRequestNewContainerIfRequired(containerId, t));
+   private RegisterApplicationMasterResponse registerApplicationMaster() 
throws Exception {
+   final int restPort;
+   final String webInterfaceUrl = 
configuration.getWebInterfaceUrl();
+   final String rpcAddress = configuration.getRpcAddress();
+
+   if (webInterfaceUrl != null) {
+   final int lastColon = webInterfaceUrl.lastIndexOf(':');

Review comment:
   I'm not quite familiar with that logic. I found that the `webInterfaceUrl` is 
originally derived in `RestServerEndpoint`, so it should always have a port.
   
   @tillrohrmann Could you help confirm this?
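For reference, the `lastIndexOf(':')` port extraction under discussion behaves roughly like this (hedged, self-contained sketch, not the actual Flink code):

```java
// Hypothetical sketch: take everything after the last ':' as the port,
// falling back to -1 when the URL carries no explicit port.
final class PortFromUrl {
    static int extractPort(String webInterfaceUrl) {
        int lastColon = webInterfaceUrl.lastIndexOf(':');
        if (lastColon < 0) {
            return -1;
        }
        try {
            return Integer.parseInt(webInterfaceUrl.substring(lastColon + 1));
        } catch (NumberFormatException e) {
            // e.g. "http://host", where the last ':' belongs to the scheme
            return -1;
        }
    }
}
```

This illustrates the concern in the thread: the logic only yields a usable port if the URL always contains an explicit `:port` suffix.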






[GitHub] [flink-playgrounds] shuiqiangchen edited a comment on pull request #16: [FLINK-19145][walkthroughs] Add PyFlink-walkthrough to Flink playground.

2020-09-14 Thread GitBox


shuiqiangchen edited a comment on pull request #16:
URL: https://github.com/apache/flink-playgrounds/pull/16#issuecomment-692423562


   Hi @morsapaes @alpinegizmo, sorry for the late reply; I was on vacation for the 
last four days. I highly appreciate @morsapaes's valuable comments and warmly 
welcome @alpinegizmo to help review this PR. I have updated the PR according to 
your suggestions; please have a look.







[GitHub] [flink] KarmaGYZ commented on a change in pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-14 Thread GitBox


KarmaGYZ commented on a change in pull request #13311:
URL: https://github.com/apache/flink/pull/13311#discussion_r488363795



##
File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManagerDriver.java
##
@@ -435,53 +308,43 @@ private void onContainersOfResourceAllocated(Resource 
resource, List
numAccepted, numExcess, numPending, resource);
}
 
-   @VisibleForTesting
-   static ResourceID getContainerResourceId(Container container) {
-   return new ResourceID(container.getId().toString(), 
container.getNodeId().toString());
+   private int getNumRequestedNotAllocatedWorkers() {
+   return 
requestResourceFutures.values().stream().mapToInt(Queue::size).sum();
+   }
+
+   private int 
getNumRequestedNotAllocatedWorkersFor(TaskExecutorProcessSpec 
taskExecutorProcessSpec) {
+   return 
requestResourceFutures.getOrDefault(taskExecutorProcessSpec, new 
LinkedList<>()).size();
+   }
+
+   private void removeContainerRequest(AMRMClient.ContainerRequest 
pendingContainerRequest) {
+   log.info("Removing container request {}.", 
pendingContainerRequest);
+   
resourceManagerClient.removeContainerRequest(pendingContainerRequest);
+   }
+
+   private void returnExcessContainer(Container excessContainer) {
+   log.info("Returning excess container {}.", 
excessContainer.getId());
+   
resourceManagerClient.releaseAssignedContainer(excessContainer.getId());
}
 
-   private void startTaskExecutorInContainer(Container container, 
WorkerResourceSpec workerResourceSpec, ResourceID resourceId) {
-   workerNodeMap.put(resourceId, new YarnWorkerNode(container, 
resourceId));
+   private void startTaskExecutorInContainer(Container container, 
TaskExecutorProcessSpec taskExecutorProcessSpec, ResourceID resourceId, 
CompletableFuture requestResourceFuture) {
+   final YarnWorkerNode yarnWorkerNode = new 
YarnWorkerNode(container, resourceId);
 
try {
// Context information used to start a TaskExecutor 
Java process
ContainerLaunchContext taskExecutorLaunchContext = 
createTaskExecutorLaunchContext(
resourceId,
container.getNodeId().getHost(),
-   
TaskExecutorProcessUtils.processSpecFromWorkerResourceSpec(flinkConfig, 
workerResourceSpec));
+   taskExecutorProcessSpec);
 
nodeManagerClient.startContainerAsync(container, 
taskExecutorLaunchContext);
+   requestResourceFuture.complete(yarnWorkerNode);
} catch (Throwable t) {
-   
releaseFailedContainerAndRequestNewContainerIfRequired(container.getId(), t);
+   requestResourceFuture.completeExceptionally(t);
}
}
 
-   private void 
releaseFailedContainerAndRequestNewContainerIfRequired(ContainerId containerId, 
Throwable throwable) {
-   validateRunsInMainThread();
-
-   log.error("Could not start TaskManager in container {}.", 
containerId, throwable);
-
-   final ResourceID resourceId = new 
ResourceID(containerId.toString());
-   // release the failed container
-   workerNodeMap.remove(resourceId);
-   resourceManagerClient.releaseAssignedContainer(containerId);
-   notifyAllocatedWorkerStopped(resourceId);
-   // and ask for a new one
-   requestYarnContainerIfRequired();
-   }
-
-   private void returnExcessContainer(Container excessContainer) {
-   log.info("Returning excess container {}.", 
excessContainer.getId());
-   
resourceManagerClient.releaseAssignedContainer(excessContainer.getId());
-   }
-
-   private void removeContainerRequest(AMRMClient.ContainerRequest 
pendingContainerRequest, WorkerResourceSpec workerResourceSpec) {
-   log.info("Removing container request {}.", 
pendingContainerRequest);
-   
resourceManagerClient.removeContainerRequest(pendingContainerRequest);
-   }
-
private Collection 
getPendingRequestsAndCheckConsistency(Resource resource, int expectedNum) {
-   final Collection equivalentResources = 
workerSpecContainerResourceAdapter.getEquivalentContainerResource(resource, 
matchingStrategy);
+   final Collection equivalentResources = 
taskExecutorProcessSpecContainerResourceAdapter.getEquivalentContainerResource(resource,
 matchingStrategy);
final List> 
matchingRequests =

Review comment:
   I think we gain a bit of readability here. Thanks for the remark.






[GitHub] [flink] zhuxiaoshang commented on pull request #13225: [FLINK-18974][docs-zh]Translate the 'User-Defined Functions' page of "Application Development's DataStream API" into Chinese

2020-09-14 Thread GitBox


zhuxiaoshang commented on pull request #13225:
URL: https://github.com/apache/flink/pull/13225#issuecomment-692442654


   > @zhuxiaoshang thanks for your contribution, left some comments based on 
the item `5. 中文标点符号` of 
[Flink+Translation+Specifications](https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications)
 please let me know what do you think ,thanks.
   
   Yes, you are right, the brackets should be in Chinese form.







[GitHub] [flink] KarmaGYZ commented on a change in pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-14 Thread GitBox


KarmaGYZ commented on a change in pull request #13311:
URL: https://github.com/apache/flink/pull/13311#discussion_r488362483



##
File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManagerDriver.java
##
@@ -435,53 +308,43 @@ private void onContainersOfResourceAllocated(Resource 
resource, List
numAccepted, numExcess, numPending, resource);
}
 
-   @VisibleForTesting
-   static ResourceID getContainerResourceId(Container container) {
-   return new ResourceID(container.getId().toString(), 
container.getNodeId().toString());
+   private int getNumRequestedNotAllocatedWorkers() {
+   return 
requestResourceFutures.values().stream().mapToInt(Queue::size).sum();
+   }
+
+   private int 
getNumRequestedNotAllocatedWorkersFor(TaskExecutorProcessSpec 
taskExecutorProcessSpec) {
+   return 
requestResourceFutures.getOrDefault(taskExecutorProcessSpec, new 
LinkedList<>()).size();

Review comment:
   We indeed need an empty queue here; `Collections.emptyList()` does not fit 
this argument.
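A hedged illustration of the typing point (the names are made up; only the types matter): the map's values are `Queue`s, so `getOrDefault` needs a `Queue`-typed default. `Collections.emptyList()` is a `List` and would not type-check, while an empty `LinkedList` does.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch of the getOrDefault call discussed above.
final class PendingWorkers {
    private final Map<String, Queue<Integer>> requestFutures = new HashMap<>();

    int numPendingFor(String spec) {
        // the default must itself be a Queue; an empty LinkedList satisfies that
        return requestFutures.getOrDefault(spec, new LinkedList<>()).size();
    }
}
```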









[GitHub] [flink] KarmaGYZ commented on a change in pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-14 Thread GitBox


KarmaGYZ commented on a change in pull request #13311:
URL: https://github.com/apache/flink/pull/13311#discussion_r488361230



##
File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManagerDriver.java
##
@@ -72,354 +62,237 @@
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Iterator;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Optional;
+import java.util.Queue;
 import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CompletableFuture;
 import java.util.stream.Collectors;
 
 /**
- * The yarn implementation of the resource manager. Used when the system is 
started
- * via the resource framework YARN.
+ * Implementation of {@link ResourceManagerDriver} for Kubernetes deployment.
  */
-public class YarnResourceManager extends 
LegacyActiveResourceManager
-   implements AMRMClientAsync.CallbackHandler, 
NMClientAsync.CallbackHandler {
+public class YarnResourceManagerDriver extends 
AbstractResourceManagerDriver {
 
private static final Priority RM_REQUEST_PRIORITY = 
Priority.newInstance(1);
 
-   /** YARN container map. */
-   private final ConcurrentMap workerNodeMap;
-
/** Environment variable name of the hostname given by the YARN.
 * In task executor we use the hostnames given by YARN consistently 
throughout akka */
static final String ENV_FLINK_NODE_ID = "_FLINK_NODE_ID";
 
static final String ERROR_MASSAGE_ON_SHUTDOWN_REQUEST = "Received 
shutdown request from YARN ResourceManager.";
 
-   /** Default heartbeat interval between this resource manager and the 
YARN ResourceManager. */
-   private final int yarnHeartbeatIntervalMillis;
-
private final YarnConfiguration yarnConfig;
 
-   @Nullable
-   private final String webInterfaceUrl;
+   /** The process environment variables. */
+   private final YarnResourceManagerDriverConfiguration configuration;
 
-   /** The heartbeat interval while the resource master is waiting for 
containers. */
-   private final int containerRequestHeartbeatIntervalMillis;
+   /** Default heartbeat interval between this resource manager and the 
YARN ResourceManager. */
+   private final int yarnHeartbeatIntervalMillis;
 
/** Client to communicate with the Resource Manager (YARN's master). */
private AMRMClientAsync 
resourceManagerClient;
 
+   /** The heartbeat interval while the resource master is waiting for 
containers. */
+   private final int containerRequestHeartbeatIntervalMillis;
+
/** Client to communicate with the Node manager and launch TaskExecutor 
processes. */
private NMClientAsync nodeManagerClient;
 
-   private final WorkerSpecContainerResourceAdapter 
workerSpecContainerResourceAdapter;
+   /** Request resource futures, keyed by container ids. */
+   private final Map>> requestResourceFutures;
+
+   private final TaskExecutorProcessSpecContainerResourceAdapter 
taskExecutorProcessSpecContainerResourceAdapter;
 
private final RegisterApplicationMasterResponseReflector 
registerApplicationMasterResponseReflector;
 
-   private WorkerSpecContainerResourceAdapter.MatchingStrategy 
matchingStrategy;
-
-   public YarnResourceManager(
-   RpcService rpcService,
-   ResourceID resourceId,
-   Configuration flinkConfig,
-   Map env,
-   HighAvailabilityServices highAvailabilityServices,
-   HeartbeatServices heartbeatServices,
-   SlotManager slotManager,
-   ResourceManagerPartitionTrackerFactory 
clusterPartitionTrackerFactory,
-   JobLeaderIdService jobLeaderIdService,
-   ClusterInformation clusterInformation,
-   FatalErrorHandler fatalErrorHandler,
-   @Nullable String webInterfaceUrl,
-   ResourceManagerMetricGroup resourceManagerMetricGroup) {
-   super(
-   flinkConfig,
-   env,
-   rpcService,
-   resourceId,
-   highAvailabilityServices,
-   heartbeatServices,
-   slotManager,
-   clusterPartitionTrackerFactory,
-   jobLeaderIdService,
-   clusterInformation,
-   fatalErrorHandler,
-   resourceManagerMetricGroup);
+   private 
TaskExecutorProcessSpecContainerResourceAdapter.MatchingStrategy 
matchingStrategy;
+
+   private final YarnResourceManagerClientFactory 
yarnResourceManagerClientFactory;
+
+   

[GitHub] [flink] KarmaGYZ commented on a change in pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-14 Thread GitBox


KarmaGYZ commented on a change in pull request #13311:
URL: https://github.com/apache/flink/pull/13311#discussion_r488360039



##
File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManagerDriver.java
##
@@ -72,354 +62,237 @@
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Iterator;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Optional;
+import java.util.Queue;
 import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.CompletableFuture;
 import java.util.stream.Collectors;
 
 /**
- * The yarn implementation of the resource manager. Used when the system is 
started
- * via the resource framework YARN.
+ * Implementation of {@link ResourceManagerDriver} for Kubernetes deployment.
  */
-public class YarnResourceManager extends 
LegacyActiveResourceManager
-   implements AMRMClientAsync.CallbackHandler, 
NMClientAsync.CallbackHandler {
+public class YarnResourceManagerDriver extends 
AbstractResourceManagerDriver {
 
private static final Priority RM_REQUEST_PRIORITY = 
Priority.newInstance(1);
 
-   /** YARN container map. */
-   private final ConcurrentMap workerNodeMap;
-
/** Environment variable name of the hostname given by the YARN.
 * In task executor we use the hostnames given by YARN consistently 
throughout akka */
static final String ENV_FLINK_NODE_ID = "_FLINK_NODE_ID";
 
static final String ERROR_MASSAGE_ON_SHUTDOWN_REQUEST = "Received 
shutdown request from YARN ResourceManager.";
 
-   /** Default heartbeat interval between this resource manager and the 
YARN ResourceManager. */
-   private final int yarnHeartbeatIntervalMillis;
-
private final YarnConfiguration yarnConfig;
 
-   @Nullable
-   private final String webInterfaceUrl;
+   /** The process environment variables. */
+   private final YarnResourceManagerDriverConfiguration configuration;
 
-   /** The heartbeat interval while the resource master is waiting for 
containers. */
-   private final int containerRequestHeartbeatIntervalMillis;
+   /** Default heartbeat interval between this resource manager and the 
YARN ResourceManager. */
+   private final int yarnHeartbeatIntervalMillis;
 
/** Client to communicate with the Resource Manager (YARN's master). */
private AMRMClientAsync 
resourceManagerClient;
 
+   /** The heartbeat interval while the resource master is waiting for 
containers. */
+   private final int containerRequestHeartbeatIntervalMillis;
+
/** Client to communicate with the Node manager and launch TaskExecutor 
processes. */
private NMClientAsync nodeManagerClient;
 
-   private final WorkerSpecContainerResourceAdapter 
workerSpecContainerResourceAdapter;
+   /** Request resource futures, keyed by container ids. */
+   private final Map>> requestResourceFutures;
+
+   private final TaskExecutorProcessSpecContainerResourceAdapter 
taskExecutorProcessSpecContainerResourceAdapter;
 
private final RegisterApplicationMasterResponseReflector 
registerApplicationMasterResponseReflector;
 
-   private WorkerSpecContainerResourceAdapter.MatchingStrategy 
matchingStrategy;
-
-   public YarnResourceManager(
-   RpcService rpcService,
-   ResourceID resourceId,
-   Configuration flinkConfig,
-   Map env,
-   HighAvailabilityServices highAvailabilityServices,
-   HeartbeatServices heartbeatServices,
-   SlotManager slotManager,
-   ResourceManagerPartitionTrackerFactory 
clusterPartitionTrackerFactory,
-   JobLeaderIdService jobLeaderIdService,
-   ClusterInformation clusterInformation,
-   FatalErrorHandler fatalErrorHandler,
-   @Nullable String webInterfaceUrl,
-   ResourceManagerMetricGroup resourceManagerMetricGroup) {
-   super(
-   flinkConfig,
-   env,
-   rpcService,
-   resourceId,
-   highAvailabilityServices,
-   heartbeatServices,
-   slotManager,
-   clusterPartitionTrackerFactory,
-   jobLeaderIdService,
-   clusterInformation,
-   fatalErrorHandler,
-   resourceManagerMetricGroup);
+   private 
TaskExecutorProcessSpecContainerResourceAdapter.MatchingStrategy 
matchingStrategy;
+
+   private final YarnResourceManagerClientFactory 
yarnResourceManagerClientFactory;
+
+   

[jira] [Updated] (FLINK-19092) Parse comment on computed column failed

2020-09-14 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19092:

Issue Type: New Feature  (was: Bug)

> Parse comment on computed column failed
> ---
>
> Key: FLINK-19092
> URL: https://issues.apache.org/jira/browse/FLINK-19092
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Leonard Xu
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Parser Exception will be thrown when add comment on computed column,
> {{}}
> {code:java}
> CREATE TABLE test (
>   `id` BIGINT,
>   `age` INT COMMENT 'test comment', //PASS
>   `nominal_age` as age + 1 COMMENT 'test comment' // FAIL
> ) WITH (
>   'connector' = 'values',
>   'data-id' = '$dataId'
> ){code}
>  
> {code:java}
> org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
> "COMMENT" at line 6, column 28.org.apache.flink.table.api.SqlParserException: 
> SQL parse failed. Encountered "COMMENT" at line 6, column 28.Was expecting 
> one of:    ")" ...    "," ...    "." ...    "NOT" ...    "IN" ...    "<" ...  
>   "<=" ...    ">" ...    ">=" ...    "=" ...    "<>" ...    "!=" ...    
> "BETWEEN" ...    "LIKE" ...    "SIMILAR" ...    "+" ...    "-" ...    "*" ... 
>    "/" ...    "%" ...    "||" ...    "AND" ...    "OR" ...    "IS" ...    
> "MEMBER" ...    "SUBMULTISET" ...    "CONTAINS" ...    "OVERLAPS" ...    
> "EQUALS" ...    "PRECEDES" ...    "SUCCEEDS" ...    "IMMEDIATELY" ...    
> "MULTISET" ...    "[" ...    "FORMAT" ...    
>  at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:56)
>  at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:76)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:659)
>  at 
> org.apache.flink.table.planner.runtime.stream.sql.LookupJoinITCase.createLookupTable(LookupJoinITCase.scala:96)
>  at 
> org.apache.flink.table.planner.runtime.stream.sql.LookupJoinITCase.before(LookupJoinITCase.scala:72)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 

[jira] [Closed] (FLINK-19092) Parse comment on computed column failed

2020-09-14 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19092.
---
Resolution: Fixed

Implemented in master (1.12.0): b873564357612bbea788332632955fc256105d35

> Parse comment on computed column failed
> ---
>
> Key: FLINK-19092
> URL: https://issues.apache.org/jira/browse/FLINK-19092
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Leonard Xu
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Parser Exception will be thrown when add comment on computed column,
> {{}}
> {code:java}
> CREATE TABLE test (
>   `id` BIGINT,
>   `age` INT COMMENT 'test comment', //PASS
>   `nominal_age` as age + 1 COMMENT 'test comment' // FAIL
> ) WITH (
>   'connector' = 'values',
>   'data-id' = '$dataId'
> ){code}
>  
> {code:java}
> org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
> "COMMENT" at line 6, column 28.org.apache.flink.table.api.SqlParserException: 
> SQL parse failed. Encountered "COMMENT" at line 6, column 28.Was expecting 
> one of:    ")" ...    "," ...    "." ...    "NOT" ...    "IN" ...    "<" ...  
>   "<=" ...    ">" ...    ">=" ...    "=" ...    "<>" ...    "!=" ...    
> "BETWEEN" ...    "LIKE" ...    "SIMILAR" ...    "+" ...    "-" ...    "*" ... 
>    "/" ...    "%" ...    "||" ...    "AND" ...    "OR" ...    "IS" ...    
> "MEMBER" ...    "SUBMULTISET" ...    "CONTAINS" ...    "OVERLAPS" ...    
> "EQUALS" ...    "PRECEDES" ...    "SUCCEEDS" ...    "IMMEDIATELY" ...    
> "MULTISET" ...    "[" ...    "FORMAT" ...    
>  at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:56)
>  at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:76)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:659)
>  at 
> org.apache.flink.table.planner.runtime.stream.sql.LookupJoinITCase.createLookupTable(LookupJoinITCase.scala:96)
>  at 
> org.apache.flink.table.planner.runtime.stream.sql.LookupJoinITCase.before(LookupJoinITCase.scala:72)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runners.Suite.runChild(Suite.java:128) at 
> org.junit.runners.Suite.runChild(Suite.java:27) at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> 

[GitHub] [flink] wuchong merged pull request #13352: [FLINK-19092][sql-parser] Parse comment on computed column failed

2020-09-14 Thread GitBox


wuchong merged pull request #13352:
URL: https://github.com/apache/flink/pull/13352


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19237) LeaderChangeClusterComponentsTest.testReelectionOfJobMaster failed with "NoResourceAvailableException: Could not allocate the required slot within slot request timeout

2020-09-14 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195842#comment-17195842
 ] 

Dian Fu commented on FLINK-19237:
-

This issue and FLINK-19210 seem to be caused by the same problem.

> LeaderChangeClusterComponentsTest.testReelectionOfJobMaster failed with 
> "NoResourceAvailableException: Could not allocate the required slot within 
> slot request timeout"
> 
>
> Key: FLINK-19237
> URL: https://issues.apache.org/jira/browse/FLINK-19237
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6499=logs=6bfdaf55-0c08-5e3f-a2d2-2a0285fd41cf=fd9796c3-9ce8-5619-781c-42f873e126a6]
> {code}
> 2020-09-14T21:11:02.8200203Z [ERROR] 
> testReelectionOfJobMaster(org.apache.flink.runtime.leaderelection.LeaderChangeClusterComponentsTest)
>   Time elapsed: 300.14 s  <<< FAILURE!
> 2020-09-14T21:11:02.8201761Z java.lang.AssertionError: Job failed.
> 2020-09-14T21:11:02.8202749Z  at 
> org.apache.flink.runtime.jobmaster.utils.JobResultUtils.throwAssertionErrorOnFailedResult(JobResultUtils.java:54)
> 2020-09-14T21:11:02.8203794Z  at 
> org.apache.flink.runtime.jobmaster.utils.JobResultUtils.assertSuccess(JobResultUtils.java:30)
> 2020-09-14T21:11:02.8205177Z  at 
> org.apache.flink.runtime.leaderelection.LeaderChangeClusterComponentsTest.testReelectionOfJobMaster(LeaderChangeClusterComponentsTest.java:152)
> 2020-09-14T21:11:02.8206191Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-14T21:11:02.8206985Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-14T21:11:02.8207930Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-14T21:11:02.8208927Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-14T21:11:02.8209753Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-14T21:11:02.8210710Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-14T21:11:02.8211608Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-14T21:11:02.8214473Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-09-14T21:11:02.8215398Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-09-14T21:11:02.8216199Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-09-14T21:11:02.8216947Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-14T21:11:02.8217695Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-09-14T21:11:02.8218635Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-09-14T21:11:02.8219499Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-09-14T21:11:02.8220313Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-09-14T21:11:02.8221060Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-09-14T21:11:02.8222171Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-09-14T21:11:02.8222937Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-09-14T21:11:02.8223688Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-09-14T21:11:02.8225191Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-09-14T21:11:02.8226086Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-09-14T21:11:02.8226761Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-14T21:11:02.8227453Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-09-14T21:11:02.8228392Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-09-14T21:11:02.8229256Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-14T21:11:02.8235798Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-09-14T21:11:02.8237650Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-09-14T21:11:02.8239039Z  at 
> 

[GitHub] [flink] wuchong commented on pull request #13352: [FLINK-19092][sql-parser] Parse comment on computed column failed

2020-09-14 Thread GitBox


wuchong commented on pull request #13352:
URL: https://github.com/apache/flink/pull/13352#issuecomment-692439120


   Merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17137) Support mini batch for WindowOperator in blink planner

2020-09-14 Thread tartarus (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195841#comment-17195841
 ] 

tartarus commented on FLINK-17137:
--

[~libenchao] we have done the local-global optimization for windows. Why not 
save the accumulator in memory instead of the original data?

> Support mini batch for WindowOperator in blink planner
> --
>
> Key: FLINK-17137
> URL: https://issues.apache.org/jira/browse/FLINK-17137
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: Benchao Li
>Priority: Major
>
> Currently only regular aggregate and deduplicate support mini batch. 
> WindowOperator is a very frequently used operator in Flink, it's very helpful 
> to support mini batch for it.
> Design document:  
> https://docs.google.com/document/d/1GYlrg8dkYcw5fuq1HptdA3lygXrS_VRbnI8NXoEtZCg/edit?usp=sharing
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19237) LeaderChangeClusterComponentsTest.testReelectionOfJobMaster failed with "NoResourceAvailableException: Could not allocate the required slot within slot request timeout"

2020-09-14 Thread Dian Fu (Jira)
Dian Fu created FLINK-19237:
---

 Summary: 
LeaderChangeClusterComponentsTest.testReelectionOfJobMaster failed with 
"NoResourceAvailableException: Could not allocate the required slot within slot 
request timeout"
 Key: FLINK-19237
 URL: https://issues.apache.org/jira/browse/FLINK-19237
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.12.0
Reporter: Dian Fu


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6499=logs=6bfdaf55-0c08-5e3f-a2d2-2a0285fd41cf=fd9796c3-9ce8-5619-781c-42f873e126a6]

{code}
2020-09-14T21:11:02.8200203Z [ERROR] 
testReelectionOfJobMaster(org.apache.flink.runtime.leaderelection.LeaderChangeClusterComponentsTest)
  Time elapsed: 300.14 s  <<< FAILURE!
2020-09-14T21:11:02.8201761Z java.lang.AssertionError: Job failed.
2020-09-14T21:11:02.8202749Zat 
org.apache.flink.runtime.jobmaster.utils.JobResultUtils.throwAssertionErrorOnFailedResult(JobResultUtils.java:54)
2020-09-14T21:11:02.8203794Zat 
org.apache.flink.runtime.jobmaster.utils.JobResultUtils.assertSuccess(JobResultUtils.java:30)
2020-09-14T21:11:02.8205177Zat 
org.apache.flink.runtime.leaderelection.LeaderChangeClusterComponentsTest.testReelectionOfJobMaster(LeaderChangeClusterComponentsTest.java:152)
2020-09-14T21:11:02.8206191Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-09-14T21:11:02.8206985Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-09-14T21:11:02.8207930Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-09-14T21:11:02.8208927Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-09-14T21:11:02.8209753Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-09-14T21:11:02.8210710Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-09-14T21:11:02.8211608Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-09-14T21:11:02.8214473Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-09-14T21:11:02.8215398Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-09-14T21:11:02.8216199Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-09-14T21:11:02.8216947Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-09-14T21:11:02.8217695Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-09-14T21:11:02.8218635Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-09-14T21:11:02.8219499Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-09-14T21:11:02.8220313Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-09-14T21:11:02.8221060Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-09-14T21:11:02.8222171Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-09-14T21:11:02.8222937Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-09-14T21:11:02.8223688Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-09-14T21:11:02.8225191Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-09-14T21:11:02.8226086Zat 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2020-09-14T21:11:02.8226761Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-09-14T21:11:02.8227453Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-09-14T21:11:02.8228392Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-09-14T21:11:02.8229256Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2020-09-14T21:11:02.8235798Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2020-09-14T21:11:02.8237650Zat 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2020-09-14T21:11:02.8239039Zat 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2020-09-14T21:11:02.8239894Zat 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2020-09-14T21:11:02.8240591Zat 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
2020-09-14T21:11:02.8241325Z Caused by: org.apache.flink.runtime.JobException: 
Recovery is suppressed by NoRestartBackoffTimeStrategy
2020-09-14T21:11:02.8242225Zat 

[jira] [Updated] (FLINK-19237) LeaderChangeClusterComponentsTest.testReelectionOfJobMaster failed with "NoResourceAvailableException: Could not allocate the required slot within slot request timeout"

2020-09-14 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19237:

Labels: test-stability  (was: )

> LeaderChangeClusterComponentsTest.testReelectionOfJobMaster failed with 
> "NoResourceAvailableException: Could not allocate the required slot within 
> slot request timeout"
> 
>
> Key: FLINK-19237
> URL: https://issues.apache.org/jira/browse/FLINK-19237
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6499=logs=6bfdaf55-0c08-5e3f-a2d2-2a0285fd41cf=fd9796c3-9ce8-5619-781c-42f873e126a6]
> {code}
> 2020-09-14T21:11:02.8200203Z [ERROR] 
> testReelectionOfJobMaster(org.apache.flink.runtime.leaderelection.LeaderChangeClusterComponentsTest)
>   Time elapsed: 300.14 s  <<< FAILURE!
> 2020-09-14T21:11:02.8201761Z java.lang.AssertionError: Job failed.
> 2020-09-14T21:11:02.8202749Z  at 
> org.apache.flink.runtime.jobmaster.utils.JobResultUtils.throwAssertionErrorOnFailedResult(JobResultUtils.java:54)
> 2020-09-14T21:11:02.8203794Z  at 
> org.apache.flink.runtime.jobmaster.utils.JobResultUtils.assertSuccess(JobResultUtils.java:30)
> 2020-09-14T21:11:02.8205177Z  at 
> org.apache.flink.runtime.leaderelection.LeaderChangeClusterComponentsTest.testReelectionOfJobMaster(LeaderChangeClusterComponentsTest.java:152)
> 2020-09-14T21:11:02.8206191Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-14T21:11:02.8206985Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-14T21:11:02.8207930Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-14T21:11:02.8208927Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-14T21:11:02.8209753Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-14T21:11:02.8210710Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-14T21:11:02.8211608Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-14T21:11:02.8214473Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-09-14T21:11:02.8215398Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-09-14T21:11:02.8216199Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-09-14T21:11:02.8216947Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-14T21:11:02.8217695Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-09-14T21:11:02.8218635Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-09-14T21:11:02.8219499Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-09-14T21:11:02.8220313Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-09-14T21:11:02.8221060Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-09-14T21:11:02.8222171Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-09-14T21:11:02.8222937Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-09-14T21:11:02.8223688Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-09-14T21:11:02.8225191Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-09-14T21:11:02.8226086Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-09-14T21:11:02.8226761Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-14T21:11:02.8227453Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-09-14T21:11:02.8228392Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-09-14T21:11:02.8229256Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-14T21:11:02.8235798Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-09-14T21:11:02.8237650Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-09-14T21:11:02.8239039Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-09-14T21:11:02.8239894Z  at 
> 

[jira] [Updated] (FLINK-19227) The catalog is still created after opening failed in catalog registering

2020-09-14 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19227:

Labels: starter  (was: )

> The catalog is still created after opening failed in catalog registering
> 
>
> Key: FLINK-19227
> URL: https://issues.apache.org/jira/browse/FLINK-19227
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: starter
> Fix For: 1.11.3
>
>
> > When I create the HiveCatalog and Flink is not able to connect to the 
> > HiveMetastore, the statement cannot be executed, but the catalog is still 
> > created. Subsequent attempts to query the tables result in an NPE.
> In CatalogManager.registerCatalog.
>  Since open() is an operation that can easily fail, we should put the catalog 
> into the catalog manager only after its open() has succeeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
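
The suggested ordering, open the catalog first and register it only on success, can be sketched with hypothetical names (CatalogRegistry/Catalog are illustrative stand-ins, not Flink's actual classes):

```java
import java.util.HashMap;
import java.util.Map;

public class CatalogRegistry {
    /** Minimal stand-in for a catalog; hypothetical, not Flink's Catalog interface. */
    public interface Catalog {
        void open(); // may throw an unchecked exception, e.g. metastore unreachable
    }

    private final Map<String, Catalog> catalogs = new HashMap<>();

    /** Open first, register only on success -- the ordering the ticket asks for. */
    public void registerCatalog(String name, Catalog catalog) {
        catalog.open();               // if this throws, the map is left untouched
        catalogs.put(name, catalog);  // reached only after open() succeeded
    }

    public boolean contains(String name) {
        return catalogs.containsKey(name);
    }
}
```

With this ordering, a failed open() leaves no half-registered catalog behind, so later queries cannot hit the NPE described above.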


[jira] [Commented] (FLINK-19194) The UDF split and split_index get wrong result

2020-09-14 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195837#comment-17195837
 ] 

Jark Wu commented on FLINK-19194:
-

Hi [~faaronzheng], I'm not sure this is a bug in Table/SQL. Flink doesn't 
provide {{split}} or {{split_index}}. I just tested {{select 
UPPER('ab-123"xyz\cd')}} in SQL CLI, and the result is {{AB-123"XYZ\CD}}, which 
is as expected. 

> The UDF split and split_index get wrong result 
> ---
>
> Key: FLINK-19194
> URL: https://issues.apache.org/jira/browse/FLINK-19194
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.0, 1.10.1
>Reporter: fa zheng
>Priority: Major
>
> If we run sql
> {code:sql}
> select split_index('ab-123"xyz\cd','-',1) ;
> {code}
> or
> {code:sql}
> select split('ab-123"xyz\cd','-')[2] ;
> {code}
> {noformat}
>  in sql-client, we should get result 123"xyz\cd, not 123\"xyz\\cd 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
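
For reference, the expected split semantics can be reproduced outside Flink; a plain-Java sketch (splitIndex is a hypothetical 0-based helper, not a Flink built-in) showing that splitting on the delimiter preserves the quote and the single backslash:

```java
import java.util.regex.Pattern;

public class SplitCheck {
    /** Returns the index-th field after splitting s on the literal delimiter,
     *  or null if the index is out of range. 0-based, for illustration only. */
    static String splitIndex(String s, String delimiter, int index) {
        String[] parts = s.split(Pattern.quote(delimiter));
        return index < parts.length ? parts[index] : null;
    }
}
```

The doubled backslashes in the SQL client output would then point to a display/escaping question rather than a wrong split result.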


[jira] [Commented] (FLINK-19194) The UDF split and split_index get wrong result

2020-09-14 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195838#comment-17195838
 ] 

Jark Wu commented on FLINK-19194:
-

Could you provide a program which can reproduce your result? 

> The UDF split and split_index get wrong result 
> ---
>
> Key: FLINK-19194
> URL: https://issues.apache.org/jira/browse/FLINK-19194
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.0, 1.10.1
>Reporter: fa zheng
>Priority: Major
>
> If we run sql
> {code:sql}
> select split_index('ab-123"xyz\cd','-',1) ;
> {code}
> or
> {code:sql}
> select split('ab-123"xyz\cd','-')[2] ;
> {code}
> {noformat}
>  in sql-client, we should get result 123"xyz\cd, not 123\"xyz\\cd 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19196) FlinkSQL aggregation function to aggregate array typed column to Map/Multiset

2020-09-14 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195836#comment-17195836
 ] 

Jark Wu edited comment on FLINK-19196 at 9/15/20, 3:04 AM:
---

Hi, Flink doesn't provide such a built-in function yet. You can implement a UDF 
for this purpose. As a workaround, you can combine two built-in operations to achieve this.


{code:java}
SELECT userID, COLLECT(type)
FROM (
  SELECT userID, type
  FROM src_table CROSS JOIN UNNEST(arr_types) AS t (type)
) GROUP BY userID
{code}

You can find more details in the documentation: 
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/functions/systemFunctions.html#aggregate-functions



was (Author: jark):
Hi, Flink doesn't provide such a built-in function yet. You can implement a UDF 
for this purpose. As a workaround, you can combine two built-in operations to achieve this.


{code:java}
SELECT userID, COLLECT(type)
FROM (
  SELECT userID, type
  FROM src_table CROSS JOIN UNNEST(arr_types) AS t (type)
) GROUP BY userID
{code}


> FlinkSQL aggregation function to aggregate array typed column to Map/Multiset
> -
>
> Key: FLINK-19196
> URL: https://issues.apache.org/jira/browse/FLINK-19196
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.11.1
>Reporter: sam lin
>Priority: Major
>
> Hi,
>  
> For FlinkSQL, how do I aggregate an array typed column to a Map/Multiset 
> type? e.g. Given the source events:
> userID, arr_types
> 1, [a, b]
> 2, [a]
> 1, [b, c]
>  
> ```
> SELECT userID, collect_arr(arr_types) FROM src_table GROUP BY userID
> ```
> Using the above SQL could produce results like:
> 1, \{a: 1, b: 2, c:1}
> 2, \{a: 1}
> My question is do we have existing UDF to do that?  If not, what is the 
> recommended way to achieve this?  I'm happy to write a UDF if you could 
> provide some code pointers.  Thanks in advance!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
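
The COLLECT-over-UNNEST workaround above can also be modeled as a single aggregation; a plain-Java sketch (Row and aggregate are illustrative names, not Flink's UDAF API) that unnests each array and counts occurrences per user:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectToMultiset {
    /** One source row: a user id plus an array of type tags. */
    static final class Row {
        final int userId;
        final List<String> types;
        Row(int userId, List<String> types) { this.userId = userId; this.types = types; }
    }

    /** Unnest each row's array and count per user -- a plain-Java model of
     *  the COLLECT-over-UNNEST query above, not an actual Flink UDAF. */
    static Map<Integer, Map<String, Integer>> aggregate(List<Row> rows) {
        Map<Integer, Map<String, Integer>> out = new HashMap<>();
        for (Row r : rows) {
            Map<String, Integer> counts = out.computeIfAbsent(r.userId, k -> new HashMap<>());
            for (String t : r.types) {
                counts.merge(t, 1, Integer::sum); // multiset semantics: element -> count
            }
        }
        return out;
    }
}
```

Running this over the example rows (1,[a,b]), (2,[a]), (1,[b,c]) yields the per-user count maps from the ticket.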


[jira] [Commented] (FLINK-19196) FlinkSQL aggregation function to aggregate array typed column to Map/Multiset

2020-09-14 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195836#comment-17195836
 ] 

Jark Wu commented on FLINK-19196:
-

Hi, Flink doesn't provide such a built-in function yet. You can implement a UDF 
for this purpose. As a workaround, you can combine two built-in operations to achieve this.


{code:java}
SELECT userID, COLLECT(type)
FROM (
  SELECT userID, type
  FROM Orders CROSS JOIN UNNEST(arr_types) AS t (type)
) GROUP BY userID
{code}


> FlinkSQL aggregation function to aggregate array typed column to Map/Multiset
> -
>
> Key: FLINK-19196
> URL: https://issues.apache.org/jira/browse/FLINK-19196
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.11.1
>Reporter: sam lin
>Priority: Major
>
> Hi,
>  
> For FlinkSQL, how do I aggregate an array typed column to a Map/Multiset 
> type? e.g. Given the source events:
> userID, arr_types
> 1, [a, b]
> 2, [a]
> 1, [b, c]
>  
> ```
> SELECT userID, collect_arr(arr_types) FROM src_table GROUP BY userID
> ```
> Using the above SQL could produce results like:
> 1, \{a: 1, b: 2, c:1}
> 2, \{a: 1}
> My question is do we have existing UDF to do that?  If not, what is the 
> recommended way to achieve this?  I'm happy to write a UDF if you could 
> provide some code pointers.  Thanks in advance!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19196) FlinkSQL aggregation function to aggregate array typed column to Map/Multiset

2020-09-14 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195836#comment-17195836
 ] 

Jark Wu edited comment on FLINK-19196 at 9/15/20, 3:03 AM:
---

Hi, Flink doesn't provide such a built-in function yet. You can implement a UDF 
for this purpose. As a workaround, you can combine two built-in operations to achieve this.


{code:java}
SELECT userID, COLLECT(type)
FROM (
  SELECT userID, type
  FROM src_table CROSS JOIN UNNEST(arr_types) AS t (type)
) GROUP BY userID
{code}



was (Author: jark):
Hi, Flink doesn't provide such a built-in function yet. You can implement a UDF 
for this purpose. As a workaround, you can combine two built-in operations to achieve this.


{code:java}
SELECT userID, COLLECT(type)
FROM (
  SELECT userID, type
  FROM Orders CROSS JOIN UNNEST(arr_types) AS t (type)
) GROUP BY userID
{code}


> FlinkSQL aggregation function to aggregate array typed column to Map/Multiset
> -
>
> Key: FLINK-19196
> URL: https://issues.apache.org/jira/browse/FLINK-19196
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.11.1
>Reporter: sam lin
>Priority: Major
>
> Hi,
>  
> For FlinkSQL, how do I aggregate an array typed column to a Map/Multiset 
> type? e.g. Given the source events:
> userID, arr_types
> 1, [a, b]
> 2, [a]
> 1, [b, c]
>  
> ```
> SELECT userID, collect_arr(arr_types) FROM src_table GROUP BY userID
> ```
> Using the above SQL could produce results like:
> 1, \{a: 1, b: 2, c:1}
> 2, \{a: 1}
> My question is do we have existing UDF to do that?  If not, what is the 
> recommended way to achieve this?  I'm happy to write a UDF if you could 
> provide some code pointers.  Thanks in advance!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13385: [FLINK-18128][FLINK-19223][connectors] Fixes to Connector Base Availability logic

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13385:
URL: https://github.com/apache/flink/pull/13385#issuecomment-692360622


   
   ## CI report:
   
   * 0231272dac19c37325c150946ad3f87416e7b6b5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6503)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19207) TtlListState#IteratorWithCleanup support remove method

2020-09-14 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195832#comment-17195832
 ] 

Jark Wu commented on FLINK-19207:
-

Hi [~lsy], calling remove on the returned iterator can't remove the element from 
the state. We need to {{update}} a {{List}} back to the state anyway, so maybe we 
can construct a new ArrayList, add the remaining elements to it, and update it 
back to the state. 

> TtlListState#IteratorWithCleanup support remove method
> --
>
> Key: FLINK-19207
> URL: https://issues.apache.org/jira/browse/FLINK-19207
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.12.0
>Reporter: dalongliu
>Priority: Critical
>
> In FLINK-17096, we refactored the group Agg function to implement state 
> expiration with TTL state instead of timers. However, there is a slight 
> problem: the AggregateITCase#testListAggWithRetraction test failed because 
> *IteratorWithCleanup* doesn't support removing elements, so maybe we need to 
> support {{remove}} for {{TtlListState#IteratorWithCleanup}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
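
The rebuild-and-update approach Jark describes can be sketched without Flink types (ListStateStub is a hypothetical stand-in for ListState, exposing only get() and update()):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class ListStateRebuild {
    /** Hypothetical stand-in for ListState<T>: get() and update() only. */
    static final class ListStateStub<T> {
        private List<T> backing = new ArrayList<>();
        Iterable<T> get() { return backing; }
        void update(List<T> values) { backing = new ArrayList<>(values); }
    }

    /** Emulates element removal: copy the survivors into a new list and
     *  write the whole list back in a single update() call. */
    static <T> void removeIf(ListStateStub<T> state, Predicate<T> toRemove) {
        List<T> remaining = new ArrayList<>();
        for (T element : state.get()) {
            if (!toRemove.test(element)) {
                remaining.add(element);
            }
        }
        state.update(remaining);
    }
}
```

This avoids needing Iterator.remove support on the TTL iterator at all: removal becomes a full-list rewrite, which is what the backend has to do for a List update anyway.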


[jira] [Commented] (FLINK-19167) Proccess Function Example could not work

2020-09-14 Thread tinny cat (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195831#comment-17195831
 ] 

tinny cat commented on FLINK-19167:
---

the timer method did fire, but this if statement will never hold:

{code:java}
if (timestamp == result.lastModified + 60000) {
// emit the state on timeout
  out.collect(new Tuple2<String, Long>(result.key, result.count));
}
{code}
For example: there are 5 incoming events with timestamps t = 1, 2, 4, 5, 7. We 
want to emit results when the event time reaches 60007, so we need to register a 
timer for 7 + 60000, which is ctx.timestamp() + 60000. When the event time 
increases to 60007, the timer method is fired; now the timestamp is 60007, but 
the value of result.lastModified has also advanced to 60007, so 
result.lastModified + 60000 is greater than timestamp. Therefore `out.collect(new Tuple2<String, Long>(result.key, result.count));` will never happen.
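
Taking the numbers above at face value, the guard is a simple equality; a tiny sketch (plain Java, DELAY standing in for the 60000 ms offset from the docs example) showing that it holds only when lastModified still stores the timestamp of the record that registered the timer:

```java
public class TimerCheck {
    static final long DELAY = 60000L; // 60 seconds, as in the docs example

    /** The onTimer guard from the docs example: fire only for the latest timer. */
    static boolean guardFires(long lastModified, long timerTimestamp) {
        return timerTimestamp == lastModified + DELAY;
    }
}
```

So whether the branch is reachable comes down to what processElement writes into lastModified, which is exactly the disputed line in the ticket.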


> Proccess Function Example could not work
> 
>
> Key: FLINK-19167
> URL: https://issues.apache.org/jira/browse/FLINK-19167
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.1
>Reporter: tinny cat
>Priority: Major
>
> Section "*Process Function Example*" of 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/process_function.html]
>  currently is:
> {code:java}
> // Some comments here
> @Override
> public void processElement(
> Tuple2<String, String> value, 
> Context ctx, 
> Collector<Tuple2<String, Long>> out) throws Exception {
> // retrieve the current count
> CountWithTimestamp current = state.value();
> if (current == null) {
> current = new CountWithTimestamp();
> current.key = value.f0;
> }
> // update the state's count
> current.count++;
> // set the state's timestamp to the record's assigned event time timestamp
> current.lastModified = ctx.timestamp();
> // write the state back
> state.update(current);
> // schedule the next timer 60 seconds from the current event time
> ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
> }
> @Override
> public void onTimer(
> long timestamp, 
> OnTimerContext ctx, 
> Collector<Tuple2<String, Long>> out) throws Exception {
> // get the state for the key that scheduled the timer
> CountWithTimestamp result = state.value();
> // check if this is an outdated timer or the latest timer
> // this will never happen
> if (timestamp == result.lastModified + 60000) {
> // emit the state on timeout
> out.collect(new Tuple2<String, Long>(result.key, result.count));
> }
> }
> {code}
> however, it should be: 
> {code:java}
> @Override
> public void processElement(
> Tuple2<String, String> value, 
> Context ctx, 
> Collector<Tuple2<String, Long>> out) throws Exception {
> // retrieve the current count
> CountWithTimestamp current = state.value();
> if (current == null) {
> current = new CountWithTimestamp();
> current.key = value.f0;
> }
> // update the state's count
> current.count++;
> // set the state's timestamp to the record's assigned event time timestamp
> // it should be the previous watermark
> current.lastModified = ctx.timerService().currentWatermark();
> // write the state back
> state.update(current);
> // schedule the next timer 60 seconds from the current event time
> ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
> }
> @Override
> public void onTimer(
> long timestamp, 
> OnTimerContext ctx, 
> Collector<Tuple2<String, Long>> out) throws Exception {
> // get the state for the key that scheduled the timer
> CountWithTimestamp result = state.value();
> // check if this is an outdated timer or the latest timer
> if (timestamp == result.lastModified + 60000) {
> // emit the state on timeout
> out.collect(new Tuple2<String, Long>(result.key, result.count));
> }
> }
> {code}
> `current.lastModified = ctx.timestamp();` should be `current.lastModified = 
> ctx.timerService().currentWatermark();`; otherwise, `timestamp == 
> result.lastModified + 60000` will never hold



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13388: [FLINK-19184][python] Add Batch Physical Pandas Group Aggregate Rule and RelNode

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13388:
URL: https://github.com/apache/flink/pull/13388#issuecomment-692422938


   
   ## CI report:
   
   * 6d3f77bd08ed3dba62375807a06325e320b98fb3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6506)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Comment Edited] (FLINK-19167) Proccess Function Example could not work

2020-09-14 Thread tinny cat (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193306#comment-17193306
 ] 

tinny cat edited comment on FLINK-19167 at 9/15/20, 2:38 AM:
-

the watermark is:

{code:java}
stream
.assignTimestampsAndWatermarks(new AssignerWithPeriodicWatermarks<UserAction>() {
private long currentMaxTimestamp = 0L;
private long maxOutOfOrderness = 1L;
private Watermark watermark = null;
@Override
public long extractTimestamp(UserAction element, long 
previousElementTimestamp) {
long timestamp = element.viewTime;
currentMaxTimestamp = Math.max(timestamp, currentMaxTimestamp);
return currentMaxTimestamp;
}
@Nullable
@Override
public Watermark getCurrentWatermark() {
watermark = new Watermark(currentMaxTimestamp - 
maxOutOfOrderness);
return watermark;
}
})
{code}



was (Author: tinny):
the watermark is:

{code:java}
stream
.assignTimestampsAndWatermarks(new AssignerWithPeriodicWatermarks<UserAction>() {
private long currentMaxTimestamp = 0L;
private long maxOutOfOrderness = 1L;
private Watermark watermark = null;
@Override
public long extractTimestamp(UserAction element, long 
previousElementTimestamp) {
long timestamp = element.viewTime;
currentMaxTimestamp = Math.max(timestamp, currentMaxTimestamp);
}
@Nullable
@Override
public Watermark getCurrentWatermark() {
watermark = new Watermark(currentMaxTimestamp - 
maxOutOfOrderness);
return watermark;
}
})
{code}


> Proccess Function Example could not work
> 
>
> Key: FLINK-19167
> URL: https://issues.apache.org/jira/browse/FLINK-19167
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.1
>Reporter: tinny cat
>Priority: Major
>
> Section "*Process Function Example*" of 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/process_function.html]
>  currently is:
> {code:java}
> // Some comments here
> @Override
> public void processElement(
> Tuple2<String, String> value, 
> Context ctx, 
> Collector<Tuple2<String, Long>> out) throws Exception {
> // retrieve the current count
> CountWithTimestamp current = state.value();
> if (current == null) {
> current = new CountWithTimestamp();
> current.key = value.f0;
> }
> // update the state's count
> current.count++;
> // set the state's timestamp to the record's assigned event time timestamp
> current.lastModified = ctx.timestamp();
> // write the state back
> state.update(current);
> // schedule the next timer 60 seconds from the current event time
> ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
> }
> @Override
> public void onTimer(
> long timestamp, 
> OnTimerContext ctx, 
> Collector<Tuple2<String, Long>> out) throws Exception {
> // get the state for the key that scheduled the timer
> CountWithTimestamp result = state.value();
> // check if this is an outdated timer or the latest timer
> // this will never happen
> if (timestamp == result.lastModified + 60000) {
> // emit the state on timeout
> out.collect(new Tuple2<String, Long>(result.key, result.count));
> }
> }
> {code}
> however, it should be: 
> {code:java}
> @Override
> public void processElement(
> Tuple2<String, String> value, 
> Context ctx, 
> Collector<Tuple2<String, Long>> out) throws Exception {
> // retrieve the current count
> CountWithTimestamp current = state.value();
> if (current == null) {
> current = new CountWithTimestamp();
> current.key = value.f0;
> }
> // update the state's count
> current.count++;
> // set the state's timestamp to the record's assigned event time timestamp
> // it should be the previous watermark
> current.lastModified = ctx.timerService().currentWatermark();
> // write the state back
> state.update(current);
> // schedule the next timer 60 seconds from the current event time
> ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
> }
> @Override
> public void onTimer(
> long timestamp, 
> OnTimerContext ctx, 
> Collector<Tuple2<String, Long>> out) throws Exception {
> // get the state for the key that scheduled the timer
> CountWithTimestamp result = state.value();
> 

[GitHub] [flink] KarmaGYZ commented on a change in pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-14 Thread GitBox


KarmaGYZ commented on a change in pull request #13311:
URL: https://github.com/apache/flink/pull/13311#discussion_r488345943



##
File path: 
flink-yarn/src/test/java/org/apache/flink/yarn/TestingYarnNMClientAsync.java
##
@@ -51,30 +65,74 @@ public void stopContainerAsync(ContainerId containerId, 
NodeId nodeId) {
this.stopContainerAsyncConsumer.accept(containerId, nodeId, 
callbackHandler);
}
 
-   void setStartContainerAsyncConsumer(TriConsumer startContainerAsyncConsumer) {
-   this.startContainerAsyncConsumer = 
Preconditions.checkNotNull(startContainerAsyncConsumer);
-   }
-
-   void setStopContainerAsyncConsumer(TriConsumer stopContainerAsyncConsumer) {
-   this.stopContainerAsyncConsumer = 
Preconditions.checkNotNull(stopContainerAsyncConsumer);
+   static Builder builder() {
+   return new Builder();
}
 
// 

//  Override lifecycle methods to avoid actually starting the service
// 

 
@Override
-   protected void serviceInit(Configuration conf) throws Exception {
-   // noop
+   public void init(Configuration conf) {
+   clientInitRunnable.run();
}
 
@Override
-   protected void serviceStart() throws Exception {
-   // noop
+   public void start() {
+   clientStartRunnable.run();
}
 
@Override
-   protected void serviceStop() throws Exception {
-   // noop
+   public void stop() {
+   clientStopRunnable.run();
+   }
+
+   /**
+* Builder class for {@link TestingYarnAMRMClientAsync}.
+*/
+   public static class Builder {
+   private volatile TriConsumer startContainerAsyncConsumer = (ignored1, ignored2, ignored3) 
-> {};
+   private volatile TriConsumer stopContainerAsyncConsumer = (ignored1, ignored2, ignored3) -> 
{};
+   private volatile Runnable clientInitRunnable = () -> {};
+   private volatile Runnable clientStartRunnable = () -> {};
+   private volatile Runnable clientStopRunnable = () -> {};

Review comment:
   Ditto.

##
File path: 
flink-yarn/src/test/java/org/apache/flink/yarn/TestingYarnAMRMClientAsync.java
##
@@ -45,25 +45,40 @@
  */
 public class TestingYarnAMRMClientAsync extends 
AMRMClientAsyncImpl {
 
-   private volatile Function, List>>
-   getMatchingRequestsFunction = ignored -> 
Collections.emptyList();
-   private volatile BiConsumer addContainerRequestConsumer = (ignored1, ignored2) -> {};
-   private volatile BiConsumer removeContainerRequestConsumer = (ignored1, ignored2) -> {};
-   private volatile BiConsumer 
releaseAssignedContainerConsumer = (ignored1, ignored2) -> {};
-   private volatile Consumer setHeartbeatIntervalConsumer = 
(ignored) -> {};
-   private volatile TriFunction registerApplicationMasterFunction =
-   (ignored1, ignored2, ignored3) -> 
RegisterApplicationMasterResponse.newInstance(
-   Resource.newInstance(0, 0),
-   Resource.newInstance(Integer.MAX_VALUE, 
Integer.MAX_VALUE),
-   Collections.emptyMap(),
-   null,
-   Collections.emptyList(),
-   null,
-   Collections.emptyList());
-   private volatile TriConsumer 
unregisterApplicationMasterConsumer = (ignored1, ignored2, ignored3) -> {};
-
-   TestingYarnAMRMClientAsync(CallbackHandler callbackHandler) {
+   private volatile Function, List>> 
getMatchingRequestsFunction;
+   private volatile BiConsumer addContainerRequestConsumer;
+   private volatile BiConsumer removeContainerRequestConsumer;
+   private volatile BiConsumer 
releaseAssignedContainerConsumer;
+   private volatile Consumer setHeartbeatIntervalConsumer;
+   private volatile TriFunction registerApplicationMasterFunction;
+   private volatile TriConsumer 
unregisterApplicationMasterConsumer;
+   private volatile Runnable clientInitRunnable;
+   private volatile Runnable clientStartRunnable;
+   private volatile Runnable clientStopRunnable;

Review comment:
   Ditto.

##
File path: 
flink-yarn/src/test/java/org/apache/flink/yarn/TestingYarnAMRMClientAsync.java
##
@@ -101,58 +116,111 @@ public void 
unregisterApplicationMaster(FinalApplicationStatus appStatus, String
unregisterApplicationMasterConsumer.accept(appStatus, 
appMessage, appTrackingUrl);
}
 
-   void setGetMatchingRequestsFunction(
-   Function, 
List>>
-   getMatchingRequestsFunction) {
-   this.getMatchingRequestsFunction = 

[GitHub] [flink] KarmaGYZ commented on a change in pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-14 Thread GitBox


KarmaGYZ commented on a change in pull request #13311:
URL: https://github.com/apache/flink/pull/13311#discussion_r488345856



##
File path: 
flink-yarn/src/test/java/org/apache/flink/yarn/TestingYarnNMClientAsync.java
##
@@ -34,11 +34,25 @@
  */
 class TestingYarnNMClientAsync extends NMClientAsyncImpl {
 
-   private volatile TriConsumer startContainerAsyncConsumer = (ignored1, ignored2, ignored3) 
-> {};
-   private volatile TriConsumer 
stopContainerAsyncConsumer = (ignored1, ignored2, ignored3) -> {};
+   private volatile TriConsumer startContainerAsyncConsumer;
+   private volatile TriConsumer 
stopContainerAsyncConsumer;
+   private volatile Runnable clientInitRunnable;
+   private volatile Runnable clientStartRunnable;
+   private volatile Runnable clientStopRunnable;

Review comment:
   Before we introduced the `Builder` class, these fields could be set by 
multiple threads. You're right, we do not need `volatile` now.
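
The point about volatile no longer being needed follows from the Builder making the fields effectively immutable after construction; an illustrative sketch (names are simplified, not Flink's actual test classes):

```java
public class TestingClient {
    // final fields are safely published via the constructor, so no volatile is needed
    private final Runnable clientInitRunnable;
    private final Runnable clientStartRunnable;

    private TestingClient(Builder b) {
        this.clientInitRunnable = b.clientInitRunnable;
        this.clientStartRunnable = b.clientStartRunnable;
    }

    public void init() { clientInitRunnable.run(); }
    public void start() { clientStartRunnable.run(); }

    /** All mutation happens before build(); afterwards the client is immutable. */
    public static class Builder {
        private Runnable clientInitRunnable = () -> {};
        private Runnable clientStartRunnable = () -> {};

        public Builder setClientInitRunnable(Runnable r) { this.clientInitRunnable = r; return this; }
        public Builder setClientStartRunnable(Runnable r) { this.clientStartRunnable = r; return this; }
        public TestingClient build() { return new TestingClient(this); }
    }
}
```

Since the Builder is confined to the constructing thread and the built object only has final fields, the Java memory model guarantees visibility without volatile.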









[jira] [Commented] (FLINK-17949) KafkaShuffleITCase.testSerDeIngestionTime:156->testRecordSerDe:388 expected:<310> but was:<0>

2020-09-14 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195819#comment-17195819
 ] 

Dian Fu commented on FLINK-17949:
-

Another instance on 1.11: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6488=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c5f0071e-1851-543e-9a45-9ac140befc32

> KafkaShuffleITCase.testSerDeIngestionTime:156->testRecordSerDe:388 
> expected:<310> but was:<0>
> -
>
> Key: FLINK-17949
> URL: https://issues.apache.org/jira/browse/FLINK-17949
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Attachments: logs-ci-kafkagelly-1590500380.zip, 
> logs-ci-kafkagelly-1590524911.zip
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2209=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-05-26T13:35:19.4022562Z [ERROR] 
> testSerDeIngestionTime(org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase)
>   Time elapsed: 5.786 s  <<< FAILURE!
> 2020-05-26T13:35:19.4023185Z java.lang.AssertionError: expected:<310> but 
> was:<0>
> 2020-05-26T13:35:19.4023498Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-05-26T13:35:19.4023825Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-05-26T13:35:19.4024461Z  at 
> org.junit.Assert.assertEquals(Assert.java:645)
> 2020-05-26T13:35:19.4024900Z  at 
> org.junit.Assert.assertEquals(Assert.java:631)
> 2020-05-26T13:35:19.4028546Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testRecordSerDe(KafkaShuffleITCase.java:388)
> 2020-05-26T13:35:19.4029629Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testSerDeIngestionTime(KafkaShuffleITCase.java:156)
> 2020-05-26T13:35:19.4030253Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-26T13:35:19.4030673Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-26T13:35:19.4031332Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-26T13:35:19.4031763Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-26T13:35:19.4032155Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-26T13:35:19.4032630Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-26T13:35:19.4033188Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-26T13:35:19.4033638Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-26T13:35:19.4034103Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-26T13:35:19.4034593Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-26T13:35:19.4035118Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-26T13:35:19.4035570Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-26T13:35:19.4035888Z  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] pyscala commented on pull request #13276: [FLINK-18604][connectors/HBase] HBase ConnectorDescriptor can not work in Table API

2020-09-14 Thread GitBox


pyscala commented on pull request #13276:
URL: https://github.com/apache/flink/pull/13276#issuecomment-692424268


   @JingsongLi I have finished; I hope you can take a look, thanks.







[GitHub] [flink-playgrounds] shuiqiangchen edited a comment on pull request #16: [FLINK-19145][walkthroughs] Add PyFlink-walkthrough to Flink playground.

2020-09-14 Thread GitBox


shuiqiangchen edited a comment on pull request #16:
URL: https://github.com/apache/flink-playgrounds/pull/16#issuecomment-692423562


   Hi @morsapaes @alpinegizmo, sorry for the late reply, I was on vacation for 
the last four days. I highly appreciate @morsapaes's valuable comments and 
warmly welcome @morsapaes to help review this PR. I have updated the PR 
according to your suggestions; please have a look at it. 







[GitHub] [flink-playgrounds] shuiqiangchen commented on pull request #16: [FLINK-19145][walkthroughs] Add PyFlink-walkthrough to Flink playground.

2020-09-14 Thread GitBox


shuiqiangchen commented on pull request #16:
URL: https://github.com/apache/flink-playgrounds/pull/16#issuecomment-692423562


   Hi @morsapaes @alpinegizmo, sorry for the late reply, I was on vacation for 
the last four days. I highly appreciate @morsapaes's valuable comments and 
warmly welcome @morsapaes to help review this PR. I have updated the PR 
according to your suggestions; please have a look at it. 







[GitHub] [flink] flinkbot commented on pull request #13388: [FLINK-19184][python] Add Batch Physical Pandas Group Aggregate Rule and RelNode

2020-09-14 Thread GitBox


flinkbot commented on pull request #13388:
URL: https://github.com/apache/flink/pull/13388#issuecomment-692422938


   
   ## CI report:
   
   * 6d3f77bd08ed3dba62375807a06325e320b98fb3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-19236) Optimize the performance of Python UDAF

2020-09-14 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-19236:
-

 Summary: Optimize the performance of Python UDAF
 Key: FLINK-19236
 URL: https://issues.apache.org/jira/browse/FLINK-19236
 Project: Flink
  Issue Type: Sub-task
Reporter: Wei Zhong






--


[jira] [Created] (FLINK-19233) Support DISTINCT KeyWord for Python UDAF

2020-09-14 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-19233:
-

 Summary: Support DISTINCT KeyWord for Python UDAF
 Key: FLINK-19233
 URL: https://issues.apache.org/jira/browse/FLINK-19233
 Project: Flink
  Issue Type: Sub-task
Reporter: Wei Zhong






--


[jira] [Created] (FLINK-19235) Support mixed use with built-in aggs for Python UDAF

2020-09-14 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-19235:
-

 Summary: Support mixed use with built-in aggs for Python UDAF
 Key: FLINK-19235
 URL: https://issues.apache.org/jira/browse/FLINK-19235
 Project: Flink
  Issue Type: Sub-task
Reporter: Wei Zhong






--


[jira] [Created] (FLINK-19234) Support FILTER KeyWord for Python UDAF

2020-09-14 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-19234:
-

 Summary: Support FILTER KeyWord for Python UDAF
 Key: FLINK-19234
 URL: https://issues.apache.org/jira/browse/FLINK-19234
 Project: Flink
  Issue Type: Sub-task
Reporter: Wei Zhong






--


[jira] [Created] (FLINK-19230) Support Python UDAF on blink batch planner

2020-09-14 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-19230:
-

 Summary: Support Python UDAF on blink batch planner
 Key: FLINK-19230
 URL: https://issues.apache.org/jira/browse/FLINK-19230
 Project: Flink
  Issue Type: Sub-task
Reporter: Wei Zhong






--


[jira] [Created] (FLINK-19231) Support ListState and ListView for Python UDAF

2020-09-14 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-19231:
-

 Summary: Support ListState and ListView for Python UDAF
 Key: FLINK-19231
 URL: https://issues.apache.org/jira/browse/FLINK-19231
 Project: Flink
  Issue Type: Sub-task
Reporter: Wei Zhong






--


[jira] [Created] (FLINK-19232) Support MapState and MapView for Python UDAF

2020-09-14 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-19232:
-

 Summary: Support MapState and MapView for Python UDAF
 Key: FLINK-19232
 URL: https://issues.apache.org/jira/browse/FLINK-19232
 Project: Flink
  Issue Type: Sub-task
Reporter: Wei Zhong






--


[GitHub] [flink] flinkbot edited a comment on pull request #13384: [FLINK-19224][state-processor-api] Support reading window operator state

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13384:
URL: https://github.com/apache/flink/pull/13384#issuecomment-692330852


   
   ## CI report:
   
   * 2e73e742abe73fd0324c3647403131817ef2aada Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6502)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13388: [FLINK-19184][python] Add Batch Physical Pandas Group Aggregate Rule and RelNode

2020-09-14 Thread GitBox


flinkbot commented on pull request #13388:
URL: https://github.com/apache/flink/pull/13388#issuecomment-692417535


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6d3f77bd08ed3dba62375807a06325e320b98fb3 (Tue Sep 15 
02:08:21 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-19184).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-19184) Add Batch Physical Pandas Group Aggregate Rule and RelNode

2020-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19184:
---
Labels: pull-request-available  (was: )

> Add Batch Physical Pandas Group Aggregate Rule and RelNode
> --
>
> Key: FLINK-19184
> URL: https://issues.apache.org/jira/browse/FLINK-19184
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Add Batch Physical Pandas Group Aggregate Rule and RelNode



--


[GitHub] [flink] HuangXingBo opened a new pull request #13388: [FLINK-19184][python] Add Batch Physical Pandas Group Aggregate Rule and RelNode

2020-09-14 Thread GitBox


HuangXingBo opened a new pull request #13388:
URL: https://github.com/apache/flink/pull/13388


   ## What is the purpose of the change
   
   *This pull request will add Batch Physical Pandas Group Aggregate Rule and 
RelNode*
   
   
   ## Brief change log
   
 - *add `BatchExecPythonAggregateRule`*
 - *add `BatchExecPythonGroupAggregate`*
 - *change the match condition of `BatchExecHashAggRule` and 
`BatchExecSortAggRule`*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Added ut test `PandasGroupAggregateTest`*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[jira] [Created] (FLINK-19229) Support ValueState and Python UDAF on blink stream planner

2020-09-14 Thread Wei Zhong (Jira)
Wei Zhong created FLINK-19229:
-

 Summary: Support ValueState and Python UDAF on blink stream planner
 Key: FLINK-19229
 URL: https://issues.apache.org/jira/browse/FLINK-19229
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Wei Zhong






--


[jira] [Created] (FLINK-19228) Avoid accessing FileSystem in client for file system connector

2020-09-14 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-19228:


 Summary: Avoid accessing FileSystem in client for file system 
connector
 Key: FLINK-19228
 URL: https://issues.apache.org/jira/browse/FLINK-19228
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Reporter: Jingsong Lee
 Fix For: 1.11.3


On the client, there may not be a corresponding file system plugin, so we 
cannot access the specific file system there. Instead of accessing the file 
system on the client, we should move that work to the job manager or task 
manager.

Currently, it seems the only client-side file system access is creating the 
temporary directory through FileSystem in {{toStagingPath}}, and this is 
completely avoidable.
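
The idea above can be sketched in plain Python; `StagingSink` and its `open()` method are illustrative stand-ins for a file system sink, not Flink's actual connector API:

```python
# Hedged sketch: the client only records the staging path as a string; the
# directory is created lazily on the task side, where the proper file system
# plugin is guaranteed to be available.
import os


class StagingSink:
    def __init__(self, staging_path: str):
        # Client side: record the path only, no file system access yet,
        # so no file system plugin is needed on the client.
        self.staging_path = staging_path

    def open(self) -> str:
        # Task side: create the staging directory where access is safe.
        os.makedirs(self.staging_path, exist_ok=True)
        return self.staging_path
```

Shipping only the path string keeps the client free of any file system dependency; the directory is created where the plugin is loaded.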



--


[GitHub] [flink] RocMarshal commented on a change in pull request #13225: [FLINK-18974][docs-zh]Translate the 'User-Defined Functions' page of "Application Development's DataStream API" into Chinese

2020-09-14 Thread GitBox


RocMarshal commented on a change in pull request #13225:
URL: https://github.com/apache/flink/pull/13225#discussion_r488333064



##
File path: docs/dev/user_defined_functions.zh.md
##
@@ -147,95 +153,78 @@ data.map (new RichMapFunction[String, Int] {
 
 
 
-Rich functions provide, in addition to the user-defined function (map,
-reduce, etc), four methods: `open`, `close`, `getRuntimeContext`, and
-`setRuntimeContext`. These are useful for parameterizing the function
-(see [Passing Parameters to Functions]({{ site.baseurl 
}}/dev/batch/index.html#passing-parameters-to-functions)),
-creating and finalizing local state, accessing broadcast variables (see
-[Broadcast Variables]({{ site.baseurl 
}}/dev/batch/index.html#broadcast-variables)), and for accessing runtime
-information such as accumulators and counters (see
-[Accumulators and Counters](#accumulators--counters)), and information
-on iterations (see [Iterations]({{ site.baseurl }}/dev/batch/iterations.html)).
+除了用户自定义的 function(map,reduce 等),Rich functions 
还提供了四个方法:`open`、`close`、`getRuntimeContext` 和
+`setRuntimeContext`。这些方法对于参数化 function
+(参阅 [给 function 传递参数]({% link dev/batch/index.zh.md 
%}#passing-parameters-to-functions)),

Review comment:
   ```suggestion
   (参阅 [给 function 传递参数]({% link dev/batch/index.zh.md 
%}#passing-parameters-to-functions)),
   ```

##
File path: docs/dev/user_defined_functions.zh.md
##
@@ -147,95 +153,78 @@ data.map (new RichMapFunction[String, Int] {
 
 
 
-Rich functions provide, in addition to the user-defined function (map,
-reduce, etc), four methods: `open`, `close`, `getRuntimeContext`, and
-`setRuntimeContext`. These are useful for parameterizing the function
-(see [Passing Parameters to Functions]({{ site.baseurl 
}}/dev/batch/index.html#passing-parameters-to-functions)),
-creating and finalizing local state, accessing broadcast variables (see
-[Broadcast Variables]({{ site.baseurl 
}}/dev/batch/index.html#broadcast-variables)), and for accessing runtime
-information such as accumulators and counters (see
-[Accumulators and Counters](#accumulators--counters)), and information
-on iterations (see [Iterations]({{ site.baseurl }}/dev/batch/iterations.html)).
+除了用户自定义的 function(map,reduce 等),Rich functions 
还提供了四个方法:`open`、`close`、`getRuntimeContext` 和
+`setRuntimeContext`。这些方法对于参数化 function
+(参阅 [给 function 传递参数]({% link dev/batch/index.zh.md 
%}#passing-parameters-to-functions)),
+创建和最终确定本地状态,访问广播变量(参阅

Review comment:
   ```suggestion
   创建和最终确定本地状态,访问广播变量(参阅
   ```

##
File path: docs/dev/user_defined_functions.zh.md
##
@@ -147,95 +153,78 @@ data.map (new RichMapFunction[String, Int] {
 
 
 
-Rich functions provide, in addition to the user-defined function (map,
-reduce, etc), four methods: `open`, `close`, `getRuntimeContext`, and
-`setRuntimeContext`. These are useful for parameterizing the function
-(see [Passing Parameters to Functions]({{ site.baseurl 
}}/dev/batch/index.html#passing-parameters-to-functions)),
-creating and finalizing local state, accessing broadcast variables (see
-[Broadcast Variables]({{ site.baseurl 
}}/dev/batch/index.html#broadcast-variables)), and for accessing runtime
-information such as accumulators and counters (see
-[Accumulators and Counters](#accumulators--counters)), and information
-on iterations (see [Iterations]({{ site.baseurl }}/dev/batch/iterations.html)).
+除了用户自定义的 function(map,reduce 等),Rich functions 
还提供了四个方法:`open`、`close`、`getRuntimeContext` 和
+`setRuntimeContext`。这些方法对于参数化 function
+(参阅 [给 function 传递参数]({% link dev/batch/index.zh.md 
%}#passing-parameters-to-functions)),
+创建和最终确定本地状态,访问广播变量(参阅
+[广播变量]({% link dev/batch/index.zh.md %}#broadcast-variables 
)),以及访问运行时信息,例如累加器和计数器(参阅
+[累加器和计数器](#accumulators--counters)),以及迭代器的相关信息(参阅 [迭代器]({% link 
dev/batch/iterations.zh.md %}))

Review comment:
   ```suggestion
   [累加器和计数器](#accumulators--counters)),以及迭代器的相关信息(参阅 [迭代器]({% link 
dev/batch/iterations.zh.md %}))
   ```

##
File path: docs/dev/user_defined_functions.zh.md
##
@@ -147,95 +153,78 @@ data.map (new RichMapFunction[String, Int] {
 
 
 
-Rich functions provide, in addition to the user-defined function (map,
-reduce, etc), four methods: `open`, `close`, `getRuntimeContext`, and
-`setRuntimeContext`. These are useful for parameterizing the function
-(see [Passing Parameters to Functions]({{ site.baseurl 
}}/dev/batch/index.html#passing-parameters-to-functions)),
-creating and finalizing local state, accessing broadcast variables (see
-[Broadcast Variables]({{ site.baseurl 
}}/dev/batch/index.html#broadcast-variables)), and for accessing runtime
-information such as accumulators and counters (see
-[Accumulators and Counters](#accumulators--counters)), and information
-on iterations (see [Iterations]({{ site.baseurl }}/dev/batch/iterations.html)).
+除了用户自定义的 function(map,reduce 等),Rich functions 
还提供了四个方法:`open`、`close`、`getRuntimeContext` 和
+`setRuntimeContext`。这些方法对于参数化 function

[GitHub] [flink-playgrounds] shuiqiangchen commented on a change in pull request #16: [FLINK-19145][walkthroughs] Add PyFlink-walkthrough to Flink playground.

2020-09-14 Thread GitBox


shuiqiangchen commented on a change in pull request #16:
URL: https://github.com/apache/flink-playgrounds/pull/16#discussion_r488335267



##
File path: pyflink-walkthrough/payment_msg_proccessing.py
##
@@ -0,0 +1,93 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from pyflink.datastream import StreamExecutionEnvironment, TimeCharacteristic
+from pyflink.table import StreamTableEnvironment, DataTypes, 
EnvironmentSettings
+from pyflink.table.udf import udf
+
+
+provinces = ("Beijing", "Shanghai", "Hangzhou", "Shenzhen", "Jiangxi", 
"Chongqing", "Xizang")
+
+
+@udf(input_types=[DataTypes.STRING()], result_type=DataTypes.STRING())
+def province_id_to_name(id):
+return provinces[id]
+
+
+def log_processing():
+env = StreamExecutionEnvironment.get_execution_environment()
+env_settings = EnvironmentSettings.Builder().use_blink_planner().build()

Review comment:
   This could be removed to make the code simpler.









[GitHub] [flink] flinkbot edited a comment on pull request #13387: [Hotfix][typos]Modify debug log to print the compiling code

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13387:
URL: https://github.com/apache/flink/pull/13387#issuecomment-692407926


   
   ## CI report:
   
   * 5ae3077ffa0d5f79358008be1515a7985706c062 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6505)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-19193) Recommend stop-with-savepoint in upgrade guidelines

2020-09-14 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19193:

Fix Version/s: (was: 1.11.2)
   1.11.3

> Recommend stop-with-savepoint in upgrade guidelines
> ---
>
> Key: FLINK-19193
> URL: https://issues.apache.org/jira/browse/FLINK-19193
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.12.0, 1.11.3
>
>
> This is about step one in the documentation here: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/upgrading.html#step-1-take-a-savepoint-in-the-old-flink-version
> We currently advise users to take a savepoint, without telling them to stop 
> or cancel the job afterwards. We should update this to suggest stopping the 
> job with a savepoint in step one.



--


[jira] [Updated] (FLINK-19227) The catalog is still created after opening failed in catalog registering

2020-09-14 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-19227:
-
Description: 
> When I create the HiveCatalog and Flink is not able to connect to the 
>HiveMetastore, the statement can not be executed, but the catalog is still 
>created. Subsequent attempts to query the tables result in an NPE.

In CatalogManager.registerCatalog.
 Considering that open can easily fail, we should put the catalog into the 
catalog manager only after it has been opened successfully.

  was:
In CatalogManager.registerCatalog.
Consider open is a relatively easy operation to fail, we should put catalog 
into catalog manager after its open.


> The catalog is still created after opening failed in catalog registering
> 
>
> Key: FLINK-19227
> URL: https://issues.apache.org/jira/browse/FLINK-19227
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.3
>
>
> > When I create the HiveCatalog and Flink is not able to connect to the 
> >HiveMetastore, the statement can not be executed, but the catalog is still 
> >created. Subsequent attempts to query the tables result in an NPE.
> In CatalogManager.registerCatalog.
>  Considering that open can easily fail, we should put the catalog into the 
> catalog manager only after it has been opened successfully.



--


[jira] [Updated] (FLINK-19227) The catalog is still created after opening failed in catalog registering

2020-09-14 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-19227:
-
Issue Type: Bug  (was: New Feature)

> The catalog is still created after opening failed in catalog registering
> 
>
> Key: FLINK-19227
> URL: https://issues.apache.org/jira/browse/FLINK-19227
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.3
>
>
> > When I create the HiveCatalog and Flink is not able to connect to the 
> >HiveMetastore, the statement can not be executed, but the catalog is still 
> >created. Subsequent attempts to query the tables result in an NPE.
> In CatalogManager.registerCatalog.
>  Considering that open can easily fail, we should put the catalog into the 
> catalog manager only after it has been opened successfully.



--


[jira] [Created] (FLINK-19227) The catalog is still created after opening failed in catalog registering

2020-09-14 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-19227:


 Summary: The catalog is still created after opening failed in 
catalog registering
 Key: FLINK-19227
 URL: https://issues.apache.org/jira/browse/FLINK-19227
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Jingsong Lee
 Fix For: 1.11.3


In CatalogManager.registerCatalog.
Considering that open can easily fail, we should put the catalog into the 
catalog manager only after it has been opened successfully.
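
A minimal sketch of the proposed ordering, using simplified stand-ins rather than Flink's real `CatalogManager` and `Catalog` classes: the catalog is opened first and stored only if `open()` succeeds, so a failed open (for example, an unreachable HiveMetastore) leaves nothing half-registered:

```python
# Hedged sketch with illustrative stand-ins, not Flink's actual API.
class BrokenCatalog:
    """A catalog whose open() always fails, standing in for a bad HiveCatalog."""

    def open(self):
        raise ConnectionError("cannot reach the Hive metastore")


class CatalogManager:
    def __init__(self):
        self._catalogs = {}

    def register_catalog(self, name, catalog):
        catalog.open()                   # may raise; nothing is registered yet
        self._catalogs[name] = catalog   # reached only on successful open()

    def is_registered(self, name):
        return name in self._catalogs
```

With this ordering, a subsequent lookup of the failed catalog simply reports it as unregistered instead of returning a broken instance.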



--


[GitHub] [flink] flinkbot commented on pull request #13387: [Hotfix][typos]Modify debug log to print the compiling code

2020-09-14 Thread GitBox


flinkbot commented on pull request #13387:
URL: https://github.com/apache/flink/pull/13387#issuecomment-692407926


   
   ## CI report:
   
   * 5ae3077ffa0d5f79358008be1515a7985706c062 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13387: [Hotfix][typos]Modify debug log to print the compiling code

2020-09-14 Thread GitBox


flinkbot commented on pull request #13387:
URL: https://github.com/apache/flink/pull/13387#issuecomment-692403820


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5ae3077ffa0d5f79358008be1515a7985706c062 (Tue Sep 15 
01:22:20 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] wangxlong opened a new pull request #13387: [Hotfix][typos]Modify debug log to print the compiling code

2020-09-14 Thread GitBox


wangxlong opened a new pull request #13387:
URL: https://github.com/apache/flink/pull/13387


   
   ## What is the purpose of the change
   
   Modify the debug log to print the code being compiled.
   
   ## Brief change log
   
   Change the debug log message from the function name to the code being 
compiled in OperatorCodeGenerator#generateOneInputStreamOperator and 
OperatorCodeGenerator#generateTwoInputStreamOperator.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] flinkbot edited a comment on pull request #13386: [FLINK-19226][Kinesis] Updated FullJitterBackoff defaults for describeStream and describeStreamConsumer

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13386:
URL: https://github.com/apache/flink/pull/13386#issuecomment-692379768


   
   ## CI report:
   
   * 61a8bc9d3ddb52628c042b240829ef977b1fe6f2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6504)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13386: [FLINK-19226][Kinesis] Updated FullJitterBackoff defaults for describeStream and describeStreamConsumer

2020-09-14 Thread GitBox


flinkbot commented on pull request #13386:
URL: https://github.com/apache/flink/pull/13386#issuecomment-692379768


   
   ## CI report:
   
   * 61a8bc9d3ddb52628c042b240829ef977b1fe6f2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13386: [FLINK-19226][Kinesis] Updated FullJitterBackoff defaults for describeStream and describeStreamConsumer

2020-09-14 Thread GitBox


flinkbot commented on pull request #13386:
URL: https://github.com/apache/flink/pull/13386#issuecomment-692377293


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 61a8bc9d3ddb52628c042b240829ef977b1fe6f2 (Mon Sep 14 
23:49:21 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-19226).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-19226) [Kinesis] [EFO] Connector reaches default max attempts for describeStream and describeStreamConsumer when parallelism is high

2020-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19226:
---
Labels: pull-request-available  (was: )

> [Kinesis] [EFO] Connector reaches default max attempts for describeStream and 
> describeStreamConsumer when parallelism is high
> -
>
> Key: FLINK-19226
> URL: https://issues.apache.org/jira/browse/FLINK-19226
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Hong Liang Teoh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> *Background*
> When lazily registering the stream consumer on apps with high parallelism, 
> EFO connector hits default maximum number of attempts when calling 
> describeStream and describeStreamConsumer on the Kinesis Streams API.
> The default FullJitterBackoff constants are tuned to prevent this when 
> parallelism of 1024 is used.
> *Scope*
>  * See 
> [FLIP|https://cwiki.apache.org/confluence/display/FLINK/FLIP-128%3A+Enhanced+Fan+Out+for+AWS+Kinesis+Consumers]
>  for full list of configuration options
>  * Suggested changes:
> |flink.stream.describe.maxretries|50|
> |flink.stream.describe.backoff.base|2000L|
> |flink.stream.describe.backoff.max|5000L|
> |flink.stream.describestreamconsumer.maxretries|50|
> |flink.stream.describestreamconsumer.backoff.base|2000L|
> |flink.stream.describestreamconsumer.backoff.max|5000L|



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hlteoh37 opened a new pull request #13386: [FLINK-19226][Kinesis] Updated FullJitterBackoff defaults for describeStream and describeStreamConsumer

2020-09-14 Thread GitBox


hlteoh37 opened a new pull request #13386:
URL: https://github.com/apache/flink/pull/13386


   ## What is the purpose of the change
   Allow FlinkKinesisConsumer to use EFO record publisher with higher 
parallelism using the default config values.
   
   ## Brief change log
   
   - `ConsumerConfigConstants`
 - _Changed default FullJitterBackoff configuration for describeStream and 
describeStreamConsumer_
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   







[jira] [Created] (FLINK-19226) [Kinesis] [EFO] Connector reaches default max attempts for describeStream and describeStreamConsumer when parallelism is high

2020-09-14 Thread Hong Liang Teoh (Jira)
Hong Liang Teoh created FLINK-19226:
---

 Summary: [Kinesis] [EFO] Connector reaches default max attempts 
for describeStream and describeStreamConsumer when parallelism is high
 Key: FLINK-19226
 URL: https://issues.apache.org/jira/browse/FLINK-19226
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kinesis
Reporter: Hong Liang Teoh
 Fix For: 1.12.0


*Background*

When lazily registering the stream consumer on apps with high parallelism, the 
EFO connector hits the default maximum number of attempts when calling 
describeStream and describeStreamConsumer on the Kinesis Streams API.

The default FullJitterBackoff constants are tuned to prevent this when a 
parallelism of 1024 is used.

*Scope*
 * See 
[FLIP|https://cwiki.apache.org/confluence/display/FLINK/FLIP-128%3A+Enhanced+Fan+Out+for+AWS+Kinesis+Consumers]
 for full list of configuration options
 * Suggested changes:
|flink.stream.describe.maxretries|50|
|flink.stream.describe.backoff.base|2000L|
|flink.stream.describe.backoff.max|5000L|
|flink.stream.describestreamconsumer.maxretries|50|
|flink.stream.describestreamconsumer.backoff.base|2000L|
|flink.stream.describestreamconsumer.backoff.max|5000L|
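For context, the FullJitterBackoff strategy these settings configure follows the AWS "full jitter" scheme: grow the backoff exponentially up to a cap, then sleep a uniformly random fraction of it so that many parallel subtasks spread out their API calls. A minimal sketch (illustrative only; class and method names are assumptions, not Flink's actual implementation):

```java
import java.util.Random;

// Illustrative sketch of the AWS "full jitter" backoff scheme referenced by
// the settings above; NOT Flink's actual FullJitterBackoff class.
public final class FullJitterBackoffSketch {
    private static final Random RANDOM = new Random();

    // Exponential growth capped at maxMillis, then a uniformly random sleep
    // in [0, cap) so that concurrent subtasks do not retry in lock-step.
    public static long backoffMillis(long baseMillis, long maxMillis, int attempt) {
        double cap = Math.min((double) maxMillis, baseMillis * Math.pow(2, attempt));
        return (long) (RANDOM.nextDouble() * cap);
    }
}
```

With the suggested defaults (base 2000 ms, max 5000 ms), every computed sleep stays within the 0–5 second window regardless of the attempt number.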





[GitHub] [flink] flinkbot edited a comment on pull request #13385: [FLINK-18128][FLINK-19223][connectors] Fixes to Connector Base Availability logic

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13385:
URL: https://github.com/apache/flink/pull/13385#issuecomment-692360622


   
   ## CI report:
   
   * 0231272dac19c37325c150946ad3f87416e7b6b5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6503)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] StephanEwen commented on pull request #13344: [FLINK-10740] FLIP-27 File Source and various Updates on the Split Reader API

2020-09-14 Thread GitBox


StephanEwen commented on pull request #13344:
URL: https://github.com/apache/flink/pull/13344#issuecomment-692360653


   The tests in this PR fail due to bugs fixed in #13385







[GitHub] [flink] flinkbot commented on pull request #13385: [FLINK-18128][FLINK-19223][connectors] Fixes to Connector Base Availability logic

2020-09-14 Thread GitBox


flinkbot commented on pull request #13385:
URL: https://github.com/apache/flink/pull/13385#issuecomment-692360622


   
   ## CI report:
   
   * 0231272dac19c37325c150946ad3f87416e7b6b5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13385: [FLINK-18128][FLINK-19223][connectors] Fixes to Connector Base Availability logic

2020-09-14 Thread GitBox


flinkbot commented on pull request #13385:
URL: https://github.com/apache/flink/pull/13385#issuecomment-692355155


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0231272dac19c37325c150946ad3f87416e7b6b5 (Mon Sep 14 
22:45:54 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] StephanEwen commented on pull request #13366: [FLINK-17393][connector/common] Wakeup the SplitFetchers more elegantly.

2020-09-14 Thread GitBox


StephanEwen commented on pull request #13366:
URL: https://github.com/apache/flink/pull/13366#issuecomment-692355604


   I incorporated this PR in a follow-up PR with some more changes and bug 
fixes: #13385







[jira] [Updated] (FLINK-18128) CoordinatedSourceITCase.testMultipleSources gets stuck

2020-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18128:
---
Labels: pull-request-available test-stability  (was: test-stability)

> CoordinatedSourceITCase.testMultipleSources gets stuck
> --
>
> Key: FLINK-18128
> URL: https://issues.apache.org/jira/browse/FLINK-18128
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2705=logs=fc5181b0-e452-5c8f-68de-1097947f6483=6b04ca5f-0b52-511d-19c9-52bf0d9fbdfa
> {code}
> 2020-06-04T11:19:39.6335702Z [INFO] 
> 2020-06-04T11:19:39.6337440Z [INFO] 
> ---
> 2020-06-04T11:19:39.6338176Z [INFO]  T E S T S
> 2020-06-04T11:19:39.6339305Z [INFO] 
> ---
> 2020-06-04T11:19:40.1906157Z [INFO] Running 
> org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase
> 2020-06-04T11:34:51.0599860Z 
> ==
> 2020-06-04T11:34:51.0603015Z Maven produced no output for 900 seconds.
> 2020-06-04T11:34:51.0604174Z 
> ==
> 2020-06-04T11:34:51.0613908Z 
> ==
> 2020-06-04T11:34:51.0615097Z The following Java processes are running (JPS)
> 2020-06-04T11:34:51.0616043Z 
> ==
> 2020-06-04T11:34:51.0762007Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-06-04T11:34:51.2775341Z 29635 surefirebooter5307550588991461882.jar
> 2020-06-04T11:34:51.2931264Z 2100 Launcher
> 2020-06-04T11:34:51.3012583Z 32203 Jps
> 2020-06-04T11:34:51.3258038Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-06-04T11:34:51.5443730Z 
> ==
> 2020-06-04T11:34:51.5445134Z Printing stack trace of Java process 29635
> 2020-06-04T11:34:51.5445984Z 
> ==
> 2020-06-04T11:34:51.5528602Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-06-04T11:34:51.9617670Z 2020-06-04 11:34:51
> 2020-06-04T11:34:51.9619131Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.242-b08 mixed mode):
> 2020-06-04T11:34:51.9619732Z 
> 2020-06-04T11:34:51.9620618Z "Attach Listener" #299 daemon prio=9 os_prio=0 
> tid=0x7f4d60001000 nid=0x7e59 waiting on condition [0x]
> 2020-06-04T11:34:51.9621720Zjava.lang.Thread.State: RUNNABLE
> 2020-06-04T11:34:51.9622190Z 
> 2020-06-04T11:34:51.9623631Z "flink-akka.actor.default-dispatcher-185" #297 
> prio=5 os_prio=0 tid=0x7f4ca0003000 nid=0x7db4 waiting on condition 
> [0x7f4d10136000]
> 2020-06-04T11:34:51.9624972Zjava.lang.Thread.State: WAITING (parking)
> 2020-06-04T11:34:51.9625716Z  at sun.misc.Unsafe.park(Native Method)
> 2020-06-04T11:34:51.9627072Z  - parking to wait for  <0x80c557f0> (a 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
> 2020-06-04T11:34:51.9628593Z  at 
> akka.dispatch.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
> 2020-06-04T11:34:51.9629649Z  at 
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> 2020-06-04T11:34:51.9630825Z  at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2020-06-04T11:34:51.9631559Z 
> 2020-06-04T11:34:51.9633020Z "flink-akka.actor.default-dispatcher-186" #298 
> prio=5 os_prio=0 tid=0x7f4d08006800 nid=0x7db3 waiting on condition 
> [0x7f4d12974000]
> 2020-06-04T11:34:51.9634074Zjava.lang.Thread.State: WAITING (parking)
> 2020-06-04T11:34:51.9634965Z  at sun.misc.Unsafe.park(Native Method)
> 2020-06-04T11:34:51.9636384Z  - parking to wait for  <0x80c557f0> (a 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
> 2020-06-04T11:34:51.9637683Z  at 
> akka.dispatch.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
> 2020-06-04T11:34:51.9638795Z  at 
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> 2020-06-04T11:34:51.9639845Z  at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2020-06-04T11:34:51.9640475Z 
> 2020-06-04T11:34:51.9642008Z "flink-akka.actor.default-dispatcher-182" #293 
> prio=5 os_prio=0 tid=0x7f4ce4007000 

[GitHub] [flink] StephanEwen opened a new pull request #13385: [FLINK-18128][FLINK-19223][connectors] Fixes to Connector Base Availability logic

2020-09-14 Thread GitBox


StephanEwen opened a new pull request #13385:
URL: https://github.com/apache/flink/pull/13385


   ## What is the purpose of the change
   
   This PR fixes bugs in the connector base (`SourceReaderBase` and 
`SplitFetcher`) availability logic.
   This could previously lead to conditions where the reader was done (all 
`SplitFetchers` idle) but this was never recognized.
   The result was deadlocking jobs.
   
   ## Brief change log
   
   The fix contains two steps, the problem and rational is described in the 
Jira issues:
 - https://issues.apache.org/jira/browse/FLINK-18128
 - https://issues.apache.org/jira/browse/FLINK-19223
   
   ## Verifying this change
   
   This change is partially covered by existing tests and also adds tests. See 
changes in `SplitFetcherTest`.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **yes**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   







[GitHub] [flink] flinkbot edited a comment on pull request #13384: [FLINK-19224][state-processor-api] Support reading window operator state

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13384:
URL: https://github.com/apache/flink/pull/13384#issuecomment-692330852


   
   ## CI report:
   
   * 2e73e742abe73fd0324c3647403131817ef2aada Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6502)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-19225) Improve code and logging in SourceReaderBase

2020-09-14 Thread Stephan Ewen (Jira)
Stephan Ewen created FLINK-19225:


 Summary: Improve code and logging in SourceReaderBase
 Key: FLINK-19225
 URL: https://issues.apache.org/jira/browse/FLINK-19225
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Common
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.12.0


An umbrella issue for minor improvements to the {{SourceReaderBase}}, such as 
logging, thread names, code simplifications.

The concrete change is described in the messages of the commits tagged with 
this issue (separate commits to better track the changes).





[GitHub] [flink] flinkbot commented on pull request #13384: [FLINK-19224][state-processor-api] Support reading window operator state

2020-09-14 Thread GitBox


flinkbot commented on pull request #13384:
URL: https://github.com/apache/flink/pull/13384#issuecomment-692330852


   
   ## CI report:
   
   * 2e73e742abe73fd0324c3647403131817ef2aada UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13384: [FLINK-19224][state-processor-api] Support reading window operator state

2020-09-14 Thread GitBox


flinkbot commented on pull request #13384:
URL: https://github.com/apache/flink/pull/13384#issuecomment-692328370


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2e73e742abe73fd0324c3647403131817ef2aada (Mon Sep 14 
21:38:48 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] sjwiesman commented on pull request #13384: [FLINK-19224][state-processor-api] Support reading window operator state

2020-09-14 Thread GitBox


sjwiesman commented on pull request #13384:
URL: https://github.com/apache/flink/pull/13384#issuecomment-692328179


   @alpinegizmo would you like to take a look at this? This PR is mostly 
boilerplate to support all the different window operator configurations. All 
the actual runtime changes have already been merged in. In particular, you may 
want to look at the docs and the interfaces in the `api` package. 







[GitHub] [flink] sjwiesman opened a new pull request #13384: [FLINK-19224][state-processor-api] Support reading window operator state

2020-09-14 Thread GitBox


sjwiesman opened a new pull request #13384:
URL: https://github.com/apache/flink/pull/13384


   ## What is the purpose of the change
   
   Add support for reading window operator state with the state processor api. 
   This includes:
   - TimeWindows and other arbitrary window types
   - Pre-aggregated types
   - Unaggregated types
   - Evictor window state. 
   
   
   
   ## Verifying this change
   
   UT and IT cases
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   







[jira] [Updated] (FLINK-19224) Provide an easy way to read window state

2020-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19224:
---
Labels: pull-request-available  (was: )

> Provide an easy way to read window state
> 
>
> Key: FLINK-19224
> URL: https://issues.apache.org/jira/browse/FLINK-19224
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / State Processor
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Created] (FLINK-19224) Provide an easy way to read window state

2020-09-14 Thread Seth Wiesman (Jira)
Seth Wiesman created FLINK-19224:


 Summary: Provide an easy way to read window state
 Key: FLINK-19224
 URL: https://issues.apache.org/jira/browse/FLINK-19224
 Project: Flink
  Issue Type: Sub-task
  Components: API / State Processor
Reporter: Seth Wiesman
Assignee: Seth Wiesman








[jira] [Commented] (FLINK-18128) CoordinatedSourceITCase.testMultipleSources gets stuck

2020-09-14 Thread Stephan Ewen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195735#comment-17195735
 ] 

Stephan Ewen commented on FLINK-18128:
--

The problem is the following:

  - When the fetching thread gets the fetch that signals "end of split" it 
enqueues that into the handover queue
  - Then there may be a context switch to the reader thread, which checks this 
fetch, finishes its own split handling, and checks whether the split fetcher is 
idle. It is not at that point.
  - The result is that the reader eventually goes to MORE_AVAILABLE_LATER

  - The split fetcher thread continues and marks the split fetcher as idle. 
However, it will never be checked again, because the reader thread is never 
notified again of any availability so it never checks.


*Solution part 1:*

The split fetcher needs to notify the availability future when it turns idle. I 
think there is no way to guarantee that the fetcher gets checked for being idle 
otherwise, save for introducing locking around the enqueuing and the idleness 
check. That would be more expensive, whereas the notification is rare and cheap.

*Solution part 2:*

The current model of how the availability future is implemented still leaves 
this deadlock-prone: the reader thread (mailbox thread) creates the 
availability future only when obtaining it, so a notification that arrives 
before the thread obtains the future is lost.

Consider this scenario:

  - Fetcher Thread gets "end of split" it enqueues that into the handover queue
  - Reader Thread obtains this fetch and removes the split locally. Split 
Reader is not idle, yet, Reader Thread concludes MORE_AVAILABLE_LATER
  - Fetcher Thread notifies the FutureNotifier - no future has been requested 
so far, the notification does nothing.
  - Reader Thread gets the future from the FutureNotifier, which will now never 
be complete.

We could fix this by adding a dummy element into the 
{{FutureCompletingBlockingQueue}}, but I think that FLINK-19223 is a better 
solution when looking at the bigger picture.
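The lost wakeup described in solution part 2 can be illustrated with a small sketch. The two classes below are hypothetical stand-ins (not Flink's actual FutureNotifier or AvailabilityHelper code): in the create-on-demand model a notification that arrives before the future is requested is silently dropped, while a persistent future retains it.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

// Create-on-demand model: a notification that arrives before anyone has
// requested the future is silently dropped (the lost wakeup above).
class NotifierOnDemandSketch {
    private final AtomicReference<CompletableFuture<Void>> ref = new AtomicReference<>();

    CompletableFuture<Void> future() {
        CompletableFuture<Void> f = new CompletableFuture<>();
        return ref.compareAndSet(null, f) ? f : ref.get();
    }

    void notifyAvailable() {
        CompletableFuture<Void> f = ref.getAndSet(null);
        if (f != null) {
            f.complete(null); // no-op when nobody asked yet -> notification lost
        }
    }
}

// AvailabilityHelper-style model: the future always exists, so an early
// notification completes it before the reader thread even asks for it.
class AvailabilityHelperSketch {
    private CompletableFuture<Void> current = new CompletableFuture<>();

    synchronized CompletableFuture<Void> future() { return current; }

    synchronized void notifyAvailable() { current.complete(null); }

    synchronized void resetUnavailable() {
        if (current.isDone()) {
            current = new CompletableFuture<>();
        }
    }
}
```

Replaying the scenario above against the first class (notify, then get the future) leaves the caller with a future that never completes; against the second class, the future is already complete when obtained.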

> CoordinatedSourceITCase.testMultipleSources gets stuck
> --
>
> Key: FLINK-18128
> URL: https://issues.apache.org/jira/browse/FLINK-18128
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2705=logs=fc5181b0-e452-5c8f-68de-1097947f6483=6b04ca5f-0b52-511d-19c9-52bf0d9fbdfa
> {code}
> 2020-06-04T11:19:39.6335702Z [INFO] 
> 2020-06-04T11:19:39.6337440Z [INFO] 
> ---
> 2020-06-04T11:19:39.6338176Z [INFO]  T E S T S
> 2020-06-04T11:19:39.6339305Z [INFO] 
> ---
> 2020-06-04T11:19:40.1906157Z [INFO] Running 
> org.apache.flink.connector.base.source.reader.CoordinatedSourceITCase
> 2020-06-04T11:34:51.0599860Z 
> ==
> 2020-06-04T11:34:51.0603015Z Maven produced no output for 900 seconds.
> 2020-06-04T11:34:51.0604174Z 
> ==
> 2020-06-04T11:34:51.0613908Z 
> ==
> 2020-06-04T11:34:51.0615097Z The following Java processes are running (JPS)
> 2020-06-04T11:34:51.0616043Z 
> ==
> 2020-06-04T11:34:51.0762007Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-06-04T11:34:51.2775341Z 29635 surefirebooter5307550588991461882.jar
> 2020-06-04T11:34:51.2931264Z 2100 Launcher
> 2020-06-04T11:34:51.3012583Z 32203 Jps
> 2020-06-04T11:34:51.3258038Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-06-04T11:34:51.5443730Z 
> ==
> 2020-06-04T11:34:51.5445134Z Printing stack trace of Java process 29635
> 2020-06-04T11:34:51.5445984Z 
> ==
> 2020-06-04T11:34:51.5528602Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-06-04T11:34:51.9617670Z 2020-06-04 11:34:51
> 2020-06-04T11:34:51.9619131Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.242-b08 mixed mode):
> 2020-06-04T11:34:51.9619732Z 
> 2020-06-04T11:34:51.9620618Z "Attach Listener" #299 daemon prio=9 os_prio=0 
> tid=0x7f4d60001000 nid=0x7e59 waiting on condition [0x]
> 2020-06-04T11:34:51.9621720Z

[jira] [Created] (FLINK-19223) Simplify Availability Future Model in Base Connector

2020-09-14 Thread Stephan Ewen (Jira)
Stephan Ewen created FLINK-19223:


 Summary: Simplify Availability Future Model in Base Connector
 Key: FLINK-19223
 URL: https://issues.apache.org/jira/browse/FLINK-19223
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Common
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.12.0


The current model implemented by the {{FutureNotifier}} and the 
{{SourceReaderBase}} has a shortcoming:
  - It does not support availability notifications where the notification comes 
before the check. In that case the notification is lost.

  - One can see the added complexity created by this model also in the 
{{SourceReaderBase#isAvailable()}} where the returned future needs to be 
"post-processed" and eagerly completed if the reader is in fact available. This 
is based on queue size, which makes it hard to have other conditions.

I think we can do something that is both easier and a bit more efficient by 
following a similar model as the 
{{org.apache.flink.runtime.io.AvailabilityProvider.AvailabilityHelper}}.

Furthermore, I believe we can win more efficiency by integrating this better 
with the {{FutureCompletingBlockingQueue}}.

I suggest doing a similar implementation to the {{AvailabilityHelper}} directly 
in the {{FutureCompletingBlockingQueue}}.
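A rough sketch of that direction (hypothetical, not the actual FutureCompletingBlockingQueue API): the queue itself owns a persistent availability future, completes it on put, and resets it only when drained, so a notification can never be lost and no post-processing of the returned future is needed.

```java
import java.util.ArrayDeque;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of a queue that owns its availability future
// (not the actual FutureCompletingBlockingQueue implementation).
class FutureCompletingQueueSketch<E> {
    private final ArrayDeque<E> elements = new ArrayDeque<>();
    private CompletableFuture<Void> available = new CompletableFuture<>();

    // Completing on put means the notification always reaches the future,
    // even if nobody has asked for it yet.
    synchronized void put(E element) {
        elements.add(element);
        available.complete(null);
    }

    synchronized E poll() {
        E e = elements.poll();
        if (elements.isEmpty() && available.isDone()) {
            available = new CompletableFuture<>(); // back to "unavailable"
        }
        return e;
    }

    synchronized CompletableFuture<Void> getAvailabilityFuture() {
        return available;
    }
}
```

Because availability is derived from the queue's own state transitions rather than from a size check done by the caller, additional availability conditions can later be folded into the same future.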





[jira] [Updated] (FLINK-19209) Single bucket

2020-09-14 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman updated FLINK-19209:
-
Priority: Minor  (was: Major)

> Single bucket
> -
>
> Key: FLINK-19209
> URL: https://issues.apache.org/jira/browse/FLINK-19209
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.11.1
>Reporter: Michał Strużek
>Priority: Minor
>
> There is always a single bucket returned from partition method:
> https://github.com/apache/flink/blob/f42a3ebc3e81a034b7221a803c153636fef34903/flink-core/src/main/java/org/apache/flink/util/CollectionUtil.java#L76
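For illustration, a hash-based partition helper that actually spreads elements across several buckets can be sketched as below. This is a hypothetical sketch, not the actual {{CollectionUtil}} code or its fix; the point is that the bucket index must depend on the element, otherwise everything lands in one bucket:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of hash-based partitioning into several buckets;
// illustrative only, not the actual CollectionUtil implementation.
public class PartitionSketch {

    public static <T> Collection<List<T>> partition(Collection<T> elements, int numBuckets) {
        Map<Integer, List<T>> buckets = new HashMap<>(numBuckets);
        for (T element : elements) {
            // The bucket index must depend on the element (here: its hash);
            // floorMod keeps the index non-negative for negative hash codes.
            int bucket = Math.floorMod(element.hashCode(), numBuckets);
            buckets.computeIfAbsent(bucket, k -> new ArrayList<>()).add(element);
        }
        return buckets.values();
    }
}
```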



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19209) Single bucket

2020-09-14 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195732#comment-17195732
 ] 

Seth Wiesman commented on FLINK-19209:
--

thanks for catching this!

> Single bucket
> -
>
> Key: FLINK-19209
> URL: https://issues.apache.org/jira/browse/FLINK-19209
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.11.1
>Reporter: Michał Strużek
>Assignee: Seth Wiesman
>Priority: Minor
>
> There is always a single bucket returned from partition method:
> https://github.com/apache/flink/blob/f42a3ebc3e81a034b7221a803c153636fef34903/flink-core/src/main/java/org/apache/flink/util/CollectionUtil.java#L76



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19209) Single bucket

2020-09-14 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman reassigned FLINK-19209:


Assignee: Seth Wiesman

> Single bucket
> -
>
> Key: FLINK-19209
> URL: https://issues.apache.org/jira/browse/FLINK-19209
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.11.1
>Reporter: Michał Strużek
>Assignee: Seth Wiesman
>Priority: Minor
>
> There is always a single bucket returned from partition method:
> https://github.com/apache/flink/blob/f42a3ebc3e81a034b7221a803c153636fef34903/flink-core/src/main/java/org/apache/flink/util/CollectionUtil.java#L76



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13383: [FLINK-15467][task] Wait for sourceTaskThread to finish before exiting from StreamTask.invoke [1.10]

2020-09-14 Thread GitBox


flinkbot edited a comment on pull request #13383:
URL: https://github.com/apache/flink/pull/13383#issuecomment-692066644


   
   ## CI report:
   
   * 75eb7195c13f8641a58870c1ae2861c1302e437c Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/184220187) 
   * cc178a714fc8d6637418e81efae55231a5c08cc1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18978) Support full table scan of key and namespace from statebackend

2020-09-14 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman closed FLINK-18978.

Resolution: Fixed

> Support full table scan of key and namespace from statebackend
> --
>
> Key: FLINK-18978
> URL: https://issues.apache.org/jira/browse/FLINK-18978
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Support full table scan of keys and namespaces from the state backend. All 
> operations assume the calling code already knows what namespace they are 
> interested in interacting with.
> This is a prerequisite to support reading window operators with the state 
> processor api because window panes are stored as additional namespace 
> components. 
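The kind of scan described above can be sketched with a minimal in-memory state table. This is a hypothetical illustration, not Flink's actual state-backend API; it only shows the idea of enumerating every (key, namespace) pair, including namespaces the caller did not know in advance:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory sketch of a "full table scan" over (key, namespace)
// pairs. Hypothetical; not Flink's state-backend API.
public class InMemoryStateTable<K, N, V> {

    private final Map<K, Map<N, V>> table = new HashMap<>();

    public void put(K key, N namespace, V value) {
        table.computeIfAbsent(key, k -> new HashMap<>()).put(namespace, value);
    }

    /** Full scan: every (key, namespace) pair, including namespaces the
     *  caller did not know about (e.g. window panes stored as namespaces). */
    public List<Map.Entry<K, N>> scanKeysAndNamespaces() {
        List<Map.Entry<K, N>> result = new ArrayList<>();
        for (Map.Entry<K, Map<N, V>> perKey : table.entrySet()) {
            for (N namespace : perKey.getValue().keySet()) {
                result.add(new AbstractMap.SimpleEntry<>(perKey.getKey(), namespace));
            }
        }
        return result;
    }
}
```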



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18978) Support full table scan of key and namespace from statebackend

2020-09-14 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195728#comment-17195728
 ] 

Seth Wiesman commented on FLINK-18978:
--

fixed in cd81e9bbbd7456e6aedbff31054700cc4da70fa3

> Support full table scan of key and namespace from statebackend
> --
>
> Key: FLINK-18978
> URL: https://issues.apache.org/jira/browse/FLINK-18978
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Support full table scan of keys and namespaces from the state backend. All 
> operations assume the calling code already knows what namespace they are 
> interested in interacting with.
> This is a prerequisite to support reading window operators with the state 
> processor api because window panes are stored as additional namespace 
> components. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

