[GitHub] [flink] AHeise commented on pull request #14279: [FLINK-20418][core] Fixing checkpointing of IteratorSourceReader.

2020-12-03 Thread GitBox


AHeise commented on pull request #14279:
URL: https://github.com/apache/flink/pull/14279#issuecomment-737733731


   Sure, I hadn't found the test in time. Do you want to merge it, or can I 
do it?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise merged pull request #14177: [FLINK-16443][checkpointing] Make sure that CheckpointException are also serialized in DeclineCheckpoint.

2020-12-03 Thread GitBox


AHeise merged pull request #14177:
URL: https://github.com/apache/flink/pull/14177


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20419) Insert fails due to failure to generate execution plan

2020-12-03 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242986#comment-17242986
 ] 

Jark Wu commented on FLINK-20419:
-

[~lirui], the logic of {{BatchExecSinkRule}} is copied from 
{{BatchExecLegacySinkRule}}.

> Insert fails due to failure to generate execution plan
> --
>
> Key: FLINK-20419
> URL: https://issues.apache.org/jira/browse/FLINK-20419
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Rui Li
>Priority: Critical
>
> Test case to reproduce:
> {code}
>   @Test
>   public void test() throws Exception {
>   tableEnv.executeSql("create table src(x int)");
>   tableEnv.executeSql("create table dest(x int) partitioned by (p 
> string,q string)");
>   tableEnv.executeSql("insert into dest select x,'0','0' from src 
> order by x").await();
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20461) YARNFileReplicationITCase.testPerJobModeWithDefaultFileReplication

2020-12-03 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-20461:
--

Assignee: Zhenqiu Huang

> YARNFileReplicationITCase.testPerJobModeWithDefaultFileReplication
> --
>
> Key: FLINK-20461
> URL: https://issues.apache.org/jira/browse/FLINK-20461
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Huang Xingbo
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: testability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10450&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=62110053-334f-5295-a0ab-80dd7e2babbf]
> {code:java}
> [ERROR] 
> testPerJobModeWithDefaultFileReplication(org.apache.flink.yarn.YARNFileReplicationITCase)
>  Time elapsed: 32.501 s <<< ERROR! java.io.FileNotFoundException: File does 
> not exist: 
> hdfs://localhost:46072/user/agent04_azpcontainer/.flink/application_1606950278664_0001/flink-dist_2.11-1.12-SNAPSHOT.jar
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1441)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1434)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1434)
>  at 
> org.apache.flink.yarn.YARNFileReplicationITCase.extraVerification(YARNFileReplicationITCase.java:148)
>  at 
> org.apache.flink.yarn.YARNFileReplicationITCase.deployPerJob(YARNFileReplicationITCase.java:113)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16443) Fix wrong fix for user-code CheckpointExceptions

2020-12-03 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242987#comment-17242987
 ] 

Arvid Heise commented on FLINK-16443:
-

Merged into master as 208126aa242c4e217be493140aefcf16c3c3aba9.

> Fix wrong fix for user-code CheckpointExceptions
> 
>
> Key: FLINK-16443
> URL: https://issues.apache.org/jira/browse/FLINK-16443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Stephan Ewen
>Assignee: Arvid Heise
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The problem of having exceptions that are only in the user code classloader 
> was fixed by proactively serializing them inside the {{CheckpointException}}. 
> That means all consumers of  {{CheckpointException}} now need to be aware of 
> that and unwrap the serializable exception.
> I believe the right way to fix this would have been to use a 
> SerializedException in the {{DeclineCheckpoint}} message instead, which would 
> have localized the change to the actual problem: RPC transport.
> I would suggest to revert https://github.com/apache/flink/pull/9742 and 
> instead apply the above described change.
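
For illustration only, here is a minimal sketch of the approach described above: the throwable is serialized eagerly on the sender side, and the receiver deserializes it with a chosen classloader, falling back to a plain message if the class is not visible there. This is not Flink's actual {{SerializedThrowable}}; the names and signatures below are assumptions.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.io.Serializable;

// Minimal sketch, NOT Flink's actual class: the throwable is serialized
// eagerly, so the RPC layer only ever ships bytes plus a plain-text message.
public final class SerializedException implements Serializable {

    private final byte[] serializedThrowable; // the original throwable, serialized
    private final String fallbackMessage;     // survives even if the class is missing

    public SerializedException(Throwable t) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(t);
        }
        this.serializedThrowable = bos.toByteArray();
        this.fallbackMessage = String.valueOf(t.getMessage());
    }

    // Deserialize with the given classloader (e.g. the user-code classloader);
    // fall back to a generic exception if the original class is not visible.
    public Throwable deserializeError(ClassLoader classLoader) {
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(serializedThrowable)) {
                    @Override
                    protected Class<?> resolveClass(ObjectStreamClass desc)
                            throws IOException, ClassNotFoundException {
                        return Class.forName(desc.getName(), false, classLoader);
                    }
                }) {
            return (Throwable) ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            return new RuntimeException(fallbackMessage);
        }
    }
}
{code}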



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-16443) Fix wrong fix for user-code CheckpointExceptions

2020-12-03 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise resolved FLINK-16443.
-
Fix Version/s: (was: 1.12.0)
   1.13.0
   Resolution: Fixed

> Fix wrong fix for user-code CheckpointExceptions
> 
>
> Key: FLINK-16443
> URL: https://issues.apache.org/jira/browse/FLINK-16443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Stephan Ewen
>Assignee: Arvid Heise
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The problem of having exceptions that are only in the user code classloader 
> was fixed by proactively serializing them inside the {{CheckpointException}}. 
> That means all consumers of  {{CheckpointException}} now need to be aware of 
> that and unwrap the serializable exception.
> I believe the right way to fix this would have been to use a 
> SerializedException in the {{DeclineCheckpoint}} message instead, which would 
> have localized the change to the actual problem: RPC transport.
> I would suggest to revert https://github.com/apache/flink/pull/9742 and 
> instead apply the above described change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16443) Fix wrong fix for user-code CheckpointExceptions

2020-12-03 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise reassigned FLINK-16443:
---

Assignee: Arvid Heise

> Fix wrong fix for user-code CheckpointExceptions
> 
>
> Key: FLINK-16443
> URL: https://issues.apache.org/jira/browse/FLINK-16443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Stephan Ewen
>Assignee: Arvid Heise
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The problem of having exceptions that are only in the user code classloader 
> was fixed by proactively serializing them inside the {{CheckpointException}}. 
> That means all consumers of  {{CheckpointException}} now need to be aware of 
> that and unwrap the serializable exception.
> I believe the right way to fix this would have been to use a 
> SerializedException in the {{DeclineCheckpoint}} message instead, which would 
> have localized the change to the actual problem: RPC transport.
> I would suggest to revert https://github.com/apache/flink/pull/9742 and 
> instead apply the above described change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe commented on a change in pull request #14293: [Flink 20437][table-planner-blink] Move the utility methods in ExecNode into ExecNodeUtil & Port ExecNode to Java

2020-12-03 Thread GitBox


godfreyhe commented on a change in pull request #14293:
URL: https://github.com/apache/flink/pull/14293#discussion_r534865707



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/utils/ExecNodeUtil.java
##
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.utils;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.core.memory.ManagedMemoryUseCase;
+import org.apache.flink.streaming.api.operators.StreamOperatorFactory;
+import org.apache.flink.streaming.api.transformations.OneInputTransformation;
+import org.apache.flink.streaming.api.transformations.TwoInputTransformation;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+
+import java.util.Optional;
+
+/**
+ * A utility class that helps translate {@link ExecNode} to {@link 
Transformation}.
+ */
+public class ExecNodeUtil {

Review comment:
   It's used for some `ExecNode`s to create Transformation.
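
   For readers of the archive: the diff above is truncated before the method 
bodies. A hypothetical example of the kind of helper such a class might expose, 
sketched only from the imports visible above (the `createOneInputTransformation` 
name and signature are assumptions, not the actual file contents):

```java
// Hypothetical sketch, based only on the imports in the truncated diff;
// the actual methods of ExecNodeUtil are not shown there.
public final class ExecNodeUtilSketch {

    public static <I, O> OneInputTransformation<I, O> createOneInputTransformation(
            Transformation<I> input,
            String name,
            StreamOperatorFactory<O> operatorFactory,
            TypeInformation<O> outputType,
            int parallelism) {
        // A thin wrapper like this lets ExecNodes build their Transformation
        // through one place instead of calling the constructor directly.
        return new OneInputTransformation<>(
                input, name, operatorFactory, outputType, parallelism);
    }
}
```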





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache opened a new pull request #14294: [FLINK-20419][table-planner-blink] Avoid dynamic partition grouping if the input defines collation

2020-12-03 Thread GitBox


lirui-apache opened a new pull request #14294:
URL: https://github.com/apache/flink/pull/14294


   
   
   ## What is the purpose of the change
   
   Fix the failure to generate plan for dynamic partition with order by.
   
   
   ## Brief change log
   
 - Skip partition grouping in `BatchExecSinkRule` if input already defines 
a collation.
 - Add test to verify the plan in `TableSinkTest`
 - Add hive IT case
   
   
   ## Verifying this change
   
   Added test cases
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16947) ArtifactResolutionException: Could not transfer artifact. Entry [...] has not been leased from this pool

2020-12-03 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242988#comment-17242988
 ] 

Matthias commented on FLINK-16947:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10450&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=b4cd3436-dbe8-556d-3bca-42f92c3cbf2f]

> ArtifactResolutionException: Could not transfer artifact.  Entry [...] has 
> not been leased from this pool
> -
>
> Key: FLINK-16947
> URL: https://issues.apache.org/jira/browse/FLINK-16947
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Piotr Nowojski
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6982&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> Build of flink-metrics-availability-test failed with:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-metrics-availability-test: Unable to generate classpath: 
> org.apache.maven.artifact.resolver.ArtifactResolutionException: Could not 
> transfer artifact org.apache.maven.surefire:surefire-grouper:jar:2.22.1 
> from/to google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/): Entry 
> [id:13][route:{s}->https://maven-central-eu.storage-download.googleapis.com:443][state:null]
>  has not been leased from this pool
> [ERROR] org.apache.maven.surefire:surefire-grouper:jar:2.22.1
> [ERROR] 
> [ERROR] from the specified remote repositories:
> [ERROR] google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/, 
> releases=true, snapshots=false),
> [ERROR] apache.snapshots (https://repository.apache.org/snapshots, 
> releases=false, snapshots=true)
> [ERROR] Path to dependency:
> [ERROR] 1) dummy:dummy:jar:1.0
> [ERROR] 2) org.apache.maven.surefire:surefire-junit47:jar:2.22.1
> [ERROR] 3) org.apache.maven.surefire:common-junit48:jar:2.22.1
> [ERROR] 4) org.apache.maven.surefire:surefire-grouper:jar:2.22.1
> [ERROR] -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-metrics-availability-test
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14244: [FLINK-20299][docs-zh] Update Chinese table overview

2020-12-03 Thread GitBox


flinkbot edited a comment on pull request #14244:
URL: https://github.com/apache/flink/pull/14244#issuecomment-734665945


   
   ## CI report:
   
   * e2d25f123282470094c6ebe4122e04921304cc57 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10227)
 
   * 2c6bd4bafadb83252e0fa01b777b44539e320396 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10466)
 
   * 7e6ebb48a9a018d7f23fa72e0304d293a3706294 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20419) Insert fails due to failure to generate execution plan

2020-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20419:
---
Labels: pull-request-available  (was: )

> Insert fails due to failure to generate execution plan
> --
>
> Key: FLINK-20419
> URL: https://issues.apache.org/jira/browse/FLINK-20419
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Rui Li
>Priority: Critical
>  Labels: pull-request-available
>
> Test case to reproduce:
> {code}
>   @Test
>   public void test() throws Exception {
>   tableEnv.executeSql("create table src(x int)");
>   tableEnv.executeSql("create table dest(x int) partitioned by (p 
> string,q string)");
>   tableEnv.executeSql("insert into dest select x,'0','0' from src 
> order by x").await();
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14294: [FLINK-20419][table-planner-blink] Avoid dynamic partition grouping if the input defines collation

2020-12-03 Thread GitBox


flinkbot commented on pull request #14294:
URL: https://github.com/apache/flink/pull/14294#issuecomment-737736790


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 371b9450830b6f961a62d5ba25f8356564bab0a5 (Thu Dec 03 
08:07:03 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20419).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on pull request #14229: [FLINK-20292][doc] Improve the document about table formats overlap in user fat jar

2020-12-03 Thread GitBox


JingsongLi commented on pull request #14229:
URL: https://github.com/apache/flink/pull/14229#issuecomment-737737701


   Looks good to me. Thanks @leonardBang and @gaoyunhaii, merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi merged pull request #14229: [FLINK-20292][doc] Improve the document about table formats overlap in user fat jar

2020-12-03 Thread GitBox


JingsongLi merged pull request #14229:
URL: https://github.com/apache/flink/pull/14229


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20462) MailboxOperatorTest.testAvoidTaskStarvation

2020-12-03 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242990#comment-17242990
 ] 

Matthias commented on FLINK-20462:
--

I verified locally that the test is unstable. It failed after 2016 runs:

```
java.lang.AssertionError: 
Expected: is <[0, 2, 4]>
     but: was <[0, 42, 44]>
Expected :is <[0, 2, 4]>
Actual   :<[0, 42, 44]>

at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
at org.apache.flink.streaming.runtime.operators.MailboxOperatorTest.testAvoidTaskStarvation(MailboxOperatorTest.java:85)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:52)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)
```

> MailboxOperatorTest.testAvoidTaskStarvation
> ---
>
> Key: FLINK-20462
> URL: https://issues.apache.org/jira/browse/FLINK-20462
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.12.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10450&view=logs&j=f0ac5c25-1168-55a5-07ff-0e88223afed9&t=0dbaca5d-7c38-52e6-f4fe-2fb69ccb3ada
> {code:java}
> [ERROR] 
> testAvoidTaskStarvation(org.apache.flink.streaming.runtime.operators.MailboxOperatorTest)
>  Time elapsed: 1.142 s <<< FAILURE! 
> java.lang.AssertionError: 
>  
> Expected: is <[0, 2, 4]> 
>  but: was <[0, 2, 516]> 
>  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) 
>  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) 
>  at 
> org.apache.flink.streaming.runtime.operators.MailboxOperatorTest.testAvoidTaskStarvation(MailboxOperatorTest.java:85)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20462) MailboxOperatorTest.testAvoidTaskStarvation

2020-12-03 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242990#comment-17242990
 ] 

Matthias edited comment on FLINK-20462 at 12/3/20, 8:09 AM:


I verified locally that the test is unstable. It failed after 2016 runs:
{code:java}
java.lang.AssertionError: 
Expected: is <[0, 2, 4]>
     but: was <[0, 42, 44]>
Expected :is <[0, 2, 4]>
Actual   :<[0, 42, 44]>





at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
at 
org.apache.flink.streaming.runtime.operators.MailboxOperatorTest.testAvoidTaskStarvation(MailboxOperatorTest.java:85)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:52)
at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53) {code}


was (Author: mapohl):
I verified locally that the test is unstable. It failed after 2016 runs:

```

java.lang.AssertionError: java.lang.AssertionError: Expected: is <[0, 2, 4]>    
 but: was <[0, 42, 44]>Expected :is <[0, 2, 4]>Actual   :<[0, 42, 44]>

at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) at 
org.apache.flink.streaming.runtime.operators.MailboxOperatorTest.testAvoidTaskStarvation(MailboxOperatorTest.java:85)
 at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
 at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:52)
 at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
 at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)

```

> MailboxOperatorTest.testAvoidTaskStarvation
> 

[jira] [Closed] (FLINK-20292) Improve the document about table formats overlap in user fat jar

2020-12-03 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-20292.

Resolution: Fixed

master (1.13): 25b07f158f48e2db6fc844834176ae285489d1d3

release-1.12: b1eb0d11b0fac8c0292324dce54fa37ce46ecb0e

> Improve the document about table formats overlap in user fat jar
> 
>
> Key: FLINK-20292
> URL: https://issues.apache.org/jira/browse/FLINK-20292
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Reporter: Yun Gao
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> When testing Flink 1.12 in a standalone-mode cluster, I found that if the 
> user job jar contains both _flink-avro_ and _flink-parquet/flink-orc_, the 
> FileSystemTableSink is not able to load the corresponding format 
> factory correctly. But if only one format is a dependency, it works.
> The test project is located 
> [here|https://github.com/gaoyunhaii/flink1.12test] and the test class is 
> [FileCompactionTest|https://github.com/gaoyunhaii/flink1.12test/blob/main/src/main/java/FileCompactionTest.java].
> The conflict does not seem to affect the local runner, but only causes problems 
> when submitted to the standalone cluster.
> If the problem does exist, we might need to fix it or give users some tips 
> about the conflict. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #14286: [FLINK-19398][connectors/hive] Fix the failure when creating hive connector from userclassloader

2020-12-03 Thread GitBox


JingsongLi merged pull request #14286:
URL: https://github.com/apache/flink/pull/14286


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19398) Hive connector fails with IllegalAccessError if submitted as usercode

2020-12-03 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-19398.

Resolution: Fixed

release-1.11: 4616f7d954275081a9a97c5745d5f55072241d17

> Hive connector fails with IllegalAccessError if submitted as usercode
> -
>
> Key: FLINK-19398
> URL: https://issues.apache.org/jira/browse/FLINK-19398
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Fabian Hueske
>Assignee: Yun Gao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.4
>
>
> Using Flink's Hive connector fails if the dependency is loaded with the user 
> code classloader with the following exception.
> {code:java}
> java.lang.IllegalAccessError: tried to access method 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.(Lorg/apache/flink/core/fs/Path;Lorg/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner;Lorg/apache/flink/streaming/api/functions/sink/filesystem/BucketFactory;Lorg/apache/flink/streaming/api/functions/sink/filesystem/BucketWriter;Lorg/apache/flink/streaming/api/functions/sink/filesystem/RollingPolicy;ILorg/apache/flink/streaming/api/functions/sink/filesystem/OutputFileConfig;)V
>  from class 
> org.apache.flink.streaming.api.functions.sink.filesystem.HadoopPathBasedBulkFormatBuilder
>   at 
> org.apache.flink.streaming.api.functions.sink.filesystem.HadoopPathBasedBulkFormatBuilder.createBuckets(HadoopPathBasedBulkFormatBuilder.java:127)
>  
> ~[flink-connector-hive_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.table.filesystem.stream.StreamingFileWriter.initializeState(StreamingFileWriter.java:81)
>  ~[flink-table-blink_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:106)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:258)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:290)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:479)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:92)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:475)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:528)
>  ~[flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721) 
> [flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546) 
> [flink-dist_2.12-1.11.2-stream2-SNAPSHOT.jar:1.11.2-stream2-SNAPSHOT]
> {code}
> The problem is that the {{Buckets}} constructor has default visibility and 
> is called from {{HadoopPathBasedBulkFormatBuilder}}. This works as long as 
> both classes are loaded with the same classloader, but when they are loaded in 
> different classloaders, the access fails.
> {{Buckets}} is loaded with the system CL because it is part of 
> flink-streaming-java. 
>  
> To solve this issue, we should change the visibility of the {{Buckets}} 
> constructor to {{public}}.
>  
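
As a simplified illustration of the classloader aspect (class and constructor shapes are reduced below; the real signatures differ): package-private access requires both classes to share the same *runtime* package, and a runtime package is keyed by (classloader, package name), so an identical package name across two classloaders does not help.

{code:java}
package org.apache.flink.streaming.api.functions.sink.filesystem;

// Reduced sketch of the issue; the real constructors take more parameters.
class Buckets {
    Buckets() {}  // package-private: callers must share the runtime package
}

class HadoopPathBasedBulkFormatBuilder {
    Buckets createBuckets() {
        // Compiles fine. At runtime this only works if both classes were
        // loaded by the same classloader; with Buckets on the system
        // classpath and this class in user code, the JVM throws
        // IllegalAccessError here.
        return new Buckets();
    }
}
{code}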



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20361) Using sliding window with duration of hours in Table API returns wrong time

2020-12-03 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20361:

Priority: Critical  (was: Blocker)

> Using sliding window with duration of hours in Table API returns wrong time
> ---
>
> Key: FLINK-20361
> URL: https://issues.apache.org/jira/browse/FLINK-20361
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0, 1.11.1, 1.11.2
> Environment: Java 11, test executed in IntelliJ IDE on mac OS.
>Reporter: Aleksandra Cz
>Priority: Critical
>
> In the [Table 
> walkthrough|https://github.com/apache/flink-playgrounds/blob/master/table-walkthrough/src/main/java/org/apache/flink/playgrounds/spendreport/SpendReport.java], 
> the current date-time is defined as: 
> {code:java}
> private static final LocalDateTime DATE_TIME = LocalDateTime.of(2020, 1, 
> 1, 0, 0);
> {code}
> If the *report* method were implemented as follows:
>  
> {code:java}
> public static Table report(Table transactions) {
> return transactions
> 
> .window(Slide.over(lit(1).hours()).every(lit(5).minute()).on($("transaction_time")).as("log_ts"))
> .groupBy($("log_ts"),$("account_id"))
> .select(
> $("log_ts").start().as("log_ts_start"),
> $("log_ts").end().as("log_ts_end"),
> $("account_id"),
> $("amount").sum().as("amount"));
> {code}
>  
> Then the resulting sliding window start and sliding window end would be in 
> year 1969/1970 instead of 2020. Please see first 3 elements of resulting 
> table: 
> {code:java}
> [1969-12-31T23:05,1970-01-01T00:05,3,432, 
> 1969-12-31T23:10,1970-01-01T00:10,3,432, 
> 1969-12-31T23:15,1970-01-01T00:15,3,432]{code}
> This behaviour repeats when using SQL instead of the Table API;
> it does not repeat for window durations of minutes, nor for tumbling windows.
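
For context on the 1969/1970 values: epoch-aligned sliding windows compute their start roughly as below (cf. {{TimeWindow#getWindowStartWithOffset}}), so boundaries near 1970-01-01 suggest the record timestamps end up close to epoch 0 rather than in 2020.

{code:java}
// Window start for epoch-aligned (sliding or tumbling) windows,
// cf. org.apache.flink.streaming.api.windowing.windows.TimeWindow.
static long getWindowStartWithOffset(long timestamp, long offset, long windowSize) {
    return timestamp - (timestamp - offset + windowSize) % windowSize;
}
{code}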



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20361) Using sliding window with duration of hours in Table API returns wrong time

2020-12-03 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242994#comment-17242994
 ] 

Jark Wu commented on FLINK-20361:
-

Hi [~leukonoe], you said "This behaviour repeats if using SQL instead of Table 
API"; could you share your SQL? 
I suspect the SQL itself may be wrong. 

I am lowering the priority for now, and will raise it again if the bug is confirmed. 

> Using sliding window with duration of hours in Table API returns wrong time
> ---
>
> Key: FLINK-20361
> URL: https://issues.apache.org/jira/browse/FLINK-20361
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0, 1.11.1, 1.11.2
> Environment: Java 11, test executed in IntelliJ IDE on mac OS.
>Reporter: Aleksandra Cz
>Priority: Blocker
>
> In the [Table 
> walkthrough|https://github.com/apache/flink-playgrounds/blob/master/table-walkthrough/src/main/java/org/apache/flink/playgrounds/spendreport/SpendReport.java], 
> the current date-time is defined as: 
> {code:java}
> private static final LocalDateTime DATE_TIME = LocalDateTime.of(2020, 1, 
> 1, 0, 0);
> {code}
> If the *report* method were implemented as follows:
>  
> {code:java}
> public static Table report(Table transactions) {
> return transactions
> 
> .window(Slide.over(lit(1).hours()).every(lit(5).minute()).on($("transaction_time")).as("log_ts"))
> .groupBy($("log_ts"),$("account_id"))
> .select(
> $("log_ts").start().as("log_ts_start"),
> $("log_ts").end().as("log_ts_end"),
> $("account_id"),
> $("amount").sum().as("amount"));
> {code}
>  
> Then the resulting sliding window start and sliding window end would be in 
> year 1969/1970 instead of 2020. Please see first 3 elements of resulting 
> table: 
> {code:java}
> [1969-12-31T23:05,1970-01-01T00:05,3,432, 
> 1969-12-31T23:10,1970-01-01T00:10,3,432, 
> 1969-12-31T23:15,1970-01-01T00:15,3,432]{code}
> This behaviour repeats when using SQL instead of the Table API;
> it does not repeat for window durations of minutes, nor for tumbling windows.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20361) Using sliding window with duration of hours in Table API returns wrong time

2020-12-03 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242995#comment-17242995
 ] 

Jark Wu commented on FLINK-20361:
-

cc [~sjwiesman] the author of the example.

> Using sliding window with duration of hours in Table API returns wrong time
> ---
>
> Key: FLINK-20361
> URL: https://issues.apache.org/jira/browse/FLINK-20361
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0, 1.11.1, 1.11.2
> Environment: Java 11, test executed in IntelliJ IDE on mac OS.
>Reporter: Aleksandra Cz
>Priority: Critical
>
> In the [Table 
> walkthrough|https://github.com/apache/flink-playgrounds/blob/master/table-walkthrough/src/main/java/org/apache/flink/playgrounds/spendreport/SpendReport.java], 
> the current date-time is defined as: 
> {code:java}
> private static final LocalDateTime DATE_TIME = LocalDateTime.of(2020, 1, 
> 1, 0, 0);
> {code}
> If the *report* method were implemented as follows:
>  
> {code:java}
> public static Table report(Table transactions) {
> return transactions
> 
> .window(Slide.over(lit(1).hours()).every(lit(5).minute()).on($("transaction_time")).as("log_ts"))
> .groupBy($("log_ts"),$("account_id"))
> .select(
> $("log_ts").start().as("log_ts_start"),
> $("log_ts").end().as("log_ts_end"),
> $("account_id"),
> $("amount").sum().as("amount"));
> {code}
>  
> Then the resulting sliding window start and sliding window end would be in 
> year 1969/1970 instead of 2020. Please see first 3 elements of resulting 
> table: 
> {code:java}
> [1969-12-31T23:05,1970-01-01T00:05,3,432, 
> 1969-12-31T23:10,1970-01-01T00:10,3,432, 
> 1969-12-31T23:15,1970-01-01T00:15,3,432]{code}
> This behaviour repeats when using SQL instead of the Table API;
> it does not repeat for window durations of minutes, nor for tumbling windows.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys commented on a change in pull request #14195: [FLINK-20300] Add Flink 1.12 release notes

2020-12-03 Thread GitBox


dawidwys commented on a change in pull request #14195:
URL: https://github.com/apache/flink/pull/14195#discussion_r534886522



##
File path: docs/release-notes/flink-1.12.md
##
@@ -0,0 +1,169 @@
+---
+title: "Release Notes - Flink 1.12"
+---
+
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.11 and Flink 1.12. Please read
+these notes carefully if you are planning to upgrade your Flink version to 
1.12.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### APIs
+
+ Remove deprecated methods in ExecutionConfig 
[FLINK-19084](https://issues.apache.org/jira/browse/FLINK-19084)
+
+The deprecated method `ExecutionConfig#isLatencyTrackingEnabled` was removed; 
use `ExecutionConfig#getLatencyTrackingInterval` instead. 
+
+Deprecated methods without effect were removed: 
`ExecutionConfig#enable/disableSysoutLogging`, 
`ExecutionConfig#set/isFailTaskOnCheckpointError`.
+
+The `-q` flag was removed from the CLI. The option had no effect. 
+
+ Remove deprecated RuntimeContext#getAllAccumulators 
[FLINK-19032](https://issues.apache.org/jira/browse/FLINK-19032)
+
+The deprecated method `RuntimeContext#getAllAccumulators` was removed. Please 
use `RuntimeContext#getAccumulator` instead. 
+
+ Deprecated CheckpointConfig#setPreferCheckpointForRecovery due to risk of 
data loss [FLINK-20441](https://issues.apache.org/jira/browse/FLINK-20441)
+
+The `CheckpointConfig#setPreferCheckpointForRecovery` method has been 
deprecated, because using checkpoints for recovery can lead to data loss.
+
+ FLIP-134: Batch execution for the DataStream API
+
+- Allow explicitly configuring time behaviour on `KeyedStream.intervalJoin()` 
[FLINK-19032](https://issues.apache.org/jira/browse/FLINK-19032)
+
+  Before Flink 1.12 the `KeyedStream.intervalJoin()` operation was changing 
behavior based on the globally set Stream TimeCharacteristic. In Flink 1.12 we 
introduced explicit `inProcessingTime()` and `inEventTime()` methods on 
`IntervalJoin` and the join no longer changes behaviour based on the global 
characteristic. 
+
+- Deprecate `timeWindow()` operations in DataStream API 
[FLINK-19318](https://issues.apache.org/jira/browse/FLINK-19318)
+
+  In Flink 1.12 we deprecated the `timeWindow()` operations in the DataStream 
API. Please use `window(WindowAssigner)` with either a 
`TumblingEventTimeWindows`, `SlidingEventTimeWindows`, 
`TumblingProcessingTimeWindows`, or `SlidingProcessingTimeWindows`. For more 
information, see the deprecation description of 
`TimeCharacteristic`/`setStreamTimeCharacteristic`. 
+
+- Deprecate `StreamExecutionEnvironment.setStreamTimeCharacteristic()` and 
`TimeCharacteristic` 
[FLINK-19319](https://issues.apache.org/jira/browse/FLINK-19319)
+
+  In Flink 1.12 the default stream time characteristic has been changed to 
`EventTime`, thus you don't need to call this method for enabling event-time 
support anymore. Explicitly using processing-time windows and timers works in 
event-time mode. If you need to disable watermarks, please use 
`ExecutionConfig.setAutoWatermarkInterval(long)`. If you are using 
`IngestionTime`, please manually set an appropriate `WatermarkStrategy`. If you 
are using generic "time window" operations (for example 
`KeyedStream.timeWindow()`) that change behaviour based on the time 
characteristic, please use equivalent operations that explicitly specify 
processing time or event time. 
+
+- Remove deprecated `DataStream#split` 
[FLINK-19083](https://issues.apache.org/jira/browse/FLINK-19083)
+
+  The `DataStream#split()` operation has been removed after being marked as 
deprecated for a couple of versions. Please use [Side Outputs]({% link 
dev/stream/side_output.md %}) instead.

Review comment:
   Removing `split` and `fold` is not related to the `Batch` execution. 
It's just an API cleanup similar to removing UdfAnalyzer.
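
   For anyone migrating: a minimal sketch of replacing `split()` with side 
outputs, per the release note above (the stream names and element types are 
made up for illustration, and `input` is assumed to be a `DataStream<Integer>`):

```java
// Assumed imports: org.apache.flink.streaming.api.datastream.*,
// org.apache.flink.streaming.api.functions.ProcessFunction,
// org.apache.flink.util.Collector, org.apache.flink.util.OutputTag.
final OutputTag<Integer> oddTag = new OutputTag<Integer>("odd") {};

SingleOutputStreamOperator<Integer> evens = input
        .process(new ProcessFunction<Integer, Integer>() {
            @Override
            public void processElement(Integer value, Context ctx, Collector<Integer> out) {
                if (value % 2 == 0) {
                    out.collect(value);        // main output: even values
                } else {
                    ctx.output(oddTag, value); // side output: odd values
                }
            }
        });

DataStream<Integer> odds = evens.getSideOutput(oddTag);
```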





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] tzulitai closed pull request #183: [hotfix][sdk] Change variable names to comply with camel case naming rules and correct spelling of wrong words.

2020-12-03 Thread GitBox


tzulitai closed pull request #183:
URL: https://github.com/apache/flink-statefun/pull/183


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] tzulitai closed pull request #178: [FLINK-20303][test] Add a SmokeE2E test

2020-12-03 Thread GitBox


tzulitai closed pull request #178:
URL: https://github.com/apache/flink-statefun/pull/178


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-20424) The percent of acknowledged checkpoint seems incorrect

2020-12-03 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-20424:
-

Assignee: Andrew.D.lin

> The percent of acknowledged checkpoint seems incorrect
> --
>
> Key: FLINK-20424
> URL: https://issues.apache.org/jira/browse/FLINK-20424
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: zlzhang0122
>Assignee: Andrew.D.lin
>Priority: Minor
> Attachments: 2020-11-30 14-18-34 的屏幕截图.png
>
>
> As the picture below shows, the percent of acknowledged checkpoints seems 
> incorrect. I think the number must not be 100%, because one of the checkpoint 
> acknowledgements failed.
> !2020-11-30 14-18-34 的屏幕截图.png!
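
A sketch of the expected arithmetic (the {{ackDisplay}} helper below is hypothetical, not the actual Web UI code): with a failed acknowledgement the displayed ratio should stay below 100%, so the percentage should be floored rather than rounded.

{code:java}
// Hypothetical sketch, not the actual Web UI code: floor instead of round,
// so e.g. 199/200 displays as 99%, never as 100%.
static String ackDisplay(int acknowledged, int total) {
    int pct = total == 0 ? 0 : (int) Math.floor(100.0 * acknowledged / total);
    return acknowledged + "/" + total + " (" + pct + "%)";
}
{code}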



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20424) The percent of acknowledged checkpoint seems incorrect

2020-12-03 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242998#comment-17242998
 ] 

Zili Chen commented on FLINK-20424:
---

@andrew_lin Go ahead, and remember to update the status when you start progress.

> The percent of acknowledged checkpoint seems incorrect
> --
>
> Key: FLINK-20424
> URL: https://issues.apache.org/jira/browse/FLINK-20424
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: zlzhang0122
>Assignee: Andrew.D.lin
>Priority: Minor
> Attachments: 2020-11-30 14-18-34 的屏幕截图.png
>
>
> As the picture below shows, the percent of acknowledged checkpoints seems 
> incorrect. I think the number must not be 100%, because one of the checkpoint 
> acknowledgements failed.
> !2020-11-30 14-18-34 的屏幕截图.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14294: [FLINK-20419][table-planner-blink] Avoid dynamic partition grouping if the input defines collation

2020-12-03 Thread GitBox


flinkbot commented on pull request #14294:
URL: https://github.com/apache/flink/pull/14294#issuecomment-737746054


   
   ## CI report:
   
   * 371b9450830b6f961a62d5ba25f8356564bab0a5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20303) Add a SmokeE2E test

2020-12-03 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-20303.
---
Fix Version/s: statefun-2.3.0
 Assignee: Tzu-Li (Gordon) Tai
   Resolution: Fixed

statefun/master: b86616bae4ce58dbdeebb263810de9a9c85fff61

> Add a SmokeE2E test
> ---
>
> Key: FLINK-20303
> URL: https://issues.apache.org/jira/browse/FLINK-20303
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.3.0
>
>
> We need an E2E test that mimics random stateful function applications and 
> creates random failures.
> This test should also verify that messages and state are consistent.
> This test should be run:
>  # in a dockerized environment (for example via test containers)
>  # via the IDE (in a mini cluster) for debuggability.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20419) Insert fails due to failure to generate execution plan

2020-12-03 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-20419:
---
Fix Version/s: 1.13.0
   1.12.0

> Insert fails due to failure to generate execution plan
> --
>
> Key: FLINK-20419
> URL: https://issues.apache.org/jira/browse/FLINK-20419
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.13.0
>
>
> Test case to reproduce:
> {code}
>   @Test
>   public void test() throws Exception {
>   tableEnv.executeSql("create table src(x int)");
>   tableEnv.executeSql("create table dest(x int) partitioned by (p 
> string,q string)");
>   tableEnv.executeSql("insert into dest select x,'0','0' from src 
> order by x").await();
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] chendonglin521 opened a new pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


chendonglin521 opened a new pull request #14295:
URL: https://github.com/apache/flink/pull/14295


   
   ## What is the purpose of the change
   
   *In the Web UI, make the percent of acknowledged checkpoints more accurate*
   
   
   ## Brief change log
   
 - *Compute the percent of acknowledged checkpoints more accurately*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no )
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20424) The percent of acknowledged checkpoint seems incorrect

2020-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20424:
---
Labels: pull-request-available  (was: )

> The percent of acknowledged checkpoint seems incorrect
> --
>
> Key: FLINK-20424
> URL: https://issues.apache.org/jira/browse/FLINK-20424
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: zlzhang0122
>Assignee: Andrew.D.lin
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 2020-11-30 14-18-34 的屏幕截图.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As the picture below shows, the percent of acknowledged checkpoints seems 
> incorrect. I think the number must not be 100%, because one of the checkpoint 
> acknowledgements failed.
> !2020-11-30 14-18-34 的屏幕截图.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


flinkbot commented on pull request #14295:
URL: https://github.com/apache/flink/pull/14295#issuecomment-737750145


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 95b395326189c894448b3e6741bb747337ec44cb (Thu Dec 03 
08:34:22 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20420) ES6 ElasticsearchSinkITCase failed due to no output for 900 seconds

2020-12-03 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243008#comment-17243008
 ] 

Matthias commented on FLINK-20420:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10450&view=logs&j=e9af9cde-9a65-5281-a58e-2c8511d36983&t=b6c4efed-9c7d-55ea-03a9-9bd7d5b08e4c

> ES6 ElasticsearchSinkITCase failed due to no output for 900 seconds
> ---
>
> Key: FLINK-20420
> URL: https://issues.apache.org/jira/browse/FLINK-20420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: Yun Tang
>Priority: Major
>
> Instance:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10249&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20&l=18821
> {code:java}
> Process produced no output for 900 seconds.
> ==
> ==
> The following Java processes are running (JPS)
> ==
> Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> 2274 Launcher
> 18260 Jps
> 15916 surefirebooter3434370240444055571.jar
> ==
> "main" #1 prio=5 os_prio=0 tid=0x7feec000b800 nid=0x3e2d runnable 
> [0x7feec8541000]
>java.lang.Thread.State: RUNNABLE
>   at org.testcontainers.shaded.okio.Buffer.indexOf(Buffer.java:1463)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.indexOf(RealBufferedSource.java:352)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:230)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:224)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:489)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511)
>   at 
> org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43)
>   at 
> org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139)
>   at 
> org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192)
>   at org.testcontainers.shaded.okhttp3.Response.close(Response.java:290)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:280)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$87/1112018050.close(Unknown
>  Source)
>   at 
> com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77)
>   at 
> org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:177)
>   at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:203)
>   - locked <0x88fcbbf0> (a [Ljava.lang.Object;)
>   at 
> org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
>   at 
> org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)
>   at 
> org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
>   - locked <0x88fcb940> (a 
> org.testcontainers.images.LocalImagesCache)
>   at 
> org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32)
>   at 
> org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:66)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:27)
>   at 
> org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)
>   - locked <0x890763d0> (a 
> java.util.concurrent.atomic.AtomicReference)
>   at org.

[jira] [Updated] (FLINK-20420) ES6 ElasticsearchSinkITCase failed due to no output for 900 seconds

2020-12-03 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20420:
-
Labels: testability  (was: )

> ES6 ElasticsearchSinkITCase failed due to no output for 900 seconds
> ---
>
> Key: FLINK-20420
> URL: https://issues.apache.org/jira/browse/FLINK-20420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: Yun Tang
>Priority: Major
>  Labels: testability
>
> Instance:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10249&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20&l=18821
> {code:java}
> Process produced no output for 900 seconds.
> ==
> ==
> The following Java processes are running (JPS)
> ==
> Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> 2274 Launcher
> 18260 Jps
> 15916 surefirebooter3434370240444055571.jar
> ==
> "main" #1 prio=5 os_prio=0 tid=0x7feec000b800 nid=0x3e2d runnable 
> [0x7feec8541000]
>java.lang.Thread.State: RUNNABLE
>   at org.testcontainers.shaded.okio.Buffer.indexOf(Buffer.java:1463)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.indexOf(RealBufferedSource.java:352)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:230)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:224)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:489)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186)
>   at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511)
>   at 
> org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43)
>   at 
> org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313)
>   at 
> org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476)
>   at 
> org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139)
>   at 
> org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192)
>   at org.testcontainers.shaded.okhttp3.Response.close(Response.java:290)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:280)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272)
>   at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$87/1112018050.close(Unknown
>  Source)
>   at 
> com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77)
>   at 
> org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:177)
>   at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:203)
>   - locked <0x88fcbbf0> (a [Ljava.lang.Object;)
>   at 
> org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
>   at 
> org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)
>   at 
> org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
>   - locked <0x88fcb940> (a 
> org.testcontainers.images.LocalImagesCache)
>   at 
> org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32)
>   at 
> org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:66)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:27)
>   at 
> org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)
>   - locked <0x890763d0> (a 
> java.util.concurrent.atomic.AtomicReference)
>   at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:39)
>   at 
> org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.

[GitHub] [flink] chendonglin521 commented on pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


chendonglin521 commented on pull request #14295:
URL: https://github.com/apache/flink/pull/14295#issuecomment-737755450


   > Is there a way to instead round down?
   `| percent:'0.0-2'`
   The percentage will be displayed with two decimal places; the third decimal 
place is rounded.
   
   
https://www.concretepage.com/angular-2/angular-2-decimal-pipe-percent-pipe-and-currency-pipe-example#percentpipe



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14120: [FLINK-19984][core] Add TypeSerializerTestCoverageTest to check whether tests based on SerializerTestBase and TypeSerializerUpgradeTe

2020-12-03 Thread GitBox


flinkbot edited a comment on pull request #14120:
URL: https://github.com/apache/flink/pull/14120#issuecomment-729644511


   
   ## CI report:
   
   * 340a6c872bbf65b85c1eaaaf7399b1cec764152a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10462)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14294: [FLINK-20419][table-planner-blink] Avoid dynamic partition grouping if the input defines collation

2020-12-03 Thread GitBox


flinkbot edited a comment on pull request #14294:
URL: https://github.com/apache/flink/pull/14294#issuecomment-737746054


   
   ## CI report:
   
   * 371b9450830b6f961a62d5ba25f8356564bab0a5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10471)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] chendonglin521 removed a comment on pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


chendonglin521 removed a comment on pull request #14295:
URL: https://github.com/apache/flink/pull/14295#issuecomment-737755450


   > Is there a way to instead round down?
   `| percent:'0.0-2'`
   The percentage will be displayed with two decimal places; the third decimal 
place is rounded.
   
   
https://www.concretepage.com/angular-2/angular-2-decimal-pipe-percent-pipe-and-currency-pipe-example#percentpipe



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


flinkbot commented on pull request #14295:
URL: https://github.com/apache/flink/pull/14295#issuecomment-737756527


   
   ## CI report:
   
   * 95b395326189c894448b3e6741bb747337ec44cb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on pull request #14120: [FLINK-19984][core] Add TypeSerializerTestCoverageTest to check whether tests based on SerializerTestBase and TypeSerializerUpgradeTestBa

2020-12-03 Thread GitBox


SteNicholas commented on pull request #14120:
URL: https://github.com/apache/flink/pull/14120#issuecomment-737757185


   @aljoscha , please review the `TypeSerializerCoverageTest` again. I have 
merged your commit about the serializers in the Scala package.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MengYao updated FLINK-20463:

Attachment: 无标题111.png

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: 无标题111.png
>
>
> Can Flink SQL provide an option to ignore exception records?
> I have a table that maps Kafka data in JSON format.
> When parsing the exception data, an exception is thrown, even though the data 
> is valid JSON; it is just not a valid record.
> {color:#FF}exception data:{"SHEET":[""]}{color}
> {color:#FF}my table:{color}
> CREATE TABLE offline
> (
>  SHEET ROW (
>  HEADER MAP < STRING, STRING >,
>  ITEM ROW (
>  AMOUNT STRING,
>  COST STRING,
>  GOODSID STRING,
>  SALEVALUE STRING,
>  SAP_RTMATNR STRING,
>  SAP_RTPLU STRING,
>  SERIALID STRING,
>  SHEETID STRING
>  ) ARRAY,
>  ITEM5 MAP < STRING, STRING > ARRAY,
>  ITEM1 MAP < STRING, STRING > ARRAY,
>  TENDER MAP < STRING, STRING > ARRAY
>  ) ARRAY
> )
> WITH (
>  'connector' = 'kafka',
>  'properties.bootstrap.servers' = 'xxx:9092',
>  'properties.group.id' = 'realtime.sales.offline.group',
>  'topic' = 'bms133',
>  'format' = 'json',
>  {color:#FF}'json.ignore-parse-errors' = 'true',{color}
>  'scan.startup.mode' = 'earliest-offset'
> );
> {color:#FF}exception:{color}
> Caused by: java.lang.NullPointerExceptionCaused by: 
> java.lang.NullPointerException at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:116)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copyGenericArray(ArrayDataSerializer.java:129)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:90)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:51)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:156)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:123)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:715)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
>  at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:185)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:141)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>  at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:213)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MengYao updated FLINK-20463:

Attachment: QQ截图111.jpg

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: QQ截图111.jpg, 无标题111.png
>
>
> Can Flink SQL provide an option to ignore exception records?
> I have a table that maps Kafka data in JSON format.
> When parsing the exception data, an exception is thrown, even though the data 
> is valid JSON; it is just not a valid record.
> {color:#FF}exception data:{"SHEET":[""]}{color}
> {color:#FF}my table:{color}
> CREATE TABLE offline
> (
>  SHEET ROW (
>  HEADER MAP < STRING, STRING >,
>  ITEM ROW (
>  AMOUNT STRING,
>  COST STRING,
>  GOODSID STRING,
>  SALEVALUE STRING,
>  SAP_RTMATNR STRING,
>  SAP_RTPLU STRING,
>  SERIALID STRING,
>  SHEETID STRING
>  ) ARRAY,
>  ITEM5 MAP < STRING, STRING > ARRAY,
>  ITEM1 MAP < STRING, STRING > ARRAY,
>  TENDER MAP < STRING, STRING > ARRAY
>  ) ARRAY
> )
> WITH (
>  'connector' = 'kafka',
>  'properties.bootstrap.servers' = 'xxx:9092',
>  'properties.group.id' = 'realtime.sales.offline.group',
>  'topic' = 'bms133',
>  'format' = 'json',
>  {color:#FF}'json.ignore-parse-errors' = 'true',{color}
>  'scan.startup.mode' = 'earliest-offset'
> );
> {color:#FF}exception:{color}
> Caused by: java.lang.NullPointerExceptionCaused by: 
> java.lang.NullPointerException at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:116)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copyGenericArray(ArrayDataSerializer.java:129)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:90)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:51)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:156)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:123)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:715)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
>  at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:185)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:141)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>  at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:213)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243021#comment-17243021
 ] 

MengYao commented on FLINK-20463:
-

 

I have a problem similar to yours: I defined a Kafka dynamic table in 
SQL-Client. However, due to the incorrect format of some elements in the Kafka 
topic, an exception was thrown in SQL-Client. Can we add a configuration item 
to ignore these error records?

version = 1.11.2

module = Table & SQL

My steps:

// 1. Enter the command line
$FLINK_HOME/bin/sql-client.sh embedded

// 2. Create the Kafka dynamic table
Flink SQL> CREATE TABLE kfk_test01 (
  order_id BIGINT,  -- order ID
  original_price DOUBLE,  -- amount actually paid
  ctime BIGINT,  -- creation time
  ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),  -- use ctime as the timestamp ts
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- a watermark with a 5-second delay on ts
) WITH (
  'connector' = 'kafka',
  'topic' = 'test01',
  'properties.bootstrap.servers' = 'node1:9092',
  'properties.group.id' = 'testGroup',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

// 3. Execute a query statement
Flink SQL> select * from kfk_test01;

// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty)

*!QQ截图111.jpg!*

 

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: QQ截图111.jpg, 无标题111.png
>
>
> Can Flink SQL provide an option to ignore exception records?
> I have a table that maps Kafka data in JSON format.
> When parsing the exception data, an exception is thrown, even though the data 
> is valid JSON; it is just not a valid record.
> {color:#FF}exception data:{"SHEET":[""]}{color}
> {color:#FF}my table:{color}
> CREATE TABLE offline
> (
>  SHEET ROW (
>  HEADER MAP < STRING, STRING >,
>  ITEM ROW (
>  AMOUNT STRING,
>  COST STRING,
>  GOODSID STRING,
>  SALEVALUE STRING,
>  SAP_RTMATNR STRING,
>  SAP_RTPLU STRING,
>  SERIALID STRING,
>  SHEETID STRING
>  ) ARRAY,
>  ITEM5 MAP < STRING, STRING > ARRAY,
>  ITEM1 MAP < STRING, STRING > ARRAY,
>  TENDER MAP < STRING, STRING > ARRAY
>  ) ARRAY
> )
> WITH (
>  'connector' = 'kafka',
>  'properties.bootstrap.servers' = 'xxx:9092',
>  'properties.group.id' = 'realtime.sales.offline.group',
>  'topic' = 'bms133',
>  'format' = 'json',
>  {color:#FF}'json.ignore-parse-errors' = 'true',{color}
>  'scan.startup.mode' = 'earliest-offset'
> );
> {color:#FF}exception:{color}
> Caused by: java.lang.NullPointerExceptionCaused by: 
> java.lang.NullPointerException at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:116)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copyGenericArray(ArrayDataSerializer.java:129)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:90)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:51)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:156)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:123)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:715)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>  at 
> org.apache.flink.streaming.api.oper

[GitHub] [flink] chendonglin521 commented on pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


chendonglin521 commented on pull request #14295:
URL: https://github.com/apache/flink/pull/14295#issuecomment-737760349


   > Is there a way to instead round down?
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] chendonglin521 closed pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


chendonglin521 closed pull request #14295:
URL: https://github.com/apache/flink/pull/14295


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] chendonglin521 commented on pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


chendonglin521 commented on pull request #14295:
URL: https://github.com/apache/flink/pull/14295#issuecomment-737761730


   > Is there a way to instead round down?
   Thank you for your reply.
   I didn't find an elegant, non-rounding method. Two decimal places can at 
least ensure an accurate display when the unacknowledged share is below 1%.
   Could you give me some advice?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20297) Make `SerializerTestBase::getTestData` return List

2020-12-03 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243025#comment-17243025
 ] 

Guowei Ma commented on FLINK-20297:
---

Sorry for replying so late. [~dwysakowicz] 

There are some cases where we cannot extend `SerializerTestBase`. For example, 
when we write a `UnitSerializerTest extends SerializerTestBase`, the compiler 
reports the following error:

{code:java}
Error:(18, 26) overriding method getTestData in class SerializerTestBase of 
type ()Array[Unit];
 method getTestData has incompatible type
  override protected def getTestData: Array[Unit] = null
 
{code}

I am not sure of the specific reason, but perhaps a non-object type (any 
subtype of `AnyVal`) cannot be used as T[], which causes the problem.
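
For illustration, a minimal sketch of the List-based alternative proposed in 
this issue, using a simplified stand-in for the real base class (the names 
here are assumed):

{code:scala}
// Hypothetical stand-in for a List-returning SerializerTestBase::getTestData.
abstract class ListBasedTestBase[T] {
  protected def getTestData: java.util.List[T]
}

// A Unit serializer test can now supply test data; the Array[T]-returning
// variant failed to compile with the error quoted above.
class UnitSerializerTest extends ListBasedTestBase[Unit] {
  override protected def getTestData: java.util.List[Unit] =
    java.util.Collections.singletonList(())
}
{code}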

 

 

> Make `SerializerTestBase::getTestData` return List
> -
>
> Key: FLINK-20297
> URL: https://issues.apache.org/jira/browse/FLINK-20297
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Minor
>  Labels: pull-request-available
>
> Currently `SerializerTestBase::getTestData` returns T[], which cannot be 
> overridden from Scala. This means developers cannot add Scala serializer 
> tests based on `SerializerTestBase`.
> So I would propose changing `SerializerTestBase::getTestData` to return 
> List



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20297) Make `SerializerTestBase::getTestData` return List

2020-12-03 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243025#comment-17243025
 ] 

Guowei Ma edited comment on FLINK-20297 at 12/3/20, 8:57 AM:
-

Sorry for replying so late. [~dwysakowicz]

There are some cases where we cannot extend `SerializerTestBase`. For example, 
when we write a `UnitSerializerTest extends SerializerTestBase`, the compiler 
reports the following error:

{code:java}
Error:(18, 26) overriding method getTestData in class SerializerTestBase of 
type ()Array[Unit];
 method getTestData has incompatible type
  override protected def getTestData: Array[Unit] = null
 
{code}

I am not sure of the specific reason, but perhaps a non-object type (any 
subtype of `AnyVal`) cannot be used as T[], which causes the problem.

 

 


was (Author: maguowei):
Sorry for replying so late. [~dwysakowicz] 

There are some cases where we could not extend `SerializerTestBase`. For 
example, when we could write a `UnitSerializerTest extends SerializerTestBase`, 
the compiler would report the following error:

{code:java}
Error:(18, 26) overriding method getTestData in class SerializerTestBase of 
type ()Array[Unit];
 method getTestData has incompatible type
  override protected def getTestData: Array[Unit] = null
 
{code}

I am not sure of the specific reason, but perhaps a non-object type (any 
subtype of `AnyVal`) cannot be used as T[], which causes the problem.

 

 

> Make `SerializerTestBase::getTestData` return List
> -
>
> Key: FLINK-20297
> URL: https://issues.apache.org/jira/browse/FLINK-20297
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Minor
>  Labels: pull-request-available
>
> Currently `SerializerTestBase::getTestData` returns T[], which cannot be 
> overridden from Scala. This means developers cannot add Scala serializer 
> tests based on `SerializerTestBase`.
> So I would propose changing `SerializerTestBase::getTestData` to return 
> List



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243021#comment-17243021
 ] 

MengYao edited comment on FLINK-20463 at 12/3/20, 8:59 AM:
---

 

I have a problem similar to yours: I defined a Kafka dynamic table in 
SQL-Client. However, due to the incorrect format of some elements in the Kafka 
topic, an exception was thrown in SQL-Client. Can we add a configuration item 
to ignore these error records?

version = 1.11.2

module = Table & SQL

My steps:

// 1. Enter the command line
$FLINK_HOME/bin/sql-client.sh embedded

// 2. Create the Kafka dynamic table
Flink SQL> CREATE TABLE kfk_test01 (
  order_id BIGINT,  -- order ID
  original_price DOUBLE,  -- amount actually paid
  ctime BIGINT,  -- creation time
  ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),  -- use ctime as the timestamp ts
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- a watermark with a 5-second delay on ts
) WITH (
  'connector' = 'kafka',
  'topic' = 'test01',
  'properties.bootstrap.servers' = 'node1:9092',
  'properties.group.id' = 'testGroup',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

// 3. Execute a query statement
Flink SQL> select * from kfk_test01;

// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty)

*!QQ截图111.jpg!*

// 5. Could a general configuration item be added, similar to MapReduce's ability to skip bad records? It could be used for both JSON and CSV.

For example: skip.fail.records=0 (valid values: 0, -1, >0)

  0: the default value of 0 means that bad records are not allowed to be skipped
  -1: means that all bad records can be skipped
  any number > 0: indicates the maximum acceptable number of bad records
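
A minimal sketch of the proposed semantics (hypothetical option and class 
names; this is not existing Flink code):

{code:scala}
// Hypothetical policy for the proposed skip.fail.records option:
// 0 = never skip (fail fast), -1 = skip all bad records,
// n > 0 = tolerate at most n bad records before failing.
final class BadRecordPolicy(skipFailRecords: Long) {
  private var skipped = 0L

  def onBadRecord(cause: Throwable): Unit = skipFailRecords match {
    case 0L  => throw cause
    case -1L => skipped += 1 // skip unconditionally
    case n   =>
      skipped += 1
      if (skipped > n) throw cause
  }
}
{code}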


was (Author: mengyao):
 

I have a problem similar to yours: I defined a Kafka dynamic table in 
SQL-Client. However, due to the incorrect format of some elements in the Kafka 
topic, an exception was thrown in SQL-Client. Can we add a configuration item 
to ignore these error records?

version = 1.11.2

module = Table & SQL

My steps:

// 1. Enter the command line
$FLINK_HOME/bin/sql-client.sh embedded

// 2. Create the Kafka dynamic table
Flink SQL> CREATE TABLE kfk_test01 (
  order_id BIGINT,  -- order ID
  original_price DOUBLE,  -- amount actually paid
  ctime BIGINT,  -- creation time
  ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),  -- use ctime as the timestamp ts
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- a watermark with a 5-second delay on ts
) WITH (
  'connector' = 'kafka',
  'topic' = 'test01',
  'properties.bootstrap.servers' = 'node1:9092',
  'properties.group.id' = 'testGroup',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

// 3. Execute a query statement
Flink SQL> select * from kfk_test01;

// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty)

*!QQ截图111.jpg!*

 

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: QQ截图111.jpg, 无标题111.png
>
>
> Can Flink SQL provide an option to ignore exception records?
> I have a table that maps Kafka data in JSON format.
> When parsing the exception data, an exception is thrown, even though the data 
> is valid JSON; it is just not a valid record.
> {color:#FF}exception data:{"SHEET":[""]}{color}
> {color:#FF}my table:{color}
> CREATE TABLE offline
> (
>  SHEET ROW (
>  HEADER MAP < STRING, STRING >,
>  ITEM ROW (
>  AMOUNT STRING,
>  COST STRING,
>  GOODSID STRING,
>  SALEVALUE STRING,
>  SAP_RTMATNR STRING,
>  SAP_RTPLU STRING,
>  SERIALID STRING,
>  SHEETID STRING
>  ) 

[jira] [Updated] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MengYao updated FLINK-20463:

Attachment: (was: QQ截图111.jpg)

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: 无标题111.png
>
>
> Can Flink SQL provide an option to ignore exception records?
> I have a table that maps Kafka data in JSON format.
> When parsing the exception data, an exception is thrown, even though the data 
> is valid JSON; it is just not a valid record.
> {color:#FF}exception data:{"SHEET":[""]}{color}
> {color:#FF}my table:{color}
> CREATE TABLE offline
> (
>  SHEET ROW (
>  HEADER MAP < STRING, STRING >,
>  ITEM ROW (
>  AMOUNT STRING,
>  COST STRING,
>  GOODSID STRING,
>  SALEVALUE STRING,
>  SAP_RTMATNR STRING,
>  SAP_RTPLU STRING,
>  SERIALID STRING,
>  SHEETID STRING
>  ) ARRAY,
>  ITEM5 MAP < STRING, STRING > ARRAY,
>  ITEM1 MAP < STRING, STRING > ARRAY,
>  TENDER MAP < STRING, STRING > ARRAY
>  ) ARRAY
> )
> WITH (
>  'connector' = 'kafka',
>  'properties.bootstrap.servers' = 'xxx:9092',
>  'properties.group.id' = 'realtime.sales.offline.group',
>  'topic' = 'bms133',
>  'format' = 'json',
>  {color:#FF}'json.ignore-parse-errors' = 'true',{color}
>  'scan.startup.mode' = 'earliest-offset'
> );
> {color:#FF}exception:{color}
> Caused by: java.lang.NullPointerExceptionCaused by: 
> java.lang.NullPointerException at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:116)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copyGenericArray(ArrayDataSerializer.java:129)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:90)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:51)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:156)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:123)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:715)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
>  at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:185)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:141)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>  at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:213)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243021#comment-17243021
 ] 

MengYao edited comment on FLINK-20463 at 12/3/20, 9:00 AM:
---

 

I have a problem similar to yours: I defined a Kafka dynamic table in 
SQL-Client. However, due to the incorrect format of some elements in the Kafka 
topic, an exception was thrown in SQL-Client. Can we add a configuration item 
to ignore these error records?

version = 1.11.2

module = Table & SQL

My steps:

// 1. Enter the command line
$FLINK_HOME/bin/sql-client.sh embedded

// 2. Create the Kafka dynamic table
Flink SQL> CREATE TABLE kfk_test01 (
  order_id BIGINT,  -- order ID
  original_price DOUBLE,  -- amount actually paid
  ctime BIGINT,  -- creation time
  ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),  -- use ctime as the timestamp ts
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- a watermark with a 5-second delay on ts
) WITH (
  'connector' = 'kafka',
  'topic' = 'test01',
  'properties.bootstrap.servers' = 'node1:9092',
  'properties.group.id' = 'testGroup',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

// 3. Execute a query statement
Flink SQL> select * from kfk_test01;

// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty)

*!QQ截图111.jpg!*

// 5. Can you add a general configuration item similar to MapReduce that can skip bad records? It can be used for both JSON and CSV.

For example: skip.fail.records=0 (valid values: 0, -1, >0)

  0: the default value of 0 means that bad records are not allowed to be skipped
  -1: means that all bad records can be skipped
  any number > 0: indicates the maximum acceptable number of bad records


was (Author: mengyao):
 

I have a problem similar to yours: I defined a Kafka dynamic table in 
SQL-Client. However, due to the incorrect format of some elements in the Kafka 
topic, an exception was thrown in SQL-Client. Can we add a configuration item 
to ignore these error records?

version = 1.11.2

module = Table & SQL

My steps:

// 1. Enter the command line
$FLINK_HOME/bin/sql-client.sh embedded

// 2. Create the Kafka dynamic table
Flink SQL> CREATE TABLE kfk_test01 (
  order_id BIGINT,  -- order ID
  original_price DOUBLE,  -- amount actually paid
  ctime BIGINT,  -- creation time
  ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),  -- use ctime as the timestamp ts
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- a watermark with a 5-second delay on ts
) WITH (
  'connector' = 'kafka',
  'topic' = 'test01',
  'properties.bootstrap.servers' = 'node1:9092',
  'properties.group.id' = 'testGroup',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

// 3. Execute a query statement
Flink SQL> select * from kfk_test01;

// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty)

*!QQ截图111.jpg!*

// 5. Could a general configuration item be added, similar to MapReduce's ability to skip bad records? It could be used for both JSON and CSV.

For example: skip.fail.records=0 (valid values: 0, -1, >0)

  0: the default value of 0 means that bad records are not allowed to be skipped
  -1: means that all bad records can be skipped
  any number > 0: indicates the maximum acceptable number of bad records

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: QQ截图111.jpg, 无标题111.png
>
>
> can Flink SQL pro

[jira] [Updated] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MengYao updated FLINK-20463:

Attachment: (was: 无标题111.png)

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
>
> Can Flink SQL provide an option to ignore exception records?
> I have a table that maps Kafka data in JSON format.
> When parsing the exception data, an exception is thrown, even though the data 
> is valid JSON; it is just not a valid record.
> {color:#FF}exception data:{"SHEET":[""]}{color}
> {color:#FF}my table:{color}
> CREATE TABLE offline
> (
>  SHEET ROW (
>  HEADER MAP < STRING, STRING >,
>  ITEM ROW (
>  AMOUNT STRING,
>  COST STRING,
>  GOODSID STRING,
>  SALEVALUE STRING,
>  SAP_RTMATNR STRING,
>  SAP_RTPLU STRING,
>  SERIALID STRING,
>  SHEETID STRING
>  ) ARRAY,
>  ITEM5 MAP < STRING, STRING > ARRAY,
>  ITEM1 MAP < STRING, STRING > ARRAY,
>  TENDER MAP < STRING, STRING > ARRAY
>  ) ARRAY
> )
> WITH (
>  'connector' = 'kafka',
>  'properties.bootstrap.servers' = 'xxx:9092',
>  'properties.group.id' = 'realtime.sales.offline.group',
>  'topic' = 'bms133',
>  'format' = 'json',
>  {color:#FF}'json.ignore-parse-errors' = 'true',{color}
>  'scan.startup.mode' = 'earliest-offset'
> );
> {color:#FF}exception:{color}
> Caused by: java.lang.NullPointerExceptionCaused by: 
> java.lang.NullPointerException at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:116)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copyGenericArray(ArrayDataSerializer.java:129)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:90)
>  at 
> org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:51)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:156)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:123)
>  at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:715)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
>  at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>  at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
>  at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:185)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:141)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>  at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>  at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:213)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243021#comment-17243021
 ] 

MengYao edited comment on FLINK-20463 at 12/3/20, 9:02 AM:
---

 

I have a problem similar to yours: I defined a Kafka dynamic table in 
SQL-Client. However, due to the incorrect format of some elements in the Kafka 
topic, an exception was thrown in SQL-Client. Can we add a configuration item 
to ignore these error records?

version = 1.11.2

module = Table & SQL

My steps:

// 1. Enter the command line
$FLINK_HOME/bin/sql-client.sh embedded

// 2. Create the Kafka dynamic table
Flink SQL> CREATE TABLE kfk_test01 (
  order_id BIGINT,
  original_price DOUBLE,
  ctime BIGINT,
  ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'test01',
  'properties.bootstrap.servers' = 'node1:9092',
  'properties.group.id' = 'testGroup',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

// 3. Execute a query statement
Flink SQL> select * from kfk_test01;

// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty)

!image-2020-12-03-17-04-01-463.png!

// 5. Can you add a general configuration item similar to MapReduce that can skip bad records? It can be used for both JSON and CSV.

For example: skip.fail.records=0 (valid values: 0, -1, >0)

  0: the default value of 0 means that bad records are not allowed to be skipped
  -1: means that all bad records can be skipped
  any number > 0: indicates the maximum acceptable number of bad records


was (Author: mengyao):
 

I have a problem similar to yours: I defined a Kafka dynamic table in 
SQL-Client. However, due to the incorrect format of some elements in the Kafka 
topic, an exception was thrown in SQL-Client. Can we add a configuration item 
to ignore these error records?

version = 1.11.2

module = Table & SQL

My steps:

// 1. Enter the command line
$FLINK_HOME/bin/sql-client.sh embedded

// 2. Create the Kafka dynamic table
Flink SQL> CREATE TABLE kfk_test01 (
  order_id BIGINT,  -- order ID
  original_price DOUBLE,  -- amount actually paid
  ctime BIGINT,  -- creation time
  ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),  -- use ctime as the timestamp ts
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- a watermark with a 5-second delay on ts
) WITH (
  'connector' = 'kafka',
  'topic' = 'test01',
  'properties.bootstrap.servers' = 'node1:9092',
  'properties.group.id' = 'testGroup',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

// 3. Execute a query statement
Flink SQL> select * from kfk_test01;

// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty)

!image-2020-12-03-17-04-01-463.png!

// 5. Can you add a general configuration item similar to MapReduce that can skip bad records? It can be used for both JSON and CSV.

For example: skip.fail.records=0 (valid values: 0, -1, >0)

  0: the default value of 0 means that bad records are not allowed to be skipped
  -1: means that all bad records can be skipped
  any number > 0: indicates the maximum acceptable number of bad records

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major

[jira] [Comment Edited] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243021#comment-17243021
 ] 

MengYao edited comment on FLINK-20463 at 12/3/20, 9:02 AM:
---

 

I have a problem similar to yours: I defined a Kafka dynamic table in 
SQL-Client. However, due to the incorrect format of some elements in the Kafka 
topic, an exception was thrown in SQL-Client. Can we add a configuration item 
to ignore these error records?

version = 1.11.2

module = Table & SQL

My steps:

// 1. Enter the command line
$FLINK_HOME/bin/sql-client.sh embedded

// 2. Create the Kafka dynamic table
Flink SQL> CREATE TABLE kfk_test01 (
  order_id BIGINT,  -- order ID
  original_price DOUBLE,  -- amount actually paid
  ctime BIGINT,  -- creation time
  ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),  -- use ctime as the timestamp ts
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- a watermark with a 5-second delay on ts
) WITH (
  'connector' = 'kafka',
  'topic' = 'test01',
  'properties.bootstrap.servers' = 'node1:9092',
  'properties.group.id' = 'testGroup',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

// 3. Execute a query statement
Flink SQL> select * from kfk_test01;

// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty)

!image-2020-12-03-17-04-01-463.png!

// 5. Can you add a general configuration item similar to MapReduce that can skip bad records? It can be used for both JSON and CSV.

For example: skip.fail.records=0 (valid values: 0, -1, >0)

  0: the default value of 0 means that bad records are not allowed to be skipped
  -1: means that all bad records can be skipped
  any number > 0: indicates the maximum acceptable number of bad records


was (Author: mengyao):
 

*I have a problem similar to yours: I defined a Kafka dynamic table in
SQL-Client. However, due to the incorrect format of some elements in the Kafka
topic, an exception was thrown in SQL-Client. Can we add a configuration item
to ignore these erroneous records?*

version = 1.11.2

module = Table & SQL

My Steps:

        *{color:#00875a}// 1. Enter the command line{color}*

        $FLINK_HOME/bin/sql-client.sh embedded

        *{color:#00875a}// 2. Create a Kafka dynamic table{color}*

        *{color:#57d9a3}Flink SQL{color}>* CREATE TABLE kfk_test01 (
                     order_id BIGINT,  -- order ID
                     original_price DOUBLE,  -- amount actually paid
                     ctime BIGINT,  -- creation time
                     ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),  -- use the ctime field value as the timestamp ts
                     WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- define a watermark with a 5-second delay on the ts field
                    ) WITH (
                     'connector' = 'kafka',
                     'topic' = 'test01',
                     'properties.bootstrap.servers' = 'node1:9092',
                     'properties.group.id' = 'testGroup',
                     'format' = 'json',
                     'scan.startup.mode' = 'earliest-offset'
        *);*

        *{color:#00875a}// 3. Execute a query statement{color}*

        *{color:#57d9a3}Flink SQL>{color}* select * from kfk_test01;

        *{color:#00875a}// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty){color}*

!QQ截图111.jpg!

        *// 5. Could you add a general configuration item, similar to MapReduce's, that can skip bad records? It could be used for both JSON and CSV.*

        E.g.: *{color:#de350b}skip.fail.records=0{color}* (accepted values: 0, -1, >0)

        *{color:#de350b}0: the default value; bad records are not allowed to be skipped{color}*

        *{color:#de350b}-1: all bad records may be skipped{color}*

        *{color:#de350b}any number > 0: the maximum acceptable number of bad records{color}*

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporte

[jira] [Created] (FLINK-20464) Some Table examples have wrong program-class defined

2020-12-03 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-20464:


 Summary: Some Table examples have wrong program-class defined
 Key: FLINK-20464
 URL: https://issues.apache.org/jira/browse/FLINK-20464
 Project: Flink
  Issue Type: Bug
  Components: Examples
Affects Versions: 1.12.0
Reporter: Dawid Wysakowicz
 Fix For: 1.12.0


Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the manifest entry 
was not updated in the pom.xml. This means it is not possible to run the 
examples without passing the class name explicitly.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243021#comment-17243021
 ] 

MengYao edited comment on FLINK-20463 at 12/3/20, 9:04 AM:
---

 

*I have a problem similar to yours: I defined a Kafka dynamic table in
SQL-Client. However, due to the incorrect format of some elements in the Kafka
topic, an exception was thrown in SQL-Client. Can we add a configuration item
to ignore these erroneous records?*

version = 1.11.2

module = Table & SQL

My Steps:

        *{color:#00875a}// 1. Enter the command line{color}*

        $FLINK_HOME/bin/sql-client.sh embedded

        *{color:#00875a}// 2. Create a Kafka dynamic table{color}*

        *{color:#57d9a3}Flink SQL{color}>* CREATE TABLE kfk_test01 (
                     order_id BIGINT,
                     original_price DOUBLE,
                     ctime BIGINT,
                     ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),
                     WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
                    ) WITH (
                     'connector' = 'kafka',
                     'topic' = 'test01',
                     'properties.bootstrap.servers' = 'node1:9092',
                     'properties.group.id' = 'testGroup',
                     'format' = 'json',
                     'scan.startup.mode' = 'earliest-offset'
        *);*

        *{color:#00875a}// 3. Execute a query statement{color}*

        *{color:#57d9a3}Flink SQL>{color}* select * from kfk_test01;

        *{color:#00875a}// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty){color}*

!image-2020-12-03-17-04-01-463.png!

        *// 5. Could you add a general configuration item, similar to MapReduce's, that can skip bad records? It could be used for both JSON and CSV.*

        E.g.: *{color:#de350b}skip.fail.records=0{color}* (accepted values: 0, -1, >0)

        *{color:#de350b}0: the default value; bad records are not allowed to be skipped{color}*

        *{color:#de350b}-1: all bad records may be skipped{color}*

        *{color:#de350b}any number > 0: the maximum acceptable number of bad records{color}*


was (Author: mengyao):
 

*I have a problem similar to yours: I defined a Kafka dynamic table in
SQL-Client. However, due to the incorrect format of some elements in the Kafka
topic, an exception was thrown in SQL-Client. Can we add a configuration item
to ignore these erroneous records?*

version = 1.11.2

module = Table & SQL

My Steps:

        *{color:#00875a}// 1. Enter the command line{color}*

        $FLINK_HOME/bin/sql-client.sh embedded

        *{color:#00875a}// 2. Create a Kafka dynamic table{color}*

        *{color:#57d9a3}Flink SQL{color}>* CREATE TABLE kfk_test01 (
                     order_id BIGINT,
                     original_price DOUBLE,
                     ctime BIGINT,
                     ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),
                     WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
                    ) WITH (
                     'connector' = 'kafka',
                     'topic' = 'test01',
                     'properties.bootstrap.servers' = 'node1:9092',
                     'properties.group.id' = 'testGroup',
                     'format' = 'json',
                     'scan.startup.mode' = 'earliest-offset'
        *);*

        *{color:#00875a}// 3. Execute a query statement{color}*

        *{color:#57d9a3}Flink SQL>{color}* select * from kfk_test01;

        *{color:#00875a}// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty){color}*

!image-2020-12-03-17-04-01-463.png!

        *// 5. Could you add a general configuration item, similar to MapReduce's, that can skip bad records? It could be used for both JSON and CSV.*

        E.g.: *{color:#de350b}skip.fail.records=0{color}* (accepted values: 0, -1, >0)

        *{color:#de350b}0: the default value; bad records are not allowed to be skipped{color}*

        *{color:#de350b}-1: all bad records may be skipped{color}*

        *{color:#de350b}any number > 0: the maximum acceptable number of bad records{color}*

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: image-2020-12-03-17-04-01-463.png
>
>
> can Flink SQL provide an option to ignore exception record?

[jira] [Created] (FLINK-20465) Fail globally when not resuming from the latest checkpoint in regional failover

2020-12-03 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-20465:
-

 Summary: Fail globally when not resuming from the latest 
checkpoint in regional failover
 Key: FLINK-20465
 URL: https://issues.apache.org/jira/browse/FLINK-20465
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Affects Versions: 1.12.0
Reporter: Till Rohrmann
 Fix For: 1.13.0


As a follow-up to FLINK-20290 we should assert that we resume from the latest
checkpoint when doing a regional failover in the {{SourceCoordinators}} in
order to avoid losing input splits (see FLINK-20427). If the assumption does
not hold, then we should fail the job globally so that we reset the master
state to a consistent view of the state. Such behaviour can act as a safety
net in case Flink ever tries to recover from a checkpoint that is not the
latest available one.

cc [~sewen], [~jqin]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243021#comment-17243021
 ] 

MengYao edited comment on FLINK-20463 at 12/3/20, 9:07 AM:
---

 

*I have a problem similar to yours: I defined a Kafka dynamic table in
SQL-Client. However, due to the incorrect format of some elements in the Kafka
topic, an exception was thrown in SQL-Client. Can we add a configuration item
to ignore these erroneous records?*

version = 1.11.2

module = Table & SQL

My Steps:

        *{color:#00875a}// 1. Enter the command line{color}*

        $FLINK_HOME/bin/sql-client.sh embedded

        *{color:#00875a}// 2. Create a Kafka dynamic table{color}*

        *{color:#57d9a3}Flink SQL{color}>* CREATE TABLE kfk_test01 (
                     order_id BIGINT,
                     original_price DOUBLE,
                     ctime BIGINT,
                     ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),
                     WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
                    ) WITH (
                     'connector' = 'kafka',
                     'topic' = 'test01',
                     'properties.bootstrap.servers' = 'node1:9092',
                     'properties.group.id' = 'testGroup',
                     'format' = 'json',
                     'scan.startup.mode' = 'earliest-offset'
        *);*

        *{color:#00875a}// 3. Execute a query statement{color}*

        *{color:#57d9a3}Flink SQL>{color}* select * from kfk_test01;

        *{color:#00875a}// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty){color}*

!image-2020-12-03-17-04-01-463.png!

        *// 5. Could you add a general configuration item, similar to MapReduce's, that can skip bad records? It could be used for both JSON and CSV.*

        E.g.: *{color:#de350b}skip.fail.records=0{color}* (accepted values: 0, -1, >0)

        *{color:#de350b}0: the default value; bad records are not allowed to be skipped{color}*

        *{color:#de350b}-1: all bad records may be skipped{color}*

        *{color:#de350b}any number > 0: the maximum acceptable number of bad records{color}*


was (Author: mengyao):
 

*I have a problem similar to yours: I defined a Kafka dynamic table in
SQL-Client. However, due to the incorrect format of some elements in the Kafka
topic, an exception was thrown in SQL-Client. Can we add a configuration item
to ignore these erroneous records?*

version = 1.11.2

module = Table & SQL

My Steps:

        *{color:#00875a}// 1. Enter the command line{color}*

        $FLINK_HOME/bin/sql-client.sh embedded

        *{color:#00875a}// 2. Create a Kafka dynamic table{color}*

        *{color:#57d9a3}Flink SQL{color}>* CREATE TABLE kfk_test01 (
                     order_id BIGINT,
                     original_price DOUBLE,
                     ctime BIGINT,
                     ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),
                     WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
                    ) WITH (
                     'connector' = 'kafka',
                     'topic' = 'test01',
                     'properties.bootstrap.servers' = 'node1:9092',
                     'properties.group.id' = 'testGroup',
                     'format' = 'json',
                     'scan.startup.mode' = 'earliest-offset'
        *);*

        *{color:#00875a}// 3. Execute a query statement{color}*

        *{color:#57d9a3}Flink SQL>{color}* select * from kfk_test01;

        *{color:#00875a}// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty){color}*

!image-2020-12-03-17-04-01-463.png!

        *// 5. Could you add a general configuration item, similar to MapReduce's, that can skip bad records? It could be used for both JSON and CSV.*

        E.g.: *{color:#de350b}skip.fail.records=0{color}* (accepted values: 0, -1, >0)

        *{color:#de350b}0: the default value; bad records are not allowed to be skipped{color}*

        *{color:#de350b}-1: all bad records may be skipped{color}*

        *{color:#de350b}any number > 0: the maximum acceptable number of bad records{color}*

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: image-2020-12-03-17-04-01-463.png
>
>
> can Flink SQL provide an option to ignore exception record?
> I have a table that maps kafka dat

[jira] [Commented] (FLINK-20297) Make `SerializerTestBase::getTestData` return List

2020-12-03 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243031#comment-17243031
 ] 

Dawid Wysakowicz commented on FLINK-20297:
--

Are there any cases other than {{Unit}}, which is a very rare corner case?
{{Unit}} is a very special type in Scala, similar to the {{Void}} type.
Honestly, I'd prefer not to change dozens of classes for that single case.

> Make `SerializerTestBase::getTestData` return List
> -
>
> Key: FLINK-20297
> URL: https://issues.apache.org/jira/browse/FLINK-20297
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Minor
>  Labels: pull-request-available
>
> Currently `SerializerTestBase::getTestData` returns T[], which cannot be
> overridden from Scala. It means that developers cannot add Scala serializer
> tests based on `SerializerTestBase`.
> So I would propose changing `SerializerTestBase::getTestData` to return a
> List.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
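
A minimal sketch of the proposed signature change (illustrative class names,
not the merged patch):

{code:java}
import java.util.Arrays;
import java.util.List;

// Sketch: the base class exposes test data as a List instead of T[], so that
// subclasses written in Scala (or any JVM language) can provide it directly.
abstract class SerializerTestBaseSketch<T> {
    // Old shape, hard to override from Scala for element types like scala.Unit:
    // protected abstract T[] getTestData();

    // Proposed shape:
    protected abstract List<T> getTestData();
}

class StringSerializerTestSketch extends SerializerTestBaseSketch<String> {
    @Override
    protected List<String> getTestData() {
        return Arrays.asList("a", "ab", "abc");
    }
}
{code}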


[jira] [Updated] (FLINK-20465) Fail globally when not resuming from the latest checkpoint in regional failover

2020-12-03 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-20465:
--
Fix Version/s: 1.12.1

> Fail globally when not resuming from the latest checkpoint in regional 
> failover
> ---
>
> Key: FLINK-20465
> URL: https://issues.apache.org/jira/browse/FLINK-20465
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Priority: Critical
> Fix For: 1.13.0, 1.12.1
>
>
> As a follow-up to FLINK-20290 we should assert that we resume from the
> latest checkpoint when doing a regional failover in the
> {{SourceCoordinators}} in order to avoid losing input splits (see
> FLINK-20427). If the assumption does not hold, then we should fail the job
> globally so that we reset the master state to a consistent view of the state.
> Such behaviour can act as a safety net in case Flink ever tries to recover
> from a checkpoint that is not the latest available one.
> cc [~sewen], [~jqin]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20463) flink-1.11.2 -sql cannot ignore exception record

2020-12-03 Thread MengYao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243021#comment-17243021
 ] 

MengYao edited comment on FLINK-20463 at 12/3/20, 9:08 AM:
---

 

*I have a problem similar to yours: I defined a Kafka dynamic table in
SQL-Client. However, due to the incorrect format of some elements in the Kafka
topic, an exception was thrown in SQL-Client. Can we add a configuration item
to ignore these erroneous records?*

version = 1.11.2

module = Table & SQL

My Steps:

        *{color:#00875a}// 1. Enter the command line{color}*

        $FLINK_HOME/bin/sql-client.sh embedded

        *{color:#00875a}// 2. Create a Kafka dynamic table{color}*

        *{color:#57d9a3}Flink SQL{color}>* CREATE TABLE kfk_test01 (
                     order_id BIGINT,
                     original_price DOUBLE,
                     ctime BIGINT,
                     ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),
                     WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
                    ) WITH (
                     'connector' = 'kafka',
                     'topic' = 'test01',
                     'properties.bootstrap.servers' = 'node1:9092',
                     'properties.group.id' = 'testGroup',
                     'format' = 'json',
                     'scan.startup.mode' = 'earliest-offset'
        *);*

        *{color:#00875a}// 3. Execute a query statement{color}*

        *{color:#57d9a3}Flink SQL>{color}* select * from kfk_test01;

        *{color:#00875a}// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty){color}*

!image-2020-12-03-17-04-01-463.png!

        *// 5. Could you add a general configuration item, similar to MapReduce's, that can skip bad records? It could be used for both JSON and CSV.*

        E.g.: *{color:#de350b}skip.fail.records=0{color}* (accepted values: 0, -1, >0)

        *{color:#de350b}0: the default value; bad records are not allowed to be skipped{color}*

        *{color:#de350b}-1: all bad records may be skipped{color}*

        *{color:#de350b}any number > 0: the maximum acceptable number of bad records{color}*


was (Author: mengyao):
 

*I have a problem similar to yours: I defined a Kafka dynamic table in
SQL-Client. However, due to the incorrect format of some elements in the Kafka
topic, an exception was thrown in SQL-Client. Can we add a configuration item
to ignore these erroneous records?*

version = 1.11.2

module = Table & SQL

My Steps:

        *{color:#00875a}// 1. Enter the command line{color}*

        $FLINK_HOME/bin/sql-client.sh embedded

        *{color:#00875a}// 2. Create a Kafka dynamic table{color}*

        *{color:#57d9a3}Flink SQL{color}>* CREATE TABLE kfk_test01 (
                     order_id BIGINT,
                     original_price DOUBLE,
                     ctime BIGINT,
                     ts AS TO_TIMESTAMP(FROM_UNIXTIME(ctime, 'yyyy-MM-dd HH:mm:ss')),
                     WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
                    ) WITH (
                     'connector' = 'kafka',
                     'topic' = 'test01',
                     'properties.bootstrap.servers' = 'node1:9092',
                     'properties.group.id' = 'testGroup',
                     'format' = 'json',
                     'scan.startup.mode' = 'earliest-offset'
        *);*

        *{color:#00875a}// 3. Execute a query statement{color}*

        *{color:#57d9a3}Flink SQL>{color}* select * from kfk_test01;

        *{color:#00875a}// 4. Encountered a malformed element causing the query to fail (element content is NULL or empty){color}*

!image-2020-12-03-17-04-01-463.png!

        *// 5. Could you add a general configuration item, similar to MapReduce's, that can skip bad records? It could be used for both JSON and CSV.*

        E.g.: *{color:#de350b}skip.fail.records=0{color}* (accepted values: 0, -1, >0)

        *{color:#de350b}0: the default value; bad records are not allowed to be skipped{color}*

        *{color:#de350b}-1: all bad records may be skipped{color}*

        *{color:#de350b}any number > 0: the maximum acceptable number of bad records{color}*

> flink-1.11.2 -sql cannot ignore exception record
> 
>
> Key: FLINK-20463
> URL: https://issues.apache.org/jira/browse/FLINK-20463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.2
> Environment: 1.11.2
> 2.11
>Reporter: 谢波
>Priority: Major
> Attachments: image-2020-12-03-17-04-01-463.png
>
>
> can Flink SQL provide an option to ignore exception record?
> I have a tab

[jira] [Commented] (FLINK-20427) Remove CheckpointConfig.setPreferCheckpointForRecovery because it can lead to data loss

2020-12-03 Thread Nico Kruber (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243032#comment-17243032
 ] 

Nico Kruber commented on FLINK-20427:
-

Ahh... Flink regional failover. Makes sense now: for these, the sources have
the implicit assumption that they recover from the latest snapshot (while for a
global failover, they restore from the snapshotted state). It would be good to
add a safeguard to ensure that the source always recovers from the latest
snapshot for regional failovers (ideally falling back to a global failover if
this assumption does not hold).
As for removing {{CheckpointConfig.setPreferCheckpointForRecovery}}: let's
evaluate on the user ML whether anyone is relying on this and, if not, let's
remove it and get rid of one more special case. The only use case I can think 
of which may benefit from this feature here is a low-latency use case which 
tolerates duplicates in the sinks but has very strong SLAs even in the failure 
case: these could work around the slow savepoint-restore via retained 
checkpoints. Retained checkpoints, however, don't really serve as a backup 
which you can roll back to in case of bugs - user-triggered checkpoints could 
be a better solution here, as mentioned, but they don't exist yet.

> Remove CheckpointConfig.setPreferCheckpointForRecovery because it can lead to 
> data loss
> ---
>
> Key: FLINK-20427
> URL: https://issues.apache.org/jira/browse/FLINK-20427
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Priority: Critical
> Fix For: 1.13.0
>
>
> The {{CheckpointConfig.setPreferCheckpointForRecovery}} allows to configure 
> whether Flink prefers checkpoints for recovery if the 
> {{CompletedCheckpointStore}} contains savepoints and checkpoints. This is 
> problematic because due to this feature, Flink might prefer older checkpoints 
> over newer savepoints for recovery. Since some components expect that the 
> always the latest checkpoint/savepoint is used (e.g. the 
> {{SourceCoordinator}}), it breaks assumptions and can lead to 
> {{SourceSplits}} which are not read. This effectively means that the system 
> loses data. Similarly, this behaviour can cause that exactly once sinks might 
> output results multiple times which violates the processing guarantees. 
> Hence, I believe that we should remove this setting because it changes 
> Flink's behaviour in some very significant way potentially w/o the user 
> noticing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20466) Table 'EXPR$1' not found with UNION ALL

2020-12-03 Thread wxmimperio (Jira)
wxmimperio created FLINK-20466:
--

 Summary: Table 'EXPR$1' not found with UNION ALL
 Key: FLINK-20466
 URL: https://issues.apache.org/jira/browse/FLINK-20466
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.1
Reporter: wxmimperio


 
{code:java}
CREATE TABLE table_01 (  aaa varchar,  bbb varchar) WITH(...);

CREATE TABLE table_02 (  aaa varchar,  bbb varchar) WITH(...);

create view my_view as
select aaa,bbb from (
   select aaa,bbb from table_01
   union all
   select aaa,bbb from table_02
);
create table bsql_log (aaa varchar, bbb varchar) with ('connector' = 'log');
insert into bsql_log SELECT aaa,bbb FROM my_view
{code}
 
Running the above code reports an error:
{code:java}
org.apache.calcite.runtime.CalciteContextException: From line 1, column 8 to 
line 1, column 15: Table 'EXPR$1' not found
{code}
But if I assign an alias to the result of the UNION ALL, it works as expected.
{code:java}
create view my_view as
select aaa,bbb from ( 
 select aaa,bbb from table_01 
 union all 
 select aaa,bbb from table_02
) as union_result;
{code}
 
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20465) Fail globally when not resuming from the latest checkpoint in regional failover

2020-12-03 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-20465:
--
Description: 
As a follow-up to FLINK-20290 we should assert that we resume from the latest
checkpoint when doing a regional failover in the {{SourceCoordinators}} in
order to avoid losing input splits (see FLINK-20427). If the assumption does
not hold, then we should fail the job globally so that we reset the master
state to a consistent view of the state. Such behaviour can act as a safety
net in case Flink ever tries to recover from a checkpoint that is not the
latest available one.

One idea how to solve it is to remember the latest completed checkpoint id 
somewhere along the way to the 
{{SplitAssignmentTracker.getAndRemoveUncheckpointedAssignment}} and failing 
when the restored checkpoint id is smaller.

cc [~sewen], [~jqin]

  was:
As a follow-up to FLINK-20290 we should assert that we resume from the latest
checkpoint when doing a regional failover in the {{SourceCoordinators}} in
order to avoid losing input splits (see FLINK-20427). If the assumption does
not hold, then we should fail the job globally so that we reset the master
state to a consistent view of the state. Such behaviour can act as a safety
net in case Flink ever tries to recover from a checkpoint that is not the
latest available one.

cc [~sewen], [~jqin]


> Fail globally when not resuming from the latest checkpoint in regional 
> failover
> ---
>
> Key: FLINK-20465
> URL: https://issues.apache.org/jira/browse/FLINK-20465
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Priority: Critical
> Fix For: 1.13.0, 1.12.1
>
>
> As a follow-up to FLINK-20290 we should assert that we resume from the
> latest checkpoint when doing a regional failover in the
> {{SourceCoordinators}} in order to avoid losing input splits (see
> FLINK-20427). If the assumption does not hold, then we should fail the job
> globally so that we reset the master state to a consistent view of the state.
> Such behaviour can act as a safety net in case Flink ever tries to recover
> from a checkpoint that is not the latest available one.
> One idea how to solve it is to remember the latest completed checkpoint id 
> somewhere along the way to the 
> {{SplitAssignmentTracker.getAndRemoveUncheckpointedAssignment}} and failing 
> when the restored checkpoint id is smaller.
> cc [~sewen], [~jqin]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
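
A rough sketch of that idea (hypothetical names, not actual Flink code):
remember the id of the latest completed checkpoint and escalate to a global
failover when a regional restore carries an older one.

{code:java}
// Hypothetical sketch; class and method names are illustrative only.
class CheckpointGuard {
    private long latestCompletedCheckpointId = -1L;

    void onCheckpointComplete(long checkpointId) {
        latestCompletedCheckpointId = Math.max(latestCompletedCheckpointId, checkpointId);
    }

    // Called on the regional-failover path before handing out the
    // uncheckpointed split assignments.
    void checkRestoredCheckpointId(long restoredCheckpointId) {
        if (restoredCheckpointId < latestCompletedCheckpointId) {
            // Splits assigned after restoredCheckpointId could be lost,
            // so fail globally instead of continuing silently.
            throw new IllegalStateException(
                    "Restoring from checkpoint " + restoredCheckpointId
                            + " although checkpoint " + latestCompletedCheckpointId
                            + " has already completed");
        }
    }
}
{code}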


[jira] [Updated] (FLINK-20464) Some Table examples are not built correctly

2020-12-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-20464:
-
Summary: Some Table examples are not built correctly  (was: Some Table 
examples have wrong program-class defined)

> Some Table examples are not built correctly
> ---
>
> Key: FLINK-20464
> URL: https://issues.apache.org/jira/browse/FLINK-20464
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.12.0
>
>
> Some examples were moved to the 
> {{org.apache.flink.table.examples.scala.basics}} package but the manifest 
> entry was not updated in the pom.xml. This means it is not possible to run 
> the examples without passing the class name explicitly.
> Examples that I noticed:
> * org.apache.flink.table.examples.scala.basics.StreamTableExample
> * org.apache.flink.table.examples.scala.basics.TPCHQuery3Table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14244: [FLINK-20299][docs-zh] Update Chinese table overview

2020-12-03 Thread GitBox


flinkbot edited a comment on pull request #14244:
URL: https://github.com/apache/flink/pull/14244#issuecomment-734665945


   
   ## CI report:
   
   * e2d25f123282470094c6ebe4122e04921304cc57 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10227)
 
   * 2c6bd4bafadb83252e0fa01b777b44539e320396 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10466)
 
   * 7e6ebb48a9a018d7f23fa72e0304d293a3706294 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10472)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14295: [FLINK-20424][Web Frontend] Make the percent of acknowledged checkpoint more accurately

2020-12-03 Thread GitBox


flinkbot edited a comment on pull request #14295:
URL: https://github.com/apache/flink/pull/14295#issuecomment-737756527


   
   ## CI report:
   
   * 95b395326189c894448b3e6741bb747337ec44cb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10473)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20464) Some Table examples are not built correctly

2020-12-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-20464:
-
Description: 
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was
not updated. This means the example jars are not built correctly.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table

  was:
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the manifest entry 
was not updated in the pom.xml. This means it is not possible to run the 
examples without passing the class name explicitly.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table


> Some Table examples are not built correctly
> ---
>
> Key: FLINK-20464
> URL: https://issues.apache.org/jira/browse/FLINK-20464
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.12.0
>
>
> Some examples were moved to the 
> {{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was
> not updated. This means the example jars are not built correctly.
> Examples that I noticed:
> * org.apache.flink.table.examples.scala.basics.StreamTableExample
> * org.apache.flink.table.examples.scala.basics.TPCHQuery3Table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15906) physical memory exceeded causing being killed by yarn

2020-12-03 Thread yang gang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243033#comment-17243033
 ] 

yang gang commented on FLINK-15906:
---

Hi [~xintongsong],

Thank you very much. After increasing the JVM overhead
(taskmanager.memory.jvm-overhead.fraction: 0.3), we no longer observe the
physical-memory-exceeded exception.
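
For readers hitting the same problem, a sketch of the configuration that
worked here (values are illustrative, not a general recommendation):

{code}
# flink-conf.yaml fragment (sketch). Note that the overhead is also capped by
# taskmanager.memory.jvm-overhead.max (default 1g), so with a 4g process size
# a fraction of 0.3 only takes effect if the cap is raised as well.
taskmanager.memory.process.size: 4g
taskmanager.memory.jvm-overhead.fraction: 0.3
taskmanager.memory.jvm-overhead.max: 2g
{code}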

> physical memory exceeded causing being killed by yarn
> -
>
> Key: FLINK-15906
> URL: https://issues.apache.org/jira/browse/FLINK-15906
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: liupengcheng
>Priority: Major
>
> Recently, we encountered this issue when testing TPCDS queries with 100g of
> data. I first met this issue when I only set
> `taskmanager.memory.total-process.size` to `4g` with the `-tm` option. Then I
> tried to increase the JVM overhead size with the following arguments, but it
> still failed.
> {code:java}
> taskmanager.memory.jvm-overhead.min: 640m
> taskmanager.memory.jvm-metaspace: 128m
> taskmanager.memory.task.heap.size: 1408m
> taskmanager.memory.framework.heap.size: 128m
> taskmanager.memory.framework.off-heap.size: 128m
> taskmanager.memory.managed.size: 1408m
> taskmanager.memory.shuffle.max: 256m
> {code}
> {code:java}
> java.lang.Exception: [2020-02-05 11:31:32.345]Container 
> [pid=101677,containerID=container_e08_1578903621081_4785_01_51] is 
> running 46342144B beyond the 'PHYSICAL' memory limit. Current usage: 4.04 GB 
> of 4 GB physical memory used; 17.68 GB of 40 GB virtual memory used. Killing 
> container.java.lang.Exception: [2020-02-05 11:31:32.345]Container 
> [pid=101677,containerID=container_e08_1578903621081_4785_01_51] is 
> running 46342144B beyond the 'PHYSICAL' memory limit. Current usage: 4.04 GB 
> of 4 GB physical memory used; 17.68 GB of 40 GB virtual memory used. Killing 
> container.Dump of the process-tree for 
> container_e08_1578903621081_4785_01_51 : |- PID PPID PGRPID SESSID 
> CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) 
> RSSMEM_USAGE(PAGES) FULL_CMD_LINE |- 101938 101677 101677 101677 (java) 25762 
> 3571 18867417088 1059157 /opt/soft/openjdk1.8.0/bin/java 
> -Dhadoop.root.logfile=syslog -Xmx1610612736 -Xms1610612736 
> -XX:MaxDirectMemorySize=402653184 -XX:MaxMetaspaceSize=134217728 
> -Dlog.file=/home/work/hdd5/yarn/zjyprc-analysis/nodemanager/application_1578903621081_4785/container_e08_1578903621081_4785_01_51/taskmanager.log
>  -Dlog4j.configuration=file:./log4j.properties 
> org.apache.flink.yarn.YarnTaskExecutorRunner -D 
> taskmanager.memory.shuffle.max=268435456b -D 
> taskmanager.memory.framework.off-heap.size=134217728b -D 
> taskmanager.memory.framework.heap.size=134217728b -D 
> taskmanager.memory.managed.size=1476395008b -D taskmanager.cpu.cores=1.0 -D 
> taskmanager.memory.task.heap.size=1476395008b -D 
> taskmanager.memory.task.off-heap.size=0b -D 
> taskmanager.memory.shuffle.min=268435456b --configDir . 
> -Djobmanager.rpc.address=zjy-hadoop-prc-st2805.bj -Dweb.port=0 
> -Dweb.tmpdir=/tmp/flink-web-4bf6cd3a-a6e1-4b46-b140-b8ac7bdffbeb 
> -Djobmanager.rpc.port=36769 -Dtaskmanager.memory.managed.size=1476395008b 
> -Drest.address=zjy-hadoop-prc-st2805.bj |- 101677 101671 101677 101677 (bash) 
> 1 1 118030336 733 /bin/bash -c /opt/soft/openjdk1.8.0/bin/java 
> -Dhadoop.root.logfile=syslog -Xmx1610612736 -Xms1610612736 
> -XX:MaxDirectMemorySize=402653184 -XX:MaxMetaspaceSize=134217728 
> -Dlog.file=/home/work/hdd5/yarn/zjyprc-analysis/nodemanager/application_1578903621081_4785/container_e08_1578903621081_4785_01_51/taskmanager.log
>  -Dlog4j.configuration=file:./log4j.properties 
> org.apache.flink.yarn.YarnTaskExecutorRunner -D 
> taskmanager.memory.shuffle.max=268435456b -D 
> taskmanager.memory.framework.off-heap.size=134217728b -D 
> taskmanager.memory.framework.heap.size=134217728b -D 
> taskmanager.memory.managed.size=1476395008b -D taskmanager.cpu.cores=1.0 -D 
> taskmanager.memory.task.heap.size=1476395008b -D 
> taskmanager.memory.task.off-heap.size=0b -D 
> taskmanager.memory.shuffle.min=268435456b --configDir . 
> -Djobmanager.rpc.address=zjy-hadoop-prc-st2805.bj -Dweb.port=0 
> -Dweb.tmpdir=/tmp/flink-web-4bf6cd3a-a6e1-4b46-b140-b8ac7bdffbeb 
> -Djobmanager.rpc.port=36769 -Dtaskmanager.memory.managed.size=1476395008b 
> -Drest.address=zjy-hadoop-prc-st2805.bj 1> 
> /home/work/hdd5/yarn/zjyprc-analysis/nodemanager/application_1578903621081_4785/container_e08_1578903621081_4785_01_51/taskmanager.out
>  2> 
> /home/work/hdd5/yarn/zjyprc-analysis/nodemanager/application_1578903621081_4785/container_e08_1578903621081_4785_01_51/taskmanager.err
> {code}
> I suspect there are some leaks or unexpected offheap memory usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Updated] (FLINK-20464) Some Table examples are not built correctly

2020-12-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-20464:
-
Description: 
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was
not updated. This means the example jars are not built correctly and do not
contain the classes.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table

  was:
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was
not updated. This means the example jars are not built correctly.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table


> Some Table examples are not built correctly
> ---
>
> Key: FLINK-20464
> URL: https://issues.apache.org/jira/browse/FLINK-20464
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.12.0
>
>
> Some examples were moved to the 
> {{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was
> not updated. This means the example jars are not built correctly and do not
> contain the classes.
> Examples that I noticed:
> * org.apache.flink.table.examples.scala.basics.StreamTableExample
> * org.apache.flink.table.examples.scala.basics.TPCHQuery3Table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20464) Some Table examples are not built correctly

2020-12-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-20464:
-
Description: 
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
not updated. This means the example jars are not built correctly and do not 
contain the classes.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table

  was:
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was
not updated. This means the example jars are not built correctly and do not
contain the classes.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table


> Some Table examples are not built correctly
> ---
>
> Key: FLINK-20464
> URL: https://issues.apache.org/jira/browse/FLINK-20464
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.12.0
>
>
> Some examples were moved to the 
> {{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
> not updated. This means the example jars are not built correctly and do not 
> contain the classes.
> Examples that I noticed:
> * org.apache.flink.table.examples.scala.basics.StreamTableExample
> * org.apache.flink.table.examples.scala.basics.TPCHQuery3Table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20464) Some Table examples are not built correctly

2020-12-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-20464:
-
Description: 
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
not updated. This means the example jars are not built correctly and do not 
contain the classes.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table

We should update the {{includes}} sections e.g.:

{code}
<execution>
    <id>StreamTableExample</id>
    <phase>package</phase>
    <goals>
        <goal>jar</goal>
    </goals>
    <configuration>
        <classifier>StreamTableExample</classifier>
        <archive>
            <manifestEntries>
                <program-class>org.apache.flink.table.examples.scala.StreamTableExample</program-class>
            </manifestEntries>
        </archive>
        <includes>
            <include>org/apache/flink/table/examples/scala/StreamTableExample*</include>
        </includes>
    </configuration>
</execution>
{code}

  was:
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
not updated. This means the example jars are not built correctly and do not 
contain the classes.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table


> Some Table examples are not built correctly
> ---
>
> Key: FLINK-20464
> URL: https://issues.apache.org/jira/browse/FLINK-20464
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.12.0
>
>
> Some examples were moved to the 
> {{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
> not updated. This means the example jars are not built correctly and do not 
> contain the classes.
> Examples that I noticed:
> * org.apache.flink.table.examples.scala.basics.StreamTableExample
> * org.apache.flink.table.examples.scala.basics.TPCHQuery3Table
> We should update the {{includes}} sections e.g.:
> {code}
> <execution>
>     <id>StreamTableExample</id>
>     <phase>package</phase>
>     <goals>
>         <goal>jar</goal>
>     </goals>
>     <configuration>
>         <classifier>StreamTableExample</classifier>
>         <archive>
>             <manifestEntries>
>                 <program-class>org.apache.flink.table.examples.scala.StreamTableExample</program-class>
>             </manifestEntries>
>         </archive>
>         <includes>
>             <include>org/apache/flink/table/examples/scala/StreamTableExample*</include>
>         </includes>
>     </configuration>
> </execution>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
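
Once the {{program-class}} entry and the {{include}} pattern are updated to
the new {{basics}} package, the per-example jar contains its classes again and
can be run without passing the class name explicitly, e.g. (hypothetical jar
name, depending on the Scala and Flink versions of the build):

{code}
./bin/flink run flink-examples-table_2.11-1.12.0-StreamTableExample.jar
{code}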


[jira] [Updated] (FLINK-20464) Some Table examples are not built correctly

2020-12-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-20464:
-
Description: 
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
not updated. This means the example jars are not built correctly and do not 
contain the classes.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table

We should update the {{includes}} sections e.g.:

{code}
<execution>
    <id>StreamTableExample</id>
    <phase>package</phase>
    <goals>
        <goal>jar</goal>
    </goals>
    <configuration>
        <classifier>StreamTableExample</classifier>
        <archive>
            <manifestEntries>
                <program-class>org.apache.flink.table.examples.scala.StreamTableExample</program-class>
            </manifestEntries>
        </archive>
        <includes>
            <include>org/apache/flink/table/examples/scala/StreamTableExample*</include>
        </includes>
    </configuration>
</execution>
{code}

  was:
Some examples were moved to the 
{{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
not updated. This means the example jars are not built correctly and do not 
contain the classes.

Examples that I noticed:
* org.apache.flink.table.examples.scala.basics.StreamTableExample
* org.apache.flink.table.examples.scala.basics.TPCHQuery3Table

We should update the {{includes}} sections e.g.:

{code}
<execution>
    <id>StreamTableExample</id>
    <phase>package</phase>
    <goals>
        <goal>jar</goal>
    </goals>
    <configuration>
        <classifier>StreamTableExample</classifier>
        <archive>
            <manifestEntries>
                <program-class>org.apache.flink.table.examples.scala.StreamTableExample</program-class>
            </manifestEntries>
        </archive>
        <includes>
            <include>org/apache/flink/table/examples/scala/StreamTableExample*</include>
        </includes>
    </configuration>
</execution>
{code}


> Some Table examples are not built correctly
> ---
>
> Key: FLINK-20464
> URL: https://issues.apache.org/jira/browse/FLINK-20464
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.12.0
>
>
> Some examples were moved to the 
> {{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
> not updated. This means the example jars are not built correctly and do not 
> contain the classes.
> Examples that I noticed:
> * org.apache.flink.table.examples.scala.basics.StreamTableExample
> * org.apache.flink.table.examples.scala.basics.TPCHQuery3Table
> We should update the {{includes}} sections e.g.:
> {code}
> <execution>
>     <id>StreamTableExample</id>
>     <phase>package</phase>
>     <goals>
>         <goal>jar</goal>
>     </goals>
>     <configuration>
>         <classifier>StreamTableExample</classifier>
>         <archive>
>             <manifestEntries>
>                 <program-class>org.apache.flink.table.examples.scala.StreamTableExample</program-class>
>             </manifestEntries>
>         </archive>
>         <includes>
>             <include>org/apache/flink/table/examples/scala/StreamTableExample*</include>
>         </includes>
>     </configuration>
> </execution>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19984) Add TypeSerializerTestCoverageTest to check whether tests based on SerializerTestBase and TypeSerializerUpgradeTestBase

2020-12-03 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-19984:
-
Fix Version/s: 1.12.0

> Add TypeSerializerTestCoverageTest to check whether tests based on 
> SerializerTestBase and TypeSerializerUpgradeTestBase
> ---
>
> Key: FLINK-19984
> URL: https://issues.apache.org/jira/browse/FLINK-19984
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.12.0
>Reporter: Nicholas Jiang
>Assignee: Nicholas Jiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, we have {{TypeInfoTestCoverageTest}} which checks that we have a 
> test that extends {{TypeInformationTestBase}} for all type infos. But 
> {{TypeSerializer}} doesn’t have the same thing that would verify that 
> {{TypeSerializer}} has tests that extend {{SerializerTestBase}} and 
> {{TypeSerializerUpgradeTestBase}}. Therefore we don’t know if test coverage 
> of {{TypeSerializer}} is good.
> This would add {{TypeSerializerTestCoverageTest}} to check whether there are
> tests based on {{SerializerTestBase}} and {{TypeSerializerUpgradeTestBase}},
> because all serializers should have tests based on both of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19984) Add TypeSerializerTestCoverageTest to check whether tests based on SerializerTestBase and TypeSerializerUpgradeTestBase

2020-12-03 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-19984.

Resolution: Fixed

master: d9c5b436143d063a2b36950477d2f84e833ddfc6
release-1.12: c40dc919ad76c4e8a3aea58d74e998dee1504e17

> Add TypeSerializerTestCoverageTest to check whether tests based on 
> SerializerTestBase and TypeSerializerUpgradeTestBase
> ---
>
> Key: FLINK-19984
> URL: https://issues.apache.org/jira/browse/FLINK-19984
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Nicholas Jiang
>Assignee: Nicholas Jiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, we have {{TypeInfoTestCoverageTest}} which checks that we have a 
> test that extends {{TypeInformationTestBase}} for all type infos. But 
> {{TypeSerializer}} doesn’t have the same thing that would verify that 
> {{TypeSerializer}} has tests that extend {{SerializerTestBase}} and 
> {{TypeSerializerUpgradeTestBase}}. Therefore we don’t know if test coverage 
> of {{TypeSerializer}} is good.
> This would add {{TypeSerializerTestCoverageTest}} to check whether there are
> tests based on {{SerializerTestBase}} and {{TypeSerializerUpgradeTestBase}},
> because all serializers should have tests based on both of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19984) Add TypeSerializerTestCoverageTest to check whether tests based on SerializerTestBase and TypeSerializerUpgradeTestBase

2020-12-03 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-19984:
-
Affects Version/s: (was: 1.12.0)

> Add TypeSerializerTestCoverageTest to check whether tests based on 
> SerializerTestBase and TypeSerializerUpgradeTestBase
> ---
>
> Key: FLINK-19984
> URL: https://issues.apache.org/jira/browse/FLINK-19984
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Nicholas Jiang
>Assignee: Nicholas Jiang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, we have {{TypeInfoTestCoverageTest}} which checks that we have a 
> test that extends {{TypeInformationTestBase}} for all type infos. But 
> {{TypeSerializer}} doesn’t have the same thing that would verify that 
> {{TypeSerializer}} has tests that extend {{SerializerTestBase}} and 
> {{TypeSerializerUpgradeTestBase}}. Therefore we don’t know if test coverage 
> of {{TypeSerializer}} is good.
> This would add {{TypeSerializerTestCoverageTest}} to check whether there are
> tests based on {{SerializerTestBase}} and {{TypeSerializerUpgradeTestBase}},
> because all serializers should have tests based on both of them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] aljoscha closed pull request #14120: [FLINK-19984][core] Add TypeSerializerTestCoverageTest to check whether tests based on SerializerTestBase and TypeSerializerUpgradeTestBase

2020-12-03 Thread GitBox


aljoscha closed pull request #14120:
URL: https://github.com/apache/flink/pull/14120


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] aljoscha commented on pull request #14120: [FLINK-19984][core] Add TypeSerializerTestCoverageTest to check whether tests based on SerializerTestBase and TypeSerializerUpgradeTestBase

2020-12-03 Thread GitBox


aljoscha commented on pull request #14120:
URL: https://github.com/apache/flink/pull/14120#issuecomment-73871


   Thanks for the contribution, @SteNicholas! And thanks for the review, 
@guoweiM! I now merged this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 commented on pull request #11338: [FLINK-10052][ha] Tolerate temporarily suspended ZooKeeper connections

2020-12-03 Thread GitBox


wangyang0918 commented on pull request #11338:
URL: https://github.com/apache/flink/pull/11338#issuecomment-737778588


Some users also complain about the same issue on the
`user...@flink.apache.org` mailing list. When they restart the ZooKeeper
server nodes one by one, they find that all running Flink jobs have failed
over.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-20464) Some Table examples are not built correctly

2020-12-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-20464:


Assignee: Dawid Wysakowicz

> Some Table examples are not built correctly
> ---
>
> Key: FLINK-20464
> URL: https://issues.apache.org/jira/browse/FLINK-20464
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.12.0
>
>
> Some examples were moved to the 
> {{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
> not updated. This means the example jars are not built correctly and do not 
> contain the classes.
> Examples that I noticed:
> * org.apache.flink.table.examples.scala.basics.StreamTableExample
> * org.apache.flink.table.examples.scala.basics.TPCHQuery3Table
> We should update the {{includes}} sections e.g.:
> {code}
> <execution>
>     <id>StreamTableExample</id>
>     <phase>package</phase>
>     <goals>
>         <goal>jar</goal>
>     </goals>
>     <configuration>
>         <classifier>StreamTableExample</classifier>
>         <archive>
>             <manifestEntries>
>                 <program-class>org.apache.flink.table.examples.scala.StreamTableExample</program-class>
>             </manifestEntries>
>         </archive>
>         <includes>
>             <include>org/apache/flink/table/examples/scala/StreamTableExample*</include>
>         </includes>
>     </configuration>
> </execution>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14244: [FLINK-20299][docs-zh] Update Chinese table overview

2020-12-03 Thread GitBox


flinkbot edited a comment on pull request #14244:
URL: https://github.com/apache/flink/pull/14244#issuecomment-734665945


   
   ## CI report:
   
   * 2c6bd4bafadb83252e0fa01b777b44539e320396 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10466)
 
   * 7e6ebb48a9a018d7f23fa72e0304d293a3706294 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10472)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #14207: [FLINK-20250][table-runtime] NPE when invoking AsyncLookupJoinRunner#close method

2020-12-03 Thread GitBox


wuchong commented on pull request #14207:
URL: https://github.com/apache/flink/pull/14207#issuecomment-737781867


   cc @leonardBang 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20297) Make `SerializerTestBase::getTestData` return List

2020-12-03 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243055#comment-17243055
 ] 

Guowei Ma commented on FLINK-20297:
---

I didn't find any other case. Probably you are right.

I think we could add the `UnitSerializerTest` to the whitelist in the 
`TypeSerializerTestCoverageTest`.

> Make `SerializerTestBase::getTestData` return List
> -
>
> Key: FLINK-20297
> URL: https://issues.apache.org/jira/browse/FLINK-20297
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Minor
>  Labels: pull-request-available
>
> Currently `SerializerTestBase::getTestData` returns T[], which cannot be 
> overridden from Scala. This means that developers cannot add Scala serializer 
> tests based on `SerializerTestBase`.
> So I would propose changing `SerializerTestBase::getTestData` to return 
> List.
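> A minimal sketch of the proposed signature change ({{SerializerTestBaseSketch}} 
> below is a simplified, hypothetical stand-in; the real {{SerializerTestBase}} 
> has many more hooks):
> {code}
> import java.util.Arrays;
> import java.util.List;
> 
> abstract class SerializerTestBaseSketch<T> {
> 
>     // Before (per the issue description): returning T[] could not be
>     // overridden from Scala for some element types.
>     // protected abstract T[] getTestData();
> 
>     // Proposed: return a List<T>, which any Scala subclass can produce,
>     // e.g. via java.util.Arrays.asList or a Seq conversion.
>     protected abstract List<T> getTestData();
> }
> 
> class StringSerializerTestSketch extends SerializerTestBaseSketch<String> {
>     @Override
>     protected List<String> getTestData() {
>         return Arrays.asList("a", "b", "c");
>     }
> }
> {code}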



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rmetzger opened a new pull request #14296: [FLINK-20455] Add check for jar file contents

2020-12-03 Thread GitBox


rmetzger opened a new pull request #14296:
URL: https://github.com/apache/flink/pull/14296


   
   ## What is the purpose of the change
   
   During the release validation, we noticed several jar files containing 
LICENSE files in their root.
   
   This change adds some basic checks to validate the jar files.
   
   NOTICE: The check currently fails because the examples lack LICENSE files.
   
   ## Brief change log
   
   - Split license checker into notice and jar checker
   - add jar checker
   - fix flink-table related issues
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20455) Add check to LicenseChecker for top level /LICENSE files in shaded jars

2020-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20455:
---
Labels: pull-request-available  (was: )

> Add check to LicenseChecker for top level /LICENSE files in shaded jars
> ---
>
> Key: FLINK-20455
> URL: https://issues.apache.org/jira/browse/FLINK-20455
> Project: Flink
>  Issue Type: Task
>  Components: Build System / CI
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> During the verification of the 1.12.0 release, we noticed several 
> modules whose jar files contain LICENSE files that are not Apache 
> licenses.
> This could mislead users into thinking that the JARs are licensed not under 
> the ASL, but under something else.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #14296: [FLINK-20455] Add check for jar file contents

2020-12-03 Thread GitBox


flinkbot commented on pull request #14296:
URL: https://github.com/apache/flink/pull/14296#issuecomment-737792010


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1a102565515cc8116fa304d69dd1746739f8aa3e (Thu Dec 03 
09:42:08 UTC 2020)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20455).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol merged pull request #13722: [FLINK-19636][coordination] Add DeclarativeSlotPool

2020-12-03 Thread GitBox


zentol merged pull request #13722:
URL: https://github.com/apache/flink/pull/13722


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20467) Fix the Example in Python DataStream Doc

2020-12-03 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-20467:


 Summary: Fix the Example in Python DataStream Doc
 Key: FLINK-20467
 URL: https://issues.apache.org/jira/browse/FLINK-20467
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Documentation
Affects Versions: 1.12.0, 1.13.0
Reporter: Huang Xingbo
 Fix For: 1.12.0, 1.13.0


Currently the MapFunction example doesn't work. We need to fix it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19715) Optimize re-assignment of excess resources

2020-12-03 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-19715.

Resolution: Fixed

master: c1b96de2702098a93ac210e8983bb1b7df3097c2

> Optimize re-assignment of excess resources
> --
>
> Key: FLINK-19715
> URL: https://issues.apache.org/jira/browse/FLINK-19715
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.13.0
>
>
> The {{JobScopedResourceTracker}} tracks acquired resources that exceed the 
> job's requirements as {{excess}} resources.
> Whenever the requirements increase, or a (non-excess) resource is lost, we 
> try to assign any excess resources we have to either fulfill the new 
> requirements or fill in for the lost resource.
> This re-assignment is currently implemented by doing a full copy of the map 
> containing the excess resources and going through the usual code path for 
> acquired resources.
> This is fine in terms of correctness (although it can cause misleading log 
> messages), but in the worst case, where we cannot re-assign any excess 
> resources, we not only rebuild the original map in the process but also 
> potentially iterate over every outstanding requirement for every excess slot.
> We should optimize this step by iterating over the excess resource map once, 
> removing items on demand, and aborting early for a given excess resource 
> profile if no matching requirement can be found.
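> A minimal sketch of the single-pass approach (names, types, and the exact 
> profile-matching logic are hypothetical simplifications of the 
> {{JobScopedResourceTracker}} internals):
> {code}
> import java.util.HashMap;
> import java.util.Iterator;
> import java.util.Map;
> 
> class ExcessReassignmentSketch {
>     // Hypothetical state: counts of excess resources and outstanding
>     // requirements, keyed by a (simplified) resource profile.
>     private final Map<String, Integer> excess = new HashMap<>();
>     private final Map<String, Integer> outstandingRequirements = new HashMap<>();
> 
>     void reassignExcessResources() {
>         // Single pass over the excess map: remove entries on demand and
>         // abort early for a profile once no matching requirement is left,
>         // instead of copying the whole map and re-running the normal
>         // resource-acquisition code path.
>         Iterator<Map.Entry<String, Integer>> it = excess.entrySet().iterator();
>         while (it.hasNext()) {
>             Map.Entry<String, Integer> entry = it.next();
>             Integer required = outstandingRequirements.get(entry.getKey());
>             if (required == null) {
>                 continue; // early abort: no requirement matches this profile
>             }
>             int assigned = Math.min(required, entry.getValue());
>             if (assigned == required) {
>                 outstandingRequirements.remove(entry.getKey());
>             } else {
>                 outstandingRequirements.put(entry.getKey(), required - assigned);
>             }
>             if (assigned == entry.getValue()) {
>                 it.remove(); // all excess of this profile was re-assigned
>             } else {
>                 entry.setValue(entry.getValue() - assigned);
>             }
>         }
>     }
> }
> {code}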



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19636) Add declarative SlotPool

2020-12-03 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-19636.

Fix Version/s: (was: 1.12.0)
   1.13.0
   Resolution: Fixed

master: c2db57efc571a55cffd14c4a698df98b304c74b3

> Add declarative SlotPool
> 
>
> Key: FLINK-19636
> URL: https://issues.apache.org/jira/browse/FLINK-19636
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19809) Add ServiceConnectionManager

2020-12-03 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-19809:
-
Issue Type: Task  (was: Improvement)

> Add ServiceConnectionManager
> 
>
> Key: FLINK-19809
> URL: https://issues.apache.org/jira/browse/FLINK-19809
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The SlotPool has to interact with the ResourceManager to declare its resource 
> requirements.
> We do not want to provide full access to the ResourceManagerGateway (and as 
> such should wrap it in some form), but we also have to handle the case where 
> no ResourceManager is connected.
> Introduce a component for handling this.
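> A minimal sketch of what such a wrapper could look like (the shape below is 
> hypothetical; the actual component may expose a different API):
> {code}
> import java.util.Optional;
> import java.util.function.Function;
> 
> // Hypothetical sketch: wraps an optional connection to a service so that
> // callers neither hold the full gateway nor handle the "not connected"
> // case themselves.
> interface ServiceConnectionManagerSketch<S> {
> 
>     void connect(S service);
> 
>     void disconnect();
> 
>     // Applies the action if a service is connected; otherwise returns
>     // Optional.empty() so callers can decide how to handle disconnection.
>     <R> Optional<R> applyIfConnected(Function<S, R> action);
> }
> {code}
> With such a wrapper, the SlotPool would declare its requirements through the 
> component and could simply skip or buffer the declaration while no 
> ResourceManager is connected.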



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] shuiqiangchen opened a new pull request #14297: [FLINK-20467][python][doc] Fix the Example in Python DataStream Doc.

2020-12-03 Thread GitBox


shuiqiangchen opened a new pull request #14297:
URL: https://github.com/apache/flink/pull/14297


   
   
   ## What is the purpose of the change
   
   *Currently the MapFunction example doesn't work. We need to fix it.*
   
   
   ## Brief change log
   
   - *Fix the wrong definition of MapFunction in the Python DataStream 
tutorial doc.*
   
   
   ## Verifying this change
   
   This is a doc update without any test case coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20467) Fix the Example in Python DataStream Doc

2020-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20467:
---
Labels: pull-request-available  (was: )

> Fix the Example in Python DataStream Doc
> 
>
> Key: FLINK-20467
> URL: https://issues.apache.org/jira/browse/FLINK-20467
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Documentation
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.13.0
>
>
> Currently the MapFunction example doesn't work. We need to fix it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol commented on a change in pull request #14296: [FLINK-20455] Add check for jar file contents

2020-12-03 Thread GitBox


zentol commented on a change in pull request #14296:
URL: https://github.com/apache/flink/pull/14296#discussion_r535024954



##
File path: 
tools/ci/java-ci-tools/src/main/java/org/apache/flink/tools/ci/licensecheck/JarFileChecker.java
##
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tools.ci.licensecheck;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipInputStream;
+
+/**
+ * Checks the Jar files created by the build process
+ */
+public class JarFileChecker {
+   private static final Logger LOG = 
LoggerFactory.getLogger(JarFileChecker.class);
+
+   public int run(Path path) throws IOException {
+   List<Path> files = getBuildJars(path);
+
+   LOG.info("considering jar files " + files);
+
+   int severeIssues = 0;
+   for (Path file: files) {
+   severeIssues += checkJar(file);
+   }
+
+   return severeIssues;
+   }
+
+   private int checkJar(Path file) throws IOException {
+   int severeIssues = 0;
+   boolean metaInfNoticeSeen = false;
+   boolean metaInfLicenseSeen = false;
+
+   try (ZipInputStream zis = new ZipInputStream(new 
FileInputStream(file.toFile()))) {
+   ZipEntry ze;
+   while ((ze = zis.getNextEntry()) != null) {
+   if (ze.getName().equals("LICENSE")) {
+   LOG.error("Jar file {} contains a 
LICENSE file in the root folder", file);
+   severeIssues++;
+   }
+   if (ze.getName().equals("META-INF/NOTICE")) {
+   metaInfNoticeSeen = true;
+   }
+   if (ze.getName().equals("META-INF/LICENSE")) {

Review comment:
   we should also reject META-INF/LICENSE.txt
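
   A minimal sketch of one way to also cover the `.txt` variant (hypothetical; 
depending on the intended policy the final check may instead flag it as a 
severe issue). It reuses `ze` and `metaInfLicenseSeen` from the `checkJar` 
method quoted above:

   ```java
   String name = ze.getName();
   // Treat both spellings of the bundled license file the same way.
   if (name.equals("META-INF/LICENSE") || name.equals("META-INF/LICENSE.txt")) {
       metaInfLicenseSeen = true;
   }
   ```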

##
File path: 
tools/ci/java-ci-tools/src/main/java/org/apache/flink/tools/ci/licensecheck/JarFileChecker.java
##
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tools.ci.licensecheck;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipInputStream;
+
+/**
+ * Checks the Jar files created by the build process
+ */
+public class JarFileChecker {
+   private static final Logger LOG = 
LoggerFactory.getLogger(JarFileChecker.class);
+
+   public int run(Path path) throws IOException {
+   List<Path> files = getBuildJars(path);
+
+   LOG.info("considering jar files " + files);
+
+   int severeIssues = 0;
+   for (Path file: files) {
+   severeIssues += checkJar(file);
+   }
+
+   return severeIssues;
+   

[GitHub] [flink] flinkbot commented on pull request #14297: [FLINK-20467][python][doc] Fix the Example in Python DataStream Doc.

2020-12-03 Thread GitBox


flinkbot commented on pull request #14297:
URL: https://github.com/apache/flink/pull/14297#issuecomment-737804369


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c12886112af47e5f7984f78f5108e1d1e088e045 (Thu Dec 03 
09:52:36 UTC 2020)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20467).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14296: [FLINK-20455] Add check for jar file contents

2020-12-03 Thread GitBox


flinkbot commented on pull request #14296:
URL: https://github.com/apache/flink/pull/14296#issuecomment-737805429


   
   ## CI report:
   
   * 1a102565515cc8116fa304d69dd1746739f8aa3e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys opened a new pull request #14298: [FLINK-20464] Some Table examples are not built correctly

2020-12-03 Thread GitBox


dawidwys opened a new pull request #14298:
URL: https://github.com/apache/flink/pull/14298


   ## What is the purpose of the change
   
   Fixes the Table examples that were built incorrectly.
   
   
   ## Verifying this change
   
   Run the examples.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20464) Some Table examples are not built correctly

2020-12-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20464:
---
Labels: pull-request-available  (was: )

> Some Table examples are not built correctly
> ---
>
> Key: FLINK-20464
> URL: https://issues.apache.org/jira/browse/FLINK-20464
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Some examples were moved to the 
> {{org.apache.flink.table.examples.scala.basics}} package but the pom.xml was 
> not updated. This means the example jars are not built correctly and do not 
> contain the classes.
> Examples that I noticed:
> * org.apache.flink.table.examples.scala.basics.StreamTableExample
> * org.apache.flink.table.examples.scala.basics.TPCHQuery3Table
> We should update the {{includes}} sections e.g.:
> {code}
> <execution>
>     <id>StreamTableExample</id>
>     <phase>package</phase>
>     <goals>
>         <goal>jar</goal>
>     </goals>
>     <configuration>
>         <classifier>StreamTableExample</classifier>
>         <archive>
>             <manifestEntries>
>                 <program-class>org.apache.flink.table.examples.scala.StreamTableExample</program-class>
>             </manifestEntries>
>         </archive>
>         <includes>
>             <include>org/apache/flink/table/examples/scala/StreamTableExample*</include>
>         </includes>
>     </configuration>
> </execution>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   4   >