[jira] [Created] (FLINK-20817) Kafka source with 200 fields causes class generation to fail

2020-12-30 Thread WeiNan Zhao (Jira)
WeiNan Zhao created FLINK-20817. Summary: Kafka source with 200 fields causes class generation to fail Key: FLINK-20817 URL: https://issues.apache.org/jira/browse/FLINK-20817 Project: Flink

Re: [ANNOUNCE] New formatting rules are now in effect

2020-12-30 Thread Jark Wu
Hi, I have played with the format plugin these days and found some problems. Maybe some of them are personal taste. 1) Is it possible to disable auto-format for some code blocks? For example, the format of code generation StructuredObjectConverter#generateCode is manually adjusted for

[jira] [Created] (FLINK-20819) flink : Connectors : JDBC test failing on Red Hat 8.x PowerPC Linux

2020-12-30 Thread Priya (Jira)
Priya created FLINK-20819. Summary: flink : Connectors : JDBC test failing on Red Hat 8.x PowerPC Linux Key: FLINK-20819 URL: https://issues.apache.org/jira/browse/FLINK-20819 Project: Flink Issue

Re: Did Flink 1.11 break backwards compatibility for the table environment?

2020-12-30 Thread Till Rohrmann
Are these limitations documented somewhere @Jark? I couldn't find it at a quick glance. If not, then we should update the documentation accordingly. In particular, the problem with using RowData as a key makes FLINK-16998 hard to work around. Cheers, Till On Wed, Dec 30, 2020 at 11:20 AM Jark

[jira] [Created] (FLINK-20820) Rename o.a.f.table.runtime.generated package in blink runtime

2020-12-30 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-20820. Summary: Rename o.a.f.table.runtime.generated package in blink runtime Key: FLINK-20820 URL: https://issues.apache.org/jira/browse/FLINK-20820 Project: Flink

Re: [ANNOUNCE] New formatting rules are now in effect

2020-12-30 Thread Chesnay Schepler
1) No, it is not possible to exclude certain code blocks from formatting. There is a workaround for this case, though: you can add an empty comment (//) to the end of a line to prevent subsequent lines from being joined to it. https://github.com/google/google-java-format/issues/137 Note that
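The trailing-comment workaround can be sketched as follows. This is a minimal illustration, not code from the Flink codebase; the string contents and class name are invented. The empty `//` comments are no-ops at runtime but stop google-java-format from rewrapping the concatenation, which keeps manually aligned code-generation templates (such as the one in StructuredObjectConverter#generateCode) readable:

```java
public class FormatWorkaroundDemo {
    public static void main(String[] args) {
        // Without the trailing "//" markers, google-java-format may join these
        // operands onto as few lines as fit the column limit. With them, each
        // operand stays on its own line after formatting.
        String generated =
                "int a = 1;\n" //
                        + "int b = 2;\n" //
                        + "return a + b;\n";
        System.out.print(generated);
    }
}
```

The resulting string is identical either way; only the source layout differs, so the workaround is safe to sprinkle through generated-code builders.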

[jira] [Created] (FLINK-20818) End-to-end tests produce an excessive amount of logs

2020-12-30 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-20818. Summary: End-to-end tests produce an excessive amount of logs Key: FLINK-20818 URL: https://issues.apache.org/jira/browse/FLINK-20818 Project: Flink Issue

Re: Did Flink 1.11 break backwards compatibility for the table environment?

2020-12-30 Thread Till Rohrmann
Hi Guenther, sorry for overlooking your colleague's email. I think the answer to your problem is twofold. The underlying problem is that your query seems to use `RowData` as a key for some keyed operation. Since changing the key format might entail that keys need to be differently partitioned,

Re: Did Flink 1.11 break backwards compatibility for the table environment?

2020-12-30 Thread Jark Wu
Hi Guenther, If you are using the old planner in 1.9 and the old planner in 1.11, then a state migration is needed because of the newly added RowKind field. This is documented in the 1.11 release notes [1]. If you are using the old planner in 1.9 and the blink planner in 1.11, the

Re: Support local aggregate push down for Blink batch planner

2020-12-30 Thread Sebastian Liu
Hi Jark, Thanks a lot for your quick reply and valuable suggestions. For (1): Agreed. Since we are in the period of upgrading to the new table source API, we really should consider the new interface for the new optimizer rule. If the new rule doesn't use the new API, we'll have to upgrade it sooner or

[jira] [Created] (FLINK-20821) `select row(map[1,2],'ab')` fails to parse

2020-12-30 Thread godfrey he (Jira)
godfrey he created FLINK-20821. Summary: `select row(map[1,2],'ab')` fails to parse Key: FLINK-20821 URL: https://issues.apache.org/jira/browse/FLINK-20821 Project: Flink Issue Type: Bug

Re: Did Flink 1.11 break backwards compatibility for the table environment?

2020-12-30 Thread Guenther Starnberger
On Wed, Dec 30, 2020 at 10:47 AM Till Rohrmann wrote: Till, > sorry for overlooking your colleague's email. Not a problem at all! Thanks for the response to my email. > What's a bit confusing is that FLINK-16998 explicitly > states that this change is not breaking backwards compatibility. The

[jira] [Created] (FLINK-20823) Update documentation to mention Table/SQL API doesn't provide cross-major-version state compatibility

2020-12-30 Thread Jark Wu (Jira)
Jark Wu created FLINK-20823. Summary: Update documentation to mention Table/SQL API doesn't provide cross-major-version state compatibility Key: FLINK-20823 URL: https://issues.apache.org/jira/browse/FLINK-20823

Re: Did Flink 1.11 break backwards compatibility for the table environment?

2020-12-30 Thread Jark Wu
Hi Guenther, I think it's safe to use legacy mode in your case, because the RowKind is never used in the old planner. Hi Till, It seems that the cross-major-version state incompatibility is not documented. I created FLINK-20823 to update the documentation. Best, Jark On Thu, 31 Dec 2020 at

[jira] [Created] (FLINK-20824) BlockingShuffleITCase.testSortMergeBlockingShuffle test failed with "Inconsistent availability: expected true"

2020-12-30 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-20824. Summary: BlockingShuffleITCase.testSortMergeBlockingShuffle test failed with "Inconsistent availability: expected true" Key: FLINK-20824 URL:

[jira] [Created] (FLINK-20822) Don't check whether a function is generic in hive catalog

2020-12-30 Thread Rui Li (Jira)
Rui Li created FLINK-20822. Summary: Don't check whether a function is generic in hive catalog Key: FLINK-20822 URL: https://issues.apache.org/jira/browse/FLINK-20822 Project: Flink Issue Type:

Re: Did Flink 1.11 break backwards compatibility for the table environment?

2020-12-30 Thread Guenther Starnberger
On Wed, Dec 30, 2020 at 11:21 AM Jark Wu wrote: Jark, > If you are using the old planner in 1.9, and using the old planner in 1.11, > then a state migration is > needed because of the new added RowKind field. This is documented in the > 1.11 release note [1]. Yes - that's exactly the setup