WeiNan Zhao created FLINK-20817:
---
Summary: Kafka source with 200 fields causes class generation to
fail
Key: FLINK-20817
URL: https://issues.apache.org/jira/browse/FLINK-20817
Project: Flink
Hi,
I have been playing with the format plugin these days and found some problems.
Maybe some of them are personal taste.
1) Is it possible to disable auto-format for some code blocks?
For example, the formatting of the generated code in
StructuredObjectConverter#generateCode is manually
adjusted for
Priya created FLINK-20819:
---
Summary: flink : Connectors : JDBC test failing on Red Hat 8.x
PowerPC Linux
Key: FLINK-20819
URL: https://issues.apache.org/jira/browse/FLINK-20819
Project: Flink
Issue
Are these limitations documented somewhere @Jark? I couldn't find them
quickly. If not, then we should update the documentation accordingly. In
particular the problem with using the RowData as a key makes FLINK-16998
not easy to work around.
Cheers,
Till
On Wed, Dec 30, 2020 at 11:20 AM Jark
Chesnay Schepler created FLINK-20820:
Summary: Rename o.a.f.table.runtime.generated package in blink
runtime
Key: FLINK-20820
URL: https://issues.apache.org/jira/browse/FLINK-20820
Project: Flink
1) No, it is not possible to exclude certain code blocks from formatting.
There is a workaround though for this case; you can add an empty comment
(//) to the end of a line to prevent subsequent lines from being added
to it.
https://github.com/google/google-java-format/issues/137
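A small illustration of the workaround (a hypothetical snippet, not from the Flink code base): the empty trailing `//` comment keeps hand-formatted continuation lines from being re-joined by google-java-format.

```java
public class FormatWorkaround {
    public static void main(String[] args) {
        // Without the trailing "//" comments, google-java-format would be
        // free to join these continuation lines into one long line; the
        // empty end-of-line comment pins each fragment to its own line.
        String sql = "SELECT id, " //
                + "name " //
                + "FROM users";
        System.out.println(sql); // prints: SELECT id, name FROM users
    }
}
```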
Note that
Till Rohrmann created FLINK-20818:
---
Summary: End-to-end tests produce an excessive amount of logs
Key: FLINK-20818
URL: https://issues.apache.org/jira/browse/FLINK-20818
Project: Flink
Issue
Hi Guenther,
sorry for overlooking your colleague's email.
I think the answer to your problem is twofold. The underlying problem is
that your query seems to use `RowData` as a key for some keyed operation.
Since changing the key format might entail that keys need to be differently
partitioned,
Hi Guenther,
If you are using the old planner in 1.9, and using the old planner in 1.11,
then a state migration is
needed because of the new added RowKind field. This is documented in the
1.11 release note [1].
If you are using the old planner in 1.9, and using the blink planner in
1.11, the
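To illustrate why the newly added RowKind field forces a state migration, here is a minimal sketch (a hypothetical serializer, not Flink's actual RowData serializer): the new format prepends a header byte that the old format never wrote, so state bytes written in the old format are misread by the new code.

```java
import java.io.*;

public class RowKindCompatDemo {
    // Old format: just the payload field.
    static byte[] writeV1(int field) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(field);
        return bos.toByteArray();
    }

    // New format: a leading "row kind" byte, then the payload.
    static byte[] writeV2(byte rowKind, int field) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(rowKind); // extra header byte added in the new format
        out.writeInt(field);
        return bos.toByteArray();
    }

    // A reader expecting the new format misreads old-format bytes:
    // it consumes the first payload byte as the row kind.
    static int readV2Field(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        in.readByte(); // assumed row-kind header
        return in.readInt();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readV2Field(writeV2((byte) 0, 42))); // prints: 42
        // Reading old-format state with the new reader runs out of bytes,
        // which is why restoring old state needs a migration.
        try {
            readV2Field(writeV1(42));
        } catch (EOFException e) {
            System.out.println("old state incompatible");
        }
    }
}
```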
Hi Jark, thanks a lot for your quick reply and valuable suggestions.
For (1): Agreed: since we are in the middle of migrating to the new table
source API,
we really should consider the new interface for the new optimizer rule. If
the new rule
doesn't use the new API, we'll have to upgrade it sooner or
godfrey he created FLINK-20821:
---
Summary: `select row(map[1,2],'ab')` fails to parse
Key: FLINK-20821
URL: https://issues.apache.org/jira/browse/FLINK-20821
Project: Flink
Issue Type: Bug
On Wed, Dec 30, 2020 at 10:47 AM Till Rohrmann wrote:
Till,
> sorry for overlooking your colleague's email.
Not a problem at all! Thanks for the response to my email.
> A bit confusing is that FLINK-16998 explicitly
> states that this change is not breaking backwards compatibility.
The
Jark Wu created FLINK-20823:
---
Summary: Update documentation to mention Table/SQL API doesn't
provide cross-major-version state compatibility
Key: FLINK-20823
URL: https://issues.apache.org/jira/browse/FLINK-20823
Hi Guenther,
I think it's safe to use legacy mode in your case,
because the RowKind is never used in the old planner.
Hi Till,
It seems that the cross-major-version state incompatibility is
not documented.
I created FLINK-20823 to update the documentation.
Best,
Jark
On Thu, 31 Dec 2020 at
Huang Xingbo created FLINK-20824:
Summary: BlockingShuffleITCase.testSortMergeBlockingShuffle test
failed with "Inconsistent availability: expected true"
Key: FLINK-20824
URL:
Rui Li created FLINK-20822:
---
Summary: Don't check whether a function is generic in hive catalog
Key: FLINK-20822
URL: https://issues.apache.org/jira/browse/FLINK-20822
Project: Flink
Issue Type:
On Wed, Dec 30, 2020 at 11:21 AM Jark Wu wrote:
Jark,
> If you are using the old planner in 1.9, and using the old planner in 1.11,
> then a state migration is
> needed because of the new added RowKind field. This is documented in the
> 1.11 release note [1].
Yes - that's exactly the setup