@Dawid Yes, we will include it in RC3. Thanks for the note.
@Till Thanks for the quick fix and the note.
I've checked and confirmed there are no open issues under 1.10.1, nor
any resolved/closed ones under 1.10.2, so I will start to prepare RC3.
Best Regards,
Yu
On Wed, 6 May 2020 at
Hi Lec,
You can use `StreamTableEnvironment#toRetractStream(table, Row.class)` to
get a `DataStream<Tuple2<Boolean, Row>>`.
A true Boolean flag indicates an add message; a false flag indicates a
retract (delete) message. So you can simply apply
a flatmap function after this to ignore the false messages. Then
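The filtering step described above can be sketched in plain Python (a semantic sketch of retract streams, not the actual Flink API; here a retract stream is modeled as a list of `(flag, row)` tuples):

```python
def keep_adds(retract_stream):
    """Keep only add messages (flag == True); drop retract messages."""
    return [row for flag, row in retract_stream if flag]

# An add for "a", then its retraction, then an add for "b":
stream = [(True, "a"), (False, "a"), (True, "b")]
print(keep_adds(stream))  # -> ['a', 'b']
```

Note that this simply drops the false-flagged messages, as suggested in the reply; it does not apply the deletions to any downstream state.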
Thanks all for the discussion, I have updated FLIP-105 and FLIP-122 to use
the new format option keys.
FLIP-105:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=147427289
FLIP-122:
Xintong Song created FLINK-17551:
Summary: Documentation of stable releases are actually built on
top of snapshot code bases.
Key: FLINK-17551
URL: https://issues.apache.org/jira/browse/FLINK-17551
Cheng Shi created FLINK-17550:
Summary: set a retract switch
Key: FLINK-17550
URL: https://issues.apache.org/jira/browse/FLINK-17550
Project: Flink
Issue Type: New Feature
Reporter:
Hi:
During the execution of Flink, especially with the SQL API, many DataStream
operations are not available. In many cases, we don't care about the
DELETE records produced by retraction. Is it possible to set a switch so that
these DELETE records are not processed? In other words, the
Canbin Zheng created FLINK-17549:
Summary: Support running Stateful Functions on native Kubernetes
setup
Key: FLINK-17549
URL: https://issues.apache.org/jira/browse/FLINK-17549
Project: Flink
Rafael Wicht created FLINK-17548:
Summary: Table API Schema from not working
Key: FLINK-17548
URL: https://issues.apache.org/jira/browse/FLINK-17548
Project: Flink
Issue Type: Bug
Roman Khachatryan created FLINK-17547:
Summary: Support unaligned checkpoints for records spilled to files
Key: FLINK-17547
URL: https://issues.apache.org/jira/browse/FLINK-17547
Project: Flink
Andrey Zagrebin created FLINK-17546:
Summary: Consider setting the number of TM CPU cores to the actual
number of cores
Key: FLINK-17546
URL: https://issues.apache.org/jira/browse/FLINK-17546
Hi Godfrey,
This looks good to me.
One side note, indicating unique constraints with "UNQ" is probably not
enough.
There might be multiple unique constraints and users would like to know
which field combinations are unique.
So in your example above, "UNQ(f2, f3)" might be a better marker.
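For illustration, the per-constraint markers suggested above could be rendered like this (a hypothetical sketch; `unique_constraints` is an assumed list of column tuples, one tuple per constraint):

```python
def unq_markers(unique_constraints):
    """Render one 'UNQ(...)' marker per unique constraint,
    listing the field combination of each one."""
    return ["UNQ(%s)" % ", ".join(cols) for cols in unique_constraints]

print(unq_markers([("f2", "f3"), ("f5",)]))  # -> ['UNQ(f2, f3)', 'UNQ(f5)']
```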
Just
Hi @fhue...@gmail.com @Timo Walther @Dawid Wysakowicz
What do you think about requiring that a watermark must be defined on a
top-level column? If we do that, we can add an expression column to represent
the watermark, like a computed column.
An example of all cases:
create table MyTable (
f0 BIGINT NOT NULL,
f1
John Lonergan created FLINK-17545:
Summary: CLONE - NPE JDBCUpsertOutputFormat
Key: FLINK-17545
URL: https://issues.apache.org/jira/browse/FLINK-17545
Project: Flink
Issue Type: Improvement
John Lonergan created FLINK-17544:
Summary: NPE JDBCUpsertOutputFormat
Key: FLINK-17544
URL: https://issues.apache.org/jira/browse/FLINK-17544
Project: Flink
Issue Type: Improvement
Thanks a lot for the detailed explanation, makes complete sense.
Gyula
On Wed, May 6, 2020 at 3:53 PM Jingsong Li wrote:
> Hi,
>
> Insert overwrite comes from Batch SQL in Hive.
> It means overwriting whole table/partition instead of overwriting per key.
> So if "insert overwrite kudu_table",
Hi,
Insert overwrite comes from Batch SQL in Hive.
It means overwriting whole table/partition instead of overwriting per key.
So for "insert overwrite kudu_table", it should delete the whole table in Kudu
first, and then insert the new data into the table in Kudu.
The same semantics should be used in
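The difference between the two semantics can be sketched in plain Python, modeling a key-value table as a dict (a sketch of the semantics described above, not actual Kudu or Flink code):

```python
def insert_overwrite(table, new_rows):
    """INSERT OVERWRITE semantics: delete the whole table first,
    then insert the new data."""
    table.clear()
    table.update(new_rows)

def upsert(table, new_rows):
    """Upsert semantics: overwrite per key; other keys are untouched."""
    table.update(new_rows)

t1 = {"k1": 1, "k2": 2}
insert_overwrite(t1, {"k1": 10})
print(t1)  # -> {'k1': 10}

t2 = {"k1": 1, "k2": 2}
upsert(t2, {"k1": 10})
print(t2)  # -> {'k1': 10, 'k2': 2}
```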
Chesnay Schepler created FLINK-17543:
Summary: Rerunning failed azure jobs fails when uploading logs
Key: FLINK-17543
URL: https://issues.apache.org/jira/browse/FLINK-17543
Project: Flink
Zhu Zhu created FLINK-17542:
Summary: Unify slot request timeout handling for streaming and
batch tasks
Key: FLINK-17542
URL: https://issues.apache.org/jira/browse/FLINK-17542
Project: Flink
Hi all!
While working on a Table Sink implementation for Kudu (a key-value store),
we got a bit confused about the expected semantics of UpsertStreamTableSink
vs OverwritableTableSink and statements like INSERT vs INSERT OVERWRITE
I am wondering what external operation should each incoming record
Cool, so let's go with:
format = json
json.fail-on-missing-field = true
json.ignore-parse-error = true
value.format = json
value.json.fail-on-missing-field = true
value.json.ignore-parse-error = true
Regards,
Timo
On 06.05.20 12:39, Jingsong Li wrote:
Hi,
+1 to:
format = parquet
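One way such prefixed option keys could be resolved is by stripping the format prefix before handing the remaining options to the format (a sketch under assumed semantics, not Flink's actual implementation; the key names are taken from the thread):

```python
def extract_format_options(options, prefix):
    """Collect all options under '<prefix>.' (e.g. 'json' or 'value.json')
    with the prefix stripped off."""
    head = prefix + "."
    return {k[len(head):]: v for k, v in options.items() if k.startswith(head)}

opts = {
    "format": "json",
    "json.fail-on-missing-field": "true",
    "json.ignore-parse-error": "true",
}
print(extract_format_options(opts, "json"))
# -> {'fail-on-missing-field': 'true', 'ignore-parse-error': 'true'}
```

With a `value.` prefix the same function would pick up only the `value.json.*` keys, which is what makes the hierarchy unambiguous.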
Hi,
+1 to:
format = parquet
parquet.compression = ...
parquet.block.size = ...
parquet.page.size = ...
For formats like Parquet and ORC, this way
not only benefits Flink itself, but also keeps Flink's keys compatible with
the property keys of Hadoop / Hive / Spark.
And like Jark said, this way works for
Hi,
I think Timo proposed a good idea to make both side happy. That is:
format = json
json.fail-on-missing-field = true
json.ignore-parse-error = true
value.format = json
value.json.fail-on-missing-field = true
value.json.ignore-parse-error = true
This is a valid hierarchy. Besides, it's
-Original Message-
From: "刘大龙"
Sent: 2020-05-06 17:55:25 (Wednesday)
To: "Jark Wu"
Cc:
Subject: Re: Re: Re: The use of state ttl incremental cleanup strategy in sql
deduplication resulting in significant performance degradation
Thanks for your tuning ideas, I will test it later. Just to emphasize, I use
I've merged the fix for FLINK-17514.
Cheers,
Till
On Wed, May 6, 2020 at 10:53 AM Dawid Wysakowicz
wrote:
> Hi all,
>
> I wonder if we could also include FLINK-17313 which I backported into
> 1.10 branch yesterday.
>
> Best,
>
> Dawid
>
> On 06/05/2020 07:26, Yu Li wrote:
> > Thanks Till and
Hi LakeShen,
`state.backend.rocksdb.localdir` defines the directory in which RocksDB
will store its local files, for example RocksDB's SST and metadata
files. This directory does not need to be persisted. If the
config option is not configured, then it will use the node's temporary
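For example, the option could be set in `flink-conf.yaml` like this (the path below is just a hypothetical example pointing at a fast local disk):

```yaml
# flink-conf.yaml: where RocksDB keeps its local SST and metadata files;
# this directory does not need to be on persistent storage
state.backend.rocksdb.localdir: /mnt/local-ssd/rocksdb
```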
Maybe we could say that the new release manager will do this (or at least
make sure that updating the roadmap will happen). The community usually
tries to find a release manager at the beginning of the release cycle.
Cheers,
Till
On Tue, May 5, 2020 at 5:42 PM Marta Paes Moreira
wrote:
> I
Hi, everyone ~
Allow me to share some thoughts here.
Personally, I think for SQL, "format" is obviously better than "format.name"; it
is more concise and straightforward, similar to Presto's FORMAT [2] and KSQL's
VALUE_FORMAT [1]. I think we moved from "connector.type" to "connector" for the
same
Timo Walther created FLINK-17541:
Summary: Support inline structured types
Key: FLINK-17541
URL: https://issues.apache.org/jira/browse/FLINK-17541
Project: Flink
Issue Type: Sub-task
Hi all:
Why does StreamTableEnvironmentImpl#isEagerOperationTranslation() always return
true?
Hi all,
I wonder if we could also include FLINK-17313 which I backported into
1.10 branch yesterday.
Best,
Dawid
On 06/05/2020 07:26, Yu Li wrote:
> Thanks Till and Thomas, will include fix for both FLINK-17496 and
> FLINK-17514 in the next RC.
>
> Best Regards,
> Yu
>
>
> On Tue, 5 May 2020
Thanks for clarifying, that was not clear to me.
That sounds fine to me, given that it just adds extra information and does not
change existing information.
On Wed, May 6, 2020 at 9:06 AM Till Rohrmann wrote:
> Just for clarifications and as Yang already pointed out: The discussion
> here is about also
Just for clarification, and as Yang already pointed out: the discussion
here is about also creating the log, out, and err files as well as continuing
to write to STDOUT and STDERR.
Hence, there should be no regression for K8s users. The main problem, as
Chesnay pointed out, could be the increased disk