Arpith Prakash created FLINK-19359:
--
Summary: Restore from Checkpoint fails if checkpoint folder is
corrupt/partial
Key: FLINK-19359
URL: https://issues.apache.org/jira/browse/FLINK-19359
Project:
Previous APIs discussed have been trying to do more in the framework. If we
take a different approach to a lighter framework, these sets of
minimal APIs are probably good enough. Sink can handle the bookkeeping,
merge, retry logics.
/**
* CommT is the DataFile in Iceberg
* GlobalCommT is the
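A minimal sketch of what such a lighter two-level API could look like. The names `CommT`/`GlobalCommT` follow the thread (DataFile and aggregated manifest in Iceberg), but the interfaces and the `endOfInput()` hook are illustrative assumptions, not the actual Flink API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: not the real Flink sink API.
public class SinkApiSketch {

    // Per-checkpoint committable, e.g. a DataFile in Iceberg.
    interface Committer<CommT> {
        void commit(List<CommT> committables) throws Exception;
    }

    // Aggregated committable, e.g. an Iceberg manifest covering many
    // DataFiles. The sink, not the framework, does the merge/bookkeeping.
    interface GlobalCommitter<CommT, GlobalCommT> {
        GlobalCommT combine(List<CommT> committables);
        void commit(GlobalCommT globalCommittable) throws Exception;
        // End-of-input hook discussed in the thread: lets a sink write a
        // "success file" in both batch and stream execution mode.
        default void endOfInput() {}
    }

    // Tiny in-memory example: the global committable is the merged list.
    static class ListCommitter implements GlobalCommitter<String, List<String>> {
        final List<List<String>> committed = new ArrayList<>();
        public List<String> combine(List<String> committables) {
            return new ArrayList<>(committables);
        }
        public void commit(List<String> global) { committed.add(global); }
    }

    public static void main(String[] args) throws Exception {
        ListCommitter c = new ListCommitter();
        c.commit(c.combine(List.of("file-1", "file-2")));
        System.out.println(c.committed); // one merged global commit
    }
}
```

Retry logic would then live inside the sink's `commit` implementations rather than in the framework.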
Is there a better way?
I'm an idealist with regard to streaming SQL semantics, and I'm going
to make the 'slippery slope' argument that if we add a TIMEOUT
parameter to MATCH_RECOGNIZE, won't we also need to add it to GROUP BY
and JOIN? (Because those are also "blocking" operators.)
Maybe JOIN
On 22.09.20 11:10, Guowei Ma wrote:
1. I think maybe we could add an EOI interface to the `GlobalCommit`, so
that we could make the `write success file` feature available in both batch
and stream execution mode.
We could, yes. I'm now hesitant because we're adding more things, but I
think it should be
Xuannan Su created FLINK-19343:
--
Summary: FLIP-36: Support Interactive Programming in Flink
Key: FLINK-19343
URL: https://issues.apache.org/jira/browse/FLINK-19343
Project: Flink
Issue Type:
No concerns from my side.
On Fri, Sep 18, 2020 at 8:25 AM Chesnay Schepler wrote:
> Hello,
>
> I'd like to kick off the next release of flink-shaded, which will contain
> a bump to netty (4.1.49) and snakeyaml (1.27).
>
> Any concerns? Any other dependencies people want upgraded for 1.12?
>
>
Xuannan Su created FLINK-19348:
--
Summary: Introduce CacheSource and CacheSink
Key: FLINK-19348
URL: https://issues.apache.org/jira/browse/FLINK-19348
Project: Flink
Issue Type: Sub-task
Xuannan Su created FLINK-19349:
--
Summary: StreamGraph handle CacheSource and CacheSink
Key: FLINK-19349
URL: https://issues.apache.org/jira/browse/FLINK-19349
Project: Flink
Issue Type:
Xuannan Su created FLINK-19350:
--
Summary: StreamingJobGraphGenerator generate job graph with cached
node
Key: FLINK-19350
URL: https://issues.apache.org/jira/browse/FLINK-19350
Project: Flink
Xuannan Su created FLINK-19351:
--
Summary: StreamingJobGraphGenerator set the caching node to
BoundedBlockingType
Key: FLINK-19351
URL: https://issues.apache.org/jira/browse/FLINK-19351
Project: Flink
Hi,
We are seeing an issue with Flink in our production environment. We are
using version 1.7.
We started seeing sudden lag on Kafka, and the consumers were no longer
working/accepting messages. After enabling debug mode, the errors below
were seen:
[image: image.jpeg]
I am not sure why this
Timo Walther created FLINK-19341:
Summary: Update API module for FLIP-107
Key: FLINK-19341
URL: https://issues.apache.org/jira/browse/FLINK-19341
Project: Flink
Issue Type: Sub-task
Ah sorry, I think I now see what you mean. I think it's ok to add a
`List recoverCommittables(List)`
method.
On 22.09.20 09:42, Aljoscha Krettek wrote:
On 22.09.20 06:06, Steven Wu wrote:
In addition, it is undesirable to do the committed-or-not check in the
commit method, which happens for
Jingsong Lee created FLINK-19356:
Summary: Introduce FileLifeCycleListener to Buckets
Key: FLINK-19356
URL: https://issues.apache.org/jira/browse/FLINK-19356
Project: Flink
Issue Type: New
On 22.09.20 06:06, Steven Wu wrote:
In addition, it is undesirable to do the committed-or-not check in the
commit method, which happens for each checkpoint cycle. CommitResult
already indicates SUCCESS or not. When the framework calls commit with a list
of GlobalCommittableT, it should be certain
Hi, everyone!
When using Flink SQL DDL to create a MySQL mapping table, does Flink
support automatically deriving the target table schema if we specify no
column names in `create table table_name_mapping2mysql () with (...)`? If this
feature is not supported, is it necessary
Dian Fu created FLINK-19340:
---
Summary: AggregateITCase.testListAggWithDistinct failed with
"expected: but was:"
Key: FLINK-19340
URL: https://issues.apache.org/jira/browse/FLINK-19340
Project: Flink
Hi, Roc.
*Note:* in the future, please send this type of question to the user
mailing list instead (u...@flink.apache.org)!
If I understand your question correctly, this is possible using the LIKE
clause and a registered catalog. There is currently no implementation for
the MySQL JDBC catalog,
Yingjie Cao created FLINK-19344:
---
Summary:
DispatcherResourceCleanupTest#testJobSubmissionUnderSameJobId is unstable on
Azure Pipeline
Key: FLINK-19344
URL: https://issues.apache.org/jira/browse/FLINK-19344
Dawid Wysakowicz created FLINK-19339:
Summary: Support Avro's unions with logical types
Key: FLINK-19339
URL: https://issues.apache.org/jira/browse/FLINK-19339
Project: Flink
Issue Type:
Dear community,
happy to share a brief community update for the past week. A lot of FLIP
votes are currently ongoing on the dev@ mailing list. I've covered this
FLIP previously, so skipping those this time. Besides that, a couple of
release related updates and again multiple new Committers.
Leonard Xu created FLINK-19342:
--
Summary: stop overriding convertFrom() in FlinkPlannerImpl after
upgrade calcite to 1.23
Key: FLINK-19342
URL: https://issues.apache.org/jira/browse/FLINK-19342
Project:
Thanks @aljoscha for the summary. I agree we should postpone the discussion
of the sink topology and first focus on the normal file sink and IcebergSink
for Flink 1.12.
I have three little questions:
1. I think maybe we could add an EOI interface to the `GlobalCommit`. So
that we could make `write
Xuannan Su created FLINK-19346:
--
Summary: Generate and put ClusterPartitionDescriptor of
ClusterPartition in JobResult when job finishes
Key: FLINK-19346
URL: https://issues.apache.org/jira/browse/FLINK-19346
Xuannan Su created FLINK-19347:
--
Summary: Generate InputGateDeploymentDescriptor from a JobVertex
with ClusterPartitionDescriptor
Key: FLINK-19347
URL: https://issues.apache.org/jira/browse/FLINK-19347
+1 from my side
On Fri, Sep 18, 2020 at 4:54 PM Stephan Ewen wrote:
> Having the watermark lag metric was the important part from my side.
>
> So +1 to go ahead.
>
> On Fri, Sep 18, 2020 at 4:11 PM Becket Qin wrote:
>
> > Hey Stephan,
> >
> > Thanks for the quick reply. I actually forgot to
Jingsong Lee created FLINK-19345:
Summary: Introduce File streaming sink compaction
Key: FLINK-19345
URL: https://issues.apache.org/jira/browse/FLINK-19345
Project: Flink
Issue Type: New
Xuannan Su created FLINK-19354:
--
Summary: Add invalidateCache() method in CachedTable
Key: FLINK-19354
URL: https://issues.apache.org/jira/browse/FLINK-19354
Project: Flink
Issue Type:
Xuannan Su created FLINK-19352:
--
Summary: Add cache() method to Table
Key: FLINK-19352
URL: https://issues.apache.org/jira/browse/FLINK-19352
Project: Flink
Issue Type: Sub-task
Xuannan Su created FLINK-19353:
--
Summary: BlinkPlanner translate and optimize CacheOperation
Key: FLINK-19353
URL: https://issues.apache.org/jira/browse/FLINK-19353
Project: Flink
Issue Type:
Xuannan Su created FLINK-19355:
--
Summary: Add close() method to TableEnvironment
Key: FLINK-19355
URL: https://issues.apache.org/jira/browse/FLINK-19355
Project: Flink
Issue Type: Sub-task
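The FLIP-36 sub-tasks above (`cache()`, `invalidateCache()`, `close()`) describe the proposed interactive-programming lifecycle: materialize a table's result once, reuse it across jobs, and explicitly drop it. A hypothetical, self-contained sketch of that interaction — these class and method names mirror the JIRA summaries but are not the real Table API:

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of the FLIP-36 interaction, not the real Table API:
// cache() pins a result so later accesses reuse it; invalidateCache()
// forces the next access to recompute.
public class CachedTableSketch {
    private final Supplier<List<String>> computation;
    private List<String> cached;   // null = not cached
    int computeCount = 0;          // exposed for the demo

    CachedTableSketch(Supplier<List<String>> computation) {
        this.computation = computation;
    }

    // Corresponds to Table#cache(): materialize once, reuse afterwards.
    List<String> cache() {
        if (cached == null) {
            computeCount++;
            cached = computation.get();
        }
        return cached;
    }

    // Corresponds to CachedTable#invalidateCache().
    void invalidateCache() { cached = null; }

    public static void main(String[] args) {
        CachedTableSketch t = new CachedTableSketch(() -> List.of("row1", "row2"));
        t.cache();
        t.cache();            // served from cache, no recompute
        t.invalidateCache();
        t.cache();            // recomputed
        System.out.println(t.computeCount); // prints 2
    }
}
```

The `close()` method on TableEnvironment would presumably release all such cached results at once, which is why it appears as its own sub-task.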
On 22.09.20 13:26, Guowei Ma wrote:
Actually I am not sure adding `isAvailable` is enough. Maybe it is not.
But for the initial version I hope we could make the sink API synchronous,
because there is already a lot of stuff that has to be finished. :-)
I agree, for the first version we should stick to a
Jingsong Lee created FLINK-19357:
Summary: Introduce createBucketWriter to BucketsBuilder
Key: FLINK-19357
URL: https://issues.apache.org/jira/browse/FLINK-19357
Project: Flink
Issue Type:
Jun Zhang created FLINK-19358:
-
Summary: when submitting a job in application mode with HA, the jobid
will be 00
Key: FLINK-19358
URL: https://issues.apache.org/jira/browse/FLINK-19358
Project: Flink
>> I believe that we could support such an async sink writer
>> very easily in the future. What do you think?
>> How would you see the expansion in the future? Do you mean just adding an
`isAvailable()` method with a default implementation later on?
Hi @piotr
Actually I am not sure adding
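The expansion path Piotr hints at — start synchronous, add `isAvailable()` later via a default method — could look like the sketch below. The interface and names are illustrative assumptions, not the actual Flink API:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the discussed expansion path: ship a synchronous writer first,
// then later add isAvailable() with a default implementation so existing
// sinks stay source-compatible. Names are illustrative, not Flink's API.
public class WriterAvailabilitySketch {

    interface SinkWriter<T> {
        void write(T element) throws Exception;

        // Later addition: the default reports "always available", which is
        // exactly the behavior of the initial, synchronous writers.
        default CompletableFuture<Void> isAvailable() {
            return CompletableFuture.completedFuture(null);
        }
    }

    static class PrintWriter implements SinkWriter<String> {
        public void write(String element) { System.out.println(element); }
    }

    public static void main(String[] args) throws Exception {
        SinkWriter<String> w = new PrintWriter();
        if (w.isAvailable().isDone()) { // sync writer: always ready
            w.write("hello");
        }
    }
}
```

An async writer would instead return a future that completes when, e.g., an internal buffer drains, and the runtime would await it before pushing more records.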
It is fine to leave the CommitResult/RETRY outside the scope of the framework.
Then the framework might need to provide some hooks in the
checkpoint/restore logic, because the commit happens in the
post-checkpoint-completion step and the sink needs to update its internal
state when the commit is successful.
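One way the framework-side hook could drive this, if `commit` reported a result: retry until success, with the sink updating its own committed-state only on the successful attempt. This is an illustrative sketch under those assumptions, not the actual FLIP design:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a framework retry loop around a CommitResult-returning
// commit, with the sink tracking committed state itself.
public class CommitRetrySketch {

    enum CommitResult { SUCCESS, RETRY }

    interface Committer {
        CommitResult commit(String committable);
    }

    // Flaky committer: fails the first attempt, succeeds on the second,
    // recording what was committed (the sink-side "internal state").
    static class FlakyCommitter implements Committer {
        final List<String> committed = new ArrayList<>();
        private int attempts = 0;
        public CommitResult commit(String committable) {
            if (++attempts == 1) return CommitResult.RETRY;
            committed.add(committable); // update state only on success
            return CommitResult.SUCCESS;
        }
    }

    // The post-checkpoint-completion hook: retry until SUCCESS.
    static int commitWithRetry(Committer committer, String committable) {
        int tries = 0;
        while (committer.commit(committable) != CommitResult.SUCCESS) {
            tries++;
        }
        return tries + 1;
    }

    public static void main(String[] args) {
        FlakyCommitter c = new FlakyCommitter();
        System.out.println(commitWithRetry(c, "manifest-1")); // prints 2
    }
}
```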
Rui Li created FLINK-19361:
--
Summary: Create a synchronized metastore client to talk to a
remote HMS
Key: FLINK-19361
URL: https://issues.apache.org/jira/browse/FLINK-19361
Project: Flink
Issue
hailong wang created FLINK-19362:
Summary: Remove confusing comment for `DOT` operator codegen
Key: FLINK-19362
URL: https://issues.apache.org/jira/browse/FLINK-19362
Project: Flink
Issue
Chesnay Schepler created FLINK-19360:
Summary: Flink fails if JAVA_HOME contains spaces
Key: FLINK-19360
URL: https://issues.apache.org/jira/browse/FLINK-19360
Project: Flink
Issue Type:
Hi Ramya,
Unfortunately your images are blocked. Could you upload them somewhere and
post the links here?
Also I think that the TaskManager logs may be able to help a bit more.
Could you please provide them here?
Cheers,
Kostas
On Tue, Sep 22, 2020 at 8:58 AM Ramya Ramamurthy wrote:
> Hi,
>
>
Hi Yu,
To restate the motivation of this FLIP, the issue we are trying to solve is
that users do not understand how to choose between the different state
backends today. The goal is to decouple checkpoint storage from local state
storage so users can reason about these configurations separately.
hailong wang created FLINK-19363:
Summary: Code of split method grows beyond 64 KB
Key: FLINK-19363
URL: https://issues.apache.org/jira/browse/FLINK-19363
Project: Flink
Issue Type:
I think we should go with something like
List filterRecoveredCommittables(List<>)
to keep things simple. This should also be easy to do from the framework
side and then the sink doesn't need to do any custom state handling.
Best,
Aljoscha
On 22.09.20 16:03, Steven Wu wrote:
Previous APIs
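The `filterRecoveredCommittables` idea could be sketched as below. The method name follows the thread, but the signature and surrounding logic are hypothetical: on restore, the framework hands the sink the recovered committables, the sink returns only those not yet committed externally, and the framework re-commits just that remainder.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the proposed recovery hook, not the real API.
public class RecoverySketch {

    // Sink-side check, e.g. Iceberg asking the table which data files
    // already made it into a committed snapshot.
    static List<String> filterRecoveredCommittables(
            List<String> recovered, Set<String> alreadyCommitted) {
        return recovered.stream()
                .filter(c -> !alreadyCommitted.contains(c))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> recovered = List.of("file-1", "file-2", "file-3");
        Set<String> alreadyCommitted = Set.of("file-1");
        // The framework would re-commit only what the sink says is pending,
        // so the sink needs no custom state handling of its own.
        System.out.println(filterRecoveredCommittables(recovered, alreadyCommitted));
        // prints [file-2, file-3]
    }
}
```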
Jingsong Lee created FLINK-19366:
Summary: [umbrella] Migrate Filesystem and Hive to new Table
connector interface
Key: FLINK-19366
URL: https://issues.apache.org/jira/browse/FLINK-19366
Project:
Rui Li created FLINK-19368:
--
Summary: TableEnvHiveConnectorITCase fails with Hive-3.x
Key: FLINK-19368
URL: https://issues.apache.org/jira/browse/FLINK-19368
Project: Flink
Issue Type: Test
Huang Xingbo created FLINK-19364:
Summary: Add Batch Physical Pandas Group Window Aggregate Rule and
RelNode
Key: FLINK-19364
URL: https://issues.apache.org/jira/browse/FLINK-19364
Project: Flink
Congrats Godfrey! Well deserved!
Best,
Danny Chan
On Sep 21, 2020, 10:16 AM +0800, dev@flink.apache.org wrote:
>
> Congrats Godfrey! Well deserved!
Jingsong Lee created FLINK-19367:
Summary: Migrate Filesystem source to FLIP-27 source interface
Key: FLINK-19367
URL: https://issues.apache.org/jira/browse/FLINK-19367
Project: Flink
Issue
>> I think we should go with something like
>> List filterRecoveredCommittables(List<>)
>> to keep things simple. This should also be easy to do from the framework
>> side and then the sink doesn't need to do any custom state handling.
I second Aljoscha's proposal. For the first version there
Hi, all
Thanks everyone very much for your ideas and suggestions. I will try to
summarize the consensus again :). Correct me if I am wrong or have
misunderstood you.
## Consensus-1
1. The motivation of the unified sink API is to decouple the sink
implementation from the different runtime execution
> To restate the motivation of this FLIP, the issue we are trying to
> solve is that users do not understand how to choose between the different
> state backends today.
Yes, but my point is that now users may not understand how to choose
between different checkpoint storages. We cannot resolve one
Rui Li created FLINK-19365:
--
Summary: Migrate Hive table source to new source interface
Key: FLINK-19365
URL: https://issues.apache.org/jira/browse/FLINK-19365
Project: Flink
Issue Type: Task
Congratulations :)
Best,
zb