Tzu-Li (Gordon) Tai created FLINK-10779:
---
Summary: Update Java / Scala StatefulJobSavepointMigrationITCase
for 1.7
Key: FLINK-10779
URL: https://issues.apache.org/jira/browse/FLINK-10779
Tzu-Li (Gordon) Tai created FLINK-10778:
---
Summary: Update TypeSerializerSnapshotMigrationTestBase and
subclasses for 1.7
Key: FLINK-10778
URL: https://issues.apache.org/jira/browse/FLINK-10778
Tzu-Li (Gordon) Tai created FLINK-10783:
---
Summary: Update WindowOperatorMigrationTest for 1.7
Key: FLINK-10783
URL: https://issues.apache.org/jira/browse/FLINK-10783
Project: Flink
ChuanHaiTan created FLINK-10775:
---
Summary: Quarantined address
[akka.tcp://flink@flink-jobmanager:6123] is still unreachable or has not been
restarted. Keeping it quarantined.
Key: FLINK-10775
URL: https://issues.apache.org/jira/browse/FLINK-10775
Tzu-Li (Gordon) Tai created FLINK-10788:
---
Summary: Update ContinuousFileProcessingMigrationTest for 1.7
Key: FLINK-10788
URL: https://issues.apache.org/jira/browse/FLINK-10788
Project: Flink
zhijiang created FLINK-10790:
Summary: Refactor all the StreamPartitioner implementations into
runtime module
Key: FLINK-10790
URL: https://issues.apache.org/jira/browse/FLINK-10790
Project: Flink
Tzu-Li (Gordon) Tai created FLINK-10777:
---
Summary: Update TypeSerializerSnapshotMigrationITCase for Flink 1.7
Key: FLINK-10777
URL: https://issues.apache.org/jira/browse/FLINK-10777
Project: Flink
Tzu-Li (Gordon) Tai created FLINK-10780:
---
Summary: Update Java / Scala
StatefulJobWBroadcastStateMigrationITCase for 1.7
Key: FLINK-10780
URL: https://issues.apache.org/jira/browse/FLINK-10780
Tzu-Li (Gordon) Tai created FLINK-10784:
---
Summary: Update FlinkKafkaConsumerBaseMigrationTest for 1.7
Key: FLINK-10784
URL: https://issues.apache.org/jira/browse/FLINK-10784
Project: Flink
Tzu-Li (Gordon) Tai created FLINK-10776:
---
Summary: Update migration tests for Flink 1.7
Key: FLINK-10776
URL: https://issues.apache.org/jira/browse/FLINK-10776
Project: Flink
Issue
Tzu-Li (Gordon) Tai created FLINK-10781:
---
Summary: Update BucketingSinkMigrationTest for Flink 1.7
Key: FLINK-10781
URL: https://issues.apache.org/jira/browse/FLINK-10781
Project: Flink
Tzu-Li (Gordon) Tai created FLINK-10786:
---
Summary: Update CEPMigrationTest for 1.7
Key: FLINK-10786
URL: https://issues.apache.org/jira/browse/FLINK-10786
Project: Flink
Issue Type:
Tzu-Li (Gordon) Tai created FLINK-10789:
---
Summary: Some new serializer snapshots added after 1.6 are not
implementing the new TypeSerializerSnapshot interface
Key: FLINK-10789
URL: https://issues.apache.org/jira/browse/FLINK-10789
vinoyang created FLINK-10791:
Summary: Provide end-to-end test for Kafka 0.11 connector
Key: FLINK-10791
URL: https://issues.apache.org/jira/browse/FLINK-10791
Project: Flink
Issue Type: Test
Tzu-Li (Gordon) Tai created FLINK-10782:
---
Summary: Update AbstractKeyedOperatorRestoreTestBase for 1.7
Key: FLINK-10782
URL: https://issues.apache.org/jira/browse/FLINK-10782
Project: Flink
Tzu-Li (Gordon) Tai created FLINK-10787:
---
Summary: Update AbstractNonKeyedOperatorRestoreTestBase for 1.7
Key: FLINK-10787
URL: https://issues.apache.org/jira/browse/FLINK-10787
Project: Flink
vinoyang created FLINK-10792:
Summary: Extend SQL client end-to-end to test KafkaTableSink for
kafka 0.11 connector
Key: FLINK-10792
URL: https://issues.apache.org/jira/browse/FLINK-10792
Project: Flink
Stefan Richter created FLINK-10793:
---
Summary: Change visibility of TtlValue and TtlSerializer to public
for external tools
Key: FLINK-10793
URL: https://issues.apache.org/jira/browse/FLINK-10793
Thanks a lot for sharing the code with the community, Yadong!
It looks really cool and I also want to give it a try to see how easy it is
to start Flink with it.
If it is already implemented and working, we could also think about adding
it to Flink and adding a feature flag to switch between the old
Thanks Aljoscha for bringing us this discussion!
1. I think one of the reasons for separating `advance()` and
`getCurrent()` is that a source returns several different types of values:
not just the record itself, but also the record's timestamp and the watermark.
If we don't separate these into
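A minimal sketch of the point above (the interface and class names here are hypothetical, not the actual FLIP interface): splitting `advance()` from the getters lets a single position expose the record, its timestamp, and the watermark through separate accessors, with `getCurrent()` staying idempotent until the next `advance()`.

```java
import java.util.Arrays;
import java.util.Iterator;

// Hypothetical reader shape, illustrating the advance()/getCurrent() split.
interface RecordReader<T> {
    boolean advance();          // move to the next element; false at end of input
    T getCurrent();             // record at the current position (idempotent)
    long getCurrentTimestamp(); // timestamp of the current record
    long getWatermark();        // current watermark
}

class InMemoryReader implements RecordReader<String> {
    private final Iterator<String> it;
    private String current;
    private long pos = -1;

    InMemoryReader(String... records) {
        this.it = Arrays.asList(records).iterator();
    }

    @Override public boolean advance() {
        if (!it.hasNext()) {
            return false;
        }
        current = it.next();
        pos++;
        return true;
    }

    @Override public String getCurrent() { return current; }
    // Illustrative only: timestamp == position, watermark trails by one.
    @Override public long getCurrentTimestamp() { return pos; }
    @Override public long getWatermark() { return pos - 1; }
}
```

With a combined `next()`-style call, all three values would have to be bundled into one wrapper object per record; the split avoids that allocation and lets callers ask only for what they need.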
Hi Boris,
Thanks for sharing the code that you'd like to contribute for FLIP-23.
I had a quick look at the repository and collected some stats to estimate
the reviewing effort for the contribution.
There are approximately 1,900 lines of Java and 2,000 lines of Scala code.
This is a reasonable size that
Thanks for sharing this design document with the community, Yingjie.
I like the design of passing the job-specific blacklisted TMs as a scheduling
constraint. This makes a lot of sense to me.
Cheers,
Till
On Fri, Nov 2, 2018 at 4:51 PM yingjie wrote:
> Hi everyone,
>
> This post proposes the
Congxian Qiu created FLINK-10794:
Summary: Do not create checkpointStorage when checkpoint is
disabled
Key: FLINK-10794
URL: https://issues.apache.org/jira/browse/FLINK-10794
Project: Flink
I updated the FLIP [1] with some Javadoc for the SplitReader to outline what I
had in mind with the interface. Sorry for not doing that earlier; it's not
quite clear from the names alone how the methods should work.
The gist of it is that advance() should be non-blocking, so
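One way to read "non-blocking" here, sketched below with illustrative names (this is not the actual FLIP-27 API): `advance()` returns immediately with one of three outcomes, and the caller decides whether to process the record, retry later, or tear down.

```java
import java.util.ArrayDeque;

// Hedged sketch of a non-blocking advance(): the call never waits for data;
// it reports what the reader found at this instant.
class QueueBackedReader {
    enum AdvanceResult { AVAILABLE, NOTHING_AVAILABLE, END_OF_INPUT }

    private final ArrayDeque<String> queue = new ArrayDeque<>();
    private boolean finished;
    private String current;

    void add(String record) { queue.add(record); }
    void finish() { finished = true; }

    AdvanceResult advance() {
        String next = queue.poll(); // returns immediately, never blocks
        if (next != null) {
            current = next;
            return AdvanceResult.AVAILABLE;
        }
        return finished ? AdvanceResult.END_OF_INPUT : AdvanceResult.NOTHING_AVAILABLE;
    }

    String getCurrent() { return current; }
}
```

The design choice is that the runtime, not the reader, owns the waiting: on `NOTHING_AVAILABLE` the caller can park the task or poll another split instead of tying up a thread inside the source.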
Hi Fabian, these are great questions! I have some quick thoughts on some of
them.
Optimization opportunities: I think you are right that UDFs are more like
black boxes today. However, this can change if we let users develop UDFs
symbolically in the future (i.e., Flink will look inside the UDF code,
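To illustrate the black-box point with a toy example (all classes here are hypothetical, not Flink code): a plain lambda predicate is opaque to a planner, while a "symbolic" expression tree can be inspected, for instance to see which column a filter touches and push the predicate down to the source.

```java
import java.util.Map;

// A symbolic filter expression the planner can pattern-match on,
// in contrast to an opaque java.util.function.Predicate lambda.
interface Expr {
    boolean eval(Map<String, Integer> row);
}

class ColumnGreaterThan implements Expr {
    final String column;
    final int value;

    ColumnGreaterThan(String column, int value) {
        this.column = column;
        this.value = value;
    }

    @Override public boolean eval(Map<String, Integer> row) {
        return row.get(column) > value;
    }
}

class Optimizer {
    // With a symbolic expression, the planner can extract the referenced
    // column for predicate pushdown; with a lambda it cannot look inside.
    static String pushdownColumn(Expr e) {
        return (e instanceof ColumnGreaterThan) ? ((ColumnGreaterThan) e).column : null;
    }
}
```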
Flavio Pompermaier created FLINK-10795:
---
Summary: STDDEV_POP error
Key: FLINK-10795
URL: https://issues.apache.org/jira/browse/FLINK-10795
Project: Flink
Issue Type: Improvement
Hi All,
As Jincheng brought up in the previous email, there is a set of
improvements needed to make the Table API more complete and self-contained. To give
a better overview on this, Jincheng, Jiangjie, Shaoxuan and myself
discussed offline a bit and came up with an initial outline.
Table API
Hey Boris,
We have developed something very similar for our needs, but we faced some
issues when running it in HA mode, mainly because TensorFlow uses native
functions, which caused problems in combination with automatic job restarts.
As far as I remember, the issue
Hi everyone,
Please review and vote on the release candidate #1 for the version 1.7.0,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes [1],
*
Bowen Li created FLINK-10796:
Summary: Add a default external catalog (as FlinkInMemoryCatalog)
to CatalogManager
Key: FLINK-10796
URL: https://issues.apache.org/jira/browse/FLINK-10796
Project: Flink
Xiening Dai created FLINK-10797:
---
Summary: "IntelliJ Setup" link is broken in Readme.md
Key: FLINK-10797
URL: https://issues.apache.org/jira/browse/FLINK-10797
Project: Flink
Issue Type: Bug
xinchun created FLINK-10799:
---
Summary: YARN mode JobManager JVM memory args add -XmsXXXm
Key: FLINK-10799
URL: https://issues.apache.org/jira/browse/FLINK-10799
Project: Flink
Issue Type:
Hi all,
I think it's good to enhance the functionality and productivity of the Table
API, but I still think SQL + DataStream is a better choice from a user
experience perspective.
1. The unification of batch and stream processing is very attractive, and
many of our users are moving their batch-processing applications
Thanks yangyu for launching this discussion.
I really like this proposal. We have frequently run into this situation in
production, where a few long-tail tasks delay the total execution time of a batch job.
We also have some thoughts on introducing this mechanism. Looking forward to your
detailed design doc,
vinoyang created FLINK-10798:
Summary: Add the version number of Flink 1.7 to MigrationVersion
Key: FLINK-10798
URL: https://issues.apache.org/jira/browse/FLINK-10798
Project: Flink
Issue Type:
Hi Xiaogang,
Thanks for your feedback, I will share my thoughts here:
First, enhancing TableAPI does not mean weakening SQL. We also need to
enhance the functionality of SQL, such as @Xuefu's ongoing integration of
the hive SQL ecosystem.
In addition, SQL and the Table API are two different API forms
Thanks for updating the wiki, Aljoscha.
The isDone()/advance()/getCurrent() API looks similar to
hasNext()/isNextReady()/getNext(), but implies somewhat different behavior.
If users call getCurrent() twice without calling advance() in between, will
they get the same record back? From the API
Hi everyone,
We propose speculative execution of tasks for Flink batch jobs, as
follows.
In batch mode, a job is usually divided into multiple parallel tasks
executed across many nodes in the cluster. It is common to encounter
performance degradation on some nodes due to hardware
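The core idea can be sketched in a few lines (illustrative only, not the proposed Flink design; `SpeculativeRunner` and its parameters are hypothetical): if the original attempt does not finish within a threshold, launch a backup attempt and take whichever completes first.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Minimal sketch of speculative execution: a straggling attempt is raced
// against a backup copy, and the first result wins.
class SpeculativeRunner {
    static <T> T run(Callable<T> original, Callable<T> backup, long thresholdMillis)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            CompletionService<T> cs = new ExecutorCompletionService<>(pool);
            cs.submit(original);
            Future<T> first = cs.poll(thresholdMillis, TimeUnit.MILLISECONDS);
            if (first != null) {
                return first.get(); // original finished within the threshold
            }
            cs.submit(backup);      // speculate: start a second copy
            return cs.take().get(); // first attempt to complete wins
        } finally {
            pool.shutdownNow();     // cancel the losing attempt
        }
    }
}
```

A real scheduler would of course pick a different node for the backup and deduplicate side effects, but the race-and-take-first structure is the essence.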
Hi,
Thanks Till for preparing the RC1 for Flink 1.7.0!
I checked a few things, but there seem to be some issues with the release
candidate.
+ Built Flink 1.7.0 from sources and ran all tests (on Darwin Kernel Version
17.7.0, Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64).
+
Hi there,
As communicated in an email thread, I'm proposing Flink-Hive metastore
integration. I have a draft design doc that I'd like to convert to a FLIP.
Thus, it would be great if anyone could grant me write access to
Confluence. My Confluence ID is xuefu.
@Timo Walther and
Hi Xiaogang,
Thanks for the comments. Please see the responses inline below.
On Tue, Nov 6, 2018 at 11:28 AM SHI Xiaogang wrote:
> Hi all,
>
> I think it's good to enhance the functionality and productivity of Table
> API, but still I think SQL + DataStream is a better choice from user
>
Hi Rong Rong,
Sorry for the late reply, and thanks for your feedback! We will continue
to add more convenience features to the Table API, such as map, flatmap,
agg, flatagg, iteration, etc. And I am very happy that you are interested in
this proposal. Since this is a long-term, continuous effort, we
zhijiang created FLINK-10800:
Summary: Abstract the StreamPartitionerTest for common codes
Key: FLINK-10800
URL: https://issues.apache.org/jira/browse/FLINK-10800
Project: Flink
Issue Type:
Hi Jincheng,
Thanks for your proposal. I think it is a helpful enhancement for the Table
API and a solid step forward.
It doesn't weaken SQL or the DataStream API, because the conversion between
DataStream and Table still works.
People with advanced cases (e.g. complex and fine-grained