In the case of a normal Flink job, I agree we can infer the table type from
the queries. However, for the SQL client, the query is ad hoc and not known
beforehand. In such a case, we might want to enforce the table open mode at
startup time, so users won't accidentally write to a Kafka topic that is
Thanks Tzu-Li,
Then we can do some refactoring/renaming; I opened
https://issues.apache.org/jira/browse/FLINK-10494 under
https://issues.apache.org/jira/browse/FLINK-10392
Jin
On 10/3/18, 9:48 PM, "Tzu-Li Chen" wrote:
Hi Sun,
JobManager.scala is a component of the legacy mode; they
JIN SUN created FLINK-10494:
---
Summary: Rename 'JobManager' to 'JobMaster' for some classes in
JobMaster folder
Key: FLINK-10494
URL: https://issues.apache.org/jira/browse/FLINK-10494
Project: Flink
Elias Levy created FLINK-10493:
--
Summary: Macro generated CaseClassSerializer considered harmful
Key: FLINK-10493
URL: https://issues.apache.org/jira/browse/FLINK-10493
Project: Flink
Issue
For sure.
I am targeting the Operator at applications running Flink version
1.6 and above. From an integration standpoint, the ability to control the
Flink Application from outside the cluster using APIs is a huge deal. So
any improvement on that area is a huge value addition.
Thanks,
The second alternative, with the addition of methods that take functions
with Scala types, seems the most sensible. I wonder if there is a need
then to maintain the *J Java parameter methods, or whether users could just
access the functionality by converting the Scala DataStreams to Java via
Hi Timo,
Thanks for putting together the proposal!
I really love the idea of combining the solutions for historic and recent data
and left some suggestions on that part.
Regarding the table type, e.g. for kafka streams, I agree with @hequn's
idea that it should be pretty much inferable from the SQL
Hi,
I'm currently working on https://issues.apache.org/jira/browse/FLINK-7811, with
the goal of adding support for Scala 2.12. There is a bit of a hurdle and I
have to explain some context first.
With Scala 2.12, lambdas are implemented using the lambda mechanism of Java 8,
i.e. Scala lambdas
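As a hedged illustration of why this matters for the API (the interface names below are hypothetical stand-ins, not Flink's actual classes): under Java 8 SAM conversion, one lambda expression can implement any structurally matching functional interface, so an API that overloads a method on both a Java-style and a Scala-style function type becomes ambiguous for a bare lambda argument.

```java
// Two structurally identical single-abstract-method interfaces
// (hypothetical stand-ins for a Java MapFunction and a Scala Function1).
interface JavaStyleMap<I, O> { O map(I in); }
interface ScalaStyleFn<I, O> { O apply(I in); }

public class SamAmbiguityDemo {
    // If an API exposed BOTH of these overloads, a bare lambda argument
    // would match either one, and the call would fail to compile:
    //   static <I, O> void transform(JavaStyleMap<I, O> f) { ... }
    //   static <I, O> void transform(ScalaStyleFn<I, O> f) { ... }

    public static void main(String[] args) {
        // The same lambda text satisfies either interface once a target
        // type is given; that target typing is what SAM conversion does.
        JavaStyleMap<Integer, Integer> a = x -> x + 1;
        ScalaStyleFn<Integer, Integer> b = x -> x + 1;
        System.out.println(a.map(1) + " " + b.apply(1)); // prints "2 2"
    }
}
```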
Hi,
Thanks a lot for the proposal. I like the idea to unify table definitions.
I think we can drop the table type, since the type can be derived from the
SQL; i.e., a table that is inserted into can only be a sink table.
I left some minor suggestions in the document, mainly include:
- Maybe we also need to
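The derivation described above could look roughly like the following sketch. This is purely illustrative string matching (a real planner resolves table roles from the parsed query, not from regexes or substrings); the class and method names are made up for the example.

```java
public class TableRoleSketch {
    // Illustrative only: a table that is the target of INSERT INTO acts as
    // a sink; tables referenced elsewhere (FROM/JOIN) act as sources.
    static String roleOf(String table, String sql) {
        String upper = sql.toUpperCase();
        if (upper.contains("INSERT INTO " + table.toUpperCase())) {
            return "sink";
        }
        return "source";
    }

    public static void main(String[] args) {
        String q = "INSERT INTO results SELECT user, COUNT(*) FROM clicks GROUP BY user";
        System.out.println(roleOf("results", q)); // prints "sink"
        System.out.println(roleOf("clicks", q));  // prints "source"
    }
}
```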
Dear community,
this is the weekly community update thread #40. Please post any news and
updates you want to share with the community to this thread.
# Discussing feature freeze for Flink 1.7
The community is currently discussing the feature freeze for Flink 1.7 [1].
The 22nd of October is
Hi,
I have a branch in my Github repository to test the TPC-H queries [1] [2].
All queries are supported (four need to be slightly rewritten).
When checking the results of the benchmark, please keep in mind that so far
we focused our efforts on extending the functionality and unified semantics
Piotr Nowojski created FLINK-10491:
--
Summary: Deadlock during spilling data in SpillableSubpartition
Key: FLINK-10491
URL: https://issues.apache.org/jira/browse/FLINK-10491
Project: Flink
Tzu-Li (Gordon) Tai created FLINK-10492:
---
Summary: Document generic support for configuring AwsKinesisClient
in the Kinesis Consumer
Key: FLINK-10492
URL: https://issues.apache.org/jira/browse/FLINK-10492
One challenge would be duplicate keys in this context.
On Thu, Oct 4, 2018 at 10:17, Till Rohrmann <
trohrm...@apache.org> wrote:
> Hi Daniel,
>
> I don't think that there is a fundamental problem of having MapState
> available for operator state. First, there are some questions to be
>
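The duplicate-key challenge can be made concrete with a small sketch (not Flink API; the class name and the conflict rule are assumptions for illustration): with union-style redistribution every subtask receives all restored entries, so the same key may occur in several per-subtask maps and the merge needs an explicit conflict rule.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UnionMapStateSketch {
    // Hypothetical union of per-subtask map states on restore. The same key
    // can appear in more than one map, so we must decide what "union" means
    // for colliding entries.
    static Map<String, Long> union(List<Map<String, Long>> states) {
        Map<String, Long> merged = new HashMap<>();
        for (Map<String, Long> state : states) {
            for (Map.Entry<String, Long> e : state.entrySet()) {
                // Assumed conflict rule for this sketch: sum duplicate keys.
                merged.merge(e.getKey(), e.getValue(), Long::sum);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Long> s1 = Map.of("a", 1L, "b", 2L);
        Map<String, Long> s2 = Map.of("b", 3L);
        Map<String, Long> merged = union(List.of(s1, s2));
        System.out.println(merged.get("b")); // duplicate key "b": 2 + 3 = 5
    }
}
```

Other conflict rules (last-writer-wins, failing fast on collision) are equally defensible, which is precisely why the semantics need to be pinned down before MapState can be offered for operator state.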
Stefan Richter created FLINK-10490:
--
Summary: OperatorSnapshotUtil should probably use
SavepointV2Serializer
Key: FLINK-10490
URL: https://issues.apache.org/jira/browse/FLINK-10490
Project: Flink
Alexis Sarda-Espinosa created FLINK-10489:
-
Summary: Inconsistent window information for streams with
EventTime characteristic
Key: FLINK-10489
URL: https://issues.apache.org/jira/browse/FLINK-10489
Great to hear that you intend to open source your K8s operators. I would be
keen to see what and how you do things with the operator. If there are
things to change on the Flink side in order to improve the integration,
then let's discuss them.
Cheers,
Till
On Wed, Oct 3, 2018 at 2:52 AM Jin Sun
Hi Daniel,
I don't think that there is a fundamental problem of having MapState
available for operator state. First, there are some questions to be
answered though: How do you union map state and how do you split map state
in case of repartitioning? Once this has been answered, one needs to
Thanks a lot for the proposal, Timo. I left a few comments. Also, it seems
the example in the doc does not have the table type (source, sink and both)
property anymore. Are you suggesting dropping it? I think the table type
property is still useful, as it can restrict a certain connector to be
only
Fabian Hueske created FLINK-10488:
-
Summary: Add DISTINCT operator for streaming tables that leverages
time attributes
Key: FLINK-10488
URL: https://issues.apache.org/jira/browse/FLINK-10488
Project: