Hi everyone,
sorry for being so late with this request, but fixing a couple of
downstream bugs had higher priority than this issue and was also
blocking it. Nevertheless, I would like to ask for permission to merge
FLINK-19980[1] to the 1.13 branch as an experimental feature to add the
re is a new RC or 1.13.1 otherwise. Of course if there are no
objections from others.
Best,
Dawid
On 21/04/2021 10:52, Timo Walther wrote:
> Hi everyone,
>
> sorry for being so late with this request, but fixing a couple of
> downstream bugs
Hi Konstantin,
thanks for starting the discussion. From the Table API side, we also
fixed a couple of critical issues already that justify releasing a
1.13.1 asap.
Personally, I would like to include
https://issues.apache.org/jira/browse/FLINK-22666 that fixes some last
issues with the Scal
Hi Konstantin,
thanks for starting this discussion. I was also about to provide some
feedback because I have the feeling that the bot is too aggressive at
the moment.
Even a 14-day interval is a short period of time for bigger efforts
that might include several subtasks. Currently, if we sp
Hi Ingo,
thanks for giving FLIP-129 an update before finally implementing it. I
wouldn't start with a voting thread right away but collect more feedback
from the community in a [DISCUSS] thread before.
Also, voting threads should be performed on the updated wiki page and
include the voting d
+1 (binding)
Thanks for driving this.
Regards,
Timo
On 21.06.21 13:24, Ingo Bürk wrote:
Hi everyone,
thanks for all the feedback so far. Based on the discussion[1] we seem to
have consensus, so I would like to start a vote on FLIP-129 for which the
FLIP has now also been updated[2].
The vote
Hi everyone,
I'm sending this email to make sure everyone is on the same page about
slowly deprecating the DataSet API.
There have been a few thoughts mentioned in presentations, offline
discussions, and JIRA issues. However, I have observed that there are
still some concerns or different op
Hi Jack,
thanks for sharing your proposal with us. I totally understand the
issues that you are trying to solve. Having a more flexible type support
in the connectors is definitely a problem that we would like to address
in the mid term. It is already considered on our internal roadmap
pla
Hi everyone,
as discussed previously [1] and tracked in FLINK-14437, we executed the
last step of FLIP-32 and removed all occurrences of the term "blink" in
the code base.
This includes renaming the following Maven modules:
flink-table-planner-blink -> flink-table-planner
flink-table-runtime-
Thanks for writing this up, this also reflects my understanding.
I think a blog post would be nice, ideally with an explicit call for
feedback so we learn about user concerns.
A blog post has a lot more reach than an ML thread.
Best,
Stephan
On Wed, Jun 23, 2021 at 12:23 PM Timo Walther
wro
.@apache.org
wrote:
Hi Timo,
Thanks for joining the discussion. All rules except the unassigned rule
do not apply to Sub-Tasks actually (like deprioritization, closing).
Additionally, activity on a Sub-Task counts as activity for the parent.
So, the parent ticket would not be t
+1 (binding)
I went through all commits one more time and could not spot anything
that would block a release.
Thanks Chesnay!
Timo
On 15.07.21 09:02, Chesnay Schepler wrote:
Hi everyone,
Please review and vote on the release candidate #1 for the version 14.0,
as follows:
[ ] +1, Approve t
Hi Srini,
welcome aboard! Great to see more adoption in the SQL space. Looking
forward to collaboration.
Regards,
Timo
On 19.07.21 10:58, Till Rohrmann wrote:
Hi Srini,
Welcome to the Flink community :-) Great to hear what you are planning to
do with Flink at LinkedIn. I think sharing this
Hi Dominik,
`toAppendStream` is soft deprecated in Flink 1.13 and will be deprecated
in Flink 1.13. It uses the old type system and might not match perfectly
with the other reworked type system in new functions and sources.
For SQL, a lot of Avro classes need to be treated as RAW types. But w
Sorry, I meant "will be deprecated in Flink 1.14"
On 09.08.21 19:32, Timo Walther wrote:
Hi Dominik,
`toAppendStream` is soft deprecated in Flink 1.13 and will be deprecated
in Flink 1.13. It uses the old type system and might not match perfectly
with the other reworked type sys
Hi everyone,
I'm not deeply involved in the discussion but I quickly checked out the
proposed interfaces because it seems they are using Table API heavily
and would like to leave some feedback here:
I have the feeling that the proposed interfaces are a bit too simplified.
Methods like `Table
Hi everyone,
this sounds definitely like a bug to me. Computing metadata might be
very expensive and a connector might expose a long list of metadata
keys. It was therefore intended to project the metadata if possible. I'm
pretty sure that this worked before (at least when implementing
Suppor
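The projection idea above can be sketched in plain Java (the class and method names here are mine, not Flink's actual interfaces): the connector declares producers for all metadata keys it could provide, and only the keys the query actually selects are ever computed.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical helper, not part of Flink: evaluates only projected metadata keys.
public class MetadataProjection {
    // All keys the connector could provide, with (possibly expensive) producers.
    private final Map<String, Supplier<Object>> producers = new LinkedHashMap<>();

    public void declare(String key, Supplier<Object> producer) {
        producers.put(key, producer);
    }

    // Evaluate only the keys the query selected; untouched producers never run.
    public Map<String, Object> read(List<String> projectedKeys) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String key : projectedKeys) {
            Supplier<Object> p = producers.get(key);
            if (p != null) {
                out.put(key, p.get());
            }
        }
        return out;
    }
}
```

With this shape, a long list of exposed metadata keys costs nothing unless a key is actually referenced in the query.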
Hi Xingcan,
in theory there should be no hard blocker for supporting this. The
implementation should be flexible enough at most locations. We just
adopted 38 from the Blink code base, which adopted it from Hive.
However, this could be a breaking change for existing pipelines and we
would need
Hi Ingo,
thanks for starting this discussion. Having more automation is
definitely desirable. Esp. in the API / SDK areas where we frequently
have to add similar comments to PRs. The more checks the better. We
definitely also need more guidelines (e.g. how to develop a Flink
connector) but au
Hi Martijn,
we are currently in the process of backporting a couple of PRs to the
1.13 branch to fix the partially broken primary key support in Flink
SQL. See FLINK-20374 for more information.
I hope we can finalize this in one or two days. It should be completed
in 1.13.3 to avoid confu
Hi Zheng,
I'm very sorry for the inconvenience that we have caused with our API
changes. We are trying our best to avoid API breaking changes. Thanks
for giving us feedback.
There has been a reason why Table API was marked as @PublicEvolving
instead of @Public. Over the last two years, we ha
" for which
we can evolve quicker.
That sounds great! I'm glad to see that we are making the API more
friendly!
[1]. https://github.com/danny0405
On Tue, Sep 28, 2021 at 3:52 PM Timo Walther wrote:
Hi Zheng,
I'm very sorry for the inconvenience that we have caused with o
, Martijn Visser wrote:
Hi Timo,
Sounds good to me.
However, I still need a PMC member to make the release. I definitely
volunteer
to help out with keeping track of things, communication etc. Hopefully
there's a PMC who can help.
Best regards,
Martijn
On Mon, 27 Sep 2021 at 11:32, Timo Wa
Hi Francesco,
thanks for starting this discussion. It is definitely time to clean up
more connectors and formats that were used for the old planner but are
actually not intended for the DataStream API.
+1 for deprecating and dropping the mentioned formats. Users can either
use Table API or i
Hi Thomas,
thanks for your feedback.
The error that you are experiencing is definitely a bug in 1.13.3 and
the missing method should be reintroduced in the next patch version to
make code compiled against older patch versions run again.
Regarding the discussion points:
I agree that flink-ta
Hi,
this is an interesting idea. But as far as I can see, by looking at
other SQL engines like Microsoft SQL Server:
https://docs.microsoft.com/en-us/sql/t-sql/queries/select-over-clause-transact-sql?view=sql-server-ver15
The range is a well-defined set of keywords and CURRENT_TIMESTAMP is no
Hi Sergey,
thanks for this nice demo video. It looks very nice and makes the SQL
Client an even more useful tool.
What I miss a bit in the FLIP is the implementation details.
For example, who is responsible for parsing comments? I guess the SQL
Client and not the Flink SQL parser will take c
ds.
Currently I made it pretty straightforward:
if a word is not inside a quoted string, not inside a comment or a hint,
and matches anything from SQL92
(*org.apache.calcite.sql.parser.SqlAbstractParserImpl#getSql92ReservedWords*),
then it will be highlighted as a keyword.
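A minimal sketch of that rule (my simplification, not the SQL Client's actual implementation): skip quoted string literals and `--` comments, then flag the remaining words that appear in a reserved-word set — here a tiny stand-in for Calcite's SQL92 list.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Set;

// Toy highlighter: returns the words that would be colored as keywords.
public class KeywordHighlighter {
    // Tiny stand-in for Calcite's SQL92 reserved-word set.
    private static final Set<String> RESERVED = Set.of("SELECT", "FROM", "WHERE");

    public static List<String> keywords(String sql) {
        List<String> hits = new ArrayList<>();
        int i = 0;
        while (i < sql.length()) {
            char c = sql.charAt(i);
            if (c == '\'') {                       // skip a quoted string literal
                int end = sql.indexOf('\'', i + 1);
                i = (end < 0) ? sql.length() : end + 1;
            } else if (sql.startsWith("--", i)) {  // skip a single-line comment
                int end = sql.indexOf('\n', i);
                i = (end < 0) ? sql.length() : end + 1;
            } else if (Character.isLetter(c)) {    // collect a word
                int start = i;
                while (i < sql.length() && Character.isLetter(sql.charAt(i))) i++;
                String word = sql.substring(start, i).toUpperCase(Locale.ROOT);
                if (RESERVED.contains(word)) hits.add(word);
            } else {
                i++;
            }
        }
        return hits;
    }
}
```

Note how `select` inside the string literal and `from` inside the comment are not reported — only the real keywords outside are.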
On Tue, Nov 2, 2021 at 12:
+1 (binding). Thanks for working on this.
Regards,
Timo
On 05.11.21 10:14, Sergey Nuyanzin wrote:
Also there is a short demo showing some of the features mentioned in this
FLIP.
It is available at https://asciinema.org/a/446247?speed=3.0 (It was also
mentioned in [DISCUSS] thread)
On Wed, Nov
Hi everyone,
sorry for the delay in joining this thread. I went through the FLIP and
have some comments (maybe overlapping with Stephan's comments, which I
haven't read yet):
a. > More importantly, in order to solve the cognitive bar...
It would be great if we can add not only `Receive any t
Hi everyone,
even though the DISCUSS thread was open for 2 weeks, I have the feeling
that the VOTE was initiated too quickly. At least a final "I will start
the vote soon. Last call for comments." would have been nice.
I also added some comments in the DISCUSS thread. Let's hope we can
resolv
Hi everyone,
On behalf of the PMC, I'm very happy to announce Jing Zhang as a new
Flink committer.
Jing has been very active in the Flink community, esp. in the Table/SQL
area, for quite some time: 81 PRs [1] in total, and she is also active
in answering questions on the user mailing list. She is c
Hi everyone,
thanks for finally having this discussion on the mailing list. As both a
contributor and user, I have experienced a couple of issues around
nullability coming out of nowhere in a pipeline. This discussion should
not only cover CAST but failure handling in general.
Let me summarize m
Hi everyone,
as many of you know, one of the biggest weaknesses of Flink's Table &
SQL API are the difficulties around stateful upgrades between Flink
minor versions (e.g. 1.13->1.14). Currently, we cannot provide any
backwards guarantees in those scenarios and need to force users to
reproces
Hi all,
let me add a few more words on this topic:
The @PublicEvolving interface is kind of a staging annotation for
@Public. We still need to be careful when to change classes with
@PublicEvolving annotation. Usually, it still involves a proper
deprecation process over 1-2 releases to give d
Thanks Danny.
I will merge FLINK-28861 in a couple of minutes to master. I will open a
PR for 1.15 shortly. This issue is pretty tricky, we should add a
warning to 1.15.0 and 1.15.1 releases as it won't be easy to perform
stateful upgrades in between 1.15.x patch versions for pipelines that
use
Congratulations and welcome to the committer team :-)
Regards,
Timo
On 17.08.22 12:50, Yuxin Tan wrote:
Congratulations, Lijie!
Best,
Yuxin
On Wed, Aug 17, 2022 at 18:42, Guowei Ma wrote:
Congratulations, Lijie. Welcome on board~!
Best,
Guowei
On Wed, Aug 17, 2022 at 6:25 PM Zhu Zhu wrote:
Hi ever
Congratulations and welcome to the committer team :-)
Regards,
Timo
On 18.08.22 07:19, Lijie Wang wrote:
Congratulations, Junhan!
Best,
Lijie
On Thu, Aug 18, 2022 at 11:31, Leonard Xu wrote:
Congratulations, Junhan!
Best,
On Aug 18, 2022 at 11:27 AM, Zhipeng Zhang wrote:
Congratulations, Junhan!
Xintong Song wrote on
Hi Ran,
what would be the data type of this dynamic metadata column? The planner
and many parts of the stack will require a data type.
Personally, I feel connector developers can already have the same
functionality by declaring a metadata column as `MAP`.
This is what we expose already as `d
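To illustrate the `MAP` idea in plain Java (the column name, keys, and values here are made up for the example): the row carries one open-ended map-typed column, so a connector can surface new metadata keys later without any schema change.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: fixed columns plus one open-ended metadata map per row.
public class MetadataMapExample {
    public static Map<String, String> metadataFor(String topic, long offset) {
        Map<String, String> meta = new LinkedHashMap<>();
        meta.put("topic", topic);                // key known today
        meta.put("offset", Long.toString(offset));
        meta.put("some.future.key", "42");       // added later, no DDL change needed
        return meta;
    }
}
```

The trade-off is that every value must fit one data type (here `String`), which is exactly why the planner otherwise insists on a declared data type per metadata column.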
d now even in the future rather than
check and deny these new metadata columns or modify connector
implementation frequently to support it.
And it's an option to configure by using 'DYNAMIC' at the metadata
column (or other better implementations).
[1]
https://nightlies.apache.org/fli
jects).
Congratulations and welcome, Martijn!
Cheers,
Timo Walther
(On behalf of the Apache Flink PMC)
+1 (binding)
Please also make sure to update ExpressionReducer during implementation.
This is not mentioned in the FLIP yet.
Thanks,
Timo
On 22.09.22 10:41, Piotr Nowojski wrote:
+1 (binding)
On Thu, 22 Sep 2022 at 08:21, Lincoln Lee wrote:
Hi everyone,
Thanks for all your feedback for
Makes sense to me. The serializer stack is pretty complex right now; the
more legacy we remove, the better.
Regards,
Timo
On 20.10.22 12:49, Chesnay Schepler wrote:
+1
Sounds like a good reason to drop these long-deprecated APIs.
On 19/10/2022 15:13, Piotr Nowojski wrote:
Hi devs,
I would
Actually, the new type inference stack for UDFs is smart enough to solve
this issue. It could derive a data type for the array from the
surrounding call (expected data type).
So this can be supported with the right type inference logic:
cast(ARRAY() as int)
Unfortunately, ARRAY is fully mana
+1 (binding)
Thanks,
Timo
On 23.11.22 15:53, Piotr Nowojski wrote:
+1 (binding)
On Thu, 10 Nov 2022 at 11:38, Martijn Visser wrote:
+1 (binding)
- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven
- Verified license
Hi everyone,
discussions around ConfigOption seem to be very popular recently. So I
would also like to get some opinions on a different topic.
How do we represent hierarchies in ConfigOption? In FLIP-122, we agreed
on the following DDL syntax:
CREATE TABLE fs_table (
...
) WITH (
'connect
Hi Xuannan,
sorry for not entering the discussion earlier. Could you please update
the FLIP to how it would look after FLIP-84? I think your proposal makes
sense to me and aligns well with the other efforts from an API
perspective. However, here are some thoughts from my side:
It would be ni
Hi Leonard,
this sounds like a nice refactoring for consistency. +1 from my side.
However, I'm not sure how much backwards compatibility is required.
Maybe others can comment on this.
Thanks,
Timo
On 30.04.20 14:09, Leonard Xu wrote:
Hi, dear community
Recently, I’m thinking to refactor th
on't see such requirement, and everything works
fine by now.
So I'm in favor of "format=json". But if the community insists on
following the code style on this, I'm also fine with the longer one.
Btw, I also CC'ed the user mailing list to hear more user feedback.
Because I
thi
ld go with the first option:
format.kind: json
format.fail-on-missing-field: true
If fail-on-missing-field is specific to json, then one could go with
format: json
json.fail-on-missing-field: true
or
format.kind: json
format.json.fail-on-missing-field: true
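The prefixed layout in that last option can be sketched as a tiny helper (the class and method names are mine, purely illustrative): options of a nested component get the component's kind as an extra key prefix.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy resolver for hierarchical option keys as discussed above.
public class OptionKeys {
    // Produces "format.kind" = kind plus format options under "format.<kind>.*".
    public static Map<String, String> nest(String group, String kind, Map<String, String> opts) {
        Map<String, String> flat = new LinkedHashMap<>();
        flat.put(group + ".kind", kind);
        for (Map.Entry<String, String> e : opts.entrySet()) {
            flat.put(group + "." + kind + "." + e.getKey(), e.getValue());
        }
        return flat;
    }
}
```

The prefix makes the hierarchy explicit in the flat key space, at the cost of longer keys.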
Cheers,
Till
On Fri, May
time | TIMESTAMP(3) | false | (NULL) | (NULL) | f4.nested.rowtime - INTERVAL '3' SECOND |
+------+--------------+-------+--------+--------+------------------------------------------+
Thanks,
Fabian
On Wed, 6 May 2020 at 17:51, godfrey he <godfre...@gmail.com> wrote:
Hi @fhue...@gmail.com @Timo Wa
Hi Fabian,
thanks for the proposal. I agree that we should have consensus on the
SQL syntax as well and thus finalize the concepts introduced in FLIP-84.
I would favor Jark's proposal. I would like to propose the following syntax:
BEGIN STATEMENT SET;
INSERT INTO ...;
INSERT INTO ...;
END
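How a client could group those statements can be sketched roughly like this (a simplification under my own naming; the real work is done by the SQL parser): everything between `BEGIN STATEMENT SET;` and `END;` is collected and submitted as one job.

```java
import java.util.ArrayList;
import java.util.List;

// Toy grouping of a statement set; real parsing is done by the SQL parser.
public class StatementSetGrouper {
    public static List<String> group(String script) {
        List<String> statements = new ArrayList<>();
        boolean inSet = false;
        for (String raw : script.split(";")) {
            String stmt = raw.trim();
            if (stmt.equalsIgnoreCase("BEGIN STATEMENT SET")) {
                inSet = true;
            } else if (stmt.equalsIgnoreCase("END")) {
                inSet = false;
            } else if (inSet && !stmt.isEmpty()) {
                statements.add(stmt);  // jointly optimized as a single job
            }
        }
        return statements;
    }
}
```

The point of the block syntax is exactly this grouping: the collected INSERTs form one unit for optimization and execution rather than separate jobs.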
Hi Leonard,
thanks for the summary.
After reading all of the previous arguments and working on FLIP-95, I
would also lean towards the conclusion of not adding the TEMPORAL keyword.
After FLIP-95, what we considered as a CREATE TEMPORAL TABLE can be
represented as a CREATE TABLE with PRIMARY
eat if flink can
support flexible execution modes.
Or Flink core just defines the syntax, provides parser and supports a
default execution mode.
The downstream projects can use the APIs and parsed results to decide
how to execute a SQL statement.
Best,
Godfrey
On Wed, Jun 17, 2020 at 6:32 PM, Timo Walther wrote:
Hi Fabian
Hi Jark,
thanks for working on this issue. It is time to fix this last part of
inconsistency in the API. I also like the core parts of the FLIP, esp.
that TableDescriptor is one entity that can be passed to different
methods. Here is some feedback from my side:
1) +1 for just `column(...)`
I agree with Dian. We can release a 1.11.2 shortly afterwards. Only
regressions compared to 1.11.0 should block this vote.
Regards,
Timo
On 17.07.20 10:48, Dian Fu wrote:
Generally, I tend to continue the vote if this is not a blocking issue for the
following reasons:
- As discussed in the di
word if we need a choice.
8) `Connector.option(...)` class should also accept `ConfigOption`
I’m slightly -1 for this; ConfigOption may not work because the key of a
format ConfigOption has no format prefix, e.g. FAIL_ON_MISSING_FIELD of
json. We need “json.fail-on-missing-field” rather than
“fail-on-
+1
Thanks for driving this Jark.
Regards,
Timo
On 24.07.20 12:42, Jark Wu wrote:
Hi all,
I would like to start the vote for FLIP-129 [1], which is discussed and
reached consensus in the discussion thread [2].
The vote will be open until 27th July (72h), unless there is an objection
or not en
Hi Leonard,
sorry for jumping into the discussion so late. But I have two questions:
1) Naming: Is operation time a good term for this concept? If I read
"The operation time is the time when the changes happened in system." or
"The system time of DML execution in database", why don't we call i
Hi Xuannan,
sorry for joining the discussion so late. I agree that this is a very
nice and useful feature. However, the impact it has to many components
in the stack requires more discussion in my opinion.
1) Separation of concerns:
The current design seems to mix different layers. We should
+1 (binding)
I went through the commit diff and changed files of this release. Could
not spot anything suspicious.
Regards,
Timo
On 11.08.20 14:47, Zhu Zhu wrote:
Hi everyone,
Please review and vote on the release candidate #1 for the version 1.10.2,
as follows:
[ ] +1, Approve the release
Hi everyone,
I would like to propose a FLIP that aims to resolve the remaining
shortcomings in the Table API:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-136%3A++Improve+interoperability+between+DataStream+and+Table+API
The Table API has received many new features over the last yea
SQL or Table API) together. The statements in a
statement
set are jointly optimized and executed as a single Flink job.
Maybe if you add this to the FLIP it will help other readers as well.
Best,
David
On Wed, Aug 19, 2020 at 10:22 AM Timo Walther wrote:
Hi everyone,
I would like to
+1
Thanks for removing legacy.
Regards,
Timo
On 28.08.20 11:55, David Anderson wrote:
+1
David
On Fri, Aug 28, 2020 at 9:41 AM Dawid Wysakowicz
wrote:
Hi all,
I would like to start a vote for removing deprecated, but Public(Evolving)
methods in the upcoming 1.12 release:
- XxxDataSt
Hi Wei,
is `reset_accumulator` still necessary? We dropped it recently in the
Java API because it was not used anymore by the planner.
Regards,
Timo
On 31.08.20 15:00, Wei Zhong wrote:
Hi Jincheng & Xingbo,
Thanks for your suggestions.
I agree that we should keep the user interface uniform
ullable Map fieldNames)"
- Maybe a `List<String> fieldNames` or `String[] fieldNames` parameter is
enough and more handy than Map ?
- Currently, the fieldNames member variable is mutable; is that on purpose?
Can we make it immutable? For example, only accept from the constructor.
- Why do we accept a null
eamExecutionEnvironment#execute.
2. @Timo What is the interaction between Row setters from the different
modes? What happens if the user calls both in different order. E.g.
row.setField(0, "ABC");
row.setField("f0", "ABC"); // is this a valid call ?
or
row.setFi
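One possible answer to that interaction question can be sketched as follows (this is my assumption about the design, not the final API): named access resolves to a position, so both calls hit the same slot and the last write wins regardless of which style was used.

```java
import java.util.Arrays;
import java.util.List;

// Toy Row: positional storage plus a name -> position lookup.
public class ToyRow {
    private final List<String> names;
    private final Object[] values;

    public ToyRow(String... fieldNames) {
        this.names = Arrays.asList(fieldNames);
        this.values = new Object[fieldNames.length];
    }

    public void setField(int pos, Object value) {
        values[pos] = value;
    }

    public void setField(String name, Object value) {
        setField(names.indexOf(name), value);  // same slot as the positional call
    }

    public Object getField(int pos) {
        return values[pos];
    }
}
```

Under this design, `row.setField(0, "ABC")` followed by `row.setField("f0", "XYZ")` is a valid sequence and simply overwrites the field.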
I tested the quickstarts, the SBT build, the PlanVisualizer, and the
HistoryServer. I could not find any serious problems. However, we have
to update the quickstart scripts, once 1.3 is released.
Timo
On 24.05.17 at 16:05, Chesnay Schepler wrote:
I've found a small problem in the yarn user-j
I don't think that FLINK-6780 is a blocker, because the Table API is
still a new feature. FLINK-6736 was also a hard bug. However, if there
will be a RC4, a fix should be included.
Regards,
Timo
On 31.05.17 at 02:55, Haohui Mai wrote:
Hi,
We have discovered https://issues.apache.org/jira/b
I merged all Table API related PRs.
I'm also fine with a 1.3.1 release this or next week.
On 31.05.17 at 14:08, Till Rohrmann wrote:
I would be ok to quickly release 1.3.1 once the respective PRs have
been merged.
Just for your information, I'm not yet through with the testing of the typ
We should also include FLINK-6783. It seems that
WindowedStream::aggregate is broken right now.
On 31.05.17 at 14:31, Timo Walther wrote:
I merged all Table API related PRs.
I'm also fine with a 1.3.1 release this or next week.
On 31.05.17 at 14:08, Till Rohrmann wrote:
I would be
Hi Stefano,
I implemented the overlap according to Calcite's implementation. Maybe
they changed the behavior in the meantime. I agree we should try to
stay in sync with Calcite. What do other DB vendors do? Feel free to
open an issue about this.
Regards,
Timo
On 30.05.17 at 14:24,
Walther wrote:
We should also include FLINK-6783. It seems that WindowedStream::aggregate is
broken right now.
On 31.05.17 at 14:31, Timo Walther wrote:
I merged all Table API related PRs.
I'm also fine with a 1.3.1 release this or next week.
On 31.05.17 at 14:08, Till Rohrman
I'm working on https://issues.apache.org/jira/browse/FLINK-6896 and
https://issues.apache.org/jira/browse/FLINK-6881. I'll try to open a PR for
both today.
Timo
On 19.06.17 at 14:54, Robert Metzger wrote:
Fabian and SunJincheng, it looks like we are cancelling the 1.3.1 RC1.
So there is the op
FLINK-6881 and FLINK-6896 are merged. The Table API is ready for a new RC.
Timo
On 19.06.17 at 17:00, jincheng sun wrote:
Thanks @Timo!
2017-06-19 22:02 GMT+08:00 Timo Walther :
I'm working on https://issues.apache.org/jira/browse/FLINK-6896 and
https://issues.apache.org/jira/browse/
+1 (binding)
I tested the following:
- built from source
- tested the web interface
- ran some streaming programs
It seems that Flink cannot be built if the path contains spaces. I added
an issue for this (https://issues.apache.org/jira/browse/FLINK-6987).
It seems that this error was presen
I just opened a PR which should be included in the next bug fix release
for the Table API:
https://issues.apache.org/jira/browse/FLINK-7005
Timo
On 23.06.17 at 14:09, Robert Metzger wrote:
Thanks Haohui.
The first main task for the release management is to come up with a
timeline :)
Lets jus
Hurray! Finally IntStreams, LongStreams, etc. in our stream processor ;-)
Timo
On 18.07.17 at 16:31, Stephan Ewen wrote:
Hi all!
Over the last days, there was a longer poll running concerning dropping the
support for Java 7.
The feedback from users was unanimous - in favor of dropping Java 7
Hi Mike,
do you run Flink locally or in a cluster? You have to make sure that VM
argument -Djava.library.path is set for all Flink JVMs. Job Manager and
Task Managers might run in separate JVMs. Also make sure that the
library is accessible from all nodes. I don't know what happens if the
file
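One quick way to verify this per JVM (a generic snippet, nothing Flink-specific) is to print the native library search path that the process actually sees, e.g. from a small test job running on each machine:

```java
// Prints the native library search path of the current JVM; run this
// on every JVM (Job Manager and each Task Manager) that needs the library.
public class LibraryPathCheck {
    public static String libraryPath() {
        String path = System.getProperty("java.library.path");
        return path == null ? "" : path;
    }

    public static void main(String[] args) {
        System.out.println("java.library.path = " + libraryPath());
    }
}
```

If the printed path differs between the local run and the cluster JVMs, the `-Djava.library.path` argument is not reaching all processes.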
Forwarded Message
Subject: Re: AVRO Union type support in Flink
Date: Wed, 19 Jul 2017 10:26:24 -0400
From: Vishnu Viswanath
To: Timo Walther
Hi Timo,
Thanks for checking that. I did not try yet. My current application uses
Cascading and it has the limitation that
(aljos...@apache.org
)
wrote:
Gordon and I found this (in my opinion) blocking issue:
https://issues.apache.org/jira/browse/FLINK-7041
I’m trying to quickly provide a fix.
On 26. Jun 2017, at 15:30, Timo Walther wrote:
I just opened
Hi Ramanji,
you can find the source code of the examples here:
https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/wordcount/WordCount.java
A general introduction how the cluster execution works can be found here:
org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:95)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:263)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:748)
On Mon, Aug 7, 2017 at 5:56 PM, Timo Walther
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:745)
On Mon, Aug 7, 2017 at 6:49 PM, Timo Walther wrote:
Make sure that the file exists and is accessible from all Flink task
managers.
On 07.08.17 at 14:35, P. Ramanjaneya Reddy wrote:
Thank you T
I had an offline discussion with Ufuk. Since I looked into the docs
build scripts recently, I can take care of removing the old docs. There
are other issues that need to be fixed before the next release as well:
- Drop docs < 1.0
- Make Javadocs with Scala build again
- Build all docs >= 1.0 again (esp. t
I also think we shouldn't publish releases just to have a release
regularly.
Maybe we can make time-based releases more flexible: instead of a
feature freeze after 3 months and 1 month of testing, we could do a
feature freeze 3 months after the last release, with unlimited testing.
This wou
@Chesnay do you know more about it?
On 10/1/17 at 4:45 PM, Michael Fong wrote:
Hi, all,
Are there any existing metrics implemented in any src/sink connectors? I am
thinking of adding some metrics to the C* connector that might help give an
early sign of scalability problems (like back pressure) fro
Hi Chen,
I think in a long-term perspective it makes sense to support things like
this. The next big step is dynamic scaling without stopping the
execution. Partial upgrades could be addressed afterwards, but I'm not
aware of any plans.
Until then, I would recommend a different architecture
Message
Subject: Re: Flink Watermark and timing
Date: Tue, 3 Oct 2017 06:37:13 +0200
From: Björn Zachrisson
To: Timo Walther
Hi Timo,
One more question regarding that to clarify.
Where do I specify in which window an event that arrives exactly on the
window borderline, window
+1 (binding)
- built the source locally
- ran various table programs
- checked the resource consumption of table programs with retention
enabled and disabled
- built a quickstart project
- tested the web ui submission (found
https://issues.apache.org/jira/browse/FLINK-8187 but this is non-bloc
Hey everyone,
in the past, the community already asked about having a way to write
Flink jobs without extensive programming skills. During the last year we
have put a lot of effort in the core of our Table & SQL API. Now it is
time to improve the tooling around it as well and make Flink more
Hi Ghassan,
in your example the result 3.5 is correct. The query is executed with
standard SQL semantics. You only group by ProductID and since it is the
same for all elements, the average is 3.5.
The second "review-3" does not replace anything. In general, the
replacement would happen in th
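The arithmetic can be reproduced with hypothetical ratings (I'm assuming values such as 3 and 4 that average to 3.5; the actual values live in Ghassan's pipeline): with standard SQL semantics, AVG runs over all rows of the single ProductID group.

```java
import java.util.List;

// Standard-SQL-style AVG over all rows of one group.
public class AvgExample {
    public static double avg(List<Integer> ratings) {
        return ratings.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(Double.NaN);
    }
}
```

Since every row shares the same grouping key, nothing is "replaced" — each new row simply extends the group the average is computed over.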
Hi Amit,
are you using lambdas as parameters of a Flink Function or in a member
variable? If yes, can you share a lambda example that fails?
Regards,
Timo
On 1/3/18 at 11:41 AM, Amit Jain wrote:
Hi,
I'm writing a job to merge old data with changelogs using DataSet API where
I'm reading c
.map(t -> t.f1).returns(GenericRecord.class);
On Wed, Jan 3, 2018 at 4:18 PM, Timo Walther wrote:
Hi Amit,
are you using lambdas as parameters of a Flink Function or in a member
variable? If yes, can you share a lambda example that fails?
Regards,
Timo
On 1/3/18 at 11
ypes "CREATE MATERIALIZED
VIEW" and "SELECT" is not clear to me. Add a subsection and explain what is
described there?
- Implementation plan: Add which result retrieval modes will be supported
in the initial version? Which configuration will be available?
Best, Fabian
2017-12-19
@Jincheng: Yes, I think we should include the two Table API PRs.
On 1/16/18 at 5:28 PM, jincheng sun wrote:
Thanks for starting the discussion Gordon.
I think it is better to add the commits of
`https://issues.apache.org/jira/browse/FLINK-8355`
and
`https://issues.apache.org/jira/browse/FLIN
Hi Aljoscha,
it would be great if we can include the first version of the SQL client
(see FLIP-24, Implementation Plan 1). I will open a PR this week. I
think we can merge this with explicit "experimental/alpha" status. It is
far away from feature completeness but will be a great tool for Flin
386] Flink Elasticsearch 5 connector is not compatible with
Elasticsearch 5.2+ client
have long been awaited and there was one PR from me and from someone else
showing the interest ;) So if you could consider it for 1.5 that would be
great!
Thanks!
--
Christophe
On Mon, Feb 5, 2018 at 2:
es related to scala api and documentation.
Thanks,
Kostas
On Feb 5, 2018, at 5:37 PM, Timo Walther wrote:
Hi Shuyi,
I will take a look at it again this week. I'm pretty sure it will be
part of 1.5.0.
Regards,
Timo
On 2/5/18 at 5:25 PM, Shuyi Chen wrote:
Hi Aljoscha, can we get this f
Hi Amit,
how is the memory consumption when the jobs get stuck? Is the Java GC
active? Are you using off-heap memory?
Regards,
Timo
On 2/12/18 at 10:10 AM, Amit Jain wrote:
Hi,
We have created a batch job where we are trying to merge a set of S3
directories in TextFormat with the old snapshot