Hi Flink Community,
On behalf of the data Artisans team, I’d like to announce that the sessions
for Flink Forward San Francisco are now online!
Check out the great lineup of speakers from companies such as American
Express, Comcast, Capital One, eBay, Google, Lyft, Netflix, Uber, Yelp, and
Thanks Gordon for managing the release!
2018-02-15 13:22 GMT+01:00 Tzu-Li Tai :
> The voting time has passed and I'm happy to announce that we've collected
> enough votes to release this RC as Flink 1.4.1.
>
> +1 votes:
> - Fabian (binding)
> - Aljoscha (binding)
> - Timo
Hi Niels,
Jörn is right: although it offers different methods, Flink's InputFormat is
very similar to Hadoop's InputFormat interface.
The InputFormat.createInputSplits() method generates splits that can be
read in parallel.
The FileInputFormat splits files by fixed boundaries (usually HDFS
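The fixed-boundary splitting described above can be sketched roughly as follows. This is an illustrative standalone sketch, not Flink's actual FileInputFormat code; the class and method names are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of fixed-boundary file splitting, similar in spirit
// to what FileInputFormat does. Names are hypothetical, not Flink's code.
public class SplitSketch {

    // A split covers the byte range [start, start + length) of the file.
    public static final class Split {
        public final long start;
        public final long length;
        Split(long start, long length) {
            this.start = start;
            this.length = length;
        }
    }

    // Cut the file into splits of at most blockSize bytes; each split can
    // then be assigned to a different parallel reader task.
    public static List<Split> createSplits(long fileSize, long blockSize) {
        List<Split> splits = new ArrayList<>();
        for (long start = 0; start < fileSize; start += blockSize) {
            splits.add(new Split(start, Math.min(blockSize, fileSize - start)));
        }
        return splits;
    }
}
```

A record that crosses a split boundary is typically handled by letting the reader of the first split finish that record, while the next reader skips ahead to the first record start after its boundary.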
vailable on Table but not on SQL API
> and I was wondering if that is a must obey rule during development.
>
> --
> Rong
>
> On Wed, Feb 14, 2018 at 2:32 AM, Fabian Hueske <fhue...@gmail.com> wrote:
>
> > Hi Rong,
> >
> > Thanks for taking the initiativ
Hi,
you wrote to the Apache Flink development mailing list.
I think your question should go to the Apache Beam user mailing list:
u...@beam.apache.org
Best, Fabian
2018-02-22 14:35 GMT+01:00 shankara :
> I am new to Apache Beam and Spring Cloud Data Flow. I am trying to
Hi Zhentaowang,
no, you have not become a Flink committer.
Flink committers are nominated by the Flink PMC (same applies to any Apache
project).
Before a person is proposed as a committer by a Flink PMC member and
accepted by the PMC, the person should have made valuable
Hi Timo,
thanks for putting this FLIP together. I think this will be a great feature
for Flink.
I think it makes sense to show the long term goals of the SQL client/server
architecture, but the current description makes it a bit difficult to
figure out what will be part of this FLIP and what
Hi,
I've merged the proposed changes for the website.
As usual, we can incrementally refine and improve it.
Best,
Fabian
2018-06-15 16:16 GMT+02:00 Fabian Hueske :
> Hi,
>
> I'm planning to put the reworked website online next week.
> You can have a look at PR #109 [1] to check
Hi Minglei,
1. Not sure if you are asking about a specific problem, but IMO the main
challenge is that there are many different ways (and meanings) to join two
streams. The required semantics always depend on the concrete use case. If
you want to perform a simple equality join with SQL semantics,
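A simple equality join of two streams can be sketched as a symmetric hash join. The following is a toy standalone sketch, not Flink's implementation; it ignores time, retractions, and state size, which is exactly where the different join semantics come in:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative symmetric hash join: each side buffers its elements by key
// and probes the other side's buffer on arrival. Without a time bound,
// both buffers grow forever -- which is why the required semantics
// (windowed, interval, plain SQL) matter so much.
public class StreamJoinSketch<K, L, R> {
    private final Map<K, List<L>> leftByKey = new HashMap<>();
    private final Map<K, List<R>> rightByKey = new HashMap<>();

    public List<String> onLeft(K key, L value) {
        leftByKey.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        List<String> out = new ArrayList<>();
        for (R r : rightByKey.getOrDefault(key, Collections.emptyList())) {
            out.add(value + "|" + r);
        }
        return out;
    }

    public List<String> onRight(K key, R value) {
        rightByKey.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        List<String> out = new ArrayList<>();
        for (L l : leftByKey.getOrDefault(key, Collections.emptyList())) {
            out.add(l + "|" + value);
        }
        return out;
    }
}
```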
IRA as well.
> >
> > https://issues.apache.org/jira/browse/FLINK-4582
> >
> >
> > On Mon, Jul 30, 2018 at 1:25 AM Fabian Hueske wrote:
> >
> > > Hi Ying,
> > >
> > > Thanks for considering to contribute the connector!
> > >
> >
I've filed this under FLINK-10007 [1].
Cheers, Fabian
[1] https://issues.apache.org/jira/browse/FLINK-10007
2018-08-02 11:10 GMT+02:00 Ufuk Celebi :
> We fixed this for the Flink docs a while back in
> https://github.com/apache/flink/pull/5395, but didn't think of the
> flink-web repo which
Thanks Chesnay!
2018-07-31 10:59 GMT+02:00 vino yang :
> Thanks for releasing Flink 1.5.2, Chesnay!
>
> Thanks.
> Vino.
>
> 2018-07-31 16:49 GMT+08:00 Till Rohrmann :
>
> > Thanks Chesnay for being our release manager and thanks to the community
> > for all the work!
> >
> > Cheers,
> > Till
> >
icular, I am wondering
> >> what's
> >> > the *"resharding
> >> > > behavior"* mentioned in FLINK-4582.
> >> > >
> >> > > Thanks a lot!
> >> > >
> >> > > -
> >> > > Ying
> >> &
Hi everyone,
I'd like to discuss a proposal to improve the tutorials / quickstart guides
of Flink's documentation.
I think the current tutorials have a few issues that should be fixed in
order to help our (future) users get started with Flink.
I propose to add a single "Tutorials" section to
Fabian Hueske :
> Hi,
>
> Thanks for the feedback.
> I've had a look at the structure that was derived from my initial message
> and adjusted it to my current plans (see below).
>
> The main ideas are:
> * add a new Tutorials section and move all existing tutorials there
& Operations
> > > - ...
> > > - Debugging & Monitoring
> > > - ...
> > >
> > > - Internals
> > > - ...
> > > ```
> > >
> > > Aljoscha Krettek wrote on Thu, Aug 9, 2018 at 11:29 PM:
> > >
> > > > +1
Thanks for starting the discussion Florian.
I'm also in favor of both A options.
Option A for the outer joins is also closest to the join syntax of the
DataSet API.
Thanks,
Fabian
2018-08-13 20:50 GMT+02:00 Elias Levy :
> As a developer, while not quite as succinct, I feel that option A in
Hi Dominik,
The SQL Client supports the same subset of SQL that you get with Java /
Scala embedded queries.
The documentation [1] covers all supported operations.
There are some limitations because certain operators require special time
attributes (row time or processing time attributes) which
Hi,
Thanks for starting this discussion, Hequn!
These are tricky questions.
1) Handling empty deletes in UpsertSource.
I think forwarding empty deletes would only have a semantic difference if
the output is persisted in a non-empty external table, e.g., a Cassandra
table with entries.
If we
, the result is still undefined (the join case that you mentioned),
> since Flink could skip or ingest any number of extra messages (deletes or
> not).*
>
> I think the result is clear if we clearly define that the upsert source
> ignore empty deletes. Flink only skip or ingest messa
Hi Renjie,
Did you intend to send this mail to dev@arrow.a.o instead of dev@flink.a.o?
Best, Fabian
2018-08-20 4:39 GMT+02:00 Renjie Liu :
> cc:Sunchao and Any
>
> -- Forwarded message -
> From: Uwe L. Korn
> Date: Sun, Aug 19, 2018 at 5:08 PM
> Subject: Re: [DISCUSS] Rust add
I agree to remove the slides section.
A lot of the content is outdated and hence not only useless but might
sometimes even cause confusion.
Best,
Fabian
On Mon, Aug 27, 2018 at 08:29, Renjie Liu <liurenjie2...@gmail.com> wrote:
> Hi, Stephan:
> Can we put project wiki in some place? I
Hi Kevin,
Welcome to the Flink community!
The documentation is located in a folder in the regular code repository [1]
and written in Markdown format.
You can contribute to the documentation by opening pull requests against
the repository.
The contribution guidelines give a bit more info on the
Hi everyone,
I'd like to announce the program for Flink Forward Berlin 2018.
The program committee [1] assembled a program of about 50 talks on use
cases, operations, ecosystem, tech deep dive, and research topics.
The conference will host speakers from Airbnb, Amazon, Google, ING, Lyft,
ertSink later.
>
> Thanks again for all the suggestions. It really helps me a lot.
> Best, Hequn.
>
>
> On Tue, Aug 28, 2018 at 9:47 PM Fabian Hueske wrote:
>
> > Hi Hequn, hi Piotr,
> >
> > Thanks for pushing this discussion forward and sorry for not respondi
Congratulations Gary!
2018-09-07 16:29 GMT+02:00 Thomas Weise :
> Congrats, Gary!
>
> On Fri, Sep 7, 2018 at 4:17 PM Dawid Wysakowicz
> wrote:
>
> > Congratulations Gary! Well deserved!
> >
> > On 07/09/18 16:00, zhangmingleihe wrote:
> > > Congrats Gary!
> > >
> > > Cheers
> > > Minglei
> > >
Hi Amol,
The memory consumption depends on the query/operation that you are doing.
Time-based operations like group-window-aggregations,
over-window-aggregations, or window-joins can automatically clean up their
state once data is no longer needed.
Operations such as non-windowed aggregations
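The difference can be illustrated with a toy sketch (plain Java, not Flink code): a tumbling-window count can drop each window's state once the watermark passes its end, while a non-windowed count keeps one entry per key forever.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration (not Flink code) of why time-based operations can
// clean up state automatically while non-windowed aggregations cannot.
public class StateCleanupSketch {
    private final long windowSize;
    // windowStart -> count; droppable once the watermark passes window end
    public final Map<Long, Long> windowCounts = new HashMap<>();
    // key -> count; kept forever, since any key may still show up again
    public final Map<String, Long> globalCounts = new HashMap<>();

    public StateCleanupSketch(long windowSize) {
        this.windowSize = windowSize;
    }

    public void onEvent(String key, long timestamp) {
        long windowStart = timestamp - (timestamp % windowSize);
        windowCounts.merge(windowStart, 1L, Long::sum);
        globalCounts.merge(key, 1L, Long::sum);
    }

    public void onWatermark(long watermark) {
        // A window that ended at or before the watermark receives no more
        // data; its result can be emitted and its state discarded.
        windowCounts.entrySet()
                .removeIf(e -> e.getKey() + windowSize <= watermark);
    }
}
```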
ording to above conversation flink will persist state forever for non
> windowed operations. I want to know how Flink persists the state, i.e.
> Database or file system or in memory etc.
>
> On Wed, 4 Jul 2018 at 2:12 PM, Fabian Hueske wrote:
>
>> Hi Amol,
>>
>> The m
I agree, option (2) would be the easiest approach for the users.
2018-03-01 0:00 GMT+01:00 Rong Rong :
> Hi Timo,
>
> Thanks for initiating the SQL client effort. I agree with Xingcan's
> points, adding to it (1) most of the user for SQL client would very likely
> to
Hi Deepak,
You can open JIRAs for bugs you discovered or minor improvements.
Larger features should be discussed on the dev mailing list first.
I'd suggest you start contributing by fixing a bug.
Best, Fabian
2018-03-13 3:39 GMT+01:00 Deepak Sharma :
> Hi Flink team!
>
>
Thanks for managing this release Gordon!
Cheers, Fabian
2018-03-15 21:24 GMT+01:00 Tzu-Li (Gordon) Tai :
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.3.3, which is the third bugfix release for the Apache Flink 1.3
> series.
>
>
I don't think it is a good idea to expose the internal state of a query as
queryable state for the following reasons:
1. plan generation: the streaming programs and operators are created based
on the optimized plan. A user cannot know which operators a query will run
on. Naming the queryable
+1
2018-03-08 10:20 GMT-08:00 Ted Yu :
> +1 to Aljoscha's proposal.
>
+1
- checked hashes and signatures
- built from src.tgz with unit tests (mvn clean package)
- checked commits for dependency changes and didn't find any
Thanks, Fabian
2018-03-15 14:06 GMT+01:00 Chesnay Schepler :
> Given that the commit only affects a test I'd be in favor
),
> > and included all HCatalog related JIRAs as subtasks. This makes it easier
> to
> > track all HCatalog related effort in Flink. Thanks.
> >
> > Shuyi
> >
> > On Fri, Apr 13, 2018 at 12:36 PM, Fabian Hueske <fhue...@gmail.com>
> wrote:
> >
>
Hi everybody,
An HCatalog integration with the Table API/SQL would be great and be
helpful for many users!
A big +1 to that.
Thank you,
Fabian
Shuyi Chen wrote on Wed, Apr 11, 2018, 14:36:
> Thanks a lot, Rong and Peter.
>
> AFAIK, there is a flink hcatalog connector
Hi,
Ken's approach of having a joint data type and unioning the streams is
good. This will work seamlessly with checkpoints. Timo (in CC) used the
same approach to implement a prototype of a multi-way join.
A Tuple won't work though because the Tuple serializer does not support
null fields. You
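A joint type for the union approach could look roughly like this. This is a hypothetical sketch: a POJO with nullable fields works here, whereas a Tuple would fail to serialize the null field. (If I remember correctly, Flink also ships its own Either type for this purpose.)

```java
// Hypothetical joint element type for unioning two streams of different
// types. Exactly one field is non-null; a POJO tolerates the null field,
// whereas Flink's Tuple serializer would reject it.
public class EitherOf<A, B> {
    public A left;   // set if the element came from the first stream
    public B right;  // set if it came from the second stream

    public static <A, B> EitherOf<A, B> left(A value) {
        EitherOf<A, B> e = new EitherOf<>();
        e.left = value;
        return e;
    }

    public static <A, B> EitherOf<A, B> right(B value) {
        EitherOf<A, B> e = new EitherOf<>();
        e.right = value;
        return e;
    }

    public boolean isLeft() {
        return left != null;
    }
}
```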
Hi Amit,
The network stack has been redesigned for the upcoming Flink 1.5 release.
The issue might have been fixed by that.
There's already a first release candidate for Flink 1.5.0 available [1].
It would be great if you could check whether the bug is still present.
Best, Fabian
Should we raise FLINK-8533 as a blocker for 1.5.0?
@Eron Can you post this in the "[DISCUSS] Releasing Flink 1.5.0" thread to
make sure it doesn't get lost?
Thanks,
Fabian
2018-04-18 13:05 GMT+02:00 Stephan Ewen :
> I see that this is an important issue.
>
> Will try to
Hi Sijie, hi Pulsar community!
Thanks for the detailed overview of Pulsar.
I like the idea of adding a Pulsar connector to Flink.
As Gordon mentioned, the Flink community would like to ensure that the
connector is maintained after being added.
We experienced that connector maintenance, including
A user reported a regression from 1.4.2.
A batch job with a DeltaIteration gets stuck when being executed in a
LocalEnvironment [1].
Cheers, Fabian
[1] https://issues.apache.org/jira/browse/FLINK-9242
2018-04-23 21:09 GMT+02:00 Shuyi Chen :
> Hi Aljoscha and Till,
>
> I've
cumented
> here
> <https://docs.google.com/document/d/1PPwHMDHGWMOO8YGv08BNi8Fqe_h7SjeALRzmW-ZxSfY/edit?usp=sharing>,
> and have run it by a few Flink PMC/committers.
>
> What do you think? We'd love to hear feedbacks from you
>
> Thanks,
> Bowen
>
>
>
Hi,
Liu and Hequn are right.
You need to pass at least one parameter into the table function, i.e.,
select t.col_1 from test t left join lateral
table(dim_test(SOME_ATTRIBUTE)) b on true
Best, Fabian
2018-02-24 13:24 GMT+01:00 ZhenBao Ye :
> Hi, I was using 1.4.0.
>
>
> > > Thanks,
> > > Thomas
> > >
> > >
> > >
> > > On Tue, Sep 25, 2018 at 6:17 AM Tzu-Li Chen
> > wrote:
> > >
> > >> Hi Fabian,
> > >>
> > >> You convinced me. I miss the advantage we c
Hi everybody,
I'm currently creating a slide set for a Flink intro talk [1].
The content will be mostly based on pages of the recently updated website
* Main page [2]
* What is Apache Flink? [3]
* Use cases [4]
* Powered By [5]
The idea is to have a good set of slides that anybody can use to
Hi Xuefu,
I gave your Jira user (xuefuz, hopefully the right one) Contributor
permissions for Flink's Jira.
You can now assign issues to yourself.
Best, Fabian
On Fri, Oct 12, 2018 at 01:18, Zhang, Xuefu <xuef...@alibaba-inc.com> wrote:
> Hi there,
>
> Could anyone kindly add me as a contributor to
Hi Paul,
If I got your proposal right, you'd like to fire a Trigger right before a
checkpoint is taken, correct?
So, before taking a checkpoint, a Trigger would fire and the operator would
process and emit some intermediate results.
This approach would not completely solve the consistency issue
g a lower isolation level.
>
> Thanks a lot!
>
> Best,
> Paul Lam
>
> > On Oct 15, 2018, at 15:47, Fabian Hueske wrote:
> >
> > Hi Paul,
> >
> > If I got your proposal right, you'd like to fire a Trigger right before a
> > checkpoint is taken, co
h the current API?
>
> Thank you very much!
>
> Best,
> Paul Lam
>
>
> On Oct 15, 2018, at 19:45, Fabian Hueske wrote:
>
> Hi Paul,
>
> I think this would be very tricky to implement and interfere with many
> parts of the system like state backends, checkpo
repository
+ contributors can learn about the review process before opening a PR
On the downside, the template grows a bit at the end.
What do you think?
Best, Fabian
On Mon, Sep 24, 2018 at 15:51, Fabian Hueske wrote:
> Hi,
>
> Coming back to the original topic of the th
s.
>
> Best, Hequn
>
> On Sat, Oct 13, 2018 at 10:12 AM jincheng sun
> wrote:
>
>> @Fabian Hueske Thanks for creating the slide.
>> I think it is very important for the construction of the flink ecosystem.
>> The content of the slide outline is very comprehensive. I think
Hi,
Thanks Chesnay for preparing the RC1 for Flink 1.6.2.
I checked a few things, but there seem to be some issues with the release
candidate.
* no dependencies added or changed since Flink 1.6.1
* building the source distribution without tests succeeds, however, a
second build fails due to
Hi,
I checked the following things:
* no dependencies added or changed since Flink 1.5.4
* compiling the source distribution without tests succeeds
* compiling the source distribution with tests fails (see exception
appended below). When I restart the compilation, it goes past flink-hbase
but
> header?
>
> On 18.10.2018 23:16, Fabian Hueske wrote:
> > Hi,
> >
> > Thanks Chesnay for preparing the RC1 for Flink 1.6.2.
> >
> > I checked a few things, but there seem to be some issues with the release
> > candidate.
> >
> > * no depende
2 years.
>
> Given that no listed issue is new in this release I would not cancel the
> RC.
>
> On 18.10.2018 23:16, Fabian Hueske wrote:
> > Hi,
> >
> > Thanks Chesnay for preparing the RC1 for Flink 1.6.2.
> >
> > I checked a few things, but there se
le tests aren't run, and hence
> the bridge never being loaded.
>
> The ES5 issue is an OS-incompatibility caused by ES. This issue should
> not be new since we didn't modify the ES5 version in 2 years.
>
> Given that neither issue is new in this release I would not cancel the
Yes, exactly
On Fri, Oct 19, 2018 at 13:49, Chesnay Schepler <ches...@apache.org> wrote:
> When you went past hbase, which was the other module you failed? I would
> guess you also failed in the ES5 module here like for 1.6.2?
>
> On 18.10.2018 23:55, Fabian Hu
Hi,
I merged the PR.
The review process is documented at [1].
Best, Fabian
[1] https://flink.apache.org/reviewing-prs.html
On Wed, Oct 10, 2018 at 17:48, Fabian Hueske wrote:
> Hi all,
>
> I opened a PR [1] to add the PR review guide to the Flink website.
>
> Cheers
/6873
On Thu, Oct 18, 2018 at 07:34, jincheng sun <sunjincheng...@gmail.com> wrote:
> I like @Fabian Hueske's proposal; designing the template now is a pretty
> good idea. Because the template is convenient for
> contributors to follow the norms to community co
Hi Dom,
I think support for Kafka keys would be covered by Timo's proposal for
improvements of the source / sink connectors [1].
See the section on "Concat multiple formats for accessing
connector-specific properties" in the proposal document [2].
Best, Fabian
[1]
Hi Boris,
Thanks for sharing the code that you'd like to contribute for FLIP-23.
I had a quick look at the repository and collected some stats to estimate
the reviewing effort for the contribution.
There are approx 1900 lines of Java and 2000 lines of Scala code.
This is a reasonable size that
oke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
On Fri, Oct 19, 2018 at 12:17, Fabian Hueske wrote:
> Thanks Chesnay,
> I'll check again on the error that failed the build.
>
>
>
>
>
> On Fri, Oct 19, 2018 at 12:13, Chesnay Scheple
the release on it.
>
> On 19.10.2018 10:44, Chesnay Schepler wrote:
> > I've opened https://issues.apache.org/jira/browse/FLINK-10608 for the
> > RAT issue.
> >
> > On 19.10.2018 10:38, Fabian Hueske wrote:
> >> It's these two:
> >>
> >>
Yes, that is my understanding as well.
Manual time management would be another difference.
Something still to be discussed would be whether (or to what extent) it
would be possible to define the physical execution plan with hints or
methods like partitionByHash and sortPartition.
Best, Fabian
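Operationally, partitionByHash boils down to routing each record by the hash of its key. A minimal sketch of that routing rule (illustrative only, not Flink's internals):

```java
// Illustrative only: the routing rule behind hash partitioning. Each
// record goes to the parallel channel selected by the hash of its key,
// so equal keys always end up in the same partition.
public class HashPartitionSketch {
    public static int selectPartition(Object key, int parallelism) {
        // Mask the sign bit so negative hash codes still map to a valid index.
        return (key.hashCode() & Integer.MAX_VALUE) % parallelism;
    }
}
```

sortPartition is then a purely local operation: each partition sorts only the records routed to it, without any global order across partitions.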
Hi Joey,
First of all, sorry for the bad contribution experience so far.
I don't know the BlobManager well enough, otherwise I'd have had a look at
your PR.
The Flink community (and committers especially) are currently overwhelmed
with the number of PRs.
We receive all kinds of contributions from
Hi,
you can unsubscribe and resubscribe to the mailing list as described here
[1].
Best, Fabian
[1] https://flink.apache.org/community.html#mailing-lists
On Wed, Nov 7, 2018 at 12:29, florent tragni <florent.tra...@precodata.com> wrote:
> Could you replace my email address by
Hi,
* Re emit:
I think we should start with a well understood semantics of full
replacement. This is how the other agg functions work.
As was said before, there are open questions regarding an append mode
(checkpointing, whether supporting retractions or not and if yes how to
declare them, ...).
> >
> https://mail.google.com/mail/u/0/#search/xiaowei/FMfcgxvzLWzfvCnmvMzzSfxHTSfdwLkB
> > > <
> > >
> >
> https://mail.google.com/mail/u/0/#search/xiaowei/FMfcgxvzLWzfvCnmvMzzSfxHTSfdwLkB
> > > >
> > > .
> > >
> > > The Ta
on. I have a draft design doc that I'd like to convert it to a
> FLIP. Thus, it would be great if anyone who can grant me the write access
> to Confluence. My Confluence ID is xuefu.
>
> @Timo Waltherand @Fabian Hueske, it would be nice if any of you can help
> on this.
>
> Thanks,
> Xuefu
>
>
t;>> https://docs.google.com/document/d/1SkppRD_rE3uOKSN-LuZCqn4f7dz0zW5aa6T_hBZq5_o/edit?usp=sharing
>>> Once we reach an agreement, I can convert it to a FLIP.
>>>
>>> Thanks,
>>> Xuefu
>>>
>>>
>>>
>>> --
>>>
Hi,
Thanks for the great design document!
It answers my question regarding handling of retraction messages.
Overall, I like the proposal. It is well scoped and the proposed changes
are well described.
I left a question regarding the handling of time attributes for
multi-column output functions.
So
> I think we should in-depth discussion in a new threading.
>
> BTW, I find that you have had leave the very useful comments in the google
> doc:
>
> https://docs.google.com/document/d/1tnpxg31EQz2-MEzSotwFzqatsB4rNLz0I-l_vPa5H4Q/edit#
>
> Thanks again for both your
is quite involved.
>
> I totally agree that we have to discuss in depth the changes in the API and
> let our community APIs continue to develop in the right direction. Thanks
> again for the reply, and looking forward to your feedback!:)
>
> Best,
> Jincheng
>
> Fabian Hu
.as('w, 'k1, 'k2, 'col1, 'col2)
> .select('k1, 'col1, 'w.rowtime as 'rtime)
>
> What do you think? @Fabian @Xiaowei
>
> Thanks,
> Jincheng
>
> Fabian Hueske wrote on Fri, Nov 9, 2018 at 6:35 PM:
>
> > Hi Jincheng,
> >
> > Thanks for the summary!
> > I like the
t; implied in the result table and appears at the beginning. You can use a
> > select method if you want to modify this behavior. I think that
> eventually
> > we will have some API which allows other expressions as additional
> > parameters, but I think it's better to do that af
Hi Rafi,
Welcome to the Flink community!
I gave you contributor permissions for JIRA.
Best, Fabian
On Mon, Oct 8, 2018 at 21:18, Rafi Aroch wrote:
> Hi,
>
> I would like to assign an issue to myself. Could someone assign contributor
> permissions to my user?
> My username in JIRA is:
Yes, let's do it this way.
The wrapper classes are probably not too complex and can be easily tested.
We have the same for the Hadoop interfaces, although I think only the
Input- and OutputFormatWrappers are actually used.
On Tue, Oct 9, 2018 at 09:46, Chesnay Schepler <
Hi,
I think watermark / event-time skew is a problem that many users are
struggling with.
A built-in primitive to align event-time would be a great feature!
However, there are also some cases when it would be useful for different
streams to have diverging event-time, such as an interval join [1]
Hi Xuefu,
Welcome to the Flink community and thanks for starting this discussion!
Better Hive integration would be really great!
Can you go into details of what you are proposing? I can think of a couple
ways to improve Flink in that regard:
* Support for Hive UDFs
* Support for Hive metadata
+1
On Tue, Oct 2, 2018 at 14:50, Till Rohrmann <trohrm...@apache.org> wrote:
> Great idea Chesnay. +1 for running Hadoop 2.4 in a cron job. This will help
> us to cut down our Travis time by almost 2.
>
> Cheers,
> Till
>
> On Tue, Oct 2, 2018 at 1:49 PM Chesnay Schepler
> wrote:
>
> >
+1 to drop it.
Thanks, Fabian
On Sat, Sep 29, 2018 at 12:05, Niels Basjes wrote:
> I would drop it.
>
> Niels Basjes
>
> On Sat, 29 Sep 2018, 10:38 Kostas Kloudas,
> wrote:
>
> > +1 to drop it as nobody seems to be willing to maintain it and it also
> > stands in the way for future
. Feb. 2018 um 13:11 Uhr schrieb Stavros Kontopoulos <
st.kontopou...@gmail.com>:
> Thanx @Fabian. I will update the document accordingly wrt metrics.
> I agree there are pros and cons.
>
> Best,
> Stavros
>
>
> On Wed, Jan 31, 2018 at 1:07 AM, Fabian Hueske wrote:
>
I think the new source interface would be designed to be able to leverage
shared state to achieve time alignment.
I don't think this would be possible without some kind of shared state.
The problem of tasks that are far ahead in time cannot be solved with
back-pressure.
That's because a task
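The idea can be sketched as a toy model (not a proposed API): tasks report their local event time to a shared structure, and a task pauses consuming when it runs too far ahead of the global minimum.

```java
// Toy model (not a proposed API) of event-time alignment through shared
// state: each source task publishes its local event time, and a task
// should pause reading once it is more than maxSkew ahead of the slowest.
public class TimeAlignmentSketch {
    private final long[] localTimes;
    private final long maxSkew;

    public TimeAlignmentSketch(int numTasks, long maxSkew) {
        this.localTimes = new long[numTasks];
        this.maxSkew = maxSkew;
    }

    public void reportTime(int task, long eventTime) {
        localTimes[task] = eventTime;
    }

    public boolean shouldPause(int task) {
        long min = Long.MAX_VALUE;
        for (long t : localTimes) {
            min = Math.min(min, t);
        }
        // Back-pressure cannot produce this behavior: the fast task's
        // buffers are not full, it simply has to wait for the others.
        return localTimes[task] > min + maxSkew;
    }
}
```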
>>> Peter Huang wrote on Tue, Oct 9, 2018 at 1:54 PM:
> >>>
> >>>> +1
> >>>>
> >>>> On Mon, Oct 8, 2018 at 7:47 PM Thomas Weise wrote:
> >>>>
> >>>>> +1
> >>>>>
> >>>>>
> >
Thanks for the proposal Timo!
I've done a pass and added some comments (mostly asking for clarification,
details).
Overall, this is going into a very good direction.
I think the tables which are stored in different systems and using a format
definition to define other formats require some more
One challenge would be duplicate keys in this context.
On Thu, Oct 4, 2018 at 10:17, Till Rohrmann <trohrm...@apache.org> wrote:
> Hi Daniel,
>
> I don't think that there is a fundamental problem of having MapState
> available for operator state. First, there are some questions to be
>
Hi,
I have a branch in my Github repository to test the TPC-H queries [1] [2].
All queries are supported (four need to be slightly rewritten).
When checking the results of the benchmark, please keep in mind that so far
we focused our efforts on extending the functionality and unified semantics
+1 binding
* I checked the diffs and did not find any added dependencies or updated
dependency versions.
* I checked the sha hash and signatures of all release artifacts.
Best, Fabian
2018-09-15 23:26 GMT+02:00 Till Rohrmann :
> Hi everyone,
> Please review and vote on the release candidate #1
+1 binding
* I checked the diffs and did not find any added dependencies or updated
dependency versions.
* I checked the sha hash and signatures of all release artifacts.
Best, Fabian
2018-09-19 11:43 GMT+02:00 Gary Yao :
> +1 (non-binding)
>
> Ran test suite from the flink-jepsen project on
Hi,
Coming back to the original topic of the thread: How to implement the
guided review process.
I am in favor of starting with a low-tech solution.
We design a review template with a checkbox for the five questions (see
[1]) and a link to the detailed description of the review process ([1] will
Thanks for separating the threads Stephan!
(1) Do we agree on the five basic steps below?
+1 to the five steps and making the third question in the proposal the
first.
(2) How do we understand that consensus is reached about adding the
feature?
+1 to lazy consensus with one committer's +1
(3)
Hi,
I think questions about Flink should be posted on the public mailing lists
instead of asking just a single expert.
There's many reasons for that:
* usually more than one person can answer the question (what if the expert
is not available?)
* non-committers can join the discussion and
Hi,
I'd like to suggest that we keep this thread focused on discussing
Stephan's proposal, i.e., introducing a structured PR review process.
Tison and Piotr raised some good points related to PR reviews that are
definitely worth discussing, but I think we should do that on different
threads and
Hi Chesnay,
Thank you for the proposal.
I think this is a good idea.
We follow a similar approach already for Hadoop dependencies and connectors
(although in application space).
+1
Fabian
On Fri, Jan 18, 2019 at 10:59, Chesnay Schepler <ches...@apache.org> wrote:
> Hello,
>
> the
Hi,
Welcome to the Flink community!
I gave you contributor permissions and assigned FLINK-11311 to you.
Best, Fabian
On Sun, Jan 13, 2019 at 16:02, Benchao Li wrote:
> Hi, everyone
>
> I would like to make contribution to JIRA(FLINK-11311). Would anyone
> kindly give me the contribution
Hi ildglh,
Welcome as well!
Done.
Best, Fabian
On Mon, Jan 21, 2019 at 11:54, ildglh wrote:
> Hi, guys
>
> Could anyone kindly give me the contributor permission?
> My JIRA id is ildglh.
>
> Best regards,
> ildglh
>
>
>
> --
> Sent from:
Hi Kezhu Wang,
Welcome to the Flink community.
I gave you contributor permissions.
Best, Fabian
On Sun, Jan 20, 2019 at 21:00, Kezhu Wang wrote:
> Hi guys:
>
> Could someone give me contributor permission?
>
> My JIRA username is kezhuw
>
> Thanks,
> Kezhu Wang
>
Hi Hongtao,
Welcome to the Flink community!
I gave you contributor permissions for Jira.
Best, Fabian
On Tue, Jan 22, 2019 at 07:04, 张洪涛 wrote:
> Hi Guys,
>
> Could anyone give me the contributor permission ?
>
> My Jira ID is hongtao12310
>
> Regards,
> Hongtao
>
Hi Liya Fan,
Welcome to the Flink community!
I gave you contributor permissions for Jira.
Best, Fabian
On Wed, Jan 23, 2019 at 08:01, Run wrote:
> Hi Guys,
>
>
> I want to contribute to Apache Flink.
> Would you please give me the permission as a contributor?
> My JIRA ID is fan_li_ya.
Hi Elias,
Q1: Can you post the exception that you receive?
Q2: This is already possible today by converting a Table into a DataSet and
registering that DataSet again as a Table.
Under the hood, the following is happening: As you said, Tables are views
(or logical plans). Whenever a Table is