Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Tzu-Li (Gordon) Tai
Vote thread for RC3 has been started:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Apache-Flink-1-9-0-release-candidate-3-td31988.html

On Mon, Aug 19, 2019 at 6:32 PM Tzu-Li (Gordon) Tai wrote:

> Thanks for the comments and fast fixes.
>
> @Becket Qin  I've quickly looked at the changes to
> the PubSub connector. Given that it is an API-breaking change but quite
> local, being a configuration change, I've decided to include it in RC3.
> @Jark @Timo Walther  I'll be adding FLINK-13699 as
> well.
>
> Quick update regarding the LICENSE issue with flink-runtime-web: I've
> double-checked this, and the licenses for the newly bundled JavaScript
> dependencies are already correctly present under the root licenses-binary/
> directory, so we don't need additional changes for this.
>
> I've started to create RC3 now, will post the vote as soon as it is ready.
>
> Cheers,
> Gordon

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Tzu-Li (Gordon) Tai
Thanks for the comments and fast fixes.

@Becket Qin  I've quickly looked at the changes to
the PubSub connector. Given that it is an API-breaking change but quite
local, being a configuration change, I've decided to include it in RC3.
@Jark @Timo Walther  I'll be adding FLINK-13699 as well.

Quick update regarding the LICENSE issue with flink-runtime-web: I've
double-checked this, and the licenses for the newly bundled JavaScript
dependencies are already correctly present under the root licenses-binary/
directory, so we don't need additional changes for this.

I've started to create RC3 now, will post the vote as soon as it is ready.

Cheers,
Gordon


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Stephan Ewen
Looking at FLINK-13699, it seems to be very local to Table API and HBase
connector.
We can cherry-pick that without re-running distributed tests.



Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Till Rohrmann
I've merged the fix for FLINK-13752. Hence we are good to go to create the
new RC.

Cheers,
Till


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Timo Walther
I support Jark's fix for FLINK-13699 because it would be disappointing 
if both DDL and connectors are ready to handle DATE/TIME/TIMESTAMP but a 
little component in the middle of the stack is preventing an otherwise 
usable feature. The changes are minor.


Thanks,
Timo


On Aug 19, 2019, at 13:24, Jark Wu wrote:

Hi Gordon,

I agree that we should pick the minimal set of changes to shorten the
release testing time.
However, I would like to include FLINK-13699 in RC3. FLINK-13699 is a
critical DDL issue, and it is a small change to flink-table (it won't
affect runtime features or stability).
I will do some tests around SQL and the Blink planner if RC3 includes this
fix.

But if the community is against including it, I'm also fine with having it
in the next minor release.

Thanks,
Jark

On Mon, 19 Aug 2019 at 16:16, Stephan Ewen  wrote:


+1 for Gordon's approach.

If we do that, we can probably skip re-testing everything and mainly need
to verify the release artifacts (signatures, build from source, etc.).

If we open the RC up for changes, I fear a lot of small issues will rush in
and destabilize the candidate again, meaning we have to do another larger
testing effort.



On Mon, Aug 19, 2019 at 9:48 AM Becket Qin  wrote:


Hi Gordon,

I remember we mentioned earlier that if there is an additional RC, we can
piggyback the GCP PubSub API change
(https://issues.apache.org/jira/browse/FLINK-13231). It is a small patch to
avoid a future API change, so we should be able to merge it very shortly.
Would it be possible to include that into RC3 as well?

Thanks,

Jiangjie (Becket) Qin

On Mon, Aug 19, 2019 at 9:43 AM Tzu-Li (Gordon) Tai wrote:

Hi,

https://issues.apache.org/jira/browse/FLINK-13752 turns out to be an
actual blocker, so we would have to close this RC now in favor of a new one.

Since we are already quite past the planned release time for 1.9.0, I would
like to limit the new changes included in RC3 to only the following:
- https://issues.apache.org/jira/browse/FLINK-13752
- Fix license and notice file issues that Kurt had found with
flink-runtime-web and flink-state-processing-api

This means that I will not be creating RC3 with the release-1.9 branch as
is, but essentially only cherry-picking the above-mentioned changes on top
of RC2.
The minimal set of changes on top of RC2 should allow us to carry most if
not all of the already existing votes without another round of extensive
testing, and allow us to have a shortened voting time.

I understand that there are other issues mentioned in this thread that are
already spotted and merged to release-1.9, especially for the Blink planner
and DDL, but I suggest not to include them in RC3.
I think it would be better to collect all the remaining issues for those
over a period of time and include them in 1.9.1, which can ideally happen a
few weeks after 1.9.0.

What do you think? If there are no objections, I would proceed with this
plan and push out a new RC by the end of today (Aug. 19th CET).

Regards,
Gordon

On Mon, Aug 19, 2019 at 4:09 AM Zili Chen wrote:

We should investigate the performance regression, but regardless of the
regression I vote +1.

I have verified the following:

- Jobs running on YARN (Session & Per-Job modes) with high availability
enabled.
- Simulated JM and TM failures.
- Simulated a temporary network partition.

Best,
tison.


Stephan Ewen wrote on Sun, Aug 18, 2019 at 10:12 PM:

For reference, this is the JIRA issue about the regression in question:
https://issues.apache.org/jira/browse/FLINK-13752


On Fri, Aug 16, 2019 at 10:57 AM Guowei Ma wrote:

Hi Till,
I can send the job to you offline.
It is just a DataStream job and does not use TwoInputSelectableStreamTask.

A->B
  \
C
  /
D->E
Best,
Guowei


Till Rohrmann wrote on Fri, Aug 16, 2019 at 4:34 PM:

Thanks for reporting this issue, Guowei. Could you share a bit more detail
about what the job exactly does and which operators it uses? Does the job
use the new `TwoInputSelectableStreamTask`, which might cause the
performance regression?

I think it is important to understand where the problem comes from before
we proceed with the release.

Cheers,
Till

On Fri, Aug 16, 2019 at 10:27 AM Guowei Ma wrote:

Hi,
-1
We have a benchmark job, which includes a two-input operator.
This job has a big performance regression using 1.9 compared to 1.8.
It's still not very clear why this regression happens.

Best,
Guowei


Yu Li wrote on Fri, Aug 16, 2019 at 3:27 PM:

+1 (non-binding)

- checked release notes: OK
- checked sums and signatures: OK
- source release
  - contains no binaries: OK
  - contains no 1.9-SNAPSHOT references: OK
  - build from source: OK (8u102)
  - mvn clean verify: OK (8u102)
- binary release
  - no examples appear to be missing
  - started a cluster; WebUI reachable, example ran successfully
- repository appears to contain all expected artifacts

Best Regards,
Yu
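[Editor's illustration] The "checked sums and signatures" step above amounts to verifying the published checksum (and the `.asc` signature via `gpg --verify`, which additionally needs the committers' public keys) against the downloaded artifact. A self-contained sketch of the checksum half, using a made-up artifact name rather than a real release tarball:

```shell
# Hypothetical stand-in for a release artifact; shows the round trip of the
# checksum the release manager publishes and the voter verifies.
set -e
cd "$(mktemp -d)"
echo "release bits" > flink-1.9.0-src.tgz
# What the release manager publishes next to the artifact:
sha512sum flink-1.9.0-src.tgz > flink-1.9.0-src.tgz.sha512
# What each voter runs after downloading both files:
sha512sum -c flink-1.9.0-src.tgz.sha512
```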


On Fri, 16 Aug 2019 at 06:06, Bowen Li <


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Jark Wu
Hi Gordon,

I agree that we should pick the minimal set of changes to shorten the
release testing time.
However, I would like to include FLINK-13699 in RC3. FLINK-13699 is a
critical DDL issue, and it is a small change to flink-table (it won't
affect runtime features or stability).
I will do some tests around SQL and the Blink planner if RC3 includes this
fix.

But if the community is against including it, I'm also fine with having it
in the next minor release.

Thanks,
Jark


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Stephan Ewen
+1 for Gordon's approach.

If we do that, we can probably skip re-testing everything and mainly need
to verify the release artifacts (signatures, build from source, etc.).

If we open the RC up for changes, I fear a lot of small issues will rush in
and destabilize the candidate again, meaning we have to do another larger
testing effort.



On Mon, Aug 19, 2019 at 9:48 AM Becket Qin  wrote:

> Hi Gordon,
>
> I remember we mentioned earlier that if there is an additional RC, we can
> piggyback the GCP PubSub API change (
> https://issues.apache.org/jira/browse/FLINK-13231). It is a small patch to
> avoid future API change. So should be able to merge it very shortly. Would
> it be possible to include that into RC3 as well?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Aug 19, 2019 at 9:43 AM Tzu-Li (Gordon) Tai 
> wrote:
>
> > Hi,
> >
> > https://issues.apache.org/jira/browse/FLINK-13752 turns out to be an
> > actual
> > blocker, so we would have to close this RC now in favor of a new one.
> >
> > Since we are already quite past the planned release time for 1.9.0, I
> would
> > like to limit the new changes included in RC3 to only the following:
> > - https://issues.apache.org/jira/browse/FLINK-13752
> > - Fix license and notice file issues that Kurt had found with
> > flink-runtime-web and flink-state-processing-api
> >
> > This means that I will not be creating RC3 with the release-1.9 branch as
> > is, but essentially only cherry-picking the above mentioned changes on
> top
> > of RC2.
> > The minimal set of changes on top of RC2 should allow us to carry most if
> > not all of the already existing votes without another round of extensive
> > testing, and allow us to have a shortened voting time.
> >
> > I understand that there are other issues mentioned in this thread that
> are
> > already spotted and merged to release-1.9, especially for the Blink
> planner
> > and DDL, but I suggest not to include them in RC3.
> > I think it would be better to collect all the remaining issues for those
> > over a period of time, and include them as 1.9.1 which can ideally also
> > happen a few weeks soon after 1.9.0.
> >
> > What do you think? If there are not objections, I would proceed with this
> > plan and push out a new RC by the end of today (Aug. 19th CET).
> >
> > Regards,
> > Gordon
> >
> > On Mon, Aug 19, 2019 at 4:09 AM Zili Chen  wrote:
> >
> > > We should investigate the performance regression but regardless the
> > > regression I vote +1
> > >
> > > Have verified following things
> > >
> > > - Jobs running on YARN x (Session & Per Job) with high-availability
> > > enabled.
> > > - Simulate JM and TM failures.
> > > - Simulate temporary network partition.
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > Stephan Ewen wrote on Sun, Aug 18, 2019 at 10:12 PM:
> > >
> > > > For reference, this is the JIRA issue about the regression in
> question:
> > > >
> > > > https://issues.apache.org/jira/browse/FLINK-13752
> > > >
> > > >
> > > > On Fri, Aug 16, 2019 at 10:57 AM Guowei Ma 
> > wrote:
> > > >
> > > > > Hi, till
> > > > > I can send the job to you offline.
> > > > > It is just a datastream job and does not use
> > > > TwoInputSelectableStreamTask.
> > > > > A->B
> > > > >  \
> > > > >C
> > > > >  /
> > > > > D->E
> > > > > Best,
> > > > > Guowei
> > > > >
> > > > >
> > > > > Till Rohrmann wrote on Fri, Aug 16, 2019 at 4:34 PM:
> > > > >
> > > > > > Thanks for reporting this issue Guowei. Could you share a few more
> > > > > > details about what the job does exactly and which operators it uses?
> > > > > > Does the job use the new `TwoInputSelectableStreamTask`, which might
> > > > > > cause the performance regression?
> > > > > >
> > > > > > I think it is important to understand where the problem comes
> from
> > > > before
> > > > > > we proceed with the release.
> > > > > >
> > > > > > Cheers,
> > > > > > Till
> > > > > >
> > > > > > On Fri, Aug 16, 2019 at 10:27 AM Guowei Ma  >
> > > > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > > -1
> > > > > > > We have a benchmark job, which includes a two-input operator.
> > > > > > > This job has a big performance regression using 1.9 compared to
> > > 1.8.
> > > > > > > It's still not very clear why this regression happens.
> > > > > > >
> > > > > > > Best,
> > > > > > > Guowei
> > > > > > >
> > > > > > >
> > > > > > > Yu Li wrote on Fri, Aug 16, 2019 at 3:27 PM:
> > > > > > >
> > > > > > > > +1 (non-binding)
> > > > > > > >
> > > > > > > > - checked release notes: OK
> > > > > > > > - checked sums and signatures: OK
> > > > > > > > - source release
> > > > > > > >  - contains no binaries: OK
> > > > > > > >  - contains no 1.9-SNAPSHOT references: OK
> > > > > > > >  - build from source: OK (8u102)
> > > > > > > >  - mvn clean verify: OK (8u102)
> > > > > > > > - binary release
> > > > > > > >  - no examples appear to be missing
> > > > > > > >  - started a cluster; WebUI reachable, example ran
> > > successfully
> > > > > 
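For readers following the verification checklist quoted above, the "checked sums and signatures" items boil down to a checksum comparison plus a GPG signature check. A minimal, runnable sketch of the checksum round-trip is below; the file names are illustrative (a real RC would be downloaded from dist.apache.org together with its .sha512 and .asc files), and the signature step is shown only as a comment because it requires the release manager's public key.

```shell
# Create a stand-in "release artifact" locally so the commands below run as-is;
# for a real RC this file would be downloaded, not generated.
echo "flink-1.9.0 source" > flink-1.9.0-src.tgz

# Publisher side: produce the SHA-512 checksum file that accompanies the artifact.
sha512sum flink-1.9.0-src.tgz > flink-1.9.0-src.tgz.sha512

# Verifier side: recompute the checksum and compare against the published file.
# Prints "flink-1.9.0-src.tgz: OK" on success.
sha512sum -c flink-1.9.0-src.tgz.sha512

# Signature check (not runnable here; requires the signer's key in the keyring):
#   gpg --verify flink-1.9.0-src.tgz.asc flink-1.9.0-src.tgz
```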

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Till Rohrmann
+1 for only cherry-picking FLINK-13752 and the LICENSE fixes into RC3.

Cheers,
Till

On Mon, Aug 19, 2019 at 9:48 AM Becket Qin  wrote:

> Hi Gordon,
>
> I remember we mentioned earlier that if there is an additional RC, we can
> piggyback the GCP PubSub API change (
> https://issues.apache.org/jira/browse/FLINK-13231). It is a small patch to
> avoid future API change. So should be able to merge it very shortly. Would
> it be possible to include that into RC3 as well?
>
> Thanks,
>
> Jiangjie (Becket) Qin

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Becket Qin
Hi Gordon,

I remember we mentioned earlier that if there is an additional RC, we can
piggyback the GCP PubSub API change (
https://issues.apache.org/jira/browse/FLINK-13231). It is a small patch to
avoid a future API change, so we should be able to merge it very quickly. Would
it be possible to include it in RC3 as well?

Thanks,

Jiangjie (Becket) Qin

On Mon, Aug 19, 2019 at 9:43 AM Tzu-Li (Gordon) Tai 
wrote:

> Hi,
>
> https://issues.apache.org/jira/browse/FLINK-13752 turns out to be an
> actual
> blocker, so we would have to close this RC now in favor of a new one.
>
> Since we are already quite past the planned release time for 1.9.0, I would
> like to limit the new changes included in RC3 to only the following:
> - https://issues.apache.org/jira/browse/FLINK-13752
> - Fix license and notice file issues that Kurt had found with
> flink-runtime-web and flink-state-processing-api
>
> This means that I will not be creating RC3 with the release-1.9 branch as
> is, but essentially only cherry-picking the above mentioned changes on top
> of RC2.
> The minimal set of changes on top of RC2 should allow us to carry most if
> not all of the already existing votes without another round of extensive
> testing, and allow us to have a shortened voting time.
>
> I understand that there are other issues mentioned in this thread that are
> already spotted and merged to release-1.9, especially for the Blink planner
> and DDL, but I suggest not to include them in RC3.
> I think it would be better to collect all the remaining issues for those
> over a period of time, and include them as 1.9.1 which can ideally also
> happen a few weeks soon after 1.9.0.
>
> What do you think? If there are not objections, I would proceed with this
> plan and push out a new RC by the end of today (Aug. 19th CET).
>
> Regards,
> Gordon

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-19 Thread Tzu-Li (Gordon) Tai
Hi,

https://issues.apache.org/jira/browse/FLINK-13752 turns out to be an actual
blocker, so we would have to close this RC now in favor of a new one.

Since we are already quite past the planned release time for 1.9.0, I would
like to limit the new changes included in RC3 to only the following:
- https://issues.apache.org/jira/browse/FLINK-13752
- Fix license and notice file issues that Kurt had found with
flink-runtime-web and flink-state-processing-api

This means that I will not be creating RC3 from the release-1.9 branch as is,
but essentially only cherry-picking the above-mentioned changes on top of RC2.
The minimal set of changes on top of RC2 should allow us to carry over most, if
not all, of the existing votes without another round of extensive testing, and
to shorten the voting time.

I understand that other issues mentioned in this thread have already been
spotted and merged to release-1.9, especially for the Blink planner and DDL,
but I suggest not including them in RC3.
I think it would be better to collect the remaining issues for those components
over a period of time and include the fixes in 1.9.1, which can ideally happen
a few weeks after 1.9.0.

What do you think? If there are no objections, I will proceed with this plan
and push out a new RC by the end of today (Aug. 19th CET).

Regards,
Gordon
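The cherry-picking approach described above (start from the RC2 state and apply only the approved fixes, rather than cutting from the branch tip) can be sketched with plain git. All repository, tag, and commit names below are illustrative stand-ins, not Flink's actual release tags; this only demonstrates the workflow in a throwaway repository.

```shell
set -e
# Throwaway repository standing in for the release repo.
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email rm@example.com && git config user.name "Release Manager"

# State that was voted on as RC2.
echo base > file.txt && git add file.txt && git commit -qm "release-1.9 base"
git tag release-1.9.0-rc2

# A later fix lands on the release branch.
echo fix > fix.txt && git add fix.txt && git commit -qm "[FLINK-13752] blocker fix"
fix_commit=$(git rev-parse HEAD)

# RC3 starts exactly from the RC2 state, then picks only the selected change.
git checkout -qb rc3 release-1.9.0-rc2
git cherry-pick "$fix_commit"
git log --oneline   # RC2 base plus the single cherry-picked fix
```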

On Mon, Aug 19, 2019 at 4:09 AM Zili Chen  wrote:

> We should investigate the performance regression but regardless the
> regression I vote +1
>
> Have verified following things
>
> - Jobs running on YARN x (Session & Per Job) with high-availability
> enabled.
> - Simulate JM and TM failures.
> - Simulate temporary network partition.
>
> Best,
> tison.

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-18 Thread Zili Chen
We should investigate the performance regression, but regardless of it I vote
+1.

I have verified the following:

- Jobs running on YARN (both session and per-job mode) with high availability enabled.
- Simulate JM and TM failures.
- Simulate temporary network partition.

Best,
tison.


Stephan Ewen wrote on Sun, Aug 18, 2019 at 10:12 PM:

> For reference, this is the JIRA issue about the regression in question:
>
> https://issues.apache.org/jira/browse/FLINK-13752

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-18 Thread Stephan Ewen
For reference, this is the JIRA issue about the regression in question:

https://issues.apache.org/jira/browse/FLINK-13752


On Fri, Aug 16, 2019 at 10:57 AM Guowei Ma  wrote:

> Hi, till
> I can send the job to you offline.
> It is just a datastream job and does not use TwoInputSelectableStreamTask.
> A->B
>  \
>C
>  /
> D->E
> Best,
> Guowei

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-16 Thread Gyula Fóra
Hi all,
I agree with Till that we should investigate the suspected performance
regression issue before proceeding with the release.

If we do not find any problems, I vote +1.

I have verified the following behaviour:
 - Built Flink with a custom Hadoop version
 - YARN deployment with and without high availability
 - Simulated TM and JM failures
 - Tested recovery with savepoints and checkpoints for a simple stateful job
   with Kafka connectors

Gyula



On Fri, Aug 16, 2019 at 10:34 AM Till Rohrmann  wrote:

> Thanks for reporting this issue Guowei. Could you share a few more details
> about what the job does exactly and which operators it uses? Does the job use
> the new `TwoInputSelectableStreamTask`, which might cause the performance
> regression?
>
> I think it is important to understand where the problem comes from before
> we proceed with the release.
>
> Cheers,
> Till
>
> On Fri, Aug 16, 2019 at 10:27 AM Guowei Ma  wrote:
>
> > Hi,
> > -1
> > We have a benchmark job, which includes a two-input operator.
> > This job has a big performance regression using 1.9 compared to 1.8.
> > It's still not very clear why this regression happens.
> >
> > Best,
> > Guowei
> >
> >
> > Yu Li wrote on Fri, Aug 16, 2019 at 3:27 PM:
> >
> > > +1 (non-binding)
> > >
> > > - checked release notes: OK
> > > - checked sums and signatures: OK
> > > - source release
> > >  - contains no binaries: OK
> > >  - contains no 1.9-SNAPSHOT references: OK
> > >  - build from source: OK (8u102)
> > >  - mvn clean verify: OK (8u102)
> > > - binary release
> > >  - no examples appear to be missing
> > >  - started a cluster; WebUI reachable, example ran successfully
> > > - repository appears to contain all expected artifacts
> > >
> > > Best Regards,
> > > Yu
> > >
> > >
> > > On Fri, 16 Aug 2019 at 06:06, Bowen Li  wrote:
> > >
> > > > Hi Jark,
> > > >
> > > > Thanks for letting me know that it's been like this in previous
> > releases.
> > > > Though I don't think that's the right behavior, it can be discussed
> for
> > > > later release. Thus I retract my -1 for RC2.
> > > >
> > > > Bowen
> > > >
> > > >
> > > > On Thu, Aug 15, 2019 at 7:49 PM Jark Wu  wrote:
> > > >
> > > > > Hi Bowen,
> > > > >
> > > > > Thanks for reporting this.
> > > > > However, I don't think this is an issue. IMO, it is by design.
> > > > > The `tEnv.listUserDefinedFunctions()` in Table API and `show
> > > functions;`
> > > > in
> > > > > SQL CLI are intended to return only the registered UDFs, not
> > including
> > > > > built-in functions.
> > > > > This is also the behavior in previous versions.
> > > > >
> > > > > Best,
> > > > > Jark
> > > > >
> > > > > On Fri, 16 Aug 2019 at 06:52, Bowen Li 
> wrote:
> > > > >
> > > > > > -1 for RC2.
> > > > > >
> > > > > > I found a bug https://issues.apache.org/jira/browse/FLINK-13741,
> > > and I
> > > > > > think it's a blocker.  The bug means currently if users call
> > > > > > `tEnv.listUserDefinedFunctions()` in Table API or `show
> functions;`
> > > > thru
> > > > > > SQL would not be able to see Flink's built-in functions.
> > > > > >
> > > > > > I'm preparing a fix right now.
> > > > > >
> > > > > > Bowen
> > > > > >
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 8:55 AM Tzu-Li (Gordon) Tai <
> > > > tzuli...@apache.org
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Thanks for all the test efforts, verifications and votes so
> far.
> > > > > > >
> > > > > > > So far, things are looking good, but we still require one more
> > PMC
> > > > > > binding
> > > > > > > vote for this RC to be the official release, so I would like to
> > > > extend
> > > > > > the
> > > > > > > vote time for 1 more day, until *Aug. 16th 17:00 CET*.
> > > > > > >
> > > > > > > In the meantime, the release notes for 1.9.0 had only just been
> > > > > finalized
> > > > > > > [1], and could use a few more eyes before closing the vote.
> > > > > > > Any help with checking if anything else should be mentioned
> there
> > > > > > regarding
> > > > > > > breaking changes / known shortcomings would be appreciated.
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Gordon
> > > > > > >
> > > > > > > [1] https://github.com/apache/flink/pull/9438
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 3:58 PM Kurt Young 
> > > wrote:
> > > > > > >
> > > > > > > > Great, then I have no other comments on legal check.
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Kurt
> > > > > > > >
> > > > > > > >
> > > > > > > > On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler <
> > > > ches...@apache.org
> > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > The licensing items aren't a problem; we don't care about
> > Flink
> > > > > > modules
> > > > > > > > > in NOTICE files, and we don't have to update the
> > source-release
> > > > > > > > > licensing since we don't have a pre-built version of the
> > WebUI
> > > in
> > > > > the
> > > > > > > > > source.
> > > > > > > > >
> > > > > > > > > On 15/08/2019 15:22, Kurt Young wrote:
> > 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-16 Thread Guowei Ma
Hi Till,
I can send the job to you offline.
It is just a DataStream job and does not use `TwoInputSelectableStreamTask`.
A -> B
       \
        C
       /
D -> E
Best,
Guowei


Till Rohrmann wrote on Fri, Aug 16, 2019 at 4:34 PM:

> Thanks for reporting this issue Guowei. Could you share a bit more details
> what the job exactly does and which operators it uses? Does the job uses
> the new `TwoInputSelectableStreamTask` which might cause the performance
> regression?
>
> I think it is important to understand where the problem comes from before
> we proceed with the release.
>
> Cheers,
> Till
>
> On Fri, Aug 16, 2019 at 10:27 AM Guowei Ma  wrote:
>
> > Hi,
> > -1
> > We have a benchmark job, which includes a two-input operator.
> > This job has a big performance regression using 1.9 compared to 1.8.
> > It's still not very clear why this regression happens.
> >
> > Best,
> > Guowei
> >
> >
> > Yu Li  于2019年8月16日周五 下午3:27写道:
> >
> > > +1 (non-binding)
> > >
> > > - checked release notes: OK
> > > - checked sums and signatures: OK
> > > - source release
> > >  - contains no binaries: OK
> > >  - contains no 1.9-SNAPSHOT references: OK
> > >  - build from source: OK (8u102)
> > >  - mvn clean verify: OK (8u102)
> > > - binary release
> > >  - no examples appear to be missing
> > >  - started a cluster; WebUI reachable, example ran successfully
> > > - repository appears to contain all expected artifacts
> > >
> > > Best Regards,
> > > Yu
> > >
> > >
> > > On Fri, 16 Aug 2019 at 06:06, Bowen Li  wrote:
> > >
> > > > Hi Jark,
> > > >
> > > > Thanks for letting me know that it's been like this in previous
> > releases.
> > > > Though I don't think that's the right behavior, it can be discussed
> for
> > > > later release. Thus I retract my -1 for RC2.
> > > >
> > > > Bowen
> > > >
> > > >
> > > > On Thu, Aug 15, 2019 at 7:49 PM Jark Wu  wrote:
> > > >
> > > > > Hi Bowen,
> > > > >
> > > > > Thanks for reporting this.
> > > > > However, I don't think this is an issue. IMO, it is by design.
> > > > > The `tEnv.listUserDefinedFunctions()` in Table API and `show
> > > functions;`
> > > > in
> > > > > SQL CLI are intended to return only the registered UDFs, not
> > including
> > > > > built-in functions.
> > > > > This is also the behavior in previous versions.
> > > > >
> > > > > Best,
> > > > > Jark
> > > > >
> > > > > On Fri, 16 Aug 2019 at 06:52, Bowen Li 
> wrote:
> > > > >
> > > > > > -1 for RC2.
> > > > > >
> > > > > > I found a bug https://issues.apache.org/jira/browse/FLINK-13741,
> > > and I
> > > > > > think it's a blocker.  The bug means currently if users call
> > > > > > `tEnv.listUserDefinedFunctions()` in Table API or `show
> functions;`
> > > > thru
> > > > > > SQL would not be able to see Flink's built-in functions.
> > > > > >
> > > > > > I'm preparing a fix right now.
> > > > > >
> > > > > > Bowen
> > > > > >
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 8:55 AM Tzu-Li (Gordon) Tai <
> > > > tzuli...@apache.org
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Thanks for all the test efforts, verifications and votes so
> far.
> > > > > > >
> > > > > > > So far, things are looking good, but we still require one more
> > PMC
> > > > > > binding
> > > > > > > vote for this RC to be the official release, so I would like to
> > > > extend
> > > > > > the
> > > > > > > vote time for 1 more day, until *Aug. 16th 17:00 CET*.
> > > > > > >
> > > > > > > In the meantime, the release notes for 1.9.0 had only just been
> > > > > finalized
> > > > > > > [1], and could use a few more eyes before closing the vote.
> > > > > > > Any help with checking if anything else should be mentioned
> there
> > > > > > regarding
> > > > > > > breaking changes / known shortcomings would be appreciated.
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Gordon
> > > > > > >
> > > > > > > [1] https://github.com/apache/flink/pull/9438
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 3:58 PM Kurt Young 
> > > wrote:
> > > > > > >
> > > > > > > > Great, then I have no other comments on legal check.
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Kurt
> > > > > > > >
> > > > > > > >
> > > > > > > > On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler <
> > > > ches...@apache.org
> > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > The licensing items aren't a problem; we don't care about
> > Flink
> > > > > > modules
> > > > > > > > > in NOTICE files, and we don't have to update the
> > source-release
> > > > > > > > > licensing since we don't have a pre-built version of the
> > WebUI
> > > in
> > > > > the
> > > > > > > > > source.
> > > > > > > > >
> > > > > > > > > On 15/08/2019 15:22, Kurt Young wrote:
> > > > > > > > > > After going through the licenses, I found 2 suspicions
> but
> > > not
> > > > > sure
> > > > > > > if
> > > > > > > > > they
> > > > > > > > > > are
> > > > > > > > > > valid or not.
> > > > > > > > > >
> > > > > > > > > > 1. flink-state-processing-api is packaged in 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-16 Thread Till Rohrmann
Thanks for reporting this issue, Guowei. Could you share a few more details
about what the job does exactly and which operators it uses? Does the job use
the new `TwoInputSelectableStreamTask`, which might be causing the performance
regression?

I think it is important to understand where the problem comes from before
we proceed with the release.

Cheers,
Till

On Fri, Aug 16, 2019 at 10:27 AM Guowei Ma  wrote:

> Hi,
> -1
> We have a benchmark job, which includes a two-input operator.
> This job has a big performance regression using 1.9 compared to 1.8.
> It's still not very clear why this regression happens.
>
> Best,
> Guowei
>
>
> Yu Li  于2019年8月16日周五 下午3:27写道:
>
> > +1 (non-binding)
> >
> > - checked release notes: OK
> > - checked sums and signatures: OK
> > - source release
> >  - contains no binaries: OK
> >  - contains no 1.9-SNAPSHOT references: OK
> >  - build from source: OK (8u102)
> >  - mvn clean verify: OK (8u102)
> > - binary release
> >  - no examples appear to be missing
> >  - started a cluster; WebUI reachable, example ran successfully
> > - repository appears to contain all expected artifacts
> >
> > Best Regards,
> > Yu
> >
> >
> > On Fri, 16 Aug 2019 at 06:06, Bowen Li  wrote:
> >
> > > Hi Jark,
> > >
> > > Thanks for letting me know that it's been like this in previous
> releases.
> > > Though I don't think that's the right behavior, it can be discussed for
> > > later release. Thus I retract my -1 for RC2.
> > >
> > > Bowen
> > >
> > >
> > > On Thu, Aug 15, 2019 at 7:49 PM Jark Wu  wrote:
> > >
> > > > Hi Bowen,
> > > >
> > > > Thanks for reporting this.
> > > > However, I don't think this is an issue. IMO, it is by design.
> > > > The `tEnv.listUserDefinedFunctions()` in Table API and `show
> > functions;`
> > > in
> > > > SQL CLI are intended to return only the registered UDFs, not
> including
> > > > built-in functions.
> > > > This is also the behavior in previous versions.
> > > >
> > > > Best,
> > > > Jark
> > > >
> > > > On Fri, 16 Aug 2019 at 06:52, Bowen Li  wrote:
> > > >
> > > > > -1 for RC2.
> > > > >
> > > > > I found a bug https://issues.apache.org/jira/browse/FLINK-13741,
> > and I
> > > > > think it's a blocker.  The bug means currently if users call
> > > > > `tEnv.listUserDefinedFunctions()` in Table API or `show functions;`
> > > thru
> > > > > SQL would not be able to see Flink's built-in functions.
> > > > >
> > > > > I'm preparing a fix right now.
> > > > >
> > > > > Bowen
> > > > >
> > > > >
> > > > > On Thu, Aug 15, 2019 at 8:55 AM Tzu-Li (Gordon) Tai <
> > > tzuli...@apache.org
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Thanks for all the test efforts, verifications and votes so far.
> > > > > >
> > > > > > So far, things are looking good, but we still require one more
> PMC
> > > > > binding
> > > > > > vote for this RC to be the official release, so I would like to
> > > extend
> > > > > the
> > > > > > vote time for 1 more day, until *Aug. 16th 17:00 CET*.
> > > > > >
> > > > > > In the meantime, the release notes for 1.9.0 had only just been
> > > > finalized
> > > > > > [1], and could use a few more eyes before closing the vote.
> > > > > > Any help with checking if anything else should be mentioned there
> > > > > regarding
> > > > > > breaking changes / known shortcomings would be appreciated.
> > > > > >
> > > > > > Cheers,
> > > > > > Gordon
> > > > > >
> > > > > > [1] https://github.com/apache/flink/pull/9438
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 3:58 PM Kurt Young 
> > wrote:
> > > > > >
> > > > > > > Great, then I have no other comments on legal check.
> > > > > > >
> > > > > > > Best,
> > > > > > > Kurt
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler <
> > > ches...@apache.org
> > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > The licensing items aren't a problem; we don't care about
> Flink
> > > > > modules
> > > > > > > > in NOTICE files, and we don't have to update the
> source-release
> > > > > > > > licensing since we don't have a pre-built version of the
> WebUI
> > in
> > > > the
> > > > > > > > source.
> > > > > > > >
> > > > > > > > On 15/08/2019 15:22, Kurt Young wrote:
> > > > > > > > > After going through the licenses, I found 2 suspicions but
> > not
> > > > sure
> > > > > > if
> > > > > > > > they
> > > > > > > > > are
> > > > > > > > > valid or not.
> > > > > > > > >
> > > > > > > > > 1. flink-state-processing-api is packaged in to flink-dist
> > jar,
> > > > but
> > > > > > not
> > > > > > > > > included in
> > > > > > > > > NOTICE-binary file (the one under the root directory) like
> > > other
> > > > > > > modules.
> > > > > > > > > 2. flink-runtime-web distributed some JavaScript
> dependencies
> > > > > through
> > > > > > > > source
> > > > > > > > > codes, the licenses and NOTICE file were only updated
> inside
> > > the
> > > > > > module
> > > > > > > > of
> > > > > > > > > flink-runtime-web, but not the NOTICE file and licenses
> > > 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-16 Thread Guowei Ma
Hi,
-1
We have a benchmark job that includes a two-input operator.
This job shows a significant performance regression on 1.9 compared to 1.8.
It is still not clear why this regression happens.

Best,
Guowei


Yu Li wrote on Fri, Aug 16, 2019 at 3:27 PM:

> +1 (non-binding)
>
> - checked release notes: OK
> - checked sums and signatures: OK
> - source release
>  - contains no binaries: OK
>  - contains no 1.9-SNAPSHOT references: OK
>  - build from source: OK (8u102)
>  - mvn clean verify: OK (8u102)
> - binary release
>  - no examples appear to be missing
>  - started a cluster; WebUI reachable, example ran successfully
> - repository appears to contain all expected artifacts
>
> Best Regards,
> Yu
>
>
> On Fri, 16 Aug 2019 at 06:06, Bowen Li  wrote:
>
> > Hi Jark,
> >
> > Thanks for letting me know that it's been like this in previous releases.
> > Though I don't think that's the right behavior, it can be discussed for
> > later release. Thus I retract my -1 for RC2.
> >
> > Bowen
> >
> >
> > On Thu, Aug 15, 2019 at 7:49 PM Jark Wu  wrote:
> >
> > > Hi Bowen,
> > >
> > > Thanks for reporting this.
> > > However, I don't think this is an issue. IMO, it is by design.
> > > The `tEnv.listUserDefinedFunctions()` in Table API and `show
> functions;`
> > in
> > > SQL CLI are intended to return only the registered UDFs, not including
> > > built-in functions.
> > > This is also the behavior in previous versions.
> > >
> > > Best,
> > > Jark
> > >
> > > On Fri, 16 Aug 2019 at 06:52, Bowen Li  wrote:
> > >
> > > > -1 for RC2.
> > > >
> > > > I found a bug https://issues.apache.org/jira/browse/FLINK-13741,
> and I
> > > > think it's a blocker.  The bug means currently if users call
> > > > `tEnv.listUserDefinedFunctions()` in Table API or `show functions;`
> > thru
> > > > SQL would not be able to see Flink's built-in functions.
> > > >
> > > > I'm preparing a fix right now.
> > > >
> > > > Bowen
> > > >
> > > >
> > > > On Thu, Aug 15, 2019 at 8:55 AM Tzu-Li (Gordon) Tai <
> > tzuli...@apache.org
> > > >
> > > > wrote:
> > > >
> > > > > Thanks for all the test efforts, verifications and votes so far.
> > > > >
> > > > > So far, things are looking good, but we still require one more PMC
> > > > binding
> > > > > vote for this RC to be the official release, so I would like to
> > extend
> > > > the
> > > > > vote time for 1 more day, until *Aug. 16th 17:00 CET*.
> > > > >
> > > > > In the meantime, the release notes for 1.9.0 had only just been
> > > finalized
> > > > > [1], and could use a few more eyes before closing the vote.
> > > > > Any help with checking if anything else should be mentioned there
> > > > regarding
> > > > > breaking changes / known shortcomings would be appreciated.
> > > > >
> > > > > Cheers,
> > > > > Gordon
> > > > >
> > > > > [1] https://github.com/apache/flink/pull/9438
> > > > >
> > > > > On Thu, Aug 15, 2019 at 3:58 PM Kurt Young 
> wrote:
> > > > >
> > > > > > Great, then I have no other comments on legal check.
> > > > > >
> > > > > > Best,
> > > > > > Kurt
> > > > > >
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler <
> > ches...@apache.org
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > The licensing items aren't a problem; we don't care about Flink
> > > > modules
> > > > > > > in NOTICE files, and we don't have to update the source-release
> > > > > > > licensing since we don't have a pre-built version of the WebUI
> in
> > > the
> > > > > > > source.
> > > > > > >
> > > > > > > On 15/08/2019 15:22, Kurt Young wrote:
> > > > > > > > After going through the licenses, I found 2 suspicions but
> not
> > > sure
> > > > > if
> > > > > > > they
> > > > > > > > are
> > > > > > > > valid or not.
> > > > > > > >
> > > > > > > > 1. flink-state-processing-api is packaged in to flink-dist
> jar,
> > > but
> > > > > not
> > > > > > > > included in
> > > > > > > > NOTICE-binary file (the one under the root directory) like
> > other
> > > > > > modules.
> > > > > > > > 2. flink-runtime-web distributed some JavaScript dependencies
> > > > through
> > > > > > > source
> > > > > > > > codes, the licenses and NOTICE file were only updated inside
> > the
> > > > > module
> > > > > > > of
> > > > > > > > flink-runtime-web, but not the NOTICE file and licenses
> > directory
> > > > > which
> > > > > > > > under
> > > > > > > > the  root directory.
> > > > > > > >
> > > > > > > > Another minor issue I just found is:
> > > > > > > > FLINK-13558 tries to include table examples to flink-dist,
> but
> > I
> > > > > cannot
> > > > > > > > find it in
> > > > > > > > the binary distribution of RC2.
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Kurt
> > > > > > > >
> > > > > > > >
> > > > > > > > On Thu, Aug 15, 2019 at 6:19 PM Kurt Young  >
> > > > wrote:
> > > > > > > >
> > > > > > > >> Hi Gordon & Timo,
> > > > > > > >>
> > > > > > > >> Thanks for the feedback, and I agree with it. I will
> document
> > > this
> > > > > in
> > > > > > > the
> > > > > > > >> 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-16 Thread Yu Li
+1 (non-binding)

- checked release notes: OK
- checked sums and signatures: OK
- source release
 - contains no binaries: OK
 - contains no 1.9-SNAPSHOT references: OK
 - build from source: OK (8u102)
 - mvn clean verify: OK (8u102)
- binary release
 - no examples appear to be missing
 - started a cluster; WebUI reachable, example ran successfully
- repository appears to contain all expected artifacts

Best Regards,
Yu
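[Editor's note] The "checked sums and signatures" step in the checklist above boils down to recomputing each artifact's SHA-512 digest and comparing it to the published one (signatures are additionally verified with `gpg --verify` against the project's KEYS file). A minimal, self-contained sketch of the digest part in plain Java (the file name is made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ChecksumCheck {
    // Compute the lowercase hex SHA-512 digest of a byte array.
    static String sha512Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the real artifact bytes (e.g. flink-1.9.0-src.tgz).
        byte[] artifact = "pretend this is flink-1.9.0-src.tgz".getBytes(StandardCharsets.UTF_8);
        // In a real check, `published` is read from the accompanying .sha512 file.
        String published = sha512Hex(artifact);
        System.out.println(published.equals(sha512Hex(artifact)) ? "OK" : "MISMATCH");
    }
}
```

The same comparison is what `sha512sum -c` does on the command line.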


On Fri, 16 Aug 2019 at 06:06, Bowen Li  wrote:

> Hi Jark,
>
> Thanks for letting me know that it's been like this in previous releases.
> Though I don't think that's the right behavior, it can be discussed for
> later release. Thus I retract my -1 for RC2.
>
> Bowen
>
>
> On Thu, Aug 15, 2019 at 7:49 PM Jark Wu  wrote:
>
> > Hi Bowen,
> >
> > Thanks for reporting this.
> > However, I don't think this is an issue. IMO, it is by design.
> > The `tEnv.listUserDefinedFunctions()` in Table API and `show functions;`
> in
> > SQL CLI are intended to return only the registered UDFs, not including
> > built-in functions.
> > This is also the behavior in previous versions.
> >
> > Best,
> > Jark
> >
> > On Fri, 16 Aug 2019 at 06:52, Bowen Li  wrote:
> >
> > > -1 for RC2.
> > >
> > > I found a bug https://issues.apache.org/jira/browse/FLINK-13741, and I
> > > think it's a blocker.  The bug means currently if users call
> > > `tEnv.listUserDefinedFunctions()` in Table API or `show functions;`
> thru
> > > SQL would not be able to see Flink's built-in functions.
> > >
> > > I'm preparing a fix right now.
> > >
> > > Bowen
> > >
> > >
> > > On Thu, Aug 15, 2019 at 8:55 AM Tzu-Li (Gordon) Tai <
> tzuli...@apache.org
> > >
> > > wrote:
> > >
> > > > Thanks for all the test efforts, verifications and votes so far.
> > > >
> > > > So far, things are looking good, but we still require one more PMC
> > > binding
> > > > vote for this RC to be the official release, so I would like to
> extend
> > > the
> > > > vote time for 1 more day, until *Aug. 16th 17:00 CET*.
> > > >
> > > > In the meantime, the release notes for 1.9.0 had only just been
> > finalized
> > > > [1], and could use a few more eyes before closing the vote.
> > > > Any help with checking if anything else should be mentioned there
> > > regarding
> > > > breaking changes / known shortcomings would be appreciated.
> > > >
> > > > Cheers,
> > > > Gordon
> > > >
> > > > [1] https://github.com/apache/flink/pull/9438
> > > >
> > > > On Thu, Aug 15, 2019 at 3:58 PM Kurt Young  wrote:
> > > >
> > > > > Great, then I have no other comments on legal check.
> > > > >
> > > > > Best,
> > > > > Kurt
> > > > >
> > > > >
> > > > > On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler <
> ches...@apache.org
> > >
> > > > > wrote:
> > > > >
> > > > > > The licensing items aren't a problem; we don't care about Flink
> > > modules
> > > > > > in NOTICE files, and we don't have to update the source-release
> > > > > > licensing since we don't have a pre-built version of the WebUI in
> > the
> > > > > > source.
> > > > > >
> > > > > > On 15/08/2019 15:22, Kurt Young wrote:
> > > > > > > After going through the licenses, I found 2 suspicions but not
> > sure
> > > > if
> > > > > > they
> > > > > > > are
> > > > > > > valid or not.
> > > > > > >
> > > > > > > 1. flink-state-processing-api is packaged in to flink-dist jar,
> > but
> > > > not
> > > > > > > included in
> > > > > > > NOTICE-binary file (the one under the root directory) like
> other
> > > > > modules.
> > > > > > > 2. flink-runtime-web distributed some JavaScript dependencies
> > > through
> > > > > > source
> > > > > > > codes, the licenses and NOTICE file were only updated inside
> the
> > > > module
> > > > > > of
> > > > > > > flink-runtime-web, but not the NOTICE file and licenses
> directory
> > > > which
> > > > > > > under
> > > > > > > the  root directory.
> > > > > > >
> > > > > > > Another minor issue I just found is:
> > > > > > > FLINK-13558 tries to include table examples to flink-dist, but
> I
> > > > cannot
> > > > > > > find it in
> > > > > > > the binary distribution of RC2.
> > > > > > >
> > > > > > > Best,
> > > > > > > Kurt
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 6:19 PM Kurt Young 
> > > wrote:
> > > > > > >
> > > > > > >> Hi Gordon & Timo,
> > > > > > >>
> > > > > > >> Thanks for the feedback, and I agree with it. I will document
> > this
> > > > in
> > > > > > the
> > > > > > >> release notes.
> > > > > > >>
> > > > > > >> Best,
> > > > > > >> Kurt
> > > > > > >>
> > > > > > >>
> > > > > > >> On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai <
> > > > > > tzuli...@apache.org>
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >>> Hi Kurt,
> > > > > > >>>
> > > > > > >>> With the same argument as before, given that it is mentioned
> in
> > > the
> > > > > > >>> release
> > > > > > >>> announcement that it is a preview feature, I would not block
> > this
> > > > > > release
> > > > > > >>> because of it.
> > > > > > >>> Nevertheless, 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Bowen Li
Hi Jark,

Thanks for letting me know that it's been like this in previous releases.
Though I don't think that's the right behavior, it can be discussed for a
later release. I therefore retract my -1 for RC2.

Bowen


On Thu, Aug 15, 2019 at 7:49 PM Jark Wu  wrote:

> Hi Bowen,
>
> Thanks for reporting this.
> However, I don't think this is an issue. IMO, it is by design.
> The `tEnv.listUserDefinedFunctions()` in Table API and `show functions;` in
> SQL CLI are intended to return only the registered UDFs, not including
> built-in functions.
> This is also the behavior in previous versions.
>
> Best,
> Jark
>
> On Fri, 16 Aug 2019 at 06:52, Bowen Li  wrote:
>
> > -1 for RC2.
> >
> > I found a bug https://issues.apache.org/jira/browse/FLINK-13741, and I
> > think it's a blocker.  The bug means currently if users call
> > `tEnv.listUserDefinedFunctions()` in Table API or `show functions;` thru
> > SQL would not be able to see Flink's built-in functions.
> >
> > I'm preparing a fix right now.
> >
> > Bowen
> >
> >
> > On Thu, Aug 15, 2019 at 8:55 AM Tzu-Li (Gordon) Tai  >
> > wrote:
> >
> > > Thanks for all the test efforts, verifications and votes so far.
> > >
> > > So far, things are looking good, but we still require one more PMC
> > binding
> > > vote for this RC to be the official release, so I would like to extend
> > the
> > > vote time for 1 more day, until *Aug. 16th 17:00 CET*.
> > >
> > > In the meantime, the release notes for 1.9.0 had only just been
> finalized
> > > [1], and could use a few more eyes before closing the vote.
> > > Any help with checking if anything else should be mentioned there
> > regarding
> > > breaking changes / known shortcomings would be appreciated.
> > >
> > > Cheers,
> > > Gordon
> > >
> > > [1] https://github.com/apache/flink/pull/9438
> > >
> > > On Thu, Aug 15, 2019 at 3:58 PM Kurt Young  wrote:
> > >
> > > > Great, then I have no other comments on legal check.
> > > >
> > > > Best,
> > > > Kurt
> > > >
> > > >
> > > > On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler  >
> > > > wrote:
> > > >
> > > > > The licensing items aren't a problem; we don't care about Flink
> > modules
> > > > > in NOTICE files, and we don't have to update the source-release
> > > > > licensing since we don't have a pre-built version of the WebUI in
> the
> > > > > source.
> > > > >
> > > > > On 15/08/2019 15:22, Kurt Young wrote:
> > > > > > After going through the licenses, I found 2 suspicions but not
> sure
> > > if
> > > > > they
> > > > > > are
> > > > > > valid or not.
> > > > > >
> > > > > > 1. flink-state-processing-api is packaged in to flink-dist jar,
> but
> > > not
> > > > > > included in
> > > > > > NOTICE-binary file (the one under the root directory) like other
> > > > modules.
> > > > > > 2. flink-runtime-web distributed some JavaScript dependencies
> > through
> > > > > source
> > > > > > codes, the licenses and NOTICE file were only updated inside the
> > > module
> > > > > of
> > > > > > flink-runtime-web, but not the NOTICE file and licenses directory
> > > which
> > > > > > under
> > > > > > the  root directory.
> > > > > >
> > > > > > Another minor issue I just found is:
> > > > > > FLINK-13558 tries to include table examples to flink-dist, but I
> > > cannot
> > > > > > find it in
> > > > > > the binary distribution of RC2.
> > > > > >
> > > > > > Best,
> > > > > > Kurt
> > > > > >
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 6:19 PM Kurt Young 
> > wrote:
> > > > > >
> > > > > >> Hi Gordon & Timo,
> > > > > >>
> > > > > >> Thanks for the feedback, and I agree with it. I will document
> this
> > > in
> > > > > the
> > > > > >> release notes.
> > > > > >>
> > > > > >> Best,
> > > > > >> Kurt
> > > > > >>
> > > > > >>
> > > > > >> On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai <
> > > > > tzuli...@apache.org>
> > > > > >> wrote:
> > > > > >>
> > > > > >>> Hi Kurt,
> > > > > >>>
> > > > > >>> With the same argument as before, given that it is mentioned in
> > the
> > > > > >>> release
> > > > > >>> announcement that it is a preview feature, I would not block
> this
> > > > > release
> > > > > >>> because of it.
> > > > > >>> Nevertheless, it would be important to mention this explicitly
> in
> > > the
> > > > > >>> release notes [1].
> > > > > >>>
> > > > > >>> Regards,
> > > > > >>> Gordon
> > > > > >>>
> > > > > >>> [1] https://github.com/apache/flink/pull/9438
> > > > > >>>
> > > > > >>> On Thu, Aug 15, 2019 at 11:29 AM Timo Walther <
> > twal...@apache.org>
> > > > > wrote:
> > > > > >>>
> > > > >  Hi Kurt,
> > > > > 
> > > > >  I agree that this is a serious bug. However, I would not block
> > the
> > > > >  release because of this. As you said, there is a workaround
> and
> > > the
> > > > >  `execute()` works in the most common case of a single
> execution.
> > > We
> > > > > can
> > > > >  fix this in a minor release shortly after.
> > > > > 
> > > > >  What do others think?
> > > > > 
> > > > >  Regards,
> > > > >  

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Jark Wu
Hi Bowen,

Thanks for reporting this.
However, I don't think this is an issue; IMO, it is by design.
`tEnv.listUserDefinedFunctions()` in the Table API and `show functions;` in the
SQL CLI are intended to return only the registered UDFs, not the built-in
functions.
This is also the behavior in previous versions.

Best,
Jark
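[Editor's note] The distinction Jark describes can be sketched with a toy catalog in plain Java. This is not Flink's actual implementation — the class and method names below are invented for illustration: one listing returns only explicitly registered UDFs (the pre-1.9 behavior), while the merged listing FLINK-13741 asks for would include built-ins as well.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Toy model only; Flink's real function catalog is far more involved.
class ToyFunctionCatalog {
    private final Set<String> builtIns =
        new HashSet<>(Arrays.asList("concat", "substring", "abs"));
    private final Map<String, Object> registeredUdfs = new LinkedHashMap<>();

    void registerFunction(String name, Object udf) {
        registeredUdfs.put(name, udf);
    }

    // Behavior described above: only explicitly registered UDFs are listed.
    Set<String> listUserDefinedFunctions() {
        return registeredUdfs.keySet();
    }

    // What FLINK-13741 asks `show functions;` to cover: built-ins too.
    Set<String> listAllFunctions() {
        Set<String> all = new TreeSet<>(builtIns);
        all.addAll(registeredUdfs.keySet());
        return all;
    }
}

public class Demo {
    public static void main(String[] args) {
        ToyFunctionCatalog catalog = new ToyFunctionCatalog();
        catalog.registerFunction("myUpper", new Object());
        System.out.println(catalog.listUserDefinedFunctions()); // [myUpper]
        System.out.println(catalog.listAllFunctions()); // [abs, concat, myUpper, substring]
    }
}
```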

On Fri, 16 Aug 2019 at 06:52, Bowen Li  wrote:

> -1 for RC2.
>
> I found a bug https://issues.apache.org/jira/browse/FLINK-13741, and I
> think it's a blocker.  The bug means currently if users call
> `tEnv.listUserDefinedFunctions()` in Table API or `show functions;` thru
> SQL would not be able to see Flink's built-in functions.
>
> I'm preparing a fix right now.
>
> Bowen
>
>
> On Thu, Aug 15, 2019 at 8:55 AM Tzu-Li (Gordon) Tai 
> wrote:
>
> > Thanks for all the test efforts, verifications and votes so far.
> >
> > So far, things are looking good, but we still require one more PMC
> binding
> > vote for this RC to be the official release, so I would like to extend
> the
> > vote time for 1 more day, until *Aug. 16th 17:00 CET*.
> >
> > In the meantime, the release notes for 1.9.0 had only just been finalized
> > [1], and could use a few more eyes before closing the vote.
> > Any help with checking if anything else should be mentioned there
> regarding
> > breaking changes / known shortcomings would be appreciated.
> >
> > Cheers,
> > Gordon
> >
> > [1] https://github.com/apache/flink/pull/9438
> >
> > On Thu, Aug 15, 2019 at 3:58 PM Kurt Young  wrote:
> >
> > > Great, then I have no other comments on legal check.
> > >
> > > Best,
> > > Kurt
> > >
> > >
> > > On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler 
> > > wrote:
> > >
> > > > The licensing items aren't a problem; we don't care about Flink
> modules
> > > > in NOTICE files, and we don't have to update the source-release
> > > > licensing since we don't have a pre-built version of the WebUI in the
> > > > source.
> > > >
> > > > On 15/08/2019 15:22, Kurt Young wrote:
> > > > > After going through the licenses, I found 2 suspicions but not sure
> > if
> > > > they
> > > > > are
> > > > > valid or not.
> > > > >
> > > > > 1. flink-state-processing-api is packaged in to flink-dist jar, but
> > not
> > > > > included in
> > > > > NOTICE-binary file (the one under the root directory) like other
> > > modules.
> > > > > 2. flink-runtime-web distributed some JavaScript dependencies
> through
> > > > source
> > > > > codes, the licenses and NOTICE file were only updated inside the
> > module
> > > > of
> > > > > flink-runtime-web, but not the NOTICE file and licenses directory
> > which
> > > > > under
> > > > > the  root directory.
> > > > >
> > > > > Another minor issue I just found is:
> > > > > FLINK-13558 tries to include table examples to flink-dist, but I
> > cannot
> > > > > find it in
> > > > > the binary distribution of RC2.
> > > > >
> > > > > Best,
> > > > > Kurt
> > > > >
> > > > >
> > > > > On Thu, Aug 15, 2019 at 6:19 PM Kurt Young 
> wrote:
> > > > >
> > > > >> Hi Gordon & Timo,
> > > > >>
> > > > >> Thanks for the feedback, and I agree with it. I will document this
> > in
> > > > the
> > > > >> release notes.
> > > > >>
> > > > >> Best,
> > > > >> Kurt
> > > > >>
> > > > >>
> > > > >> On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai <
> > > > tzuli...@apache.org>
> > > > >> wrote:
> > > > >>
> > > > >>> Hi Kurt,
> > > > >>>
> > > > >>> With the same argument as before, given that it is mentioned in
> the
> > > > >>> release
> > > > >>> announcement that it is a preview feature, I would not block this
> > > > release
> > > > >>> because of it.
> > > > >>> Nevertheless, it would be important to mention this explicitly in
> > the
> > > > >>> release notes [1].
> > > > >>>
> > > > >>> Regards,
> > > > >>> Gordon
> > > > >>>
> > > > >>> [1] https://github.com/apache/flink/pull/9438
> > > > >>>
> > > > >>> On Thu, Aug 15, 2019 at 11:29 AM Timo Walther <
> twal...@apache.org>
> > > > wrote:
> > > > >>>
> > > >  Hi Kurt,
> > > > 
> > > >  I agree that this is a serious bug. However, I would not block
> the
> > > >  release because of this. As you said, there is a workaround and
> > the
> > > >  `execute()` works in the most common case of a single execution.
> > We
> > > > can
> > > >  fix this in a minor release shortly after.
> > > > 
> > > >  What do others think?
> > > > 
> > > >  Regards,
> > > >  Timo
> > > > 
> > > > 
> > > >  Am 15.08.19 um 11:23 schrieb Kurt Young:
> > > > > HI,
> > > > >
> > > > > We just find a serious bug around blink planner:
> > > > > https://issues.apache.org/jira/browse/FLINK-13708
> > > > > When user reused the table environment instance, and call
> > `execute`
> > > >  method
> > > > > multiple times for
> > > > > different sql, the later call will trigger the earlier ones to
> be
> > > > > re-executed.
> > > > >
> > > > > It's a serious bug but seems we also have a work around, which

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Bowen Li
-1 for RC2.

I found a bug, https://issues.apache.org/jira/browse/FLINK-13741, and I
think it's a blocker. The bug means that users who call
`tEnv.listUserDefinedFunctions()` in the Table API or `show functions;` through
SQL currently cannot see Flink's built-in functions.

I'm preparing a fix right now.

Bowen


On Thu, Aug 15, 2019 at 8:55 AM Tzu-Li (Gordon) Tai 
wrote:

> Thanks for all the test efforts, verifications and votes so far.
>
> So far, things are looking good, but we still require one more PMC binding
> vote for this RC to be the official release, so I would like to extend the
> vote time for 1 more day, until *Aug. 16th 17:00 CET*.
>
> In the meantime, the release notes for 1.9.0 had only just been finalized
> [1], and could use a few more eyes before closing the vote.
> Any help with checking if anything else should be mentioned there regarding
> breaking changes / known shortcomings would be appreciated.
>
> Cheers,
> Gordon
>
> [1] https://github.com/apache/flink/pull/9438
>
> On Thu, Aug 15, 2019 at 3:58 PM Kurt Young  wrote:
>
> > Great, then I have no other comments on legal check.
> >
> > Best,
> > Kurt
> >
> >
> > On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler 
> > wrote:
> >
> > > The licensing items aren't a problem; we don't care about Flink modules
> > > in NOTICE files, and we don't have to update the source-release
> > > licensing since we don't have a pre-built version of the WebUI in the
> > > source.
> > >
> > > On 15/08/2019 15:22, Kurt Young wrote:
> > > > After going through the licenses, I found 2 suspicions but not sure
> if
> > > they
> > > > are
> > > > valid or not.
> > > >
> > > > 1. flink-state-processing-api is packaged in to flink-dist jar, but
> not
> > > > included in
> > > > NOTICE-binary file (the one under the root directory) like other
> > modules.
> > > > 2. flink-runtime-web distributed some JavaScript dependencies through
> > > source
> > > > codes, the licenses and NOTICE file were only updated inside the
> module
> > > of
> > > > flink-runtime-web, but not the NOTICE file and licenses directory
> which
> > > > under
> > > > the  root directory.
> > > >
> > > > Another minor issue I just found is:
> > > > FLINK-13558 tries to include table examples to flink-dist, but I
> cannot
> > > > find it in
> > > > the binary distribution of RC2.
> > > >
> > > > Best,
> > > > Kurt
> > > >
> > > >
> > > > On Thu, Aug 15, 2019 at 6:19 PM Kurt Young  wrote:
> > > >
> > > >> Hi Gordon & Timo,
> > > >>
> > > >> Thanks for the feedback, and I agree with it. I will document this
> in
> > > the
> > > >> release notes.
> > > >>
> > > >> Best,
> > > >> Kurt
> > > >>
> > > >>
> > > >> On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai <
> > > tzuli...@apache.org>
> > > >> wrote:
> > > >>
> > > >>> Hi Kurt,
> > > >>>
> > > >>> With the same argument as before, given that it is mentioned in the
> > > >>> release
> > > >>> announcement that it is a preview feature, I would not block this
> > > release
> > > >>> because of it.
> > > >>> Nevertheless, it would be important to mention this explicitly in
> the
> > > >>> release notes [1].
> > > >>>
> > > >>> Regards,
> > > >>> Gordon
> > > >>>
> > > >>> [1] https://github.com/apache/flink/pull/9438
> > > >>>
> > > >>> On Thu, Aug 15, 2019 at 11:29 AM Timo Walther 
> > > wrote:
> > > >>>
> > >  Hi Kurt,
> > > 
> > >  I agree that this is a serious bug. However, I would not block the
> > >  release because of this. As you said, there is a workaround and
> the
> > >  `execute()` works in the most common case of a single execution.
> We
> > > can
> > >  fix this in a minor release shortly after.
> > > 
> > >  What do others think?
> > > 
> > >  Regards,
> > >  Timo
> > > 
> > > 
> > >  Am 15.08.19 um 11:23 schrieb Kurt Young:
> > > > HI,
> > > >
> > > > We just find a serious bug around blink planner:
> > > > https://issues.apache.org/jira/browse/FLINK-13708
> > > > When user reused the table environment instance, and call
> `execute`
> > >  method
> > > > multiple times for
> > > > different sql, the later call will trigger the earlier ones to be
> > > > re-executed.
> > > >
> > > > It's a serious bug but seems we also have a work around, which is
> > > >>> never
> > > > reuse the table environment
> > > > object. I'm not sure if we should treat this one as blocker issue
> > of
> > >  1.9.0.
> > > > What's your opinion?
> > > >
> > > > Best,
> > > > Kurt
> > > >
> > > >
> > > > On Thu, Aug 15, 2019 at 2:01 PM Gary Yao 
> > wrote:
> > > >
> > > >> +1 (non-binding)
> > > >>
> > > >> Jepsen test suite passed 10 times consecutively
> > > >>
> > > >> On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek <
> > > >>> aljos...@apache.org>
> > > >> wrote:
> > > >>
> > > >>> +1
> > > >>>
> > > >>> I did some testing on a Google Cloud Dataproc cluster (it gives
> 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Tzu-Li (Gordon) Tai
Thanks for all the test efforts, verifications, and votes so far.

Things are looking good, but we still require one more binding PMC vote for
this RC to become the official release, so I would like to extend the vote by
one more day, until *Aug. 16th 17:00 CET*.

In the meantime, the release notes for 1.9.0 have only just been finalized
[1] and could use a few more eyes before closing the vote. Any help checking
whether anything else should be mentioned there regarding breaking changes or
known shortcomings would be appreciated.

Cheers,
Gordon

[1] https://github.com/apache/flink/pull/9438

On Thu, Aug 15, 2019 at 3:58 PM Kurt Young  wrote:

> Great, then I have no other comments on legal check.
>
> Best,
> Kurt
>
>
> On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler 
> wrote:
>
> > The licensing items aren't a problem; we don't care about Flink modules
> > in NOTICE files, and we don't have to update the source-release
> > licensing since we don't have a pre-built version of the WebUI in the
> > source.
> >
> > On 15/08/2019 15:22, Kurt Young wrote:
> > > After going through the licenses, I found 2 suspicions but not sure if
> > they
> > > are
> > > valid or not.
> > >
> > > 1. flink-state-processing-api is packaged in to flink-dist jar, but not
> > > included in
> > > NOTICE-binary file (the one under the root directory) like other
> modules.
> > > 2. flink-runtime-web distributed some JavaScript dependencies through
> > source
> > > codes, the licenses and NOTICE file were only updated inside the module
> > of
> > > flink-runtime-web, but not the NOTICE file and licenses directory which
> > > under
> > > the  root directory.
> > >
> > > Another minor issue I just found is:
> > > FLINK-13558 tries to include table examples to flink-dist, but I cannot
> > > find it in
> > > the binary distribution of RC2.
> > >
> > > Best,
> > > Kurt
> > >
> > >
> > > On Thu, Aug 15, 2019 at 6:19 PM Kurt Young  wrote:
> > >
> > >> Hi Gordon & Timo,
> > >>
> > >> Thanks for the feedback, and I agree with it. I will document this in
> > the
> > >> release notes.
> > >>
> > >> Best,
> > >> Kurt
> > >>
> > >>
> > >> On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai <
> > tzuli...@apache.org>
> > >> wrote:
> > >>
> > >>> Hi Kurt,
> > >>>
> > >>> With the same argument as before, given that it is mentioned in the
> > >>> release
> > >>> announcement that it is a preview feature, I would not block this
> > release
> > >>> because of it.
> > >>> Nevertheless, it would be important to mention this explicitly in the
> > >>> release notes [1].
> > >>>
> > >>> Regards,
> > >>> Gordon
> > >>>
> > >>> [1] https://github.com/apache/flink/pull/9438
> > >>>
> > >>> On Thu, Aug 15, 2019 at 11:29 AM Timo Walther 
> > wrote:
> > >>>
> >  Hi Kurt,
> > 
> >  I agree that this is a serious bug. However, I would not block the
> >  release because of this. As you said, there is a workaround and the
> >  `execute()` works in the most common case of a single execution. We
> > can
> >  fix this in a minor release shortly after.
> > 
> >  What do others think?
> > 
> >  Regards,
> >  Timo
> > 
> > 
> >  Am 15.08.19 um 11:23 schrieb Kurt Young:
> > > HI,
> > >
> > > We just find a serious bug around blink planner:
> > > https://issues.apache.org/jira/browse/FLINK-13708
> > > When user reused the table environment instance, and call `execute`
> >  method
> > > multiple times for
> > > different sql, the later call will trigger the earlier ones to be
> > > re-executed.
> > >
> > > It's a serious bug but seems we also have a work around, which is
> > >>> never
> > > reuse the table environment
> > > object. I'm not sure if we should treat this one as blocker issue
> of
> >  1.9.0.
> > > What's your opinion?
> > >
> > > Best,
> > > Kurt
> > >
> > >
> > > On Thu, Aug 15, 2019 at 2:01 PM Gary Yao 
> wrote:
> > >
> > >> +1 (non-binding)
> > >>
> > >> Jepsen test suite passed 10 times consecutively
> > >>
> > >> On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek <
> > >>> aljos...@apache.org>
> > >> wrote:
> > >>
> > >>> +1
> > >>>
> > >>> I did some testing on a Google Cloud Dataproc cluster (it gives
> you
> > >>> a
> > >>> managed YARN and Google Cloud Storage (GCS)):
> > >>> - tried both YARN session mode and YARN per-job mode, also
> > using
> > >>> bin/flink list/cancel/etc. against a YARN session cluster
> > >>> - ran examples that write to GCS, both with the native Hadoop
> > >> FileSystem
> > >>> and a custom “plugin” FileSystem
> > >>> - ran stateful streaming jobs that use GCS as a checkpoint
> > >>> backend
> > >>> - tried running SQL programs on YARN using the SQL Cli: this
> > >>> worked
> >  for
> > >>> YARN session mode but not for YARN per-job mode. Looking at the
> > >>> code I
> > >>> don’t 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Kurt Young
Great, then I have no further comments on the legal check.

Best,
Kurt


On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler  wrote:

> The licensing items aren't a problem; we don't care about Flink modules
> in NOTICE files, and we don't have to update the source-release
> licensing since we don't have a pre-built version of the WebUI in the
> source.
>
> On 15/08/2019 15:22, Kurt Young wrote:
> > After going through the licenses, I found 2 suspicions but not sure if
> they
> > are
> > valid or not.
> >
> > 1. flink-state-processing-api is packaged in to flink-dist jar, but not
> > included in
> > NOTICE-binary file (the one under the root directory) like other modules.
> > 2. flink-runtime-web distributed some JavaScript dependencies through
> source
> > codes, the licenses and NOTICE file were only updated inside the module
> of
> > flink-runtime-web, but not the NOTICE file and licenses directory which
> > under
> > the  root directory.
> >
> > Another minor issue I just found is:
> > FLINK-13558 tries to include table examples to flink-dist, but I cannot
> > find it in
> > the binary distribution of RC2.
> >
> > Best,
> > Kurt
> >
> >
> > On Thu, Aug 15, 2019 at 6:19 PM Kurt Young  wrote:
> >
> >> Hi Gordon & Timo,
> >>
> >> Thanks for the feedback, and I agree with it. I will document this in
> the
> >> release notes.
> >>
> >> Best,
> >> Kurt
> >>
> >>
> >> On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai <
> tzuli...@apache.org>
> >> wrote:
> >>
> >>> Hi Kurt,
> >>>
> >>> With the same argument as before, given that it is mentioned in the
> >>> release
> >>> announcement that it is a preview feature, I would not block this
> release
> >>> because of it.
> >>> Nevertheless, it would be important to mention this explicitly in the
> >>> release notes [1].
> >>>
> >>> Regards,
> >>> Gordon
> >>>
> >>> [1] https://github.com/apache/flink/pull/9438
> >>>
> >>> On Thu, Aug 15, 2019 at 11:29 AM Timo Walther 
> wrote:
> >>>
>  Hi Kurt,
> 
>  I agree that this is a serious bug. However, I would not block the
>  release because of this. As you said, there is a workaround and the
>  `execute()` works in the most common case of a single execution. We
> can
>  fix this in a minor release shortly after.
> 
>  What do others think?
> 
>  Regards,
>  Timo
> 
> 
>  Am 15.08.19 um 11:23 schrieb Kurt Young:
> > HI,
> >
> > We just find a serious bug around blink planner:
> > https://issues.apache.org/jira/browse/FLINK-13708
> > When user reused the table environment instance, and call `execute`
>  method
> > multiple times for
> > different sql, the later call will trigger the earlier ones to be
> > re-executed.
> >
> > It's a serious bug but seems we also have a work around, which is
> >>> never
> > reuse the table environment
> > object. I'm not sure if we should treat this one as blocker issue of
>  1.9.0.
> > What's your opinion?
> >
> > Best,
> > Kurt
> >
> >
> > On Thu, Aug 15, 2019 at 2:01 PM Gary Yao  wrote:
> >
> >> +1 (non-binding)
> >>
> >> Jepsen test suite passed 10 times consecutively
> >>
> >> On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek <
> >>> aljos...@apache.org>
> >> wrote:
> >>
> >>> +1
> >>>
> >>> I did some testing on a Google Cloud Dataproc cluster (it gives you
> >>> a
> >>> managed YARN and Google Cloud Storage (GCS)):
> >>> - tried both YARN session mode and YARN per-job mode, also
> using
> >>> bin/flink list/cancel/etc. against a YARN session cluster
> >>> - ran examples that write to GCS, both with the native Hadoop
> >> FileSystem
> >>> and a custom “plugin” FileSystem
> >>> - ran stateful streaming jobs that use GCS as a checkpoint
> >>> backend
> >>> - tried running SQL programs on YARN using the SQL Cli: this
> >>> worked
>  for
> >>> YARN session mode but not for YARN per-job mode. Looking at the
> >>> code I
> >>> don’t think per-job mode would work from seeing how it is
> >>> implemented.
> >> But
> >>> I think it’s an OK restriction to have for now
> >>> - in all the testing I had fine-grained recovery (region
> >>> failover)
> >>> enabled but I didn’t simulate any failures
> >>>
>  On 14. Aug 2019, at 15:20, Kurt Young  wrote:
> 
>  Hi,
> 
>  Thanks for preparing this release candidate. I have verified the
> >>> following:
>  - verified the checksums and GPG files match the corresponding
> >>> release
> >>> files
>  - verified that the source archives do not contains any binaries
>  - build the source release with Scala 2.11 successfully.
>  - ran `mvn verify` locally, met 2 issuses [FLINK-13687] and
> >>> [FLINK-13688],
>  but
>  both are not release blockers. Other than that, all tests are
> >>> passed.
>  - 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Chesnay Schepler
The licensing items aren't a problem; we don't care about Flink modules 
in NOTICE files, and we don't have to update the source-release 
licensing since we don't have a pre-built version of the WebUI in the 
source.


On 15/08/2019 15:22, Kurt Young wrote:

After going through the licenses, I found 2 suspicions but not sure if they
are
valid or not.

1. flink-state-processing-api is packaged in to flink-dist jar, but not
included in
NOTICE-binary file (the one under the root directory) like other modules.
2. flink-runtime-web distributed some JavaScript dependencies through source
codes, the licenses and NOTICE file were only updated inside the module of
flink-runtime-web, but not the NOTICE file and licenses directory which
under
the  root directory.

Another minor issue I just found is:
FLINK-13558 tries to include table examples to flink-dist, but I cannot
find it in
the binary distribution of RC2.

Best,
Kurt


On Thu, Aug 15, 2019 at 6:19 PM Kurt Young  wrote:


Hi Gordon & Timo,

Thanks for the feedback, and I agree with it. I will document this in the
release notes.

Best,
Kurt


On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai 
wrote:


Hi Kurt,

With the same argument as before, given that it is mentioned in the
release
announcement that it is a preview feature, I would not block this release
because of it.
Nevertheless, it would be important to mention this explicitly in the
release notes [1].

Regards,
Gordon

[1] https://github.com/apache/flink/pull/9438

On Thu, Aug 15, 2019 at 11:29 AM Timo Walther  wrote:


Hi Kurt,

I agree that this is a serious bug. However, I would not block the
release because of this. As you said, there is a workaround and the
`execute()` works in the most common case of a single execution. We can
fix this in a minor release shortly after.

What do others think?

Regards,
Timo


Am 15.08.19 um 11:23 schrieb Kurt Young:

HI,

We just find a serious bug around blink planner:
https://issues.apache.org/jira/browse/FLINK-13708
When user reused the table environment instance, and call `execute`

method

multiple times for
different sql, the later call will trigger the earlier ones to be
re-executed.

It's a serious bug but seems we also have a work around, which is

never

reuse the table environment
object. I'm not sure if we should treat this one as blocker issue of

1.9.0.

What's your opinion?

Best,
Kurt


On Thu, Aug 15, 2019 at 2:01 PM Gary Yao  wrote:


+1 (non-binding)

Jepsen test suite passed 10 times consecutively

On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek <

aljos...@apache.org>

wrote:


+1

I did some testing on a Google Cloud Dataproc cluster (it gives you

a

managed YARN and Google Cloud Storage (GCS)):
- tried both YARN session mode and YARN per-job mode, also using
bin/flink list/cancel/etc. against a YARN session cluster
- ran examples that write to GCS, both with the native Hadoop

FileSystem

and a custom “plugin” FileSystem
- ran stateful streaming jobs that use GCS as a checkpoint

backend

- tried running SQL programs on YARN using the SQL Cli: this

worked

for

YARN session mode but not for YARN per-job mode. Looking at the

code I

don’t think per-job mode would work from seeing how it is

implemented.

But

I think it’s an OK restriction to have for now
- in all the testing I had fine-grained recovery (region

failover)

enabled but I didn’t simulate any failures


On 14. Aug 2019, at 15:20, Kurt Young  wrote:

Hi,

Thanks for preparing this release candidate. I have verified the

following:

- verified the checksums and GPG files match the corresponding

release

files

- verified that the source archives do not contains any binaries
- build the source release with Scala 2.11 successfully.
- ran `mvn verify` locally, met 2 issuses [FLINK-13687] and

[FLINK-13688],

but
both are not release blockers. Other than that, all tests are

passed.

- ran all e2e tests which don't need download external packages

(it's

very

unstable
in China and almost impossible to download them), all passed.
- started local cluster, ran some examples. Met a small website

display

issue
[FLINK-13591], which is also not a release blocker.

Although we have pushed some fixes around blink planner and hive
integration
after RC2, but consider these are both preview features, I'm lean

to

be

ok

to release
without these fixes.

+1 from my side. (binding)

Best,
Kurt


On Wed, Aug 14, 2019 at 5:13 PM Jark Wu  wrote:


Hi Gordon,

I have verified the following things:

- build the source release with Scala 2.12 and Scala 2.11

successfully

- checked/verified signatures and hashes
- checked that all POM files point to the same version
- ran some flink table related end-to-end tests locally and

succeeded

(except TPC-H e2e failed which is reported in FLINK-13704)
- started cluster for both Scala 2.11 and 2.12, ran examples,

verified

web

ui and log output, nothing unexpected
- started cluster, ran a SQL query to temporal join with kafka


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Dawid Wysakowicz
Thanks Kurt for checking that.

The mentioned problem with the table examples is that, when working on
FLINK-13558, I forgot to add a dependency on flink-examples-table to
flink-dist. As a result, that module is not built when only flink-dist and
its dependencies are built (which is what the release scripts do: -pl
flink-dist -am). I created FLINK-13737 [1] to fix that.

As those are only examples, I wouldn't block the release on them. We might
need to change the fixVersion of the mentioned FLINK-13558 so as not to
confuse users. The proper fix could then go into 1.9.1. WDYT?

Best,

Dawid


[1] https://issues.apache.org/jira/browse/FLINK-13737

On 15/08/2019 15:22, Kurt Young wrote:
> After going through the licenses, I found 2 suspicions but not sure if they
> are
> valid or not.
>
> 1. flink-state-processing-api is packaged in to flink-dist jar, but not
> included in
> NOTICE-binary file (the one under the root directory) like other modules.
> 2. flink-runtime-web distributed some JavaScript dependencies through source
> codes, the licenses and NOTICE file were only updated inside the module of
> flink-runtime-web, but not the NOTICE file and licenses directory which
> under
> the  root directory.
>
> Another minor issue I just found is:
> FLINK-13558 tries to include table examples to flink-dist, but I cannot
> find it in
> the binary distribution of RC2.
>
> Best,
> Kurt
>
>
> On Thu, Aug 15, 2019 at 6:19 PM Kurt Young  wrote:
>
>> Hi Gordon & Timo,
>>
>> Thanks for the feedback, and I agree with it. I will document this in the
>> release notes.
>>
>> Best,
>> Kurt
>>
>>
>> On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai 
>> wrote:
>>
>>> Hi Kurt,
>>>
>>> With the same argument as before, given that it is mentioned in the
>>> release
>>> announcement that it is a preview feature, I would not block this release
>>> because of it.
>>> Nevertheless, it would be important to mention this explicitly in the
>>> release notes [1].
>>>
>>> Regards,
>>> Gordon
>>>
>>> [1] https://github.com/apache/flink/pull/9438
>>>
>>> On Thu, Aug 15, 2019 at 11:29 AM Timo Walther  wrote:
>>>
 Hi Kurt,

 I agree that this is a serious bug. However, I would not block the
 release because of this. As you said, there is a workaround and the
 `execute()` works in the most common case of a single execution. We can
 fix this in a minor release shortly after.

 What do others think?

 Regards,
 Timo


 Am 15.08.19 um 11:23 schrieb Kurt Young:
> HI,
>
> We just find a serious bug around blink planner:
> https://issues.apache.org/jira/browse/FLINK-13708
> When user reused the table environment instance, and call `execute`
 method
> multiple times for
> different sql, the later call will trigger the earlier ones to be
> re-executed.
>
> It's a serious bug but seems we also have a work around, which is
>>> never
> reuse the table environment
> object. I'm not sure if we should treat this one as blocker issue of
 1.9.0.
> What's your opinion?
>
> Best,
> Kurt
>
>
> On Thu, Aug 15, 2019 at 2:01 PM Gary Yao  wrote:
>
>> +1 (non-binding)
>>
>> Jepsen test suite passed 10 times consecutively
>>
>> On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek <
>>> aljos...@apache.org>
>> wrote:
>>
>>> +1
>>>
>>> I did some testing on a Google Cloud Dataproc cluster (it gives you
>>> a
>>> managed YARN and Google Cloud Storage (GCS)):
>>>- tried both YARN session mode and YARN per-job mode, also using
>>> bin/flink list/cancel/etc. against a YARN session cluster
>>>- ran examples that write to GCS, both with the native Hadoop
>> FileSystem
>>> and a custom “plugin” FileSystem
>>>- ran stateful streaming jobs that use GCS as a checkpoint
>>> backend
>>>- tried running SQL programs on YARN using the SQL Cli: this
>>> worked
 for
>>> YARN session mode but not for YARN per-job mode. Looking at the
>>> code I
>>> don’t think per-job mode would work from seeing how it is
>>> implemented.
>> But
>>> I think it’s an OK restriction to have for now
>>>- in all the testing I had fine-grained recovery (region
>>> failover)
>>> enabled but I didn’t simulate any failures
>>>
 On 14. Aug 2019, at 15:20, Kurt Young  wrote:

 Hi,

 Thanks for preparing this release candidate. I have verified the
>>> following:
 - verified the checksums and GPG files match the corresponding
>>> release
>>> files
 - verified that the source archives do not contains any binaries
 - build the source release with Scala 2.11 successfully.
 - ran `mvn verify` locally, met 2 issuses [FLINK-13687] and
>>> [FLINK-13688],
 but
 both are not release blockers. Other than that, all tests are
>>> passed.
 - ran all e2e tests which don't need download 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Kurt Young
After going through the licenses, I found two suspicious items, but I am not
sure whether they are valid:

1. flink-state-processing-api is packaged into the flink-dist jar, but is not
included in the NOTICE-binary file (the one under the root directory) like
other modules.
2. flink-runtime-web distributes some JavaScript dependencies through its
source code; the licenses and NOTICE file were only updated inside the
flink-runtime-web module, but not in the NOTICE file and licenses directory
under the root directory.

Another minor issue I just found: FLINK-13558 tries to include the table
examples in flink-dist, but I cannot find them in the binary distribution of
RC2.

Best,
Kurt


On Thu, Aug 15, 2019 at 6:19 PM Kurt Young  wrote:

> Hi Gordon & Timo,
>
> Thanks for the feedback, and I agree with it. I will document this in the
> release notes.
>
> Best,
> Kurt
>
>
> On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai 
> wrote:
>
>> Hi Kurt,
>>
>> With the same argument as before, given that it is mentioned in the
>> release
>> announcement that it is a preview feature, I would not block this release
>> because of it.
>> Nevertheless, it would be important to mention this explicitly in the
>> release notes [1].
>>
>> Regards,
>> Gordon
>>
>> [1] https://github.com/apache/flink/pull/9438
>>
>> On Thu, Aug 15, 2019 at 11:29 AM Timo Walther  wrote:
>>
>> > Hi Kurt,
>> >
>> > I agree that this is a serious bug. However, I would not block the
>> > release because of this. As you said, there is a workaround and the
>> > `execute()` works in the most common case of a single execution. We can
>> > fix this in a minor release shortly after.
>> >
>> > What do others think?
>> >
>> > Regards,
>> > Timo
>> >
>> >
>> > Am 15.08.19 um 11:23 schrieb Kurt Young:
>> > > HI,
>> > >
>> > > We just find a serious bug around blink planner:
>> > > https://issues.apache.org/jira/browse/FLINK-13708
>> > > When user reused the table environment instance, and call `execute`
>> > method
>> > > multiple times for
>> > > different sql, the later call will trigger the earlier ones to be
>> > > re-executed.
>> > >
>> > > It's a serious bug but seems we also have a work around, which is
>> never
>> > > reuse the table environment
>> > > object. I'm not sure if we should treat this one as blocker issue of
>> > 1.9.0.
>> > >
>> > > What's your opinion?
>> > >
>> > > Best,
>> > > Kurt
>> > >
>> > >
>> > > On Thu, Aug 15, 2019 at 2:01 PM Gary Yao  wrote:
>> > >
>> > >> +1 (non-binding)
>> > >>
>> > >> Jepsen test suite passed 10 times consecutively
>> > >>
>> > >> On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek <
>> aljos...@apache.org>
>> > >> wrote:
>> > >>
>> > >>> +1
>> > >>>
>> > >>> I did some testing on a Google Cloud Dataproc cluster (it gives you
>> a
>> > >>> managed YARN and Google Cloud Storage (GCS)):
>> > >>>- tried both YARN session mode and YARN per-job mode, also using
>> > >>> bin/flink list/cancel/etc. against a YARN session cluster
>> > >>>- ran examples that write to GCS, both with the native Hadoop
>> > >> FileSystem
>> > >>> and a custom “plugin” FileSystem
>> > >>>- ran stateful streaming jobs that use GCS as a checkpoint
>> backend
>> > >>>- tried running SQL programs on YARN using the SQL Cli: this
>> worked
>> > for
>> > >>> YARN session mode but not for YARN per-job mode. Looking at the
>> code I
>> > >>> don’t think per-job mode would work from seeing how it is
>> implemented.
>> > >> But
>> > >>> I think it’s an OK restriction to have for now
>> > >>>- in all the testing I had fine-grained recovery (region
>> failover)
>> > >>> enabled but I didn’t simulate any failures
>> > >>>
>> >  On 14. Aug 2019, at 15:20, Kurt Young  wrote:
>> > 
>> >  Hi,
>> > 
>> >  Thanks for preparing this release candidate. I have verified the
>> > >>> following:
>> >  - verified the checksums and GPG files match the corresponding
>> release
>> > >>> files
>> >  - verified that the source archives do not contains any binaries
>> >  - build the source release with Scala 2.11 successfully.
>> >  - ran `mvn verify` locally, met 2 issuses [FLINK-13687] and
>> > >>> [FLINK-13688],
>> >  but
>> >  both are not release blockers. Other than that, all tests are
>> passed.
>> >  - ran all e2e tests which don't need download external packages
>> (it's
>> > >>> very
>> >  unstable
>> >  in China and almost impossible to download them), all passed.
>> >  - started local cluster, ran some examples. Met a small website
>> > display
>> >  issue
>> >  [FLINK-13591], which is also not a release blocker.
>> > 
>> >  Although we have pushed some fixes around blink planner and hive
>> >  integration
>> >  after RC2, but consider these are both preview features, I'm lean
>> to
>> > be
>> > >>> ok
>> >  to release
>> >  without these fixes.
>> > 
>> >  +1 from my side. (binding)
>> > 
>> >  Best,
>> >  Kurt
>> > 
>> > 
>> >  On Wed, 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Kurt Young
Hi Gordon & Timo,

Thanks for the feedback, and I agree with it. I will document this in the
release notes.

Best,
Kurt
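For the release-note entry, the FLINK-13708 symptom and its workaround can be illustrated with a toy model. This is only an editor-style sketch of the buffered-execution behavior described in the thread below, not Flink's planner code; the class and method names are invented for illustration.

```java
import java.util.*;

// Toy model of FLINK-13708 (not Flink's actual code): an environment that
// buffers submitted statements and runs *all* of them on every execute()
// call, because the buffer is never cleared.
class ToyTableEnv {
    private final List<String> buffered = new ArrayList<>();
    private final List<String> executed = new ArrayList<>();

    void sqlUpdate(String sql) {
        buffered.add(sql);
    }

    // Bug: earlier statements re-run on each subsequent execute().
    List<String> execute() {
        executed.addAll(buffered);
        return new ArrayList<>(executed);
    }
}

public class Flink13708Demo {
    public static void main(String[] args) {
        // Reusing one environment: the second execute() re-runs "INSERT A".
        ToyTableEnv env = new ToyTableEnv();
        env.sqlUpdate("INSERT A");
        env.execute();
        env.sqlUpdate("INSERT B");
        System.out.println(env.execute());
        // prints: [INSERT A, INSERT A, INSERT B]

        // Workaround from the thread: never reuse the environment object;
        // a fresh instance per execute() avoids the re-execution.
        ToyTableEnv fresh = new ToyTableEnv();
        fresh.sqlUpdate("INSERT B");
        System.out.println(fresh.execute());
        // prints: [INSERT B]
    }
}
```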


On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai 
wrote:

> Hi Kurt,
>
> With the same argument as before, given that it is mentioned in the release
> announcement that it is a preview feature, I would not block this release
> because of it.
> Nevertheless, it would be important to mention this explicitly in the
> release notes [1].
>
> Regards,
> Gordon
>
> [1] https://github.com/apache/flink/pull/9438
>
> On Thu, Aug 15, 2019 at 11:29 AM Timo Walther  wrote:
>
> > Hi Kurt,
> >
> > I agree that this is a serious bug. However, I would not block the
> > release because of this. As you said, there is a workaround and the
> > `execute()` works in the most common case of a single execution. We can
> > fix this in a minor release shortly after.
> >
> > What do others think?
> >
> > Regards,
> > Timo
> >
> >
> > Am 15.08.19 um 11:23 schrieb Kurt Young:
> > > HI,
> > >
> > > We just find a serious bug around blink planner:
> > > https://issues.apache.org/jira/browse/FLINK-13708
> > > When user reused the table environment instance, and call `execute`
> > method
> > > multiple times for
> > > different sql, the later call will trigger the earlier ones to be
> > > re-executed.
> > >
> > > It's a serious bug but seems we also have a work around, which is never
> > > reuse the table environment
> > > object. I'm not sure if we should treat this one as blocker issue of
> > 1.9.0.
> > >
> > > What's your opinion?
> > >
> > > Best,
> > > Kurt
> > >
> > >
> > > On Thu, Aug 15, 2019 at 2:01 PM Gary Yao  wrote:
> > >
> > >> +1 (non-binding)
> > >>
> > >> Jepsen test suite passed 10 times consecutively
> > >>
> > >> On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek  >
> > >> wrote:
> > >>
> > >>> +1
> > >>>
> > >>> I did some testing on a Google Cloud Dataproc cluster (it gives you a
> > >>> managed YARN and Google Cloud Storage (GCS)):
> > >>>- tried both YARN session mode and YARN per-job mode, also using
> > >>> bin/flink list/cancel/etc. against a YARN session cluster
> > >>>- ran examples that write to GCS, both with the native Hadoop
> > >> FileSystem
> > >>> and a custom “plugin” FileSystem
> > >>>- ran stateful streaming jobs that use GCS as a checkpoint backend
> > >>>- tried running SQL programs on YARN using the SQL Cli: this
> worked
> > for
> > >>> YARN session mode but not for YARN per-job mode. Looking at the code
> I
> > >>> don’t think per-job mode would work from seeing how it is
> implemented.
> > >> But
> > >>> I think it’s an OK restriction to have for now
> > >>>- in all the testing I had fine-grained recovery (region failover)
> > >>> enabled but I didn’t simulate any failures
> > >>>
> >  On 14. Aug 2019, at 15:20, Kurt Young  wrote:
> > 
> >  Hi,
> > 
> >  Thanks for preparing this release candidate. I have verified the
> > >>> following:
> >  - verified the checksums and GPG files match the corresponding
> release
> > >>> files
> >  - verified that the source archives do not contains any binaries
> >  - build the source release with Scala 2.11 successfully.
> >  - ran `mvn verify` locally, met 2 issuses [FLINK-13687] and
> > >>> [FLINK-13688],
> >  but
> >  both are not release blockers. Other than that, all tests are
> passed.
> >  - ran all e2e tests which don't need download external packages
> (it's
> > >>> very
> >  unstable
> >  in China and almost impossible to download them), all passed.
> >  - started local cluster, ran some examples. Met a small website
> > display
> >  issue
> >  [FLINK-13591], which is also not a release blocker.
> > 
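For readers reproducing the checksum and GPG checks above, here is a self-contained sketch; the artifact below is a stand-in, and a real run would target the RC files (e.g. flink-1.9.0-src.tgz) together with the keys from the project's KEYS file:

```shell
# Self-contained sketch: verify a hash the same way as for the RC artifacts.
# "artifact.tgz" is a stand-in; a real check targets e.g. flink-1.9.0-src.tgz
# plus its published .sha512 and .asc files.
echo "release payload" > artifact.tgz
sha512sum artifact.tgz > artifact.tgz.sha512   # publisher side: emit the hash file
sha512sum -c artifact.tgz.sha512               # verifier side: prints "artifact.tgz: OK"
# Signature check (requires the release manager's key imported from KEYS):
# gpg --verify artifact.tgz.asc artifact.tgz
```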
> >  Although we have pushed some fixes around the blink planner and hive
> >  integration after RC2, considering these are both preview features, I'm
> >  OK with releasing without these fixes.
> > 
> >  +1 from my side. (binding)
> > 
> >  Best,
> >  Kurt
> > 
> > 
> >  On Wed, Aug 14, 2019 at 5:13 PM Jark Wu  wrote:
> > 
> > > Hi Gordon,
> > >
> > > I have verified the following things:
> > >
> > > - built the source release with Scala 2.12 and Scala 2.11
> > successfully
> > > - checked/verified signatures and hashes
> > > - checked that all POM files point to the same version
> > > - ran some flink-table-related end-to-end tests locally, which succeeded
> > > (except the TPC-H e2e test, which failed and is reported in FLINK-13704)
> > > - started cluster for both Scala 2.11 and 2.12, ran examples,
> > verified
> > >>> web
> > > ui and log output, nothing unexpected
> > > - started a cluster, ran a SQL query doing a temporal join of a Kafka
> > > source with a MySQL JDBC table, and wrote the results back to Kafka,
> > > using DDL to create the source and sinks; looks good.
> > > - reviewed the release 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Tzu-Li (Gordon) Tai
Hi Kurt,

With the same argument as before, given that it is mentioned in the release
announcement that it is a preview feature, I would not block this release
because of it.
Nevertheless, it would be important to mention this explicitly in the
release notes [1].

Regards,
Gordon

[1] https://github.com/apache/flink/pull/9438

On Thu, Aug 15, 2019 at 11:29 AM Timo Walther  wrote:

> Hi Kurt,
>
> I agree that this is a serious bug. However, I would not block the
> release because of this. As you said, there is a workaround and the
> `execute()` works in the most common case of a single execution. We can
> fix this in a minor release shortly after.
>
> What do others think?
>
> Regards,
> Timo
>
>
> Am 15.08.19 um 11:23 schrieb Kurt Young:
> > Hi,
> >
> > We just found a serious bug around the blink planner:
> > https://issues.apache.org/jira/browse/FLINK-13708
> > When a user reuses the table environment instance and calls the `execute`
> > method multiple times for different SQL statements, the later call will
> > trigger the earlier ones to be re-executed.
> >
> > It's a serious bug, but it seems we also have a workaround, which is to
> > never reuse the table environment object. I'm not sure if we should treat
> > this one as a blocker issue for 1.9.0.
> >
> > What's your opinion?
> >
> > Best,
> > Kurt
> >

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Andrey Zagrebin
+1 (non-binding)

Tested on AWS EMR YARN: 1 master and 4 worker nodes (m5.xlarge: 4 vCores, 16
GiB).

EMR runs only on Java 8. Fine-grained recovery is enabled by default.

Modified E2E test scripts can be found here (asserting output):
https://github.com/azagrebin/flink/commits/FLINK-13597

Batch SQL:

   - S3(a) filesystem over HADOOP works out-of-the-box (already on AWS
   class path) and also if put in plugins

Streaming SQL:

   - Hadoop output (s3 does not support recoverable writers)
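A rough sketch of the plugins-based filesystem setup mentioned above; the jar name and directory layout are assumptions based on the 1.9.0 distribution, and stub files stand in for the real ones:

```shell
# Stub demo of the plugins layout: copy the filesystem jar from opt/ into its
# own subdirectory under plugins/. Jar name assumed from the 1.9.0 distribution.
FLINK_HOME=$(mktemp -d)                                  # stand-in install dir
mkdir -p "$FLINK_HOME/opt" "$FLINK_HOME/plugins/s3-fs-hadoop"
touch "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.9.0.jar"     # stub for the real jar
cp "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.9.0.jar" \
   "$FLINK_HOME/plugins/s3-fs-hadoop/"
ls "$FLINK_HOME/plugins/s3-fs-hadoop"
```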



Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Timo Walther

Hi Kurt,

I agree that this is a serious bug. However, I would not block the 
release because of this. As you said, there is a workaround and the 
`execute()` works in the most common case of a single execution. We can 
fix this in a minor release shortly after.


What do others think?

Regards,
Timo


Am 15.08.19 um 11:23 schrieb Kurt Young:

Hi,

We just found a serious bug around the blink planner:
https://issues.apache.org/jira/browse/FLINK-13708
When a user reuses the table environment instance and calls the `execute`
method multiple times for different SQL statements, the later call will
trigger the earlier ones to be re-executed.

It's a serious bug, but it seems we also have a workaround, which is to never
reuse the table environment object. I'm not sure if we should treat this one
as a blocker issue for 1.9.0.

What's your opinion?

Best,
Kurt



Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Kurt Young
Hi,

We just found a serious bug around the blink planner:
https://issues.apache.org/jira/browse/FLINK-13708
When a user reuses the table environment instance and calls the `execute`
method multiple times for different SQL statements, the later call will
trigger the earlier ones to be re-executed.

It's a serious bug, but it seems we also have a workaround, which is to never
reuse the table environment object. I'm not sure if we should treat this one
as a blocker issue for 1.9.0.

What's your opinion?

Best,
Kurt



Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-15 Thread Gary Yao
+1 (non-binding)

Jepsen test suite passed 10 times consecutively

On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek 
wrote:

> +1
>
> I did some testing on a Google Cloud Dataproc cluster (it gives you a
> managed YARN and Google Cloud Storage (GCS)):
>   - tried both YARN session mode and YARN per-job mode, also using
> bin/flink list/cancel/etc. against a YARN session cluster
>   - ran examples that write to GCS, both with the native Hadoop FileSystem
> and a custom “plugin” FileSystem
>   - ran stateful streaming jobs that use GCS as a checkpoint backend
>   - tried running SQL programs on YARN using the SQL Cli: this worked for
> YARN session mode but not for YARN per-job mode. Looking at the code, I
> don’t think per-job mode would work, judging from how it is implemented. But
> I think it’s an OK restriction to have for now
>   - in all the testing I had fine-grained recovery (region failover)
> enabled but I didn’t simulate any failures
>
> > On 14. Aug 2019, at 15:20, Kurt Young  wrote:
> >
> > Hi,
> >
> > Thanks for preparing this release candidate. I have verified the
> following:
> >
> > - verified the checksums and GPG files match the corresponding release
> files
> > - verified that the source archives do not contain any binaries
> > - built the source release with Scala 2.11 successfully.
> > - ran `mvn verify` locally, met 2 issues [FLINK-13687] and
> [FLINK-13688],
> > but
> > both are not release blockers. Other than that, all tests are passed.
> > - ran all e2e tests which don't need to download external packages (it's
> very
> > unstable
> > in China and almost impossible to download them), all passed.
> > - started local cluster, ran some examples. Met a small website display
> > issue
> > [FLINK-13591], which is also not a release blocker.
> >
> > Although we have pushed some fixes around the blink planner and hive
> > integration after RC2, considering these are both preview features, I'm
> > OK with releasing without these fixes.
> >
> > +1 from my side. (binding)
> >
> > Best,
> > Kurt
> >
> >
> > On Wed, Aug 14, 2019 at 5:13 PM Jark Wu  wrote:
> >
> >> Hi Gordon,
> >>
> >> I have verified the following things:
> >>
> >> - built the source release with Scala 2.12 and Scala 2.11 successfully
> >> - checked/verified signatures and hashes
> >> - checked that all POM files point to the same version
> >> - ran some flink-table-related end-to-end tests locally, which succeeded
> >> (except the TPC-H e2e test, which failed and is reported in FLINK-13704)
> >> - started cluster for both Scala 2.11 and 2.12, ran examples, verified
> web
> >> ui and log output, nothing unexpected
> >> - started a cluster, ran a SQL query doing a temporal join of a Kafka
> >> source with a MySQL JDBC table, and wrote the results back to Kafka,
> >> using DDL to create the source and sinks; looks good.
> >> - reviewed the release PR
> >>
> >> As FLINK-13704 is not recognized as a blocker issue, +1 from my side
> >> (non-binding).
> >>
> >> On Tue, 13 Aug 2019 at 17:07, Till Rohrmann 
> wrote:
> >>
> >>> Hi Richard,
> >>>
> >>> although I can see that it would be handy for users who have PubSub set
> >> up,
> >>> I would rather not include examples which require an external
> dependency
> >>> into the Flink distribution. I think examples should be self-contained.
> >> My
> >>> concern is that we would bloat the distribution for many users at the
> >>> benefit of a few. Instead, I think it would be better to make these
> >>> examples available differently, maybe through Flink's ecosystem website
> >> or
> >>> maybe a new examples section in Flink's documentation.
> >>>
> >>> Cheers,
> >>> Till
> >>>
> >>> On Tue, Aug 13, 2019 at 9:43 AM Jark Wu  wrote:
> >>>
>  Hi Till,
> 
>  After thinking about it, we can use VARCHAR as an alternative to
>  timestamp/time/date.
>  I'm fine with not recognizing it as a blocker issue.
>  We can fix it in 1.9.1.
> 
> 
>  Thanks,
>  Jark
> 
> 
>  On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder 
> >>> wrote:
> 
> > Hello all,
> >
> > I noticed the PubSub example jar is not included in the examples/ dir
> >>> of
> > flink-dist. I've created
>  https://issues.apache.org/jira/browse/FLINK-13700
> > + https://github.com/apache/flink/pull/9424/files to fix this.
> >
> > I will leave it up to you to decide if we want to add this to 1.9.0.
> >
> > Regards,
> >
> > Richard
> >
> > On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann 
> > wrote:
> >
> >> Hi Jark,
> >>
> >> thanks for reporting this issue. Could this be a documented
> >>> limitation
>  of
> >> Blink's preview version? I think we have agreed that the Blink SQL
> > planner
> >> will be rather a preview feature than production ready. Hence it
> >>> could
> >> still contain some bugs. My concern is that there might still be other
> >> issues which we'll discover bit by bit and could postpone 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-14 Thread Kurt Young
Hi Robert,

I will do it today.

Best,
Kurt


On Wed, Aug 14, 2019 at 11:55 PM Robert Metzger  wrote:

> Has anybody verified the inclusion of all bundled dependencies into the
> NOTICE files?
>
> I'm asking because we had some issues with that in the last release(s).
>

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-14 Thread Robert Metzger
Has anybody verified the inclusion of all bundled dependencies into the
NOTICE files?

I'm asking because we had some issues with that in the last release(s).
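One hypothetical way to spot-check this is to list the third-party package roots bundled into a shaded jar (here, an already-extracted one) and compare them against the jar's NOTICE entries by hand; the paths below are stubs so the commands run as-is:

```shell
# Stub demo: find non-Flink package roots among the class files of an
# extracted shaded jar, to cross-check against the jar's NOTICE by hand.
mkdir -p extracted/org/apache/flink/runtime \
         extracted/com/google/guava \
         extracted/io/netty
touch extracted/org/apache/flink/runtime/A.class \
      extracted/com/google/guava/B.class \
      extracted/io/netty/C.class
find extracted -name '*.class' \
  | sed 's|^extracted/||' \
  | grep -v '^org/apache/flink' \
  | cut -d/ -f1-2 | sort -u
```

Every package root the pipeline prints (here `com/google` and `io/netty`) should have a corresponding NOTICE entry.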

> and
> >> mysql jdbc table, and write results to kafka again. Using DDL to create
> the
> >> source and sinks. looks good.
> >> - reviewed the release PR
> >>
> >> As FLINK-13704 is not recognized as blocker issue, so +1 from my side
> >> (non-binding).
> >>
> >> On Tue, 13 Aug 2019 at 17:07, Till Rohrmann 
> wrote:
> >>
> >>> Hi Richard,
> >>>
> >>> although I can see that it would be handy for users who have PubSub set
> >> up,
> >>> I would rather not include examples which require an external
> dependency
> >>> into the Flink distribution. I think examples should be self-contained.
> >> My
> >>> concern is that we would bloat the distribution for many users at the
> >>> benefit of a few. Instead, I think it would be better to make these
> >>> examples available differently, maybe through Flink's ecosystem website
> >> or
> >>> maybe a new examples section in Flink's documentation.
> >>>
> >>> Cheers,
> >>> Till
> >>>
> >>> On Tue, Aug 13, 2019 at 9:43 AM Jark Wu  wrote:
> >>>
>  Hi Till,
> 
>  After thinking about we can use VARCHAR as an alternative of
>  timestamp/time/date.
>  I'm fine with not recognize it as a blocker issue.
>  We can fix it into 1.9.1.
> 
> 
>  Thanks,
>  Jark
> 
> 
>  On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder 
> >>> wrote:
> 
> > Hello all,
> >
> > I noticed the PubSub example jar is not included in the examples/ dir
> >>> of
> > flink-dist. I've created
>  https://issues.apache.org/jira/browse/FLINK-13700
> > + https://github.com/apache/flink/pull/9424/files to fix this.
> >
> > I will leave it up to you to decide if we want to add this to 1.9.0.
> >
> > Regards,
> >
> > Richard
> >
> > On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann 
> > wrote:
> >
> >> Hi Jark,
> >>
> >> thanks for reporting this issue. Could this be a documented
> >>> limitation
>  of
> >> Blink's preview version? I think we have agreed that the Blink SQL
> > planner
> >> will be rather a preview feature than production ready. Hence it
> >>> could
> >> still contain some bugs. My concern is that there 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-14 Thread Aljoscha Krettek
+1

I did some testing on a Google Cloud Dataproc cluster (it gives you a managed 
YARN and Google Cloud Storage (GCS)):
  - tried both YARN session mode and YARN per-job mode, also using bin/flink 
list/cancel/etc. against a YARN session cluster
  - ran examples that write to GCS, both with the native Hadoop FileSystem and 
a custom “plugin” FileSystem
  - ran stateful streaming jobs that use GCS as a checkpoint backend
  - tried running SQL programs on YARN using the SQL CLI: this worked for YARN 
session mode but not for YARN per-job mode. Looking at the code, I don't think 
per-job mode would work, given how it is implemented. But I think it's an 
OK restriction to have for now
  - in all the testing I had fine-grained recovery (region failover) enabled 
but I didn’t simulate any failures

> On 14. Aug 2019, at 15:20, Kurt Young  wrote:
> 
> Hi,
> 
> Thanks for preparing this release candidate. I have verified the following:
> 
> - verified the checksums and GPG files match the corresponding release files
> - verified that the source archives do not contains any binaries
> - build the source release with Scala 2.11 successfully.
> - ran `mvn verify` locally, met 2 issuses [FLINK-13687] and [FLINK-13688],
> but
> both are not release blockers. Other than that, all tests are passed.
> - ran all e2e tests which don't need download external packages (it's very
> unstable
> in China and almost impossible to download them), all passed.
> - started local cluster, ran some examples. Met a small website display
> issue
> [FLINK-13591], which is also not a release blocker.
> 
> Although we have pushed some fixes around blink planner and hive
> integration
> after RC2, but consider these are both preview features, I'm lean to be ok
> to release
> without these fixes.
> 
> +1 from my side. (binding)
> 
> Best,
> Kurt
> 
> 
> On Wed, Aug 14, 2019 at 5:13 PM Jark Wu  wrote:
> 
>> Hi Gordon,
>> 
>> I have verified the following things:
>> 
>> - build the source release with Scala 2.12 and Scala 2.11 successfully
>> - checked/verified signatures and hashes
>> - checked that all POM files point to the same version
>> - ran some flink table related end-to-end tests locally and succeeded
>> (except TPC-H e2e failed which is reported in FLINK-13704)
>> - started cluster for both Scala 2.11 and 2.12, ran examples, verified web
>> ui and log output, nothing unexpected
>> - started cluster, ran a SQL query to temporal join with kafka source and
>> mysql jdbc table, and write results to kafka again. Using DDL to create the
>> source and sinks. looks good.
>> - reviewed the release PR
>> 
>> As FLINK-13704 is not recognized as blocker issue, so +1 from my side
>> (non-binding).
>> 
>> On Tue, 13 Aug 2019 at 17:07, Till Rohrmann  wrote:
>> 
>>> Hi Richard,
>>> 
>>> although I can see that it would be handy for users who have PubSub set
>> up,
>>> I would rather not include examples which require an external dependency
>>> into the Flink distribution. I think examples should be self-contained.
>> My
>>> concern is that we would bloat the distribution for many users at the
>>> benefit of a few. Instead, I think it would be better to make these
>>> examples available differently, maybe through Flink's ecosystem website
>> or
>>> maybe a new examples section in Flink's documentation.
>>> 
>>> Cheers,
>>> Till
>>> 
>>> On Tue, Aug 13, 2019 at 9:43 AM Jark Wu  wrote:
>>> 
 Hi Till,
 
 After thinking about we can use VARCHAR as an alternative of
 timestamp/time/date.
 I'm fine with not recognize it as a blocker issue.
 We can fix it into 1.9.1.
 
 
 Thanks,
 Jark
 
 
 On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder 
>>> wrote:
 
> Hello all,
> 
> I noticed the PubSub example jar is not included in the examples/ dir
>>> of
> flink-dist. I've created
 https://issues.apache.org/jira/browse/FLINK-13700
> + https://github.com/apache/flink/pull/9424/files to fix this.
> 
> I will leave it up to you to decide if we want to add this to 1.9.0.
> 
> Regards,
> 
> Richard
> 
> On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann 
> wrote:
> 
>> Hi Jark,
>> 
>> thanks for reporting this issue. Could this be a documented
>>> limitation
 of
>> Blink's preview version? I think we have agreed that the Blink SQL
> planner
>> will be rather a preview feature than production ready. Hence it
>>> could
>> still contain some bugs. My concern is that there might be still
>>> other
>> issues which we'll discover bit by bit and could postpone the
>> release
> even
>> further if we say Blink bugs are blockers.
>> 
>> Cheers,
>> Till
>> 
>> On Tue, Aug 13, 2019 at 7:42 AM Jark Wu  wrote:
>> 
>>> Hi all,
>>> 
>>> I just find an issue when testing connector DDLs against blink
 planner
>> for
>>> rc2.
>>> This issue lead to the DDL doesn't work when 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-14 Thread Kurt Young
Hi,

Thanks for preparing this release candidate. I have verified the following:

- verified the checksums and GPG files match the corresponding release files
- verified that the source archives do not contain any binaries
- built the source release with Scala 2.11 successfully
- ran `mvn verify` locally, hit two issues [FLINK-13687] and [FLINK-13688],
but neither is a release blocker. Other than that, all tests passed.
- ran all e2e tests that don't need to download external packages (downloads
are very unstable from China and often impossible); all passed.
- started a local cluster and ran some examples. Hit a small web UI display
issue [FLINK-13591], which is also not a release blocker.

Although we have pushed some fixes for the Blink planner and Hive
integration after RC2, since these are both preview features I'm OK with
releasing without them.

+1 from my side. (binding)

Best,
Kurt
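For reference, the checksum step in verification lists like the one above can be reproduced roughly as follows. This is a self-contained sketch using a scratch stand-in file; for the real check, point it at the actual flink-*.tgz artifact and its published .sha512/.asc files:

```shell
# Sketch of verifying a SHA-512 checksum: first recreate what a release
# manager publishes (artifact plus .sha512 file), then run the voter's check.
set -e
workdir=$(mktemp -d); cd "$workdir"
printf 'release payload\n' > artifact.tgz          # stand-in artifact
sha512sum artifact.tgz > artifact.tgz.sha512       # published alongside it
sha512sum -c artifact.tgz.sha512                   # the voter's check
# Signature check (needs the real .asc file and the project's KEYS file):
#   gpg --import KEYS && gpg --verify artifact.tgz.asc artifact.tgz
```

`sha512sum -c` exits non-zero on any mismatch, so it slots easily into a scripted verification run.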


On Wed, Aug 14, 2019 at 5:13 PM Jark Wu  wrote:

> Hi Gordon,
>
> I have verified the following things:
>
> - build the source release with Scala 2.12 and Scala 2.11 successfully
> - checked/verified signatures and hashes
> - checked that all POM files point to the same version
> - ran some flink table related end-to-end tests locally and succeeded
> (except TPC-H e2e failed which is reported in FLINK-13704)
> - started cluster for both Scala 2.11 and 2.12, ran examples, verified web
> ui and log output, nothing unexpected
> - started cluster, ran a SQL query to temporal join with kafka source and
> mysql jdbc table, and write results to kafka again. Using DDL to create the
> source and sinks. looks good.
> - reviewed the release PR
>
> As FLINK-13704 is not recognized as blocker issue, so +1 from my side
> (non-binding).
>
> On Tue, 13 Aug 2019 at 17:07, Till Rohrmann  wrote:
>
> > Hi Richard,
> >
> > although I can see that it would be handy for users who have PubSub set
> up,
> > I would rather not include examples which require an external dependency
> > into the Flink distribution. I think examples should be self-contained.
> My
> > concern is that we would bloat the distribution for many users at the
> > benefit of a few. Instead, I think it would be better to make these
> > examples available differently, maybe through Flink's ecosystem website
> or
> > maybe a new examples section in Flink's documentation.
> >
> > Cheers,
> > Till
> >
> > On Tue, Aug 13, 2019 at 9:43 AM Jark Wu  wrote:
> >
> > > Hi Till,
> > >
> > > After thinking about we can use VARCHAR as an alternative of
> > > timestamp/time/date.
> > > I'm fine with not recognize it as a blocker issue.
> > > We can fix it into 1.9.1.
> > >
> > >
> > > Thanks,
> > > Jark
> > >
> > >
> > > On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder 
> > wrote:
> > >
> > > > Hello all,
> > > >
> > > > I noticed the PubSub example jar is not included in the examples/ dir
> > of
> > > > flink-dist. I've created
> > > https://issues.apache.org/jira/browse/FLINK-13700
> > > >  + https://github.com/apache/flink/pull/9424/files to fix this.
> > > >
> > > > I will leave it up to you to decide if we want to add this to 1.9.0.
> > > >
> > > > Regards,
> > > >
> > > > Richard
> > > >
> > > > On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann 
> > > > wrote:
> > > >
> > > > > Hi Jark,
> > > > >
> > > > > thanks for reporting this issue. Could this be a documented
> > limitation
> > > of
> > > > > Blink's preview version? I think we have agreed that the Blink SQL
> > > > planner
> > > > > will be rather a preview feature than production ready. Hence it
> > could
> > > > > still contain some bugs. My concern is that there might be still
> > other
> > > > > issues which we'll discover bit by bit and could postpone the
> release
> > > > even
> > > > > further if we say Blink bugs are blockers.
> > > > >
> > > > > Cheers,
> > > > > Till
> > > > >
> > > > > On Tue, Aug 13, 2019 at 7:42 AM Jark Wu  wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I just find an issue when testing connector DDLs against blink
> > > planner
> > > > > for
> > > > > > rc2.
> > > > > > This issue lead to the DDL doesn't work when containing
> > > > > timestamp/date/time
> > > > > > type.
> > > > > > I have created an issue FLINK-13699[1] and a pull request for
> this.
> > > > > >
> > > > > > IMO, this can be a blocker issue of 1.9 release. Because
> > > > > > timestamp/date/time are primitive types, and this will break the
> > DDL
> > > > > > feature.
> > > > > > However, I want to hear more thoughts from the community whether
> we
> > > > > should
> > > > > > recognize it as a blocker.
> > > > > >
> > > > > > Thanks,
> > > > > > Jark
> > > > > >
> > > > > >
> > > > > > [1]: https://issues.apache.org/jira/browse/FLINK-13699
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Mon, 12 Aug 2019 at 22:46, Becket Qin 
> > > wrote:
> > > > > >
> > > > > > > Thanks Gordon, will do that.
> > > > > > >
> > > > > > > On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai <
> > > > > 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-14 Thread Jark Wu
Hi Gordon,

I have verified the following things:

- built the source release with Scala 2.12 and Scala 2.11 successfully
- checked/verified signatures and hashes
- checked that all POM files point to the same version
- ran some Flink Table related end-to-end tests locally; all succeeded
except the TPC-H e2e test, which failed and is reported in FLINK-13704
- started clusters for both Scala 2.11 and 2.12, ran examples, verified the
web UI and log output; nothing unexpected
- started a cluster and ran a SQL query doing a temporal join between a
Kafka source and a MySQL JDBC table, writing the results back to Kafka,
using DDL to create the source and sinks; looks good
- reviewed the release PR

As FLINK-13704 is not recognized as a blocker issue, +1 from my side
(non-binding).
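The "all POM files point to the same version" check can be scripted along these lines. This is a sketch against a scratch two-module tree; for the real check, run the `find` from the extracted source release root:

```shell
# Sketch: assert every pom.xml in a source tree declares the same <version>.
# A scratch two-module tree stands in for the extracted source release.
set -e
tree=$(mktemp -d)
mkdir -p "$tree/flink-core"
printf '<project><version>1.9.0</version></project>\n' > "$tree/pom.xml"
printf '<project><version>1.9.0</version></project>\n' > "$tree/flink-core/pom.xml"

# Take the first <version> tag from each pom and count the distinct values.
versions=$(find "$tree" -name pom.xml \
  -exec grep -m1 -o '<version>[^<]*</version>' {} \; | sort -u)
[ "$(printf '%s\n' "$versions" | wc -l)" -eq 1 ] && echo "all POMs agree: $versions"
```

Note that in a real Maven tree the first `<version>` tag is usually the parent version, which is exactly what a release check wants to see agree across modules.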

On Tue, 13 Aug 2019 at 17:07, Till Rohrmann  wrote:

> Hi Richard,
>
> although I can see that it would be handy for users who have PubSub set up,
> I would rather not include examples which require an external dependency
> into the Flink distribution. I think examples should be self-contained. My
> concern is that we would bloat the distribution for many users at the
> benefit of a few. Instead, I think it would be better to make these
> examples available differently, maybe through Flink's ecosystem website or
> maybe a new examples section in Flink's documentation.
>
> Cheers,
> Till
>
> On Tue, Aug 13, 2019 at 9:43 AM Jark Wu  wrote:
>
> > Hi Till,
> >
> > After thinking about we can use VARCHAR as an alternative of
> > timestamp/time/date.
> > I'm fine with not recognize it as a blocker issue.
> > We can fix it into 1.9.1.
> >
> >
> > Thanks,
> > Jark
> >
> >
> > On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder 
> wrote:
> >
> > > Hello all,
> > >
> > > I noticed the PubSub example jar is not included in the examples/ dir
> of
> > > flink-dist. I've created
> > https://issues.apache.org/jira/browse/FLINK-13700
> > >  + https://github.com/apache/flink/pull/9424/files to fix this.
> > >
> > > I will leave it up to you to decide if we want to add this to 1.9.0.
> > >
> > > Regards,
> > >
> > > Richard
> > >
> > > On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann 
> > > wrote:
> > >
> > > > Hi Jark,
> > > >
> > > > thanks for reporting this issue. Could this be a documented
> limitation
> > of
> > > > Blink's preview version? I think we have agreed that the Blink SQL
> > > planner
> > > > will be rather a preview feature than production ready. Hence it
> could
> > > > still contain some bugs. My concern is that there might be still
> other
> > > > issues which we'll discover bit by bit and could postpone the release
> > > even
> > > > further if we say Blink bugs are blockers.
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Tue, Aug 13, 2019 at 7:42 AM Jark Wu  wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I just find an issue when testing connector DDLs against blink
> > planner
> > > > for
> > > > > rc2.
> > > > > This issue lead to the DDL doesn't work when containing
> > > > timestamp/date/time
> > > > > type.
> > > > > I have created an issue FLINK-13699[1] and a pull request for this.
> > > > >
> > > > > IMO, this can be a blocker issue of 1.9 release. Because
> > > > > timestamp/date/time are primitive types, and this will break the
> DDL
> > > > > feature.
> > > > > However, I want to hear more thoughts from the community whether we
> > > > should
> > > > > recognize it as a blocker.
> > > > >
> > > > > Thanks,
> > > > > Jark
> > > > >
> > > > >
> > > > > [1]: https://issues.apache.org/jira/browse/FLINK-13699
> > > > >
> > > > >
> > > > >
> > > > > On Mon, 12 Aug 2019 at 22:46, Becket Qin 
> > wrote:
> > > > >
> > > > > > Thanks Gordon, will do that.
> > > > > >
> > > > > > On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai <
> > > > tzuli...@apache.org
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Concerning FLINK-13231:
> > > > > > >
> > > > > > > Since this is a @PublicEvolving interface, technically it is ok
> > to
> > > > > break
> > > > > > > it across releases (including across bugfix releases?).
> > > > > > > So, @Becket if you do merge it now, please mark the fix version
> > as
> > > > > 1.9.1.
> > > > > > >
> > > > > > > During the voting process, in the case a new RC is created, we
> > > > usually
> > > > > > > check the list of changes compared to the previous RC, and
> > correct
> > > > the
> > > > > > "Fix
> > > > > > > Version" of the corresponding JIRAs to be the right version (in
> > the
> > > > > case,
> > > > > > > it would be corrected to 1.9.0 instead of 1.9.1).
> > > > > > >
> > > > > > > On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann <
> > > trohrm...@apache.org>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> I agree that it would be nicer. Not sure whether we should
> > cancel
> > > > the
> > > > > RC
> > > > > > >> for this issue given that it is open for quite some time and
> > > hasn't
> > > > > been
> > > > > > >> addressed until very recently. Maybe we could include it on
> the
> > > > > > shortlist
> > 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-13 Thread Till Rohrmann
Hi Richard,

although I can see that it would be handy for users who have PubSub set up,
I would rather not include examples which require an external dependency
into the Flink distribution. I think examples should be self-contained. My
concern is that we would bloat the distribution for many users for the
benefit of a few. Instead, I think it would be better to make these
examples available differently, maybe through Flink's ecosystem website or
maybe a new examples section in Flink's documentation.

Cheers,
Till

On Tue, Aug 13, 2019 at 9:43 AM Jark Wu  wrote:

> Hi Till,
>
> After thinking about we can use VARCHAR as an alternative of
> timestamp/time/date.
> I'm fine with not recognize it as a blocker issue.
> We can fix it into 1.9.1.
>
>
> Thanks,
> Jark
>
>
> On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder  wrote:
>
> > Hello all,
> >
> > I noticed the PubSub example jar is not included in the examples/ dir of
> > flink-dist. I've created
> https://issues.apache.org/jira/browse/FLINK-13700
> >  + https://github.com/apache/flink/pull/9424/files to fix this.
> >
> > I will leave it up to you to decide if we want to add this to 1.9.0.
> >
> > Regards,
> >
> > Richard
> >
> > On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann 
> > wrote:
> >
> > > Hi Jark,
> > >
> > > thanks for reporting this issue. Could this be a documented limitation
> of
> > > Blink's preview version? I think we have agreed that the Blink SQL
> > planner
> > > will be rather a preview feature than production ready. Hence it could
> > > still contain some bugs. My concern is that there might be still other
> > > issues which we'll discover bit by bit and could postpone the release
> > even
> > > further if we say Blink bugs are blockers.
> > >
> > > Cheers,
> > > Till
> > >
> > > On Tue, Aug 13, 2019 at 7:42 AM Jark Wu  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I just find an issue when testing connector DDLs against blink
> planner
> > > for
> > > > rc2.
> > > > This issue lead to the DDL doesn't work when containing
> > > timestamp/date/time
> > > > type.
> > > > I have created an issue FLINK-13699[1] and a pull request for this.
> > > >
> > > > IMO, this can be a blocker issue of 1.9 release. Because
> > > > timestamp/date/time are primitive types, and this will break the DDL
> > > > feature.
> > > > However, I want to hear more thoughts from the community whether we
> > > should
> > > > recognize it as a blocker.
> > > >
> > > > Thanks,
> > > > Jark
> > > >
> > > >
> > > > [1]: https://issues.apache.org/jira/browse/FLINK-13699
> > > >
> > > >
> > > >
> > > > On Mon, 12 Aug 2019 at 22:46, Becket Qin 
> wrote:
> > > >
> > > > > Thanks Gordon, will do that.
> > > > >
> > > > > On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai <
> > > tzuli...@apache.org
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Concerning FLINK-13231:
> > > > > >
> > > > > > Since this is a @PublicEvolving interface, technically it is ok
> to
> > > > break
> > > > > > it across releases (including across bugfix releases?).
> > > > > > So, @Becket if you do merge it now, please mark the fix version
> as
> > > > 1.9.1.
> > > > > >
> > > > > > During the voting process, in the case a new RC is created, we
> > > usually
> > > > > > check the list of changes compared to the previous RC, and
> correct
> > > the
> > > > > "Fix
> > > > > > Version" of the corresponding JIRAs to be the right version (in
> the
> > > > case,
> > > > > > it would be corrected to 1.9.0 instead of 1.9.1).
> > > > > >
> > > > > > On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann <
> > trohrm...@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > >> I agree that it would be nicer. Not sure whether we should
> cancel
> > > the
> > > > RC
> > > > > >> for this issue given that it is open for quite some time and
> > hasn't
> > > > been
> > > > > >> addressed until very recently. Maybe we could include it on the
> > > > > shortlist
> > > > > >> of nice-to-do things which we do in case that the RC gets
> > cancelled.
> > > > > >>
> > > > > >> Cheers,
> > > > > >> Till
> > > > > >>
> > > > > >> On Mon, Aug 12, 2019 at 4:18 PM Becket Qin <
> becket@gmail.com>
> > > > > wrote:
> > > > > >>
> > > > > >>> Hi Till,
> > > > > >>>
> > > > > >>> Yes, I think we have already documented in that way. So
> > technically
> > > > > >>> speaking it is fine to change it later. It is just better if we
> > > could
> > > > > >>> avoid
> > > > > >>> doing that.
> > > > > >>>
> > > > > >>> Thanks,
> > > > > >>>
> > > > > >>> Jiangjie (Becket) Qin
> > > > > >>>
> > > > > >>> On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann <
> > > trohrm...@apache.org>
> > > > > >>> wrote:
> > > > > >>>
> > > > > >>> > Could we say that the PubSub connector is public evolving
> > > instead?
> > > > > >>> >
> > > > > >>> > Cheers,
> > > > > >>> > Till
> > > > > >>> >
> > > > > >>> > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin <
> > becket@gmail.com
> > > >
> > > > > >>> wrote:
> > > > > >>> >
> > > > > >>> > > Hi all,
> > > > > >>> 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-13 Thread Jark Wu
Hi Till,

After thinking about it, we can use VARCHAR as an alternative to
TIMESTAMP/TIME/DATE.
I'm fine with not recognizing it as a blocker issue.
We can fix it in 1.9.1.


Thanks,
Jark


On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder  wrote:

> Hello all,
>
> I noticed the PubSub example jar is not included in the examples/ dir of
> flink-dist. I've created https://issues.apache.org/jira/browse/FLINK-13700
>  + https://github.com/apache/flink/pull/9424/files to fix this.
>
> I will leave it up to you to decide if we want to add this to 1.9.0.
>
> Regards,
>
> Richard
>
> On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann 
> wrote:
>
> > Hi Jark,
> >
> > thanks for reporting this issue. Could this be a documented limitation of
> > Blink's preview version? I think we have agreed that the Blink SQL
> planner
> > will be rather a preview feature than production ready. Hence it could
> > still contain some bugs. My concern is that there might be still other
> > issues which we'll discover bit by bit and could postpone the release
> even
> > further if we say Blink bugs are blockers.
> >
> > Cheers,
> > Till
> >
> > On Tue, Aug 13, 2019 at 7:42 AM Jark Wu  wrote:
> >
> > > Hi all,
> > >
> > > I just find an issue when testing connector DDLs against blink planner
> > for
> > > rc2.
> > > This issue lead to the DDL doesn't work when containing
> > timestamp/date/time
> > > type.
> > > I have created an issue FLINK-13699[1] and a pull request for this.
> > >
> > > IMO, this can be a blocker issue of 1.9 release. Because
> > > timestamp/date/time are primitive types, and this will break the DDL
> > > feature.
> > > However, I want to hear more thoughts from the community whether we
> > should
> > > recognize it as a blocker.
> > >
> > > Thanks,
> > > Jark
> > >
> > >
> > > [1]: https://issues.apache.org/jira/browse/FLINK-13699
> > >
> > >
> > >
> > > On Mon, 12 Aug 2019 at 22:46, Becket Qin  wrote:
> > >
> > > > Thanks Gordon, will do that.
> > > >
> > > > On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai <
> > tzuli...@apache.org
> > > >
> > > > wrote:
> > > >
> > > > > Concerning FLINK-13231:
> > > > >
> > > > > Since this is a @PublicEvolving interface, technically it is ok to
> > > break
> > > > > it across releases (including across bugfix releases?).
> > > > > So, @Becket if you do merge it now, please mark the fix version as
> > > 1.9.1.
> > > > >
> > > > > During the voting process, in the case a new RC is created, we
> > usually
> > > > > check the list of changes compared to the previous RC, and correct
> > the
> > > > "Fix
> > > > > Version" of the corresponding JIRAs to be the right version (in the
> > > case,
> > > > > it would be corrected to 1.9.0 instead of 1.9.1).
> > > > >
> > > > > On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann <
> trohrm...@apache.org>
> > > > > wrote:
> > > > >
> > > > >> I agree that it would be nicer. Not sure whether we should cancel
> > the
> > > RC
> > > > >> for this issue given that it is open for quite some time and
> hasn't
> > > been
> > > > >> addressed until very recently. Maybe we could include it on the
> > > > shortlist
> > > > >> of nice-to-do things which we do in case that the RC gets
> cancelled.
> > > > >>
> > > > >> Cheers,
> > > > >> Till
> > > > >>
> > > > >> On Mon, Aug 12, 2019 at 4:18 PM Becket Qin 
> > > > wrote:
> > > > >>
> > > > >>> Hi Till,
> > > > >>>
> > > > >>> Yes, I think we have already documented in that way. So
> technically
> > > > >>> speaking it is fine to change it later. It is just better if we
> > could
> > > > >>> avoid
> > > > >>> doing that.
> > > > >>>
> > > > >>> Thanks,
> > > > >>>
> > > > >>> Jiangjie (Becket) Qin
> > > > >>>
> > > > >>> On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann <
> > trohrm...@apache.org>
> > > > >>> wrote:
> > > > >>>
> > > > >>> > Could we say that the PubSub connector is public evolving
> > instead?
> > > > >>> >
> > > > >>> > Cheers,
> > > > >>> > Till
> > > > >>> >
> > > > >>> > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin <
> becket@gmail.com
> > >
> > > > >>> wrote:
> > > > >>> >
> > > > >>> > > Hi all,
> > > > >>> > >
> > > > >>> > > FLINK-13231(palindrome!) has a minor Google PubSub connector
> > API
> > > > >>> change
> > > > >>> > > regarding how to config rate limiting. The GCP PubSub
> connector
> > > is
> > > > a
> > > > >>> > newly
> > > > >>> > > introduced connector in 1.9, so it would be nice to include
> > this
> > > > >>> change
> > > > >>> > > into 1.9 rather than later to avoid a public API change. I am
> > > > >>> thinking of
> > > > >>> > > making this as a blocker for 1.9. Want to check what do
> others
> > > > think.
> > > > >>> > >
> > > > >>> > > Thanks,
> > > > >>> > >
> > > > >>> > > Jiangjie (Becket) Qin
> > > > >>> > >
> > > > >>> > > On Mon, Aug 12, 2019 at 2:04 PM Zili Chen <
> > wander4...@gmail.com>
> > > > >>> wrote:
> > > > >>> > >
> > > > >>> > > > Hi Kurt,
> > > > >>> > > >
> > > > >>> > > > Thanks for your explanation. For [1] I think at least we
> > should
> > > > 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-13 Thread Richard Deurwaarder
Hello all,

I noticed the PubSub example jar is not included in the examples/ dir of
flink-dist. I've created https://issues.apache.org/jira/browse/FLINK-13700
 + https://github.com/apache/flink/pull/9424/files to fix this.

I will leave it up to you to decide if we want to add this to 1.9.0.

Regards,

Richard

On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann  wrote:

> Hi Jark,
>
> thanks for reporting this issue. Could this be a documented limitation of
> Blink's preview version? I think we have agreed that the Blink SQL planner
> will be rather a preview feature than production ready. Hence it could
> still contain some bugs. My concern is that there might be still other
> issues which we'll discover bit by bit and could postpone the release even
> further if we say Blink bugs are blockers.
>
> Cheers,
> Till
>
> On Tue, Aug 13, 2019 at 7:42 AM Jark Wu  wrote:
>
> > Hi all,
> >
> > I just find an issue when testing connector DDLs against blink planner
> for
> > rc2.
> > This issue lead to the DDL doesn't work when containing
> timestamp/date/time
> > type.
> > I have created an issue FLINK-13699[1] and a pull request for this.
> >
> > IMO, this can be a blocker issue of 1.9 release. Because
> > timestamp/date/time are primitive types, and this will break the DDL
> > feature.
> > However, I want to hear more thoughts from the community whether we
> should
> > recognize it as a blocker.
> >
> > Thanks,
> > Jark
> >
> >
> > [1]: https://issues.apache.org/jira/browse/FLINK-13699
> >
> >
> >
> > On Mon, 12 Aug 2019 at 22:46, Becket Qin  wrote:
> >
> > > Thanks Gordon, will do that.
> > >
> > > On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai <
> tzuli...@apache.org
> > >
> > > wrote:
> > >
> > > > Concerning FLINK-13231:
> > > >
> > > > Since this is a @PublicEvolving interface, technically it is ok to
> > break
> > > > it across releases (including across bugfix releases?).
> > > > So, @Becket if you do merge it now, please mark the fix version as
> > 1.9.1.
> > > >
> > > > During the voting process, in the case a new RC is created, we
> usually
> > > > check the list of changes compared to the previous RC, and correct
> the
> > > "Fix
> > > > Version" of the corresponding JIRAs to be the right version (in the
> > case,
> > > > it would be corrected to 1.9.0 instead of 1.9.1).
> > > >
> > > > On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann 
> > > > wrote:
> > > >
> > > >> I agree that it would be nicer. Not sure whether we should cancel
> the
> > RC
> > > >> for this issue given that it is open for quite some time and hasn't
> > been
> > > >> addressed until very recently. Maybe we could include it on the
> > > shortlist
> > > >> of nice-to-do things which we do in case that the RC gets cancelled.
> > > >>
> > > >> Cheers,
> > > >> Till
> > > >>
> > > >> On Mon, Aug 12, 2019 at 4:18 PM Becket Qin 
> > > wrote:
> > > >>
> > > >>> Hi Till,
> > > >>>
> > > >>> Yes, I think we have already documented in that way. So technically
> > > >>> speaking it is fine to change it later. It is just better if we
> could
> > > >>> avoid
> > > >>> doing that.
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> Jiangjie (Becket) Qin
> > > >>>
> > > >>> On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann <
> trohrm...@apache.org>
> > > >>> wrote:
> > > >>>
> > > >>> > Could we say that the PubSub connector is public evolving
> instead?
> > > >>> >
> > > >>> > Cheers,
> > > >>> > Till
> > > >>> >
> > > >>> > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin  >
> > > >>> wrote:
> > > >>> >
> > > >>> > > Hi all,
> > > >>> > >
> > > >>> > > FLINK-13231(palindrome!) has a minor Google PubSub connector
> API
> > > >>> change
> > > >>> > > regarding how to config rate limiting. The GCP PubSub connector
> > is
> > > a
> > > >>> > newly
> > > >>> > > introduced connector in 1.9, so it would be nice to include
> this
> > > >>> change
> > > >>> > > into 1.9 rather than later to avoid a public API change. I am
> > > >>> thinking of
> > > >>> > > making this as a blocker for 1.9. Want to check what do others
> > > think.
> > > >>> > >
> > > >>> > > Thanks,
> > > >>> > >
> > > >>> > > Jiangjie (Becket) Qin
> > > >>> > >
> > > >>> > > On Mon, Aug 12, 2019 at 2:04 PM Zili Chen <
> wander4...@gmail.com>
> > > >>> wrote:
> > > >>> > >
> > > >>> > > > Hi Kurt,
> > > >>> > > >
> > > >>> > > > Thanks for your explanation. For [1] I think at least we
> should
> > > >>> change
> > > >>> > > > the JIRA issue field, like unset the fixed version. For [2] I
> > can
> > > >>> see
> > > >>> > > > the change is all in test scope but wonder if such a commit
> > still
> > > >>> > invalid
> > > >>> > > > the release candidate. IIRC previous RC VOTE threads would
> > > contain
> > > >>> a
> > > >>> > > > release manual/guide, I will try to look up it, too.
> > > >>> > > >
> > > >>> > > > Best,
> > > >>> > > > tison.
> > > >>> > > >
> > > >>> > > >
> > > >>> > > > Kurt Young  于2019年8月12日周一 下午5:42写道:
> > > >>> > > >
> > > >>> > > > > Hi Zili,
> > > >>> > > > >
> > > >>> > > > > 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-13 Thread Till Rohrmann
Hi Jark,

thanks for reporting this issue. Could this be a documented limitation of
Blink's preview version? I think we have agreed that the Blink SQL planner
will be a preview feature rather than production-ready, and hence it could
still contain some bugs. My concern is that there might still be other
issues that we'll discover bit by bit, which could postpone the release even
further if we say Blink bugs are blockers.

Cheers,
Till

On Tue, Aug 13, 2019 at 7:42 AM Jark Wu  wrote:

> Hi all,
>
> I just find an issue when testing connector DDLs against blink planner for
> rc2.
> This issue lead to the DDL doesn't work when containing timestamp/date/time
> type.
> I have created an issue FLINK-13699[1] and a pull request for this.
>
> IMO, this can be a blocker issue of 1.9 release. Because
> timestamp/date/time are primitive types, and this will break the DDL
> feature.
> However, I want to hear more thoughts from the community whether we should
> recognize it as a blocker.
>
> Thanks,
> Jark
>
>
> [1]: https://issues.apache.org/jira/browse/FLINK-13699
>
>
>
> On Mon, 12 Aug 2019 at 22:46, Becket Qin  wrote:
>
> > Thanks Gordon, will do that.
> >
> > On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai  >
> > wrote:
> >
> > > Concerning FLINK-13231:
> > >
> > > Since this is a @PublicEvolving interface, technically it is ok to
> break
> > > it across releases (including across bugfix releases?).
> > > So, @Becket if you do merge it now, please mark the fix version as
> 1.9.1.
> > >
> > > During the voting process, in the case a new RC is created, we usually
> > > check the list of changes compared to the previous RC, and correct the
> > "Fix
> > > Version" of the corresponding JIRAs to be the right version (in the
> case,
> > > it would be corrected to 1.9.0 instead of 1.9.1).
> > >
> > > On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann 
> > > wrote:
> > >
> > >> I agree that it would be nicer. Not sure whether we should cancel the
> RC
> > >> for this issue given that it is open for quite some time and hasn't
> been
> > >> addressed until very recently. Maybe we could include it on the
> > shortlist
> > >> of nice-to-do things which we do in case that the RC gets cancelled.
> > >>
> > >> Cheers,
> > >> Till
> > >>
> > >> On Mon, Aug 12, 2019 at 4:18 PM Becket Qin 
> > wrote:
> > >>
> > >>> Hi Till,
> > >>>
> > >>> Yes, I think we have already documented in that way. So technically
> > >>> speaking it is fine to change it later. It is just better if we could
> > >>> avoid
> > >>> doing that.
> > >>>
> > >>> Thanks,
> > >>>
> > >>> Jiangjie (Becket) Qin
> > >>>
> > >>> On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann 
> > >>> wrote:
> > >>>
> > >>> > Could we say that the PubSub connector is public evolving instead?
> > >>> >
> > >>> > Cheers,
> > >>> > Till
> > >>> >
> > >>> > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin 
> > >>> wrote:
> > >>> >
> > >>> > > Hi all,
> > >>> > >
> > >>> > > FLINK-13231(palindrome!) has a minor Google PubSub connector API
> > >>> change
> > >>> > > regarding how to config rate limiting. The GCP PubSub connector
> is
> > a
> > >>> > newly
> > >>> > > introduced connector in 1.9, so it would be nice to include this
> > >>> change
> > >>> > > into 1.9 rather than later to avoid a public API change. I am
> > >>> thinking of
> > >>> > > making this as a blocker for 1.9. Want to check what do others
> > think.
> > >>> > >
> > >>> > > Thanks,
> > >>> > >
> > >>> > > Jiangjie (Becket) Qin
> > >>> > >
> > >>> > > On Mon, Aug 12, 2019 at 2:04 PM Zili Chen 
> > >>> wrote:
> > >>> > >
> > >>> > > > Hi Kurt,
> > >>> > > >
> > >>> > > > Thanks for your explanation. For [1] I think at least we should
> > >>> change
> > >>> > > > the JIRA issue field, like unset the fixed version. For [2] I
> can
> > >>> see
> > >>> > > > the change is all in test scope but wonder if such a commit
> still
> > >>> > invalid
> > >>> > > > the release candidate. IIRC previous RC VOTE threads would
> > contain
> > >>> a
> > >>> > > > release manual/guide, I will try to look up it, too.
> > >>> > > >
> > >>> > > > Best,
> > >>> > > > tison.
> > >>> > > >
> > >>> > > >
> > >>> > > > Kurt Young  于2019年8月12日周一 下午5:42写道:
> > >>> > > >
> > >>> > > > > Hi Zili,
> > >>> > > > >
> > >>> > > > > Thanks for the heads up. The 2 issues you mentioned were
> opened
> > >>> by
> > >>> > me.
> > >>> > > We
> > >>> > > > > have
> > >>> > > > > found the reason of the second issue and a PR was opened for
> > it.
> > >>> As
> > >>> > > said
> > >>> > > > in
> > >>> > > > > jira, the
> > >>> > > > > issue was just a testing problem, should not be blocker of
> > 1.9.0
> > >>> > > release.
> > >>> > > > > However,
> > >>> > > > > we will still merge it into 1.9 branch.
> > >>> > > > >
> > >>> > > > > Best,
> > >>> > > > > Kurt
> > >>> > > > >
> > >>> > > > >
> > >>> > > > > On Mon, Aug 12, 2019 at 5:38 PM Zili Chen <
> > wander4...@gmail.com>
> > >>> > > wrote:
> > >>> > > > >
> > >>> > > > > > Hi,
> > >>> > > > > >
> > >>> > > > > > I just 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Jark Wu
Hi all,

I just found an issue when testing connector DDLs against the Blink planner
for RC2.
The issue causes DDLs that contain TIMESTAMP/DATE/TIME types to fail.
I have created FLINK-13699 [1] and opened a pull request for it.

IMO, this could be a blocker for the 1.9 release, because
TIMESTAMP/DATE/TIME are primitive types and the bug breaks the DDL
feature.
However, I'd like to hear more thoughts from the community on whether we
should treat it as a blocker.

Thanks,
Jark


[1]: https://issues.apache.org/jira/browse/FLINK-13699
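
For readers of the archive, the kind of DDL the report above describes can be sketched as below. The table name, fields, and connector property are illustrative and are not taken from the FLINK-13699 report.

```shell
# Print an illustrative Flink DDL that exercises TIMESTAMP/DATE/TIME columns
# (table name, fields, and connector property are made up for illustration).
ddl=$(cat <<'SQL'
CREATE TABLE events (
  id         BIGINT,
  event_time TIMESTAMP(3),
  event_date DATE,
  event_tod  TIME(0)
) WITH (
  'connector.type' = 'filesystem'
)
SQL
)
echo "$ddl"
```

Per the report, statements of this shape failed against the Blink planner in RC2 purely because of the time-related column types.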



On Mon, 12 Aug 2019 at 22:46, Becket Qin  wrote:

> Thanks Gordon, will do that.
>
> On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai 
> wrote:
>
> > Concerning FLINK-13231:
> >
> > Since this is a @PublicEvolving interface, technically it is ok to break
> > it across releases (including across bugfix releases?).
> > So, @Becket if you do merge it now, please mark the fix version as 1.9.1.
> >
> > During the voting process, in the case a new RC is created, we usually
> > check the list of changes compared to the previous RC, and correct the
> "Fix
> > Version" of the corresponding JIRAs to be the right version (in the case,
> > it would be corrected to 1.9.0 instead of 1.9.1).
> >
> > On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann 
> > wrote:
> >
> >> I agree that it would be nicer. Not sure whether we should cancel the RC
> >> for this issue given that it is open for quite some time and hasn't been
> >> addressed until very recently. Maybe we could include it on the
> shortlist
> >> of nice-to-do things which we do in case that the RC gets cancelled.
> >>
> >> Cheers,
> >> Till
> >>
> >> On Mon, Aug 12, 2019 at 4:18 PM Becket Qin 
> wrote:
> >>
> >>> Hi Till,
> >>>
> >>> Yes, I think we have already documented in that way. So technically
> >>> speaking it is fine to change it later. It is just better if we could
> >>> avoid
> >>> doing that.
> >>>
> >>> Thanks,
> >>>
> >>> Jiangjie (Becket) Qin
> >>>
> >>> On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann 
> >>> wrote:
> >>>
> >>> > Could we say that the PubSub connector is public evolving instead?
> >>> >
> >>> > Cheers,
> >>> > Till
> >>> >
> >>> > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin 
> >>> wrote:
> >>> >
> >>> > > Hi all,
> >>> > >
> >>> > > FLINK-13231(palindrome!) has a minor Google PubSub connector API
> >>> change
> >>> > > regarding how to config rate limiting. The GCP PubSub connector is
> a
> >>> > newly
> >>> > > introduced connector in 1.9, so it would be nice to include this
> >>> change
> >>> > > into 1.9 rather than later to avoid a public API change. I am
> >>> thinking of
> >>> > > making this as a blocker for 1.9. Want to check what do others
> think.
> >>> > >
> >>> > > Thanks,
> >>> > >
> >>> > > Jiangjie (Becket) Qin
> >>> > >
> >>> > > On Mon, Aug 12, 2019 at 2:04 PM Zili Chen 
> >>> wrote:
> >>> > >
> >>> > > > Hi Kurt,
> >>> > > >
> >>> > > > Thanks for your explanation. For [1] I think at least we should
> >>> change
> >>> > > > the JIRA issue field, like unset the fixed version. For [2] I can
> >>> see
> >>> > > > the change is all in test scope but wonder if such a commit still
> >>> > invalid
> >>> > > > the release candidate. IIRC previous RC VOTE threads would
> contain
> >>> a
> >>> > > > release manual/guide, I will try to look up it, too.
> >>> > > >
> >>> > > > Best,
> >>> > > > tison.
> >>> > > >
> >>> > > >
> >>> > > > Kurt Young  于2019年8月12日周一 下午5:42写道:
> >>> > > >
> >>> > > > > Hi Zili,
> >>> > > > >
> >>> > > > > Thanks for the heads up. The 2 issues you mentioned were opened
> >>> by
> >>> > me.
> >>> > > We
> >>> > > > > have
> >>> > > > > found the reason of the second issue and a PR was opened for
> it.
> >>> As
> >>> > > said
> >>> > > > in
> >>> > > > > jira, the
> >>> > > > > issue was just a testing problem, should not be blocker of
> 1.9.0
> >>> > > release.
> >>> > > > > However,
> >>> > > > > we will still merge it into 1.9 branch.
> >>> > > > >
> >>> > > > > Best,
> >>> > > > > Kurt
> >>> > > > >
> >>> > > > >
> >>> > > > > On Mon, Aug 12, 2019 at 5:38 PM Zili Chen <
> wander4...@gmail.com>
> >>> > > wrote:
> >>> > > > >
> >>> > > > > > Hi,
> >>> > > > > >
> >>> > > > > > I just noticed that a few hours ago there were two new issues
> >>> > > > > > filed and marked as blockers to 1.9.0[1][2].
> >>> > > > > >
> >>> > > > > > Now [1] is closed as duplication but still marked as
> >>> > > > > > a blocker to 1.9.0, while [2] is downgrade to "Major"
> priority
> >>> > > > > > but still target to be fixed in 1.9.0.
> >>> > > > > >
> >>> > > > > > It would be worth to have attention of our release manager at
> >>> > least.
> >>> > > > > >
> >>> > > > > > Best,
> >>> > > > > > tison.
> >>> > > > > >
> >>> > > > > > [1] https://issues.apache.org/jira/browse/FLINK-13687
> >>> > > > > > [2] https://issues.apache.org/jira/browse/FLINK-13688
> >>> > > > > >
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Gyula Fóra  于2019年8月12日周一 下午5:10写道:
> >>> > > > > >
> >>> > > > > > 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Becket Qin
Thanks Gordon, will do that.

On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai 
wrote:

> Concerning FLINK-13231:
>
> Since this is a @PublicEvolving interface, technically it is ok to break
> it across releases (including across bugfix releases?).
> So, @Becket if you do merge it now, please mark the fix version as 1.9.1.
>
> During the voting process, in the case a new RC is created, we usually
> check the list of changes compared to the previous RC, and correct the "Fix
> Version" of the corresponding JIRAs to be the right version (in the case,
> it would be corrected to 1.9.0 instead of 1.9.1).
>
> On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann 
> wrote:
>
>> I agree that it would be nicer. Not sure whether we should cancel the RC
>> for this issue given that it is open for quite some time and hasn't been
>> addressed until very recently. Maybe we could include it on the shortlist
>> of nice-to-do things which we do in case that the RC gets cancelled.
>>
>> Cheers,
>> Till
>>
>> On Mon, Aug 12, 2019 at 4:18 PM Becket Qin  wrote:
>>
>>> Hi Till,
>>>
>>> Yes, I think we have already documented in that way. So technically
>>> speaking it is fine to change it later. It is just better if we could
>>> avoid
>>> doing that.
>>>
>>> Thanks,
>>>
>>> Jiangjie (Becket) Qin
>>>
>>> On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann 
>>> wrote:
>>>
>>> > Could we say that the PubSub connector is public evolving instead?
>>> >
>>> > Cheers,
>>> > Till
>>> >
>>> > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin 
>>> wrote:
>>> >
>>> > > Hi all,
>>> > >
>>> > > FLINK-13231(palindrome!) has a minor Google PubSub connector API
>>> change
>>> > > regarding how to config rate limiting. The GCP PubSub connector is a
>>> > newly
>>> > > introduced connector in 1.9, so it would be nice to include this
>>> change
>>> > > into 1.9 rather than later to avoid a public API change. I am
>>> thinking of
>>> > > making this as a blocker for 1.9. Want to check what do others think.
>>> > >
>>> > > Thanks,
>>> > >
>>> > > Jiangjie (Becket) Qin
>>> > >
>>> > > On Mon, Aug 12, 2019 at 2:04 PM Zili Chen 
>>> wrote:
>>> > >
>>> > > > Hi Kurt,
>>> > > >
>>> > > > Thanks for your explanation. For [1] I think at least we should
>>> change
>>> > > > the JIRA issue field, like unset the fixed version. For [2] I can
>>> see
>>> > > > the change is all in test scope but wonder if such a commit still
>>> > invalid
>>> > > > the release candidate. IIRC previous RC VOTE threads would contain
>>> a
>>> > > > release manual/guide, I will try to look up it, too.
>>> > > >
>>> > > > Best,
>>> > > > tison.
>>> > > >
>>> > > >
>>> > > > Kurt Young  于2019年8月12日周一 下午5:42写道:
>>> > > >
>>> > > > > Hi Zili,
>>> > > > >
>>> > > > > Thanks for the heads up. The 2 issues you mentioned were opened
>>> by
>>> > me.
>>> > > We
>>> > > > > have
>>> > > > > found the reason of the second issue and a PR was opened for it.
>>> As
>>> > > said
>>> > > > in
>>> > > > > jira, the
>>> > > > > issue was just a testing problem, should not be blocker of 1.9.0
>>> > > release.
>>> > > > > However,
>>> > > > > we will still merge it into 1.9 branch.
>>> > > > >
>>> > > > > Best,
>>> > > > > Kurt
>>> > > > >
>>> > > > >
>>> > > > > On Mon, Aug 12, 2019 at 5:38 PM Zili Chen 
>>> > > wrote:
>>> > > > >
>>> > > > > > Hi,
>>> > > > > >
>>> > > > > > I just noticed that a few hours ago there were two new issues
>>> > > > > > filed and marked as blockers to 1.9.0[1][2].
>>> > > > > >
>>> > > > > > Now [1] is closed as duplication but still marked as
>>> > > > > > a blocker to 1.9.0, while [2] is downgrade to "Major" priority
>>> > > > > > but still target to be fixed in 1.9.0.
>>> > > > > >
>>> > > > > > It would be worth to have attention of our release manager at
>>> > least.
>>> > > > > >
>>> > > > > > Best,
>>> > > > > > tison.
>>> > > > > >
>>> > > > > > [1] https://issues.apache.org/jira/browse/FLINK-13687
>>> > > > > > [2] https://issues.apache.org/jira/browse/FLINK-13688
>>> > > > > >
>>> > > > > >
>>> > > > > >
>>> > > > > > Gyula Fóra  于2019年8月12日周一 下午5:10写道:
>>> > > > > >
>>> > > > > > > Thanks Stephan :)
>>> > > > > > > That looks easy enough, will try!
>>> > > > > > >
>>> > > > > > > Gyula
>>> > > > > > >
>>> > > > > > > On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen <
>>> se...@apache.org>
>>> > > > > wrote:
>>> > > > > > >
>>> > > > > > > > Hi Gyula!
>>> > > > > > > >
>>> > > > > > > > Thanks for reporting this.
>>> > > > > > > >
>>> > > > > > > > Can you try to simply build Flink without Hadoop and then
>>> > > exporting
>>> > > > > > > > HADOOP_CLASSPATH to your CloudEra libs?
>>> > > > > > > > That is the recommended way these days.
>>> > > > > > > >
>>> > > > > > > > Best,
>>> > > > > > > > Stephan
>>> > > > > > > >
>>> > > > > > > >
>>> > > > > > > >
>>> > > > > > > > On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra <
>>> > > gyula.f...@gmail.com>
>>> > > > > > > wrote:
>>> > > > > > > >
>>> > > > > > > > > Thanks Dawid,
>>> > > > > > > > >
>>> > > > > > > > > In the meantime I 
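
Stephan's suggestion in the quoted message — run a Hadoop-free Flink build and point `HADOOP_CLASSPATH` at the vendor's client jars — can be sketched as below. The CDH parcel path is illustrative and not taken from the thread.

```shell
# Point Flink at vendor-provided Hadoop jars instead of bundling Hadoop.
# The parcel path below is illustrative; on a configured cluster node,
# `hadoop classpath` prints the correct value instead:
#   export HADOOP_CLASSPATH=$(hadoop classpath)
export HADOOP_CLASSPATH='/opt/cloudera/parcels/CDH/jars/*'
echo "Hadoop jars picked up from: ${HADOOP_CLASSPATH}"
# A Hadoop-free Flink distribution started afterwards (e.g. via
# ./bin/start-cluster.sh) puts these jars on its classpath.
```

The design choice here is that Flink no longer needs to be compiled against one specific Hadoop version; the cluster's own client jars are picked up at runtime.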

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Becket Qin
That sounds good to me. I was initially trying to piggyback it onto an RC,
but I fell behind and was not able to catch the last one.

Thanks,

Jiangjie (Becket) Qin

On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann  wrote:

> I agree that it would be nicer. Not sure whether we should cancel the RC
> for this issue given that it is open for quite some time and hasn't been
> addressed until very recently. Maybe we could include it on the shortlist
> of nice-to-do things which we do in case that the RC gets cancelled.
>
> Cheers,
> Till
>
> On Mon, Aug 12, 2019 at 4:18 PM Becket Qin  wrote:
>
> > Hi Till,
> >
> > Yes, I think we have already documented in that way. So technically
> > speaking it is fine to change it later. It is just better if we could
> avoid
> > doing that.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann 
> > wrote:
> >
> > > Could we say that the PubSub connector is public evolving instead?
> > >
> > > Cheers,
> > > Till
> > >
> > > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > FLINK-13231(palindrome!) has a minor Google PubSub connector API
> change
> > > > regarding how to config rate limiting. The GCP PubSub connector is a
> > > newly
> > > > introduced connector in 1.9, so it would be nice to include this
> change
> > > > into 1.9 rather than later to avoid a public API change. I am
> thinking
> > of
> > > > making this as a blocker for 1.9. Want to check what do others think.
> > > >
> > > > Thanks,
> > > >
> > > > Jiangjie (Becket) Qin
> > > >
> > > > On Mon, Aug 12, 2019 at 2:04 PM Zili Chen 
> > wrote:
> > > >
> > > > > Hi Kurt,
> > > > >
> > > > > Thanks for your explanation. For [1] I think at least we should
> > change
> > > > > the JIRA issue field, like unset the fixed version. For [2] I can
> see
> > > > > the change is all in test scope but wonder if such a commit still
> > > invalid
> > > > > the release candidate. IIRC previous RC VOTE threads would contain
> a
> > > > > release manual/guide, I will try to look up it, too.
> > > > >
> > > > > Best,
> > > > > tison.
> > > > >
> > > > >
> > > > > Kurt Young  于2019年8月12日周一 下午5:42写道:
> > > > >
> > > > > > Hi Zili,
> > > > > >
> > > > > > Thanks for the heads up. The 2 issues you mentioned were opened
> by
> > > me.
> > > > We
> > > > > > have
> > > > > > found the reason of the second issue and a PR was opened for it.
> As
> > > > said
> > > > > in
> > > > > > jira, the
> > > > > > issue was just a testing problem, should not be blocker of 1.9.0
> > > > release.
> > > > > > However,
> > > > > > we will still merge it into 1.9 branch.
> > > > > >
> > > > > > Best,
> > > > > > Kurt
> > > > > >
> > > > > >
> > > > > > On Mon, Aug 12, 2019 at 5:38 PM Zili Chen 
> > > > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > I just noticed that a few hours ago there were two new issues
> > > > > > > filed and marked as blockers to 1.9.0[1][2].
> > > > > > >
> > > > > > > Now [1] is closed as duplication but still marked as
> > > > > > > a blocker to 1.9.0, while [2] is downgrade to "Major" priority
> > > > > > > but still target to be fixed in 1.9.0.
> > > > > > >
> > > > > > > It would be worth to have attention of our release manager at
> > > least.
> > > > > > >
> > > > > > > Best,
> > > > > > > tison.
> > > > > > >
> > > > > > > [1] https://issues.apache.org/jira/browse/FLINK-13687
> > > > > > > [2] https://issues.apache.org/jira/browse/FLINK-13688
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Gyula Fóra  于2019年8月12日周一 下午5:10写道:
> > > > > > >
> > > > > > > > Thanks Stephan :)
> > > > > > > > That looks easy enough, will try!
> > > > > > > >
> > > > > > > > Gyula
> > > > > > > >
> > > > > > > > On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen <
> > se...@apache.org>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Gyula!
> > > > > > > > >
> > > > > > > > > Thanks for reporting this.
> > > > > > > > >
> > > > > > > > > Can you try to simply build Flink without Hadoop and then
> > > > exporting
> > > > > > > > > HADOOP_CLASSPATH to your CloudEra libs?
> > > > > > > > > That is the recommended way these days.
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Stephan
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra <
> > > > gyula.f...@gmail.com>
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Thanks Dawid,
> > > > > > > > > >
> > > > > > > > > > In the meantime I also figured out that I need to build
> the
> > > > > > > > > > https://github.com/apache/flink-shaded project locally
> > with
> > > > > > > > > > -Dhadoop.version set to the specific hadoop version if I
> > want
> > > > > > > something
> > > > > > > > > > different.
> > > > > > > > > >
> > > > > > > > > > Cheers,
> > > > > > > > > > Gyula
> > > > > > > > > >
> > > > > > > > > > On Mon, Aug 12, 2019 at 9:54 AM Dawid Wysakowicz <
> > > > > > > > dwysakow...@apache.org
> 
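
Gyula's quoted note about rebuilding flink-shaded against a specific Hadoop version can be sketched as below. The version number is illustrative, and the steps are printed rather than executed so the sketch has no side effects.

```shell
# Sketch of the local flink-shaded rebuild mentioned above.
# HADOOP_VERSION is illustrative -- substitute the cluster's actual version.
# The steps are echoed instead of run, to keep this sketch side-effect free.
HADOOP_VERSION=2.6.5
echo "git clone https://github.com/apache/flink-shaded.git"
echo "cd flink-shaded"
echo "mvn clean install -Dhadoop.version=${HADOOP_VERSION}"
```

After installing the resulting artifacts locally, rebuilding Flink picks up the flink-shaded jars matched to that Hadoop version.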

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Tzu-Li (Gordon) Tai
Concerning FLINK-13231:

Since this is a @PublicEvolving interface, technically it is OK to break it
across releases (including across bugfix releases?).
So, @Becket, if you do merge it now, please mark the fix version as 1.9.1.

During the voting process, in case a new RC is created, we usually check
the list of changes compared to the previous RC and correct the "Fix
Version" of the corresponding JIRAs to the right version (in this case, it
would be corrected to 1.9.0 instead of 1.9.1).

On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann  wrote:

> I agree that it would be nicer. Not sure whether we should cancel the RC
> for this issue given that it is open for quite some time and hasn't been
> addressed until very recently. Maybe we could include it on the shortlist
> of nice-to-do things which we do in case that the RC gets cancelled.
>
> Cheers,
> Till
>
> On Mon, Aug 12, 2019 at 4:18 PM Becket Qin  wrote:
>
>> Hi Till,
>>
>> Yes, I think we have already documented in that way. So technically
>> speaking it is fine to change it later. It is just better if we could
>> avoid
>> doing that.
>>
>> Thanks,
>>
>> Jiangjie (Becket) Qin
>>
>> On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann 
>> wrote:
>>
>> > Could we say that the PubSub connector is public evolving instead?
>> >
>> > Cheers,
>> > Till
>> >
>> > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin 
>> wrote:
>> >
>> > > Hi all,
>> > >
>> > > FLINK-13231(palindrome!) has a minor Google PubSub connector API
>> change
>> > > regarding how to config rate limiting. The GCP PubSub connector is a
>> > newly
>> > > introduced connector in 1.9, so it would be nice to include this
>> change
>> > > into 1.9 rather than later to avoid a public API change. I am
>> thinking of
>> > > making this as a blocker for 1.9. Want to check what do others think.
>> > >
>> > > Thanks,
>> > >
>> > > Jiangjie (Becket) Qin
>> > >
>> > > On Mon, Aug 12, 2019 at 2:04 PM Zili Chen 
>> wrote:
>> > >
>> > > > Hi Kurt,
>> > > >
>> > > > Thanks for your explanation. For [1] I think at least we should
>> change
>> > > > the JIRA issue field, like unset the fixed version. For [2] I can
>> see
>> > > > the change is all in test scope but wonder if such a commit still
>> > invalid
>> > > > the release candidate. IIRC previous RC VOTE threads would contain a
>> > > > release manual/guide, I will try to look up it, too.
>> > > >
>> > > > Best,
>> > > > tison.
>> > > >
>> > > >
>> > > > Kurt Young  于2019年8月12日周一 下午5:42写道:
>> > > >
>> > > > > Hi Zili,
>> > > > >
>> > > > > Thanks for the heads up. The 2 issues you mentioned were opened by
>> > me.
>> > > We
>> > > > > have
>> > > > > found the reason of the second issue and a PR was opened for it.
>> As
>> > > said
>> > > > in
>> > > > > jira, the
>> > > > > issue was just a testing problem, should not be blocker of 1.9.0
>> > > release.
>> > > > > However,
>> > > > > we will still merge it into 1.9 branch.
>> > > > >
>> > > > > Best,
>> > > > > Kurt
>> > > > >
>> > > > >
>> > > > > On Mon, Aug 12, 2019 at 5:38 PM Zili Chen 
>> > > wrote:
>> > > > >
>> > > > > > Hi,
>> > > > > >
>> > > > > > I just noticed that a few hours ago there were two new issues
>> > > > > > filed and marked as blockers to 1.9.0[1][2].
>> > > > > >
>> > > > > > Now [1] is closed as duplication but still marked as
>> > > > > > a blocker to 1.9.0, while [2] is downgrade to "Major" priority
>> > > > > > but still target to be fixed in 1.9.0.
>> > > > > >
>> > > > > > It would be worth to have attention of our release manager at
>> > least.
>> > > > > >
>> > > > > > Best,
>> > > > > > tison.
>> > > > > >
>> > > > > > [1] https://issues.apache.org/jira/browse/FLINK-13687
>> > > > > > [2] https://issues.apache.org/jira/browse/FLINK-13688
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > Gyula Fóra  于2019年8月12日周一 下午5:10写道:
>> > > > > >
>> > > > > > > Thanks Stephan :)
>> > > > > > > That looks easy enough, will try!
>> > > > > > >
>> > > > > > > Gyula
>> > > > > > >
>> > > > > > > On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen <
>> se...@apache.org>
>> > > > > wrote:
>> > > > > > >
>> > > > > > > > Hi Gyula!
>> > > > > > > >
>> > > > > > > > Thanks for reporting this.
>> > > > > > > >
>> > > > > > > > Can you try to simply build Flink without Hadoop and then
>> > > exporting
>> > > > > > > > HADOOP_CLASSPATH to your CloudEra libs?
>> > > > > > > > That is the recommended way these days.
>> > > > > > > >
>> > > > > > > > Best,
>> > > > > > > > Stephan
>> > > > > > > >
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra <
>> > > gyula.f...@gmail.com>
>> > > > > > > wrote:
>> > > > > > > >
>> > > > > > > > > Thanks Dawid,
>> > > > > > > > >
>> > > > > > > > > In the meantime I also figured out that I need to build
>> the
>> > > > > > > > > https://github.com/apache/flink-shaded project locally
>> with
>> > > > > > > > > -Dhadoop.version set to the specific hadoop version if I
>> want
>> > > > > > something
>> > > > > > > > > different.

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Till Rohrmann
I agree that it would be nicer. I'm not sure whether we should cancel the RC
for this issue, given that it has been open for quite some time and wasn't
addressed until very recently. Maybe we could put it on the shortlist of
nice-to-have changes that we include in case the RC gets cancelled.

Cheers,
Till

On Mon, Aug 12, 2019 at 4:18 PM Becket Qin  wrote:

> Hi Till,
>
> Yes, I think we have already documented in that way. So technically
> speaking it is fine to change it later. It is just better if we could avoid
> doing that.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann 
> wrote:
>
> > Could we say that the PubSub connector is public evolving instead?
> >
> > Cheers,
> > Till
> >
> > On Mon, Aug 12, 2019 at 3:18 PM Becket Qin  wrote:
> >
> > > Hi all,
> > >
> > > FLINK-13231(palindrome!) has a minor Google PubSub connector API change
> > > regarding how to config rate limiting. The GCP PubSub connector is a
> > newly
> > > introduced connector in 1.9, so it would be nice to include this change
> > > into 1.9 rather than later to avoid a public API change. I am thinking
> of
> > > making this as a blocker for 1.9. Want to check what do others think.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > > On Mon, Aug 12, 2019 at 2:04 PM Zili Chen 
> wrote:
> > >
> > > > Hi Kurt,
> > > >
> > > > Thanks for your explanation. For [1] I think at least we should
> change
> > > > the JIRA issue field, like unset the fixed version. For [2] I can see
> > > > the change is all in test scope but wonder if such a commit still
> > invalid
> > > > the release candidate. IIRC previous RC VOTE threads would contain a
> > > > release manual/guide, I will try to look up it, too.
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > Kurt Young  于2019年8月12日周一 下午5:42写道:
> > > >
> > > > > Hi Zili,
> > > > >
> > > > > Thanks for the heads up. The 2 issues you mentioned were opened by
> > me.
> > > We
> > > > > have

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Becket Qin
Hi Till,

Yes, I think we have already documented it that way, so technically
speaking it is fine to change it later. It is just better if we can avoid
doing that.

Thanks,

Jiangjie (Becket) Qin


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Till Rohrmann
Could we say that the PubSub connector is public evolving instead?

Cheers,
Till


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Becket Qin
Hi all,

FLINK-13231 (palindrome!) has a minor Google PubSub connector API change
regarding how to configure rate limiting. The GCP PubSub connector is newly
introduced in 1.9, so it would be nice to include this change in 1.9 rather
than later, to avoid a public API change after the release. I am thinking of
making this a blocker for 1.9 and want to check what others think.

Thanks,

Jiangjie (Becket) Qin


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Zili Chen
Hi Kurt,

Thanks for your explanation. For [1] I think we should at least update
the JIRA issue fields, e.g. unset the fix version. For [2] I can see that
the change is all in test scope, but I wonder whether such a commit still
invalidates the release candidate. IIRC previous RC VOTE threads contained
a release manual/guide; I will try to look it up, too.

Best,
tison.



Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Kurt Young
Hi Zili,

Thanks for the heads up. The two issues you mentioned were opened by me.
We have found the reason for the second issue, and a PR was opened for it.
As said in the JIRA, the issue was just a testing problem and should not be
a blocker for the 1.9.0 release. However, we will still merge it into the
1.9 branch.

Best,
Kurt



Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Zili Chen
Hi,

I just noticed that a few hours ago two new issues were
filed and marked as blockers for 1.9.0 [1][2].

Now [1] is closed as a duplicate but is still marked as
a blocker for 1.9.0, while [2] has been downgraded to "Major" priority
but is still targeted to be fixed in 1.9.0.

This deserves at least the attention of our release manager.

Best,
tison.

[1] https://issues.apache.org/jira/browse/FLINK-13687
[2] https://issues.apache.org/jira/browse/FLINK-13688



Gyula Fóra  wrote on Mon, Aug 12, 2019 at 5:10 PM:

> Thanks Stephan :)
> That looks easy enough, will try!
>
> Gyula
>
> On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen  wrote:
>
> > Hi Gyula!
> >
> > Thanks for reporting this.
> >
> > Can you try to simply build Flink without Hadoop and then exporting
> > HADOOP_CLASSPATH to your Cloudera libs?
> > That is the recommended way these days.
> >
> > Best,
> > Stephan
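Stephan's recommendation above can be sketched as a couple of shell steps. This is a hedged sketch, not an official recipe: the Cloudera parcel path below is an illustrative assumption, and `hadoop classpath` is the generic way to populate the variable when the Hadoop CLI is available.

```shell
# Build Flink without bundling Hadoop (the plain build already does this):
#   mvn clean install -DskipTests
# At runtime, expose the vendor's Hadoop jars to Flink via HADOOP_CLASSPATH.
if command -v hadoop >/dev/null 2>&1; then
  # Generic approach: ask the Hadoop CLI for its classpath.
  export HADOOP_CLASSPATH="$(hadoop classpath)"
else
  # Illustrative Cloudera parcel location (an assumption, not from the thread).
  export HADOOP_CLASSPATH='/opt/cloudera/parcels/CDH/jars/*'
fi
echo "HADOOP_CLASSPATH=${HADOOP_CLASSPATH}"
```

Flink's start scripts pick up `HADOOP_CLASSPATH` from the environment, so no Hadoop jars need to be placed into `lib/`.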
> >
> >
> >
> > On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra 
> wrote:
> >
> > > Thanks Dawid,
> > >
> > > In the meantime I also figured out that I need to build the
> > > https://github.com/apache/flink-shaded project locally with
> > > -Dhadoop.version set to the specific hadoop version if I want something
> > > different.
> > >
> > > Cheers,
> > > Gyula
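The step Gyula describes — rebuilding flink-shaded against a specific Hadoop version — looks roughly like this. A sketch only; 2.6.0 is the version used elsewhere in the thread, and any vendor suffix would be substituted as needed. The commands are printed rather than executed here, since they require a full Maven/JDK setup.

```shell
# Clone and build flink-shaded locally against the desired Hadoop version.
HADOOP_VERSION="2.6.0"
CLONE_CMD="git clone https://github.com/apache/flink-shaded.git"
BUILD_CMD="mvn clean install -DskipTests -Dhadoop.version=${HADOOP_VERSION}"
printf '%s\n%s\n' "${CLONE_CMD}" "cd flink-shaded && ${BUILD_CMD}"
```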
> > >
> > > On Mon, Aug 12, 2019 at 9:54 AM Dawid Wysakowicz <
> dwysakow...@apache.org
> > >
> > > wrote:
> > >
> > > > Hi Gyula,
> > > >
> > > > As for the issues with mapr maven repository, you might have a look
> at
> > > > this message:
> > > >
> > > >
> > >
> >
> https://lists.apache.org/thread.html/77f4db930216e6da0d6121065149cef43ff3ea33c9ffe9b1a3047210@%3Cdev.flink.apache.org%3E
> > > >
> > > > Try using the "unsafe-mapr-repo" profile.
> > > >
> > > > Best,
> > > >
> > > > Dawid
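Dawid's workaround combines with the original build invocation roughly as follows. A sketch under the assumption that the other flags stay as in Gyula's command quoted below; only `-Punsafe-mapr-repo` is added. Printed rather than executed, since it needs a full Maven/JDK setup.

```shell
# Same build as before, with the unsafe-mapr-repo profile enabled so the
# MapR repository can be reached despite the failing TLS validation.
MVN_CMD="mvn clean install -DskipTests -Pvendor-repos -Pinclude-hadoop \
-Punsafe-mapr-repo -Dhadoop.version=2.6.0"
echo "${MVN_CMD}"
```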
> > > >
> > > > On 11/08/2019 19:31, Gyula Fóra wrote:
> > > > > Hi again,
> > > > >
> > > > > How do I build the RC locally with the hadoop version specified?
> > Seems
> > > > like
> > > > > no matter what I do I run into dependency problems with the shaded
> > > hadoop
> > > > > dependencies.
> > > > > This seems to have worked in the past.
> > > > >
> > > > > > There might be some documentation somewhere that I couldn't find,
> so I
> > > > would
> > > > > appreciate any pointers :)
> > > > >
> > > > > Thanks!
> > > > > Gyula
> > > > >
> > > > > On Sun, Aug 11, 2019 at 6:57 PM Gyula Fóra 
> > > wrote:
> > > > >
> > > > >> Hi!
> > > > >>
> > > > >> I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile
> > > enabled.
> > > > I
> > > > >> get the following error:
> > > > >>
> > > > >> mvn clean install -DskipTests -Pvendor-repos
> -Dhadoop.version=2.6.0
> > > > >> -Pinclude-hadoop (ignore that the hadoop version is not a vendor
> > > hadoop
> > > > >> version)
> > > > >>
> > > > >> [ERROR] Failed to execute goal on project flink-hadoop-fs: Could
> not
> > > > >> resolve dependencies for project
> > > > >> org.apache.flink:flink-hadoop-fs:jar:1.9.0: Failed to collect
> > > > dependencies
> > > > >> at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Failed to
> > > read
> > > > >> artifact descriptor for
> > > > >> org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Could not
> > > transfer
> > > > >> artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-7.0
> > from/to
> > > > >> mapr-releases (https://repository.mapr.com/maven/):
> > > > >> sun.security.validator.ValidatorException: PKIX path building
> > failed:
> > > > >> sun.security.provider.certpath.SunCertPathBuilderException: unable
> > to
> > > > find
> > > > >> valid certification path to requested target -> [Help 1]
> > > > >>
> > > > >> This looks like a TLS error. Might not be related to the release
> but
> > > it
> > > > >> could be good to know.
> > > > >>
> > > > >> Cheers,
> > > > >> Gyula
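The "PKIX path building failed" message means the JVM does not trust the certificate served by the repository. Besides the `unsafe-mapr-repo` profile, a common remedy is to import the server's certificate into the JVM truststore. A hedged sketch — printed rather than executed, and the alias, file name, and default `changeit` password are standard JDK conventions, not details from this thread:

```shell
# Fetch the server certificate, then import it into the JDK truststore.
FETCH_CMD='openssl s_client -connect repository.mapr.com:443 -showcerts \
  </dev/null | openssl x509 -outform PEM > mapr.pem'
IMPORT_CMD='keytool -importcert -noprompt -alias mapr-releases -file mapr.pem \
  -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit'
printf '%s\n%s\n' "$FETCH_CMD" "$IMPORT_CMD"
```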
> > > > >>
> > > > >> On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li (Gordon) Tai <
> > > tzuli...@apache.org
> > > > >
> > > > >> wrote:
> > > > >>
> > > > >>> Please note that the unresolved issues that are still tagged
> with a
> > > fix
> > > > >>> version "1.9.0", as seen in the JIRA release notes [1], are
> issues
> > to
> > > > >>> update documents for new features.
> > > > >>> I've left them still associated with 1.9.0 since these should
> still
> > > be
> > > > >>> updated for 1.9.0 soon along with the official release.
> > > > >>>
> > > > >>> [1]
> > > > >>>
> > > > >>>
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> > > > >>>
> > > > >>> On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai <
> > > > tzuli...@apache.org>
> > > > >>> wrote:
> > > > >>>
> > > >  Hi all,
> > > > 
> > > >  Release candidate #2 for Apache Flink 1.9.0 is now ready for
> your
> > > > >>> review.
> > > >  This is the first voting candidate for 1.9.0, following the
> > preview
> > > >  candidates RC0 and RC1.
> > > > 
> > > >  Please review and vote on release candidate #2 for version
> 1.9.0,
> > as
> > > >  follows:
> > > >  [ ] +1, Approve the release
> > > >  [ ] -1, Do not 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Gyula Fóra
Thanks Stephan :)
That looks easy enough, will try!

Gyula

On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen  wrote:

> Hi Gyula!
>
> Thanks for reporting this.
>
> Can you try to simply build Flink without Hadoop and then exporting
> HADOOP_CLASSPATH to your CloudEra libs?
> That is the recommended way these days.
>
> Best,
> Stephan
>
>
>
> On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra  wrote:
>
> > Thanks Dawid,
> >
> > In the meantime I also figured out that I need to build the
> > https://github.com/apache/flink-shaded project locally with
> > -Dhadoop.version set to the specific hadoop version if I want something
> > different.
> >
> > Cheers,
> > Gyula
> >
> > On Mon, Aug 12, 2019 at 9:54 AM Dawid Wysakowicz 
> > wrote:
> >
> > > Hi Gyula,
> > >
> > > As for the issues with mapr maven repository, you might have a look at
> > > this message:
> > >
> > >
> >
> https://lists.apache.org/thread.html/77f4db930216e6da0d6121065149cef43ff3ea33c9ffe9b1a3047210@%3Cdev.flink.apache.org%3E
> > >
> > > Try using the "unsafe-mapr-repo" profile.
> > >
> > > Best,
> > >
> > > Dawid
> > >
> > > On 11/08/2019 19:31, Gyula Fóra wrote:
> > > > Hi again,
> > > >
> > > > How do I build the RC locally with the hadoop version specified?
> Seems
> > > like
> > > > no matter what I do I run into dependency problems with the shaded
> > hadoop
> > > > dependencies.
> > > > This seems to have worked in the past.
> > > >
> > > > There might be some documentation somewhere that I couldn't find, so I
> > > would
> > > > appreciate any pointers :)
> > > >
> > > > Thanks!
> > > > Gyula
> > > >
> > > > On Sun, Aug 11, 2019 at 6:57 PM Gyula Fóra 
> > wrote:
> > > >
> > > >> Hi!
> > > >>
> > > >> I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile
> > enabled.
> > > I
> > > >> get the following error:
> > > >>
> > > >> mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0
> > > >> -Pinclude-hadoop (ignore that the hadoop version is not a vendor
> > hadoop
> > > >> version)
> > > >>
> > > >> [ERROR] Failed to execute goal on project flink-hadoop-fs: Could not
> > > >> resolve dependencies for project
> > > >> org.apache.flink:flink-hadoop-fs:jar:1.9.0: Failed to collect
> > > dependencies
> > > >> at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Failed to
> > read
> > > >> artifact descriptor for
> > > >> org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Could not
> > transfer
> > > >> artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-7.0
> from/to
> > > >> mapr-releases (https://repository.mapr.com/maven/):
> > > >> sun.security.validator.ValidatorException: PKIX path building
> failed:
> > > >> sun.security.provider.certpath.SunCertPathBuilderException: unable
> to
> > > find
> > > >> valid certification path to requested target -> [Help 1]
> > > >>
> > > >> This looks like a TLS error. Might not be related to the release but
> > it
> > > >> could be good to know.
> > > >>
> > > >> Cheers,
> > > >> Gyula
> > > >>
> > > >> On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li (Gordon) Tai <
> > tzuli...@apache.org
> > > >
> > > >> wrote:
> > > >>
> > > >>> Please note that the unresolved issues that are still tagged with a
> > fix
> > > >>> version "1.9.0", as seen in the JIRA release notes [1], are issues
> to
> > > >>> update documents for new features.
> > > >>> I've left them still associated with 1.9.0 since these should still
> > be
> > > >>> updated for 1.9.0 soon along with the official release.
> > > >>>
> > > >>> [1]
> > > >>>
> > > >>>
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> > > >>>
> > > >>> On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai <
> > > tzuli...@apache.org>
> > > >>> wrote:
> > > >>>
> > >  Hi all,
> > > 
> > >  Release candidate #2 for Apache Flink 1.9.0 is now ready for your
> > > >>> review.
> > >  This is the first voting candidate for 1.9.0, following the
> preview
> > >  candidates RC0 and RC1.
> > > 
> > >  Please review and vote on release candidate #2 for version 1.9.0,
> as
> > >  follows:
> > >  [ ] +1, Approve the release
> > >  [ ] -1, Do not approve the release (please provide specific
> > comments)
> > > 
> > >  The complete staging area is available for your review, which
> > > includes:
> > >  * JIRA release notes [1],
> > >  * the official Apache source release and binary convenience
> releases
> > > to
> > > >>> be
> > >  deployed to dist.apache.org [2], which are signed with the key
> with
> > >  fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
> > >  * all artifacts to be deployed to the Maven Central Repository
> [4],
> > >  * source code tag “release-1.9.0-rc2” [5].
> > > 
> > >  Robert is also preparing a pull request for the announcement blog
> > post
> > > >>> in
> > >  the works, and will update this voting thread with a link to the
> > pull
> > >  request shortly afterwards.
> > > 
> > >  The vote will be open for *at least 72 hours*.

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Stephan Ewen
Hi Gyula!

Thanks for reporting this.

Can you try to simply build Flink without Hadoop and then export
HADOOP_CLASSPATH to point at your Cloudera libs?
That is the recommended way these days.

Best,
Stephan
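
[Editor's note: in concrete terms, the flow Stephan suggests can be sketched as
follows. This is only a sketch; the Cloudera parcel path in the comment is a
hypothetical example, not taken from this thread.]

```shell
# Build Flink without a bundled Hadoop (run from the Flink source root):
mvn clean install -DskipTests

# At runtime, let Flink pick up the vendor's Hadoop jars instead:
export HADOOP_CLASSPATH=$(hadoop classpath)

# If the 'hadoop' CLI is not on the PATH, something like this may work:
# export HADOOP_CLASSPATH="/opt/cloudera/parcels/CDH/jars/*"
```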



On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra  wrote:

> Thanks Dawid,
>
> In the meantime I also figured out that I need to build the
> https://github.com/apache/flink-shaded project locally with
> -Dhadoop.version set to the specific hadoop version if I want something
> different.
>
> Cheers,
> Gyula
>
> On Mon, Aug 12, 2019 at 9:54 AM Dawid Wysakowicz 
> wrote:
>
> > Hi Gyula,
> >
> > As for the issues with mapr maven repository, you might have a look at
> > this message:
> >
> >
> https://lists.apache.org/thread.html/77f4db930216e6da0d6121065149cef43ff3ea33c9ffe9b1a3047210@%3Cdev.flink.apache.org%3E
> >
> > Try using the "unsafe-mapr-repo" profile.
> >
> > Best,
> >
> > Dawid
> >
> > On 11/08/2019 19:31, Gyula Fóra wrote:
> > > Hi again,
> > >
> > > How do I build the RC locally with the hadoop version specified? Seems
> > like
> > > no matter what I do I run into dependency problems with the shaded
> hadoop
> > > dependencies.
> > > This seems to have worked in the past.
> > >
> > > There might be some documentation somewhere that I couldn't find, so I
> > would
> > > appreciate any pointers :)
> > >
> > > Thanks!
> > > Gyula
> > >
> > > On Sun, Aug 11, 2019 at 6:57 PM Gyula Fóra 
> wrote:
> > >
> > >> Hi!
> > >>
> > >> I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile
> enabled.
> > I
> > >> get the following error:
> > >>
> > >> mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0
> > >> -Pinclude-hadoop (ignore that the hadoop version is not a vendor
> hadoop
> > >> version)
> > >>
> > >> [ERROR] Failed to execute goal on project flink-hadoop-fs: Could not
> > >> resolve dependencies for project
> > >> org.apache.flink:flink-hadoop-fs:jar:1.9.0: Failed to collect
> > dependencies
> > >> at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Failed to
> read
> > >> artifact descriptor for
> > >> org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Could not
> transfer
> > >> artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-7.0 from/to
> > >> mapr-releases (https://repository.mapr.com/maven/):
> > >> sun.security.validator.ValidatorException: PKIX path building failed:
> > >> sun.security.provider.certpath.SunCertPathBuilderException: unable to
> > find
> > >> valid certification path to requested target -> [Help 1]
> > >>
> > >> This looks like a TLS error. Might not be related to the release but
> it
> > >> could be good to know.
> > >>
> > >> Cheers,
> > >> Gyula
> > >>
> > >> On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li (Gordon) Tai <
> tzuli...@apache.org
> > >
> > >> wrote:
> > >>
> > >>> Please note that the unresolved issues that are still tagged with a
> fix
> > >>> version "1.9.0", as seen in the JIRA release notes [1], are issues to
> > >>> update documents for new features.
> > >>> I've left them still associated with 1.9.0 since these should still
> be
> > >>> updated for 1.9.0 soon along with the official release.
> > >>>
> > >>> [1]
> > >>>
> > >>>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> > >>>
> > >>> On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai <
> > tzuli...@apache.org>
> > >>> wrote:
> > >>>
> >  Hi all,
> > 
> >  Release candidate #2 for Apache Flink 1.9.0 is now ready for your
> > >>> review.
> >  This is the first voting candidate for 1.9.0, following the preview
> >  candidates RC0 and RC1.
> > 
> >  Please review and vote on release candidate #2 for version 1.9.0, as
> >  follows:
> >  [ ] +1, Approve the release
> >  [ ] -1, Do not approve the release (please provide specific
> comments)
> > 
> >  The complete staging area is available for your review, which
> > includes:
> >  * JIRA release notes [1],
> >  * the official Apache source release and binary convenience releases
> > to
> > >>> be
> >  deployed to dist.apache.org [2], which are signed with the key with
> >  fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
> >  * all artifacts to be deployed to the Maven Central Repository [4],
> >  * source code tag “release-1.9.0-rc2” [5].
> > 
> >  Robert is also preparing a pull request for the announcement blog
> post
> > >>> in
> >  the works, and will update this voting thread with a link to the
> pull
> >  request shortly afterwards.
> > 
> >  The vote will be open for *at least 72 hours*.
> >  Please cast your votes before *Aug. 14th (Wed.) 2019, 17:00 PM
> CET*. It
> > >>> is
> >  adopted by majority approval, with at least 3 PMC affirmative votes.
> >  Thanks,
> >  Gordon
> > 
> >  [1]
> > 
> > >>>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> >  [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/
> >  [3] 

Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Gyula Fóra
Thanks Dawid,

In the meantime I also figured out that I need to build the
https://github.com/apache/flink-shaded project locally with
-Dhadoop.version set to the specific hadoop version if I want something
different.

Cheers,
Gyula
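
[Editor's note: for readers hitting the same issue, the local flink-shaded
build Gyula describes can be sketched roughly as below. The CDH-flavored
version string is a hypothetical example, not taken from this thread.]

```shell
git clone https://github.com/apache/flink-shaded.git
cd flink-shaded
# Install flink-shaded-hadoop-2, built against the desired Hadoop version,
# into the local Maven repository:
mvn clean install -DskipTests -Dhadoop.version=2.6.0-cdh5.16.1
# Afterwards, rebuild Flink with -Pinclude-hadoop so it resolves the
# locally installed flink-shaded-hadoop-2 artifact.
```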

On Mon, Aug 12, 2019 at 9:54 AM Dawid Wysakowicz 
wrote:

> Hi Gyula,
>
> As for the issues with mapr maven repository, you might have a look at
> this message:
>
> https://lists.apache.org/thread.html/77f4db930216e6da0d6121065149cef43ff3ea33c9ffe9b1a3047210@%3Cdev.flink.apache.org%3E
>
> Try using the "unsafe-mapr-repo" profile.
>
> Best,
>
> Dawid
>
> On 11/08/2019 19:31, Gyula Fóra wrote:
> > Hi again,
> >
> > How do I build the RC locally with the hadoop version specified? Seems
> like
> > no matter what I do I run into dependency problems with the shaded hadoop
> > dependencies.
> > This seems to have worked in the past.
> >
> > There might be some documentation somewhere that I couldn't find, so I
> would
> > appreciate any pointers :)
> >
> > Thanks!
> > Gyula
> >
> > On Sun, Aug 11, 2019 at 6:57 PM Gyula Fóra  wrote:
> >
> >> Hi!
> >>
> >> I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile enabled.
> I
> >> get the following error:
> >>
> >> mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0
> >> -Pinclude-hadoop (ignore that the hadoop version is not a vendor hadoop
> >> version)
> >>
> >> [ERROR] Failed to execute goal on project flink-hadoop-fs: Could not
> >> resolve dependencies for project
> >> org.apache.flink:flink-hadoop-fs:jar:1.9.0: Failed to collect
> dependencies
> >> at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Failed to read
> >> artifact descriptor for
> >> org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Could not transfer
> >> artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-7.0 from/to
> >> mapr-releases (https://repository.mapr.com/maven/):
> >> sun.security.validator.ValidatorException: PKIX path building failed:
> >> sun.security.provider.certpath.SunCertPathBuilderException: unable to
> find
> >> valid certification path to requested target -> [Help 1]
> >>
> >> This looks like a TLS error. Might not be related to the release but it
> >> could be good to know.
> >>
> >> Cheers,
> >> Gyula
> >>
> >> On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li (Gordon) Tai 
> >> wrote:
> >>
> >>> Please note that the unresolved issues that are still tagged with a fix
> >>> version "1.9.0", as seen in the JIRA release notes [1], are issues to
> >>> update documents for new features.
> >>> I've left them still associated with 1.9.0 since these should still be
> >>> updated for 1.9.0 soon along with the official release.
> >>>
> >>> [1]
> >>>
> >>>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> >>>
> >>> On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai <
> tzuli...@apache.org>
> >>> wrote:
> >>>
>  Hi all,
> 
>  Release candidate #2 for Apache Flink 1.9.0 is now ready for your
> >>> review.
>  This is the first voting candidate for 1.9.0, following the preview
>  candidates RC0 and RC1.
> 
>  Please review and vote on release candidate #2 for version 1.9.0, as
>  follows:
>  [ ] +1, Approve the release
>  [ ] -1, Do not approve the release (please provide specific comments)
> 
>  The complete staging area is available for your review, which
> includes:
>  * JIRA release notes [1],
>  * the official Apache source release and binary convenience releases
> to
> >>> be
>  deployed to dist.apache.org [2], which are signed with the key with
>  fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
>  * all artifacts to be deployed to the Maven Central Repository [4],
>  * source code tag “release-1.9.0-rc2” [5].
> 
>  Robert is also preparing a pull request for the announcement blog post
> >>> in
>  the works, and will update this voting thread with a link to the pull
>  request shortly afterwards.
> 
>  The vote will be open for *at least 72 hours*.
>  Please cast your votes before *Aug. 14th (Wed.) 2019, 17:00 PM CET*. It
> >>> is
>  adopted by majority approval, with at least 3 PMC affirmative votes.
>  Thanks,
>  Gordon
> 
>  [1]
> 
> >>>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>  [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/
>  [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>  [4]
> >>> https://repository.apache.org/content/repositories/orgapacheflink-1234
>  [5]
> 
> >>>
> https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.9.0-rc2
>
>


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-12 Thread Dawid Wysakowicz
Hi Gyula,

As for the issues with mapr maven repository, you might have a look at
this message:
https://lists.apache.org/thread.html/77f4db930216e6da0d6121065149cef43ff3ea33c9ffe9b1a3047210@%3Cdev.flink.apache.org%3E

Try using the "unsafe-mapr-repo" profile.

Best,

Dawid
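
[Editor's note: applied to the build command from earlier in the thread, the
workaround Dawid points to would look roughly like the sketch below; the exact
profile combination may vary.]

```shell
# Activate the "unsafe-mapr-repo" profile alongside the vendor-repos build,
# so the MapR repository's TLS certificate problem does not fail the build:
mvn clean install -DskipTests -Dhadoop.version=2.6.0 \
    -Pvendor-repos,include-hadoop,unsafe-mapr-repo
```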

On 11/08/2019 19:31, Gyula Fóra wrote:
> Hi again,
>
> How do I build the RC locally with the hadoop version specified? Seems like
> no matter what I do I run into dependency problems with the shaded hadoop
> dependencies.
> This seems to have worked in the past.
>
> There might be some documentation somewhere that I couldn't find, so I would
> appreciate any pointers :)
>
> Thanks!
> Gyula
>
> On Sun, Aug 11, 2019 at 6:57 PM Gyula Fóra  wrote:
>
>> Hi!
>>
>> I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile enabled. I
>> get the following error:
>>
>> mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0
>> -Pinclude-hadoop (ignore that the hadoop version is not a vendor hadoop
>> version)
>>
>> [ERROR] Failed to execute goal on project flink-hadoop-fs: Could not
>> resolve dependencies for project
>> org.apache.flink:flink-hadoop-fs:jar:1.9.0: Failed to collect dependencies
>> at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Failed to read
>> artifact descriptor for
>> org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Could not transfer
>> artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-7.0 from/to
>> mapr-releases (https://repository.mapr.com/maven/):
>> sun.security.validator.ValidatorException: PKIX path building failed:
>> sun.security.provider.certpath.SunCertPathBuilderException: unable to find
>> valid certification path to requested target -> [Help 1]
>>
>> This looks like a TLS error. Might not be related to the release but it
>> could be good to know.
>>
>> Cheers,
>> Gyula
>>
>> On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li (Gordon) Tai 
>> wrote:
>>
>>> Please note that the unresolved issues that are still tagged with a fix
>>> version "1.9.0", as seen in the JIRA release notes [1], are issues to
>>> update documents for new features.
>>> I've left them still associated with 1.9.0 since these should still be
>>> updated for 1.9.0 soon along with the official release.
>>>
>>> [1]
>>>
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>>>
>>> On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai 
>>> wrote:
>>>
 Hi all,

 Release candidate #2 for Apache Flink 1.9.0 is now ready for your
>>> review.
 This is the first voting candidate for 1.9.0, following the preview
 candidates RC0 and RC1.

 Please review and vote on release candidate #2 for version 1.9.0, as
 follows:
 [ ] +1, Approve the release
 [ ] -1, Do not approve the release (please provide specific comments)

 The complete staging area is available for your review, which includes:
 * JIRA release notes [1],
 * the official Apache source release and binary convenience releases to
>>> be
 deployed to dist.apache.org [2], which are signed with the key with
 fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag “release-1.9.0-rc2” [5].

 Robert is also preparing a pull request for the announcement blog post
>>> in
 the works, and will update this voting thread with a link to the pull
 request shortly afterwards.

 The vote will be open for *at least 72 hours*.
 Please cast your votes before *Aug. 14th (Wed.) 2019, 17:00 PM CET*. It
>>> is
 adopted by majority approval, with at least 3 PMC affirmative votes.
 Thanks,
 Gordon

 [1]

>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
 [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/
 [3] https://dist.apache.org/repos/dist/release/flink/KEYS
 [4]
>>> https://repository.apache.org/content/repositories/orgapacheflink-1234
 [5]

>>> https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.9.0-rc2





Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-11 Thread Gyula Fóra
Hi again,

How do I build the RC locally with the hadoop version specified? Seems like
no matter what I do I run into dependency problems with the shaded hadoop
dependencies.
This seems to have worked in the past.

There might be some documentation somewhere that I couldn't find, so I would
appreciate any pointers :)

Thanks!
Gyula

On Sun, Aug 11, 2019 at 6:57 PM Gyula Fóra  wrote:

> Hi!
>
> I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile enabled. I
> get the following error:
>
> mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0
> -Pinclude-hadoop (ignore that the hadoop version is not a vendor hadoop
> version)
>
> [ERROR] Failed to execute goal on project flink-hadoop-fs: Could not
> resolve dependencies for project
> org.apache.flink:flink-hadoop-fs:jar:1.9.0: Failed to collect dependencies
> at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Failed to read
> artifact descriptor for
> org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Could not transfer
> artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-7.0 from/to
> mapr-releases (https://repository.mapr.com/maven/):
> sun.security.validator.ValidatorException: PKIX path building failed:
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find
> valid certification path to requested target -> [Help 1]
>
> This looks like a TLS error. Might not be related to the release but it
> could be good to know.
>
> Cheers,
> Gyula
>
> On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li (Gordon) Tai 
> wrote:
>
>> Please note that the unresolved issues that are still tagged with a fix
>> version "1.9.0", as seen in the JIRA release notes [1], are issues to
>> update documents for new features.
>> I've left them still associated with 1.9.0 since these should still be
>> updated for 1.9.0 soon along with the official release.
>>
>> [1]
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>>
>> On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai 
>> wrote:
>>
>> > Hi all,
>> >
>> > Release candidate #2 for Apache Flink 1.9.0 is now ready for your
>> review.
>> > This is the first voting candidate for 1.9.0, following the preview
>> > candidates RC0 and RC1.
>> >
>> > Please review and vote on release candidate #2 for version 1.9.0, as
>> > follows:
>> > [ ] +1, Approve the release
>> > [ ] -1, Do not approve the release (please provide specific comments)
>> >
>> > The complete staging area is available for your review, which includes:
>> > * JIRA release notes [1],
>> > * the official Apache source release and binary convenience releases to
>> be
>> > deployed to dist.apache.org [2], which are signed with the key with
>> > fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
>> > * all artifacts to be deployed to the Maven Central Repository [4],
>> > * source code tag “release-1.9.0-rc2” [5].
>> >
>> > Robert is also preparing a pull request for the announcement blog post
>> in
>> > the works, and will update this voting thread with a link to the pull
>> > request shortly afterwards.
>> >
>> > The vote will be open for *at least 72 hours*.
>> > Please cast your votes before *Aug. 14th (Wed.) 2019, 17:00 PM CET*. It
>> is
>> > adopted by majority approval, with at least 3 PMC affirmative votes.
>> > Thanks,
>> > Gordon
>> >
>> > [1]
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/
>> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> > [4]
>> https://repository.apache.org/content/repositories/orgapacheflink-1234
>> > [5]
>> >
>> https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.9.0-rc2
>> >
>>
>


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-11 Thread Gyula Fóra
Hi!

I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile enabled. I
get the following error:

mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0
-Pinclude-hadoop (ignore that the hadoop version is not a vendor hadoop
version)

[ERROR] Failed to execute goal on project flink-hadoop-fs: Could not
resolve dependencies for project
org.apache.flink:flink-hadoop-fs:jar:1.9.0: Failed to collect dependencies
at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Failed to read
artifact descriptor for
org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-7.0: Could not transfer
artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-7.0 from/to
mapr-releases (https://repository.mapr.com/maven/):
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find
valid certification path to requested target -> [Help 1]

This looks like a TLS error. Might not be related to the release but it
could be good to know.

Cheers,
Gyula

On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li (Gordon) Tai 
wrote:

> Please note that the unresolved issues that are still tagged with a fix
> version "1.9.0", as seen in the JIRA release notes [1], are issues to
> update documents for new features.
> I've left them still associated with 1.9.0 since these should still be
> updated for 1.9.0 soon along with the official release.
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
>
> On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai 
> wrote:
>
> > Hi all,
> >
> > Release candidate #2 for Apache Flink 1.9.0 is now ready for your review.
> > This is the first voting candidate for 1.9.0, following the preview
> > candidates RC0 and RC1.
> >
> > Please review and vote on release candidate #2 for version 1.9.0, as
> > follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases to
> be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag “release-1.9.0-rc2” [5].
> >
> > Robert is also preparing a pull request for the announcement blog post in
> > the works, and will update this voting thread with a link to the pull
> > request shortly afterwards.
> >
> > The vote will be open for *at least 72 hours*.
> > Please cast your votes before *Aug. 14th (Wed.) 2019, 17:00 PM CET*. It is
> > adopted by majority approval, with at least 3 PMC affirmative votes.
> > Thanks,
> > Gordon
> >
> > [1]
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1234
> > [5]
> >
> https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.9.0-rc2
> >
>


Re: [VOTE] Apache Flink Release 1.9.0, release candidate #2

2019-08-09 Thread Tzu-Li (Gordon) Tai
Please note that the unresolved issues that are still tagged with a fix
version "1.9.0", as seen in the JIRA release notes [1], are issues to
update documents for new features.
I've left them still associated with 1.9.0 since these should still be
updated for 1.9.0 soon along with the official release.

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601

On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li (Gordon) Tai 
wrote:

> Hi all,
>
> Release candidate #2 for Apache Flink 1.9.0 is now ready for your review.
> This is the first voting candidate for 1.9.0, following the preview
> candidates RC0 and RC1.
>
> Please review and vote on release candidate #2 for version 1.9.0, as
> follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag “release-1.9.0-rc2” [5].
>
> Robert is also preparing a pull request for the announcement blog post in
> the works, and will update this voting thread with a link to the pull
> request shortly afterwards.
>
> The vote will be open for *at least 72 hours*.
> Please cast your votes before *Aug. 14th (Wed.) 2019, 17:00 PM CET*. It is
> adopted by majority approval, with at least 3 PMC affirmative votes.
> Thanks,
> Gordon
>
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc2/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1234
> [5]
> https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.9.0-rc2
>
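
[Editor's note: for anyone verifying the staged artifacts described above, a
typical check might look like the sketch below. The artifact file names are
assumptions based on the usual Flink naming, not confirmed by this thread.]

```shell
# Import the release managers' public keys:
curl -O https://dist.apache.org/repos/dist/release/flink/KEYS
gpg --import KEYS

# Verify the detached signature and checksum of the source release:
gpg --verify flink-1.9.0-src.tgz.asc flink-1.9.0-src.tgz
sha512sum -c flink-1.9.0-src.tgz.sha512
```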