Vote thread for RC3 has been started:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Apache-Flink-1-9-0-release-candidate-3-td31988.html
On Mon, Aug 19, 2019 at 6:32 PM Tzu-Li (Gordon) Tai wrote:
Thanks for the comments and fast fixes.
@Becket Qin I've quickly looked at the changes to
the PubSub connector. Given that it is an API-breaking change and quite
local (a configuration change), I've decided to include it in RC3.
@Jark @Timo Walther I'll be adding FLINK-13699 as well.
Looking at FLINK-13699, it seems to be very local to the Table API and the
HBase connector.
We can cherry-pick that without re-running distributed tests.
On Mon, Aug 19, 2019 at 1:46 PM Till Rohrmann wrote:
I've merged the fix for FLINK-13752. Hence we are good to go to create the
new RC.
Cheers,
Till
On Mon, Aug 19, 2019 at 1:30 PM Timo Walther wrote:
I support Jark's fix for FLINK-13699 because it would be disappointing
if both DDL and connectors are ready to handle DATE/TIME/TIMESTAMP but a
little component in the middle of the stack is preventing an otherwise
usable feature. The changes are minor.
Thanks,
Timo
On 19.08.19 at 13:24
Hi Gordon,
I agree that we should pick the minimal set of changes to shorten the
release testing time.
However, I would like to include FLINK-13699 in RC3. FLINK-13699 is a
critical DDL issue, and the fix is a small change to flink-table (it won't
affect runtime features or stability).
I will do some
+1 for Gordon's approach.
If we do that, we can probably skip re-testing everything and mainly need
to verify the release artifacts (signatures, build from source, etc.).
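Concretely, that verification step is roughly the standard commands (the
artifact names below are assumed):

    gpg --verify flink-1.9.0-src.tgz.asc flink-1.9.0-src.tgz
    sha512sum -c flink-1.9.0-src.tgz.sha512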
If we open the RC up for changes, I fear a lot of small issues will rush in
and destabilize the candidate again, meaning we
+1 for only cherry-picking FLINK-13752 and the LICENSE fixes into RC3.
Cheers,
Till
On Mon, Aug 19, 2019 at 9:48 AM Becket Qin wrote:
Hi Gordon,
I remember we mentioned earlier that if there is an additional RC, we can
piggyback the GCP PubSub API change
(https://issues.apache.org/jira/browse/FLINK-13231). It is a small patch to
avoid a future API change, so we should be able to merge it very shortly.
Would it be possible to
Hi,
https://issues.apache.org/jira/browse/FLINK-13752 turns out to be an actual
blocker, so we would have to close this RC now in favor of a new one.
Since we are already quite past the planned release time for 1.9.0, I would
like to limit the new changes included in RC3 to only the following:
-
We should investigate the performance regression, but regardless of the
regression I vote +1.
I have verified the following things:
- Jobs running on YARN (both session and per-job mode) with high availability enabled.
- Simulate JM and TM failures.
- Simulate temporary network partition.
Best,
tison.
Stephan Ewen
For reference, this is the JIRA issue about the regression in question:
https://issues.apache.org/jira/browse/FLINK-13752
On Fri, Aug 16, 2019 at 10:57 AM Guowei Ma wrote:
Hi all,
I agree with Till that we should investigate the suspected performance
regression issue before proceeding with the release.
If we do not find any problem, I vote +1.
I have verified the following behaviour:
- Built Flink with a custom Hadoop version
- YARN deployment with and without
Hi Till,
I can send the job to you offline.
It is just a datastream job and does not use TwoInputSelectableStreamTask.
A->B
     \
      C
     /
D->E
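A rough DataStream-level sketch of that shape, purely illustrative (trivial
stand-in sources and functions, not the actual benchmark job):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.co.CoMapFunction;

    public class TwoInputTopologySketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // A -> B
            DataStream<String> b = env.fromElements("a1", "a2")
                    .map(new MapFunction<String, String>() {
                        @Override
                        public String map(String v) { return v + "-b"; }
                    });

            // D -> E
            DataStream<String> e = env.fromElements("d1", "d2")
                    .map(new MapFunction<String, String>() {
                        @Override
                        public String map(String v) { return v + "-e"; }
                    });

            // C: the two-input operator fed by both B and E
            b.connect(e)
                    .map(new CoMapFunction<String, String, String>() {
                        @Override
                        public String map1(String v) { return v; }
                        @Override
                        public String map2(String v) { return v; }
                    })
                    .print();

            env.execute("two-input-topology-sketch");
        }
    }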
Best,
Guowei
On Fri, Aug 16, 2019 at 4:34 PM Till Rohrmann wrote:
Thanks for reporting this issue Guowei. Could you share a bit more details
what the job exactly does and which operators it uses? Does the job use
the new `TwoInputSelectableStreamTask`, which might cause the performance
regression?
I think it is important to understand where the problem comes
Hi,
-1
We have a benchmark job, which includes a two-input operator.
This job shows a big performance regression with 1.9 compared to 1.8.
It's still not very clear why this regression happens.
Best,
Guowei
On Fri, Aug 16, 2019 at 3:27 PM Yu Li wrote:
+1 (non-binding)
- checked release notes: OK
- checked sums and signatures: OK
- source release
  - contains no binaries: OK
  - contains no 1.9-SNAPSHOT references: OK
  - build from source: OK (8u102)
  - mvn clean verify: OK (8u102)
- binary release
  - no examples appear to be
Hi Jark,
Thanks for letting me know that it's been like this in previous releases.
Though I don't think that's the right behavior, it can be discussed for a
later release. Thus I retract my -1 for RC2.
Bowen
On Thu, Aug 15, 2019 at 7:49 PM Jark Wu wrote:
Hi Bowen,
Thanks for reporting this.
However, I don't think this is an issue. IMO, it is by design.
`tEnv.listUserDefinedFunctions()` in the Table API and `show functions;` in
the SQL CLI are intended to return only the registered UDFs, not
built-in functions.
This is also the behavior in
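To illustrate the contract with a minimal sketch (assuming the 1.9 unified
TableEnvironment; `MyUpper` is a hypothetical UDF used only for this
example):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.functions.ScalarFunction;

    public class ListFunctionsSketch {
        // Hypothetical scalar UDF, only for this sketch.
        public static class MyUpper extends ScalarFunction {
            public String eval(String s) {
                return s == null ? null : s.toUpperCase();
            }
        }

        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance()
                            .useBlinkPlanner().inStreamingMode().build());
            tEnv.registerFunction("myUpper", new MyUpper());
            // Prints only the explicitly registered UDF; built-in functions
            // such as CONCAT are not part of this list.
            for (String name : tEnv.listUserDefinedFunctions()) {
                System.out.println(name);
            }
        }
    }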
-1 for RC2.
I found a bug https://issues.apache.org/jira/browse/FLINK-13741, and I
think it's a blocker. The bug means that currently, users who call
`tEnv.listUserDefinedFunctions()` in the Table API or `show functions;`
through SQL are not able to see Flink's built-in functions.
I'm preparing a fix
Thanks for all the test efforts, verifications and votes so far.
So far, things are looking good, but we still require one more PMC binding
vote for this RC to be the official release, so I would like to extend the
vote time for 1 more day, until *Aug. 16th 17:00 CET*.
In the meantime, the
Great, then I have no other comments on legal check.
Best,
Kurt
On Thu, Aug 15, 2019 at 9:56 PM Chesnay Schepler wrote:
The licensing items aren't a problem; we don't care about Flink modules
in NOTICE files, and we don't have to update the source-release
licensing since we don't have a pre-built version of the WebUI in the
source.
On 15/08/2019 15:22, Kurt Young wrote:
Thanks Kurt for checking that.
The mentioned problem with table-examples is that, when working on
FLINK-13558, I forgot to add a dependency on flink-examples-table to
flink-dist. So this module is not built if only flink-dist with its
dependencies is built (this happens in the release scripts:
After going through the licenses, I found 2 suspicious items, but I am not
sure whether they are valid:
1. flink-state-processing-api is packaged into the flink-dist jar, but is
not included in the NOTICE-binary file (the one under the root directory)
like other modules.
2. flink-runtime-web distributed some
Hi Gordon & Timo,
Thanks for the feedback, and I agree with it. I will document this in the
release notes.
Best,
Kurt
On Thu, Aug 15, 2019 at 6:14 PM Tzu-Li (Gordon) Tai wrote:
Hi Kurt,
With the same argument as before, given that it is mentioned in the release
announcement that it is a preview feature, I would not block this release
because of it.
Nevertheless, it would be important to mention this explicitly in the
release notes [1].
Regards,
Gordon
[1]
+1 (non-binding)
Tested on AWS EMR YARN: 1 master and 4 worker nodes (m5.xlarge: 4 vCores, 16
GiB).
EMR runs only on Java 8. Fine-grained recovery is enabled by default.
Modified E2E test scripts can be found here (asserting output):
https://github.com/azagrebin/flink/commits/FLINK-13597
Batch
Hi Kurt,
I agree that this is a serious bug. However, I would not block the
release because of this. As you said, there is a workaround and the
`execute()` works in the most common case of a single execution. We can
fix this in a minor release shortly after.
What do others think?
Regards,
Hi,
We just found a serious bug in the Blink planner:
https://issues.apache.org/jira/browse/FLINK-13708
When a user reuses the table environment instance and calls the `execute`
method multiple times for different SQL statements, a later call will
trigger the earlier ones to be re-executed.
It's a serious bug
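A minimal sketch of the pattern that triggers it (assuming the Blink
planner; registration of the `src`, `sink1` and `sink2` tables is elided):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class ExecuteTwiceSketch {
        public static void main(String[] args) throws Exception {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance()
                            .useBlinkPlanner().inStreamingMode().build());
            // ... register source table `src` and sinks `sink1`, `sink2` ...
            tEnv.sqlUpdate("INSERT INTO sink1 SELECT * FROM src");
            tEnv.execute("job1"); // executes the first INSERT
            tEnv.sqlUpdate("INSERT INTO sink2 SELECT * FROM src");
            tEnv.execute("job2"); // bug: re-triggers the first INSERT as well
        }
    }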
+1 (non-binding)
Jepsen test suite passed 10 times consecutively
On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek wrote:
Hi Robert,
I will do it today.
Best,
Kurt
On Wed, Aug 14, 2019 at 11:55 PM Robert Metzger wrote:
Has anybody verified the inclusion of all bundled dependencies into the
NOTICE files?
I'm asking because we had some issues with that in the last release(s).
On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek wrote:
+1
I did some testing on a Google Cloud Dataproc cluster (it gives you a managed
YARN and Google Cloud Storage (GCS)):
- tried both YARN session mode and YARN per-job mode, also using bin/flink
list/cancel/etc. against a YARN session cluster
- ran examples that write to GCS, both with the
Hi,
Thanks for preparing this release candidate. I have verified the following:
- verified the checksums and GPG files match the corresponding release files
- verified that the source archives do not contain any binaries
- built the source release with Scala 2.11 successfully.
- ran `mvn
Hi Gordon,
I have verified the following things:
- built the source release with Scala 2.12 and Scala 2.11 successfully
- checked/verified signatures and hashes
- checked that all POM files point to the same version
- ran some flink-table related end-to-end tests locally, and they succeeded
(except
Hi Richard,
although I can see that it would be handy for users who have PubSub set up,
I would rather not include examples which require an external dependency
in the Flink distribution. I think examples should be self-contained. My
concern is that we would bloat the distribution for many
Hi Till,
After thinking about it, we can use VARCHAR as an alternative to
TIMESTAMP/TIME/DATE.
I'm fine with not recognizing it as a blocker issue.
We can fix it in 1.9.1.
Thanks,
Jark
On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder wrote:
Hello all,
I noticed the PubSub example jar is not included in the examples/ dir of
flink-dist. I've created https://issues.apache.org/jira/browse/FLINK-13700
+ https://github.com/apache/flink/pull/9424/files to fix this.
I will leave it up to you to decide if we want to add this to 1.9.0.
Hi Jark,
thanks for reporting this issue. Could this be a documented limitation of
Blink's preview version? I think we have agreed that the Blink SQL planner
will be a preview feature rather than production-ready. Hence it could
still contain some bugs. My concern is that there might still be
Hi all,
I just found an issue when testing connector DDLs against the Blink planner
for RC2.
This issue means that DDL statements don't work when they contain
TIMESTAMP/DATE/TIME types.
I have created an issue, FLINK-13699 [1], and a pull request for this.
IMO, this can be a blocker issue for the 1.9 release. Because
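For reference, a minimal example of the failing shape (the connector
properties below are illustrative only, not the exact ones from my test):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class TimeTypesDdlSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance()
                            .useBlinkPlanner().inStreamingMode().build());
            // A DDL whose schema contains TIMESTAMP/DATE/TIME types; with
            // the bug, such a table cannot be resolved via the TableFactory.
            tEnv.sqlUpdate(
                    "CREATE TABLE orders (" +
                    "  order_id BIGINT," +
                    "  order_time TIMESTAMP," +
                    "  order_date DATE," +
                    "  pickup_time TIME" +
                    ") WITH (" +
                    "  'connector.type' = 'kafka'," +
                    "  'update-mode' = 'append'" +
                    ")");
        }
    }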
Thanks Gordon, will do that.
On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai wrote:
That sounds good to me. I was initially trying to piggyback it into an RC,
but fell behind and was not able to catch the last one.
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann wrote:
Concerning FLINK-13231:
Since this is a @PublicEvolving interface, technically it is ok to break it
across releases (including across bugfix releases?).
So, @Becket if you do merge it now, please mark the fix version as 1.9.1.
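(For readers following along: a minimal sketch of what @PublicEvolving
marks, using a hypothetical class rather than the actual connector code:)

    import org.apache.flink.annotation.PublicEvolving;

    // Hypothetical example class, not the actual PubSub connector code.
    // @PublicEvolving marks API that is public but may still change across
    // releases, unlike @Public API, which is kept stable.
    @PublicEvolving
    public class ExampleConnectorOptions {
        private final long maxRatePerSecond;

        public ExampleConnectorOptions(long maxRatePerSecond) {
            this.maxRatePerSecond = maxRatePerSecond;
        }

        public long getMaxRatePerSecond() {
            return maxRatePerSecond;
        }
    }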
During the voting process, in case a new RC is created, we
I agree that it would be nicer. Not sure whether we should cancel the RC
for this issue, given that it has been open for quite some time and wasn't
addressed until very recently. Maybe we could include it on the shortlist
of nice-to-do things which we do in case the RC gets cancelled.
Cheers,
Hi Till,
Yes, I think we have already documented it that way. So technically
speaking it is fine to change it later. It is just better if we could
avoid doing that.
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann wrote:
Could we say that the PubSub connector is public evolving instead?
Cheers,
Till
On Mon, Aug 12, 2019 at 3:18 PM Becket Qin wrote:
Hi all,
FLINK-13231 (palindrome!) has a minor Google PubSub connector API change
regarding how to configure rate limiting. The GCP PubSub connector is a
newly introduced connector in 1.9, so it would be nice to include this
change in 1.9 rather than later, to avoid a public API change. I am thinking
Hi Kurt,
Thanks for your explanation. For [1], I think at least we should change
the JIRA issue fields, e.g. unset the fix version. For [2], I can see that
the change is all in test scope, but I wonder if such a commit still
invalidates the release candidate. IIRC previous RC VOTE threads would contain a
Hi Zili,
Thanks for the heads up. The 2 issues you mentioned were opened by me. We
have found the reason for the second issue, and a PR was opened for it. As
said in JIRA, the issue was just a testing problem and should not be a
blocker for the 1.9.0 release. However, we will still merge it into the
1.9 branch.
Hi,
I just noticed that a few hours ago there were two new issues
filed and marked as blockers for 1.9.0 [1][2].
Now [1] is closed as a duplicate but is still marked as
a blocker for 1.9.0, while [2] is downgraded to "Major" priority
but is still targeted to be fixed in 1.9.0.
It would be worthwhile to have
Thanks Stephan :)
That looks easy enough, will try!
Gyula
On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen wrote:
Hi Gyula!
Thanks for reporting this.
Can you try to simply build Flink without Hadoop and then export
HADOOP_CLASSPATH to point to your Cloudera libs?
That is the recommended way these days.
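For example, something along these lines (assuming the cluster's hadoop
binary is on the PATH):

    export HADOOP_CLASSPATH=`hadoop classpath`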
Best,
Stephan
On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra wrote:
Thanks Dawid,
In the meantime I also figured out that I need to build the
https://github.com/apache/flink-shaded project locally with
-Dhadoop.version set to the specific hadoop version if I want something
different.
Cheers,
Gyula
On Mon, Aug 12, 2019 at 9:54 AM Dawid Wysakowicz wrote:
Hi Gyula,
As for the issues with mapr maven repository, you might have a look at
this message:
https://lists.apache.org/thread.html/77f4db930216e6da0d6121065149cef43ff3ea33c9ffe9b1a3047210@%3Cdev.flink.apache.org%3E
Try using the "unsafe-mapr-repo" profile.
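For example, the build command you posted with the extra profile added
(other flags unchanged):

    mvn clean install -DskipTests -Pvendor-repos,unsafe-mapr-repo -Dhadoop.version=2.6.0 -Pinclude-hadoop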
Best,
Dawid
On 11/08/2019 19:31,
Hi again,
How do I build the RC locally with the Hadoop version specified? It seems
like no matter what I do, I run into dependency problems with the shaded
Hadoop dependencies.
This seems to have worked in the past.
There might be some documentation somewhere that I couldn't find, so I would
Hi!
I am trying to build 1.9.0-rc2 with the -Pvendor-repos profile enabled. I
get the following error:
mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.6.0 -Pinclude-hadoop
(ignore that the Hadoop version is not a vendor Hadoop version)
[ERROR] Failed to execute goal on project
Hi all,
Release candidate #2 for Apache Flink 1.9.0 is now ready for your review.
This is the first voting candidate for 1.9.0, following the preview
candidates RC0 and RC1.
Please review and vote on release candidate #2 for version 1.9.0, as
follows:
[ ] +1, Approve the release
[ ] -1, Do not