I agree that this is a serious bug. However, I would not block the
release because of this. As you said, there is a workaround, and
`execute()` works in the most common case of a single execution. We can
fix this in a minor release shortly after.
What do others think?
On 15.08.19 at 11:23, Kurt Young wrote:
We just found a serious bug in the blink planner:
When a user reuses the table environment instance and calls the `execute`
method multiple times for different SQL statements, a later call will
trigger the earlier ones to be executed again.
It's a serious bug, but it seems we also have a workaround, which is to
never reuse the table environment object. I'm not sure if we should treat
this as a blocker issue for 1.9.0.
What's your opinion?
On Thu, Aug 15, 2019 at 2:01 PM Gary Yao <g...@ververica.com> wrote:
Jepsen test suite passed 10 times consecutively
On Wed, Aug 14, 2019 at 5:31 PM Aljoscha Krettek <aljos...@apache.org>
I did some testing on a Google Cloud Dataproc cluster (it gives you a
managed YARN and Google Cloud Storage (GCS)):
- tried both YARN session mode and YARN per-job mode, also using
bin/flink list/cancel/etc. against a YARN session cluster
- ran examples that write to GCS, both with the native Hadoop FileSystem
and a custom “plugin” FileSystem
- ran stateful streaming jobs that use GCS as a checkpoint backend
- tried running SQL programs on YARN using the SQL Cli: this worked for
YARN session mode but not for YARN per-job mode. Looking at the code, I
don’t think per-job mode would work, given how it is implemented.
I think it’s an OK restriction to have for now
- in all the testing I had fine-grained recovery (region failover)
enabled but I didn’t simulate any failures
On 14. Aug 2019, at 15:20, Kurt Young <ykt...@gmail.com> wrote:
Thanks for preparing this release candidate. I have verified the following:
- verified that the checksums and GPG files match the corresponding release artifacts
- verified that the source archives do not contain any binaries
- built the source release with Scala 2.11 successfully
- ran `mvn verify` locally; met 2 issues ([FLINK-13687] and one other),
both of which are not release blockers. Other than that, all tests passed.
- ran all e2e tests which don't need to download external packages
(downloading them from China is almost impossible); all passed
- started a local cluster and ran some examples; met a small web UI display
issue [FLINK-13591], which is also not a release blocker
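As a side note, the checksum part of this kind of verification can be sketched roughly as below; `artifact.tgz` is a dummy placeholder (real artifact names come from the dist.apache.org staging area), and here we generate the `.sha512` file ourselves purely to demonstrate the check:

```shell
# Create a dummy stand-in for a release artifact.
echo "dummy release artifact" > artifact.tgz

# A release manager publishes the .sha512 next to the artifact;
# we generate it locally just to show the verification step.
sha512sum artifact.tgz > artifact.tgz.sha512

# Verify the checksum; prints "artifact.tgz: OK" on success.
sha512sum -c artifact.tgz.sha512

# GPG signature verification follows the same pattern (needs the release
# manager's public key from the project's KEYS file):
#   gpg --verify artifact.tgz.asc artifact.tgz
```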
Although we have pushed some fixes for the blink planner and hive
integration after RC2, considering that these are both preview features,
I lean towards releasing without these fixes.
+1 from my side. (binding)
On Wed, Aug 14, 2019 at 5:13 PM Jark Wu <imj...@gmail.com> wrote:
I have verified the following things:
- built the source release with Scala 2.12 and Scala 2.11 successfully
- checked/verified signatures and hashes
- checked that all POM files point to the same version
- ran some flink table related end-to-end tests locally and succeeded
(except TPC-H e2e failed which is reported in FLINK-13704)
- started clusters for both Scala 2.11 and 2.12, ran examples, verified
the web ui and log output; nothing unexpected
- started a cluster and ran a SQL query doing a temporal join of a kafka
source with a mysql jdbc table, writing the results back to kafka, using
DDL to define the sources and sinks; looks good
- reviewed the release PR
As FLINK-13704 is not recognized as a blocker issue, +1 from my side
On Tue, 13 Aug 2019 at 17:07, Till Rohrmann <trohrm...@apache.org>
Although I can see that it would be handy for users who use PubSub,
I would rather not include examples which require an external system
in the Flink distribution. I think examples should be self-contained. My
concern is that we would bloat the distribution for many users at the
benefit of a few. Instead, I think it would be better to make these
examples available differently, maybe through Flink's ecosystem page or
maybe a new examples section in Flink's documentation.
On Tue, Aug 13, 2019 at 9:43 AM Jark Wu <imj...@gmail.com> wrote:
After thinking about it, since we can use VARCHAR as an alternative,
I'm fine with not recognizing it as a blocker issue.
We can fix it in 1.9.1.
On Tue, 13 Aug 2019 at 15:10, Richard Deurwaarder <rich...@xeli.eu>
I noticed the PubSub example jar is not included in the examples/
directory of flink-dist. I've created
https://github.com/apache/flink/pull/9424/files to fix this.
I will leave it up to you to decide if we want to add this to the release.
On Tue, Aug 13, 2019 at 9:04 AM Till Rohrmann <
Thanks for reporting this issue. Could this be a documented limitation of
Blink's preview version? I think we have agreed that the Blink SQL planner
will be a preview feature rather than production ready. Hence it may
still contain some bugs. My concern is that there might still be more
issues which we'll discover bit by bit, and that we could postpone the
release further and further if we say Blink bugs are blockers.
On Tue, Aug 13, 2019 at 7:42 AM Jark Wu <imj...@gmail.com> wrote:
I just found an issue when testing connector DDLs against the blink planner.
The issue leads to DDLs not working when they contain
timestamp/date/time types. I have created an issue, FLINK-13699, and a pull
request for it. IMO, this can be a blocker issue for the 1.9 release, because
timestamp/date/time are primitive types, and this will break common DDL
use cases. However, I want to hear more thoughts from the community on
whether to recognize it as a blocker.
On Mon, 12 Aug 2019 at 22:46, Becket Qin <becket....@gmail.com>
Thanks Gordon, will do that.
On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai <
Since this is a @PublicEvolving interface, technically it is allowed to change
it across releases (including across bugfix releases?).
So, @Becket, if you do merge it now, please mark the fix version accordingly.
During the voting process, in case a new RC is created, we
check the list of changes compared to the previous RC, and update the "Fix
Version" of the corresponding JIRAs to the right version
(e.g., it would be corrected to 1.9.0 instead of 1.9.1).
On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann <
I agree that it would be nicer. I'm not sure whether we should block the RC
for this issue, given that it has been open for quite some time and was not
addressed until very recently. Maybe we could include it on the list
of nice-to-do things which we do in case the RC gets cancelled.
On Mon, Aug 12, 2019 at 4:18 PM Becket Qin <
Yes, I think we have already documented it that way. So strictly
speaking, it is fine to change it later. It is just better to avoid doing so.
Jiangjie (Becket) Qin
On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann <
Could we say that the PubSub connector is public evolving?
On Mon, Aug 12, 2019 at 3:18 PM Becket Qin <
FLINK-13231 (palindrome!) has a minor Google PubSub API change
regarding how to configure rate limiting. The GCP PubSub connector is
newly introduced in 1.9, so it would be nice to get this
into 1.9 rather than later, to avoid a public API change afterwards. I'm not
making this a blocker for 1.9; I want to check what others think.
Jiangjie (Becket) Qin
On Mon, Aug 12, 2019 at 2:04 PM Zili Chen <
Thanks for your explanation. For that issue, I think we should at least
update the JIRA issue fields, like unsetting the fixed version.
The change is all in test scope, but I wonder if such a change should go into
the release candidate. IIRC previous RC VOTE threads mentioned a
release manual/guide; I will try to look it up, too.
On Mon, Aug 12, 2019, Kurt Young <ykt...@gmail.com> wrote:
Thanks for the heads up. The 2 issues you mentioned are known. We have
found the reason for the second issue, and a PR was opened. That
issue was just a testing problem and should not be a blocker, but
we will still merge the fix into the 1.9 branch.
On Mon, Aug 12, 2019 at 5:38 PM Zili Chen <
I just noticed that a few hours ago two issues were
filed and marked as blockers for 1.9.0.
Now one is closed as a duplicate but is still marked as
a blocker for 1.9.0, while the other is downgraded
but is still targeted to be fixed in 1.9.0.
It would be worth having the attention of our release managers.
On Mon, Aug 12, 2019, Gyula Fóra <gyula.f...@gmail.com> wrote:
Thanks Stephan :)
That looks easy enough, will try!
On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen <
Thanks for reporting this.
Can you try to simply build Flink without Hadoop and point
HADOOP_CLASSPATH to your Cloudera libs?
That is the recommended way these days.
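For reference, a rough sketch of that approach; the runtime step assumes a cluster node where the vendor's `hadoop` command is on the PATH, which is an assumption about the target environment, not a detail from this thread:

```shell
# 1. Build Flink without activating the include-hadoop profile,
#    so no Hadoop jars are bundled into flink-dist:
mvn clean install -DskipTests

# 2. At runtime, point Flink at the vendor-provided Hadoop jars,
#    e.g. on a node where the vendor's `hadoop` command is installed:
export HADOOP_CLASSPATH=$(hadoop classpath)

# Flink's startup scripts pick up HADOOP_CLASSPATH, so YARN deployments
# use the cluster's own Hadoop instead of a bundled version.
```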
On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra <
In the meantime I also figured out that I can build with
-Dhadoop.version set to the specific hadoop version.
On Mon, Aug 12, 2019 at 9:54 AM Dawid
As for the issues with the mapr maven repository:
try using the "unsafe-mapr-repo" profile.
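A build invocation with that profile enabled might look like the following; the surrounding flags are illustrative, taken from the build commands earlier in this thread:

```shell
# Activate the unsafe-mapr-repo profile so the MapR repository is allowed
# during dependency resolution.
mvn clean install -DskipTests -Punsafe-mapr-repo
```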
On 11/08/2019 19:31, Gyula Fóra wrote:
How do I build the RC locally with Hadoop included?
No matter what I do, I run into errors.
This seems to have worked in the past.
There might be some documentation I am missing; I would
appreciate any pointers :)
On Sun, Aug 11, 2019 at 6:57 PM Gyula
I am trying to build 1.9.0-rc2 and
get the following error when running:
mvn clean install -DskipTests -Pinclude-hadoop
[ERROR] Failed to execute goal on …: could not
resolve dependencies for project …:
artifact descriptor for …: unable to find
valid certification path to requested target
This looks like a TLS error. It might not be related to the RC, but it
could be good to know.
On Fri, Aug 9, 2019 at 6:26 PM Tzu-Li
Please note that the unresolved issues with fix
version "1.9.0", as seen in the JIRA release notes, are mostly tasks to
update documents for new features.
I've left them associated with 1.9.0, since they should be
updated for 1.9.0 soon, along with the release.
On Fri, Aug 9, 2019 at 6:17 PM Tzu-Li
Release candidate #2 for Apache Flink 1.9.0.
This is the first voting candidate, following the preview
candidates RC0 and RC1.
Please review and vote on the release candidate:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes,
* the official Apache source release and binary convenience releases
deployed to dist.apache.org,
* all artifacts to be deployed to the Maven Central Repository,
* source code tag
Robert is also preparing a pull request for the release announcement,
which is in the works, and will update this thread with the pull
request shortly afterwards.
The vote will be open for *at least 72 hours*.
Please cast your votes before *Aug. …*.
The release is adopted by majority approval, with at least 3 PMC affirmative votes.