Thanks Robert for coordinating the release and thanks to the team for all
the hard work!

On Thu, Feb 2, 2017 at 9:32 PM, Ufuk Celebi <u...@apache.org> wrote:

> Thanks for all your effort Robert and the rest of the team! :-)
>
> On Thu, Feb 2, 2017 at 11:43 AM, Robert Metzger <rmetz...@apache.org>
> wrote:
> > I hereby officially close the voting of the 1.2.0 release with 8 +1 votes
> > (5 binding) and no -1 votes.
> >
> > +1 votes:
> > - Ufuk
> > - Stephan
> > - Gordon (non-binding)
> > - Sree (non-binding)
> > - Gyula
> > - Robert
> > - Till
> > - Fokko (non-binding)
> >
> >
> > Thanks a lot to everybody who helped test this and the previous release
> > candidates!
> > We managed to get the 1.2.0 release out in roughly one month, starting
> > from the RC0 testing.
> > I think this shows that we will be able to stick to the time-based
> > release schedule.
> >
> > I'll now publish the artifacts to maven central and the mirror.
> > Once I'm done with that, I'll start writing the release announcement blog
> > post here:
> > https://docs.google.com/document/d/1-_ycVaf1WseDj-WmgNZqI33dWCFf-gwmqnhhJcUCqog/edit?usp=sharing
> > Feel free to help :) (I might change the sharing to comment/suggest only
> > once the majority of the text has been written).
> >
> >
> >
> >
> > On Tue, Jan 31, 2017 at 5:56 PM, Driesprong, Fokko <fo...@driesprong.frl>
> > wrote:
> >
> >> +1 Looks great
> >>
> >> 2017-01-31 17:53 GMT+01:00 Till Rohrmann <trohrm...@apache.org>:
> >>
> >> > +1
> >> >
> >> > - Built Flink with Hadoop 2.7.1
> >> > - Tested SBT quickstarts
> >> > - Ran Flink on Mesos (standalone and HA mode) executing the WindowJoin
> >> > example
> >> > - Ran example job using the RemoteEnvironment
> >> >
> >> > On Tue, Jan 31, 2017 at 3:06 PM, Robert Metzger <rmetz...@apache.org>
> >> > wrote:
> >> >
> >> > > +1
> >> > >
> >> > > - Downloaded the hadoop26, Scala 2.10 build and ran it overnight on
> >> > > a CDH 5.9.0 cluster using YARN (with HA), HDFS (with HA), and all
> >> > > services with Kerberos
> >> > > - Built a streaming job using the artifacts from the staging
> >> > > repository
> >> > > - Ran a job which is specifically designed to misbehave. I didn't
> >> > > see any release-critical issues
> >> > > - Killed the JM with HA enabled and watched the job recover
> >> > > successfully
> >> > > - Checked that the version in the quickstarts is set correctly.
> >> > >
> >> > >
> >> > > On Tue, Jan 31, 2017 at 8:43 AM, Gyula Fóra <gyf...@apache.org>
> >> > > wrote:
> >> > >
> >> > > > +1 from me
> >> > > >
> >> > > > - Built with tests on osx/linux for Hadoop 2.6.0
> >> > > > - Ran several large-scale streaming programs for days on RC2 and
> >> > > > now for almost a day on RC3 without any issues on YARN
> >> > > > - Tested Savepoint/External checkpoints/rescaling
> >> > > > - Tested user metrics with custom reporter
> >> > > >
> >> > > > Gyula
> >> > > >
> >> > > > Tzu-Li (Gordon) Tai <tzuli...@apache.org> wrote (on Tue, Jan 31,
> >> > > > 2017, 7:06 AM):
> >> > > >
> >> > > > > +1 (non-binding)
> >> > > > >
> >> > > > > - Tested TaskManager failures on Mesos / Standalone with
> >> > > > > exactly-once guarantees
> >> > > > > - Above tests also done against Kafka 0.8 / 0.9 / 0.10; offsets
> >> > > > > committed correctly back to ZK (manual check for 0.8 due to
> >> > > > > FLINK-4822)
> >> > > > > - Tested Kafka 0.10 server-side timestamps
> >> > > > > - Verified Async I/O with fake async operation, checked
> >> > > > > exactly-once guarantees
> >> > > > >
> >> > > > >
> >> > > > > Not rerun for RC3, but it is reasonable to forward:
> >> > > > > - Kerberos authentication against YARN, Kafka, HDFS, ZooKeeper.
> >> > > > > No lingering files (including shipped keytabs). Logs sane.
> >> > > > > - Tested TaskManager / JobManager failures on YARN. Tested start
> >> > > > > from savepoint on YARN.
> >> > > > >
> >> > > > > On January 31, 2017 at 2:45:29 AM, Stephan Ewen (se...@apache.org)
> >> > > > > wrote:
> >> > > > >
> >> > > > > +1 from my side
> >> > > > >
> >> > > > > - Checked the LICENSE and NOTICE files
> >> > > > > - No binary executable in the release
> >> > > > > - Clean build and tests for Linux Scala 2.11, Hadoop 2.6.2
> >> > > > > - Ran a streaming program with Async I/O against Redis
> >> > > > > - Ran examples on a local cluster - all log files are sane
> >> > > > > - checked the contents of the uber-jar file (no un-shaded guava)
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > On Mon, Jan 30, 2017 at 5:40 PM, Ufuk Celebi <u...@apache.org>
> >> > > > > wrote:
> >> > > > >
> >> > > > > > +1 from my side:
> >> > > > > >
> >> > > > > > - Re-verified signatures and checksums
> >> > > > > > - Re-checked out the Java quickstarts and ran the jobs
> >> > > > > > - Re-checked that all poms point to 1.2.0
> >> > > > > > - Re-ran streaming state machine with Kafka source, RocksDB
> >> > > > > > backend, and master and worker failures (standalone cluster)
> >> > > > > > - Tested externalized checkpoints
> >> > > > > > - Verified lingering files fix for RocksDB
> >> > > > > > - Verified sane results in checkpoints monitoring UI
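[Editor's note: a minimal sketch of the "re-verified signatures and checksums" step from the checklist above. The artifact filename is hypothetical, and a stand-in file is created locally so the checksum round-trip is self-contained; a real check would download the RC artifact together with its detached `.asc` signature and checksum file, import http://www.apache.org/dist/flink/KEYS via `gpg --import`, and run `gpg --verify` as well.]

```shell
# Create a stand-in for the release artifact (hypothetical name).
echo "stand-in artifact contents" > flink-1.2.0-bin-hadoop2.tgz

# Produce a checksum file, as would ship alongside the artifact.
sha512sum flink-1.2.0-bin-hadoop2.tgz > flink-1.2.0-bin-hadoop2.tgz.sha512

# Verify the artifact against the checksum file; prints "... OK" on success.
sha512sum -c flink-1.2.0-bin-hadoop2.tgz.sha512
```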
> >> > > > > >
> >> > > > > > Did not re-do the following tests, but I think it's reasonable
> >> > > > > > to forward these previous results to this RC:
> >> > > > > > - Migrated multiple jobs via savepoint from 1.1.4 to 1.2.0 with
> >> > > > > > Kryo types, session windows (w/o lateness), and operator and
> >> > > > > > keyed state for all three backends
> >> > > > > > - Rescaled the same jobs from 1.2.0 savepoints with all three
> >> > > > > > backends
> >> > > > > > - Verified the "migration namespace serializer" fix
> >> > > > > >
> >> > > > > >
> >> > > > > > On Mon, Jan 30, 2017 at 8:40 AM, Robert Metzger
> >> > > > > > <rmetz...@apache.org> wrote:
> >> > > > > > > Dear Flink community,
> >> > > > > > >
> >> > > > > > > Please vote on releasing the following candidate as Apache
> >> > > > > > > Flink version 1.2.0.
> >> > > > > > >
> >> > > > > > > The commit to be voted on:
> >> > > > > > > 1c659cf4
> >> > > > > > > (http://git-wip-us.apache.org/repos/asf/flink/commit/1c659cf4)
> >> > > > > > >
> >> > > > > > > Branch:
> >> > > > > > > release-1.2.0-rc3
> >> > > > > > > (https://git1-us-west.apache.org/repos/asf/flink/repo?p=flink.git;a=shortlog;h=refs/heads/release-1.2.0-rc3)
> >> > > > > > >
> >> > > > > > > The release artifacts to be voted on can be found at:
> >> > > > > > > http://people.apache.org/~rmetzger/flink-1.2.0-rc3/
> >> > > > > > >
> >> > > > > > > The release artifacts are signed with the key with the
> >> > > > > > > fingerprint D9839159:
> >> > > > > > > http://www.apache.org/dist/flink/KEYS
> >> > > > > > >
> >> > > > > > > The staging repository for this release can be found at:
> >> > > > > > > https://repository.apache.org/content/repositories/orgapacheflink-1114
> >> > > > > > >
> >> > > > > > > -------------------------------------------------------------
> >> > > > > > >
> >> > > > > > > The vote ends at 11:00:00 am CET on Thursday, February 2,
> >> > > > > > > 2017.
> >> > > > > > >
> >> > > > > > > Please test the release as soon as possible, so that it can
> >> > > > > > > be cancelled as early as possible if necessary.
> >> > > > > > > To make the testing easier, I've created this document to
> >> > > > > > > track what has already been tested and what still needs to be
> >> > > > > > > tested:
> >> > > > > > > https://docs.google.com/document/d/1MX-8l9RrLly3UmZMODHBnuZUrK_n-DGIBLjFKyCrTAs/edit?usp=sharing
> >> > > > > > > Feel free to add more tests or change existing ones.
> >> > > > > > >
> >> > > > > > > [ ] +1 Release this package as Apache Flink 1.2.0
> >> > > > > > > [ ] -1 Do not release this package, because ...
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
>
