This vote has passed with 10 binding +1 votes. I'll go ahead and finalize
and package this release.

+1s (all binding):
Marton Balassi
Vasia Kalavri
Gyula Fora
Henry Saputra
Stephan Ewen
Till Rohrmann
Robert Metzger
Maximilian Michels
Aljoscha Krettek
Ufuk Celebi

There are no 0s or -1s.

> On Wed, Aug 10, 2016 at 3:17 PM, Ufuk Celebi <u...@apache.org> wrote:
>> +1 to release this as Flink 1.1.1.
>>
>> I've verified the checksums and signatures.
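>>
>> Verification of this kind amounts to recomputing each artifact's digest
>> and comparing it against the published checksum file, alongside
>> `gpg --verify` on the .asc signature. A minimal sketch of the digest
>> comparison (an illustration, not the actual script used for this release):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class ChecksumCheck {

    // Stream a release artifact through SHA-512 and return the lowercase hex digest.
    static String sha512Hex(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Compare the computed digest to the published one, tolerating case and whitespace.
    static boolean matchesPublished(Path file, String published) throws Exception {
        return sha512Hex(file).equals(published.trim().toLowerCase());
    }
}
```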
>>
>> On Wed, Aug 10, 2016 at 11:13 AM, Aljoscha Krettek <aljos...@apache.org> wrote:
>>> +1 (binding)
>>>
>>> On Wed, 10 Aug 2016 at 10:45 Maximilian Michels <m...@apache.org> wrote:
>>>
>>>> +1 (binding)
>>>>
>>>> On Wed, Aug 10, 2016 at 9:54 AM, Robert Metzger <rmetz...@apache.org> wrote:
>>>> > +1 to release 1.1.1
>>>> >
>>>> > I've checked the files in the staging repository and reproduced one of the
>>>> > issues reported on user@. With 1.1.1, the issue is gone.
>>>> >
>>>> > The exception with 1.1.0:
>>>> > Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException:
>>>> > Failed to submit job 8a1b148b34b313cf4539131edd5f276f (Flink Java Job at
>>>> > Tue Aug 09 18:56:04 CEST 2016)
>>>> > at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1281)
>>>> > at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:478)
>>>> > at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>>>> > at org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>>>> > at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>>>> > at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>>>> > at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>>>> > at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>>>> > at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>>>> > at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>>>> > at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:121)
>>>> > at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>>>> > at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>>>> > at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>>>> > at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>>>> > at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>>>> > at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>> > at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>> > at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>> > at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>> > Caused by: org.apache.flink.runtime.JobException: Creating the input splits
>>>> > caused an error: Implementing class
>>>> > at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:172)
>>>> > at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:695)
>>>> > at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1178)
>>>> > ... 19 more
>>>> > Caused by: java.lang.IncompatibleClassChangeError: Implementing class
>>>> > at java.lang.ClassLoader.defineClass1(Native Method)
>>>> > at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>>>> > at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>>>> > at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>>>> > at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>>>> > at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>>>> > at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>>>> > at java.security.AccessController.doPrivileged(Native Method)
>>>> > at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>>>> > at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>>> > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>>>> > at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>>> > at java.lang.Class.forName0(Native Method)
>>>> > at java.lang.Class.forName(Class.java:264)
>>>> > at org.apache.parquet.hadoop.util.ContextUtil.<clinit>(ContextUtil.java:71)
>>>> > at org.apache.parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:298)
>>>> > at org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.createInputSplits(HadoopInputFormatBase.java:166)
>>>> > at org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.createInputSplits(HadoopInputFormatBase.java:56)
>>>> > at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:156)
>>>> > ... 21 more
>>>> >
>>>> > With the 1.1.1 RC1, the exception disappeared.
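>>>> >
>>>> > For background on that error (my reading, not stated in the thread):
>>>> > `IncompatibleClassChangeError: Implementing class` is the typical symptom
>>>> > of mixing Hadoop major versions, since
>>>> > `org.apache.hadoop.mapreduce.TaskAttemptContext` is a concrete class in
>>>> > Hadoop 1 but an interface in Hadoop 2, and Parquet's `ContextUtil` trips
>>>> > over the mismatch during static initialization. A hypothetical reflective
>>>> > probe for which API line a classpath actually carries:

```java
public class HadoopApiProbe {

    // TaskAttemptContext changed from a class (Hadoop 1) to an interface
    // (Hadoop 2); reflection reveals which variant the classpath provides.
    static String probe() {
        try {
            Class<?> ctx = Class.forName("org.apache.hadoop.mapreduce.TaskAttemptContext");
            return ctx.isInterface() ? "Hadoop 2 API" : "Hadoop 1 API";
        } catch (ClassNotFoundException e) {
            return "no Hadoop on classpath";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

>>>> > Run against a cluster classpath, this reports which Hadoop line is
>>>> > actually present, which can then be compared with what the application
>>>> > artifacts were built against.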
>>>> >
>>>> >
>>>> >
>>>> > On Wed, Aug 10, 2016 at 9:24 AM, Till Rohrmann <trohrm...@apache.org> wrote:
>>>> >
>>>> >> +1 from my side as well.
>>>> >>
>>>> >> On Tue, Aug 9, 2016 at 9:01 PM, Stephan Ewen <se...@apache.org> wrote:
>>>> >>
>>>> >> > +1
>>>> >> >
>>>> >> > This is a crucial fix and the released sources are actually still the
>>>> >> > same, so the reduced voting time should be okay.
>>>> >> >
>>>> >> > On Tue, Aug 9, 2016 at 8:24 PM, Henry Saputra <henry.sapu...@gmail.com> wrote:
>>>> >> >
>>>> >> > > Official vote
>>>> >> > > +1 (binding)
>>>> >> > >
>>>> >> > > On Tuesday, August 9, 2016, Gyula Fóra <gyf...@apache.org> wrote:
>>>> >> > >
>>>> >> > > > +1 from me, this is a very important fix.
>>>> >> > > >
>>>> >> > > > Gyula
>>>> >> > > >
>>>> >> > > > On Tue, Aug 9, 2016 at 19:15, Vasiliki Kalavri <vasilikikala...@gmail.com> wrote:
>>>> >> > > >
>>>> >> > > > > On 9 August 2016 at 18:27, Ufuk Celebi <u...@apache.org> wrote:
>>>> >> > > > >
>>>> >> > > > > > Dear Flink community,
>>>> >> > > > > >
>>>> >> > > > > > Please vote on releasing the following candidate as Apache Flink
>>>> >> > > > > > version 1.1.1.
>>>> >> > > > > >
>>>> >> > > > > > *Important*: I would like to reduce the voting time to 24 hours
>>>> >> > > > > > (with a majority of at least three +1 PMC votes as usual). We
>>>> >> > > > > > discovered that the Maven artifacts published with version 1.1.0
>>>> >> > > > > > have dependency issues, which will affect users running on Hadoop 2
>>>> >> > > > > > infrastructure (like HDFS). Since Maven artifacts are immutable, we
>>>> >> > > > > > cannot override them and we have to publish a new version to fix this.
>>>> >> > > > > >
>>>> >> > > > > > The release script contained a bug, which resulted in no deployment
>>>> >> > > > > > of a Hadoop 1 specific version (1.1.0-hadoop1) and regular 1.1.0
>>>> >> > > > > > artifacts having a dependency on Hadoop 1 instead of Hadoop 2. I've
>>>> >> > > > > > updated the release announcement accordingly with a warning (see
>>>> >> > > > > > http://flink.apache.org/news/2016/08/08/release-1.1.0.html).
>>>> >> > > > > >
>>>> >> > > > > > Please indicate whether you are OK with the reduced voting time.
>>>> >> > > > > >
>>>> >> > > > >
>>>> >> > > > > +1 fine with me
>>>> >> > > > >
>>>> >> > > > >
>>>> >> > > > >
>>>> >> > > > > >
>>>> >> > > > > > The commit to be voted on:
>>>> >> > > > > > 61bfb36 (http://git-wip-us.apache.org/repos/asf/flink/commit/61bfb36)
>>>> >> > > > > >
>>>> >> > > > > > Branch:
>>>> >> > > > > > release-1.1.1-rc1
>>>> >> > > > > > (https://git1-us-west.apache.org/repos/asf/flink/repo?p=flink.git;a=shortlog;h=refs/heads/release-1.1.1-rc1)
>>>> >> > > > > >
>>>> >> > > > > > The release artifacts to be voted on can be found at:
>>>> >> > > > > > http://people.apache.org/~uce/flink-1.1.1-rc1/
>>>> >> > > > > >
>>>> >> > > > > > The release artifacts are signed with the key with fingerprint 9D403309:
>>>> >> > > > > > http://www.apache.org/dist/flink/KEYS
>>>> >> > > > > >
>>>> >> > > > > > The staging repository for this release can be found at:
>>>> >> > > > > > https://repository.apache.org/content/repositories/orgapacheflink-1101
>>>> >> > > > > >
>>>> >> > > > > > -------------------------------------------------------------
>>>> >> > > > > >
>>>> >> > > > > > The vote is open for the next 24 hours (see above for explanation)
>>>> >> > > > > > and passes if a majority of at least three +1 PMC votes are cast.
>>>> >> > > > > >
>>>> >> > > > > > The vote ends on Wednesday, August 10th, 2016.
>>>> >> > > > > >
>>>> >> > > > > > [ ] +1 Release this package as Apache Flink 1.1.1
>>>> >> > > > > > [ ] -1 Do not release this package, because ...
>>>> >> > > > > >
>>>> >> > > > >
>>>> >> > > >
>>>> >> > >
>>>> >> >
>>>> >>
>>>>
