This vote has been cancelled in favour of RC2.

On Tue, Aug 2, 2016 at 1:51 PM, Stephan Ewen <se...@apache.org> wrote:
> @Ufuk - I agree, this looks quite dubious.
>
> Need to resolve that before proceeding with the release...
>
>
> On Tue, Aug 2, 2016 at 1:45 PM, Ufuk Celebi <u...@apache.org> wrote:
>
>> I just saw that we changed the behaviour of ListState and
>> FoldingState. They used to return the default value given to the
>> state descriptor, but were changed in [1] to return null.
>> Furthermore, ValueState still returns the default value instead of
>> null. Gyula noticed another inconsistency for GenericListState and
>> GenericFoldingState in [2].
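>>
>> To make the difference concrete, here is a rough sketch (hypothetical
>> user code inside a rich function; constructor signatures from memory,
>> so they may not match the RC exactly):
>>
>>   ValueStateDescriptor<Long> valueDesc =
>>       new ValueStateDescriptor<>("count", Long.class, 0L);
>>   ValueState<Long> count = getRuntimeContext().getState(valueDesc);
>>   count.value();  // still returns the default (0L) when nothing is set
>>
>>   ListStateDescriptor<Long> listDesc =
>>       new ListStateDescriptor<>("events", Long.class);
>>   ListState<Long> events = getRuntimeContext().getListState(listDesc);
>>   events.get();   // used to fall back to a default, now returns null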
>>
>> The state interfaces are annotated with @PublicEvolving, so
>> technically it should be OK to change this, but I wanted to double
>> check that everyone is aware of it. Do we want to keep the new
>> behaviour, or should we revert it?
>>
>> – Ufuk
>>
>> [1]
>> https://github.com/apache/flink/commit/12bf7c1a0b81d199085fe874c64763c51a93b3bf#diff-2c622001cff86abb3e36e6621d6f73ad
>> [2] https://issues.apache.org/jira/browse/FLINK-4275
>>
>> On Tue, Aug 2, 2016 at 1:37 PM, Maximilian Michels <m...@apache.org> wrote:
>> > I agree with Ufuk and Stephan that we could carry forward most of the
>> > testing if we only include the hash function fix in the new RC. There
>> > are some other minor fixes we could merge as well, but they are
>> > involved enough that they would set us back to redoing the testing.
>> > So +1 for a new RC with the hash function fix.
>> >
>> > On Tue, Aug 2, 2016 at 12:35 PM, Stephan Ewen <se...@apache.org> wrote:
>> >> +1 from my side
>> >>
>> >> Create a new RC that differs only in the hash function commit.
>> >> I would support carrying the vote thread forward (extending it by one
>> >> additional day), because virtually all test results should apply to
>> >> the new RC as well.
>> >>
>> >> We certainly need to redo:
>> >>   - signature validation
>> >>   - Build & integration tests (that should catch any potential
>> >>     error caused by a change of hash function)
>> >>
>> >> That is pretty lightweight; it should be doable within a day.
>> >>
>> >>
>> >> On Tue, Aug 2, 2016 at 10:43 AM, Ufuk Celebi <u...@apache.org> wrote:
>> >>
>> >>> Dear community,
>> >>>
>> >>> I would like to vote +1, but during testing I noticed that we should
>> >>> have reverted the FLINK-4154 commit (the revert of the murmur hash
>> >>> correction) for this release.
>> >>>
>> >>> We had a wrong murmur hash implementation in 1.0, which was fixed for
>> >>> 1.1. We reverted that fix because we thought that it broke savepoint
>> >>> compatibility between 1.0 and 1.1; that revert is part of RC1. It
>> >>> turns out, though, that there are other problems with savepoint
>> >>> compatibility that are independent of the hash function. Therefore I
>> >>> would like to revert the revert (restoring the hash fix) and create a
>> >>> new RC with only this extra commit, extending the vote by one day.
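>> >>>
>> >>> For context, the compatibility concern is roughly the following (a
>> >>> simplified sketch, not Flink's actual code): the partition for a key
>> >>> is derived from its hash, so changing the hash function moves keys to
>> >>> partitions other than the ones whose state a savepoint holds.
>> >>>
>> >>>   // Simplified sketch: murmur3-style finalizer applied to the
>> >>>   // key's hash code.
>> >>>   static int murmur(int h) {
>> >>>       h ^= h >>> 16;
>> >>>       h *= 0x85ebca6b;
>> >>>       h ^= h >>> 13;
>> >>>       h *= 0xc2b2ae35;
>> >>>       h ^= h >>> 16;
>> >>>       return h;
>> >>>   }
>> >>>
>> >>>   static int partitionFor(Object key, int parallelism) {
>> >>>       // Masking keeps the value non-negative. A corrected hash can
>> >>>       // send the same key to a different partition than the old
>> >>>       // (buggy) hash did, so restored state would no longer be
>> >>>       // co-located with its keys.
>> >>>       return (murmur(key.hashCode()) & 0x7fffffff) % parallelism;
>> >>>   }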
>> >>>
>> >>> Would you be OK with this? Most testing results should be applicable
>> >>> to RC2, too.
>> >>>
>> >>> I ran the following tests:
>> >>>
>> >>> + Check checksums and signatures
>> >>> + Verify no binaries in source release
>> >>> + Build (clean verify) with default Hadoop version
>> >>> + Build (clean verify) with Hadoop 2.6.1
>> >>> + Checked build for Scala 2.11
>> >>> + Checked all POMs
>> >>> + Read README.md
>> >>> + Examined OUT and LOG files
>> >>> + Checked paths with spaces (found a non-blocking issue with the YARN CLI)
>> >>> + Checked local, cluster mode, and multi-node cluster
>> >>> + Tested HDFS split assignment
>> >>> + Tested bin/flink command line
>> >>> + Tested recovery (master and worker failure) in standalone mode with
>> >>> RocksDB and HDFS
>> >>> + Tested Scala/SBT giter8 template
>> >>> + Tested Metrics (user defined metrics, multiple JMX reporters, JM
>> >>> metrics, user defined reporter)
>> >>>
>> >>> – Ufuk
>> >>>
>> >>>
>> >>> On Tue, Aug 2, 2016 at 10:13 AM, Till Rohrmann <trohrm...@apache.org>
>> >>> wrote:
>> >>> > I can confirm Aljoscha's findings concerning building Flink with
>> >>> > Hadoop version 2.6.0 using Maven 3.3.9. Aljoscha is right that it
>> >>> > is indeed a Maven 3.3 issue. If you build flink-runtime twice,
>> >>> > everything goes through, because the shaded curator Flink
>> >>> > dependency is installed during the first run.
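>> >>> >
>> >>> > If I remember correctly, the usual workaround for the Maven 3.3
>> >>> > shading problem is to build in two steps, roughly like this
>> >>> > (untested here, so treat it as an assumption):
>> >>> >
>> >>> > mvn clean install -DskipTests
>> >>> > cd flink-dist
>> >>> > mvn clean install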
>> >>> >
>> >>> > On Tue, Aug 2, 2016 at 5:09 AM, Aljoscha Krettek <aljos...@apache.org>
>> >>> > wrote:
>> >>> >
>> >>> >> @Ufuk: 3.3.9, that's probably it, because that messes with the
>> >>> >> shading, right?
>> >>> >>
>> >>> >> @Stephan: Yes, I even did a "rm -r .m2/repository". But the Maven
>> >>> >> version is most likely the reason.
>> >>> >>
>> >>> >> On Mon, 1 Aug 2016 at 10:59 Stephan Ewen <se...@apache.org> wrote:
>> >>> >>
>> >>> >> > @Aljoscha: Have you made sure you have a clean Maven cache
>> >>> >> > (remove the .m2/repository/org/apache/flink folder)?
>> >>> >> >
>> >>> >> > On Mon, Aug 1, 2016 at 5:56 PM, Aljoscha Krettek <aljos...@apache.org>
>> >>> >> > wrote:
>> >>> >> >
>> >>> >> > > I tried it again now. I did:
>> >>> >> > >
>> >>> >> > > rm -r .m2/repository
>> >>> >> > > mvn clean verify -Dhadoop.version=2.6.0
>> >>> >> > >
>> >>> >> > > It failed again, also with Hadoop versions 2.6.1 and 2.6.3.
>> >>> >> > >
>> >>> >> > > On Mon, 1 Aug 2016 at 08:23 Maximilian Michels <m...@apache.org>
>> >>> >> > > wrote:
>> >>> >> > >
>> >>> >> > > > This is also a major issue for batch jobs with off-heap
>> >>> >> > > > memory and memory preallocation turned off:
>> >>> >> > > > https://issues.apache.org/jira/browse/FLINK-4094
>> >>> >> > > > It is not hard to fix, though: we simply need to reliably
>> >>> >> > > > clear the direct memory instead of relying on garbage
>> >>> >> > > > collection. Another possible fix is to maintain the memory
>> >>> >> > > > pools independently of the preallocation mode. I think that
>> >>> >> > > > is fine, because preallocation:false suggests that no memory
>> >>> >> > > > will be preallocated, not that memory will be freed once
>> >>> >> > > > acquired.
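>> >>> >> > > >
>> >>> >> > > > For illustration, eagerly freeing a direct buffer could look
>> >>> >> > > > roughly like this (a sketch only; it relies on JDK-internal
>> >>> >> > > > API, sun.nio.ch.DirectBuffer on Java 8, so we could not
>> >>> >> > > > depend on it unconditionally):
>> >>> >> > > >
>> >>> >> > > >   // Sketch: free a direct buffer immediately instead of
>> >>> >> > > >   // waiting for the garbage collector to run.
>> >>> >> > > >   static void freeEagerly(java.nio.ByteBuffer buffer) {
>> >>> >> > > >       if (buffer.isDirect()) {
>> >>> >> > > >           ((sun.nio.ch.DirectBuffer) buffer).cleaner().clean();
>> >>> >> > > >       }
>> >>> >> > > >   }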
>> >>> >> > > >
>> >>> >> > >
>> >>> >> >
>> >>> >>
>> >>>
>>