I can confirm Aljoscha's findings concerning building Flink with Hadoop
version 2.6.0 using Maven 3.3.9. Aljoscha is right that it is indeed a
Maven 3.3 issue. If you build flink-runtime twice, then everything goes
through, because the shaded curator Flink dependency is installed during
the first run.
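
For reference, a sketch of the double-build workaround (assuming a Flink
source checkout; the exact goals and flags may differ, skipTests is just
to speed things up):

    mvn clean install -DskipTests -Dhadoop.version=2.6.0
    mvn clean install -DskipTests -Dhadoop.version=2.6.0

The second run then picks up the shaded curator artifact that the first
run installed into the local repository.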

On Tue, Aug 2, 2016 at 5:09 AM, Aljoscha Krettek <aljos...@apache.org>
wrote:

> @Ufuk: 3.3.9, that's probably it because that messes with the shading,
> right?
>
> @Stephan: Yes, even did a "rm -r .m2/repository". But the maven version is
> most likely the reason.
>
> On Mon, 1 Aug 2016 at 10:59 Stephan Ewen <se...@apache.org> wrote:
>
> > @Aljoscha: Have you made sure you have a clean maven cache (remove the
> > .m2/repository/org/apache/flink folder)?
> >
> > On Mon, Aug 1, 2016 at 5:56 PM, Aljoscha Krettek <aljos...@apache.org>
> > wrote:
> >
> > > I tried it again now. I did:
> > >
> > > rm -r .m2/repository
> > > mvn clean verify -Dhadoop.version=2.6.0
> > >
> > > It failed again, also with versions 2.6.1 and 2.6.3.
> > >
> > > On Mon, 1 Aug 2016 at 08:23 Maximilian Michels <m...@apache.org> wrote:
> > >
> > > > This is also a major issue for batch with off-heap memory and memory
> > > > preallocation turned off:
> > > > https://issues.apache.org/jira/browse/FLINK-4094
> > > > Not hard to fix though as we simply need to reliably clear the direct
> > > > memory instead of relying on garbage collection. Another possible fix
> > > > is to maintain memory pools independently of the preallocation mode. I
> > > > think this is fine because preallocation:false suggests that no memory
> > > > will be preallocated, but not that memory will be freed once acquired.
> > > >
> > >
> >
>
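
Regarding the FLINK-4094 point above, here is a minimal sketch of what
"reliably clear the direct memory instead of relying on garbage
collection" could look like. This is not Flink's actual code; the class
and helper names are hypothetical, and it assumes the Java-8-era internal
sun.nio.ch.DirectBuffer / sun.misc.Cleaner API:

    import java.nio.ByteBuffer;

    public class DirectMemoryRelease {

        // Hypothetical helper, not Flink's actual code: frees a direct
        // buffer's native memory immediately instead of waiting for GC.
        static void free(ByteBuffer buffer) {
            if (buffer.isDirect()) {
                ((sun.nio.ch.DirectBuffer) buffer).cleaner().clean();
            }
        }

        public static void main(String[] args) {
            ByteBuffer segment = ByteBuffer.allocateDirect(32 * 1024);
            // ... use the segment ...
            free(segment); // off-heap memory is returned here, not at GC time
        }
    }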
