The work on HADOOP-13070 and the ApplicationClassLoader is generic and goes
beyond YARN: it can be used in any JVM that runs Hadoop code. The current
use cases are MR containers, Hadoop's RunJar (as in "hadoop jar"), and the
YARN node manager auxiliary services. I'm not sure if that's what you were
asking, but I hope it helps.
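
For anyone who hasn't looked at it, here is a minimal sketch of the kind of
child-first isolation ApplicationClassLoader gives you. The system-class
prefixes, the user classpath, and com.example.UserJob below are illustrative
placeholders (not the real defaults), and the (classpath, parent,
systemClasses) constructor is the form as I remember it; please check
org.apache.hadoop.util.ApplicationClassLoader for the exact API.

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.util.ApplicationClassLoader;

public class IsolatedRunner {
  public static void main(String[] args) throws Exception {
    // Classes matching these prefixes are delegated to the parent (system)
    // classloader; everything else is resolved child-first from the user
    // classpath. This list is illustrative, not the shipped default.
    List<String> systemClasses =
        Arrays.asList("java.", "javax.", "org.apache.hadoop.");

    // Hypothetical user classpath; in MR containers and "hadoop jar" this
    // comes from the job configuration or the jar being launched.
    String userClasspath = "/path/to/user-job.jar";

    ClassLoader isolated = new ApplicationClassLoader(
        userClasspath, IsolatedRunner.class.getClassLoader(), systemClasses);

    // Load and invoke the user's main class inside the isolated loader, so
    // its dependencies cannot clash with Hadoop's own.
    Class<?> userMain = Class.forName("com.example.UserJob", true, isolated);
    userMain.getMethod("main", String[].class)
            .invoke(null, (Object) new String[] {});
  }
}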

Regards,
Sangjin

On Fri, Jul 22, 2016 at 9:16 AM, Sean Busbey <bus...@cloudera.com> wrote:

> My work on HADOOP-11804 *only* helps processes that sit outside of YARN. :)
>
> On Fri, Jul 22, 2016 at 10:48 AM, Allen Wittenauer
> <a...@effectivemachines.com> wrote:
> >
> > Does any of this work actually help processes that sit outside of YARN?
> >
> >> On Jul 21, 2016, at 12:29 PM, Sean Busbey <bus...@cloudera.com> wrote:
> >>
> >> Thanks for bringing this up! Big +1 on upgrading dependencies for 3.0.
> >>
> >> I have an updated patch for HADOOP-11804 ready to post this week. I've
> >> been updating HBase's master branch to try to make use of it, but I
> >> could use some other reviews.
> >>
> >> On Thu, Jul 21, 2016 at 4:30 AM, Tsuyoshi Ozawa <oz...@apache.org>
> wrote:
> >>> Hi developers,
> >>>
> >>> I'd like to discuss how to move forward with dependency management in
> >>> the Apache Hadoop trunk code, since there has been a lot of work on
> >>> updating dependencies going on in parallel. Summarizing the recent work
> >>> and activities:
> >>>
> >>> 0) Currently, we have merged the minimum dependency updates needed to
> >>> make Hadoop JDK-8 compatible (compilable and runnable on JDK 8).
> >>> 1) Beyond that, some people have suggested that we should update the
> >>> other dependencies on trunk (e.g. protobuf, netty, jackson, etc.).
> >>> 2) In parallel, Sangjin and Sean are working on classpath isolation:
> >>> HADOOP-13070, HADOOP-11804 and HADOOP-11656.
> >>>
> >>> The main problems we are trying to solve with the activities above are
> >>> as follows:
> >>>
> >>> * 1) tries to solve the dependency hell between user-level jars and
> >>> system (Hadoop)-level jars.
> >>> * 2) tries to solve the problem of updating old libraries.
> >>>
> >>> IIUC, 1) and 2) look unrelated, but they are in fact related. 2) tries
> >>> to separate the class loaders for client-side and server-side
> >>> dependencies in Hadoop, so we can change the policy for updating
> >>> libraries after doing 2). We can also decide which libraries can be
> >>> shaded after 2).
> >>>
> >>> Hence, IMHO, the straightforward path is to do 2) first. After that,
> >>> we can update both client-side and server-side dependencies based on a
> >>> new policy (maybe we should discuss what kinds of incompatibility are
> >>> acceptable and which are not).
> >>>
> >>> Thoughts?
> >>>
> >>> Thanks,
> >>> - Tsuyoshi
> >>
> >> --
> >> busbey
> >
>
> --
> busbey
>
