[jira] [Created] (YARN-8841) Analyze if ApplicationMasterService schedule tests can be applied to all scheduler types

2018-10-02 Thread Szilard Nemeth (JIRA)
Szilard Nemeth created YARN-8841:


 Summary: Analyze if ApplicationMasterService schedule tests can be 
applied to all scheduler types
 Key: YARN-8841
 URL: https://issues.apache.org/jira/browse/YARN-8841
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: test
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth


This is a follow-up jira of YARN-8732.

1. testResourceTypes() checks all three schedulers: FIFO, Capacity Scheduler, 
and Fair Scheduler. How about splitting it into three classes, one per 
scheduler, even though that might mean some code duplication?

2. testUpdateTrackingUrl() currently runs with the Capacity Scheduler only. I 
think we should run it with all three schedulers. The same goes for 
testInvalidIncreaseDecreaseRequest(), in theory (if the other two schedulers do 
not support increase/decrease requests, let's keep it with the Capacity 
Scheduler only).

3. All the unit tests in ApplicationMasterServiceTestBase are applicable to all 
three schedulers, but we currently run them with the FIFO scheduler only. We 
should probably enable them for the Capacity and Fair schedulers too.
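
If the base suite is enabled for all schedulers, the mechanics could look like 
the following plain-Java sketch. This is illustrative only: the harness, class 
name, and test body below are stand-ins, not Hadoop test code; only the three 
scheduler class names are real YARN classes.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of driving one test body against every scheduler type, as
// proposed for ApplicationMasterServiceTestBase. Not Hadoop code.
public class SchedulerMatrixSketch {

  static final List<String> SCHEDULERS = List.of(
      "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler",
      "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler",
      "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler");

  // Run the same test body once per scheduler, collecting pass/fail.
  static Map<String, Boolean> runForAllSchedulers(Predicate<String> testBody) {
    Map<String, Boolean> results = new LinkedHashMap<>();
    for (String scheduler : SCHEDULERS) {
      results.put(scheduler, testBody.test(scheduler));
    }
    return results;
  }

  public static void main(String[] args) {
    // Stand-in test body; the real one would configure the RM with the
    // given scheduler class and exercise ApplicationMasterService.
    Map<String, Boolean> results =
        runForAllSchedulers(s -> s.endsWith("Scheduler"));
    System.out.println(results.size());  // prints 3
  }
}
```

In the real suite this would likely be a JUnit parameterized runner or three 
thin subclasses of the test base, but the shape of the loop is the same.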



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: Resourcemanager Failing on OpenJDK11

2018-10-02 Thread Akira Ajisaka
Hi Jeremiah,

I wrote a patch to fix this issue in
https://issues.apache.org/jira/browse/HADOOP-15775

There is also a wiki to document the progress of Java 9, 10, and 11 support.
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+and+Java+9%2C+10%2C+11

There is currently no community consensus on a plan for Java 11 support;
however, I'd like to support Java 11 in the Apache Hadoop 3.3 release.

Thanks,
Akira
On Wed, Oct 3, 2018 at 4:14 Jeremiah Adams wrote:
>
> I am doing some testing of OpenJDK 11 with YARN, Kafka, and Samza. I found 
> that the YARN ResourceManager will not run on OpenJDK 11. I didn't see any 
> tasking in JIRA regarding Java 11.
>
> What are YARN's plans regarding OpenJDK 11 and the changes to Oracle's 
> support and release cadences? Is there an Epic or Stories regarding Java 11 
> that I can add this issue to?
>
>
> The issue with Resourcemanager is the WebAppContext failing:
>
>
> Caused by: java.lang.NoClassDefFoundError: javax/activation/DataSource
>
>
> The Activation package was bundled with the Java EE and CORBA modules slated 
> for removal: deprecated in Java 9, marked for removal, and removed in Java 11.

Re: Hadoop 3.2 Release Plan proposal

2018-10-02 Thread Sunil G
Thanks Robert and Haibo for quickly correcting this.
Sigh, I somehow missed one file while committing the change. Sorry for the
trouble.

- Sunil


Re: Hadoop 3.2 Release Plan proposal

2018-10-02 Thread Robert Kanter
Looks like there are two that weren't updated:
>> [115] 16:32 : hadoop-common (trunk) :: grep "3.2.0-SNAPSHOT" . -r
--include=pom.xml
./hadoop-project/pom.xml:
3.2.0-SNAPSHOT
./pom.xml:3.2.0-SNAPSHOT

I've just pushed in an addendum commit to fix those.
In the future, please make sure to do a sanity compile when updating poms.

thanks
- Robert


[jira] [Created] (YARN-8840) Add missing cleanupSSLConfig() call for TestTimelineClient test

2018-10-02 Thread Aki Tanaka (JIRA)
Aki Tanaka created YARN-8840:


 Summary: Add missing cleanupSSLConfig() call for 
TestTimelineClient test
 Key: YARN-8840
 URL: https://issues.apache.org/jira/browse/YARN-8840
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test, timelineclient
Reporter: Aki Tanaka


Tests that set up SSL configs can leave conf files lingering unless they are 
cleaned up via a {{KeyStoreTestUtil.cleanupSSLConfig}} call. The 
TestTimelineClient test is missing this call.

If the cleanup method is not called explicitly, a modified ssl-client.xml is 
left in {{test-classes}}, which might affect subsequent test cases.

 

There was a similar report in HDFS-11042, but it looks like we need to fix the 
TestTimelineClient test too.

 
{code:java}
$ mvn test -Dtest=TestTimelineClient
$ find .|grep ssl-client.xml$
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-classes/ssl-client.xml
$ cat 
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-classes/ssl-client.xml

<configuration>
<property><name>ssl.client.truststore.reload.interval</name><value>1000</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.truststore.location</name><value>/Users/tanaka/work/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/trustKS.jks</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.keystore.keypassword</name><value>clientP</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.keystore.location</name><value>/Users/tanaka/work/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/clientKS.jks</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.truststore.password</name><value>trustP</value><final>false</final><source>programmatically</source></property>
<property><name>ssl.client.keystore.password</name><value>clientP</value><final>false</final><source>programmatically</source></property>
</configuration>
{code}
 

After applying this patch, the modified ssl-client.xml is no longer left behind 
after the test run.
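
As a rough illustration of the cleanup pattern the patch adds (a self-contained 
stand-in sketch, not the actual KeyStoreTestUtil API; the helper names and file 
handling below are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: whatever the SSL test setup writes must be removed again in
// teardown, mirroring KeyStoreTestUtil.setupSSLConfig()/cleanupSSLConfig().
public class SslConfigCleanupSketch {

  static Path setupSslConfig(Path dir) throws IOException {
    // Stand-in for the setup step: writes a modified ssl-client.xml.
    Path conf = dir.resolve("ssl-client.xml");
    Files.writeString(conf, "<configuration/>");
    return conf;
  }

  static void cleanupSslConfig(Path conf) throws IOException {
    // Stand-in for the cleanup step: deletes the file so it cannot
    // leak into subsequent tests' classpath.
    Files.deleteIfExists(conf);
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("test-classes");
    Path conf = setupSslConfig(dir);
    try {
      // ... the TestTimelineClient test body would run here ...
    } finally {
      cleanupSslConfig(conf);  // the call that was missing
    }
    System.out.println(Files.exists(conf));  // prints false
  }
}
```

In JUnit this cleanup would typically live in an @After/@AfterClass method so 
it runs even when the test body throws.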






Resourcemanager Failing on OpenJDK11

2018-10-02 Thread Jeremiah Adams
I am doing some testing of OpenJDK 11 with YARN, Kafka, and Samza. I found that 
the YARN ResourceManager will not run on OpenJDK 11. I didn't see any tasking 
in JIRA regarding Java 11.

What are YARN's plans regarding OpenJDK 11 and the changes to Oracle's support 
and release cadences? Is there an Epic or Stories regarding Java 11 that I can 
add this issue to?


The issue with Resourcemanager is the WebAppContext failing:


Caused by: java.lang.NoClassDefFoundError: javax/activation/DataSource


The Activation package was bundled with the Java EE and CORBA modules slated 
for removal: deprecated in Java 9, marked for removal, and removed in Java 11.
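
A minimal self-contained probe (purely illustrative, not Hadoop code) showing 
how the removal surfaces: the lookup succeeds on Java 8, where the class ships 
with the JDK, but fails on Java 11 unless a javax.activation API jar is added 
back to the classpath explicitly.

```java
// Probe for the class whose absence breaks the RM webapp on Java 11.
public class ActivationProbe {
  public static void main(String[] args) {
    try {
      Class.forName("javax.activation.DataSource");
      System.out.println("javax.activation.DataSource: present");
    } catch (ClassNotFoundException e) {
      // Inside the webapp this surfaces as the NoClassDefFoundError below.
      System.out.println("javax.activation.DataSource: missing");
    }
  }
}
```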



Output from the logs:




2018-10-02 07:36:37,410 WARN org.eclipse.jetty.webapp.WebAppContext: Failed 
startup of context 
o.e.j.w.WebAppContext@43d3aba5{/,file:///private/var/folders/9y/92nwpmbd6pjf4m68mkcw29z4gn/T/jetty-0.0.0.0-8042-node-_-any-10842369110863142525.dir/webapp/,UNAVAILABLE}{/node}

com.google.inject.ProvisionException: Unable to provision, see the following 
errors:


1) Error injecting constructor, java.lang.NoClassDefFoundError: 
javax/activation/DataSource

 at 
org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver.(JAXBContextResolver.java:52)

 at 
org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer$NMWebApp.setup(WebServer.java:153)

 while locating 
org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver


1 error

   at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1025)

   at 
com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1051)

   at 
com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory$GuiceInstantiatedComponentProvider.getInstance(GuiceComponentProviderFactory.java:345)

   at 
com.sun.jersey.core.spi.component.ioc.IoCProviderFactory$ManagedSingleton.(IoCProviderFactory.java:202)

   at 
com.sun.jersey.core.spi.component.ioc.IoCProviderFactory.wrap(IoCProviderFactory.java:123)

   at 
com.sun.jersey.core.spi.component.ioc.IoCProviderFactory._getComponentProvider(IoCProviderFactory.java:116)

   at 
com.sun.jersey.core.spi.component.ProviderFactory.getComponentProvider(ProviderFactory.java:153)

   at 
com.sun.jersey.core.spi.component.ProviderServices.getComponent(ProviderServices.java:278)

   at 
com.sun.jersey.core.spi.component.ProviderServices.getProviders(ProviderServices.java:151)

   at 
com.sun.jersey.core.spi.factory.ContextResolverFactory.init(ContextResolverFactory.java:83)

   at 
com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1332)

   at 
com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)

   at 
com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)

   at 
com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)

   at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)

   at 
com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:795)

   at 
com.sun.jersey.guice.spi.container.servlet.GuiceContainer.initiate(GuiceContainer.java:121)

   at 
com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:339)

   at 
com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:605)

   at 
com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:207)

   at 
com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)

   at 
com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:744)

   at 
com.google.inject.servlet.FilterDefinition.init(FilterDefinition.java:112)

   at 
com.google.inject.servlet.ManagedFilterPipeline.initPipeline(ManagedFilterPipeline.java:99)

   at com.google.inject.servlet.GuiceFilter.init(GuiceFilter.java:220)

   at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)

   at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873)

   at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)

   at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406)

   at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368)

   at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)

   at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)

   at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522)

   at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)

   at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)

   at 

Re: Hadoop 3.2 Release Plan proposal

2018-10-02 Thread Aaron Fabbri
Trunk is not building for me. Did you miss a 3.2.0-SNAPSHOT in the
top-level pom.xml?



Re: Hadoop 3.2 Release Plan proposal

2018-10-02 Thread Sunil G
Hi All

As mentioned in an earlier mail, I have cut branch-3.2 and reset trunk to
3.3.0-SNAPSHOT. I will share the RC details soon, once all necessary
patches are pulled into branch-3.2.

Thank You
- Sunil


On Mon, Sep 24, 2018 at 2:00 PM Sunil G  wrote:

> Hi All
>
> We are now down to the last Blocker and HADOOP-15407 is merged to trunk.
> Thanks for the support.
>
> *Plan for RC*
> 3.2 branch cut and reset trunk : *25th Tuesday*
> RC0 for 3.2: *28th Friday*
>
> Thank You
> Sunil
>
>
> On Mon, Sep 17, 2018 at 3:21 PM Sunil G  wrote:
>
>> Hi All
>>
>> We are down to 3 Blockers and 4 Criticals now. Thanks to all of you for
>> helping with these. I am following up on these tickets; once they're closed,
>> we will cut the 3.2 branch.
>>
>> Thanks
>> Sunil Govindan
>>
>>
>> On Wed, Sep 12, 2018 at 5:10 PM Sunil G  wrote:
>>
>>> Hi All,
>>>
>>> Inline with the original 3.2 communication proposal dated 17th July
>>> 2018, I would like to provide more updates.
>>>
>>> We are approaching previously proposed code freeze date (September 14,
>>> 2018). So I would like to cut 3.2 branch on 17th Sept and point existing
>>> trunk to 3.3 if there are no issues.
>>>
>>> *Current Release Plan:*
>>> Feature freeze date : all features to merge by September 7, 2018.
>>> Code freeze date : blockers/critical only, no improvements and
>>> blocker/critical bug-fixes September 14, 2018.
>>> Release date: September 28, 2018
>>>
>>> Any critical/blocker tickets targeted to 3.2.0 will need to be
>>> backported to branch-3.2 after the branch cut.
>>>
>>> Here's an updated 3.2.0 feature status:
>>>
>>> 1. Merged & Completed features:
>>>
>>> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning
>>> workloads Initial cut.
>>> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>>> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity
>>> Scheduler.
>>> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
>>> and CLI.
>>> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN.
>>> - (Inigo) HDFS-12615: Router-based HDFS federation. Improvement works.
>>>
>>> 2. Features close to finish:
>>>
>>> - (Steve) S3Guard Phase III. Close to commit.
>>> - (Steve) S3a phase V. Close to commit.
>>> - (Steve) Support Windows Azure Storage. Close to commit.
>>>
>>> 3. Tentative/Cancelled features for 3.2:
>>> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from
>>> ATSv2. Patch in progress.
>>> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to
>>> be done before Aug 2018.
>>> - (Eric) YARN-7129: Application Catalog for YARN applications.
>>> Challenging as more discussions are on-going.
>>>
>>> *Summary of 3.2.0 issues status:*
>>> 19 Blocker and Critical issues [1] are open, I am following up with
>>> owners to get status on each of them to get in by Code Freeze date.
>>>
>>> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
>>> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
>>> BY priority DESC
>>>
>>> Thanks,
>>> Sunil
>>>
>>>
>>>
>>> On Thu, Aug 30, 2018 at 9:59 PM Sunil G  wrote:
>>>
 Hi All,

 Inline with earlier communication dated 17th July 2018, I would like to
 provide some updates.

 We are approaching previously proposed code freeze date (Aug 31).

 The merge discussion/vote for one critical feature, Node Attributes, is
 ongoing. A few other Blocker bugs also need a bit more time. Given this, I
 suggest pushing the feature/code freeze out by two more weeks to accommodate
 these JIRAs.

 Proposed updated plan, in line with this:
 Feature freeze date : all features to merge by September 7, 2018.
 Code freeze date : blockers/critical only, no improvements and
  blocker/critical bug-fixes September 14, 2018.
 Release date: September 28, 2018

 If you have any feature branches targeted to 3.2.0, please reply to
 this email thread.

 *Here's an updated 3.2.0 feature status:*

 1. Merged & Completed features:

 - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning
 workloads. Initial cut.
 - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
 - (Sunil) YARN-7494: Multi Node scheduling support in Capacity
 Scheduler.
 - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service
 API and CLI.

 2. Features close to finish:

 - (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Merge/Vote
 Ongoing.
 - (Rohith) YARN-5742: Serve aggregated logs of historical apps from
 ATSv2. Patch in progress.
 - (Virajit) HDFS-12615: Router-based HDFS federation. Improvement works.
 - (Steve) S3Guard Phase III, S3a phase V, Support Windows Azure
 Storage. In progress.

 3. Tentative features:

 - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to
 be done before Aug 2018.
 - (Eric) YARN-7129: Application Catalog for YARN applications.
 Challenging as more discussions are on-going.

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/

[Oct 1, 2018 8:20:17 AM] (nanda) HDDS-325. Add event watcher for delete blocks 
command. Contributed by
[Oct 1, 2018 6:21:26 PM] (ajay) HDDS-557. DeadNodeHandler should handle 
exception from
[Oct 1, 2018 8:12:38 PM] (gifuma) YARN-8760. [AMRMProxy] Fix concurrent 
re-register due to YarnRM failover
[Oct 1, 2018 8:16:08 PM] (bharat) HDDS-525. Support virtual-hosted style URLs. 
Contributed by Bharat
[Oct 1, 2018 9:46:42 PM] (haibochen) YARN-8621. Add test coverage of custom 
Resource Types for the
[Oct 1, 2018 10:04:20 PM] (bharat) HDDS-562. Create acceptance test to test aws 
cli with the s3 gateway.
[Oct 2, 2018 12:49:48 AM] (tasanuma) HDFS-13943. [JDK10] Fix javadoc errors in 
hadoop-hdfs-client module.
[Oct 2, 2018 1:43:14 AM] (yqlin) HDFS-13768. Adding replicas to volume map 
makes DataNode start slowly.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String,
 int, String) concatenates strings using + in a loop At 
YarnServiceUtils.java:using + in a loop At YarnServiceUtils.java:[line 123] 
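For context on the warning above: concatenating strings with + inside a loop allocates a new String on every iteration, which is what FindBugs flags in getComponentArrayJson. The sketch below is a hypothetical helper illustrating the StringBuilder-based fix for that pattern; it is not the actual YarnServiceUtils code, and the method name and output shape are made up for the example.

```java
// Illustrative only: building a comma-separated array string with
// StringBuilder instead of repeated + concatenation in a loop.
// (Hypothetical helper; not the real YarnServiceUtils method.)
public class ConcatDemo {
    static String componentArray(String name, int count) {
        // StringBuilder appends in place, avoiding a new String
        // allocation on every loop iteration.
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < count; i++) {
            if (i > 0) {
                sb.append(',');
            }
            sb.append('"').append(name).append('-').append(i).append('"');
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        // Prints ["worker-0","worker-1","worker-2"]
        System.out.println(componentArray("worker", 3));
    }
}
```

This is the standard remedy FindBugs suggests for this warning class; the real fix in the submarine module would apply the same pattern to the flagged loop at line 123.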

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.yarn.server.nodemanager.containermanager.TestNMProxy 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]