[jira] [Created] (YARN-8964) UI2 should use clusters/{cluster name} for all ATSv2 REST APIs

2018-10-30 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-8964:
---

 Summary: UI2 should use clusters/{cluster name} for all ATSv2 REST 
APIs
 Key: YARN-8964
 URL: https://issues.apache.org/jira/browse/YARN-8964
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Rohith Sharma K S


UI2 makes REST calls to the TimelineReader without a cluster name. It is advisable to 
make the calls with clusters/{cluster name} so that a remote TimelineReader 
daemon can serve multiple clusters.
*Example*:
*Current*: /ws/v2/timeline/flows/
*Change*: /ws/v2/timeline/*clusters/{cluster name}*/flows/


*yarn.resourcemanager.cluster-id* is configured per cluster, so this property 
can be used to obtain the cluster-id.
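
A minimal sketch of the path construction, assuming the standard 
YarnConfiguration.RM_CLUSTER_ID key ("yarn.resourcemanager.cluster-id"); the 
buildFlowsUrl helper is hypothetical, not existing UI2 code:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TimelinePathSketch {
  // Hypothetical helper: builds the cluster-scoped flows path.
  static String buildFlowsUrl(Configuration conf) {
    String clusterId = conf.get(YarnConfiguration.RM_CLUSTER_ID);
    // Current: /ws/v2/timeline/flows/
    // Change:  /ws/v2/timeline/clusters/{cluster name}/flows/
    return "/ws/v2/timeline/clusters/" + clusterId + "/flows/";
  }
}
{code}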

cc: [~sunilg] [~akhilpb]






[jira] [Created] (YARN-8963) Add flag to disable interactive shell

2018-10-30 Thread Eric Yang (JIRA)
Eric Yang created YARN-8963:
---

 Summary: Add flag to disable interactive shell
 Key: YARN-8963
 URL: https://issues.apache.org/jira/browse/YARN-8963
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Eric Yang


For some production jobs, the application admin might choose to disable debugging 
to prevent developers or system admins from accessing the containers.  It would 
be nice to add an environment variable flag that disables the interactive shell 
at application submission time.
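
A hedged sketch of how such a flag might be consulted at shell-request time; the 
variable name YARN_CONTAINER_RUNTIME_DISABLE_SHELL is hypothetical, not a 
committed constant:
{code:java}
import java.util.Map;
import org.apache.hadoop.yarn.exceptions.YarnException;

// Hypothetical env var, set by the user at application submission.
Map<String, String> env = container.getLaunchContext().getEnvironment();
boolean shellDisabled = Boolean.parseBoolean(
    env.getOrDefault("YARN_CONTAINER_RUNTIME_DISABLE_SHELL", "false"));
if (shellDisabled) {
  throw new YarnException("Interactive shell is disabled for this application");
}
{code}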






[jira] [Created] (YARN-8962) Add ability to use interactive shell with normal yarn container

2018-10-30 Thread Eric Yang (JIRA)
Eric Yang created YARN-8962:
---

 Summary: Add ability to use interactive shell with normal yarn 
container
 Key: YARN-8962
 URL: https://issues.apache.org/jira/browse/YARN-8962
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Eric Yang


This task focuses on extending the interactive shell capability to YARN 
containers that do not use Docker.  This will improve debugging of MapReduce 
and Spark applications.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/

[Oct 29, 2018 3:53:10 AM] (wwei) YARN-8944.
[Oct 29, 2018 4:04:32 AM] (wwei) YARN-8915. Update the doc about the default 
value of
[Oct 29, 2018 1:45:01 PM] (elek) HDDS-573. Make VirtualHostStyleFilter port 
agnostic. Contributed by
[Oct 29, 2018 2:23:52 PM] (nanda) HDDS-728. Datanodes should use different 
ContainerStateMachine for each
[Oct 29, 2018 4:35:18 PM] (arp) HDDS-727. ozone.log is not getting created in 
logs directory.
[Oct 29, 2018 7:59:41 PM] (bharat) HDDS-743. S3 multi delete request should 
return XML header in quiet
[Oct 30, 2018 12:14:18 AM] (arp) HDDS-620. ozone.scm.client.address should be 
an optional setting.
[Oct 30, 2018 2:06:15 AM] (xiao) HDFS-14027. DFSStripedOutputStream should 
implement both hsync methods.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-registry 
   Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable) ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int) 
At RegistryDNS.java:ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int) 
At RegistryDNS.java:[line 900] 
   Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable) ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int) 
At RegistryDNS.java:ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int) 
At RegistryDNS.java:[line 926] 
   Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable) ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel,
 InetAddress, int) At RegistryDNS.java:ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel,
 InetAddress, int) At RegistryDNS.java:[line 850] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/diff-patch-shellcheck.txt
  [68K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/branch-findbugs-hadoop-common-project_hadoop-registry-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/942/artifact/out/branch-findbugs-hadoop-

[jira] [Created] (YARN-8961) [UI2] Flow Run End Time shows 'Invalid date'

2018-10-30 Thread Charan Hebri (JIRA)
Charan Hebri created YARN-8961:
--

 Summary: [UI2] Flow Run End Time shows 'Invalid date'
 Key: YARN-8961
 URL: https://issues.apache.org/jira/browse/YARN-8961
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Charan Hebri
 Attachments: Invalid_Date.png

End Time for Flow Runs is shown as *Invalid date* for runs that are in 
progress. It should be shown as *N/A*, just as it is for 'CPU VCores' and 
'Memory Used'. Relevant screenshot attached.
cc [~akhilpb]






[jira] [Created] (YARN-8960) Can't get submarine service status using the command of "yarn app -status" under security environment

2018-10-30 Thread Zac Zhou (JIRA)
Zac Zhou created YARN-8960:
--

 Summary: Can't get submarine service status using the command of 
"yarn app -status" under security environment
 Key: YARN-8960
 URL: https://issues.apache.org/jira/browse/YARN-8960
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zac Zhou
Assignee: Zac Zhou


After submitting a submarine job, we tried to get the service status using the 
following command:

yarn app -status ${service_name}

But we got the following error:

HTTP error code : 500

 

The stack trace in the ResourceManager log is:

ERROR org.apache.hadoop.yarn.service.webapp.ApiServer: Get service failed: {}
java.lang.reflect.UndeclaredThrowableException
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1748)
 at 
org.apache.hadoop.yarn.service.webapp.ApiServer.getServiceFromClient(ApiServer.java:800)
 at 
org.apache.hadoop.yarn.service.webapp.ApiServer.getService(ApiServer.java:186)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
 at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
 at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
 at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
 at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
 at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
 at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
 at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
 at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
 at 
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:89)
 at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
 at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
 at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:179)
 at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
 at 
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
 at 
com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
 at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
 at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
 at com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
 at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
 at 
org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
 at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
 at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
 at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1610)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
 at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
org.eclipse.jetty
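
For context: java.lang.reflect.UndeclaredThrowableException at the top of the 
trace typically means a dynamic proxy call inside the 
UserGroupInformation#doAs privileged action threw a checked exception its 
interface does not declare. A hedged sketch of the pattern only; serviceClient 
and getStatus are illustrative names, not the actual ApiServer code:
{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.service.api.records.Service;

UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
// If the call below goes through a reflective proxy and throws an undeclared
// checked exception, it can surface as UndeclaredThrowableException, which
// is consistent with the HTTP 500 above.
Service service = ugi.doAs(
    (PrivilegedExceptionAction<Service>) () -> serviceClient.getStatus(serviceName));
{code}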

Zeppelin integration with Hadoop {submarine}

2018-10-30 Thread liuxun
Hi,
Hadoop Submarine is the newest machine learning subproject in the Hadoop 3.2 
release. It lets Hadoop support TensorFlow, MXNet, Caffe, Spark, and other deep 
learning frameworks, providing a full-featured system for machine learning 
algorithm development, distributed model training, model management, and model 
publishing. Combined with Hadoop's intrinsic data storage and data processing 
capabilities, it enables data scientists to better mine the value of their data.

I was involved in the development of the Hadoop Submarine project, so I plan to 
add a Hadoop Submarine interpreter module to Zeppelin so that Zeppelin can 
support deep learning development. This is my design document; please take a 
look, and if you have any comments, feel free to leave them directly in the 
document. Thank you!

https://docs.google.com/document/d/16YN8Kjmxt1Ym3clx5pDnGNXGajUT36hzQxjaik1cP4A/edit?ts=5bc6bfdd
 




[jira] [Created] (YARN-8959) TestContainerResizing fails randomly

2018-10-30 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-8959:
--

 Summary: TestContainerResizing fails randomly
 Key: YARN-8959
 URL: https://issues.apache.org/jira/browse/YARN-8959
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt


org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testSimpleDecreaseContainer
{code}
testSimpleDecreaseContainer(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing)
  Time elapsed: 0.348 s  <<< FAILURE!
java.lang.AssertionError: expected:<1024> but was:<3072>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.checkUsedResource(TestContainerResizing.java:1011)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testSimpleDecreaseContainer(TestContainerResizing.java:210)
{code}
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testIncreaseContainerUnreservedWhenContainerCompleted
{code}
testIncreaseContainerUnreservedWhenContainerCompleted(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing)
  Time elapsed: 0.445 s  <<< FAILURE!
java.lang.AssertionError: expected:<1024> but was:<7168>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.checkUsedResource(TestContainerResizing.java:1011)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testIncreaseContainerUnreservedWhenContainerCompleted(TestContainerResizing.java:729)

{code}
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testExcessiveReservationWhenDecreaseSameContainer
{code}
testExcessiveReservationWhenDecreaseSameContainer(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing)
  Time elapsed: 0.321 s  <<< FAILURE!
java.lang.AssertionError: expected:<1024> but was:<2048>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.checkUsedResource(TestContainerResizing.java:1015)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testExcessiveReservationWhenDecreaseSameContainer(TestContainerResizing.java:623)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)

{code}
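
One common way to make assertions like checkUsedResource robust against 
scheduler races is to poll until the asynchronous processing settles instead of 
asserting immediately. A hedged sketch, assuming leafQueue is the queue object 
the test already holds (GenericTestUtils.waitFor is a real Hadoop test utility):
{code:java}
import org.apache.hadoop.test.GenericTestUtils;

// Poll every 100 ms, up to 10 s, for the used memory to reach the expectation.
GenericTestUtils.waitFor(
    () -> leafQueue.getUsedResources().getMemorySize() == 1024,
    100, 10000);
{code}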






[jira] [Created] (YARN-8958) Schedulable entities leak in fair ordering policy when recovering containers between remove app attempt and remove app

2018-10-30 Thread Tao Yang (JIRA)
Tao Yang created YARN-8958:
--

 Summary: Schedulable entities leak in fair ordering policy when 
recovering containers between remove app attempt and remove app
 Key: YARN-8958
 URL: https://issues.apache.org/jira/browse/YARN-8958
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.2.1
Reporter: Tao Yang
Assignee: Tao Yang


We found an NPE in ClientRMService#getApplications when querying apps with a 
specified queue. The cause is an app that can't be found via 
RMContextImpl#getRMApps (it is finished and has been swapped out of memory) but 
can still be queried from the fair ordering policy.

To reproduce the schedulable entities leak in fair ordering policy:
(1) create app1 and launch container1 on node1
(2) restart RM
(3) remove the app1 attempt; app1 is removed from the schedulable entities.
(4) recover container1; the state of container1 changes to COMPLETED, app1 is 
brought back into entitiesToReorder after the container is released, and then 
app1 is added back into the schedulable entities when the scheduler calls 
FairOrderingPolicy#getAssignmentIterator.
(5) remove app1

To solve this problem, we should make sure schedulableEntities can only be 
affected by adding or removing an app attempt; a new entity should not be added 
into schedulableEntities by the reordering process.
{code:java}
  protected void reorderSchedulableEntity(S schedulableEntity) {
//remove, update comparable data, and reinsert to update position in order
schedulableEntities.remove(schedulableEntity);
updateSchedulingResourceUsage(
  schedulableEntity.getSchedulingResourceUsage());
schedulableEntities.add(schedulableEntity);
  }
{code}
The code above can be improved as follows to make sure only an existing entity 
can be re-added into schedulableEntities.
{code:java}
  protected void reorderSchedulableEntity(S schedulableEntity) {
//remove, update comparable data, and reinsert to update position in order
boolean exists = schedulableEntities.remove(schedulableEntity);
updateSchedulingResourceUsage(
  schedulableEntity.getSchedulingResourceUsage());
if (exists) {
  schedulableEntities.add(schedulableEntity);
} else {
  LOG.info("Skip reordering non-existent schedulable entity: "
  + schedulableEntity.getId());
}
  }
{code}
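
A hedged regression-test sketch against the OrderingPolicy API used above; 
MockSchedulableEntity stands in for a real SchedulableEntity implementation, 
and the exact method names should be checked against the branch:
{code:java}
// Reproduce steps (3)-(5) at the policy level and assert there is no leak.
FairOrderingPolicy<MockSchedulableEntity> policy = new FairOrderingPolicy<>();
MockSchedulableEntity app1 = new MockSchedulableEntity();
policy.addSchedulableEntity(app1);
policy.removeSchedulableEntity(app1);   // step (3): remove app attempt
policy.containerReleased(app1, null);   // step (4): recovered container completes
policy.getAssignmentIterator();         // triggers the reordering path
Assert.assertEquals(0, policy.getNumSchedulableEntities()); // fails if app1 leaks back
{code}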


