[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2018-07-12 Thread Mohammad Kamrul Islam (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16542074#comment-16542074
 ] 

Mohammad Kamrul Islam commented on HADOOP-13363:


I would also prefer to move to v3, if there is no blocker.

Is there anyone working on this?

 

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Tsuyoshi Ozawa
>Priority: Major
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 .)  To avoid crazy 
> workarounds in the build environment, and because 2.5.0 is slowly 
> disappearing as a standard installable package even for Linux/x86, we need 
> to either upgrade, self-bundle, or do something else.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-7139) Allow appending to existing SequenceFiles

2015-12-07 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam reassigned HADOOP-7139:
-

Assignee: Mohammad Kamrul Islam  (was: Kanaka Kumar Avvaru)

> Allow appending to existing SequenceFiles
> -
>
> Key: HADOOP-7139
> URL: https://issues.apache.org/jira/browse/HADOOP-7139
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 1.0.0
>Reporter: Stephen Rose
>Assignee: Mohammad Kamrul Islam
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.2
>
> Attachments: HADOOP-7139-01.patch, HADOOP-7139-02.patch, 
> HADOOP-7139-03.patch, HADOOP-7139-04.patch, HADOOP-7139-05.patch, 
> HADOOP-7139-kt.patch, HADOOP-7139.patch, HADOOP-7139.patch, 
> HADOOP-7139.patch, HADOOP-7139.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7139) Allow appending to existing SequenceFiles

2015-12-07 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HADOOP-7139:
--
Assignee: Kanaka Kumar Avvaru  (was: Mohammad Kamrul Islam)

> Allow appending to existing SequenceFiles
> -
>
> Key: HADOOP-7139
> URL: https://issues.apache.org/jira/browse/HADOOP-7139
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 1.0.0
>Reporter: Stephen Rose
>Assignee: Kanaka Kumar Avvaru
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.2
>
> Attachments: HADOOP-7139-01.patch, HADOOP-7139-02.patch, 
> HADOOP-7139-03.patch, HADOOP-7139-04.patch, HADOOP-7139-05.patch, 
> HADOOP-7139-kt.patch, HADOOP-7139.patch, HADOOP-7139.patch, 
> HADOOP-7139.patch, HADOOP-7139.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11364) [Java 8] Over usage of virtual memory

2015-01-16 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14280923#comment-14280923
 ] 

Mohammad Kamrul Islam commented on HADOOP-11364:


Sorry [~jira.shegalov] for the late reply. The failure was coming from a distcp 
command, which uses 1 GB as mapreduce.map.memory.mb. I think distcp is a 
map-only job.

But in other cases, we used a higher memory.mb (2 GB) and got a similar 
exception with up to 4.2 GB of virtual memory.


 [Java 8] Over usage of virtual memory
 -

 Key: HADOOP-11364
 URL: https://issues.apache.org/jira/browse/HADOOP-11364
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 In our Hadoop 2 + Java 8 effort, we found a few jobs being killed by Hadoop 
 due to excessive virtual memory allocation, although the physical memory 
 usage is low.
 The most common error message is "Container [pid=??,containerID=container_??] 
 is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB 
 physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing 
 container."
 We see this problem for MR jobs as well as in Spark drivers/executors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11461) Namenode stdout log contains IllegalAccessException

2015-01-05 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-11461:
--

 Summary: Namenode stdout log contains IllegalAccessException
 Key: HADOOP-11461
 URL: https://issues.apache.org/jira/browse/HADOOP-11461
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


We frequently see the following exception in the NameNode stdout log file.

{noformat}
Nov 19, 2014 8:11:19 PM 
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
attachTypes
INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response
Nov 19, 2014 8:11:19 PM 
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 
resolve
SEVERE: null
java.lang.IllegalAccessException: Class 
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can 
not access a member of class javax.ws.rs.core.Response with modifiers 
"protected"
at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
at java.lang.Class.newInstance(Class.java:368)
at 
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467)
at 
com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181)
at 
com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81)
at 
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518)
at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:124)
at 
com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104)
at 
com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120)
at 
com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)

[jira] [Commented] (HADOOP-11461) Namenode stdout log contains IllegalAccessException

2015-01-05 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14265219#comment-14265219
 ] 

Mohammad Kamrul Islam commented on HADOOP-11461:


Would it be a good idea to upgrade the version to 1.10, 1.11, or something 
higher?

I am inclined to upgrade to 1.10 only.

Comments?

 Namenode stdout log contains IllegalAccessException
 ---

 Key: HADOOP-11461
 URL: https://issues.apache.org/jira/browse/HADOOP-11461
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 We frequently see the following exception in the NameNode stdout log file.
 {noformat}
 Nov 19, 2014 8:11:19 PM 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
 attachTypes
 INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response
 Nov 19, 2014 8:11:19 PM 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 
 resolve
 SEVERE: null
 java.lang.IllegalAccessException: Class 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can 
 not access a member of class javax.ws.rs.core.Response with modifiers 
 "protected"
 at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
 at java.lang.Class.newInstance(Class.java:368)
 at 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467)
 at 
 com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181)
 at 
 com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81)
 at 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518)
 at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:124)
 at 
 com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104)
 at 
 com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120)
 at 
 com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
 at 
 com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
 at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at 
 

[jira] [Commented] (HADOOP-11461) Namenode stdout log contains IllegalAccessException

2015-01-05 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14265201#comment-14265201
 ] 

Mohammad Kamrul Islam commented on HADOOP-11461:


Looks like it is a known Jersey bug in version 1.9.
Upgrading Jersey to 1.10 should resolve the issue 
(http://permalink.gmane.org/gmane.comp.java.jersey.user/10977).
Basically, the Jersey patch downgraded this message level from SEVERE to FINE.

Working on a patch to upgrade the version.

If needed, an alternative approach may be to hide this minor issue by turning 
off this log. A related reference is the Hadoop KMS patch at 
https://issues.apache.org/jira/browse/HADOOP-10433.
We can test this by adding 
log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
 to log4j.properties, as in the sketch below.
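
A minimal log4j.properties sketch (assuming the standard log4j 1.x property 
syntax used by the Hadoop daemons):
{noformat}
# Hypothetical log4j.properties snippet: silence the Jersey WADL
# generator logger entirely.
log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
{noformat}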

Related discussion at: https://issues.apache.org/jira/browse/HDFS-5333



 Namenode stdout log contains IllegalAccessException
 ---

 Key: HADOOP-11461
 URL: https://issues.apache.org/jira/browse/HADOOP-11461
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 We frequently see the following exception in the NameNode stdout log file.
 {noformat}
 Nov 19, 2014 8:11:19 PM 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator 
 attachTypes
 INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response
 Nov 19, 2014 8:11:19 PM 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 
 resolve
 SEVERE: null
 java.lang.IllegalAccessException: Class 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can 
 not access a member of class javax.ws.rs.core.Response with modifiers 
 "protected"
 at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
 at java.lang.Class.newInstance(Class.java:368)
 at 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467)
 at 
 com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181)
 at 
 com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81)
 at 
 com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518)
 at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:124)
 at 
 com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104)
 at 
 com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120)
 at 
 com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
 at 
 com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
 at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
 

[jira] [Created] (HADOOP-11364) [Java 8] Over usage of virtual memory

2014-12-08 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-11364:
--

 Summary: [Java 8] Over usage of virtual memory
 Key: HADOOP-11364
 URL: https://issues.apache.org/jira/browse/HADOOP-11364
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


In our Hadoop 2 + Java 8 effort, we found a few jobs being killed by Hadoop 
due to excessive virtual memory allocation, although the physical memory usage 
is low.

The most common error message is "Container [pid=??,containerID=container_??] 
is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB 
physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing container."

We see this problem for MR jobs as well as in Spark drivers/executors.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2014-12-08 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238669#comment-14238669
 ] 

Mohammad Kamrul Islam commented on HADOOP-11090:


I took another shortcut to build with Java 8 by disabling the Javadoc 
generation: I passed -Dmaven.javadoc.skip=true on the mvn command line.

However, we must resolve this properly, either by disabling doclint or by 
fixing the docs manually.
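
For reference, the workaround invocation would look something like this 
(illustrative; any module or profile flags are omitted):
{noformat}
# Hypothetical build command: skip Javadoc generation (and tests) under Java 8.
mvn clean install -DskipTests -Dmaven.javadoc.skip=true
{noformat}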



 [Umbrella] Support Java 8 in Hadoop
 ---

 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly 
 works with Java 8 is important for the Apache community.
   
 This JIRA is to track the issues/experiences encountered during the Java 8 
 migration. If you find a potential bug, please create a separate JIRA, either 
 as a sub-task or linked to this JIRA.
 If you find a Hadoop or JVM configuration tuning, you can create a JIRA as 
 well, or add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11364) [Java 8] Over usage of virtual memory

2014-12-08 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238716#comment-14238716
 ] 

Mohammad Kamrul Islam commented on HADOOP-11364:


My findings and quick resolutions:
By default, Java 8 allocates more virtual memory than Java 7. However, we can 
control the non-heap memory usage by limiting the maximum allowed values of 
some JVM parameters, such as -XX:ReservedCodeCacheSize=100M 
-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m.

For an M/R-based job (such as Pig, Hive, etc.), the user can pass the 
following JVM -XX parameters as part of mapreduce.reduce.java.opts or 
mapreduce.map.java.opts:
{noformat}
mapreduce.reduce.java.opts='-XX:ReservedCodeCacheSize=100M 
-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m -Xmx1536m -Xms512m 
-Djava.net.preferIPv4Stack=true'
{noformat}
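
For a job submitted through the generic options parser (assuming the job 
implements Tool), the same settings can also be passed on the command line, 
e.g. (jar and class names here are illustrative):
{noformat}
hadoop jar myjob.jar MyJobDriver \
  -Dmapreduce.map.java.opts='-XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m -Xmx1536m -Xms512m' \
  -Dmapreduce.reduce.java.opts='-XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m -Xmx1536m -Xms512m' \
  input_dir output_dir
{noformat}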

Similarly, for a Spark job, we need to pass the same parameters to the Spark 
AM/driver and executors. The Spark community is working on ways to pass these 
kinds of parameters more easily. In Spark 1.1.0, the user can pass them for 
spark-cluster based job submission as follows. For general job submission, the 
user has to wait until https://issues.apache.org/jira/browse/SPARK-4461 is 
released.
{noformat}
spark.driver.extraJavaOptions = -XX:ReservedCodeCacheSize=100M 
-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m
{noformat}

For Spark executors, pass the following:
{noformat} 
spark.executor.extraJavaOptions = -XX:ReservedCodeCacheSize=100M 
-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m
{noformat}

 These parameters can be set in conf/spark-defaults.conf as well.
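
For reference, a conf/spark-defaults.conf sketch combining both settings (same 
values as above):
{noformat}
# Hypothetical spark-defaults.conf entries
spark.driver.extraJavaOptions    -XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m
spark.executor.extraJavaOptions  -XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m
{noformat}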

 [Java 8] Over usage of virtual memory
 -

 Key: HADOOP-11364
 URL: https://issues.apache.org/jira/browse/HADOOP-11364
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 In our Hadoop 2 + Java 8 effort, we found a few jobs being killed by Hadoop 
 due to excessive virtual memory allocation, although the physical memory 
 usage is low.
 The most common error message is "Container [pid=??,containerID=container_??] 
 is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB 
 physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing 
 container."
 We see this problem for MR jobs as well as in Spark drivers/executors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11211) mapreduce.job.classloader.system.classes property behaves differently when the exclusion and inclusion order is different

2014-11-05 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14199711#comment-14199711
 ] 

Mohammad Kamrul Islam commented on HADOOP-11211:


+1.
I also concur with [~jira.shegalov]'s comment to include some description at 
the method level. Anyone who makes changes in the future will get the benefit.

 mapreduce.job.classloader.system.classes property behaves differently when 
 the exclusion and inclusion order is different
 -

 Key: HADOOP-11211
 URL: https://issues.apache.org/jira/browse/HADOOP-11211
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: hudson
Reporter: Yitong Zhou
Assignee: Yitong Zhou
 Attachments: HADOOP-11211.patch


 If we want to include package foo.bar.* but exclude all sub-packages named 
 foo.bar.tar.* in system classes, configuring 
 mapreduce.job.classloader.system.classes=foo.bar.,-foo.bar.tar. won't work; 
 foo.bar.tar will still be pulled in. But if we change the order to 
 mapreduce.job.classloader.system.classes=-foo.bar.tar.,foo.bar., then it 
 works.
 This bug is due to the implementation of ApplicationClassLoader#isSystemClass 
 in hadoop-common, where we simply return the matching result immediately when 
 the class name hits the first match (either positive or negative), as the 
 sketch below illustrates.
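
 A simplified sketch of that first-match-wins behavior (not the actual Hadoop 
 source; names are illustrative):
{code}
// Simplified sketch: the loop returns on the first matching entry, so with
// "foo.bar.,-foo.bar.tar." the class foo.bar.tar.X is treated as a system
// class because the positive entry "foo.bar." matches before "-foo.bar.tar.".
static boolean isSystemClass(String name, java.util.List<String> systemClasses) {
  for (String entry : systemClasses) {
    boolean positive = true;
    if (entry.startsWith("-")) {
      entry = entry.substring(1);
      positive = false;
    }
    if (name.startsWith(entry)) {
      return positive; // first match wins, regardless of later entries
    }
  }
  return false;
}
{code}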



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11139) Allow user to choose JVM for container execution

2014-09-29 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam resolved HADOOP-11139.

  Resolution: Implemented
Release Note: 
As mentioned in the patch of YARN-1964:
hadoop jar \
$HADOOP_INSTALLATION_DIR/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
teragen \
-Dmapreduce.map.env=JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 \
-Dyarn.app.mapreduce.am.env=JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 \
1000 teragen_out_dir

 Allow user to choose JVM for container execution
 

 Key: HADOOP-11139
 URL: https://issues.apache.org/jira/browse/HADOOP-11139
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Hadoop currently supports one JVM, defined through JAVA_HOME. 
 Since multiple JVMs (Java 6, 7, 8, 9) are active, it would be helpful to have 
 a user configuration for choosing a custom but supported JVM per job.
 In other words, the user would be able to choose the expected JVM just for 
 container execution, while the Hadoop services may be running on a different 
 JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11098) [JDK8] Max Non Heap Memory default changed between JDK7 and 8

2014-09-29 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam reassigned HADOOP-11098:
--

Assignee: Mohammad Kamrul Islam

 [JDK8] Max Non Heap Memory default changed between JDK7 and 8
 -

 Key: HADOOP-11098
 URL: https://issues.apache.org/jira/browse/HADOOP-11098
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
 Environment: JDK8
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam

 I noticed this because the NameNode UI shows "Max Non Heap Memory", which 
 after some digging I found correlates to MaxDirectMemorySize.
 JDK7
 {noformat}
 Heap Memory used 16.75 GB of 23 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 57.32 MB of 67.38 MB Committed Non Heap Memory. Max Non 
 Heap Memory is 130 MB. 
 {noformat}
 JDK8
 {noformat}
 Heap Memory used 3.02 GB of 7.65 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 103.12 MB of 104.41 MB Committed Non Heap Memory. Max Non 
 Heap Memory is -1 B. 
 {noformat}
 More information in first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11098) [JDK8] Max Non Heap Memory default changed between JDK7 and 8

2014-09-29 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HADOOP-11098:
---
Assignee: (was: Mohammad Kamrul Islam)

 [JDK8] Max Non Heap Memory default changed between JDK7 and 8
 -

 Key: HADOOP-11098
 URL: https://issues.apache.org/jira/browse/HADOOP-11098
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
 Environment: JDK8
Reporter: Travis Thompson

 I noticed this because the NameNode UI shows "Max Non Heap Memory", which 
 after some digging I found correlates to MaxDirectMemorySize.
 JDK7
 {noformat}
 Heap Memory used 16.75 GB of 23 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 57.32 MB of 67.38 MB Committed Non Heap Memory. Max Non 
 Heap Memory is 130 MB. 
 {noformat}
 JDK8
 {noformat}
 Heap Memory used 3.02 GB of 7.65 GB Heap Memory. Max Heap Memory is 23.7 GB.
 Non Heap Memory used 103.12 MB of 104.41 MB Committed Non Heap Memory. Max Non 
 Heap Memory is -1 B. 
 {noformat}
 More information in first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11139) Allow user to choose JVM for container execution

2014-09-26 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150024#comment-14150024
 ] 

Mohammad Kamrul Islam commented on HADOOP-11139:


Good find, [~aw]!

I will post a comment in YARN-2481 to make sure that JIRA has the exact same 
goal. After that, one can be closed in favor of the other.

 Allow user to choose JVM for container execution
 

 Key: HADOOP-11139
 URL: https://issues.apache.org/jira/browse/HADOOP-11139
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Hadoop currently supports one JVM, defined through JAVA_HOME. 
 Since multiple JVMs (Java 6, 7, 8, 9) are active, it would be helpful to have 
 a user configuration for choosing a custom but supported JVM per job.
 In other words, the user would be able to choose the expected JVM just for 
 container execution, while the Hadoop services may be running on a different 
 JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11139) Allow user to choose JVM for container execution

2014-09-25 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-11139:
--

 Summary: Allow user to choose JVM for container execution
 Key: HADOOP-11139
 URL: https://issues.apache.org/jira/browse/HADOOP-11139
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


Hadoop currently supports one JVM, defined through JAVA_HOME. 
Since multiple JVMs (Java 6, 7, 8, 9) are active, it would be helpful to have 
a user configuration for choosing a custom but supported JVM per job.

In other words, the user would be able to choose the expected JVM just for 
container execution, while the Hadoop services may be running on a different 
JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11090) [Umbrella] Issues with Java 8 in Hadoop

2014-09-12 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-11090:
--

 Summary: [Umbrella] Issues with Java 8 in Hadoop
 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly 
works with Java 8 is important for the Apache community.
  
This JIRA is to track the issues/experiences encountered during the Java 8 
migration. If you find a potential bug, please create a separate JIRA, either 
as a sub-task or linked to this JIRA.
If you find a Hadoop or JVM configuration tuning, you can create a JIRA as 
well, or add a comment here.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10741) A lightweight WebHDFS client library

2014-06-23 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041603#comment-14041603
 ] 

Mohammad Kamrul Islam commented on HADOOP-10741:


[~tucu00] thanks for the comments. 
One of the key requirements is to provide a lightweight library that is 
independent of Hadoop core. The independence from core is required because any 
update to the Hadoop service would otherwise mean an upgrade of the 
application as well (which might need to go through the full qualification 
life cycle). This is a pain point for a non-Hadoop application that runs 
outside the Hadoop cluster and occasionally retrieves files from Hadoop.

I agree with reusing hadoop-auth. But depending on the Hadoop core jar would 
miss the key requirement.

I also agree that some of this (a scaled-down version) needs to be 
re-implemented. If you have an idea for achieving both (not re-implementing 
and not depending on the Hadoop core library), that would be great.

In short, this requirement focuses on applications that run outside Hadoop 
but need to occasionally get/put data from/into Hadoop.

  

 A lightweight WebHDFS client library
 

 Key: HADOOP-10741
 URL: https://issues.apache.org/jira/browse/HADOOP-10741
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: Tsz Wo Nicholas Sze
Assignee: Mohammad Kamrul Islam

 One of the motivations for creating WebHDFS is for applications connecting to 
 HDFS from outside the cluster.  In order to do so, users have to either
 # install Hadoop and use WebHdfsFileSystem, or
 # develop their own client using the WebHDFS REST API.
 For #1, it is very difficult to manage and unnecessarily complicated for 
 other applications since Hadoop is not a lightweight library.  For #2, it is 
 not easy to deal with security and handle transient errors.
 Therefore, we propose adding a lightweight WebHDFS client as a separate 
 library which does not depend on Common and HDFS.  The client can be packaged 
 as a standalone jar.  Other applications simply add the jar to their 
 classpath to use it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10523) Hadoop services (such as RM, NN and JHS) throw confusing exception during token auto-cancelation

2014-05-16 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998555#comment-13998555
 ] 

Mohammad Kamrul Islam commented on HADOOP-10523:


Very good explanations!
Mostly agreed with the following comments:

 I think the better solution is for users to not cancel tokens. Tokens are 
 supposed to be an invisible implementation detail of job submission and 
 thus not require user manipulation.

I was told every (delegation) token creates an overhead on the process memory, 
so we try to close them early. If that thinking has changed, I'm open to any 
option. Btw, long-running processes like Azkaban explicitly cancel their 
delegation tokens.

 I'd suggest modifying the RM to either swallow the cancel error on job 
 completion, or to simply emit a single line in the log instead of a backtrace.

So this would be added as a WARN message in the callers of cancelToken(), 
which include RM, JHS, and NN, right? Can you please give a little more detail 
about "on job completion"?



 Hadoop services (such as RM, NN and JHS) throw confusing exception during 
 token auto-cancelation 
 -

 Key: HADOOP-10523
 URL: https://issues.apache.org/jira/browse/HADOOP-10523
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 2.5.0

 Attachments: HADOOP-10523.1.patch


 When a user explicitly cancels a token, the system (such as RM, NN and JHS) 
 also periodically tries to cancel the same token. During the second cancel 
 (originated by RM/NN/JHS), Hadoop processes throw the following 
 error/exception in the log file. Although the exception is harmless, it 
 creates a lot of confusion and causes developers to spend a lot of time 
 investigating.
 This JIRA is to check whether the token is still available (not yet 
 cancelled) before attempting to cancel it, and to replace this exception 
 with a proper warning message.
 {noformat}
 2014-04-15 01:41:14,686 INFO 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
  Token cancelation requested for identifier:: 
 owner=FULL_PRINCIPAL.linkedin.com@REALM, renewer=yarn, realUser=, 
 issueDate=1397525405921, maxDate=1398130205921, sequenceNumber=1, 
 masterKeyId=2
 2014-04-15 01:41:14,688 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:yarn/HOST@REALM (auth:KERBEROS) 
 cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not 
 found
 2014-04-15 01:41:14,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 7 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.128.42:2783 Call#37759 Retry#0: error: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:436)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10523) Hadoop services (such as RM, NN and JHS) throw confusing exception during token auto-cancelation

2014-05-12 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13995880#comment-13995880
 ] 

Mohammad Kamrul Islam commented on HADOOP-10523:


Thanks [~daryn] for the comments.

I agree with your observation.

 The goal appears to be an attempt to mask a problem: Why is the token being 
 double cancelled?

The scenario of token double cancellation:
1. The owner of the token cancels it first.
2. After that, the services also cancel the token. This second cancel creates 
the unnecessary exception trace.

In most cases, Step #1 is not done (e.g. Oozie). But others are cancelling it 
explicitly.

What do you think would be a good approach?

Approach 1: Services like NN and RM can check whether the token is already 
cancelled or removed (see the sketch below).
Approach 2: When the user cancels the token, the services update their lists 
accordingly.
Approach 3: Something else.
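
A minimal sketch of what this could look like at a service-side call site 
(hypothetical; it folds Approach 1 into the single-log-line idea from the 
earlier comment):
{code}
// Hypothetical caller-side handling in RM/NN/JHS: treat an already-cancelled
// token as a non-event and log one warning line instead of a stack trace.
try {
  dtSecretManager.cancelToken(token, canceller);
} catch (SecretManager.InvalidToken e) {
  LOG.warn("Token already cancelled or not found for " + canceller + ": "
      + e.getMessage());
}
{code}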




 Hadoop services (such as RM, NN and JHS) throw confusing exception during 
 token auto-cancelation 
 -

 Key: HADOOP-10523
 URL: https://issues.apache.org/jira/browse/HADOOP-10523
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 2.5.0

 Attachments: HADOOP-10523.1.patch


 When a user explicitly cancels a token, the system (such as RM, NN and JHS) 
 also periodically tries to cancel the same token. During the second cancel 
 (originated by RM/NN/JHS), Hadoop processes throw the following 
 error/exception in the log file. Although the exception is harmless, it 
 creates a lot of confusion and causes developers to spend a lot of time 
 investigating.
 This JIRA is to check whether the token is still available (not yet 
 cancelled) before attempting to cancel it, and to replace this exception 
 with a proper warning message.
 {noformat}
 2014-04-15 01:41:14,686 INFO 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
  Token cancelation requested for identifier:: 
 owner=FULL_PRINCIPAL.linkedin.com@REALM, renewer=yarn, realUser=, 
 issueDate=1397525405921, maxDate=1398130205921, sequenceNumber=1, 
 masterKeyId=2
 2014-04-15 01:41:14,688 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:yarn/HOST@REALM (auth:KERBEROS) 
 cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not 
 found
 2014-04-15 01:41:14,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 7 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.128.42:2783 Call#37759 Retry#0: error: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:436)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10509) cancelToken doesn't work in some instances

2014-05-06 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991095#comment-13991095
 ] 

Mohammad Kamrul Islam commented on HADOOP-10509:


Can someone please review or commit this patch?

 cancelToken doesn't work in some instances
 --

 Key: HADOOP-10509
 URL: https://issues.apache.org/jira/browse/HADOOP-10509
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 2.5.0

 Attachments: HADOOP-10509.1.patch


 When the owner of a token tries to explicitly cancel the token, it gets the 
 following error/exception
 {noformat} 
 2014-04-14 20:07:35,744 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException 
 as:someuser/machine_name.linkedin.com@realm.LINKEDIN.COM 
 (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: 
 someuser is not authorized to cancel the token
 2014-04-14 20:07:35,744 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 2 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.158.61:49042 Call#4 Retry#0: error: 
 org.apache.hadoop.security.AccessControlException: someuser is not 
 authorized to cancel the token
 org.apache.hadoop.security.AccessControlException: someuser is not 
 authorized to cancel the token
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:429)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}
 Details:
 AbstractDelegationTokenSecretManager.cancelToken() gets the owner as the full 
 principal name, whereas the canceller is the short name.
 The potential code snippet:
 {code}
 String owner = id.getUser().getUserName(); 
 Text renewer = id.getRenewer();
 HadoopKerberosName cancelerKrbName = new HadoopKerberosName(canceller);
 String cancelerShortName = cancelerKrbName.getShortName();
 if (!canceller.equals(owner)
     && (renewer == null || renewer.toString().isEmpty() || !cancelerShortName
         .equals(renewer.toString()))) {
   throw new AccessControlException(canceller
       + " is not authorized to cancel the token");
 }
 {code}
 The code shows that 'owner' gets the full principal name, whereas the value 
 of 'canceller' depends on who is calling it. 
 In some cases, it is the short name. REF: HistoryClientService.java
 {code}
 String user = UserGroupInformation.getCurrentUser().getShortUserName();
 jhsDTSecretManager.cancelToken(token, user);
 {code}
  
 In other cases, the value could be the full principal name. REF: 
 FSNamesystem.java:
 {code}
 String canceller = getRemoteUser().getUserName();
 DelegationTokenIdentifier id = dtSecretManager
     .cancelToken(token, canceller);
 {code}
 Possible resolutions:
 --
 Option 1: In the cancelToken() method, compare against both the short name 
 and the full principal name (see the sketch below).
 Pros: Easy; only one place to change.
 Cons: Someone can argue that it is hacky!
  
 Option 2:
 All the callers send a consistent value as 'canceller': either the short name 
 or the full principal name.
 Pros: Cleaner.
 Cons: A lot of code changes and potential bug injection.
 I'm open to both options.
 Please give your opinion.
 Btw, how is it working now in most cases?  The short name and the full 
 principal name are usually the same for end-users.
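
 A minimal sketch of Option 1 (hypothetical; not the attached patch):
{code}
// Option 1 sketch: accept a canceller that matches either the owner's full
// principal name or its short name, so both caller styles keep working.
String owner = id.getUser().getUserName();
String ownerShortName = new HadoopKerberosName(owner).getShortName();
boolean matchesOwner = canceller.equals(owner)
    || canceller.equals(ownerShortName);
if (!matchesOwner
    && (renewer == null || renewer.toString().isEmpty()
        || !cancelerShortName.equals(renewer.toString()))) {
  throw new AccessControlException(canceller
      + " is not authorized to cancel the token");
}
{code}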



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10523) Hadoop services (such as RM, NN and JHS) throw confusing exception during token auto-cancelation

2014-05-06 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991096#comment-13991096
 ] 

Mohammad Kamrul Islam commented on HADOOP-10523:


Can someone please review or commit this patch?

 Hadoop services (such as RM, NN and JHS) throw confusing exception during 
 token auto-cancelation 
 -

 Key: HADOOP-10523
 URL: https://issues.apache.org/jira/browse/HADOOP-10523
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 2.5.0

 Attachments: HADOOP-10523.1.patch


 When a user explicitly cancels a token, the system (such as RM, NN and JHS) 
 also periodically tries to cancel the same token. During the second cancel 
 (originated by RM/NN/JHS), Hadoop processes throw the following 
 error/exception in the log file. Although the exception is harmless, it 
 creates a lot of confusion and causes developers to spend a lot of time 
 investigating.
 This JIRA is to check whether the token is still available (not yet 
 cancelled) before attempting to cancel it, and to replace this exception 
 with a proper warning message.
 {noformat}
 2014-04-15 01:41:14,686 INFO 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
  Token cancelation requested for identifier:: 
 owner=FULL_PRINCIPAL.linkedin.com@REALM, renewer=yarn, realUser=, 
 issueDate=1397525405921, maxDate=1398130205921, sequenceNumber=1, 
 masterKeyId=2
 2014-04-15 01:41:14,688 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:yarn/HOST@REALM (auth:KERBEROS) 
 cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not 
 found
 2014-04-15 01:41:14,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 7 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.128.42:2783 Call#37759 Retry#0: error: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:436)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10523) Hadoop services (such as RM, NN and JHS) throw confusing exception during token auto-cancelation

2014-04-23 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HADOOP-10523:
---

Attachment: HADOOP-10523.1.patch

patch uploaded.

 Hadoop services (such as RM, NN and JHS) throw confusing exception during 
 token auto-cancelation 
 -

 Key: HADOOP-10523
 URL: https://issues.apache.org/jira/browse/HADOOP-10523
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Attachments: HADOOP-10523.1.patch


 When a user explicitly cancels a token, the system (such as RM, NN and JHS) 
 also periodically tries to cancel the same token. During the second cancel 
 (originated by RM/NN/JHS), Hadoop processes throw the following 
 error/exception in the log file. Although the exception is harmless, it 
 creates a lot of confusion and causes developers to spend a lot of time 
 investigating.
 This JIRA is to check whether the token is still available (not yet 
 cancelled) before attempting to cancel it, and to replace this exception 
 with a proper warning message.
 {noformat}
 2014-04-15 01:41:14,686 INFO 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
  Token cancelation requested for identifier:: 
 owner=FULL_PRINCIPAL.linkedin.com@REALM, renewer=yarn, realUser=, 
 issueDate=1397525405921, maxDate=1398130205921, sequenceNumber=1, 
 masterKeyId=2
 2014-04-15 01:41:14,688 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:yarn/HOST@REALM (auth:KERBEROS) 
 cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not 
 found
 2014-04-15 01:41:14,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 7 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.128.42:2783 Call#37759 Retry#0: error: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:436)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10523) Hadoop services (such as RM, NN and JHS) throw confusing exception during token auto-cancelation

2014-04-23 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HADOOP-10523:
---

Fix Version/s: 2.5.0
   Status: Patch Available  (was: Open)

 Hadoop services (such as RM, NN and JHS) throw confusing exception during 
 token auto-cancelation 
 -

 Key: HADOOP-10523
 URL: https://issues.apache.org/jira/browse/HADOOP-10523
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 2.5.0

 Attachments: HADOOP-10523.1.patch


 When a user explicitly cancels a token, the system (such as RM, NN and JHS) 
 also periodically tries to cancel the same token. During the second cancel 
 (originated by RM/NN/JHS), Hadoop processes throw the following 
 error/exception in the log file. Although the exception is harmless, it 
 creates a lot of confusion and causes developers to spend a lot of time 
 investigating.
 This JIRA is to check whether the token is still available (not yet 
 cancelled) before attempting to cancel it, and to replace this exception 
 with a proper warning message.
 {noformat}
 2014-04-15 01:41:14,686 INFO 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
  Token cancelation requested for identifier:: 
 owner=FULL_PRINCIPAL.linkedin.com@REALM, renewer=yarn, realUser=, 
 issueDate=1397525405921, maxDate=1398130205921, sequenceNumber=1, 
 masterKeyId=2
 2014-04-15 01:41:14,688 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:yarn/HOST@REALM (auth:KERBEROS) 
 cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not 
 found
 2014-04-15 01:41:14,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 7 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.128.42:2783 Call#37759 Retry#0: error: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:436)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}
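A minimal sketch of the proposed behavior, treating a repeated cancel of an 
already-removed token as a warning rather than an error (the logger and the 
surrounding call site are illustrative, not the committed patch):

{code}
// Hypothetical sketch: the periodic auto-cancel path catches the
// "Token not found" case and logs a warning instead of surfacing the
// InvalidToken exception.
try {
  dtSecretManager.cancelToken(token, canceller);
} catch (SecretManager.InvalidToken e) {
  LOG.warn("Token for " + token.getService()
      + " was already cancelled or has expired; skipping auto-cancel");
}
{code}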



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10523) Hadoop services (such as RM, NN and JHS) throw confusing exception during token auto-cancelation

2014-04-18 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-10523:
--

 Summary: Hadoop services (such as RM, NN and JHS) throw confusing 
exception during token auto-cancelation 
 Key: HADOOP-10523
 URL: https://issues.apache.org/jira/browse/HADOOP-10523
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


When a user explicitly cancels a token, the system (such as RM, NN, or JHS) 
also periodically tries to cancel the same token. During the second cancel 
(originated by RM/NN/JHS), Hadoop processes log the following error/exception. 
Although the exception is harmless, it creates a lot of confusion and causes 
developers to spend a lot of time investigating.
This JIRA is to check that the token is still present (i.e., not already 
cancelled) before attempting to cancel it, and to replace this exception with 
a proper warning message.


{noformat}
2014-04-15 01:41:14,686 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Token cancelation requested for identifier:: 
owner=FULL_PRINCIPAL.linkedin.com@REALM, renewer=yarn, realUser=, 
issueDate=1397525405921, maxDate=1398130205921, sequenceNumber=1, masterKeyId=2
2014-04-15 01:41:14,688 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:yarn/HOST@REALM (auth:KERBEROS) 
cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not 
found
2014-04-15 01:41:14,689 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 
on 10020, call 
org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
from 172.20.128.42:2783 Call#37759 Retry#0: error: 
org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
org.apache.hadoop.security.token.SecretManager$InvalidToken: Token not found
at 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:436)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
at 
org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

{noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10509) cancelToken doesn't work in some instances

2014-04-17 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HADOOP-10509:
---

Attachment: HADOOP-10509.1.patch

Patch uploaded.

 cancelToken doesn't work in some instances
 --

 Key: HADOOP-10509
 URL: https://issues.apache.org/jira/browse/HADOOP-10509
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 2.5.0

 Attachments: HADOOP-10509.1.patch


 When the owner of a token tries to explicitly cancel it, the call fails with 
 the following error/exception:
 {noformat} 
 2014-04-14 20:07:35,744 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException 
 as:someuser/machine_name.linkedin.com@realm.LINKEDIN.COM 
 (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: 
 someuser is not authorized to cancel the token
 2014-04-14 20:07:35,744 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 2 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.158.61:49042 Call#4 Retry#0: error: 
 org.apache.hadoop.security.AccessControlException: someuser is not 
 authorized to cancel the token
 org.apache.hadoop.security.AccessControlException: someuser is not 
 authorized to cancel the token
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:429)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}
 Details:
 AbstractDelegationTokenSecretManager.cancelToken() receives the owner as the 
 full principal name, whereas the canceller is the short name.
 The relevant code snippet:
 {code}
 String owner = id.getUser().getUserName();
 Text renewer = id.getRenewer();
 HadoopKerberosName cancelerKrbName = new HadoopKerberosName(canceller);
 String cancelerShortName = cancelerKrbName.getShortName();
 if (!canceller.equals(owner)
     && (renewer == null || renewer.toString().isEmpty()
         || !cancelerShortName.equals(renewer.toString()))) {
   throw new AccessControlException(canceller
       + " is not authorized to cancel the token");
 }
 {code}
 The code shows that 'owner' holds the full principal name, whereas the value 
 of 'canceller' depends on the caller.
 In some cases it is the short name. REF: HistoryClientService.java
 {code}
 String user = UserGroupInformation.getCurrentUser().getShortUserName();
 jhsDTSecretManager.cancelToken(token, user);
 {code}
 In other cases it can be the full principal name. REF: FSNamesystem.java
 {code}
 String canceller = getRemoteUser().getUserName();
 DelegationTokenIdentifier id = dtSecretManager
     .cancelToken(token, canceller);
 {code}
 Possible resolutions:
 Option 1: In cancelToken(), compare the canceller against both the short 
 name and the full principal name.
 Pros: easy; only one place to change.
 Cons: someone can argue that it is hacky!
 Option 2: All callers pass a consistent value as 'canceller': either the 
 short name or the full principal name.
 Pros: cleaner.
 Cons: a lot of code changes and potential for new bugs.
 I'm open to both options.
 Please give your opinion.
 Btw, how does it work in most cases today? The short name and the full 
 principal name are usually the same for end users.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10509) cancelToken doesn't work in some instances

2014-04-17 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HADOOP-10509:
---

Fix Version/s: 2.5.0
Affects Version/s: 2.3.0
   Status: Patch Available  (was: Open)

 cancelToken doesn't work in some instances
 --

 Key: HADOOP-10509
 URL: https://issues.apache.org/jira/browse/HADOOP-10509
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 2.5.0

 Attachments: HADOOP-10509.1.patch


 When the owner of a token tries to explicitly cancel it, the call fails with 
 the following error/exception:
 {noformat} 
 2014-04-14 20:07:35,744 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException 
 as:someuser/machine_name.linkedin.com@realm.LINKEDIN.COM 
 (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: 
 someuser is not authorized to cancel the token
 2014-04-14 20:07:35,744 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 2 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.158.61:49042 Call#4 Retry#0: error: 
 org.apache.hadoop.security.AccessControlException: someuser is not 
 authorized to cancel the token
 org.apache.hadoop.security.AccessControlException: someuser is not 
 authorized to cancel the token
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:429)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}
 Details:
 AbstractDelegationTokenSecretManager.cancelToken() receives the owner as the 
 full principal name, whereas the canceller is the short name.
 The relevant code snippet:
 {code}
 String owner = id.getUser().getUserName();
 Text renewer = id.getRenewer();
 HadoopKerberosName cancelerKrbName = new HadoopKerberosName(canceller);
 String cancelerShortName = cancelerKrbName.getShortName();
 if (!canceller.equals(owner)
     && (renewer == null || renewer.toString().isEmpty()
         || !cancelerShortName.equals(renewer.toString()))) {
   throw new AccessControlException(canceller
       + " is not authorized to cancel the token");
 }
 {code}
 The code shows that 'owner' holds the full principal name, whereas the value 
 of 'canceller' depends on the caller.
 In some cases it is the short name. REF: HistoryClientService.java
 {code}
 String user = UserGroupInformation.getCurrentUser().getShortUserName();
 jhsDTSecretManager.cancelToken(token, user);
 {code}
 In other cases it can be the full principal name. REF: FSNamesystem.java
 {code}
 String canceller = getRemoteUser().getUserName();
 DelegationTokenIdentifier id = dtSecretManager
     .cancelToken(token, canceller);
 {code}
 Possible resolutions:
 Option 1: In cancelToken(), compare the canceller against both the short 
 name and the full principal name.
 Pros: easy; only one place to change.
 Cons: someone can argue that it is hacky!
 Option 2: All callers pass a consistent value as 'canceller': either the 
 short name or the full principal name.
 Pros: cleaner.
 Cons: a lot of code changes and potential for new bugs.
 I'm open to both options.
 Please give your opinion.
 Btw, how does it work in most cases today? The short name and the full 
 principal name are usually the same for end users.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10509) cancelToken doesn't work in some instances

2014-04-16 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13971819#comment-13971819
 ] 

Mohammad Kamrul Islam commented on HADOOP-10509:


Thanks Daryn for the confirmation.
I will upload a patch soon.

 cancelToken doesn't work in some instances
 --

 Key: HADOOP-10509
 URL: https://issues.apache.org/jira/browse/HADOOP-10509
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 When the owner of a token tries to explicitly cancel it, the call fails with 
 the following error/exception:
 {noformat} 
 2014-04-14 20:07:35,744 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException 
 as:someuser/machine_name.linkedin.com@realm.LINKEDIN.COM 
 (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: 
 someuser is not authorized to cancel the token
 2014-04-14 20:07:35,744 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 2 on 10020, call 
 org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
 from 172.20.158.61:49042 Call#4 Retry#0: error: 
 org.apache.hadoop.security.AccessControlException: someuser is not 
 authorized to cancel the token
 org.apache.hadoop.security.AccessControlException: someuser is not 
 authorized to cancel the token
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:429)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 {noformat}
 Details:
 AbstractDelegationTokenSecretManager.cancelToken() receives the owner as the 
 full principal name, whereas the canceller is the short name.
 The relevant code snippet:
 {code}
 String owner = id.getUser().getUserName();
 Text renewer = id.getRenewer();
 HadoopKerberosName cancelerKrbName = new HadoopKerberosName(canceller);
 String cancelerShortName = cancelerKrbName.getShortName();
 if (!canceller.equals(owner)
     && (renewer == null || renewer.toString().isEmpty()
         || !cancelerShortName.equals(renewer.toString()))) {
   throw new AccessControlException(canceller
       + " is not authorized to cancel the token");
 }
 {code}
 The code shows that 'owner' holds the full principal name, whereas the value 
 of 'canceller' depends on the caller.
 In some cases it is the short name. REF: HistoryClientService.java
 {code}
 String user = UserGroupInformation.getCurrentUser().getShortUserName();
 jhsDTSecretManager.cancelToken(token, user);
 {code}
 In other cases it can be the full principal name. REF: FSNamesystem.java
 {code}
 String canceller = getRemoteUser().getUserName();
 DelegationTokenIdentifier id = dtSecretManager
     .cancelToken(token, canceller);
 {code}
 Possible resolutions:
 Option 1: In cancelToken(), compare the canceller against both the short 
 name and the full principal name.
 Pros: easy; only one place to change.
 Cons: someone can argue that it is hacky!
 Option 2: All callers pass a consistent value as 'canceller': either the 
 short name or the full principal name.
 Pros: cleaner.
 Cons: a lot of code changes and potential for new bugs.
 I'm open to both options.
 Please give your opinion.
 Btw, how does it work in most cases today? The short name and the full 
 principal name are usually the same for end users.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10509) cancelToken doesn't work in some instances

2014-04-15 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-10509:
--

 Summary: cancelToken doesn't work in some instances
 Key: HADOOP-10509
 URL: https://issues.apache.org/jira/browse/HADOOP-10509
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


When the owner of a token tries to explicitly cancel it, the call fails with 
the following error/exception:
{noformat} 
2014-04-14 20:07:35,744 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException 
as:someuser/machine_name.linkedin.com@realm.LINKEDIN.COM (auth:KERBEROS) 
cause:org.apache.hadoop.security.AccessControlException: someuser is not 
authorized to cancel the token
2014-04-14 20:07:35,744 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 
on 10020, call 
org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.cancelDelegationToken 
from 172.20.158.61:49042 Call#4 Retry#0: error: 
org.apache.hadoop.security.AccessControlException: someuser is not authorized 
to cancel the token
org.apache.hadoop.security.AccessControlException: someuser is not authorized 
to cancel the token
at 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.cancelToken(AbstractDelegationTokenSecretManager.java:429)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.cancelDelegationToken(HistoryClientService.java:400)
at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.cancelDelegationToken(MRClientProtocolPBServiceImpl.java:286)
at 
org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:301)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

{noformat}


Details:
AbstractDelegationTokenSecretManager.cancelToken() receives the owner as the 
full principal name, whereas the canceller is the short name.
The relevant code snippet:
{code}
String owner = id.getUser().getUserName();
Text renewer = id.getRenewer();
HadoopKerberosName cancelerKrbName = new HadoopKerberosName(canceller);
String cancelerShortName = cancelerKrbName.getShortName();
if (!canceller.equals(owner)
    && (renewer == null || renewer.toString().isEmpty()
        || !cancelerShortName.equals(renewer.toString()))) {
  throw new AccessControlException(canceller
      + " is not authorized to cancel the token");
}
{code}

The code shows that 'owner' holds the full principal name, whereas the value 
of 'canceller' depends on the caller.
In some cases it is the short name. REF: HistoryClientService.java
{code}
String user = UserGroupInformation.getCurrentUser().getShortUserName();
jhsDTSecretManager.cancelToken(token, user);
{code}

In other cases it can be the full principal name. REF: FSNamesystem.java
{code}
String canceller = getRemoteUser().getUserName();
DelegationTokenIdentifier id = dtSecretManager
    .cancelToken(token, canceller);
{code}

Possible resolutions:
Option 1: In cancelToken(), compare the canceller against both the short name 
and the full principal name (a sketch follows below).
Pros: easy; only one place to change.
Cons: someone can argue that it is hacky!

Option 2: All callers pass a consistent value as 'canceller': either the 
short name or the full principal name.
Pros: cleaner.
Cons: a lot of code changes and potential for new bugs.

I'm open to both options.
Please give your opinion.

Btw, how does it work in most cases today? The short name and the full 
principal name are usually the same for end users.
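A minimal sketch of Option 1, assuming the check stays inside 
AbstractDelegationTokenSecretManager.cancelToken() (the ownerShortName 
variable and the restructured condition are illustrative, not the committed 
patch):

{code}
// Hypothetical sketch of Option 1: authorize the canceller if it matches
// either the owner's full principal name or the owner's short name, while
// keeping the existing renewer check.
String owner = id.getUser().getUserName();                // full principal
String ownerShortName = id.getUser().getShortUserName();  // short name
Text renewer = id.getRenewer();
String cancelerShortName =
    new HadoopKerberosName(canceller).getShortName();
boolean cancellerIsOwner = canceller.equals(owner)
    || cancelerShortName.equals(ownerShortName);
boolean cancellerIsRenewer = renewer != null
    && !renewer.toString().isEmpty()
    && cancelerShortName.equals(renewer.toString());
if (!cancellerIsOwner && !cancellerIsRenewer) {
  throw new AccessControlException(canceller
      + " is not authorized to cancel the token");
}
{code}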






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10409) Bzip2 error message isn't clear

2014-04-03 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam reassigned HADOOP-10409:
--

Assignee: Mohammad Kamrul Islam

 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected. However, this is not 
 documented, and the error message thrown by {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed. It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10409) Bzip2 error message isn't clear

2014-04-03 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam resolved HADOOP-10409.


Resolution: Won't Fix

 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected. However, this is not 
 documented, and the error message thrown by {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed. It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10409) Bzip2 error message isn't clear

2014-04-03 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959497#comment-13959497
 ] 

Mohammad Kamrul Islam commented on HADOOP-10409:


I found that we don't need any code changes for this. The JIRA that 
[~tthompso] created should take care of the documentation task.

Therefore, closing...


 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected. However, this is not 
 documented, and the error message thrown by {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed. It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10409) Bzip2 error message isn't clear

2014-03-19 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13941158#comment-13941158
 ] 

Mohammad Kamrul Islam commented on HADOOP-10409:


I can see two sub-tasks:
1. Make the warning message more explicit by including the real failure 
message. For example, the current warn message 
"WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library 
system-native, will use pure-Java version" can be extended with the thrown 
exception's getMessage(). It would be even more helpful if it named the 
possibly missing artifacts (a sketch follows below).

2. Update BUILDING.txt, as proposed by others.

Comments?
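A minimal sketch of sub-task 1, assuming the warning is emitted from 
Bzip2Factory's native-library initialization (the class, method, and variable 
names below are approximated from the log lines above, not verified against 
actual Hadoop source):

{code}
// Hypothetical sketch: extend the existing warning with the underlying
// cause and a hint about the likely missing build dependency.
// Bzip2Compressor.initSymbols(libname) stands in for whatever native
// initialization actually fails; it is an assumption, not a verified API.
try {
  Bzip2Compressor.initSymbols(libname);
  Bzip2Decompressor.initSymbols(libname);
  nativeBzip2Loaded = true;
} catch (Throwable t) {
  LOG.warn("Failed to load/initialize native-bzip2 library " + libname
      + ", will use pure-Java version. Cause: " + t.getMessage()
      + ". Was libhadoop built with the bzip2 headers (bzip2-devel)"
      + " installed?");
}
{code}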

 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected. However, this is not 
 documented, and the error message thrown by {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed. It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10409) Bzip2 error message isn't clear

2014-03-18 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam reassigned HADOOP-10409:
--

Assignee: Mohammad Kamrul Islam

 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected. However, this is not 
 documented, and the error message thrown by {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed. It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10409) Bzip2 error message isn't clear

2014-03-18 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HADOOP-10409:
---

Assignee: (was: Mohammad Kamrul Islam)

 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected. However, this is not 
 documented, and the error message thrown by {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed. It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10329) Fully qualified URIs are inconsistant and sometimes break in hadoop conf files

2014-02-15 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HADOOP-10329:
---

Attachment: HADOOP-10329.3.patch

Added the Apache license header to a new file.

 Fully qualified URIs are inconsistant and sometimes break in hadoop conf files
 --

 Key: HADOOP-10329
 URL: https://issues.apache.org/jira/browse/HADOOP-10329
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.2.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam
 Attachments: HADOOP-10329.1.patch, HADOOP-10329.2.patch, 
 HADOOP-10329.3.patch


 When specifying paths in the *-site.xml files, some are required to be fully 
 qualified, while others (specifically hadoop.tmp.dir) break when a fully 
 qualified URI is used.
 Example:
 If I set hadoop.tmp.dir in core-site.xml to file:///something, it creates a 
 file: directory in my $PWD.
 {noformat}
   <property>
     <name>hadoop.tmp.dir</name>
     <value>file:///grid/a/tmp/hadoop-${user.name}</value>
   </property>
 {noformat}
 {noformat}
 [tthompso@test ~]$ tree file\:/
 file:/
 └── grid
 └── a
 └── tmp
 └── hadoop-tthompso
 {noformat}
 Other places, like the datanode or the nodemanager, will complain if I don't 
 use fully qualified URIs:
 {noformat}
   <property>
     <name>dfs.datanode.data.dir</name>
     <value>/grid/a/dfs-data/bs</value>
   </property>
 {noformat}
 {noformat}
 WARN org.apache.hadoop.hdfs.server.common.Util: Path /grid/a/dfs-data/bs 
 should be specified as a URI in configuration files. Please update hdfs 
 configuration.
 {noformat}
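A minimal, hypothetical illustration of the mismatch in plain Java (not 
Hadoop source): when a config value is treated as a local filesystem path, a 
file:// URI degenerates into a relative "file:" directory under $PWD, while 
a value parsed as a URI produces a warning when the scheme is missing:

{code}
import java.io.File;
import java.net.URI;

// Hypothetical demo of the two interpretations described above.
public class UriConfigDemo {
  public static void main(String[] args) throws Exception {
    // Treated as a plain path: consecutive slashes collapse, so "file:"
    // becomes a relative directory, matching the tree output above.
    File tmp = new File("file:///grid/a/tmp");
    System.out.println(tmp.getPath());     // prints file:/grid/a/tmp

    // Treated as a URI: a bare path has no scheme, which is what triggers
    // the "should be specified as a URI" warning from the datanode.
    URI data = new URI("/grid/a/dfs-data/bs");
    System.out.println(data.getScheme());  // prints null
  }
}
{code}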



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-7592) Provide programmatic access to use job log

2011-08-30 Thread Mohammad Kamrul Islam (JIRA)
Provide programmatic access to use job log 
---

 Key: HADOOP-7592
 URL: https://issues.apache.org/jira/browse/HADOOP-7592
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Mohammad Kamrul Islam


Since the hadoop job/task log is critical for debugging, providing 
programmatic access to it would add significant value for end users.

Oozie users repeatedly ask to have the hadoop job log opened up for them. 
However, Oozie has no knowledge of either the log location or the log 
contents of a hadoop job.


The expected programmatic access could come in either or both of the 
following ways (a sketch follows below):
1. An API such as getLogPath(jobId) that returns the base HDFS path of the 
job's log directory.
2. An API such as getLog(jobId) that returns the log contents (possibly as 
streaming output).
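A minimal sketch of the proposed API (the interface and method signatures 
below are illustrative; nothing here exists in Hadoop as-is):

{code}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobID;

// Hypothetical interface mirroring the two options proposed above.
public interface JobLogAccess {
  /** Returns the base HDFS path of the job's log directory. */
  Path getLogPath(JobID jobId) throws IOException;

  /** Returns the job's log contents as a stream. */
  InputStream getLog(JobID jobId) throws IOException;
}
{code}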
 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira