[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560173#comment-14560173
 ] 

Chris Nauroth commented on HADOOP-11934:


Hi [~lmccay].  This looks great overall!  Here are a few comments, mostly minor.

# Both {{AbstractJavaKeyStoreProvider}} and {{LocalJavaKeyStoreProvider}} have 
copied some class-level JavaDocs from {{JavaKeyStoreProvider}}.  This isn't 
completely accurate, because those comments talk about pointing to different 
{{FileSystem}} implementations.  Could you please revise this?
# {{AbstractJavaKeyStoreProvider}} constructor: The trunk version of the 
following code trims the password.  Do we need to keep that behavior?  The 
patch currently has:
{code}
  try (InputStream is = pwdFile.openStream()) {
    password = IOUtils.toCharArray(is);
  }
{code}
whereas trunk has:
{code}
  try (InputStream is = pwdFile.openStream()) {
    password = IOUtils.toString(is).trim().toCharArray();
  }
{code}
# {{AbstractJavaKeyStoreProvider#bytesToChars}}: The existing trunk code used 
{{Charsets#UTF_8}} to avoid the need to handle 
{{UnsupportedEncodingException}}.  Shall we keep it the same, or was this an 
intentional change?
# {{AbstractJavaKeyStoreProvider#getPathAsString}}: This has the same 
implementation in both subclasses.  Would it make sense to refactor that up to 
the base class as a {{protected final}} method?
# {{JavaKeyStoreProvider#getOutputStreamForKeystore}}: This isn't a new thing 
with your patch, but I wanted to mention that this overload of the 
{{FileSystem.create}} method is not atomic.  First it creates the file with 
default permissions (usually 644), and then setting the requested permissions 
is done separately.  In the case of HDFS, this is 2 separate RPCs.  That means 
there is a brief window in which the file has default permissions.  If the 
process dies after the first RPC but before the second, then the permissions 
will never be changed.  To do this atomically, we'd need to switch to one of 
the other (much uglier) overloads of {{FileSystem#create}} (see the sketch 
after this list).  If you think changing this would be a good improvement, 
then I recommend queuing up a separate jira for that change, since we already 
have a mid-sized patch going here.
# {{JavaKeyStoreProvider}} and {{LocalJavaKeyStoreProvider}}: Please add the 
{{@Override}} annotation on all applicable methods.
# {{TestCredentialProviderFactory}}: After this patch, the tests fail on 
Windows due to string concatenation of a test directory that contains '\' 
characters, which are not valid URI characters.  (See the stack trace below.)  
There have been similar patches in the past to fix these tests on Windows, so 
you could look back at those for inspiration.  The fix will probably involve 
{{Path#toUri}}, which yields only '/' characters and therefore valid URI 
syntax; a sketch follows the stack trace.

{code}
java.io.IOException: Bad configuration of 
hadoop.security.credential.provider.path at 
jceks://fileC:\hdc\hadoop-common-project\hadoop-common\target\test\data\creds/test.jks
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.parseAuthority(URI.java:3167)
at java.net.URI$Parser.parseHierarchical(URI.java:3078)
at java.net.URI$Parser.parse(URI.java:3034)
at java.net.URI.init(URI.java:595)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:55)
at 
org.apache.hadoop.security.alias.TestCredentialProviderFactory.testFactory(TestCredentialProviderFactory.java:58)
{code}
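
For reference, here are rough sketches of items 5 and 7 (illustrative only; 
variable names such as {{fs}}, {{keystorePath}}, {{conf}}, and {{tmpDir}} are 
placeholders, not code from the patch).

A minimal sketch of the atomic alternative from item 5, assuming the 
long-form overload of {{FileSystem#create}} that carries the permission in 
the create call itself:
{code}
// Passing the FsPermission directly to create() avoids the window in which
// the file exists with default permissions: one RPC instead of two on HDFS.
FsPermission perm = new FsPermission((short) 0600);
FSDataOutputStream out = fs.create(keystorePath, perm,
    true,                                     // overwrite
    conf.getInt("io.file.buffer.size", 4096), // buffer size
    fs.getDefaultReplication(keystorePath),
    fs.getDefaultBlockSize(keystorePath),
    null);                                    // no Progressable
{code}

And a minimal sketch of the {{Path#toUri}} approach from item 7, building the 
provider URL from a URI instead of concatenating a raw Windows path:
{code}
// Path#toUri yields '/' separators on every platform, so the resulting
// string is valid URI syntax on Windows as well as Linux.
final Path jksPath = new Path(tmpDir.toString(), "test.jks");
final String ourUrl = "jceks://file" + jksPath.toUri();
conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH, ourUrl);
{code}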


 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
 HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
 HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 

[jira] [Updated] (HADOOP-11894) Bump the version of HTrace to 3.2.0-incubating

2015-05-26 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11894:
--
Attachment: HADOOP-11894.003.patch

I attached an updated patch. Thanks, [~cmccabe].

 Bump the version of HTrace to 3.2.0-incubating
 --

 Key: HADOOP-11894
 URL: https://issues.apache.org/jira/browse/HADOOP-11894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch, 
 HADOOP-11894.003.patch


 * update pom.xml
 * update documentation
 * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
 {{addKVAnnotation(String key, String value)}} (see the sketch after this 
 list)
 * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
 {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}
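
A minimal sketch of the annotation change, assuming the htrace 
3.2.0-incubating API ({{Trace}}, {{Sampler}}, and {{TraceScope}} from 
{{org.apache.htrace}}); the span name and key/value are placeholders:
{code}
// With 3.2.0-incubating, addKVAnnotation takes Strings directly, so callers
// no longer have to convert keys and values to UTF-8 byte arrays themselves.
TraceScope scope = Trace.startSpan("exampleOp", Sampler.ALWAYS);
try {
  scope.getSpan().addKVAnnotation("path", "/tmp/example");
} finally {
  scope.close();
}
{code}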



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560229#comment-14560229
 ] 

Kengo Seki commented on HADOOP-12031:
-

The whitespace plugin seems unable to detect the trailing whitespace at line 
33 of the first patch, but I don't know why yet.

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560189#comment-14560189
 ] 

Hudson commented on HADOOP-11969:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #209 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/209/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java


 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Fix For: 2.8.0

 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch


 Right now, the thread-local factories for the encoder / decoder in Text are 
 not marked final. This means they end up with a static initializer that is 
 not guaranteed to have finished running before the members are visible. 
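
A minimal sketch of the usual fix (the standard Java 7 idiom, assumed here 
rather than taken from the patch; {{DECODER_FACTORY}} is an illustrative 
field name):
{code}
// Marking the ThreadLocal field static final guarantees its construction
// has completed before the field becomes visible to other threads.
private static final ThreadLocal<CharsetDecoder> DECODER_FACTORY =
    new ThreadLocal<CharsetDecoder>() {
      @Override
      protected CharsetDecoder initialValue() {
        // java.nio.charset types; one decoder per thread
        return Charset.forName("UTF-8").newDecoder();
      }
    };
{code}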
 Under heavy contention, this means during initialization some users will get 
 an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at 

[jira] [Created] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-26 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12036:
-

 Summary: Consolidate all of the cmake extensions in one directory
 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer


Rather than have a half-dozen redefinitions, custom extensions, etc, we should 
move them all to one location so that the cmake environment is consistent 
between the various native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560218#comment-14560218
 ] 

Larry McCay commented on HADOOP-11934:
--

Hi [~cnauroth] - thank you for the detailed review!
I will get right on it.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
 HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
 HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Comment Edited] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560337#comment-14560337
 ] 

Allen Wittenauer edited comment on HADOOP-12027 at 5/27/15 3:19 AM:


This is way simpler than what I thought:

The CMakeLists.txt needs to be changed to:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES Darwin)
  # No effect. bzip2 not built as a shared lib 
ELSE()
  set_find_shared_library_version(1)
ENDIF()
find_package(BZip2 QUIET)
{code}

and then it appears that setting env vars, etc, works as expected.  (e.g., 
BZIP2_PREFIX_DIR=/usr/local/opt/bzip2 should make cmake pick it up from 
homebrew)


was (Author: aw):
This is way simpler than what I thought:

The CMakeLists.txt needs to be changed to:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES Darwin)
  # No effect. bzip2 not built as a shared lib 
ELSE()
  set_find_shared_library_version(1)
ENDIF()
find_package(BZip2 QUIET)
{code}

and then it appears that setting env vars, etc, works as expected.

 enable bzip2 on OS X
 

 Key: HADOOP-12027
 URL: https://issues.apache.org/jira/browse/HADOOP-12027
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Allen Wittenauer

 OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
 expose the bzip2 headers+lib location to CMake like we do for snappy, 
 OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
 so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560348#comment-14560348
 ] 

Allen Wittenauer commented on HADOOP-12027:
---

OK, env var isn't needed.  It appears that our hack around forcing shared libs 
doesn't work for bzip2 on OS X.

 enable bzip2 on OS X
 

 Key: HADOOP-12027
 URL: https://issues.apache.org/jira/browse/HADOOP-12027
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Allen Wittenauer

 OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
 expose the bzip2 headers+lib location to CMake like we do for snappy, 
 OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
 so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Open  (was: Patch Available)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
 HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
 HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Attachment: HADOOP-11934-11.patch

Addresses [~cnauroth]'s review comments.

I will file a separate jira for issue #5, as suggested.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Patch Available  (was: Open)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Updated] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12011:
---
Attachment: HADOOP-12011-HDFS-7285-v3.patch

Thanks Uma for the good comments! Updated the patch accordingly.
Also moved the utilities into the rawcoder package, because they're needed 
there to dump data during the concrete encode/decode processing in the native 
coders.

 Allow to dump verbose information to ease debugging in raw erasure coders
 -

 Key: HADOOP-12011
 URL: https://issues.apache.org/jira/browse/HADOOP-12011
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-12011-HDFS-7285-v1.patch, 
 HADOOP-12011-HDFS-7285-v3.patch


 While working on native erasure coders, it was found useful to dump key 
 information such as the encode/decode matrix and the erasures for each 
 encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560433#comment-14560433
 ] 

Hadoop QA commented on HADOOP-12011:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 47s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 46s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  6s | The applied patch generated  6 
new checkstyle issues (total was 0, now 6). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  3s | Tests passed in 
hadoop-common. |
| | |  60m 26s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735520/HADOOP-12011-HDFS-7285-v3.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 1299357 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6838/console |


This message was automatically generated.

 Allow to dump verbose information to ease debugging in raw erasure coders
 -

 Key: HADOOP-12011
 URL: https://issues.apache.org/jira/browse/HADOOP-12011
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-12011-HDFS-7285-v1.patch, 
 HADOOP-12011-HDFS-7285-v3.patch


 While working on native erasure coders, it was found useful to dump key 
 information such as the encode/decode matrix and the erasures for each 
 encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12011:
---
Attachment: HADOOP-12011-HDFS-7285-v4.patch

Corrected the checkstyle-reported issues.

 Allow to dump verbose information to ease debugging in raw erasure coders
 -

 Key: HADOOP-12011
 URL: https://issues.apache.org/jira/browse/HADOOP-12011
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-12011-HDFS-7285-v1.patch, 
 HADOOP-12011-HDFS-7285-v3.patch, HADOOP-12011-HDFS-7285-v4.patch


 While working on native erasure coders, it was found useful to dump key 
 information such as the encode/decode matrix and the erasures for each 
 encode/decode call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12027:
--
Summary: enable bzip2 on OS X  (was: need a maven property for bzip2 
headers)

 enable bzip2 on OS X
 

 Key: HADOOP-12027
 URL: https://issues.apache.org/jira/browse/HADOOP-12027
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Allen Wittenauer

 OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
 expose the bzip2 headers+lib location to CMake like we do for snappy, 
 OpenSSL, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12027:
--
Description: OS X Mavericks + homebrew could compile bzip2 bits if there 
was a way to expose the bzip2 headers+lib location to CMake like we do for 
snappy, OpenSSL, etc.  Additionally, bzip2 only comes as a static library on 
Darwin, so we need to escape out the forced shared library bit.  (was: OS X 
Mavericks + homebrew could compile bzip2 bits if there was a way to expose the 
bzip2 headers+lib location to CMake like we do for snappy, OpenSSL, etc.)

 enable bzip2 on OS X
 

 Key: HADOOP-12027
 URL: https://issues.apache.org/jira/browse/HADOOP-12027
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Allen Wittenauer

 OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
 expose the bzip2 headers+lib location to CMake like we do for snappy, 
 OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
 so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11894) Bump the version of HTrace to 3.2.0-incubating

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560378#comment-14560378
 ] 

Hadoop QA commented on HADOOP-11894:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  0s | Site still builds. |
| {color:red}-1{color} | checkstyle |   3m 33s | The applied patch generated  1 
new checkstyle issues (total was 118, now 118). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 20s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 163m 11s | Tests passed in hadoop-hdfs. 
|
| | | 235m 59s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735483/HADOOP-11894.003.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / cdbd66b |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6835/console |


This message was automatically generated.

 Bump the version of HTrace to 3.2.0-incubating
 --

 Key: HADOOP-11894
 URL: https://issues.apache.org/jira/browse/HADOOP-11894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch, 
 HADOOP-11894.003.patch


 * update pom.xml
 * update documentation
 * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
 {{addKVAnnotation(String key, String value)}}
 * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
 {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Attachment: (was: HADOOP-11934-11.patch)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
 HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
 HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Status: Patch Available  (was: Open)

 [JDK8] Update jersey version to latest 1.x release
 --

 Key: HADOOP-9613
 URL: https://issues.apache.org/jira/browse/HADOOP-9613
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.4.0, 3.0.0
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: BB2015-05-TBR, maven
 Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.1.patch, 
 HADOOP-9613.2.patch, HADOOP-9613.3.patch, HADOOP-9613.patch


 Update pom.xml dependencies exposed when running mvn-rpmbuild against 
 system dependencies on Fedora 18.  
 The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Patch Available  (was: Open)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Attachment: HADOOP-11934-11.patch

Addressed review comments.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560447#comment-14560447
 ] 

Sean Busbey commented on HADOOP-12031:
--

{quote}
One concern is, this plugin depends on Python currently. I assume we can use 
Python in most build environments, but please advise if there is a more portable 
and not-so-hard way to validate XML.
{quote}

Relying on Python is problematic. If you stick with it, you'll need to detect 
and gracefully degrade when the version you need isn't present.

If we're just checking well-formedness, how about using xmllint?
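
For reference, a well-formedness check is just a parse with no DTD/schema
validation; that is what {{xmllint --noout file.xml}} does. A minimal Java
equivalent (a hypothetical sketch, not code from any patch here) would be:

{code}
import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

// Hypothetical sketch: parse with a no-op handler; a SAXParseException
// means the file is not well-formed XML.
public class WellFormedCheck {
  public static void main(String[] args) throws Exception {
    SAXParserFactory factory = SAXParserFactory.newInstance();
    factory.setValidating(false);   // well-formedness only, no validation
    factory.setNamespaceAware(true);
    factory.newSAXParser().parse(new File(args[0]), new DefaultHandler());
    System.out.println(args[0] + ": well-formed");
  }
}
{code}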

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560337#comment-14560337
 ] 

Allen Wittenauer commented on HADOOP-12027:
---

This is way simpler than what I thought:

The CMakeLists.txt needs to be changed to:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES Darwin)
  # No effect. bzip2 not built as a shared lib 
ELSE()
  set_find_shared_library_version(1)
ENDIF()
find_package(BZip2 QUIET)
{code}

and then it appears that setting env vars, etc., works as expected.

 enable bzip2 on OS X
 

 Key: HADOOP-12027
 URL: https://issues.apache.org/jira/browse/HADOOP-12027
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Allen Wittenauer

 OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
 expose the bzip2 headers+lib location to CMake like we do for snappy, 
 OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
 so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12027) enable bzip2 on OS X

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560337#comment-14560337
 ] 

Allen Wittenauer edited comment on HADOOP-12027 at 5/27/15 3:35 AM:


This is way simpler than what I thought:

The CMakeLists.txt needs to be changed to:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES Darwin)
  # bzip2 detection fails on OS X for some reason here
ELSE()
  set_find_shared_library_version(1)
ENDIF()
find_package(BZip2 QUIET)
{code}

and then it appears that setting env vars, etc., works as expected (e.g., 
BZIP2_PREFIX_DIR=/usr/local/opt/bzip2 should make CMake pick it up from 
Homebrew).


was (Author: aw):
This is way simpler than what I thought:

The CMakeLists.txt needs to be changed to:
{code}
SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
IF(${CMAKE_SYSTEM_NAME} MATCHES Darwin)
  # No effect. bzip2 not built as a shared lib 
ELSE()
  set_find_shared_library_version(1)
ENDIF()
find_package(BZip2 QUIET)
{code}

and then it appears that setting env vars, etc., works as expected (e.g., 
BZIP2_PREFIX_DIR=/usr/local/opt/bzip2 should make CMake pick it up from 
Homebrew).

 enable bzip2 on OS X
 

 Key: HADOOP-12027
 URL: https://issues.apache.org/jira/browse/HADOOP-12027
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Allen Wittenauer

 OS X Mavericks + homebrew could compile bzip2 bits if there was a way to 
 expose the bzip2 headers+lib location to CMake like we do for snappy, 
 OpenSSL, etc.  Additionally, bzip2 only comes as a static library on Darwin, 
 so we need to escape out the forced shared library bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11952) Native compilation on Solaris fails on Yarn due to use of FTS

2015-05-26 Thread Malcolm Kavalsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560381#comment-14560381
 ] 

Malcolm Kavalsky commented on HADOOP-11952:
---

I have already ported it to the ftw library (it works on Hadoop 2.2, on both 
SPARC and Intel).

I'll send you the code.




 Native compilation on Solaris fails on Yarn due to use of FTS
 -

 Key: HADOOP-11952
 URL: https://issues.apache.org/jira/browse/HADOOP-11952
 Project: Hadoop Common
  Issue Type: Sub-task
 Environment: Solaris 11.2
Reporter: Malcolm Kavalsky
Assignee: Alan Burlison
   Original Estimate: 24h
  Remaining Estimate: 24h

 Compiling the Yarn Node Manager results in an "fts not found" error. On Solaris 
 we have an alternative, ftw, with similar functionality.
 This is isolated to a single file, container-executor.c.
 Note that this will just fix the compilation error. A more serious issue is 
 that Solaris does not support cgroups as Linux does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Open  (was: Patch Available)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Attachment: HADOOP-9613.3.patch

Updating the patch to fix test failures.

 [JDK8] Update jersey version to latest 1.x release
 --

 Key: HADOOP-9613
 URL: https://issues.apache.org/jira/browse/HADOOP-9613
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 3.0.0, 2.4.0
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: BB2015-05-TBR, maven
 Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.1.patch, 
 HADOOP-9613.2.patch, HADOOP-9613.3.patch, HADOOP-9613.patch


 Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
 system dependencies on Fedora 18.  
 The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560420#comment-14560420
 ] 

Hadoop QA commented on HADOOP-11934:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 12s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 48s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 15s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 59s | Tests passed in 
hadoop-common. |
| | |  64m  6s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735519/HADOOP-11934-11.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6837/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6837/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6837/console |


This message was automatically generated.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
 HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
 HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-26 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560421#comment-14560421
 ] 

Larry McCay commented on HADOOP-11934:
--

Ignore those last results - an incorrectly run test-patch.sh messed up the 
source, and I regenerated the patch.


 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
 HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
 HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 

[jira] [Updated] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11847:
---
Attachment: HADOOP-11847-HDFS-7285-v9.patch

Thanks Yi for the additional review.
Updated the patch to address the comment.

 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-HDFS-7285-v7.patch, 
 HADOOP-11847-HDFS-7285-v8.patch, HADOOP-11847-HDFS-7285-v9.patch, 
 HADOOP-11847-v1.patch, HADOOP-11847-v2.patch


 This is to enhance raw erasure coder to allow only reading least required 
 inputs while decoding. It will also refine and document the relevant APIs for 
 better understanding and usage. When using least required inputs, it may add 
 computing overhead but will possibly outperform overall since less network 
 traffic and disk I/O are involved.
 This is something we planned to do but were just reminded of by [~zhz]'s 
 question raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
 I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.
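 As a hedged illustration of the calling convention in question (not code from 
 the patch; the class layout and buffer preparation are assumed), the inputs 
 array stays aligned to unit indexes, with null entries for units that are 
 erased or deliberately not read:
 {code}
 import java.io.IOException;
 import java.nio.ByteBuffer;
 import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;

 // Hypothetical sketch for RS(6,3): 9 units total, block #2 erased,
 // blocks 0, 1, 3, 4, 5 and parity block 8 read (exactly k = 6 inputs).
 public class DecodeSketch {
   static void repairBlock2(RawErasureDecoder decoder, ByteBuffer[] read)
       throws IOException {
     ByteBuffer[] inputs = new ByteBuffer[9];
     inputs[0] = read[0]; inputs[1] = read[1];   // data units read
     inputs[3] = read[2]; inputs[4] = read[3];
     inputs[5] = read[4];
     inputs[8] = read[5];                        // one parity unit read
     // inputs[2], inputs[6], inputs[7] stay null (erased or not read).

     int[] erasedIndexes = { 2 };                // reconstruct unit #2 only
     ByteBuffer[] outputs = { ByteBuffer.allocate(read[0].remaining()) };
     decoder.decode(inputs, erasedIndexes, outputs);
   }
 }
 {code}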



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11847:
---
Hadoop Flags: Reviewed

 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Fix For: HDFS-7285

 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-HDFS-7285-v7.patch, 
 HADOOP-11847-HDFS-7285-v8.patch, HADOOP-11847-HDFS-7285-v9.patch, 
 HADOOP-11847-v1.patch, HADOOP-11847-v2.patch


 This is to enhance raw erasure coder to allow only reading least required 
 inputs while decoding. It will also refine and document the relevant APIs for 
 better understanding and usage. When using least required inputs, it may add 
 computing overhead but will possibly outperform overall since less network 
 traffic and disk I/O are involved.
 This is something we planned to do but were just reminded of by [~zhz]'s 
 question raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
 I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11847:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

It was just committed to the branch. Thanks [~hitliuyi] and [~zhz] for the 
great review and comments!

 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-HDFS-7285-v7.patch, 
 HADOOP-11847-HDFS-7285-v8.patch, HADOOP-11847-HDFS-7285-v9.patch, 
 HADOOP-11847-v1.patch, HADOOP-11847-v2.patch


 This is to enhance raw erasure coder to allow only reading least required 
 inputs while decoding. It will also refine and document the relevant APIs for 
 better understanding and usage. When using least required inputs, it may add 
 computing overhead but will possibly outperform overall since less network 
 traffic and disk I/O are involved.
 This is something we planned to do but were just reminded of by [~zhz]'s 
 question raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
 I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11847:
---
Fix Version/s: HDFS-7285

 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Fix For: HDFS-7285

 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-HDFS-7285-v7.patch, 
 HADOOP-11847-HDFS-7285-v8.patch, HADOOP-11847-HDFS-7285-v9.patch, 
 HADOOP-11847-v1.patch, HADOOP-11847-v2.patch


 This is to enhance raw erasure coder to allow only reading least required 
 inputs while decoding. It will also refine and document the relevant APIs for 
 better understanding and usage. When using least required inputs, it may add 
 computing overhead but will possibly outperform overall since less network 
 traffic and disk I/O are involved.
 This is something we planned to do but were just reminded of by [~zhz]'s 
 question raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
 I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-26 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558731#comment-14558731
 ] 

Yi Liu commented on HADOOP-11847:
-

Kai, the patch looks good. One comment, and +1 after addressing it:
In {{RSRawDecoder#doDecode}}:

{code}
+for (int bufferIdx = 0, i = 0; i < erasedOrNotToReadIndexes.length; i++) {
+  if (adjustedDirectBufferOutputsParameter[i] == null) {
+ByteBuffer buffer = checkGetDirectBuffer(bufferIdx, dataLen);
+buffer.limit(dataLen);
+adjustedDirectBufferOutputsParameter[i] = resetBuffer(buffer);
+bufferIdx++;
+  }
+}
{code}
Here, we need to set buffer position to 0.
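
As a hedged illustration of the requested fix (a simplified stand-in, not the
committed code): a cached buffer reused across decode calls must be rewound
before its limit is set, otherwise writes start at a stale position left by
the previous call.

{code}
import java.nio.ByteBuffer;

// Hypothetical sketch: prepare a reused output buffer for one decode call.
public class BufferResetSketch {
  static ByteBuffer prepareOutput(ByteBuffer cached, int dataLen) {
    cached.position(0);     // the missing step: rewind to the start
    cached.limit(dataLen);  // bound the writable region to this call
    return cached;
  }
}
{code}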



 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-HDFS-7285-v7.patch, 
 HADOOP-11847-HDFS-7285-v8.patch, HADOOP-11847-v1.patch, HADOOP-11847-v2.patch


 This is to enhance raw erasure coder to allow only reading least required 
 inputs while decoding. It will also refine and document the relevant APIs for 
 better understanding and usage. When using least required inputs, it may add 
 computing overhead but will possibly outperform overall since less network 
 traffic and disk I/O are involved.
 This is something we planned to do but were just reminded of by [~zhz]'s 
 question raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
 I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-26 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558742#comment-14558742
 ] 

Yi Liu commented on HADOOP-11847:
-

+1, thanks Kai

 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-HDFS-7285-v6.patch, HADOOP-11847-HDFS-7285-v7.patch, 
 HADOOP-11847-HDFS-7285-v8.patch, HADOOP-11847-HDFS-7285-v9.patch, 
 HADOOP-11847-v1.patch, HADOOP-11847-v2.patch


 This is to enhance raw erasure coder to allow only reading least required 
 inputs while decoding. It will also refine and document the relevant APIs for 
 better understanding and usage. When using least required inputs, it may add 
 computing overhead but will possibly outperform overall since less network 
 traffic and disk I/O are involved.
 This is something we planned to do but were just reminded of by [~zhz]'s 
 question raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
 I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558770#comment-14558770
 ] 

Akira AJISAKA commented on HADOOP-8751:
---

+1, committing this.

 NPE in Token.toString() when Token is constructed using null identifier
 ---

 Key: HADOOP-8751
 URL: https://issues.apache.org/jira/browse/HADOOP-8751
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vlad Rozov
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
 HADOOP-8751-02.patch, HADOOP-8751-03.patch, HADOOP-8751.patch


 The Token constructor allows null to be passed, leading to an NPE in 
 Token.toString(). A simple fix is to check for null in the constructor and use 
 empty byte arrays.
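 A minimal sketch of that fix (a simplified stand-in, not the actual Token 
 class or any attached patch):
 {code}
 // Hypothetical sketch: substitute empty arrays for null arguments so that
 // toString() never dereferences a null array.
 class TokenSketch {
   private final byte[] identifier;
   private final byte[] password;

   TokenSketch(byte[] identifier, byte[] password) {
     this.identifier = (identifier == null) ? new byte[0] : identifier;
     this.password   = (password == null) ? new byte[0] : password;
   }
 }
 {code}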



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558843#comment-14558843
 ] 

Akira AJISAKA commented on HADOOP-10105:


Marked YARN-3217 as an incompatible change. Should we revert it from branch-2 and 
branch-2.7?

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10296) null check for requestContentLen is wrong in SwiftRestClient#buildException()

2015-05-26 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558769#comment-14558769
 ] 

kanaka kumar avvaru commented on HADOOP-10296:
--

Planning to rebase the patch on the latest trunk code. [~rpalamut], if you 
would like to continue work on this JIRA, please feel free to assign it back 
to yourself.

 null check for requestContentLen is wrong in SwiftRestClient#buildException()
 -

 Key: HADOOP-10296
 URL: https://issues.apache.org/jira/browse/HADOOP-10296
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Ted Yu
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR, newbie, patch
 Attachments: HADOOP-10296.1.patch


 {code}
 if (requestContentLen!=null) {
   errorText.append(" available "
 ).append(availableContentRange.getValue());
 }
 {code}
 The null check should be for availableContentRange
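 A hedged sketch of the implied one-line fix, guarding on the value that is 
 actually dereferenced:
 {code}
 if (availableContentRange != null) {
   errorText.append(" available ").append(availableContentRange.getValue());
 }
 {code}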



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8751:
--
Labels:   (was: BB2015-05-TBR)

 NPE in Token.toString() when Token is constructed using null identifier
 ---

 Key: HADOOP-8751
 URL: https://issues.apache.org/jira/browse/HADOOP-8751
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vlad Rozov
Assignee: kanaka kumar avvaru
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
 HADOOP-8751-02.patch, HADOOP-8751-03.patch, HADOOP-8751.patch


 The Token constructor allows null to be passed, leading to an NPE in 
 Token.toString(). A simple fix is to check for null in the constructor and use 
 empty byte arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8751:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~kanaka] for the contribution.

 NPE in Token.toString() when Token is constructed using null identifier
 ---

 Key: HADOOP-8751
 URL: https://issues.apache.org/jira/browse/HADOOP-8751
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vlad Rozov
Assignee: kanaka kumar avvaru
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
 HADOOP-8751-02.patch, HADOOP-8751-03.patch, HADOOP-8751.patch


 The Token constructor allows null to be passed, leading to an NPE in 
 Token.toString(). A simple fix is to check for null in the constructor and use 
 empty byte arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7266) Deprecate metrics v1

2015-05-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-7266:
--
Status: Patch Available  (was: Open)

The v2 patch fixes javac warnings.

 Deprecate metrics v1
 

 Key: HADOOP-7266
 URL: https://issues.apache.org/jira/browse/HADOOP-7266
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.8.0
Reporter: Luke Lu
Assignee: Akira AJISAKA
Priority: Blocker
 Attachments: HADOOP-7266.001.patch, HADOOP-7266.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7266) Deprecate metrics v1

2015-05-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-7266:
--
Attachment: HADOOP-7266.002.patch

 Deprecate metrics v1
 

 Key: HADOOP-7266
 URL: https://issues.apache.org/jira/browse/HADOOP-7266
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.8.0
Reporter: Luke Lu
Assignee: Akira AJISAKA
Priority: Blocker
 Attachments: HADOOP-7266.001.patch, HADOOP-7266.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7266) Deprecate metrics v1

2015-05-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-7266:
--
Attachment: HADOOP-7266.002.patch

 Deprecate metrics v1
 

 Key: HADOOP-7266
 URL: https://issues.apache.org/jira/browse/HADOOP-7266
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.8.0
Reporter: Luke Lu
Assignee: Akira AJISAKA
Priority: Blocker
 Attachments: HADOOP-7266.001.patch, HADOOP-7266.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7266) Deprecate metrics v1

2015-05-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-7266:
--
Attachment: (was: HADOOP-7266.002.patch)

 Deprecate metrics v1
 

 Key: HADOOP-7266
 URL: https://issues.apache.org/jira/browse/HADOOP-7266
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.8.0
Reporter: Luke Lu
Assignee: Akira AJISAKA
Priority: Blocker
 Attachments: HADOOP-7266.001.patch, HADOOP-7266.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558778#comment-14558778
 ] 

Hudson commented on HADOOP-8751:


FAILURE: Integrated in Hadoop-trunk-Commit #7900 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7900/])
HADOOP-8751. NPE in Token.toString() when Token is constructed using null 
identifier. Contributed by kanaka kumar avvaru. (aajisaka: rev 
56996a685e6201cb186cea866d22418289174574)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestDelegationToken.java


 NPE in Token.toString() when Token is constructed using null identifier
 ---

 Key: HADOOP-8751
 URL: https://issues.apache.org/jira/browse/HADOOP-8751
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vlad Rozov
Assignee: kanaka kumar avvaru
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
 HADOOP-8751-02.patch, HADOOP-8751-03.patch, HADOOP-8751.patch


 The Token constructor allows null to be passed, leading to an NPE in 
 Token.toString(). A simple fix is to check for null in the constructor and use 
 empty byte arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558858#comment-14558858
 ] 

Tsuyoshi Ozawa commented on HADOOP-10105:
-

hadoop-yarn-server-web-proxy is not user-facing. I think we can drop the 
dependency safely regardless of its incompatibility. I confirmed that 
MAPREDUCE-6264 is not an incompatible change since it doesn't drop any 
dependencies.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10296) null check for requestContentLen is wrong in SwiftRestClient#buildException()

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru reassigned HADOOP-10296:


Assignee: kanaka kumar avvaru

 null check for requestContentLen is wrong in SwiftRestClient#buildException()
 -

 Key: HADOOP-10296
 URL: https://issues.apache.org/jira/browse/HADOOP-10296
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Ted Yu
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR, newbie, patch
 Attachments: HADOOP-10296.1.patch


 {code}
 if (requestContentLen!=null) {
   errorText.append(" available "
 ).append(availableContentRange.getValue());
 }
 {code}
 The null check should be for availableContentRange



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558825#comment-14558825
 ] 

Akira AJISAKA commented on HADOOP-10105:


I'm thinking YARN-3217 is an incompatible change and MAPREDUCE-6264 is not; 
YARN-3217 drops the httpclient dependency.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558857#comment-14558857
 ] 

Tsuyoshi Ozawa commented on HADOOP-10105:
-

hadoop-yarn-server-web-proxy is not user-facing. I think we can drop the 
dependency safely regardless of its incompatibility. I confirmed that 
MAPREDUCE-6264 is not an incompatible change since it doesn't drop any 
dependencies.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10296) null check for requestContentLen is wrong in SwiftRestClient#buildException()

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558805#comment-14558805
 ] 

Hadoop QA commented on HADOOP-10296:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 43s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 19s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 15s | Tests passed in 
hadoop-openstack. |
| | |  35m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12668695/HADOOP-10296.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 39077db |
| hadoop-openstack test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6822/artifact/patchprocess/testrun_hadoop-openstack.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6822/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6822/console |


This message was automatically generated.

 null check for requestContentLen is wrong in SwiftRestClient#buildException()
 -

 Key: HADOOP-10296
 URL: https://issues.apache.org/jira/browse/HADOOP-10296
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Ted Yu
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR, newbie, patch
 Attachments: HADOOP-10296.1.patch


 {code}
 if (requestContentLen!=null) {
   errorText.append(" available "
 ).append(availableContentRange.getValue());
 }
 {code}
 The null check should be for availableContentRange



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558819#comment-14558819
 ] 

Tsuyoshi Ozawa commented on HADOOP-10105:
-

MAPREDUCE-6264 and YARN-3217 have been committed already. Should we mark them as 
incompatible changes? In particular, I think MAPREDUCE-6264 is a user-facing 
change.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-7266) Deprecate metrics v1

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558881#comment-14558881
 ] 

Hadoop QA commented on HADOOP-7266:
---

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735279/HADOOP-7266.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 56996a6 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6823/console |


This message was automatically generated.

 Deprecate metrics v1
 

 Key: HADOOP-7266
 URL: https://issues.apache.org/jira/browse/HADOOP-7266
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.8.0
Reporter: Luke Lu
Assignee: Akira AJISAKA
Priority: Blocker
 Attachments: HADOOP-7266.001.patch, HADOOP-7266.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558889#comment-14558889
 ] 

Akira AJISAKA commented on HADOOP-10105:


bq. hadoop-yarn-server-web-proxy is not user-facing.
There is a class ({{AmIpFilter}}) marked as {{@InterfaceAudience.Public}} in 
the module. If a user uses this class and relies on httpclient, the application 
can fail.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9995) Consistent log severity level guards and statements

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HADOOP-9995:

Target Version/s:   (was: 2.1.1-beta)
  Status: Open  (was: Patch Available)

Planning to upload a patch against the latest trunk code base, as the current 
patch file is too old.

 Consistent log severity level guards and statements 
 

 Key: HADOOP-9995
 URL: https://issues.apache.org/jira/browse/HADOOP-9995
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jackie Chang
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9995.patch


 Developers use logs for in-house debugging. These log statements are later 
 demoted to less severe levels and are usually guarded by checks matching 
 their severity levels. However, we do see inconsistencies in trunk. A log 
 statement like 
 {code}
if (LOG.isDebugEnabled()) {
  LOG.info("Assigned container (" + allocated + ")");
}
 {code}
 doesn't make much sense, because the INFO-level message is only printed when 
 DEBUG logging is enabled. Previous issues have tried to correct this 
 inconsistency; I am proposing a comprehensive correction over trunk.
 Doug Cutting pointed it out in HADOOP-312: 
 https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
 HDFS-1611 also corrected this inconsistency.
 This could have been avoided by switching from log4j to slf4j's {} format, 
 as in CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
 code and slightly better performance.
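
 For illustration, a minimal standalone sketch contrasting the two styles (not 
 from any attached patch; the class, logger, and {{allocated}} variable here 
 are hypothetical):
 {code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class GuardExample {
  private static final Logger LOG = LoggerFactory.getLogger(GuardExample.class);

  void report(Object allocated) {
    // log4j style: the guard level must match the statement level, or the
    // message silently ends up gated by the wrong severity.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Assigned container (" + allocated + ")");
    }
    // slf4j style: the {} placeholder defers argument formatting until the
    // level is known to be enabled, so no explicit guard is needed here.
    LOG.debug("Assigned container ({})", allocated);
  }
}
 {code}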



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-9995) Consistent log severity level guards and statements

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru reassigned HADOOP-9995:
---

Assignee: kanaka kumar avvaru

 Consistent log severity level guards and statements 
 

 Key: HADOOP-9995
 URL: https://issues.apache.org/jira/browse/HADOOP-9995
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jackie Chang
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9995.patch


 Developers use logs for in-house debugging. These log statements are later 
 demoted to less severe levels and are usually guarded by checks matching 
 their severity levels. However, we do see inconsistencies in trunk. A log 
 statement like 
 {code}
if (LOG.isDebugEnabled()) {
  LOG.info("Assigned container (" + allocated + ")");
}
 {code}
 doesn't make much sense, because the INFO-level message is only printed when 
 DEBUG logging is enabled. Previous issues have tried to correct this 
 inconsistency; I am proposing a comprehensive correction over trunk.
 Doug Cutting pointed it out in HADOOP-312: 
 https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
 HDFS-1611 also corrected this inconsistency.
 This could have been avoided by switching from log4j to slf4j's {} format, 
 as in CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
 code and slightly better performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-7266) Deprecate metrics v1

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558945#comment-14558945
 ] 

Hadoop QA commented on HADOOP-7266:
---

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 41s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:red}-1{color} | javac |   7m 36s | The applied patch generated  59  
additional warning messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 16s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  4s | The patch has 3  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  9s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | mapreduce tests |   1m 35s | Tests passed in 
hadoop-mapreduce-client-core. |
| {color:green}+1{color} | tools/hadoop tests |   6m  9s | Tests passed in 
hadoop-streaming. |
| | |  71m 17s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735287/HADOOP-7266.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 56996a6 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6824/artifact/patchprocess/diffJavacWarnings.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6824/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6824/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-mapreduce-client-core test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6824/artifact/patchprocess/testrun_hadoop-mapreduce-client-core.txt
 |
| hadoop-streaming test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6824/artifact/patchprocess/testrun_hadoop-streaming.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6824/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6824/console |


This message was automatically generated.

 Deprecate metrics v1
 

 Key: HADOOP-7266
 URL: https://issues.apache.org/jira/browse/HADOOP-7266
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.8.0
Reporter: Luke Lu
Assignee: Akira AJISAKA
Priority: Blocker
 Attachments: HADOOP-7266.001.patch, HADOOP-7266.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-9695) double values not Double values

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru reassigned HADOOP-9695:
---

Assignee: kanaka kumar avvaru

 double values not Double values
 ---

 Key: HADOOP-9695
 URL: https://issues.apache.org/jira/browse/HADOOP-9695
 Project: Hadoop Common
  Issue Type: Bug
Reporter: DeepakVohra
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9695.patch


 The class description for
 org.apache.hadoop.io 
 Class DoubleWritable
 is "Writable for Double values."
 Should be "Writable for double values."



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559004#comment-14559004
 ] 

Hudson commented on HADOOP-8751:


FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #208 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/208/])
HADOOP-8751. NPE in Token.toString() when Token is constructed using null 
identifier. Contributed by kanaka kumar avvaru. (aajisaka: rev 
56996a685e6201cb186cea866d22418289174574)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NPE in Token.toString() when Token is constructed using null identifier
 ---

 Key: HADOOP-8751
 URL: https://issues.apache.org/jira/browse/HADOOP-8751
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vlad Rozov
Assignee: kanaka kumar avvaru
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
 HADOOP-8751-02.patch, HADOOP-8751-03.patch, HADOOP-8751.patch


 The Token constructor allows null to be passed, leading to an NPE in 
 Token.toString(). A simple fix is to check for null in the constructor and 
 use empty byte arrays.
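
 For reference, a hypothetical standalone sketch of that fix (illustrative 
 only, not the attached patch):
 {code}
public class NullSafeToken {
  private final byte[] identifier;
  private final byte[] password;

  public NullSafeToken(byte[] identifier, byte[] password) {
    // Substitute empty arrays for nulls so toString() and serialization
    // code never dereference a null field.
    this.identifier = (identifier == null) ? new byte[0] : identifier;
    this.password = (password == null) ? new byte[0] : password;
  }

  @Override
  public String toString() {
    return "identifier bytes: " + identifier.length
        + ", password bytes: " + password.length;
  }
}
 {code}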



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559035#comment-14559035
 ] 

Hadoop QA commented on HADOOP-12030:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6825/console in case of 
problems.

 test-patch should only report on newly introduced findbugs warnings.
 

 Key: HADOOP-12030
 URL: https://issues.apache.org/jira/browse/HADOOP-12030
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: test-patch
 Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch


 findbugs is currently reporting the total number of findbugs warnings for 
 touched modules rather than just newly introduced bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12030:
-
Status: Patch Available  (was: Open)

 test-patch should only report on newly introduced findbugs warnings.
 

 Key: HADOOP-12030
 URL: https://issues.apache.org/jira/browse/HADOOP-12030
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: test-patch
 Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch


 findbugs is currently reporting the total number of findbugs warnings for 
 touched modules rather than just newly introduced bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9695) double values not Double values

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HADOOP-9695:

Assignee: (was: kanaka kumar avvaru)

 double values not Double values
 ---

 Key: HADOOP-9695
 URL: https://issues.apache.org/jira/browse/HADOOP-9695
 Project: Hadoop Common
  Issue Type: Bug
Reporter: DeepakVohra
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9695.patch


 The class description for
 org.apache.hadoop.io 
 Class DoubleWritable
 is "Writable for Double values."
 Should be "Writable for double values."



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559018#comment-14559018
 ] 

Hudson commented on HADOOP-8751:


SUCCESS: Integrated in Hadoop-Yarn-trunk #939 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/939/])
HADOOP-8751. NPE in Token.toString() when Token is constructed using null 
identifier. Contributed by kanaka kumar avvaru. (aajisaka: rev 
56996a685e6201cb186cea866d22418289174574)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NPE in Token.toString() when Token is constructed using null identifier
 ---

 Key: HADOOP-8751
 URL: https://issues.apache.org/jira/browse/HADOOP-8751
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vlad Rozov
Assignee: kanaka kumar avvaru
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
 HADOOP-8751-02.patch, HADOOP-8751-03.patch, HADOOP-8751.patch


 The Token constructor allows null to be passed, leading to an NPE in 
 Token.toString(). A simple fix is to check for null in the constructor and 
 use empty byte arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12030:
-
Attachment: HADOOP-12030.2.patch

-02
  * moves findbugs back into test-patch proper
  * adds opt-in failure of pre-patch when there are extant warnings

 test-patch should only report on newly introduced findbugs warnings.
 

 Key: HADOOP-12030
 URL: https://issues.apache.org/jira/browse/HADOOP-12030
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: test-patch
 Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch


 findbugs is currently reporting the total number of findbugs warnings for 
 touched modules rather than just newly introduced bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559023#comment-14559023
 ] 

Tsuyoshi Ozawa commented on HADOOP-10105:
-

[~ajisakaa], you're right. But I don't think we should revert the change in 
this case - the Hadoop Compatibility guideline doesn't mention any policies 
about dependency upgrades. Application developers who use AmIpFilter can avoid 
the issue just by adding the old httpclient version as a dependency. How about 
writing it in a release note of YARN-3217?



 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-26 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12030:
-
Status: Open  (was: Patch Available)

 test-patch should only report on newly introduced findbugs warnings.
 

 Key: HADOOP-12030
 URL: https://issues.apache.org/jira/browse/HADOOP-12030
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: test-patch
 Attachments: HADOOP-12030.1.patch


 findbugs is currently reporting the total number of findbugs warnings for 
 touched modules rather than just newly introduced bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559031#comment-14559031
 ] 

Sean Busbey commented on HADOOP-12030:
--

I agree that it's important to stay on top of findbugs problems, but it's also 
important that we properly distinguish problems that existed before the patch 
from problems introduced by the patch. Otherwise folks will think of our 
warnings as false positives and stop listening to us.

I'm also torn on failing pre-patch over findbugs. Nightlies are really the 
place to flag issues with the main code base, though I get the advantage of how 
much more visible precommit QA is. Also, projects that want to be strict about 
findbugs status could always tie things into their main build so that pre-patch 
javac would fail with findbugs warnings anyway (and they could even do this 
just in QA by using the project patch process profile).

Anyhow, I checked this version against HBase with HBASE-13716, with and without 
the CLI option, and it behaved correctly in both cases, either just saying 
everything is fine or reporting that before this patch there are 60-ish 
findbugs issues.

 test-patch should only report on newly introduced findbugs warnings.
 

 Key: HADOOP-12030
 URL: https://issues.apache.org/jira/browse/HADOOP-12030
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: test-patch
 Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch


 findbugs is currently reporting the total number of findbugs warnings for 
 touched modules rather than just newly introduced bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559036#comment-14559036
 ] 

Hadoop QA commented on HADOOP-12030:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 14s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  9s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 27s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735307/HADOOP-12030.2.patch |
| Optional Tests | shellcheck |
| git revision | trunk / 9a3d617 |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6825/console |


This message was automatically generated.

 test-patch should only report on newly introduced findbugs warnings.
 

 Key: HADOOP-12030
 URL: https://issues.apache.org/jira/browse/HADOOP-12030
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: test-patch
 Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch


 findbugs is currently reporting the total number of findbugs warnings for 
 touched modules rather than just newly introduced bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-05-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559098#comment-14559098
 ] 

Akira AJISAKA commented on HADOOP-10105:


Thanks [~ozawa]. I wrote that on YARN-3217.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12001) Limiting LDAP search conflicts with posixGroup addition

2015-05-26 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12001:
---
Affects Version/s: 2.7.0

 Limiting LDAP search conflicts with posixGroup addition
 ---

 Key: HADOOP-12001
 URL: https://issues.apache.org/jira/browse/HADOOP-12001
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.7.0, 2.8.0
Reporter: Patrick White
 Attachments: HADOOP-12001.patch


 In HADOOP-9477, posixGroup support was added.
 In HADOOP-10626, a limit on the returned attributes was added to speed up 
 queries.
 Limiting the attributes can break the SEARCH_CONTROLS object in the context 
 of the isPosix block, since it only asks LDAP for the groupNameAttr.
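
 A minimal sketch of the conflict (hypothetical; the attribute names are 
 illustrative, and the real code reads the group name and posix attributes 
 from configuration):
 {code}
import javax.naming.directory.SearchControls;

public class LdapAttrLimitSketch {
  public static void main(String[] args) {
    SearchControls controls = new SearchControls();
    // HADOOP-10626-style optimization: only return the group name attribute.
    controls.setReturningAttributes(new String[] { "cn" });
    // The posixGroup path also needs attributes such as gidNumber, so the
    // limited attribute list above silently drops data the isPosix block
    // expects to read.
  }
}
 {code}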



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11982) Inconsistency in handling URI without authority

2015-05-26 Thread Kannan Rajah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559562#comment-14559562
 ] 

Kannan Rajah commented on HADOOP-11982:
---

Does anyone have a comment on this issue? Is it OK to create a patch that 
defaults to empty authority?

 Inconsistency in handling URI without authority
 ---

 Key: HADOOP-11982
 URL: https://issues.apache.org/jira/browse/HADOOP-11982
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Kannan Rajah
Assignee: Kannan Rajah

 There are some inconsistencies coming from the Hadoop class Path.java. This 
 seems to have been the behavior for a very long time. I am not sure about the 
 implications of correcting it, so I want to get some opinions.
 When you use makeQualified, a NULL authority is converted into an empty 
 authority. When the authority is NULL, toString() will not contain the "//" 
 before the actual absolute path; otherwise it will. There are ecosystem 
 components that may or may not use makeQualified consistently. We have hit 
 cases where Path.toString() is used as a key in a hashmap, so lookups start 
 failing when the entry was stored with a Path constructed using makeQualified 
 but the lookup key was not.
 Proposal: Can we always default to an empty authority when it is NULL?
 -
 Examples
 ---
 Path p = new Path("hdfs:/a/b/c");
 p.toString() -> hdfs:/a/b/c   (a single slash)
 p.makeQualified(fs);
 p.toString() -> hdfs:///a/b/c (three slashes)
 -
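
 A runnable sketch of the behavior above (assuming a Hadoop client on the 
 classpath and fs.defaultFS pointing at an HDFS namenode; the class name is 
 illustrative):
 {code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PathAuthorityDemo {
  public static void main(String[] args) throws Exception {
    Path p = new Path("hdfs:/a/b/c");
    // Null authority: prints hdfs:/a/b/c (single slash).
    System.out.println(p);
    FileSystem fs = FileSystem.get(new Configuration());
    // After qualification the authority becomes empty, giving the
    // three-slash form from the example above, e.g. hdfs:///a/b/c.
    System.out.println(p.makeQualified(fs));
  }
}
 {code}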



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12034) Wrong comment for the filefilter function in test-patch checkstyle plugin

2015-05-26 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12034:
---

 Summary: Wrong comment for the filefilter function in test-patch 
checkstyle plugin
 Key: HADOOP-12034
 URL: https://issues.apache.org/jira/browse/HADOOP-12034
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Kengo Seki
Priority: Minor


This comment is attached to the checkstyle_filefilter function, but it 
actually describes shellcheck_filefilter.

{code}
# if it ends in an explicit .sh, then this is shell code.
# if it doesn't have an extension, we assume it is shell code too
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11985) Improve Solaris support in Hadoop

2015-05-26 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559591#comment-14559591
 ] 

Alan Burlison commented on HADOOP-11985:


Solaris-related changes to YARN and HDFS are covered under the two top-level 
issues:

YARN-3719 Improve Solaris support in YARN
HDFS-8478 Improve Solaris support in HDFS

 Improve Solaris support in Hadoop
 -

 Key: HADOOP-11985
 URL: https://issues.apache.org/jira/browse/HADOOP-11985
 Project: Hadoop Common
  Issue Type: New Feature
  Components: build, conf
Affects Versions: 2.7.0
 Environment: Solaris x86, Solaris sparc
Reporter: Alan Burlison
Assignee: Alan Burlison
  Labels: solaris

 At present the Hadoop native components aren't fully supported on Solaris 
 primarily due to differences between Linux and Solaris. This top-level task 
 will be used to group together both existing and new issues related to this 
 work. A second goal is to improve Hadoop performance on Solaris wherever 
 possible.
 Steve Loughran suggested a top-level JIRA was the best way to manage the work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559608#comment-14559608
 ] 

Ivan Mitic commented on HADOOP-12033:
-

If I had to guess (and I can only guess at this time :)) I'd say this is 
something similar to the root cause of HADOOP-8423, where, in the case of a 
transient error (e.g. a networking error), some state gets out of sync and 
results in a task failure.

 Reducer task failure with java.lang.NoClassDefFoundError: 
 Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
 ---

 Key: HADOOP-12033
 URL: https://issues.apache.org/jira/browse/HADOOP-12033
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic

 We have noticed intermittent reducer task failures with the below exception:
 {code}
 Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
 shuffle in fetcher#9 at 
 org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
 org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 javax.security.auth.Subject.doAs(Subject.java:415) at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
 java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
  Method) at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
  at 
 org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
  at 
 org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
 org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
 Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
 sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
 {code}
 Usually, the reduce task succeeds on retry. 
 Some of the symptoms are similar to HADOOP-8423, but this fix is already 
 included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11807) add a lint mode to releasedocmaker

2015-05-26 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-11807:

Attachment: HADOOP-11807.004.patch

Thank you [~sekikn] for reviewing the patch. I attached a new patch addressing 
your comment.

 add a lint mode to releasedocmaker
 --

 Key: HADOOP-11807
 URL: https://issues.apache.org/jira/browse/HADOOP-11807
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: ramtin
Priority: Minor
 Attachments: HADOOP-11807.001.patch, HADOOP-11807.002.patch, 
 HADOOP-11807.003.patch, HADOOP-11807.004.patch


 * check for missing components (error)
 * check for missing assignee (error)
 * check for common version problems (warning)
 * add an error message for missing release notes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-26 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12035:
---

 Summary: shellcheck plugin displays a wrong version potentially
 Key: HADOOP-12035
 URL: https://issues.apache.org/jira/browse/HADOOP-12035
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Kengo Seki
Priority: Trivial


In dev-support/test-patch.d/shellcheck.sh:

{code}
SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print $NF}')
{code}

it should be 

{code}
SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11984:
---
Attachment: HADOOP-11984.010.patch

Patch v010 fixes a problem in the last experiment that I was trying.

 Enable parallel JUnit tests in pre-commit.
 --

 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
 HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
 HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
 HADOOP-11984.009.patch, HADOOP-11984.010.patch


 HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
 for running JUnit tests in multiple concurrent processes.  This issue 
 proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559499#comment-14559499
 ] 

Colin Patrick McCabe commented on HADOOP-11937:
---

test-patch already fails on OS X.  That's why we added a workaround that allows 
you to disable the native parts of the build in order to get a test-patch build 
on that platform.

 Guarantee a full build of all native code during pre-commit.
 

 Key: HADOOP-11937
 URL: https://issues.apache.org/jira/browse/HADOOP-11937
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Reporter: Chris Nauroth

 Some of the native components of the build are considered optional and either 
 will not build at all without passing special flags to Maven or will allow a 
 build to proceed if dependencies are missing from the build machine.  If 
 these components do not get built, then pre-commit isn't really providing 
 full coverage of the build.  This issue proposes to update test-patch.sh so 
 that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11924) Tolerate JDK-8047340-related exceptions in Shell#isSetSidAvailable preventing class init

2015-05-26 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559560#comment-14559560
 ] 

Gera Shegalov commented on HADOOP-11924:


[~ozawa], are you going to work on 002? I think at the very least we should 
change the log level when swallowing the exception. The exception itself should 
also be included in the LOG statement:
{code}
 LOG.info("Avoiding JDK-8047340 on BSD-based systems.", t);
{code}

 Tolerate JDK-8047340-related exceptions in Shell#isSetSidAvailable preventing 
 class init
 

 Key: HADOOP-11924
 URL: https://issues.apache.org/jira/browse/HADOOP-11924
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Gera Shegalov
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-11924.001.patch


 Address the root cause of HADOOP-11916 per 
 https://issues.apache.org/jira/browse/HADOOP-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528009#comment-14528009
 {quote}
 JDK-8047340 explicitly calls out BSD-like systems; should we not just exclude 
 those systems instead of enabling solely Linux?
 {code}
 Assume.assumeFalse("Avoiding JDK-8047340 on BSD-based systems", Shell.FREEBSD 
 || Shell.MAC);
 {code}
 However, I don't think this is the right fix. Shell on BSD-like systems is 
 broken with the TR locale. Shell class initialization happens only because 
 StringUtils references Shell.WINDOWS.
 We can simply catch Throwable in Shell#isSetsidSupported instead of 
 IOException. If we want to be pedantic we can rethrow:
 {code}
 if (!(t instanceof IOException) && !(Shell.FREEBSD || Shell.MAC))
 {code}
 With such a change the test can run unchanged.
 {quote}
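
 For reference, a self-contained sketch of the catch-Throwable pattern 
 described in the quote (hypothetical; the real logic lives in 
 Shell#isSetsidSupported):
 {code}
public class SetsidProbe {
  static boolean isSetsidAvailable() {
    try {
      // Probe for setsid roughly the way Shell does: run it once.
      Process p = new ProcessBuilder("setsid", "bash", "-c", "echo $$").start();
      return p.waitFor() == 0;
    } catch (Throwable t) {
      // Catch Throwable, not just IOException, so a JDK-8047340-style
      // error cannot abort static initialization of the enclosing class.
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println("setsid available: " + isSetsidAvailable());
  }
}
 {code}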



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11997) CMake CMAKE_C_FLAGS are non-portable

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559619#comment-14559619
 ] 

Colin Patrick McCabe commented on HADOOP-11997:
---

I would rather set the flags explicitly than rely on {{CMAKE_BUILD_TYPE}}.  
It's clearer and less dependent on CMake version.

Are you going to post a patch to add Solaris compiler support, as Allen 
suggested?  Or add more \-W options and fix the resulting warnings?  Or should 
we close this JIRA and take up the discussion elsewhere?  It seems like if you 
are using gcc on Solaris, the flags don't need to be modified.

 CMake CMAKE_C_FLAGS are non-portable
 

 Key: HADOOP-11997
 URL: https://issues.apache.org/jira/browse/HADOOP-11997
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Critical

 hadoop-common-project/hadoop-common/src/CMakeLists.txt 
 (https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L110)
  contains the following unconditional assignments to CMAKE_C_FLAGS:
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -Wall -O2")
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_REENTRANT -D_GNU_SOURCE")
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64")
 There are several issues here:
 1. "-D_GNU_SOURCE" globally enables the use of all Linux-only extensions in 
 hadoop-common native source. This is probably a major contributor to the poor 
 cross-platform portability of Hadoop native code to non-Linux platforms, as it 
 makes it easy for developers to use non-portable Linux features without 
 realising. Use of Linux-specific features should be correctly bracketed with 
 conditional macro blocks that provide an alternative for non-Linux platforms.
 2. "-g -Wall -O2" turns on debugging for all builds. I believe the correct 
 mechanism is to set the CMAKE_BUILD_TYPE CMake variable. If it is still 
 necessary to override CFLAGS, it should probably be done conditionally, 
 dependent on the value of CMAKE_BUILD_TYPE.
 3. "-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64": on Solaris these flags are 
 only needed for largefile support in ILP32 applications; LP64 applications 
 are largefile by default. I believe the same is true on Linux, so these flags 
 are harmless but redundant for 64-bit compilation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559524#comment-14559524
 ] 

Allen Wittenauer commented on HADOOP-11937:
---

bq. test-patch already fails on OS X

You're behind the times. -Pnative has worked on OS X for almost a year now.  
(Also: It's probably worth pointing out that I rewrote test-patch.sh, including 
the Jenkins mode, on OS X)

 Guarantee a full build of all native code during pre-commit.
 

 Key: HADOOP-11937
 URL: https://issues.apache.org/jira/browse/HADOOP-11937
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Reporter: Chris Nauroth

 Some of the native components of the build are considered optional and either 
 will not build at all without passing special flags to Maven or will allow a 
 build to proceed if dependencies are missing from the build machine.  If 
 these components do not get built, then pre-commit isn't really providing 
 full coverage of the build.  This issue proposes to update test-patch.sh so 
 that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559525#comment-14559525
 ] 

zhihai xu commented on HADOOP-12033:


This looks like the hadoop native library was not loaded successfully.
Did you see this warning message?
  LOG.warn("Unable to load native-hadoop library for your platform... " +
      "using builtin-java classes where applicable");
You need to configure LD_LIBRARY_PATH correctly in your environment.
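
A quick way to check is a minimal probe like the following (assuming a Hadoop 
2.x client on the classpath):
{code}
import org.apache.hadoop.util.NativeCodeLoader;

public class NativeCheck {
  public static void main(String[] args) {
    boolean loaded = NativeCodeLoader.isNativeCodeLoaded();
    // If this prints false, the warning above appears in the task logs and
    // native codecs such as Snappy are unavailable.
    System.out.println("native hadoop loaded: " + loaded);
    if (loaded) {
      // buildSupportsSnappy() is a native method; only call it once the
      // native library is known to be loaded.
      System.out.println("snappy supported: " + NativeCodeLoader.buildSupportsSnappy());
    }
  }
}
{code}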


 Reducer task failure with java.lang.NoClassDefFoundError: 
 Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
 ---

 Key: HADOOP-12033
 URL: https://issues.apache.org/jira/browse/HADOOP-12033
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic

 We have noticed intermittent reducer task failures with the below exception:
 {code}
 Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
 shuffle in fetcher#9 at 
 org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
 org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 javax.security.auth.Subject.doAs(Subject.java:415) at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
 java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
  Method) at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
  at 
 org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
  at 
 org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
 org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
 Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
 sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
 {code}
 Usually, the reduce task succeeds on retry. 
 Some of the symptoms are similar to HADOOP-8423, but this fix is already 
 included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559543#comment-14559543
 ] 

Ivan Mitic commented on HADOOP-12033:
-

Thanks for responding [~zxu]. The reducer task would succeed on retry, so I 
assumed it's not an environment problem. Below is the task syslog:
{noformat}
2015-05-21 18:33:10,773 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from 
hadoop-metrics2.properties
2015-05-21 18:33:10,976 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 
60 second(s).
2015-05-21 18:33:10,976 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system 
started
2015-05-21 18:33:10,991 INFO [main] org.apache.hadoop.mapred.YarnChild: 
Executing with tokens:
2015-05-21 18:33:10,991 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: 
mapreduce.job, Service: job_1432143397187_0004, Ident: 
(org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@5df3ade7)
2015-05-21 18:33:11,132 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: 
RM_DELEGATION_TOKEN, Service: 100.76.156.98:9010, Ident: (owner=btbig2, 
renewer=mr token, realUser=hdp, issueDate=1432225097662, maxDate=1432829897662, 
sequenceNumber=2, masterKeyId=2)
2015-05-21 18:33:11,351 INFO [main] org.apache.hadoop.mapred.YarnChild: 
Sleeping for 0ms before retrying again. Got null now.
2015-05-21 18:33:12,335 INFO [main] org.apache.hadoop.mapred.YarnChild: 
Sleeping for 500ms before retrying again. Got null now.
2015-05-21 18:33:13,804 INFO [main] org.apache.hadoop.mapred.YarnChild: 
Sleeping for 1000ms before retrying again. Got null now.
2015-05-21 18:33:16,308 INFO [main] org.apache.hadoop.mapred.YarnChild: 
mapreduce.cluster.local.dir for child: 
c:/apps/temp/hdfs/nm-local-dir/usercache/btbig2/appcache/application_1432143397187_0004
2015-05-21 18:33:17,199 INFO [main] 
org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. 
Instead, use dfs.metrics.session-id
2015-05-21 18:33:17,402 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from 
hadoop-metrics2-azure-file-system.properties
2015-05-21 18:33:17,418 INFO [main] 
org.apache.hadoop.metrics2.sink.WindowsAzureETWSink: Init starting.
2015-05-21 18:33:17,418 INFO [main] 
org.apache.hadoop.metrics2.sink.WindowsAzureETWSink: Successfully loaded native 
library. LibraryName = EtwLogger
2015-05-21 18:33:17,418 INFO [main] 
org.apache.hadoop.metrics2.sink.WindowsAzureETWSink: Init completed. Native 
library loaded and ETW handle obtained.
2015-05-21 18:33:17,418 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink azurefs2 started
2015-05-21 18:33:17,433 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 
60 second(s).
2015-05-21 18:33:17,433 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: azure-file-system metrics 
system started
2015-05-21 18:33:17,699 INFO [main] 
org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: ProcfsBasedProcessTree 
currently is supported only on Linux.
2015-05-21 18:33:17,714 INFO [main] org.apache.hadoop.mapred.Task:  Using 
ResourceCalculatorProcessTree : 
org.apache.hadoop.yarn.util.WindowsBasedProcessTree@36c76ec3
2015-05-21 18:33:17,746 INFO [main] org.apache.hadoop.mapred.ReduceTask: Using 
ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@5c7b1796
2015-05-21 18:33:17,793 INFO [main] 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: MergerManager: 
memoryLimit=741710208, maxSingleShuffleLimit=185427552, 
mergeThreshold=489528768, ioSortFactor=100, memToMemMergeOutputsThreshold=100
2015-05-21 18:33:17,793 INFO [EventFetcher for fetching Map Completion Events] 
org.apache.hadoop.mapreduce.task.reduce.EventFetcher: 
attempt_1432143397187_0004_r_001735_0 Thread started: EventFetcher for fetching 
Map Completion Events
2015-05-21 18:33:19,187 INFO [fetcher#30] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: Assigning 
workernode165.btbig2.c2.internal.cloudapp.net:13562 with 1 to fetcher#30
2015-05-21 18:33:19,187 INFO [fetcher#30] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: assigned 1 of 1 
to workernode165.btbig2.c2.internal.cloudapp.net:13562 to fetcher#30
2015-05-21 18:33:19,187 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: Assigning 
workernode279.btbig2.c2.internal.cloudapp.net:13562 with 1 to fetcher#1
2015-05-21 18:33:19,187 INFO [fetcher#1] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: assigned 1 of 1 
to workernode279.btbig2.c2.internal.cloudapp.net:13562 to fetcher#1
(fetch logs removed)
2015-05-21 19:25:08,983 INFO [fetcher#9] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: Assigning 
workernode133.btbig2.c2.internal.cloudapp.net:13562 with 88 to fetcher#9
2015-05-21 19:25:08,983 INFO 

[jira] [Commented] (HADOOP-11347) Inconsistent enforcement of umask between FileSystem and FileContext interacting with local file system.

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559566#comment-14559566
 ] 

Colin Patrick McCabe commented on HADOOP-11347:
---

Thanks for looking at this, Varun.  I don't think we need to change the 
FileSystem base class.  This JIRA is about the local file system-- that's the 
FS that is having trouble with this, and that's the one that should change.

 Inconsistent enforcement of umask between FileSystem and FileContext 
 interacting with local file system.
 

 Key: HADOOP-11347
 URL: https://issues.apache.org/jira/browse/HADOOP-11347
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11347.001.patch, HADOOP-11347.002.patch, 
 HADOOP-11347.03.patch


 The {{FileSystem}} and {{FileContext}} APIs are inconsistent in enforcement 
 of umask for newly created directories.  {{FileContext}} utilizes 
 configuration property {{fs.permissions.umask-mode}} and runs a separate 
 {{chmod}} call to guarantee bypassing the process umask.  This is the 
 expected behavior for Hadoop as discussed in the documentation of 
 {{fs.permissions.umask-mode}}.  For the equivalent {{FileSystem}} APIs, it 
 does not use {{fs.permissions.umask-mode}}.  Instead, the permissions end up 
 getting controlled by the process umask.
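
 To make the difference concrete, a hypothetical sketch of the two call paths 
 being compared (paths and names are illustrative):
 {code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FsPermission perm = new FsPermission((short) 0755);
    // FileContext honors fs.permissions.umask-mode and issues an explicit
    // chmod, so the requested permissions stick.
    FileContext.getFileContext(conf).mkdir(new Path("/tmp/fc-dir"), perm, true);
    // FileSystem against the local FS ends up governed by the process umask.
    FileSystem.getLocal(conf).mkdirs(new Path("/tmp/fs-dir"), perm);
  }
}
 {code}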



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559497#comment-14559497
 ] 

Hadoop QA commented on HADOOP-11984:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/console in case of 
problems.

 Enable parallel JUnit tests in pre-commit.
 --

 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
 HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
 HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
 HADOOP-11984.009.patch, HADOOP-11984.010.patch


 HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
 for running JUnit tests in multiple concurrent processes.  This issue 
 proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559583#comment-14559583
 ] 

Hadoop QA commented on HADOOP-11984:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |  14m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  9s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   1m 23s | Tests passed in 
hadoop-common. |
| | |  38m 38s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735365/HADOOP-11984.010.patch 
|
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 022f49d |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6827/console |


This message was automatically generated.

 Enable parallel JUnit tests in pre-commit.
 --

 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
 HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
 HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
 HADOOP-11984.009.patch, HADOOP-11984.010.patch


 HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
 for running JUnit tests in multiple concurrent processes.  This issue 
 proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559596#comment-14559596
 ] 

zhihai xu commented on HADOOP-12033:


Is it possible that some earlier failure, such as a ClassNotFoundException, an 
ExceptionInInitializerError (indicating a failure in a static initialization 
block), or an incompatible version of the class found at runtime, caused this 
exception?

 Reducer task failure with java.lang.NoClassDefFoundError: 
 Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
 ---

 Key: HADOOP-12033
 URL: https://issues.apache.org/jira/browse/HADOOP-12033
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic

 We have noticed intermittent reducer task failures with the below exception:
 {code}
 Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
 shuffle in fetcher#9 at 
 org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
 org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 javax.security.auth.Subject.doAs(Subject.java:415) at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
 java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
  Method) at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
  at 
 org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
  at 
 org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
 org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
 Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
 sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
 {code}
 Usually, the reduce task succeeds on retry. 
 Some of the symptoms are similar to HADOOP-8423, but that fix is already 
 included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559261#comment-14559261
 ] 

Hudson commented on HADOOP-8751:


SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2155 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2155/])
HADOOP-8751. NPE in Token.toString() when Token is constructed using null 
identifier. Contributed by kanaka kumar avvaru. (aajisaka: rev 
56996a685e6201cb186cea866d22418289174574)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NPE in Token.toString() when Token is constructed using null identifier
 ---

 Key: HADOOP-8751
 URL: https://issues.apache.org/jira/browse/HADOOP-8751
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vlad Rozov
Assignee: kanaka kumar avvaru
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
 HADOOP-8751-02.patch, HADOOP-8751-03.patch, HADOOP-8751.patch


 The Token constructor allows null to be passed, leading to an NPE in 
 Token.toString(). A simple fix is to check for null in the constructor and 
 use empty byte arrays.
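
 A minimal sketch of that null-guard idea (illustrative class, not the 
 committed patch):
 {code}
 import org.apache.hadoop.io.Text;

 // Illustration only: substitute empty values so toString() never sees null.
 public class NullSafeToken {
   private final byte[] identifier;
   private final byte[] password;
   private final Text kind;
   private final Text service;

   public NullSafeToken(byte[] identifier, byte[] password,
                        Text kind, Text service) {
     this.identifier = (identifier == null) ? new byte[0] : identifier;
     this.password = (password == null) ? new byte[0] : password;
     this.kind = (kind == null) ? new Text() : kind;
     this.service = (service == null) ? new Text() : service;
   }
 }
 {code}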



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-26 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559265#comment-14559265
 ] 

Sangjin Lee commented on HADOOP-11983:
--

[~aw], could you take a quick look at it? Thanks!

 HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
 ---

 Key: HADOOP-11983
 URL: https://issues.apache.org/jira/browse/HADOOP-11983
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-11983.001.patch


 HADOOP_USER_CLASSPATH_FIRST behaves the opposite of how it should: if it is 
 not set, HADOOP_CLASSPATH is prepended; if it is set, HADOOP_CLASSPATH is 
 appended.
 You can easily try this out by doing something like
 {noformat}
 HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
 {noformat}
 (HADOOP_CLASSPATH should point to an existing directory)
 I think the if clause in hadoop_add_to_classpath_userpath is reversed.
 This issue seems specific to the trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11952) Native compilation on Solaris fails on Yarn due to use of FTS

2015-05-26 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison reassigned HADOOP-11952:
--

Assignee: Alan Burlison  (was: Malcolm Kavalsky)

 Native compilation on Solaris fails on Yarn due to use of FTS
 -

 Key: HADOOP-11952
 URL: https://issues.apache.org/jira/browse/HADOOP-11952
 Project: Hadoop Common
  Issue Type: Sub-task
 Environment: Solaris 11.2
Reporter: Malcolm Kavalsky
Assignee: Alan Burlison
   Original Estimate: 24h
  Remaining Estimate: 24h

 Compiling the Yarn Node Manager fails with an error that fts is not found. 
 On Solaris there is an alternative, ftw, with similar functionality.
 This is isolated to a single file, container-executor.c.
 Note that this will just fix the compilation error. A more serious issue is 
 that Solaris does not support cgroups as Linux does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11984:
---
Attachment: HADOOP-11984.009.patch

As I suspected, the base directory wasn't in place at the time the test started.

Patch v009 is another troubleshooting patch.  The pom.xml has been changed to 
fail fast if the mkdir of the parallel testing directories fails, and this run 
will execute only {{TestCredentials}}.  This will help narrow down whether the 
problem happens at initial directory creation time or whether another test is 
deleting the whole directory.

 Enable parallel JUnit tests in pre-commit.
 --

 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
 HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
 HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
 HADOOP-11984.009.patch


 HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
 for running JUnit tests in multiple concurrent processes.  This issue 
 proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559765#comment-14559765
 ] 

Hudson commented on HADOOP-11969:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7905 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7905/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java


 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Fix For: 2.8.0

 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch


 Right now, the thread-local factories for the encoder / decoder in Text are 
 not marked final. This means they end up with a static initializer that is 
 not guaranteed to have finished running before the members are visible.
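 A minimal sketch of the safe-publication fix described here, marking the 
 factory final (illustrative shape, not the committed patch):
 {code}
 import java.nio.charset.Charset;
 import java.nio.charset.CharsetEncoder;

 public class SafePublication {
   // final guarantees the factory is fully constructed before the field
   // becomes visible to other threads; the non-final version allows the
   // member to be observed before the static initializer has finished.
   private static final ThreadLocal<CharsetEncoder> ENCODER_FACTORY =
       new ThreadLocal<CharsetEncoder>() {
         @Override
         protected CharsetEncoder initialValue() {
           return Charset.forName("UTF-8").newEncoder();
         }
       };

   public static CharsetEncoder encoder() {
     return ENCODER_FACTORY.get();
   }
 }
 {code}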
 Under heavy contention, this means during initialization some users will get 
 an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at 

[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559741#comment-14559741
 ] 

Hadoop QA commented on HADOOP-11984:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |  15m 20s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  1s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  6s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | shellcheck |   0m  8s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 43s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 44s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   1m 16s | Tests passed in 
hadoop-common. |
| | |  40m 16s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735382/HADOOP-11984.011.patch 
|
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 10732d5 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6829/console |


This message was automatically generated.

 Enable parallel JUnit tests in pre-commit.
 --

 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
 HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
 HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
 HADOOP-11984.009.patch, HADOOP-11984.010.patch, HADOOP-11984.011.patch


 HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
 for running JUnit tests in multiple concurrent processes.  This issue 
 proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12021) Augmenting Configuration to accomodate description

2015-05-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559745#comment-14559745
 ] 

Andrew Wang commented on HADOOP-12021:
--

Lewis, could you give a little more detail about your Nutch use case?

It's also worth noting that we provide the description in core-default.xml / 
hdfs-default.xml / etc. for documentation, but it is probably not present in 
user-provided config files. The -default.xml files are already included in our 
JARs, so this shouldn't increase dependency size. Loading them will, however, 
increase in-memory size, which is probably a concern for some user apps.

 Augmenting Configuration to accomodate description
 

 Key: HADOOP-12021
 URL: https://issues.apache.org/jira/browse/HADOOP-12021
 Project: Hadoop Common
  Issue Type: New Feature
  Components: conf
Reporter: Lewis John McGibbney
Priority: Minor
 Fix For: 1.3.0, 2.8.0


 Over on the 
 [common-dev|http://www.mail-archive.com/common-dev%40hadoop.apache.org/msg16099.html]
  ML I explained a use case which requires me to obtain the value of the 
 Configuration description tags.
 [~cnauroth] advised me to raise the issue to Jira for discussion.
 I am happy to provide a patch so that the description values are parsed out 
 of the various XML files and stored, and also so that the Configuration class 
 is augmented with accessors to accommodate the use case.
 I wanted to find out what people think about this one and whether I should 
 check out the Hadoop source and submit a patch. If you could provide some 
 advice, it would be appreciated.
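
 A hypothetical sketch of what such an accessor could look like (names are 
 illustrative; Configuration exposes no such API today):
 {code}
 import java.util.HashMap;
 import java.util.Map;

 // Illustration only: keep descriptions parsed from the XML resources
 // alongside the values, and expose a simple accessor.
 public class DescribedConfiguration
     extends org.apache.hadoop.conf.Configuration {
   private final Map<String, String> descriptions =
       new HashMap<String, String>();

   void putDescription(String name, String description) {
     descriptions.put(name, description);
   }

   /** @return the description tag for a property, or null if none. */
   public String getDescription(String name) {
     return descriptions.get(name);
   }
 }
 {code}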



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559680#comment-14559680
 ] 

Colin Patrick McCabe edited comment on HADOOP-11937 at 5/26/15 8:17 PM:


You're right, I am behind the times.  It's nice that \-Pnative works on more 
platforms now.

If there is stuff included in [~cnauroth]'s full build that doesn't yet work 
on Mac, test-patch.sh can simply detect that we are running on a Mac and not 
add those compilation flags.  That way, we are not blocked here, but Mac users 
still can run test-patch.sh.


was (Author: cmccabe):
You're right, I am behind the times.  It's nice that \-Pnative works on more 
platforms now.

If there is still included in [~cnauroth]'s full build that doesn't yet work 
on Mac, test-patch.sh can simply detect that we are running on a Mac and not 
add those compilation flags.  That way, we are not blocked here, but Mac users 
still can run test-patch.sh.

 Guarantee a full build of all native code during pre-commit.
 

 Key: HADOOP-11937
 URL: https://issues.apache.org/jira/browse/HADOOP-11937
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Reporter: Chris Nauroth

 Some of the native components of the build are considered optional and either 
 will not build at all without passing special flags to Maven or will allow a 
 build to proceed if dependencies are missing from the build machine.  If 
 these components do not get built, then pre-commit isn't really providing 
 full coverage of the build.  This issue proposes to update test-patch.sh so 
 that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559791#comment-14559791
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-12033:
--

If the problem turns out to be in MR, please move this to the MapReduce JIRA 
project.

 Reducer task failure with java.lang.NoClassDefFoundError: 
 Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
 ---

 Key: HADOOP-12033
 URL: https://issues.apache.org/jira/browse/HADOOP-12033
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic

 We have noticed intermittent reducer task failures with the below exception:
 {code}
 Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
 shuffle in fetcher#9 at 
 org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
 org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 javax.security.auth.Subject.doAs(Subject.java:415) at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
 java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
  Method) at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
  at 
 org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
  at 
 org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
 org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
 Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
 sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
 {code}
 Usually, the reduce task succeeds on retry. 
 Some of the symptoms are similar to HADOOP-8423, but that fix is already 
 included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-05-26 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559801#comment-14559801
 ] 

Ivan Mitic commented on HADOOP-12033:
-

bq. If the problem turns out to be in MR, please move this to the MapReduce 
JIRA project
Sounds good, Vinod. I placed it under Hadoop based on my best guess. 

 Reducer task failure with java.lang.NoClassDefFoundError: 
 Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
 ---

 Key: HADOOP-12033
 URL: https://issues.apache.org/jira/browse/HADOOP-12033
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic

 We have noticed intermittent reducer task failures with the below exception:
 {code}
 Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
 shuffle in fetcher#9 at 
 org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
 org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 javax.security.auth.Subject.doAs(Subject.java:415) at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
 java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
  Method) at 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
  at 
 org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
  at 
 org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
 org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
  at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
 Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
 java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
 java.security.AccessController.doPrivileged(Native Method) at 
 java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
 sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
 java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
 {code}
 Usually, the reduce task succeeds on retry. 
 Some of the symptoms are similar to HADOOP-8423, but that fix is already 
 included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11229) JobStoryProducer is not closed upon return from Gridmix#setupDistCacheEmulation()

2015-05-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-11229:

Description: 
Here is related code:
{code}
  JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
{code}
jsp should be closed upon return from setupDistCacheEmulation().

  was:
Here is related code:
{code}
  JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
{code}

jsp should be closed upon return from setupDistCacheEmulation().


 JobStoryProducer is not closed upon return from 
 Gridmix#setupDistCacheEmulation()
 -

 Key: HADOOP-11229
 URL: https://issues.apache.org/jira/browse/HADOOP-11229
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: skrho
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11229_001.patch, HADOOP-11229_002.patch


 Here is related code:
 {code}
   JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
   exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
 {code}
 jsp should be closed upon return from setupDistCacheEmulation().
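
 A minimal sketch of that close-on-return pattern, assuming JobStoryProducer 
 exposes close() (illustrative fragment, not the committed patch):
 {code}
   // Release the trace reader even if setup throws.
   JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
   try {
     exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
   } finally {
     jsp.close();
   }
 {code}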



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8751) NPE in Token.toString() when Token is constructed using null identifier

2015-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559226#comment-14559226
 ] 

Hudson commented on HADOOP-8751:


SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #207 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/207/])
HADOOP-8751. NPE in Token.toString() when Token is constructed using null 
identifier. Contributed by kanaka kumar avvaru. (aajisaka: rev 
56996a685e6201cb186cea866d22418289174574)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NPE in Token.toString() when Token is constructed using null identifier
 ---

 Key: HADOOP-8751
 URL: https://issues.apache.org/jira/browse/HADOOP-8751
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Vlad Rozov
Assignee: kanaka kumar avvaru
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-8751-01.patch, HADOOP-8751-01.patch, 
 HADOOP-8751-02.patch, HADOOP-8751-03.patch, HADOOP-8751.patch


 The Token constructor allows null to be passed, leading to an NPE in 
 Token.toString(). A simple fix is to check for null in the constructor and 
 use empty byte arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11984:
---
Attachment: HADOOP-11984.011.patch

Even when {{TestCredentials}} runs in isolation, the parent directory isn't 
there, which rules out interference from another concurrent test.  This is 
very strange.  Is something going wrong with the mkdir of the test directories 
inside pom.xml?  I wouldn't expect so, because we'd see error output and an 
earlier failure in the build.

The fix might be just to change {{TestCredentials}} to use a recursive 
{{mkdirs}}, which is what other tests do.  I'm really curious about this, 
though, so patch v011 is one more troubleshooting patch that echoes the 
directories pom.xml tries to create.  Let's see whether they differ from what 
I see on my local machine.
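
For reference, a minimal sketch of the recursive-creation approach other tests 
use (the property name and fallback path are illustrative):
{code}
import java.io.File;

public class TestDirSetup {
  public static File ensureTestDir() {
    // mkdirs() creates missing parent directories too, unlike mkdir(), so
    // the test no longer depends on pom.xml pre-creating the directory tree.
    File dir = new File(System.getProperty("test.build.data",
        "target/test/data"));
    if (!dir.mkdirs() && !dir.isDirectory()) {
      throw new IllegalStateException("could not create " + dir);
    }
    return dir;
  }
}
{code}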

 Enable parallel JUnit tests in pre-commit.
 --

 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
 HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
 HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
 HADOOP-11984.009.patch, HADOOP-11984.010.patch, HADOOP-11984.011.patch


 HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
 for running JUnit tests in multiple concurrent processes.  This issue 
 proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11975) Native code needs to be built to match the 32/64 bitness of the JVM

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559654#comment-14559654
 ] 

Colin Patrick McCabe commented on HADOOP-11975:
---

I agree that it's a vile hack, but so far it's the best we've got.  If you have 
a patch for JNIFlags.cmake to handle your case, I will review it.

 Native code needs to be built to match the 32/64 bitness of the JVM
 ---

 Key: HADOOP-11975
 URL: https://issues.apache.org/jira/browse/HADOOP-11975
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison

 When building with a 64-bit JVM on Solaris the following error occurs at the 
 link stage of building the native code:
  [exec] ld: fatal: file 
 /usr/jdk/instances/jdk1.8.0/jre/lib/amd64/server/libjvm.so: wrong ELF class: 
 ELFCLASS64
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [target/usr/local/lib/libhadoop.so.1.0.0] Error 1
  [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2
 The compilation flags in the makefiles need to explicitly state whether 32- 
 or 64-bit code is to be generated, to match the JVM.
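
 As a quick way to confirm which data model the running JVM uses 
 (sun.arch.data.model is a HotSpot-specific property, so treat this as a 
 best-effort check):
 {code}
 public class JvmBitness {
   public static void main(String[] args) {
     // Prints "32" or "64" on HotSpot JVMs; may be null elsewhere.
     System.out.println("data model: " +
         System.getProperty("sun.arch.data.model"));
     System.out.println("os.arch:    " + System.getProperty("os.arch"));
   }
 }
 {code}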



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

