[jira] [Commented] (HADOOP-3619) DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one of the interfaces is IPv6

2014-07-14 Thread Dr. Martin Menzel (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060365#comment-14060365
 ] 

Dr. Martin Menzel commented on HADOOP-3619:
---

The test org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl doesn't fail on 
my system.

One of my test cases checks a DNS server entry for a.nic.de. Is it possible that 
the Jenkins system is not able to resolve external / Internet addresses?




 DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one 
 of the interfaces is IPv6
 --

 Key: HADOOP-3619
 URL: https://issues.apache.org/jira/browse/HADOOP-3619
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Steve Loughran
  Labels: patch
 Attachments: HADOOP-3619.patch


 reverseDNS tries to split a host address string by ".", and so fails if ":" is 
 the separator, as it is in IPv6. When it tries to access the parts of the 
 address, a stack trace is seen.
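
A minimal sketch (illustrative only, not the attached patch) of a
protocol-aware reverse-lookup name builder, assuming only java.net types:
IPv6 PTR names live under ip6.arpa and are built from the reversed nibbles of
the address rather than from reversed dot-separated octets.
{code}
import java.net.Inet6Address;
import java.net.InetAddress;

public final class ReverseDnsNames {
  /** Build the PTR query name for either address family. */
  static String reverseDnsName(InetAddress hostIp) {
    if (hostIp instanceof Inet6Address) {
      // ip6.arpa: every hex nibble of the 16 bytes, reversed, dot-separated.
      byte[] bytes = hostIp.getAddress();
      StringBuilder sb = new StringBuilder();
      for (int i = bytes.length - 1; i >= 0; i--) {
        int b = bytes[i] & 0xff;
        sb.append(Integer.toHexString(b & 0x0f)).append('.')
          .append(Integer.toHexString((b >> 4) & 0x0f)).append('.');
      }
      return sb.append("ip6.arpa").toString();
    }
    // in-addr.arpa: the four decimal octets in reverse order.
    String[] parts = hostIp.getHostAddress().split("\\.");
    return parts[3] + "." + parts[2] + "." + parts[1] + "." + parts[0]
        + ".in-addr.arpa";
  }
}
{code}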



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2014-07-14 Thread Dr. Martin Menzel (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060385#comment-14060385
 ] 

Dr. Martin Menzel commented on HADOOP-8568:
---

This issue is related to

https://issues.apache.org/jira/browse/HADOOP-3619

I provided a similar patch which also includes a test case.

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots) and 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}
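
For reference, a standalone repro of the failure mode (a sketch, not the
Hadoop code itself): splitting an IPv6 literal on "." yields a single-element
array, so indexing the "fourth octet" fails with the same
ArrayIndexOutOfBoundsException: 3 shown in the trace above.
{code}
public class ReverseDnsSplitRepro {
  public static void main(String[] args) {
    String v6 = "2001:db8::1";         // IPv6 literals contain no dots
    String[] parts = v6.split("\\.");  // -> a one-element array
    // Building the reversed v4 name the way dot-only code does throws
    // ArrayIndexOutOfBoundsException: 3 on the first index below.
    System.out.println(parts[3] + "." + parts[2] + "." + parts[1] + "." + parts[0]);
  }
}
{code}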



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2014-07-14 Thread Dr. Martin Menzel (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060386#comment-14060386
 ] 

Dr. Martin Menzel commented on HADOOP-8568:
---

Similar patches are available in both issues.

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots) and 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-14 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10821:
--

 Summary: Prepare the release notes for Hadoop 2.5.0
 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Priority: Blocker


The release notes for 2.3.0+ still talk about federation and MRv2
being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10821:
---

Description: 
The release notes for 2.3.0+ (http://hadoop.apache.org/docs/r2.4.1/index.html) 
still talk about federation and MRv2
being new features. We should update them.

  was:
The release notes for 2.3.0+ still talk about federation and MRv2
being new features. We should update them.


 Prepare the release notes for Hadoop 2.5.0
 --

 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Priority: Blocker

 The release notes for 2.3.0+ 
 (http://hadoop.apache.org/docs/r2.4.1/index.html) still talk about federation 
 and MRv2
 being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-07-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-10480:
--

Assignee: Akira AJISAKA  (was: Swarnim Kulkarni)

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie

 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 {noformat}
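
As an aside (illustrative only, not the attached patch): the "reliance on
default encoding" entries above are normally cleared by naming the charset
explicitly instead of using the FileWriter(File) and Scanner(File)
constructors, e.g.:
{code}
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class ExplicitCharsetExamples {
  // Instead of new FileWriter(file), which picks up the platform default
  // encoding, wrap a FileOutputStream with an explicit charset.
  static Writer openUtf8Writer(File file) throws IOException {
    return new BufferedWriter(
        new OutputStreamWriter(new FileOutputStream(file), StandardCharsets.UTF_8));
  }

  // Instead of new Scanner(file), name the charset explicitly.
  static Scanner openUtf8Scanner(File file) throws IOException {
    return new Scanner(file, "UTF-8");
  }
}
{code}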



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-07-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060398#comment-14060398
 ] 

Akira AJISAKA commented on HADOOP-10480:


Assigned to me. Please feel free to re-assign.

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie

 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 {noformat}
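
Illustrative only (not the attached patch): the "Increment of volatile field"
entry above is findbugs' VO_VOLATILE_INCREMENT pattern, i.e. a non-atomic
read-modify-write; the usual remedy is an AtomicInteger. The field name below
mirrors the warning, but the class itself is hypothetical.
{code}
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileIncrementExample {
  // Flagged pattern: ++ on a volatile field is a racy read-modify-write.
  private volatile int restartingNodeIndexVolatile = -1;

  // Typical remedy: an AtomicInteger, whose incrementAndGet() is atomic.
  private final AtomicInteger restartingNodeIndex = new AtomicInteger(-1);

  void onPipelineRestart() {
    // restartingNodeIndexVolatile++;        // what findbugs complains about
    restartingNodeIndex.incrementAndGet();   // safe under concurrent callers
  }
}
{code}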



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-07-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10480:
---

Attachment: HADOOP-10480.patch

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10480.patch


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-07-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10480:
---

Target Version/s: 2.6.0
  Status: Patch Available  (was: Open)

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10480.patch


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX

2014-07-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060406#comment-14060406
 ] 

Haohui Mai commented on HADOOP-8719:


Thanks [~qwertymaniac] for taking care of this.

 Workaround for kerberos-related log errors upon running any hadoop command on 
 OSX
 -

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 googling "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits, and each one has many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily reach (wasted) thousands of hours!!!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060411#comment-14060411
 ] 

Hadoop QA commented on HADOOP-8568:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12542389/HADOOP-8568.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
  org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4260//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4260//console

This message is automatically generated.

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Tony Kew
  Labels: newbie
 Attachments: HADOOP-8568.patch


 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots) and 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060504#comment-14060504
 ] 

Hadoop QA commented on HADOOP-10480:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655502/HADOOP-10480.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.balancer.TestBalancer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4261//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4261//console

This message is automatically generated.

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10480.patch


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): 

[jira] [Updated] (HADOOP-3619) DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one of the interfaces is IPv6

2014-07-14 Thread Dr. Martin Menzel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr. Martin Menzel updated HADOOP-3619:
--

Attachment: HADOOP-3619-v2.patch

Updated version of the IPv6-enabled rDNS functionality, including a test case.

 DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one 
 of the interfaces is IPv6
 --

 Key: HADOOP-3619
 URL: https://issues.apache.org/jira/browse/HADOOP-3619
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Steve Loughran
  Labels: patch
 Attachments: HADOOP-3619-v2.patch


 reverseDNS tries to split a host address string by ".", and so fails if ":" is 
 the separator, as it is in IPv6. When it tries to access the parts of the 
 address, a stack trace is seen.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-3619) DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one of the interfaces is IPv6

2014-07-14 Thread Dr. Martin Menzel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr. Martin Menzel updated HADOOP-3619:
--

Attachment: (was: HADOOP-3619.patch)

 DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one 
 of the interfaces is IPv6
 --

 Key: HADOOP-3619
 URL: https://issues.apache.org/jira/browse/HADOOP-3619
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Steve Loughran
  Labels: patch
 Attachments: HADOOP-3619-v2.patch


 reverseDNS tries to split a host address string by ".", and so fails if ":" is 
 the separator, as it is in IPv6. When it tries to access the parts of the 
 address, a stack trace is seen.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10733) Potential null dereference in CredentialShell#promptForCredential()

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060565#comment-14060565
 ] 

Hadoop QA commented on HADOOP-10733:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651807/hadoop-10733-v1.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4262//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4262//console

This message is automatically generated.

 Potential null dereference in CredentialShell#promptForCredential()
 ---

 Key: HADOOP-10733
 URL: https://issues.apache.org/jira/browse/HADOOP-10733
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: hadoop-10733-v1.txt


 {code}
   char[] newPassword1 = c.readPassword("Enter password: ");
   char[] newPassword2 = c.readPassword("Enter password again: ");
   noMatch = !Arrays.equals(newPassword1, newPassword2);
   if (noMatch) {
     Arrays.fill(newPassword1, ' ');
 {code}
 newPassword1 might be null, leading to a NullPointerException in the 
 Arrays.fill() call.
 The same issue applies to the following call on line 381:
 {code}
   Arrays.fill(newPassword2, ' ');
 {code}
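
A minimal sketch of the kind of null guard this implies (illustrative; the
attached hadoop-10733-v1.txt may differ): Console.readPassword() returns null
when input is exhausted, so check both arrays before comparing or scrubbing
them.
{code}
import java.io.Console;
import java.io.IOException;
import java.util.Arrays;

public class PromptForCredentialSketch {
  static char[] promptForCredential() throws IOException {
    Console c = System.console();
    if (c == null) {
      throw new IOException("No console available");
    }
    char[] newPassword1 = c.readPassword("Enter password: ");
    char[] newPassword2 = c.readPassword("Enter password again: ");
    // readPassword() may return null; guard before touching the arrays.
    if (newPassword1 == null || newPassword2 == null) {
      throw new IOException("No password was entered");
    }
    if (!Arrays.equals(newPassword1, newPassword2)) {
      // Scrub both copies before failing.
      Arrays.fill(newPassword1, ' ');
      Arrays.fill(newPassword2, ' ');
      throw new IOException("Passwords do not match");
    }
    Arrays.fill(newPassword2, ' ');
    return newPassword1;
  }
}
{code}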



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-3619) DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one of the interfaces is IPv6

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060567#comment-14060567
 ] 

Hadoop QA commented on HADOOP-3619:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655523/HADOOP-3619-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
  org.apache.hadoop.fs.TestSymlinkLocalFSFileContext

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4263//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4263//console

This message is automatically generated.

 DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one 
 of the interfaces is IPv6
 --

 Key: HADOOP-3619
 URL: https://issues.apache.org/jira/browse/HADOOP-3619
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Steve Loughran
  Labels: patch
 Attachments: HADOOP-3619-v2.patch


 reverseDNS tries to split a host address string by ".", and so fails if ":" is 
 the separator, as it is in IPv6. When it tries to access the parts of the 
 address, a stack trace is seen.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2014-07-14 Thread Dmitry Sivachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Sivachenko updated HADOOP-10783:
---

Attachment: commons-lang3.patch

 apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
 ---

 Key: HADOOP-10783
 URL: https://issues.apache.org/jira/browse/HADOOP-10783
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Dmitry Sivachenko
Assignee: Steve Loughran
 Attachments: commons-lang3.patch


 Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
 It does not support FreeBSD (IS_OS_UNIX returns False).
 This is fixed in recent versions of apache-commons.jar.
 Please update apache-commons.jar to a recent version so it correctly 
 recognizes FreeBSD as a UNIX-like system.
 Right now I get the following in the datanode's log:
 2014-07-04 11:58:10,459 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
 ShortCircuitRegistry
 java.io.IOException: The OS is not UNIX.
 at 
 org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)
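
Illustrative only, assuming commons-lang3 is on the classpath (this is not the
attached commons-lang3.patch): after the upgrade, the IS_OS_UNIX check the
report describes resolves as expected on FreeBSD, which a quick probe like the
following can confirm.
{code}
import org.apache.commons.lang3.SystemUtils;

public class OsCheck {
  public static void main(String[] args) {
    // commons-lang3 treats FreeBSD (and the other BSDs) as UNIX-like, whereas
    // the 2.6-era check described in this report returns false there.
    System.out.println("os.name        = " + System.getProperty("os.name"));
    System.out.println("IS_OS_UNIX     = " + SystemUtils.IS_OS_UNIX);
    System.out.println("IS_OS_FREE_BSD = " + SystemUtils.IS_OS_FREE_BSD);
  }
}
{code}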



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2014-07-14 Thread Dmitry Sivachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Sivachenko updated HADOOP-10783:
---

Status: Patch Available  (was: Open)

I am attaching a patch for this update.

 apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
 ---

 Key: HADOOP-10783
 URL: https://issues.apache.org/jira/browse/HADOOP-10783
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Dmitry Sivachenko
Assignee: Steve Loughran
 Attachments: commons-lang3.patch


 Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
 It does not support FreeBSD (IS_OS_UNIX returns False).
 This is fixed in recent versions of apache-commons.jar.
 Please update apache-commons.jar to a recent version so it correctly 
 recognizes FreeBSD as a UNIX-like system.
 Right now I get the following in the datanode's log:
 2014-07-04 11:58:10,459 DEBUG 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
 ShortCircuitRegistry
 java.io.IOException: The OS is not UNIX.
 at 
 org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060812#comment-14060812
 ] 

Hadoop QA commented on HADOOP-10783:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655528/commons-lang3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 30 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.
See 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4264//artifact/trunk/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to cause Findbugs 
(version 2.0.3) to fail.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-minikdc hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-tools/hadoop-gridmix hadoop-tools/hadoop-rumen 
hadoop-tools/hadoop-streaming hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
  org.apache.hadoop.streaming.TestMultipleArchiveFiles
  org.apache.hadoop.streaming.TestStreamingBadRecords
  org.apache.hadoop.streaming.TestStreamingOutputKeyValueTypes
  org.apache.hadoop.streaming.TestStreamAggregate
  org.apache.hadoop.streaming.TestStreamReduceNone
  org.apache.hadoop.streaming.TestUnconsumedInput
  
org.apache.hadoop.streaming.mapreduce.TestStreamXmlRecordReader
  org.apache.hadoop.streaming.TestGzipInput
  org.apache.hadoop.streaming.TestStreaming
  org.apache.hadoop.streaming.TestStreamingFailure
  org.apache.hadoop.streaming.TestStreamingSeparator
  org.apache.hadoop.streaming.TestStreamingCounters
  org.apache.hadoop.streaming.TestFileArgs
  org.apache.hadoop.streaming.TestStreamDataProtocol
  org.apache.hadoop.streaming.TestStreamingExitStatus
  org.apache.hadoop.streaming.TestSymLink
  org.apache.hadoop.streaming.TestRawBytesStreaming
  org.apache.hadoop.streaming.TestStreamingCombiner
  org.apache.hadoop.streaming.TestStreamingStderr
  org.apache.hadoop.streaming.TestTypedBytesStreaming
  org.apache.hadoop.streaming.TestStreamingBackground
  org.apache.hadoop.streaming.TestStreamXmlRecordReader
  org.apache.hadoop.streaming.TestStreamingKeyValue
  org.apache.hadoop.streaming.TestStreamXmlMultipleRecords
  org.apache.hadoop.streaming.TestMultipleCachefiles
  org.apache.hadoop.streaming.TestStreamingOutputOnlyKeys
  
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
  org.apache.hadoop.yarn.util.TestFSDownload
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

  The test build failed in 
hadoop-tools/hadoop-rumen hadoop-tools/hadoop-gridmix 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 

[jira] [Commented] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-07-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060879#comment-14060879
 ] 

Haohui Mai commented on HADOOP-10480:
-

Looking at the Jenkins log:

{quote}
/home/jenkins/tools/maven/latest/bin/mvn clean test javadoc:javadoc -DskipTests 
-Pdocs -DHadoopPatchProcess > 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patchJavadocWarnings.txt
 2>&1
There appear to be 26 javadoc warnings before the patch and 26 javadoc warnings 
after applying the patch.
{quote}

Is the proposed fix a JDK-specific issue?

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10480.patch


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 

[jira] [Commented] (HADOOP-10794) A hadoop cluster needs clock synchronization

2014-07-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060928#comment-14060928
 ] 

Colin Patrick McCabe commented on HADOOP-10794:
---

I agree with Andrew and Steve here.  It probably makes sense to have somebody 
monitor the clock skew between nodes, and warn if it gets too high.

It's worth pointing out that we have very carefully avoided depending on 
synchronized clocks in HDFS and MR.  If YARN wants to use local clocks to give 
an approximation of task runtime, that's fine, but we should not depend on too 
much accuracy there. NTP has its limits.

I think it makes sense to make YARN have its NodeManagers ping back 
periodically, and complain if their local clocks are too far off (probably we 
want a granularity of minutes here...)  It fits in well with the other 
resources YARN is managing, and would allow people to easily diagnose incorrect 
task runtimes.
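
A rough sketch of the heartbeat-time check described above (class, method, and
threshold are illustrative, not existing YARN APIs): the master compares the
timestamp a node stamps on its heartbeat with its own clock and warns past a
minute-level threshold.
{code}
import java.util.concurrent.TimeUnit;

public class ClockSkewMonitor {
  // Minute-level granularity, per the comment above; the exact value is made up.
  private static final long MAX_SKEW_MS = TimeUnit.MINUTES.toMillis(2);

  /**
   * Called when a heartbeat arrives. nodeWallClockMs is the wall-clock time the
   * node stamped on the message; network transit adds noise, so this is only a
   * coarse check meant for warning, not enforcement.
   */
  static void checkSkew(String nodeId, long nodeWallClockMs) {
    long skewMs = Math.abs(System.currentTimeMillis() - nodeWallClockMs);
    if (skewMs > MAX_SKEW_MS) {
      System.err.println("WARN: clock on " + nodeId + " differs from this host by ~"
          + skewMs + " ms; check NTP on both machines");
    }
  }
}
{code}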

 A hadoop cluster needs clock synchronization
 

 Key: HADOOP-10794
 URL: https://issues.apache.org/jira/browse/HADOOP-10794
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen

 As a distributed system, a hadoop cluster wants the clocks on all the 
 participating hosts synchronized. Otherwise, problems can arise. For 
 example, in YARN-2251, because the clock on the host of the task container 
 falls behind that on the host of the AM container, the computed elapsed time 
 (the diff between the timestamps produced on the two hosts) becomes negative.
 In YARN-2251, we tried to mask the negative elapsed time. However, we should 
 seek a decent long-term solution, such as providing a mechanism to perform 
 and check clock synchronization.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10780) hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use

2014-07-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10780:
--

Summary: hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf 
use  (was: namenode throws java.lang.OutOfMemoryError upon 
DatanodeProtocol.versionRequest from datanode)

 hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use
 

 Key: HADOOP-10780
 URL: https://issues.apache.org/jira/browse/HADOOP-10780
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
 Environment: FreeBSD-10/stable
 openjdk version 1.7.0_60
 OpenJDK Runtime Environment (build 1.7.0_60-b19)
 OpenJDK 64-Bit Server VM (build 24.60-b09, mixed mode)
Reporter: Dmitry Sivachenko
 Attachments: buf_sz.patch


 I am trying hadoop-2.4.1 on FreeBSD-10/stable.
 namenode starts up, but after first datanode contacts it, it throws an 
 exception.
 All limits seem to be high enough:
 % limits -a
 Resource limits (current):
   cputime  infinity secs
   filesize infinity kB
   datasize 33554432 kB
   stacksize  524288 kB
   coredumpsize infinity kB
   memoryuse infinity kB
   memorylocked infinity kB
   maxprocesses   122778
   openfiles  14
   sbsize   infinity bytes
   vmemoryuse   infinity kB
   pseudo-terminals infinity
   swapuse  infinity kB
 14944  1  S0:06.59 /usr/local/openjdk7/bin/java -Dproc_namenode 
 -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop 
 -Dhadoop.log.file=hadoop-hdfs-namenode-nezabudka3-00.log 
 -Dhadoop.home.dir=/usr/local -Dhadoop.id.str=hdfs 
 -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
 -Djava.net.preferIPv4Stack=true -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Dhadoop.security.logger=INFO,RFAS 
 org.apache.hadoop.hdfs.server.namenode.NameNode
 From the namenode's log:
 2014-07-03 23:28:15,070 WARN  [IPC Server handler 5 on 8020] ipc.Server 
 (Server.java:run(2032)) - IPC Server handler 5 on 8020, call 
 org.apache.hadoop.hdfs.server.protocol.Datano
 deProtocol.versionRequest from 5.255.231.209:57749 Call#842 Retry#0
 java.lang.OutOfMemoryError
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupsForUser(Native 
 Method)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:80)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1417)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:81)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3331)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:5491)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1082)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:234)
 at 
 org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28069)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 I did not have such an issue with hadoop-1.2.1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10780) hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use

2014-07-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10780:
--

Assignee: Dmitry Sivachenko

 hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use
 

 Key: HADOOP-10780
 URL: https://issues.apache.org/jira/browse/HADOOP-10780
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
 Environment: FreeBSD-10/stable
 openjdk version 1.7.0_60
 OpenJDK Runtime Environment (build 1.7.0_60-b19)
 OpenJDK 64-Bit Server VM (build 24.60-b09, mixed mode)
Reporter: Dmitry Sivachenko
Assignee: Dmitry Sivachenko
 Attachments: buf_sz.patch


 I am trying hadoop-2.4.1 on FreeBSD-10/stable.
 namenode starts up, but after first datanode contacts it, it throws an 
 exception.
 All limits seem to be high enough:
 % limits -a
 Resource limits (current):
   cputime  infinity secs
   filesize infinity kB
   datasize 33554432 kB
   stacksize  524288 kB
   coredumpsize infinity kB
   memoryuse infinity kB
   memorylocked infinity kB
   maxprocesses   122778
   openfiles  14
   sbsize   infinity bytes
   vmemoryuse   infinity kB
   pseudo-terminals infinity
   swapuse  infinity kB
 14944  1  S0:06.59 /usr/local/openjdk7/bin/java -Dproc_namenode 
 -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop 
 -Dhadoop.log.file=hadoop-hdfs-namenode-nezabudka3-00.log 
 -Dhadoop.home.dir=/usr/local -Dhadoop.id.str=hdfs 
 -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
 -Djava.net.preferIPv4Stack=true -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Dhadoop.security.logger=INFO,RFAS 
 org.apache.hadoop.hdfs.server.namenode.NameNode
 From the namenode's log:
 2014-07-03 23:28:15,070 WARN  [IPC Server handler 5 on 8020] ipc.Server 
 (Server.java:run(2032)) - IPC Server handler 5 on 8020, call 
 org.apache.hadoop.hdfs.server.protocol.Datano
 deProtocol.versionRequest from 5.255.231.209:57749 Call#842 Retry#0
 java.lang.OutOfMemoryError
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupsForUser(Native 
 Method)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:80)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1417)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:81)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3331)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:5491)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1082)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:234)
 at 
 org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28069)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 I did not have such an issue with hadoop-1.2.1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10797) Hardcoded path to bash is not portable

2014-07-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14060944#comment-14060944
 ] 

Colin Patrick McCabe commented on HADOOP-10797:
---

There's a pretty large number of scripts relying on '#!/bin/bash'... I don't 
think the patch posted here fixes them all.

{code}
cmccabe@keter:~/hadoopST/trunk$ grep -rI '#!/bin/bash' *
dev-support/findHangingTest.sh:#!/bin/bash
dev-support/create-release.sh:#!/bin/bash
hadoop-common-project/hadoop-kms/src/main/libexec/kms-config.sh:#!/bin/bash
hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh:#!/bin/bash
hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh:#!/bin/bash
hadoop-common-project/hadoop-kms/target/hadoop-kms-3.0.0-SNAPSHOT/libexec/kms-config.sh:#!/bin/bash
hadoop-common-project/hadoop-kms/target/hadoop-kms-3.0.0-SNAPSHOT/sbin/kms.sh:#!/bin/bash
hadoop-common-project/hadoop-kms/target/hadoop-kms-3.0.0-SNAPSHOT/etc/hadoop/kms-env.sh:#!/bin/bash
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/libexec/kms-config.sh:#!/bin/bash
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/libexec/httpfs-config.sh:#!/bin/bash
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/share/hadoop/tools/sls/bin/rumen2sls.sh:#!/bin/bash
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/share/hadoop/tools/sls/bin/slsrun.sh:#!/bin/bash
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/sbin/httpfs.sh:#!/bin/bash
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/sbin/kms.sh:#!/bin/bash
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/httpfs-env.sh:#!/bin/bash
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/kms-env.sh:#!/bin/bash
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh:#!/bin/bash
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh:#!/bin/bash
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh:#!/bin/bash
hadoop-hdfs-project/hadoop-hdfs-httpfs/target/hadoop-hdfs-httpfs-3.0.0-SNAPSHOT/libexec/httpfs-config.sh:#!/bin/bash
hadoop-hdfs-project/hadoop-hdfs-httpfs/target/hadoop-hdfs-httpfs-3.0.0-SNAPSHOT/sbin/httpfs.sh:#!/bin/bash
hadoop-hdfs-project/hadoop-hdfs-httpfs/target/hadoop-hdfs-httpfs-3.0.0-SNAPSHOT/etc/hadoop/httpfs-env.sh:#!/bin/bash
hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT/share/hadoop/tools/sls/bin/rumen2sls.sh:#!/bin/bash
hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT/share/hadoop/tools/sls/bin/slsrun.sh:#!/bin/bash
hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh:#!/bin/bash
hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh:#!/bin/bash
hadoop-tools/hadoop-sls/target/hadoop-sls-3.0.0-SNAPSHOT/sls/bin/rumen2sls.sh:#!/bin/bash
hadoop-tools/hadoop-sls/target/hadoop-sls-3.0.0-SNAPSHOT/sls/bin/slsrun.sh:#!/bin/bash
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java:
  writer.println("#!/bin/bash\n\n");
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerShutdown.java:
  fileWriter.write("#!/bin/bash\n\n");
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c:
  if (fprintf(script, "#!/bin/bash\n
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:
  line("#!/bin/bash");
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java:
  pout.println("#!/bin/bash");
{code}

bq. Okay, I see. The point to use /bin/sh is that most of these shell scripts 
are very simple and do not require any bash-specific things. And since there 
are systems that do not ship bash by default, this would eliminate one extra 
dependency. But it is not a big deal if you prefer to stick with more 
heavyweight bash instead.

I don't have any objections to switching to /bin/sh, but I think that you'll 
find it a very challenging task.  If your goal is just to get stuff working on 
FreeBSD, you're probably better off spending your effort elsewhere and coming 
back to this later.  You would also need to have a vote on the main mailing 
lists to get a policy enacted to only use /bin/sh in the future, or else your 
work would quickly be undone by people adding new scripts.  Again, I would 
support this, but it seems like the effort-to-reward ratio is pretty low.

 Hardcoded path to bash is not portable
 

 Key: HADOOP-10797
 URL: https://issues.apache.org/jira/browse/HADOOP-10797
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1

[jira] [Updated] (HADOOP-10780) hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use

2014-07-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10780:
--

  Resolution: Fixed
   Fix Version/s: 2.6.0
Target Version/s: 2.6.0
  Status: Resolved  (was: Patch Available)

 hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use
 

 Key: HADOOP-10780
 URL: https://issues.apache.org/jira/browse/HADOOP-10780
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
 Environment: FreeBSD-10/stable
 openjdk version 1.7.0_60
 OpenJDK Runtime Environment (build 1.7.0_60-b19)
 OpenJDK 64-Bit Server VM (build 24.60-b09, mixed mode)
Reporter: Dmitry Sivachenko
Assignee: Dmitry Sivachenko
 Fix For: 2.6.0

 Attachments: buf_sz.patch


 I am trying hadoop-2.4.1 on FreeBSD-10/stable.
 namenode starts up, but after first datanode contacts it, it throws an 
 exception.
 All limits seem to be high enough:
 % limits -a
 Resource limits (current):
   cputime  infinity secs
   filesize infinity kB
   datasize 33554432 kB
   stacksize  524288 kB
   coredumpsize infinity kB
   memoryuse infinity kB
   memorylocked infinity kB
   maxprocesses   122778
   openfiles  14
   sbsize   infinity bytes
   vmemoryuse   infinity kB
   pseudo-terminals infinity
   swapuse  infinity kB
 14944  1  S0:06.59 /usr/local/openjdk7/bin/java -Dproc_namenode 
 -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop 
 -Dhadoop.log.file=hadoop-hdfs-namenode-nezabudka3-00.log 
 -Dhadoop.home.dir=/usr/local -Dhadoop.id.str=hdfs 
 -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
 -Djava.net.preferIPv4Stack=true -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Dhadoop.security.logger=INFO,RFAS 
 org.apache.hadoop.hdfs.server.namenode.NameNode
 From the namenode's log:
 2014-07-03 23:28:15,070 WARN  [IPC Server handler 5 on 8020] ipc.Server 
 (Server.java:run(2032)) - IPC Server handler 5 on 8020, call 
 org.apache.hadoop.hdfs.server.protocol.Datano
 deProtocol.versionRequest from 5.255.231.209:57749 Call#842 Retry#0
 java.lang.OutOfMemoryError
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupsForUser(Native 
 Method)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:80)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1417)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:81)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3331)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:5491)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1082)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:234)
 at 
 org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28069)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 I did not have such an issue with hadoop-1.2.1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10780) hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use

2014-07-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14060951#comment-14060951
 ] 

Hudson commented on HADOOP-10780:
-

FAILURE: Integrated in Hadoop-trunk-Commit #5876 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5876/])
HADOOP-10780. hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf 
use (trtrmitya via cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1610470)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/hadoop_user_info.c


 hadoop_user_info_alloc fails on FreeBSD due to incorrect sysconf use
 

 Key: HADOOP-10780
 URL: https://issues.apache.org/jira/browse/HADOOP-10780
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
 Environment: FreeBSD-10/stable
 openjdk version 1.7.0_60
 OpenJDK Runtime Environment (build 1.7.0_60-b19)
 OpenJDK 64-Bit Server VM (build 24.60-b09, mixed mode)
Reporter: Dmitry Sivachenko
Assignee: Dmitry Sivachenko
 Fix For: 2.6.0

 Attachments: buf_sz.patch


 I am trying hadoop-2.4.1 on FreeBSD-10/stable.
 namenode starts up, but after first datanode contacts it, it throws an 
 exception.
 All limits seem to be high enough:
 % limits -a
 Resource limits (current):
   cputime  infinity secs
   filesize infinity kB
   datasize 33554432 kB
   stacksize  524288 kB
   coredumpsize infinity kB
   memoryuse infinity kB
   memorylocked infinity kB
   maxprocesses   122778
   openfiles  14
   sbsize   infinity bytes
   vmemoryuse   infinity kB
   pseudo-terminals infinity
   swapuse  infinity kB
 14944  1  S0:06.59 /usr/local/openjdk7/bin/java -Dproc_namenode 
 -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop 
 -Dhadoop.log.file=hadoop-hdfs-namenode-nezabudka3-00.log 
 -Dhadoop.home.dir=/usr/local -Dhadoop.id.str=hdfs 
 -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
 -Djava.net.preferIPv4Stack=true -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m 
 -Djava.library.path=/usr/local/lib -Dhadoop.security.logger=INFO,RFAS 
 org.apache.hadoop.hdfs.server.namenode.NameNode
 From the namenode's log:
 2014-07-03 23:28:15,070 WARN  [IPC Server handler 5 on 8020] ipc.Server 
 (Server.java:run(2032)) - IPC Server handler 5 on 8020, call 
 org.apache.hadoop.hdfs.server.protocol.Datano
 deProtocol.versionRequest from 5.255.231.209:57749 Call#842 Retry#0
 java.lang.OutOfMemoryError
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupsForUser(Native 
 Method)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:80)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1417)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:81)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3331)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:5491)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1082)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:234)
 at 
 org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28069)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 I did not have such an issue with hadoop-1.2.1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

[jira] [Issue Comment Deleted] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-07-14 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-10480:


Comment: was deleted

(was: Looking at the log of jenkins:

{quote}
/home/jenkins/tools/maven/latest/bin/mvn clean test javadoc:javadoc -DskipTests 
-Pdocs -DHadoopPatchProcess > 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patchJavadocWarnings.txt
 2>&1
There appear to be 26 javadoc warnings before the patch and 26 javadoc warnings 
after applying the patch.
{quote}

Is the proposed fix a JDK-specific issue?)

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10480.patch


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.<static initializer for 
 FSImageUtil>(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 

[jira] [Commented] (HADOOP-10798) globStatus() does not return sorted list of files

2014-07-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14060955#comment-14060955
 ] 

Colin Patrick McCabe commented on HADOOP-10798:
---

[~daryn]: smart.  I was wondering why it worked on HDFS but not on localFS.

I don't know how I feel about this JIRA.  The obvious solution is to have 
localFS sort its output on Linux, but this will cripple performance by forcing 
us to buffer the whole list of files before we return anything (in addition to 
the cost of the sort itself, of course).

It would be a bit easier to do in globStatus, since we always have an array 
there (unlike in listStatus where we might just have an iterator), but the same 
issues crop up.  We'd be slowing things down for a feature most users don't need.

Do users really depend on this behavior, or can we just drop this from the 
spec?  I guess the shell probably wants sorted output, to provide a consistent 
display.  But it can sort it itself, of course.  Thoughts?
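
In the meantime, callers that need ordering can sort on the client side; a minimal 
sketch, reusing the {{fs}}, {{path}} and {{status}} variables from the reproduction 
code quoted below (the comparator itself is only illustrative):

{code}
FileStatus[] status = fs.globStatus(new Path(path, "*"));
// Sort by path name on the client instead of relying on globStatus() ordering.
java.util.Arrays.sort(status, new java.util.Comparator<FileStatus>() {
  @Override
  public int compare(FileStatus a, FileStatus b) {
    return a.getPath().toString().compareTo(b.getPath().toString());
  }
});
{code}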

 globStatus() does not return sorted list of files
 -

 Key: HADOOP-10798
 URL: https://issues.apache.org/jira/browse/HADOOP-10798
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Felix Borchers
Priority: Minor

 (FileSystem) globStatus() does not return a sorted file list anymore.
 But the API says: "... Results are sorted by their names."
 This seems to have been lost when the Globber object was introduced; I can't find 
 a sort in the actual code.
 code to check this behavior:
 {code}
 Configuration conf = new Configuration();
 FileSystem fs = FileSystem.get(conf);
 Path path = new Path("/tmp/" + System.currentTimeMillis());
 fs.mkdirs(path);
 fs.deleteOnExit(path);
 fs.createNewFile(new Path(path, "2"));
 fs.createNewFile(new Path(path, "3"));
 fs.createNewFile(new Path(path, "1"));
 FileStatus[] status = fs.globStatus(new Path(path, "*"));
 Collection<String> list = new ArrayList<String>();
 for (FileStatus f : status) {
   list.add(f.getPath().toString());
   //System.out.println(f.getPath().toString());
 }
 boolean sorted = Ordering.natural().isOrdered(list);
 Assert.assertTrue(sorted);
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10795) unale to build hadoop 2.4.1(redhat5.8 x64)

2014-07-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14060988#comment-14060988
 ] 

Colin Patrick McCabe commented on HADOOP-10795:
---

Moses, you need to look at the ERROR output.  The WARNING output is not the 
issue here.

In this case, there is one error:
{code}
testROBufferDirAndRWBufferDir[1](org.apache.hadoop.fs.TestLocalDirAllocator) 
Time elapsed: 0.014 sec  <<< FAILURE!
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-common: There are test failures.
[ERROR]
{code}

So {{TestLocalDirAllocator}} is failing.

I think you'll find that if you run with {{mvn package -DskipTests}} (skipping 
tests), you will not get this error.  Without knowing more about that test, I 
can't say why it might be failing for you.

 unale to build hadoop 2.4.1(redhat5.8 x64)
 --

 Key: HADOOP-10795
 URL: https://issues.apache.org/jira/browse/HADOOP-10795
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
 Environment: OS version rehat 5.8 x64
 maven version 3.3.1
 java version jdk 1.7_15 for x64
Reporter: moses.wang

 unale to build hadoop 2.4.1(redhat5.8 x64)
 [WARNING] Some problems were encountered while building the effective model 
 for org.apache.hadoop:hadoop-project:pom:2.4.1
 [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
 found duplicate declaration of plugin 
 org.apache.maven.plugins:maven-enforcer-plugin @ line 1015, column 15
 [WARNING] Some problems were encountered while building the effective model 
 for org.apache.hadoop:hadoop-common:jar:2.4.1
 [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
 found duplicate declaration of plugin 
 org.apache.maven.plugins:maven-surefire-plugin @ line 479, column 15
 [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
 found duplicate declaration of plugin 
 org.apache.maven.plugins:maven-enforcer-plugin @ 
 org.apache.hadoop:hadoop-project:2.4.1, 
 /home/software/Server/hadoop-2.4.1-src/hadoop-project/pom.xml, line 1015, 
 column 15
 [WARNING] 
 /home/software/Server/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/FastByteComparisons.java:[25,15]
  Unsafe is internal proprietary API and may be removed in a future release
 [WARNING] 
 /home/software/Server/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java:[42,15]
  Unsafe is internal proprietary API and may be removed in a future release
 [WARNING] 
 /home/software/Server/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SignalLogger.java:[21,15]
  Signal is internal proprietary API and may be removed in a future release
 [WARNING] 
 /home/software/Server/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SignalLogger.java:[22,15]
  SignalHandler is internal proprietary API and may be removed in a future 
 release
 [WARNING] 
 /home/software/Server/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:[22,24]
  AlgorithmId is internal proprietary API and may be removed in a future 
 release
 [WARNING] 
 /home/software/Server/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:[23,24]
  CertificateAlgorithmId is internal proprietary API and may be removed in a 
 future release
 testROBufferDirAndRWBufferDir[1](org.apache.hadoop.fs.TestLocalDirAllocator)  
 Time elapsed: 0.014 sec  <<< FAILURE!
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
 project hadoop-common: There are test failures.
 [ERROR] 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10810) Clean up native code compilation warnings.

2014-07-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14060997#comment-14060997
 ] 

Colin Patrick McCabe commented on HADOOP-10810:
---

Thanks for looking at this, Chris.

{code}
   for (i = 0; i < used_size; i++) {
-    pollfd = sd->pollfd + i;
-    if (pollfd->fd == fd) break;
+    if ((sd->pollfd + i)->fd == fd) {
+      pollfd = sd->pollfd + i;
+      break;
+    }
   }
-  if (i == used_size) {
+  if (pollfd == NULL) {
{code}

I don't understand the motivation behind this; can you explain?  Also, you would 
probably express this as {{sd->pollfd\[i\]->fd}}.

The rest looks good.  You might want to try clang if you've got that.  It often 
reveals warnings that gcc doesn't (and vice versa)

 Clean up native code compilation warnings.
 --

 Key: HADOOP-10810
 URL: https://issues.apache.org/jira/browse/HADOOP-10810
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10810.1.patch


 There are several compilation warnings coming from the native code on both 
 Linux and Windows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1

2014-07-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061004#comment-14061004
 ] 

Colin Patrick McCabe commented on HADOOP-10816:
---

Good find.  We had this problem in a few cases with hadoop commands in the 
past.  Seems like an easy patch if you're up for it... just fix a few return 
-1 cases.  Probably better to do this before we make a release with this code 
and start having to talk about compatibility issues.
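
For what it's worth, the shape of the fix is small; a hedged sketch (the class, 
constant and method names here are made up, not the actual KeyShell code):

{code}
public class KeyShellExitCodes {
  // Shells see the exit status as an unsigned byte, so System.exit(-1) shows up
  // as 255.  Returning 1 keeps the conventional "non-zero means failure" meaning.
  static final int EXIT_SUCCESS = 0;
  static final int EXIT_FAILURE = 1;   // was effectively -1, i.e. 255

  public static void main(String[] args) {
    try {
      // ... run the requested key command here ...
      System.exit(EXIT_SUCCESS);
    } catch (Exception e) {
      System.err.println(e.getMessage());
      System.exit(EXIT_FAILURE);
    }
  }
}
{code}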

 key shell returns -1 to the shell on error, should be 1
 ---

 Key: HADOOP-10816
 URL: https://issues.apache.org/jira/browse/HADOOP-10816
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Mike Yoder

 I've seen this in several places now - commands returning -1 on failure to 
 the shell. It's a bug. Someone confused their posix style returns (0 on 
 success, < 0 on failure) with program returns, which are an unsigned 
 character. Thus, a return of -1 actually becomes 255 to the shell.
 {noformat}
 $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms 
 --attr a=a --attr a=b
 Each attribute must correspond to only one value:
 atttribute a was repeated
 ...
 $ echo $?
 255
 {noformat}
 A return value of 1 instead of -1 does the right thing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1

2014-07-14 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder reassigned HADOOP-10816:
---

Assignee: Mike Yoder

 key shell returns -1 to the shell on error, should be 1
 ---

 Key: HADOOP-10816
 URL: https://issues.apache.org/jira/browse/HADOOP-10816
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Mike Yoder
Assignee: Mike Yoder

 I've seen this in several places now - commands returning -1 on failure to 
 the shell. It's a bug. Someone confused their posix style returns (0 on 
 success, < 0 on failure) with program returns, which are an unsigned 
 character. Thus, a return of -1 actually becomes 255 to the shell.
 {noformat}
 $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms 
 --attr a=a --attr a=b
 Each attribute must correspond to only one value:
 atttribute a was repeated
 ...
 $ echo $?
 255
 {noformat}
 A return value of 1 instead of -1 does the right thing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10468) TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately

2014-07-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061022#comment-14061022
 ] 

Haohui Mai commented on HADOOP-10468:
-

Looking at the code for a while to refresh my memory. Here is a quick recap:

# Metrics2 implements hierarchical configuration with {{SubsetConfiguration}}. 
The configuration key has the format {{foo.bar.spam}}, where each part of the 
key is one level of the hierarchy. For example, {{spam}} is a child of {{bar}}, 
and {{bar}} is a child of {{foo}}. Metrics2 turns some parts of the hierarchical 
keys into lowercase (MetricsConfig:87):

{code}
  MetricsConfig(Configuration c, String prefix) {
    super(c, prefix.toLowerCase(Locale.US), ".");
  }
{code}

All keys of immediate children will be stored as lowercase strings 
(MetricsConfig:151):

{code}
  Map<String, MetricsConfig> getInstanceConfigs(String type) {
    Map<String, MetricsConfig> map = Maps.newHashMap();
    MetricsConfig sub = subset(type);

    for (String key : sub.keys()) {
      Matcher matcher = INSTANCE_REGEX.matcher(key);
      if (matcher.matches()) {
        String instance = matcher.group(1);
        if (!map.containsKey(instance)) {
          map.put(instance, sub.subset(instance));
        }
      }
    }
    return map;
  }
{code}

# To look up the value in the hierarchical configuration, {{MetricsConfig}} 
first finds the key in itself and then reconstructs the full key for its parent 
by concatenating the prefix, the delimiter and the key itself 
(SubsetConfiguration:88): 

{code}
protected String getParentKey(String key)
{
    if ("".equals(key) || key == null)
    {
        return prefix;
    }
    else
    {
        return delimiter == null ? prefix + key : prefix + delimiter + key;
    }
}
{code}

In this test case, the reconstructed key is 
{{test.sink.collector.queue.capacity}} instead of 
{{test.sink.Collector.queue.capacity}}, which leads to the failure.
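
Put differently, the two keys involved in the failing lookup are (the value shown 
is only illustrative):

{noformat}
# key written by the test, using the registered sink name "Collector"
test.sink.Collector.queue.capacity = <capacity set by the test>
# key reconstructed by getParentKey() after the prefix was lowercased -- never matches
test.sink.collector.queue.capacity
{noformat}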

 TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately
 ---

 Key: HADOOP-10468
 URL: https://issues.apache.org/jira/browse/HADOOP-10468
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-10468.000.patch, HADOOP-10468.001.patch, 
 HADOOP-10468.2.patch


 {{TestMetricsSystemImpl.testMultiThreadedPublish}} can fail intermediately 
 due to the insufficient size of the sink queue:
 {code}
 2014-04-06 21:34:55,269 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,270 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,271 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 {code}
 The unit test should increase the default queue size to avoid intermediate 
 failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10468) TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately

2014-07-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061031#comment-14061031
 ] 

Haohui Mai commented on HADOOP-10468:
-

bq. My preference would be to just fix the test if that's sufficient since it 
retains the semantic behavior. However if we really need to remove the 
lowercase call as in the original patch then I think we should also change the 
places where we register metrics to register them with lowercase strings. That 
would retain compatibility with existing metrics properties configuration files.

Fixing only the test looks okay to me. The current semantics of the metrics 
configuration, however, seem pretty confusing. For example, the original test 
case in {{testMultiThreadedPublish()}} actually looks quite legit. I did a 
quick search and the behavior does not seem to be documented; I suspect that 
many configurations are silently ignored, since there are no messages about 
unrecognized values. Therefore, I think it might be worthwhile to fix the 
semantics in trunk. Thoughts?

 TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately
 ---

 Key: HADOOP-10468
 URL: https://issues.apache.org/jira/browse/HADOOP-10468
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-10468.000.patch, HADOOP-10468.001.patch, 
 HADOOP-10468.2.patch


 {{TestMetricsSystemImpl.testMultiThreadedPublish}} can fail intermediately 
 due to the insufficient size of the sink queue:
 {code}
 2014-04-06 21:34:55,269 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,270 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,271 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 {code}
 The unit test should increase the default queue size to avoid intermediate 
 failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Attachment: HADOOP-9902-4.patch

Updated patch for git rev 8d5e8c860ed361ed792affcfe06f1a34b017e421.

This includes many edge case bug fixes, a much more consistent coding style, 
the requested addition of the hadoop jnipath command, and a run through 
shellcheck.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Attachments: HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Status: Patch Available  (was: Open)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Attachments: HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10810) Clean up native code compilation warnings.

2014-07-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10810:
---

Attachment: HADOOP-10810.2.patch

Thanks for the review, Colin.  I'm attaching patch v2 with a slight 
modification to the change you suggested.  I used {{sd->pollfd\[i\].fd}}.  (A dot 
instead of an arrow is needed after indexing into the array.)  I agree that this 
looks better.

The change in DomainSocketWatcher.c addresses this warning from gcc:

{code}
 [exec] 
/mnt/data/cnauroth/git/hadoop-common/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocketWatcher.c:142:20:
 warning: ‘pollfd’ may be used uninitialized in this function [-Wuninitialized]
{code}

The interesting case to consider is if {{used_size}} is 0, because then the 
{{pollfd}} assignment line never executes.  It seems the compiler then doesn't 
think the {{i == used_size}} check is sufficient to give an early exit before 
the later code accesses {{pollfd}}.  It's probably considering the possibility 
that {{used_size}} could have been assigned a negative value.  (That's not 
something that really happens in practice, but the compiler doesn't know that.) 
 To fix this, I switched the logic to something equivalent but more explicit by 
initializing {{pollfd}} to {{NULL}} and then checking for that condition in the 
early exit.
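
The same pattern, expressed in Java purely for illustration (the {{PollFd}} type 
and the {{find}} helper below are stand-ins, not real code; the C change in the 
patch is what actually matters):

{code}
// Illustration only: "PollFd" stands in for the C struct pollfd.
static class PollFd { int fd; }

static PollFd find(PollFd[] pollfds, int usedSize, int fd) {
  // Initialize the result to null and test for null after the loop, instead of
  // re-checking the loop index; the early exit stays correct even if usedSize is 0.
  PollFd match = null;
  for (int i = 0; i < usedSize; i++) {
    if (pollfds[i].fd == fd) {
      match = pollfds[i];
      break;
    }
  }
  return match;   // caller treats null as "fd not found" and takes the early exit
}
{code}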

bq. You might want to try clang if you've got that.

That's an interesting idea.  If you don't mind, I'd like to keep this patch 
focused on this set of warnings and leave clang warnings for future 
investigation.

 Clean up native code compilation warnings.
 --

 Key: HADOOP-10810
 URL: https://issues.apache.org/jira/browse/HADOOP-10810
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10810.1.patch, HADOOP-10810.2.patch


 There are several compilation warnings coming from the native code on both 
 Linux and Windows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Status: Open  (was: Patch Available)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Attachments: HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10468) TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately

2014-07-14 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061068#comment-14061068
 ] 

Jason Lowe commented on HADOOP-10468:
-

Thanks for commenting, Haohui.  +1 to just fixing the test case in branch-2 for 
now.  Given it's been this way for 4 years I'm hesitant to change this, even in 
trunk, because other systems outside of Hadoop core could be using it and 
changing the case could break them in a similar way.  If we do change the 
semantics in trunk then I strongly suggest we go ahead and lowercase the 
registered names of existing entities, e.g.: 
{{DefaultMetricsSystem.initialize("namenode");}} instead of 
{{DefaultMetricsSystem.initialize("NameNode");}} to minimize the breakage to 
existing metrics2 config files.

So I propose we commit this to trunk, branch-2, and branch-2.5 and track the 
proposed change to trunk in a separate JIRA where we can discuss whether the 
backwards compatibility breakage is worth it.  Objections?


 TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately
 ---

 Key: HADOOP-10468
 URL: https://issues.apache.org/jira/browse/HADOOP-10468
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-10468.000.patch, HADOOP-10468.001.patch, 
 HADOOP-10468.2.patch


 {{TestMetricsSystemImpl.testMultiThreadedPublish}} can fail intermediately 
 due to the insufficient size of the sink queue:
 {code}
 2014-04-06 21:34:55,269 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,270 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,271 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 {code}
 The unit test should increase the default queue size to avoid intermediate 
 failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10281) Create a scheduler, which assigns schedulables a priority level

2014-07-14 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061070#comment-14061070
 ] 

Lei (Eddy) Xu commented on HADOOP-10281:


[~chrili] Thank you for this great work. It looks nice overall. I only have a 
few minor questions:

1. Does the latest design abandon the concept of the RPC window (e.g., tracking 
the latest 1000 RPCs)? 
2. With a relatively large {{decayPeriodMillis}} (i.e., an inappropriate 
setting?), could {{totalCount}} keep increasing for a long time? In that case, 
with a large {{totalCount}} (e.g., 1M), it might be difficult to detect a burst 
of traffic (e.g., 800 of the latest 1000 RPCs coming from the same faulty user)?
3. Also, would the scheduler cache make it difficult to detect the same kind of 
burst?

What do you think?

 Create a scheduler, which assigns schedulables a priority level
 ---

 Key: HADOOP-10281
 URL: https://issues.apache.org/jira/browse/HADOOP-10281
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Chris Li
Assignee: Chris Li
 Attachments: HADOOP-10281-preview.patch, HADOOP-10281.patch, 
 HADOOP-10281.patch, HADOOP-10281.patch


 The Scheduler decides which sub-queue to assign a given Call. It implements a 
 single method getPriorityLevel(Schedulable call) which returns an integer 
 corresponding to the subqueue the FairCallQueue should place the call in.
 The HistoryRpcScheduler is one such implementation which uses the username of 
 each call and determines what % of calls in recent history were made by this 
 user.
 It is configured with a historyLength (how many calls to track) and a list of 
 integer thresholds which determine the boundaries between priority levels.
 For instance, if the scheduler has a historyLength of 8; and priority 
 thresholds of 4,2,1; and saw calls made by these users in order:
 Alice, Bob, Alice, Alice, Bob, Jerry, Alice, Alice
 * Another call by Alice would be placed in queue 3, since she has already 
 made >= 4 calls
 * Another call by Bob would be placed in queue 2, since he has >= 2 but less 
 than 4 calls
 * A call by Carlos would be placed in queue 0, since he has no calls in the 
 history
 Also, some versions of this patch include the concept of a 'service user', 
 which is a user that is always scheduled high-priority. Currently this seems 
 redundant and will probably be removed in later patches, since it's not too 
 useful.
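 A rough sketch of that threshold logic, based only on the worked example above 
 (the method shape and names are illustrative, not the actual HistoryRpcScheduler 
 code):
 {code}
 // thresholds = {4, 2, 1}: >=4 recent calls -> queue 3, >=2 -> queue 2,
 // >=1 -> queue 1, and a user with no history -> queue 0.
 int getPriorityLevel(String user, java.util.Map<String, Integer> recentCallCounts,
                      int[] thresholds) {
   Integer seen = recentCallCounts.get(user);
   int count = (seen == null) ? 0 : seen;
   for (int i = 0; i < thresholds.length; i++) {
     if (count >= thresholds[i]) {
       return thresholds.length - i;  // highest threshold crossed -> lowest-priority queue
     }
   }
   return 0;                          // no recent calls -> highest-priority queue
 }
 {code}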



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10810) Clean up native code compilation warnings.

2014-07-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061084#comment-14061084
 ] 

Colin Patrick McCabe commented on HADOOP-10810:
---

+1.  Thanks, Chris.

 Clean up native code compilation warnings.
 --

 Key: HADOOP-10810
 URL: https://issues.apache.org/jira/browse/HADOOP-10810
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10810.1.patch, HADOOP-10810.2.patch


 There are several compilation warnings coming from the native code on both 
 Linux and Windows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10810) Clean up native code compilation warnings.

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061146#comment-14061146
 ] 

Hadoop QA commented on HADOOP-10810:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655596/HADOOP-10810.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
  org.apache.hadoop.fs.TestSymlinkLocalFSFileContext

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4266//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4266//console

This message is automatically generated.

 Clean up native code compilation warnings.
 --

 Key: HADOOP-10810
 URL: https://issues.apache.org/jira/browse/HADOOP-10810
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-10810.1.patch, HADOOP-10810.2.patch


 There are several compilation warnings coming from the native code on both 
 Linux and Windows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10468) TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately

2014-07-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061159#comment-14061159
 ] 

Haohui Mai commented on HADOOP-10468:
-

The proposed solution looks good to me.

 TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately
 ---

 Key: HADOOP-10468
 URL: https://issues.apache.org/jira/browse/HADOOP-10468
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-10468.000.patch, HADOOP-10468.001.patch, 
 HADOOP-10468.2.patch


 {{TestMetricsSystemImpl.testMultiThreadedPublish}} can fail intermittently 
 due to the insufficient size of the sink queue:
 {code}
 2014-04-06 21:34:55,269 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,270 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,271 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 {code}
 The unit test should increase the default queue size to avoid intermittent 
 failure.
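
The warnings above come from a bounded sink queue rejecting records once it is 
full. As a stand-alone illustration of that failure mode (plain JDK classes 
only, not the MetricsSinkAdapter internals), a producer that outruns a small 
bounded queue sees offer() return false:

{code}
import java.util.concurrent.ArrayBlockingQueue;

public class FullQueueDemo {
  public static void main(String[] args) {
    // A deliberately tiny bounded queue, standing in for the sink queue.
    ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
    for (int i = 0; i < 5; i++) {
      boolean accepted = queue.offer("metrics-record-" + i);
      if (!accepted) {
        System.out.println("Collector has a full queue and can't consume record " + i);
      }
    }
  }
}
{code}

Enlarging the queue (or draining it faster) makes offer() succeed, which is the 
same reasoning behind raising the queue size in the test.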



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-10468) TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately

2014-07-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061159#comment-14061159
 ] 

Haohui Mai edited comment on HADOOP-10468 at 7/14/14 8:21 PM:
--

The proposed solution looks good to me. We can file a separate jira to discuss 
the semantic and compatibility concerns.


was (Author: wheat9):
The proposed solution looks good to me.

 TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately
 ---

 Key: HADOOP-10468
 URL: https://issues.apache.org/jira/browse/HADOOP-10468
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-10468.000.patch, HADOOP-10468.001.patch, 
 HADOOP-10468.2.patch


 {{TestMetricsSystemImpl.testMultiThreadedPublish}} can fail intermittently 
 due to the insufficient size of the sink queue:
 {code}
 2014-04-06 21:34:55,269 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,270 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,271 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 {code}
 The unit test should increase the default queue size to avoid intermittent 
 failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10810) Clean up native code compilation warnings.

2014-07-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10810:
---

   Resolution: Fixed
Fix Version/s: 2.6.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Colin, thanks very much for the code review.  I committed this to trunk and 
branch-2.

bq. -1 tests included. The patch doesn't appear to include any new or modified 
tests.

These are unrelated test failures that are under investigation after deployment 
of new Jenkins machines.

 Clean up native code compilation warnings.
 --

 Key: HADOOP-10810
 URL: https://issues.apache.org/jira/browse/HADOOP-10810
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10810.1.patch, HADOOP-10810.2.patch


 There are several compilation warnings coming from the native code on both 
 Linux and Windows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10810) Clean up native code compilation warnings.

2014-07-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061257#comment-14061257
 ] 

Hudson commented on HADOOP-10810:
-

FAILURE: Integrated in Hadoop-trunk-Commit #5878 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5878/])
HADOOP-10810. Clean up native code compilation warnings. Contributed by Chris 
Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1610524)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocketWatcher.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMappingWin.c


 Clean up native code compilation warnings.
 --

 Key: HADOOP-10810
 URL: https://issues.apache.org/jira/browse/HADOOP-10810
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 3.0.0, 2.6.0

 Attachments: HADOOP-10810.1.patch, HADOOP-10810.2.patch


 There are several compilation warnings coming from the native code on both 
 Linux and Windows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10735) Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native support.

2014-07-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061279#comment-14061279
 ] 

Colin Patrick McCabe commented on HADOOP-10735:
---

Thanks for working on this, Yi.

{code}
<property>
  <name>hadoop.security.crypto.cipher.suite</name>
  <value></value>
  <description>
    Cipher suite for crypto codec.
  </description>
</property>
{code}

What's the default value here?  In the code it's described as:

{code}
  public static final String HADOOP_SECURITY_CRYPTO_CIPHER_SUITE_DEFAULT =
      "AES/CTR/NoPadding";
{code}

{code}
  public <U> List<Class<? extends U>> getClasses(String name,
      Class<U> xface, List<Class<? extends U>> defaultValue) {
    ...
    try {
      Class<?> cls = getClassByName(c);
      classes.add(cls.asSubclass(xface));
    } catch (ClassCastException e) {
      throw new IllegalArgumentException("Class " + c +
          " is not a " + xface.getSimpleName(), e);
    } catch (ClassNotFoundException e) {
      throw new IllegalArgumentException(xface.getSimpleName() + " " + c +
          " not found.", e);
    }
  }
{code}

The behavior in this patch is that if one of the classes the user specified 
wasn't found, we get a hard failure (can't start up).  I don't think this is 
quite right.

Consider if there is a different codec added in a new version of Hadoop.  If 
specifying a missing codec as part of the list is a hard failure, management 
software designed to work with multiple versions of Hadoop (or just 
configuration files and tutorials designed to work with multiple versions of 
Hadoop) is not going to be able to use the new codec.

I commented on this earlier:
bq. This doesn't make sense to me. Let's say I ask for the Foobar2000 codec, 
but you don't have it (the class doesn't exist because you're running an older 
version of Hadoop.) Then this is going to give you a null pointer exception and 
nothing will work... the fallback fails completely.

The behavior we want is that if one class isn't found, we fall back to the next 
class in the list (maybe with a log message.)
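
A minimal sketch of that lenient behavior, using only JDK classes and a 
hypothetical resolveCodecClasses helper (illustrative, not the 
Configuration#getClasses implementation): classes that cannot be loaded are 
skipped with a warning instead of aborting startup.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

public class LenientCodecResolver {
  private static final Logger LOG =
      Logger.getLogger(LenientCodecResolver.class.getName());

  /**
   * Resolve the configured codec class names, skipping (with a warning) any
   * class that is missing in this version instead of failing hard.
   */
  public static List<Class<?>> resolveCodecClasses(List<String> configuredNames) {
    List<Class<?>> available = new ArrayList<>();
    for (String name : configuredNames) {
      try {
        available.add(Class.forName(name));
      } catch (ClassNotFoundException e) {
        LOG.warning("Codec class " + name + " not found, falling back to the next entry");
      }
    }
    return available;
  }
}
{code}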

 Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native 
 support.
 -

 Key: HADOOP-10735
 URL: https://issues.apache.org/jira/browse/HADOOP-10735
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10735.001.patch, HADOOP-10735.002.patch, 
 HADOOP-10735.003.patch, HADOOP-10735.004.patch, HADOOP-10735.005.patch


 If there is no native support, or the OpenSSL version is too low to support 
 AES-CTR, but {{OpensslAesCtrCryptoCodec}} is configured, we need to fall back 
 to the JCE implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061351#comment-14061351
 ] 

Hadoop QA commented on HADOOP-9902:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655595/HADOOP-9902-4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-assemblies hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
  org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4265//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4265//console

This message is automatically generated.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Attachments: HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine

2014-07-14 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061357#comment-14061357
 ] 

Aaron T. Myers commented on HADOOP-10641:
-

bq. There is no resistance. The plan has always been to build CNode on a 
branch. I am just trying to optimize development of CNode and HBase region 
replication, which is going on in parallel. My thinking was to commit the CE 
interface to trunk and then branch off HDFS of it. That way both HDFS and 
HBase can use the interface.

I'm not comfortable with committing this to Hadoop trunk before it's actually 
something that Hadoop trunk will use. How about committing this to both HBase 
and the HDFS-6469 development branch? Or, you could of course go the route I 
originally suggested of making the CE interface and ZK reference implementation 
an entirely separate project that both HBase and the HDFS-6469 branch could 
depend on.

 Introduce Coordination Engine
 -

 Key: HADOOP-10641
 URL: https://issues.apache.org/jira/browse/HADOOP-10641
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Attachments: HADOOP-10641.patch, HADOOP-10641.patch, 
 HADOOP-10641.patch, hadoop-coordination.patch


 Coordination Engine (CE) is a system that allows agreement on a sequence of 
 events in a distributed system. In order to be reliable, CE should itself be 
 distributed.
 Coordination Engine can be based on different algorithms (paxos, raft, 2PC, 
 zab) and have different implementations, depending on use cases, reliability, 
 availability, and performance requirements.
 CE should have a common API, so that it could serve as a pluggable component 
 in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and 
 HBase (HBASE-10909).
 First implementation is proposed to be based on ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Status: Open  (was: Patch Available)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Attachments: HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
 hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Attachment: HADOOP-9902-5.patch

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Attachments: HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
 hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10822) Refactor HTTP proxyuser support out of HttpFS into common

2014-07-14 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-10822:
---

 Summary: Refactor HTTP proxyuser support out of HttpFS into common
 Key: HADOOP-10822
 URL: https://issues.apache.org/jira/browse/HADOOP-10822
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur


HttpFS implements HTTP proxyuser support inline in httpfs code.

For HADOOP-10698 we need similar functionality for KMS.

Not to duplicate code, we should refactor existing code to common.

We should also leverage HADOOP-10817.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Release Note: 
The Hadoop shell scripts have been rewritten to fix many long standing bugs and 
include some new features.  While an eye has been kept towards compatibility, 
some changes may break existing installations.

INCOMPATIBLE CHANGES:

* The pid, out, etc  files for secure daemons have been renamed to include the 
appropriate ${HADOOP_IDENT_STR}.  This should allow, with proper configurations 
in place, for multiple versions of the same secure daemon to run on a host.
* All Hadoop shell script subsystems execute hadoop-env.sh, which allows for 
all of the environment variables to be in one location.  This was not the case 
previously.
* The default content of *-env.sh has been significantly altered, with the 
majority of defaults moved into more protected areas. 
* All YARN_* and MAPRED_* environment variables act as overrides to their 
equivalent HADOOP_* environment variables when 'yarn', 'mapred' and related 
commands are executed. Previously, these were separated out which meant a 
significant amount of duplication of common settings.  
* hdfs-config.sh and hdfs-config.cmd were inadvertently duplicated into libexec 
and sbin.  The sbin versions have been removed.
* The log4j settings forcibly set by some *-daemon.sh commands have been 
removed.  These settings are now configurable in the *-env.sh files, in 
particular via *_OPT. 
* Support for various undocumented YARN log4j.properties files has been 
removed.
* Support for $HADOOP_MASTER and the related rsync code have been removed.
* yarn.id.str has been removed.
* We now require bash v3 (released July 27, 2004) or better in order to take 
advantage of better regex handling and ${BASH_SOURCE}.  POSIX sh will not work.
* Support for --script has been removed. We now use ${HADOOP_*_PATH} or 
${HADOOP_PREFIX} to find the necessary binaries.  (See other note regarding 
${HADOOP_PREFIX} auto discovery.)
* Non-existent classpaths, ld.so library paths, JNI library paths, etc, will be 
ignored and stripped from their respective environment settings.

BUG FIXES:

* ${HADOOP_CONF_DIR} is now properly honored everywhere.
* Documented hadoop-layout.sh with a provided hadoop-layout.sh.example file.
* Shell commands should now work properly when called as a relative path and 
without HADOOP_PREFIX being defined. If ${HADOOP_PREFIX} is not set, it will be 
automatically determined based upon the current location of the shell library.  
Note that other parts of the ecosystem may require this environment variable to 
be configured.
* Operations which trigger ssh will now limit the number of connections to run 
in parallel to ${HADOOP_SSH_PARALLEL} to prevent memory and network exhaustion. 
 By default, this is set to 10.
* ${HADOOP_CLIENT_OPTS} support has been added to a few more commands.
* Various options on hadoop command lines were supported inconsistently.  These 
have been unified into hadoop-config.sh. --config still needs to come first, 
however.
* ulimit logging for secure daemons no longer assumes /bin/bash but does assume 
bash on the command line path.
* Removed references to some Yahoo! specific paths.
* Removed unused slaves.sh from YARN build tree.

IMPROVEMENTS:

* Significant amounts of redundant code have been moved into a new file called 
hadoop-functions.sh.
* Improved information in the default *-env.sh on what can be set, 
ramifications of setting, etc.
* There is an attempt to do some trivial deduplication and sanitization of the 
classpath and JVM options.  This allows, amongst other things, for custom 
settings in *_OPTS for Hadoop daemons to override defaults and other generic 
settings (i.e., $HADOOP_OPTS).  This is particularly relevant for Xmx settings, 
as one can now set them in _OPTS and ignore the heap specific options for 
daemons which force the size in megabytes.
* Operations which trigger ssh connections can now use pdsh if installed.  
$HADOOP_SSH_OPTS still gets applied. 
* Subcommands have been alphabetized in both usage and in the code.
* All/most of the functionality provided by the sbin/* commands has been moved 
to either their bin/ equivalents or made into functions.  The rewritten 
versions of these commands are now wrappers to maintain backward compatibility. 
Of particular note is the new --daemon option present in some bin/ commands 
which allow certain subcommands to be daemonized.
* It is now possible to override some of the shell code capabilities to provide 
site specific functionality. 
* A new option called --buildpaths will attempt to add developer build 
directories to the classpath to allow for in source tree testing.
* If a usage function is defined, the following will trigger a help message if 
it is given in the option path to the shell script: --? -? ? --help -help -h 
help 
* 

[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine

2014-07-14 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061397#comment-14061397
 ] 

Konstantin Boudnik commented on HADOOP-10641:
-

bq. I'm not comfortable with committing this to Hadoop trunk before it's 
actually something that Hadop trunk will use.
This is a chicken-n-egg problem, don't you think? You don't want to get this 
piece into common before something in the trunk will use it. However, it isn't 
possible to have _anything_ in the trunk use the API until it is committed. 
Am I missing anything?

 Introduce Coordination Engine
 -

 Key: HADOOP-10641
 URL: https://issues.apache.org/jira/browse/HADOOP-10641
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Attachments: HADOOP-10641.patch, HADOOP-10641.patch, 
 HADOOP-10641.patch, hadoop-coordination.patch


 Coordination Engine (CE) is a system that allows agreement on a sequence of 
 events in a distributed system. In order to be reliable, CE should itself be 
 distributed.
 Coordination Engine can be based on different algorithms (paxos, raft, 2PC, 
 zab) and have different implementations, depending on use cases, reliability, 
 availability, and performance requirements.
 CE should have a common API, so that it could serve as a pluggable component 
 in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and 
 HBase (HBASE-10909).
 First implementation is proposed to be based on ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-07-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061399#comment-14061399
 ] 

Allen Wittenauer commented on HADOOP-9902:
--

This same patch applies to branch-2, if someone wants to play with it on a 
system relatively closer to live.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Attachments: HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
 hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1

2014-07-14 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-10816:


Attachment: HADOOP-10816.001.patch

 key shell returns -1 to the shell on error, should be 1
 ---

 Key: HADOOP-10816
 URL: https://issues.apache.org/jira/browse/HADOOP-10816
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Mike Yoder
Assignee: Mike Yoder
 Attachments: HADOOP-10816.001.patch


 I've seen this in several places now - commands returning -1 on failure to 
 the shell. It's a bug. Someone confused their posix style returns (0 on 
 success, < 0 on failure) with program returns, which are an unsigned 
 character. Thus, a return of -1 actually becomes 255 to the shell.
 {noformat}
 $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms 
 --attr a=a --attr a=b
 Each attribute must correspond to only one value:
 atttribute a was repeated
 ...
 $ echo $?
 255
 {noformat}
 A return value of 1 instead of -1 does the right thing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1

2014-07-14 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-10816:


   Fix Version/s: 3.0.0
Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

Patch available: changes return value, added javadoc, changes test code to 
match.

 key shell returns -1 to the shell on error, should be 1
 ---

 Key: HADOOP-10816
 URL: https://issues.apache.org/jira/browse/HADOOP-10816
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Mike Yoder
Assignee: Mike Yoder
 Fix For: 3.0.0

 Attachments: HADOOP-10816.001.patch


 I've seen this in several places now - commands returning -1 on failure to 
 the shell. It's a bug. Someone confused their posix style returns (0 on 
 success, < 0 on failure) with program returns, which are an unsigned 
 character. Thus, a return of -1 actually becomes 255 to the shell.
 {noformat}
 $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms 
 --attr a=a --attr a=b
 Each attribute must correspond to only one value:
 atttribute a was repeated
 ...
 $ echo $?
 255
 {noformat}
 A return value of 1 instead of -1 does the right thing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10720) KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API

2014-07-14 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-10720:
-

Attachment: HADOOP-10720.8.patch

Updated patch:
* Modified ValueQueue to handle a race condition where multiple tasks to refill 
the queue for a single key were submitted, ensuring only one task is queued.
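
A rough sketch of that de-duplication idea, assuming a hypothetical 
RefillScheduler rather than the actual ValueQueue change: a per-key marker 
ensures only one refill task is queued even when several threads notice the 
shortage at the same time.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RefillScheduler {
  private final ConcurrentHashMap<String, Boolean> pending = new ConcurrentHashMap<>();
  private final ExecutorService pool = Executors.newFixedThreadPool(4);

  /** Queue a refill for the key only if one is not already pending. */
  public void maybeScheduleRefill(String keyName, Runnable refill) {
    if (pending.putIfAbsent(keyName, Boolean.TRUE) == null) {
      pool.submit(() -> {
        try {
          refill.run();
        } finally {
          pending.remove(keyName);  // allow the next refill once this one finishes
        }
      });
    }
    // Otherwise a refill for this key is already queued; do nothing.
  }
}
{code}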

 KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API
 ---

 Key: HADOOP-10720
 URL: https://issues.apache.org/jira/browse/HADOOP-10720
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: COMBO.patch, COMBO.patch, COMBO.patch, COMBO.patch, 
 COMBO.patch, HADOOP-10720.1.patch, HADOOP-10720.2.patch, 
 HADOOP-10720.3.patch, HADOOP-10720.4.patch, HADOOP-10720.5.patch, 
 HADOOP-10720.6.patch, HADOOP-10720.7.patch, HADOOP-10720.8.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch


 KMS client/server should implement support for generating encrypted keys and 
 decrypting them via the REST API being introduced by HADOOP-10719.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine

2014-07-14 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061447#comment-14061447
 ] 

Aaron T. Myers commented on HADOOP-10641:
-

I'm saying you should commit the Coordination Engine interface to the 
ConsensusNode feature branch and use it on that branch, and then at some point 
we may merge the whole branch to trunk, CE and CN simultaneously. This is 
exactly what I said previously:

{quote}
I'm fine with you proceeding with this on a development branch. That will give 
you an opportunity to commit the coordination engine interface and start making 
progress on HDFS-6469. If and when that materializes as a stable system that 
the community wants to adopt into Hadoop, then we'll merge it back to trunk 
just like we've done with many large features that are better accomplished via 
multiple JIRAs and doing the work piecemeal.
{quote}

 Introduce Coordination Engine
 -

 Key: HADOOP-10641
 URL: https://issues.apache.org/jira/browse/HADOOP-10641
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Attachments: HADOOP-10641.patch, HADOOP-10641.patch, 
 HADOOP-10641.patch, hadoop-coordination.patch


 Coordination Engine (CE) is a system that allows agreement on a sequence of 
 events in a distributed system. In order to be reliable, CE should itself be 
 distributed.
 Coordination Engine can be based on different algorithms (paxos, raft, 2PC, 
 zab) and have different implementations, depending on use cases, reliability, 
 availability, and performance requirements.
 CE should have a common API, so that it could serve as a pluggable component 
 in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and 
 HBase (HBASE-10909).
 First implementation is proposed to be based on ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1

2014-07-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061459#comment-14061459
 ] 

Andrew Wang commented on HADOOP-10816:
--

Looks good overall. Do we want the tests to instead be doing assertEquals(1) 
rather than assertNotEquals(0)? If someone changed a return 1 to a return 2 
that would technically be an incompatible change, so it'd be good to have this 
contract tested.

 key shell returns -1 to the shell on error, should be 1
 ---

 Key: HADOOP-10816
 URL: https://issues.apache.org/jira/browse/HADOOP-10816
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Mike Yoder
Assignee: Mike Yoder
 Fix For: 3.0.0

 Attachments: HADOOP-10816.001.patch


 I've seen this in several places now - commands returning -1 on failure to 
 the shell. It's a bug. Someone confused their posix style returns (0 on 
 success, < 0 on failure) with program returns, which are an unsigned 
 character. Thus, a return of -1 actually becomes 255 to the shell.
 {noformat}
 $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms 
 --attr a=a --attr a=b
 Each attribute must correspond to only one value:
 atttribute a was repeated
 ...
 $ echo $?
 255
 {noformat}
 A return value of 1 instead of -1 does the right thing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10468) TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately

2014-07-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061465#comment-14061465
 ] 

Akira AJISAKA commented on HADOOP-10468:


bq. So I propose we commit this to trunk, branch-2, and branch-2.5 and track 
the proposed change to trunk in a separate JIRA where we can discuss whether 
the backwards compatibility breakage is worth it. Objections?
I agree to the proposal. Thanks for the comments, [~jlowe] and [~wheat9]. 


 TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately
 ---

 Key: HADOOP-10468
 URL: https://issues.apache.org/jira/browse/HADOOP-10468
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-10468.000.patch, HADOOP-10468.001.patch, 
 HADOOP-10468.2.patch


 {{TestMetricsSystemImpl.testMultiThreadedPublish}} can fail intermittently 
 due to the insufficient size of the sink queue:
 {code}
 2014-04-06 21:34:55,269 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,270 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 2014-04-06 21:34:55,271 WARN  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
 queue and can't consume the given metrics.
 {code}
 The unit test should increase the default queue size to avoid intermittent 
 failure.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061481#comment-14061481
 ] 

Hadoop QA commented on HADOOP-10816:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12655637/HADOOP-10816.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.shell.TestCopyPreserveFlag
  org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
  org.apache.hadoop.fs.shell.TestTextCommand
  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
  org.apache.hadoop.fs.shell.TestPathData
  org.apache.hadoop.fs.TestDFVariations

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4267//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4267//console

This message is automatically generated.

 key shell returns -1 to the shell on error, should be 1
 ---

 Key: HADOOP-10816
 URL: https://issues.apache.org/jira/browse/HADOOP-10816
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Mike Yoder
Assignee: Mike Yoder
 Fix For: 3.0.0

 Attachments: HADOOP-10816.001.patch


 I've seen this in several places now - commands returning -1 on failure to 
 the shell. It's a bug. Someone confused their posix style returns (0 on 
 success, < 0 on failure) with program returns, which are an unsigned 
 character. Thus, a return of -1 actually becomes 255 to the shell.
 {noformat}
 $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms 
 --attr a=a --attr a=b
 Each attribute must correspond to only one value:
 atttribute a was repeated
 ...
 $ echo $?
 255
 {noformat}
 A return value of 1 instead of -1 does the right thing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1

2014-07-14 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061488#comment-14061488
 ] 

Mike Yoder commented on HADOOP-10816:
-

{quote}
Do we want the tests to instead be doing assertEquals(1)
{quote}
Yeah, I was thinking about this... to be pedantic, the contract (as specified 
only in comments) only says "small positive integer" and not "must be 1".  
Therefore one could say the test is correct.  If in the future we add another 
return code, say 2, it would still mean failure, but a failure mode that 
would differentiate itself from 1 - and the test code would still be correct, 
although we'd want to add additional tests.

Or, we could change "small positive integer" to "1" in the comments, and 
enforce the return of 1 on failure as you suggest.

Either way works and doesn't matter that much to me.
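
For reference, a self-contained illustration of the truncation described in 
this issue (not the KeyShell code itself): an exit status is reduced modulo 256 
on the way to the shell, so -1 shows up as 255 in $?.

{code}
public class ExitCodeDemo {
  public static void main(String[] args) {
    // The JVM hands this value to the OS; the shell only sees it modulo 256,
    // so -1 appears as 255 rather than a conventional failure code of 1.
    System.exit(-1);
  }
}
{code}

After compiling, running {{java ExitCodeDemo; echo $?}} prints 255, which is why 
returning 1 (and asserting on that value, as discussed above) keeps the contract 
unambiguous.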


 key shell returns -1 to the shell on error, should be 1
 ---

 Key: HADOOP-10816
 URL: https://issues.apache.org/jira/browse/HADOOP-10816
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Mike Yoder
Assignee: Mike Yoder
 Fix For: 3.0.0

 Attachments: HADOOP-10816.001.patch


 I've seen this in several places now - commands returning -1 on failure to 
 the shell. It's a bug. Someone confused their posix style returns (0 on 
 success, < 0 on failure) with program returns, which are an unsigned 
 character. Thus, a return of -1 actually becomes 255 to the shell.
 {noformat}
 $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms 
 --attr a=a --attr a=b
 Each attribute must correspond to only one value:
 atttribute a was repeated
 ...
 $ echo $?
 255
 {noformat}
 A return value of 1 instead of -1 does the right thing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10735) Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native support.

2014-07-14 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10735:


Attachment: HADOOP-10735.006.patch

Thanks [~cmccabe], update the patch.

 Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native 
 support.
 -

 Key: HADOOP-10735
 URL: https://issues.apache.org/jira/browse/HADOOP-10735
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10735.001.patch, HADOOP-10735.002.patch, 
 HADOOP-10735.003.patch, HADOOP-10735.004.patch, HADOOP-10735.005.patch, 
 HADOOP-10735.006.patch


 If there is no native support, or the OpenSSL version is too low to support 
 AES-CTR, but {{OpensslAesCtrCryptoCodec}} is configured, we need to fall back 
 to the JCE implementation.
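
A minimal sketch of the fallback decision described above (illustrative only; 
the selection logic and labels are assumptions, not the actual patch): when 
native OpenSSL AES-CTR support is missing, choose a JCE-backed implementation, 
which in turn needs the provider to offer AES/CTR/NoPadding.

{code}
import javax.crypto.Cipher;

public class AesCtrFallbackDemo {
  /** Prefer the OpenSSL-backed codec when native support is present,
   *  otherwise fall back to a JCE-backed implementation. */
  static String chooseCodec(boolean nativeOpensslAesCtrAvailable) {
    return nativeOpensslAesCtrAvailable
        ? "OpensslAesCtrCryptoCodec"       // name taken from the issue summary
        : "JCE AES/CTR implementation";    // hypothetical label for the fallback
  }

  public static void main(String[] args) throws Exception {
    // The JCE fallback depends on the provider supporting this transformation.
    Cipher.getInstance("AES/CTR/NoPadding");
    System.out.println("Selected: " + chooseCodec(false));
  }
}
{code}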



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1

2014-07-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061513#comment-14061513
 ] 

Andrew Wang commented on HADOOP-10816:
--

I think we should still say "small positive integer", but also test that it 
returns 1 right now. I think this will let us add new return codes later, but 
also make sure that the code for a particular error doesn't change between 
releases. Does this make sense to you too? Thanks Mike. :)

 key shell returns -1 to the shell on error, should be 1
 ---

 Key: HADOOP-10816
 URL: https://issues.apache.org/jira/browse/HADOOP-10816
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Mike Yoder
Assignee: Mike Yoder
 Fix For: 3.0.0

 Attachments: HADOOP-10816.001.patch


 I've seen this in several places now - commands returning -1 on failure to 
 the shell. It's a bug. Someone confused their posix style returns (0 on 
 success, < 0 on failure) with program returns, which are an unsigned 
 character. Thus, a return of -1 actually becomes 255 to the shell.
 {noformat}
 $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms 
 --attr a=a --attr a=b
 Each attribute must correspond to only one value:
 atttribute a was repeated
 ...
 $ echo $?
 255
 {noformat}
 A return value of 1 instead of -1 does the right thing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10720) KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API

2014-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061516#comment-14061516
 ] 

Hadoop QA commented on HADOOP-10720:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655638/HADOOP-10720.8.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

  org.apache.hadoop.fs.shell.TestCopyPreserveFlag
  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
  org.apache.hadoop.fs.TestSymlinkLocalFSFileContext
  org.apache.hadoop.fs.shell.TestTextCommand
  org.apache.hadoop.ipc.TestIPC
  org.apache.hadoop.fs.TestSymlinkLocalFSFileSystem
  org.apache.hadoop.fs.shell.TestPathData
  org.apache.hadoop.fs.TestDFVariations

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4268//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4268//console

This message is automatically generated.

 KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API
 ---

 Key: HADOOP-10720
 URL: https://issues.apache.org/jira/browse/HADOOP-10720
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: COMBO.patch, COMBO.patch, COMBO.patch, COMBO.patch, 
 COMBO.patch, HADOOP-10720.1.patch, HADOOP-10720.2.patch, 
 HADOOP-10720.3.patch, HADOOP-10720.4.patch, HADOOP-10720.5.patch, 
 HADOOP-10720.6.patch, HADOOP-10720.7.patch, HADOOP-10720.8.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch


 KMS client/server should implement support for generating encrypted keys and 
 decrypting them via the REST API being introduced by HADOOP-10719.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-07-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14061587#comment-14061587
 ] 

Akira AJISAKA commented on HADOOP-10480:


The patch is not related to the failure because HDFS-6506 tracks it.

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10480.patch


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 {noformat}
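
Most of the warnings above are of the reliance-on-default-encoding kind. A 
minimal sketch of the usual fix (illustrative, not the HADOOP-10480 patch 
itself): replace constructors that silently use the platform default charset 
with ones that take an explicit charset.

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetExample {
  public static void writeDfsUsed(File file, long dfsUsed) throws IOException {
    // Instead of new FileWriter(file), which relies on the platform default
    // encoding, write through a Writer with an explicit charset.
    try (Writer out = new OutputStreamWriter(
        new FileOutputStream(file), StandardCharsets.UTF_8)) {
      out.write(Long.toString(dfsUsed));
    }
  }
}
{code}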



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10735) Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native support.

2014-07-14 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10735:


Attachment: HADOOP-10693.6.patch

 Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native 
 support.
 -

 Key: HADOOP-10735
 URL: https://issues.apache.org/jira/browse/HADOOP-10735
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10735.001.patch, HADOOP-10735.002.patch, 
 HADOOP-10735.003.patch, HADOOP-10735.004.patch, HADOOP-10735.005.patch, 
 HADOOP-10735.006.patch


 If there is no native support, or the OpenSSL version is too low to support 
 AES-CTR, but {{OpensslAesCtrCryptoCodec}} is configured, we need to fall back 
 to the JCE implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10735) Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native support.

2014-07-14 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10735:


Attachment: HADOOP-10735.006.patch

 Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native 
 support.
 -

 Key: HADOOP-10735
 URL: https://issues.apache.org/jira/browse/HADOOP-10735
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10735.001.patch, HADOOP-10735.002.patch, 
 HADOOP-10735.003.patch, HADOOP-10735.004.patch, HADOOP-10735.005.patch, 
 HADOOP-10735.006.patch


 If there is no native support, or the OpenSSL version is too low to support 
 AES-CTR, but {{OpensslAesCtrCryptoCodec}} is configured, we need to fall back 
 to the JCE implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10735) Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native support.

2014-07-14 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10735:


Attachment: (was: HADOOP-10693.6.patch)

 Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native 
 support.
 -

 Key: HADOOP-10735
 URL: https://issues.apache.org/jira/browse/HADOOP-10735
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10735.001.patch, HADOOP-10735.002.patch, 
 HADOOP-10735.003.patch, HADOOP-10735.004.patch, HADOOP-10735.005.patch, 
 HADOOP-10735.006.patch


 If there is no native support, or the OpenSSL version is too low to support 
 AES-CTR, but {{OpensslAesCtrCryptoCodec}} is configured, we need to fall back 
 to the JCE implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10735) Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native support.

2014-07-14 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10735:


Attachment: (was: HADOOP-10735.006.patch)

 Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native 
 support.
 -

 Key: HADOOP-10735
 URL: https://issues.apache.org/jira/browse/HADOOP-10735
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10735.001.patch, HADOOP-10735.002.patch, 
 HADOOP-10735.003.patch, HADOOP-10735.004.patch, HADOOP-10735.005.patch, 
 HADOOP-10735.006.patch


 If there is no native support, or the OpenSSL version is too low to support 
 AES-CTR, but {{OpensslAesCtrCryptoCodec}} is configured, we need to fall back 
 to the JCE implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)