[jira] [Commented] (HADOOP-8469) Make NetworkTopology class pluggable

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672873#comment-13672873
 ] 

Hudson commented on HADOOP-8469:


Integrated in Hadoop-trunk-Commit #3836 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3836/])
Move HADOOP-8469 and HADOOP-8470 to 2.1.0-beta in CHANGES.txt. (Revision 
1488873)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488873
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Make NetworkTopology class pluggable
 

 Key: HADOOP-8469
 URL: https://issues.apache.org/jira/browse/HADOOP-8469
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 1.2.0, 2.1.0-beta

 Attachments: HADOOP-8469-branch-2-02.patch, 
 HADOOP-8469-branch-2.patch, HADOOP-8469-NetworkTopology-pluggable.patch, 
 HADOOP-8469-NetworkTopology-pluggable-v2.patch, 
 HADOOP-8469-NetworkTopology-pluggable-v3.patch, 
 HADOOP-8469-NetworkTopology-pluggable-v4.patch, 
 HADOOP-8469-NetworkTopology-pluggable-v5.patch


 The class NetworkTopology is where the three-layer hierarchical topology is 
 modeled in the current code base; it is instantiated directly by the 
 DatanodeManager and Balancer.
 To support alternative topologies, changes were made to make the topology 
 class pluggable, that is, to support a user-specified topology class named in 
 the Hadoop configuration file core-default.xml. The user-specified topology 
 class is instantiated using reflection, in the same manner as other 
 customizable classes in Hadoop. If no user-specified topology class is found, 
 the fallback is NetworkTopology, which preserves current behavior. To make it 
 possible to reuse code in NetworkTopology, several minor changes were made to 
 make the class more extensible. The NetworkTopology class is currently 
 annotated with @InterfaceAudience.LimitedPrivate({HDFS, MapReduce}) and 
 @InterfaceStability.Unstable.
 The proposed changes to NetworkTopology are listed below:
 1. Some fields were changed from private to protected
 2. Added some protected methods so that subclasses can override behavior
 3. Added a new method, isNodeGroupAware, to NetworkTopology
 4. The inner class InnerNode was made a package-protected class so it would 
 be easier to subclass
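The reflection-with-fallback approach the description outlines can be sketched in a few lines of plain Java. This is a minimal, self-contained illustration, not the actual Hadoop code: the key name "net.topology.impl" and the class shapes below are assumptions for the example.

```java
// A minimal, self-contained sketch of reflection-based instantiation with a
// fallback, analogous to resolving a user-specified topology class from
// configuration. Class shapes and the key name are illustrative assumptions.
import java.util.Properties;

class NetworkTopology {
    public String name() { return "three-layer"; }
}

class NetworkTopologyWithNodeGroup extends NetworkTopology {
    @Override public String name() { return "four-layer"; }
}

public class TopologyFactory {
    // The key name "net.topology.impl" is an assumption for this example.
    static final String TOPOLOGY_IMPL_KEY = "net.topology.impl";

    public static NetworkTopology getInstance(Properties conf) {
        String impl = conf.getProperty(TOPOLOGY_IMPL_KEY);
        if (impl == null) {
            return new NetworkTopology(); // fallback preserves current behavior
        }
        try {
            return (NetworkTopology) Class.forName(impl)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Cannot instantiate topology " + impl, e);
        }
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(getInstance(conf).name()); // prints "three-layer"
        conf.setProperty(TOPOLOGY_IMPL_KEY, "NetworkTopologyWithNodeGroup");
        System.out.println(getInstance(conf).name()); // prints "four-layer"
    }
}
```

Any misconfigured class name surfaces as a wrapped exception rather than a silent fallback, which matches the "instantiate via reflection, fall back to NetworkTopology only when nothing is configured" behavior described.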

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8470) Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672874#comment-13672874
 ] 

Hudson commented on HADOOP-8470:


Integrated in Hadoop-trunk-Commit #3836 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3836/])
Move HADOOP-8469 and HADOOP-8470 to 2.1.0-beta in CHANGES.txt. (Revision 
1488873)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488873
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Implementation of 4-layer subclass of NetworkTopology 
 (NetworkTopologyWithNodeGroup)
 

 Key: HADOOP-8470
 URL: https://issues.apache.org/jira/browse/HADOOP-8470
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 1.2.0, 2.1.0-beta

 Attachments: HADOOP-8470-branch-2.patch, 
 HADOOP-8470-NetworkTopology-new-impl.patch, 
 HADOOP-8470-NetworkTopology-new-impl-v2.patch, 
 HADOOP-8470-NetworkTopology-new-impl-v3.patch, 
 HADOOP-8470-NetworkTopology-new-impl-v4.patch


 To support the four-layer hierarchical topology shown in the attached figure, 
 NetworkTopologyWithNodeGroup was developed as a subclass of NetworkTopology, 
 along with unit tests. In NetworkTopologyWithNodeGroup, overriding the 
 methods add, remove, and pseudoSortByDistance was the most relevant work for 
 supporting the four-layer topology. The method pseudoSortByDistance selects 
 the nodes to use for reading data and sorts them in the sequence node-local, 
 nodegroup-local, rack-local, off-rack. Another slight change to 
 pseudoSortByDistance supports cases where the data node and node manager are 
 separated: if the reader cannot be found in the NetworkTopology tree (formed 
 by data nodes only), it will try to sort according to the reader's sibling 
 node in the tree.
 The distance calculation changes the weights from 0 (local), 2 (rack-local), 
 4 (off-rack) to 0 (local), 2 (nodegroup-local), 4 (rack-local), 6 (off-rack).
 The additional node group layer should be specified in the topology script or 
 table mapping, e.g. input: 10.1.1.1, output: /rack1/nodegroup1
 A subclass of InnerNode, InnerNodeWithNodeGroup, was also needed to support 
 NetworkTopologyWithNodeGroup.
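The revised weight scheme can be sketched as a small standalone method. Topology paths follow the /rack/nodegroup/host form implied by the mapping example above; the class and method names here are illustrative, not Hadoop's actual API.

```java
// A self-contained sketch of the revised weights: 0 (local),
// 2 (nodegroup-local), 4 (rack-local), 6 (off-rack). Names are illustrative.
public class NodeGroupDistance {
    public static int weight(String reader, String node) {
        if (reader.equals(node)) {
            return 0; // node-local
        }
        // Splitting "/rack1/nodegroup1/hostA" yields
        // ["", "rack1", "nodegroup1", "hostA"].
        String[] a = reader.split("/");
        String[] b = node.split("/");
        if (a.length >= 4 && b.length >= 4
                && a[1].equals(b[1]) && a[2].equals(b[2])) {
            return 2; // nodegroup-local: same rack, same node group
        }
        if (a.length >= 3 && b.length >= 3 && a[1].equals(b[1])) {
            return 4; // rack-local: same rack, different node group
        }
        return 6; // off-rack
    }

    public static void main(String[] args) {
        String reader = "/rack1/nodegroup1/hostA";
        System.out.println(weight(reader, "/rack1/nodegroup1/hostA")); // prints "0"
        System.out.println(weight(reader, "/rack1/nodegroup1/hostB")); // prints "2"
        System.out.println(weight(reader, "/rack1/nodegroup2/hostC")); // prints "4"
        System.out.println(weight(reader, "/rack2/nodegroup3/hostD")); // prints "6"
    }
}
```

Sorting candidate replicas by this weight reproduces the node-local, nodegroup-local, rack-local, off-rack read preference the description specifies.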



[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672934#comment-13672934
 ] 

Junping Du commented on HADOOP-9447:


Thanks for the patch, Steve. So you are suggesting we include more info in the 
exception, as in the log, so users can easily check the reason for a failure, 
right? The patch looks good to me. However, do we want to define a subclass of 
RuntimeException, e.g. MisConfigRuntimeException (constructed from the file 
name and e)? I think it could be generic enough for use elsewhere.

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems.
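A minimal sketch of the suggested wrapper, using the MisConfigRuntimeException name floated in the comment above (the class name and constructor are hypothetical, not an existing Hadoop class):

```java
// Hypothetical exception class: wraps a parse failure and carries the name
// of the failing configuration resource in its message.
public class MisConfigRuntimeException extends RuntimeException {
    public MisConfigRuntimeException(String resource, Throwable cause) {
        super("Failed to parse configuration resource " + resource
                + ": " + cause.getMessage(), cause);
    }

    public static void main(String[] args) {
        // A typical low-level XML parser failure, stripped of any file name.
        Throwable parseFailure = new Exception("Content is not allowed in prolog.");
        RuntimeException wrapped =
                new MisConfigRuntimeException("core-site.xml", parseFailure);
        // The resource name now travels with the exception text itself,
        // not just the log.
        System.out.println(wrapped.getMessage());
    }
}
```

Passing the original exception as the cause keeps the full stack trace while the message names the offending file.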



[jira] [Created] (HADOOP-9616) In branch-2, baseline of Javadoc warnings (specified in test-patch.properties) mismatches Javadoc warnings in current codebase

2013-06-03 Thread Junping Du (JIRA)
Junping Du created HADOOP-9616:
--

 Summary: In branch-2, baseline of Javadoc warnings (specified in 
test-patch.properties) mismatches Javadoc warnings in current codebase
 Key: HADOOP-9616
 URL: https://issues.apache.org/jira/browse/HADOOP-9616
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Junping Du
Assignee: Junping Du


Currently the baseline is set to 13 warnings, but there are 29 warnings now. 16 
warnings come from using Sun proprietary APIs, and 13 warnings come from 
incorrect links in the docs. I think we should at least fix the 13 doc-link 
warnings and set the baseline to 16.



[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672967#comment-13672967
 ] 

Steve Loughran commented on HADOOP-9447:


Sounds reasonable, though {{ConfigReadRuntimeException}} could be even better, 
as it would let the class be used for various configuration-reading problems 
without creating a whole tree of exception subclasses.

Do you want to extend the patch with this?




[jira] [Commented] (HADOOP-8469) Make NetworkTopology class pluggable

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672996#comment-13672996
 ] 

Hudson commented on HADOOP-8469:


Integrated in Hadoop-Yarn-trunk #229 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/229/])
Move HADOOP-8469 and HADOOP-8470 to 2.1.0-beta in CHANGES.txt. (Revision 
1488873)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488873
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt





[jira] [Commented] (HADOOP-8470) Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672999#comment-13672999
 ] 

Hudson commented on HADOOP-8470:


Integrated in Hadoop-Yarn-trunk #229 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/229/])
Move HADOOP-8469 and HADOOP-8470 to 2.1.0-beta in CHANGES.txt. (Revision 
1488873)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488873
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt





[jira] [Commented] (HADOOP-8470) Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673089#comment-13673089
 ] 

Hudson commented on HADOOP-8470:


Integrated in Hadoop-Hdfs-trunk #1419 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1419/])
Move HADOOP-8469 and HADOOP-8470 to 2.1.0-beta in CHANGES.txt. (Revision 
1488873)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488873
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt





[jira] [Commented] (HADOOP-8469) Make NetworkTopology class pluggable

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673086#comment-13673086
 ] 

Hudson commented on HADOOP-8469:


Integrated in Hadoop-Hdfs-trunk #1419 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1419/])
Move HADOOP-8469 and HADOOP-8470 to 2.1.0-beta in CHANGES.txt. (Revision 
1488873)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488873
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt





[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-06-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8545:
---

Status: Open  (was: Patch Available)

 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.3-alpha, 1.2.0
Reporter: Tim Miller
Assignee: Dmitry Mezhensky
  Labels: hadoop, patch
 Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
 HADOOP-8545-10.patch, HADOOP-8545-11.patch, HADOOP-8545-12.patch, 
 HADOOP-8545-13.patch, HADOOP-8545-14.patch, HADOOP-8545-15.patch, 
 HADOOP-8545-16.patch, HADOOP-8545-17.patch, HADOOP-8545-18.patch, 
 HADOOP-8545-19.patch, HADOOP-8545-1.patch, HADOOP-8545-20.patch, 
 HADOOP-8545-21.patch, HADOOP-8545-22.patch, HADOOP-8545-23.patch, 
 HADOOP-8545-24.patch, HADOOP-8545-25.patch, HADOOP-8545-2.patch, 
 HADOOP-8545-3.patch, HADOOP-8545-4.patch, HADOOP-8545-5.patch, 
 HADOOP-8545-6.patch, HADOOP-8545-7.patch, HADOOP-8545-8.patch, 
 HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, 
 HADOOP-8545.patch


 Add a filesystem implementation for the OpenStack Swift object store, similar 
 to the one that exists today for S3.



[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-06-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8545:
---

Target Version/s: 3.0.0, 2.1.0-beta  (was: 3.0.0)
  Status: Patch Available  (was: Open)

 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.3-alpha, 1.2.0
Reporter: Tim Miller
Assignee: Dmitry Mezhensky
  Labels: hadoop, patch
 Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
 HADOOP-8545-028.patch, HADOOP-8545-10.patch, HADOOP-8545-11.patch, 
 HADOOP-8545-12.patch, HADOOP-8545-13.patch, HADOOP-8545-14.patch, 
 HADOOP-8545-15.patch, HADOOP-8545-16.patch, HADOOP-8545-17.patch, 
 HADOOP-8545-18.patch, HADOOP-8545-19.patch, HADOOP-8545-1.patch, 
 HADOOP-8545-20.patch, HADOOP-8545-21.patch, HADOOP-8545-22.patch, 
 HADOOP-8545-23.patch, HADOOP-8545-24.patch, HADOOP-8545-25.patch, 
 HADOOP-8545-2.patch, HADOOP-8545-3.patch, HADOOP-8545-4.patch, 
 HADOOP-8545-5.patch, HADOOP-8545-6.patch, HADOOP-8545-7.patch, 
 HADOOP-8545-8.patch, HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, 
 HADOOP-8545.patch, HADOOP-8545.patch


 Add a filesystem implementation for the OpenStack Swift object store, similar 
 to the one that exists today for S3.



[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-06-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8545:
---

Attachment: HADOOP-8545-028.patch

Attaching the latest patch, HADOOP-8545-028. We've been running this for a 
couple of weeks and it is essentially stable. I think it's ready for checking 
in and adding to the next beta release, where we can see what issues crop up 
in the hands of users.

I've documented how to run the tests in the site documentation. If any 
reviewer wants to run the tests, read that and, if it's not clear, contact me. 
Be warned: the tests take an hour to run against a public Swift service, 
because they get throttled; resilience to throttling is actually what the 
tests verify.

 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 1.2.0, 2.0.3-alpha
Reporter: Tim Miller
Assignee: Dmitry Mezhensky
  Labels: hadoop, patch
 Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
 HADOOP-8545-028.patch, HADOOP-8545-10.patch, HADOOP-8545-11.patch, 
 HADOOP-8545-12.patch, HADOOP-8545-13.patch, HADOOP-8545-14.patch, 
 HADOOP-8545-15.patch, HADOOP-8545-16.patch, HADOOP-8545-17.patch, 
 HADOOP-8545-18.patch, HADOOP-8545-19.patch, HADOOP-8545-1.patch, 
 HADOOP-8545-20.patch, HADOOP-8545-21.patch, HADOOP-8545-22.patch, 
 HADOOP-8545-23.patch, HADOOP-8545-24.patch, HADOOP-8545-25.patch, 
 HADOOP-8545-2.patch, HADOOP-8545-3.patch, HADOOP-8545-4.patch, 
 HADOOP-8545-5.patch, HADOOP-8545-6.patch, HADOOP-8545-7.patch, 
 HADOOP-8545-8.patch, HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, 
 HADOOP-8545.patch, HADOOP-8545.patch


 Add a filesystem implementation for the OpenStack Swift object store, similar 
 to the one that exists today for S3.



[jira] [Commented] (HADOOP-8470) Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673138#comment-13673138
 ] 

Hudson commented on HADOOP-8470:


Integrated in Hadoop-Mapreduce-trunk #1445 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1445/])
Move HADOOP-8469 and HADOOP-8470 to 2.1.0-beta in CHANGES.txt. (Revision 
1488873)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488873
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt





[jira] [Commented] (HADOOP-8469) Make NetworkTopology class pluggable

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673135#comment-13673135
 ] 

Hudson commented on HADOOP-8469:


Integrated in Hadoop-Mapreduce-trunk #1445 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1445/])
Move HADOOP-8469 and HADOOP-8470 to 2.1.0-beta in CHANGES.txt. (Revision 
1488873)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1488873
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt





[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-06-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673152#comment-13673152
 ] 

Jason Lowe commented on HADOOP-9397:


This was initially checked into trunk then later merged to branch-2.  When it 
was merged to branch-2, the CHANGES.txt on branch-2 was updated to include this 
JIRA.  Are we supposed to be updating the trunk's version of CHANGES.txt every 
time we later merge something into another branch?  That has definitely not 
been happening for a lot of changes that have been subsequently pulled into 
branch-0.23.

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673155#comment-13673155
 ] 

Hadoop QA commented on HADOOP-8545:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585852/HADOOP-8545-028.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 29 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1168 javac 
compiler warnings (more than the trunk's current 1156 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-openstack 
hadoop-tools/hadoop-tools-dist.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2594//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2594//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-openstack.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2594//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2594//console

This message is automatically generated.

 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 1.2.0, 2.0.3-alpha
Reporter: Tim Miller
Assignee: Dmitry Mezhensky
  Labels: hadoop, patch
 Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
 HADOOP-8545-028.patch, HADOOP-8545-10.patch, HADOOP-8545-11.patch, 
 HADOOP-8545-12.patch, HADOOP-8545-13.patch, HADOOP-8545-14.patch, 
 HADOOP-8545-15.patch, HADOOP-8545-16.patch, HADOOP-8545-17.patch, 
 HADOOP-8545-18.patch, HADOOP-8545-19.patch, HADOOP-8545-1.patch, 
 HADOOP-8545-20.patch, HADOOP-8545-21.patch, HADOOP-8545-22.patch, 
 HADOOP-8545-23.patch, HADOOP-8545-24.patch, HADOOP-8545-25.patch, 
 HADOOP-8545-2.patch, HADOOP-8545-3.patch, HADOOP-8545-4.patch, 
 HADOOP-8545-5.patch, HADOOP-8545-6.patch, HADOOP-8545-7.patch, 
 HADOOP-8545-8.patch, HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, 
 HADOOP-8545.patch, HADOOP-8545.patch


 Add a filesystem implementation for OpenStack Swift object store, similar to 
 the one which exists today for S3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673183#comment-13673183
 ] 

Junping Du commented on HADOOP-9447:


Yes. ConfigReadRuntimeException sounds better. Ok. I am glad to deliver a patch 
soon. Thanks!

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-06-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673207#comment-13673207
 ] 

Jason Lowe commented on HADOOP-9397:


I updated trunk's CHANGES.txt to move HADOOP-9397 down into the 2.1.0 section, 
since that's what others are doing for trunk patches that have been merged to 
branch-2.

Note that there's a ton of stuff missing for branch-0.23 if the same should 
have been done there.  The only accurate CHANGES.txt for branch-0.23 is the one 
on branch-0.23 itself.  If we need to keep the CHANGES.txt up-to-date on all 
other branches when a change is merged between two branches, that's going to be 
fun times.

 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9397) Incremental dist tar build fails

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673209#comment-13673209
 ] 

Hudson commented on HADOOP-9397:


Integrated in Hadoop-trunk-Commit #3838 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3838/])
Move HADOOP-9397 to 2.1.0-beta after merging it into branch-2. (Revision 
1489026)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489026
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Incremental dist tar build fails
 

 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9397.1.patch


 Building a dist tar build when the dist tarball already exists from a 
 previous build fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-9447:
---

Attachment: HADOOP-9447-v3.patch

Extended the v2 patch with a newly defined exception in the v3 patch.

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9287) Parallel testing hadoop-common

2013-06-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673284#comment-13673284
 ] 

Jason Lowe commented on HADOOP-9287:


+1, branch-2 patch looks good to me.  I'll commit this to branch-2 later today 
to give others a chance to comment.

 Parallel testing hadoop-common
 --

 Key: HADOOP-9287
 URL: https://issues.apache.org/jira/browse/HADOOP-9287
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Tsuyoshi OZAWA
Assignee: Andrey Klochkov
 Fix For: 3.0.0

 Attachments: HADOOP-9287.1.patch, HADOOP-9287-branch-2--N1.patch, 
 HADOOP-9287--N3.patch, HADOOP-9287--N3.patch, HADOOP-9287--N4.patch, 
 HADOOP-9287--N5.patch, HADOOP-9287--N6.patch, HADOOP-9287--N7.patch, 
 HADOOP-9287.patch, HADOOP-9287.patch


 The maven surefire plugin supports a parallel testing feature. By using it, the 
 tests can run faster.
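
The surefire parallel feature is enabled through the plugin configuration, roughly as below. This is an illustrative fragment, not the configuration from the attached patches; the parallel mode and thread count shown are example values:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Run test classes concurrently; "methods" or "classesAndMethods"
         are also valid modes. -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```

Tests that share fixed ports, temp directories, or other global state typically need to be isolated before a mode like this is safe to turn on.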

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673252#comment-13673252
 ] 

Hadoop QA commented on HADOOP-9447:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585874/HADOOP-9447-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.conf.TestConfiguration

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2595//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2595//console

This message is automatically generated.

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9602) Trash#moveToAppropriateTrash should output logs of execution instead of STDOUT

2013-06-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673345#comment-13673345
 ] 

Daryn Sharp commented on HADOOP-9602:
-

I don't agree this should be a logging message.  The trash routine is only 
called by FsShell, so changing the output to a log message makes the output 
very noisy.

 Trash#moveToAppropriateTrash should output logs of execution instead of STDOUT
 --

 Key: HADOOP-9602
 URL: https://issues.apache.org/jira/browse/HADOOP-9602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-9602.1.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Currently, Trash#moveToAppropriateTrash outputs logs of execution to 
 STDOUT. It should use the logging feature instead, because the other components 
 output their logs via logging, and the inconsistency can confuse users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HADOOP-9617:
--

 Summary: HA HDFS client is too strict with validating URI 
authorities
 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
resolution of logical URIs. This has the side effect of changing the way Paths 
are verified when passed to a FileSystem instance created with an authority 
that differs from the authority of the Path. Previous to HADOOP-9150, a default 
port would be added to either authority in the event that either URI did not 
have a port. Post HADOOP-9150, no default port is added. This means that a 
FileSystem instance created using the URI hdfs://ha-logical-uri:8020 will no 
longer process paths containing just the authority hdfs://ha-logical-uri, and 
will throw an error like the following:

{noformat}
java.lang.IllegalArgumentException: Wrong FS: 
hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
hdfs://ns1:8020
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
{noformat}

Though this is not necessarily incorrect behavior, it is a 
backward-incompatible change that at least breaks certain clients' ability to 
connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9617:
---

Status: Patch Available  (was: Open)

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9617:
---

Attachment: HADOOP-9617.patch

The attached patch addresses the issue by changing FileSystem#checkPath to 
check whether the passed-in path lacks a port while this FS URI does contain 
one. In that case, the default port is added to the provided Path's authority 
before checking the authorities for equality.
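
The port-normalization idea can be illustrated with plain java.net.URI. This is a rough standalone sketch of the comparison described above, under the assumption that only host and port need reconciling; it is not the actual FileSystem#checkPath code from the patch:

```java
import java.net.URI;

// Sketch: decide whether a path URI belongs to a filesystem URI, letting a
// portless path authority inherit the filesystem's port before comparing.
public class CheckPathSketch {

    public static boolean matches(URI fsUri, URI pathUri) {
        // Schemes must agree (case-insensitive per RFC 3986).
        if (!fsUri.getScheme().equalsIgnoreCase(pathUri.getScheme())) {
            return false;
        }
        String fsHost = fsUri.getHost();
        String pathHost = pathUri.getHost();
        if (fsHost == null || pathHost == null) {
            return fsHost == null && pathHost == null;
        }
        if (!fsHost.equalsIgnoreCase(pathHost)) {
            return false;
        }
        int fsPort = fsUri.getPort();   // -1 when absent
        int pathPort = pathUri.getPort();
        // The fix: a path without a port inherits the filesystem's port.
        if (pathPort == -1) {
            pathPort = fsPort;
        }
        return fsPort == pathPort;
    }

    public static void main(String[] args) {
        URI fs = URI.create("hdfs://ns1:8020");
        // Before the fix, the first pair failed with "Wrong FS".
        System.out.println(matches(fs, URI.create("hdfs://ns1/user/hive")));  // true
        System.out.println(matches(fs, URI.create("hdfs://ns1:8020/tmp")));   // true
        System.out.println(matches(fs, URI.create("hdfs://ns2:8020/tmp")));   // false
    }
}
```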

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9614) smart-test-patch.sh hangs for new version of patch (2.7.1)

2013-06-03 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673457#comment-13673457
 ] 

Jonathan Eagles commented on HADOOP-9614:
-

+1. Looks good to me based on the change listed in patch's git repository:
remote.origin.url=git://git.savannah.gnu.org/patch.git
commit 3ccb16e10b7b4312e9b6096760ddc4c2d90194f2
Date:   Tue Apr 17 22:37:17 2012 +0200

Improve messages when in --dry-run mode

* src/patch.c (main): Say that we are checking a file and not that we are
patching it in --dry-run mode.  Don't say saving rejects to file when we
don't create reject files.
* tests/reject-format: Add rejects with --dry-run test case.
* tests/bad-filenames, tests/fifo, tests/mixed-patch-types: Update.



 smart-test-patch.sh hangs for new version of patch (2.7.1)
 --

 Key: HADOOP-9614
 URL: https://issues.apache.org/jira/browse/HADOOP-9614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HADOOP-9614.patch, HADOOP-9614.patch


 patch -p0 -E --dry-run prints "checking file" for the new version of 
 patch (2.7.1) rather than "patching file" as it did for older versions. This 
 causes TMP2 to become empty, which causes the script to hang on this command 
 forever:
 PREFIX_DIRS_AND_FILES=$(cut -d '/' -f 1 | sort | uniq)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673476#comment-13673476
 ] 

Daryn Sharp commented on HADOOP-9617:
-

I think something else is wrong, because I specifically made port/no-port work 
the last time I was in that code.  Canonicalizing the URI is supposed to handle 
it.

bq.  Though this is not necessarily incorrect behavior
I believe it's incorrect behavior.  Users should not have to fully qualify path 
authorities with ports.  It will break customers.

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673475#comment-13673475
 ] 

Hadoop QA commented on HADOOP-9617:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585908/HADOOP-9617.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2596//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2596//console

This message is automatically generated.

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673487#comment-13673487
 ] 

Todd Lipcon commented on HADOOP-9617:
-

[~daryn] -- maybe the reason why your code isn't taking effect is that we 
explicitly _don't_ call {{NetUtils.getCanonicalUri(...)}} for the case of 
logical authorities?

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673489#comment-13673489
 ] 

Aaron T. Myers commented on HADOOP-9617:


bq. I think something else is wrong because I specifically made port/no-port 
work the last time I was in that code. Canonicalizing the URI is supposed to 
handle it.

Sorry, not quite sure I follow. What else do you think is wrong?

bq. I believe it's incorrect behavior. Users should not have to fully qualify 
path authorities with ports. It will break customers.

I think this pattern of using FileSystem with Paths with different authorities 
is itself incorrect. One should not create a FileSystem object using one 
authority and then pass Paths to that FileSystem which use another authority. 
Instead, users should be calling Path#getFileSystem to get the proper FS 
associated with that Path.

Regardless, this issue is moot because the change was an inadvertent backward 
incompatible one, which I think we all agree should be fixed.

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9614) smart-test-patch.sh hangs for new version of patch (2.7.1)

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673500#comment-13673500
 ] 

Hudson commented on HADOOP-9614:


Integrated in Hadoop-trunk-Commit #3845 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3845/])
HADOOP-9614. smart-test-patch.sh hangs for new version of patch (2.7.1) 
(Ravi Prakash via jeagles) (Revision 1489136)

 Result = SUCCESS
jeagles : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489136
Files : 
* /hadoop/common/trunk/dev-support/smart-apply-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 smart-test-patch.sh hangs for new version of patch (2.7.1)
 --

 Key: HADOOP-9614
 URL: https://issues.apache.org/jira/browse/HADOOP-9614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HADOOP-9614.patch, HADOOP-9614.patch


 patch -p0 -E --dry-run prints "checking file ..." for the new version of 
 patch (2.7.1) rather than "patching file ..." as it did for older versions. This 
 causes TMP2 to become empty, which causes the script to hang on this command 
 forever:
 PREFIX_DIRS_AND_FILES=$(cut -d '/' -f 1 | sort | uniq)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9533) Centralized Hadoop SSO/Token Server

2013-06-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673514#comment-13673514
 ] 

Larry McCay commented on HADOOP-9533:
-

@Andrew - In preparation for our soon-to-be-announced security session 
get-together, I've spent some time trying to reconcile this HSSO Jira as a 
subtask of HADOOP-9392.
While I'm not opposed to making it a subtask, I'm just not able to discern from 
the current design document posted for HADOOP-9392 exactly how it would fit as 
a subtask.
Going into a design session with these two already aligned at the highest 
levels would probably be a better goal than trying to get there during that 
meeting.
If we could get a design document that is more focused on the space that 
HADOOP-9533 is addressing, then I think we could make great progress before the 
summit. 
It would also be helpful to be aware of what the other envisioned subtasks are 
for this effort. Without knowing how HSSO fits in as a subtask alone and along 
with others - I can't quite connect all the dots yet. Filing Jiras for your 
envisioned subtasks would probably be advantageous.
FYI - we are also in the process of determining the highest-level goals, 
threats and objectives for an authentication system that would have to replace 
Kerberos as central to Hadoop. This will be communicated separately - so that 
we can collaborate on those up front. This set of canonical goals could then 
serve as the foundation for our converged design work at the summit and beyond.

 Centralized Hadoop SSO/Token Server
 ---

 Key: HADOOP-9533
 URL: https://issues.apache.org/jira/browse/HADOOP-9533
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Larry McCay
 Attachments: HSSO-Interaction-Overview-rev-1.docx, 
 HSSO-Interaction-Overview-rev-1.pdf


 This is an umbrella Jira filing to oversee a set of proposals for introducing 
 a new master service for Hadoop Single Sign On (HSSO).
 There is an increasing need for pluggable authentication providers that 
 authenticate both users and services as well as validate tokens in order to 
 federate identities authenticated by trusted IDPs. These IDPs may be deployed 
 within the enterprise or may be third-party IDPs external to the enterprise.
 These needs speak to a specific pain point: a narrow integration path into 
 the enterprise identity infrastructure. Kerberos is a fine solution 
 for those that already have it in place or are willing to adopt its use but 
 there remains a class of user that finds this unacceptable and needs to 
 integrate with a wider variety of identity management solutions.
 Another specific pain point is that of rolling and distributing keys. A 
 related and integral part of the HSSO server is a library called the Credential 
 Management Framework (CMF), which will be a common library for easing the 
 management of secrets, keys and credentials.
 Initially, the existing delegation, block access and job tokens will continue 
 to be utilized. There may be some changes required to leverage a PKI based 
 signature facility rather than shared secrets. This is a means to simplify 
 the solution for the pain point of distributing shared secrets.
 This project will primarily centralize the responsibility of authentication 
 and federation into a single service that is trusted across the Hadoop 
 cluster and optionally across multiple clusters. This greatly simplifies a 
 number of things in the Hadoop ecosystem:
 1. a single token format that is used across all of Hadoop regardless of 
 authentication method
 2. a single service to have pluggable providers instead of all services
 3. a single token authority that would be trusted across the cluster/s and 
 through PKI encryption be able to easily issue cryptographically verifiable 
 tokens
 4. automatic rolling of the token authority’s keys and publishing of the 
 public key for easy access by those parties that need to verify incoming 
 tokens
 5. use of PKI for signatures eliminates the need for securely sharing and 
 distributing shared secrets
 In addition to serving as the internal Hadoop SSO service, this service will 
 be leveraged by the Knox Gateway from the cluster perimeter in order to 
 acquire the Hadoop cluster tokens. The same token mechanism that is used for 
 internal services will be used to represent user identities, providing for 
 interesting scenarios such as SSO across Hadoop clusters within an enterprise 
 and/or into the cloud.
 The HSSO service will consist of three major components and capabilities:
 1. Federating IDP – authenticates users/services and issues the common 
 Hadoop token
 2. Federating SP – validates the token of trusted external IDPs and issues 
 the common Hadoop token
 

[jira] [Updated] (HADOOP-9614) smart-test-patch.sh hangs for new version of patch (2.7.1)

2013-06-03 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-9614:


   Resolution: Fixed
Fix Version/s: 2.0.5-alpha
   0.23.8
   3.0.0
   Status: Resolved  (was: Patch Available)

 smart-test-patch.sh hangs for new version of patch (2.7.1)
 --

 Key: HADOOP-9614
 URL: https://issues.apache.org/jira/browse/HADOOP-9614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 3.0.0, 0.23.8, 2.0.5-alpha

 Attachments: HADOOP-9614.patch, HADOOP-9614.patch


 patch -p0 -E --dry-run prints "checking file ..." for the new version of 
 patch (2.7.1) rather than "patching file ..." as it did for older versions. This 
 causes TMP2 to become empty, which causes the script to hang on this command 
 forever:
 PREFIX_DIRS_AND_FILES=$(cut -d '/' -f 1 | sort | uniq)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673527#comment-13673527
 ] 

Daryn Sharp commented on HADOOP-9617:
-

It looks like Todd is right.  DFS isn't canonicalizing as expected, i.e. 
adding a default port, after the change to prevent attempts to resolve logical 
names.  I think it may be better to change 
{{DistributedFileSystem#canonicalizeUri}} to add the default port to a logical 
uri lacking a port so it conforms to the behavior expected by {{checkPath}}, 
instead of modifying {{checkPath}} itself?

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673547#comment-13673547
 ] 

Todd Lipcon commented on HADOOP-9617:
-

Wouldn't we have to add that in other places as well, like viewfs, etc? 

Here's another thought: what would happen if 
{{DistributedFileSystem.getDefaultPort()}} returned 0 if it's a logical URI? 
Would that fix the issue?

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities

2013-06-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673602#comment-13673602
 ] 

Daryn Sharp commented on HADOOP-9617:
-

Viewfs appears ok.  It falls back to the default impl of {{canonicalizeUri}} 
which uses {{if (uri.getPort() == -1 && getDefaultPort() > 0)}} to decide if it 
should add the default port.  So... I think it might just work if 
{{DFS#canonicalizeUri}} calls super when it's HA logical, instead of returning 
the uri as-is.

HA logical URIs are an interesting twist, where the port can be argued to be 
irrelevant.  However, just as context for my concern, there's a distinct 
possibility that our HA logical URIs will be the former primary NN's host 
address.  If so, it's critical that HA logical URIs have identical 
behavior to non-HA URIs.
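The default-port rule quoted above can be sketched with java.net.URI alone. This is a hypothetical, simplified model of FileSystem#canonicalizeUri's port handling, not the actual Hadoop implementation:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class CanonicalizeSketch {
    // Simplified model of the default FileSystem#canonicalizeUri behavior:
    // add the default port only when the URI has none and a default exists.
    static URI canonicalize(URI uri, int defaultPort) {
        if (uri.getPort() == -1 && defaultPort > 0) {
            try {
                return new URI(uri.getScheme(), null, uri.getHost(), defaultPort,
                        uri.getPath(), uri.getQuery(), uri.getFragment());
            } catch (URISyntaxException e) {
                throw new IllegalArgumentException(e);
            }
        }
        return uri; // already has a port, or no positive default to apply
    }

    public static void main(String[] args) {
        // With a positive default port, the port-less logical URI is canonicalized.
        System.out.println(canonicalize(URI.create("hdfs://ns1/a"), 8020));
        // With a non-positive default (cf. the getDefaultPort() == 0 suggestion),
        // the logical URI is returned untouched.
        System.out.println(canonicalize(URI.create("hdfs://ns1/a"), 0));
    }
}
```

Under this model, DFS#canonicalizeUri falling back to the super behavior for HA logical URIs would make the quoted condition add the default port and satisfy checkPath.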

 HA HDFS client is too strict with validating URI authorities
 

 Key: HADOOP-9617
 URL: https://issues.apache.org/jira/browse/HADOOP-9617
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, ha
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HADOOP-9617.patch


 HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS 
 resolution of logical URIs. This has the side effect of changing the way 
 Paths are verified when passed to a FileSystem instance created with an 
 authority that differs from the authority of the Path. Previous to 
 HADOOP-9150, a default port would be added to either authority in the event 
 that either URI did not have a port. Post HADOOP-9150, no default port is 
 added. This means that a FileSystem instance created using the URI 
 hdfs://ha-logical-uri:8020 will no longer process paths containing just the 
 authority hdfs://ha-logical-uri, and will throw an error like the following:
 {noformat}
 java.lang.IllegalArgumentException: Wrong FS: 
 hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: 
 hdfs://ns1:8020
   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
 {noformat}
 Though this is not necessarily incorrect behavior, it is a 
 backward-incompatible change that at least breaks certain clients' ability to 
 connect to an HA HDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9609) Remove sh dependency of bin-package target

2013-06-03 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673687#comment-13673687
 ] 

Ivan Mitic commented on HADOOP-9609:


Thanks Chuan for reporting the problem and for the patch!

Your patch seems to introduce a different behavior on Linux platforms. I don’t 
think we want to do this (see HADOOP-8037 for some history on what is done in 
packageBinNativeHadoop.sh). I see two possible alternatives we can move forward 
with:
1.  Use packageNativeHadoop.py for both the package and bin-package targets 
when on Windows, but still use packageBinNativeHadoop.sh when on Linux. This would 
be a quick and simple fix for this Jira.
2.  Create packageBinNativeHadoop.py that would be a Windows equivalent of 
packageBinNativeHadoop.sh. This would however require a bit more work, as we’d 
want to satisfy the idea behind the bin-package target, which is to produce a 
platform specific target (for example, on Windows this would be 
hadoop-x.y.z-amd64-bin.winpkg.zip). 

I am personally fine with both approaches. Let me know what you think. 

 Remove sh dependency of bin-package target
 --

 Key: HADOOP-9609
 URL: https://issues.apache.org/jira/browse/HADOOP-9609
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9609-branch-1-win.patch


 In Ant package target, we no longer use packageBinNativeHadoop.sh to place 
 native library binaries. However, the same script is still present in the 
 bin-package target. We should remove bin-package target's dependency on sh to 
 keep it consistent with package target.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-06-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673686#comment-13673686
 ] 

Larry McCay commented on HADOOP-8545:
-

Hi Steve - I notice that the potential for logging credentials exists in the 
SwiftRestClient.
While this is intended to only ever be done in testing and not production 
environments, I don't think that we should ever do it.

1. it provides an attacker a means to try to turn on credential logging
2. log files may be long-lived given regulatory compliance - we don't want to 
archive passwords

I would like to see this possibility removed.

My two cents.

--larry

 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 1.2.0, 2.0.3-alpha
Reporter: Tim Miller
Assignee: Dmitry Mezhensky
  Labels: hadoop, patch
 Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
 HADOOP-8545-028.patch, HADOOP-8545-10.patch, HADOOP-8545-11.patch, 
 HADOOP-8545-12.patch, HADOOP-8545-13.patch, HADOOP-8545-14.patch, 
 HADOOP-8545-15.patch, HADOOP-8545-16.patch, HADOOP-8545-17.patch, 
 HADOOP-8545-18.patch, HADOOP-8545-19.patch, HADOOP-8545-1.patch, 
 HADOOP-8545-20.patch, HADOOP-8545-21.patch, HADOOP-8545-22.patch, 
 HADOOP-8545-23.patch, HADOOP-8545-24.patch, HADOOP-8545-25.patch, 
 HADOOP-8545-2.patch, HADOOP-8545-3.patch, HADOOP-8545-4.patch, 
 HADOOP-8545-5.patch, HADOOP-8545-6.patch, HADOOP-8545-7.patch, 
 HADOOP-8545-8.patch, HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, 
 HADOOP-8545.patch, HADOOP-8545.patch


 Add a filesystem implementation for OpenStack Swift object store, similar to 
 the one which exists today for S3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY

2013-06-03 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9481:
--

Affects Version/s: 2.0.5-alpha

 Broken conditional logic with HADOOP_SNAPPY_LIBRARY
 ---

 Key: HADOOP-9481
 URL: https://issues.apache.org/jira/browse/HADOOP-9481
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.5-alpha
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-9481-trunk--N1.patch, HADOOP-9481-trunk--N4.patch


 The problem is a regression introduced by recent fix 
 https://issues.apache.org/jira/browse/HADOOP-8562 .
 That fix makes some improvements for Windows platform, but breaks native code 
 work on Unix.
 Namely, let's see the diff HADOOP-8562 of the file 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
  :  
 {noformat}
 --- 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
 +++ 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
 @@ -16,12 +16,18 @@
   * limitations under the License.
   */
 -#include <dlfcn.h>
 +
 +#if defined HADOOP_SNAPPY_LIBRARY
 +
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
 +#ifdef UNIX
 +#include <dlfcn.h>
  #include "config.h"
 +#endif // UNIX
 +
  #include "org_apache_hadoop_io_compress_snappy.h"
  #include "org_apache_hadoop_io_compress_snappy_SnappyCompressor.h"
 @@ -81,7 +87,7 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
    UNLOCK_CLASS(env, clazz, "SnappyCompressor");
    if (uncompressed_bytes == 0) {
 -    return 0;
 +    return (jint)0;
    }
    // Get the output direct buffer
 @@ -90,7 +96,7 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
    UNLOCK_CLASS(env, clazz, "SnappyCompressor");
    if (compressed_bytes == 0) {
 -    return 0;
 +    return (jint)0;
    }
    /* size_t should always be 4 bytes or larger. */
 @@ -109,3 +115,5 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
    (*env)->SetIntField(env, thisj, SnappyCompressor_uncompressedDirectBufLen, 
 0);
    return (jint)buf_len;
  }
 +
 +#endif //define HADOOP_SNAPPY_LIBRARY
 {noformat}
 Here we see that the whole class implementation got enclosed in an "#if 
 defined HADOOP_SNAPPY_LIBRARY" directive, and the point is that 
 HADOOP_SNAPPY_LIBRARY is *not* defined. 
 This causes the class implementation to be effectively empty, which, in turn, 
 causes an UnsatisfiedLinkError to be thrown at runtime upon any attempt 
 to invoke the native methods implemented there.
 The actual intention of the authors of HADOOP-8562 was (as we suppose) to 
 include "config.h", where HADOOP_SNAPPY_LIBRARY is defined. But 
 currently it is *not* included because it resides *inside* the "#if defined 
 HADOOP_SNAPPY_LIBRARY" block.
 The situation is similar with "#ifdef UNIX": the UNIX and WINDOWS macros are 
 defined in org_apache_hadoop.h, which is indirectly included through 
 "org_apache_hadoop_io_compress_snappy.h", but in the current code this 
 happens *after* the "#ifdef UNIX" check, so the "#ifdef UNIX" block is *not* 
 compiled on UNIX.
 The suggested patch fixes the described problems by reordering the "#include" 
 and "#if" preprocessor directives accordingly, bringing the methods of class 
 org.apache.hadoop.io.compress.snappy.SnappyCompressor back to work again.
 Of course, Snappy native libraries must be installed to build and invoke 
 snappy native methods.
 (Note: there was a typo in the commit message: 8952 was written in place of 8562: 
 HADOOP-8952. Enhancements to support Hadoop on Windows Server and Windows 
 Azure environments. Contributed by Ivan Mitic, Chuan Liu, Ramya Sunil, Bikas 
 Saha, Kanna Karanam, John Gordon, Brandon Li, Chris Nauroth, David Lao, 
 Sumadhur Reddy Bolli, Arpit Agarwal, Ahmed El Baz, Mike Liddell, Jing Zhao, 
 Thejas Nair, Steve Maine, Ganeshan Iyer, Raja Aluri, Giridharan Kesavan, 
 Ramya Bharathi Nimmagadda.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1453486 
 13f79535-47bb-0310-9956-ffa450edef68
 )

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9287) Parallel testing hadoop-common

2013-06-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-9287:
---

Fix Version/s: 2.1.0-beta

Thanks, Andrey!  I committed the branch-2 patch.

 Parallel testing hadoop-common
 --

 Key: HADOOP-9287
 URL: https://issues.apache.org/jira/browse/HADOOP-9287
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Tsuyoshi OZAWA
Assignee: Andrey Klochkov
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9287.1.patch, HADOOP-9287-branch-2--N1.patch, 
 HADOOP-9287--N3.patch, HADOOP-9287--N3.patch, HADOOP-9287--N4.patch, 
 HADOOP-9287--N5.patch, HADOOP-9287--N6.patch, HADOOP-9287--N7.patch, 
 HADOOP-9287.patch, HADOOP-9287.patch


 The maven surefire plugin supports a parallel testing feature. By using it, the 
 tests can run much faster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY

2013-06-03 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9481:
--

Fix Version/s: 2.0.5-alpha

 Broken conditional logic with HADOOP_SNAPPY_LIBRARY
 ---

 Key: HADOOP-9481
 URL: https://issues.apache.org/jira/browse/HADOOP-9481
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.5-alpha
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
Priority: Minor
 Fix For: 3.0.0, 2.0.5-alpha

 Attachments: HADOOP-9481-trunk--N1.patch, HADOOP-9481-trunk--N4.patch


 The problem is a regression introduced by recent fix 
 https://issues.apache.org/jira/browse/HADOOP-8562 .
 That fix makes some improvements for Windows platform, but breaks native code 
 work on Unix.
 Namely, let's see the diff HADOOP-8562 of the file 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
  :  
 {noformat}
 --- 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
 +++ 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
 @@ -16,12 +16,18 @@
   * limitations under the License.
   */
 -#include <dlfcn.h>
 +
 +#if defined HADOOP_SNAPPY_LIBRARY
 +
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
 +#ifdef UNIX
 +#include <dlfcn.h>
  #include "config.h"
 +#endif // UNIX
 +
  #include "org_apache_hadoop_io_compress_snappy.h"
  #include "org_apache_hadoop_io_compress_snappy_SnappyCompressor.h"
 @@ -81,7 +87,7 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
    UNLOCK_CLASS(env, clazz, "SnappyCompressor");
    if (uncompressed_bytes == 0) {
 -    return 0;
 +    return (jint)0;
    }
    // Get the output direct buffer
 @@ -90,7 +96,7 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
    UNLOCK_CLASS(env, clazz, "SnappyCompressor");
    if (compressed_bytes == 0) {
 -    return 0;
 +    return (jint)0;
    }
    /* size_t should always be 4 bytes or larger. */
 @@ -109,3 +115,5 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
    (*env)->SetIntField(env, thisj, SnappyCompressor_uncompressedDirectBufLen, 
 0);
    return (jint)buf_len;
  }
 +
 +#endif //define HADOOP_SNAPPY_LIBRARY
 {noformat}
 Here we see that the whole class implementation got enclosed in an "#if 
 defined HADOOP_SNAPPY_LIBRARY" directive, and the point is that 
 HADOOP_SNAPPY_LIBRARY is *not* defined. 
 This causes the class implementation to be effectively empty, which, in turn, 
 causes an UnsatisfiedLinkError to be thrown at runtime upon any attempt 
 to invoke the native methods implemented there.
 The actual intention of the authors of HADOOP-8562 was (as we suppose) to 
 include "config.h", where HADOOP_SNAPPY_LIBRARY is defined. But 
 currently it is *not* included because it resides *inside* the "#if defined 
 HADOOP_SNAPPY_LIBRARY" block.
 The situation is similar with "#ifdef UNIX": the UNIX and WINDOWS macros are 
 defined in org_apache_hadoop.h, which is indirectly included through 
 "org_apache_hadoop_io_compress_snappy.h", but in the current code this 
 happens *after* the "#ifdef UNIX" check, so the "#ifdef UNIX" block is *not* 
 compiled on UNIX.
 The suggested patch fixes the described problems by reordering the "#include" 
 and "#if" preprocessor directives accordingly, bringing the methods of class 
 org.apache.hadoop.io.compress.snappy.SnappyCompressor back to work again.
 Of course, Snappy native libraries must be installed to build and invoke 
 snappy native methods.
 (Note: there was a typo in the commit message: 8952 was written in place of 8562: 
 HADOOP-8952. Enhancements to support Hadoop on Windows Server and Windows 
 Azure environments. Contributed by Ivan Mitic, Chuan Liu, Ramya Sunil, Bikas 
 Saha, Kanna Karanam, John Gordon, Brandon Li, Chris Nauroth, David Lao, 
 Sumadhur Reddy Bolli, Arpit Agarwal, Ahmed El Baz, Mike Liddell, Jing Zhao, 
 Thejas Nair, Steve Maine, Ganeshan Iyer, Raja Aluri, Giridharan Kesavan, 
 Ramya Bharathi Nimmagadda.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1453486 
 13f79535-47bb-0310-9956-ffa450edef68
 )

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9287) Parallel testing hadoop-common

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673880#comment-13673880
 ] 

Hudson commented on HADOOP-9287:


Integrated in Hadoop-trunk-Commit #3850 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3850/])
Move HADOOP-9287 in CHANGES.txt after committing to branch-2 (Revision 
1489258)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489258
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Parallel testing hadoop-common
 --

 Key: HADOOP-9287
 URL: https://issues.apache.org/jira/browse/HADOOP-9287
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Tsuyoshi OZAWA
Assignee: Andrey Klochkov
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9287.1.patch, HADOOP-9287-branch-2--N1.patch, 
 HADOOP-9287--N3.patch, HADOOP-9287--N3.patch, HADOOP-9287--N4.patch, 
 HADOOP-9287--N5.patch, HADOOP-9287--N6.patch, HADOOP-9287--N7.patch, 
 HADOOP-9287.patch, HADOOP-9287.patch


 The maven surefire plugin supports a parallel testing feature. By using it, the 
 tests can be run faster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY

2013-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673897#comment-13673897
 ] 

Hudson commented on HADOOP-9481:


Integrated in Hadoop-trunk-Commit #3851 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3851/])
HADOOP-9481. Move from trunk to release 2.1.0 section (Revision 1489261)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489261
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Broken conditional logic with HADOOP_SNAPPY_LIBRARY
 ---

 Key: HADOOP-9481
 URL: https://issues.apache.org/jira/browse/HADOOP-9481
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.5-alpha
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
Priority: Minor
 Fix For: 3.0.0, 2.0.5-alpha

 Attachments: HADOOP-9481-trunk--N1.patch, HADOOP-9481-trunk--N4.patch


 The problem is a regression introduced by the recent fix 
 https://issues.apache.org/jira/browse/HADOOP-8562 .
 That fix makes some improvements for the Windows platform, but breaks the 
 native code on Unix.
 Namely, let's look at the HADOOP-8562 diff of the file 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
  :  
 {noformat}
 --- 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
 +++ 
 hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
 @@ -16,12 +16,18 @@
   * limitations under the License.
   */
 -#include <dlfcn.h>
 +
 +#if defined HADOOP_SNAPPY_LIBRARY
 +
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
 +#ifdef UNIX
 +#include <dlfcn.h>
  #include "config.h"
 +#endif // UNIX
 +
  #include "org_apache_hadoop_io_compress_snappy.h"
  #include "org_apache_hadoop_io_compress_snappy_SnappyCompressor.h"
 @@ -81,7 +87,7 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
UNLOCK_CLASS(env, clazz, "SnappyCompressor");
if (uncompressed_bytes == 0) {
 -return 0;
 +return (jint)0;
}
// Get the output direct buffer
 @@ -90,7 +96,7 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
UNLOCK_CLASS(env, clazz, "SnappyCompressor");
if (compressed_bytes == 0) {
 -return 0;
 +return (jint)0;
}
/* size_t should always be 4 bytes or larger. */
 @@ -109,3 +115,5 @@ JNIEXPORT jint JNICALL 
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompresso
(*env)->SetIntField(env, thisj, SnappyCompressor_uncompressedDirectBufLen, 
 0);
return (jint)buf_len;
  }
 +
 +#endif //define HADOOP_SNAPPY_LIBRARY
 {noformat}
 Here we see that the entire class implementation got enclosed in an #if 
 defined HADOOP_SNAPPY_LIBRARY directive, and the point is that 
 HADOOP_SNAPPY_LIBRARY is *not* defined. 
 This causes the class implementation to be effectively empty, which, in turn, 
 causes an UnsatisfiedLinkError to be thrown at runtime upon any attempt 
 to invoke the native methods implemented there.
 The actual intention of the authors of HADOOP-8562 was (we suppose) to 
 include config.h, where HADOOP_SNAPPY_LIBRARY is defined. But 
 currently it is *not* included, because it resides *inside* the #if defined 
 HADOOP_SNAPPY_LIBRARY block.
 A similar situation occurs with #ifdef UNIX: the UNIX and WINDOWS macros are 
 defined in org_apache_hadoop.h, which is indirectly included through 
 #include "org_apache_hadoop_io_compress_snappy.h", and in the current code 
 this happens *after* the #ifdef UNIX block, so in the current code the block 
 guarded by #ifdef UNIX is *not* compiled on UNIX.
 The suggested patch fixes the described problems by reordering the #include 
 and #if preprocessor directives accordingly, bringing the methods of class 
 org.apache.hadoop.io.compress.snappy.SnappyCompressor back to working order.
 Of course, the Snappy native libraries must be installed to build and invoke 
 the Snappy native methods.
 (Note: there was a typo in the commit message: 8952 was written in place of 8562: 
 HADOOP-8952. Enhancements to support Hadoop on Windows Server and Windows 
 Azure environments. Contributed by Ivan Mitic, Chuan Liu, Ramya Sunil, Bikas 
 Saha, Kanna Karanam, John Gordon, Brandon Li, Chris Nauroth, David Lao, 
 Sumadhur Reddy Bolli, Arpit Agarwal, Ahmed El Baz, Mike Liddell, Jing Zhao, 
 Thejas Nair, Steve Maine, Ganeshan Iyer, Raja Aluri, Giridharan Kesavan, 
 Ramya Bharathi Nimmagadda.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1453486 
 13f79535-47bb-0310-9956-ffa450edef68
 )

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY

2013-06-03 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9481:
--

Affects Version/s: (was: 2.0.5-alpha)
   2.1.0-beta

 Broken conditional logic with HADOOP_SNAPPY_LIBRARY
 ---

 Key: HADOOP-9481
 URL: https://issues.apache.org/jira/browse/HADOOP-9481
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
Priority: Minor
 Fix For: 3.0.0, 2.0.5-alpha

 Attachments: HADOOP-9481-trunk--N1.patch, HADOOP-9481-trunk--N4.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9481) Broken conditional logic with HADOOP_SNAPPY_LIBRARY

2013-06-03 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9481:
--

Fix Version/s: (was: 2.0.5-alpha)
   2.1.0-beta

 Broken conditional logic with HADOOP_SNAPPY_LIBRARY
 ---

 Key: HADOOP-9481
 URL: https://issues.apache.org/jira/browse/HADOOP-9481
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-9481-trunk--N1.patch, HADOOP-9481-trunk--N4.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673934#comment-13673934
 ] 

Junping Du commented on HADOOP-9447:


The test result looks a little strange. Where does 
test-config-TestConfiguration.xml come from? Kicking Jenkins again. :) 

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems
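A minimal Java sketch of the idea above. The helper name, resource name, and message format are hypothetical illustrations, not the actual HADOOP-9447 patch; the point is only that the failing resource's name ends up in the wrapped exception's text rather than only in the log.

```java
import java.io.IOException;

public class WrapWithResource {
    /* Hypothetical helper: wrap a parse failure so the message names the
       resource that was being parsed, letting callers see which file was
       at fault without consulting the log. */
    static RuntimeException wrapParseError(String resource, Exception cause) {
        return new RuntimeException(
            "error parsing conf " + resource + ": " + cause, cause);
    }

    public static void main(String[] args) {
        // Simulated parser failure while loading a (hypothetical) config file.
        Exception parseError = new IOException("premature end of file");
        RuntimeException wrapped = wrapParseError("core-site.xml", parseError);
        System.out.println(wrapped.getMessage());
    }
}
```

The original cause is kept as the wrapped exception's cause, so the full stack trace is preserved alongside the enriched message.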

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-9447:
---

Status: Open  (was: Patch Available)

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-9447:
---

Target Version/s: 2.1.0-beta
  Status: Patch Available  (was: Open)

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9602) Trash#moveToAppropriateTrash should output logs of execution instead of STDOUT

2013-06-03 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-9602:
---

Status: Open  (was: Patch Available)

 Trash#moveToAppropriateTrash should output logs of execution instead of STDOUT
 --

 Key: HADOOP-9602
 URL: https://issues.apache.org/jira/browse/HADOOP-9602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-9602.1.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Currently, Trash#moveToAppropriateTrash outputs logs of its execution to 
 STDOUT. It should use the logging framework, because the other components 
 output their logs via logging and the inconsistency can confuse users.
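A small sketch of the change the issue asks for: routing the message through a logger instead of System.out. Hadoop of this era uses Apache Commons Logging; java.util.logging is used here only to keep the example self-contained, and the method body and path are hypothetical.

```java
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

public class TrashLogging {
    // Illustrative only: the real Trash class obtains its logger via
    // Apache Commons Logging, but the before/after pattern is the same.
    private static final Logger LOG = Logger.getLogger("Trash");

    static void moveToAppropriateTrash(String path, StreamHandler handler) {
        // Before: System.out.println("Moved: '" + path + "' to trash");
        LOG.info("Moved: '" + path + "' to trash");
        handler.flush();  // StreamHandler buffers; flush so the record appears
    }

    public static void main(String[] args) {
        LOG.setUseParentHandlers(false);  // don't also write to stderr
        StreamHandler handler =
            new StreamHandler(System.out, new SimpleFormatter());
        LOG.addHandler(handler);
        moveToAppropriateTrash("/user/alice/tmp", handler);
    }
}
```

With a logging framework, users and external callers can redirect or silence these messages through the usual logger configuration rather than capturing STDOUT.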

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9602) Trash#moveToAppropriateTrash should output logs of execution instead of STDOUT

2013-06-03 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA resolved HADOOP-9602.


Resolution: Not A Problem

[~daryn]: Thanks for your review. One of our use cases is to reuse the Trash 
class from external programs, and I confirmed that it's just our problem after 
trying out the patch. I resolved this issue as Not A Problem.

 Trash#moveToAppropriateTrash should output logs of execution instead of STDOUT
 --

 Key: HADOOP-9602
 URL: https://issues.apache.org/jira/browse/HADOOP-9602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-9602.1.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Currently, Trash#moveToAppropriateTrash outputs logs of its execution to 
 STDOUT. It should use the logging framework, because the other components 
 output their logs via logging and the inconsistency can confuse users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673997#comment-13673997
 ] 

Junping Du commented on HADOOP-9447:


Oops. Someone just checked in changes for the unit tests and changed the config 
file from test-config.xml to test-config-TestConfiguration.xml. I will update 
the test case soon.

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-9447:
---

Attachment: HADOOP-9447-v4.patch

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch, HADOOP-9447-v4.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674006#comment-13674006
 ] 

Junping Du commented on HADOOP-9447:


Update in v4 patch with specifying relative resource name.

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch, HADOOP-9447-v4.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On

2013-06-03 Thread Kyle Leckie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674055#comment-13674055
 ] 

Kyle Leckie commented on HADOOP-9392:
-

Thanks for your thorough response Kai,

1) I agree on having support for tokens with pluggable token validation. Having 
the token contain an audience property in order to limit its scope should not 
add significant overhead but I take your point about having an initial 
implementation and progressing from there on an as needed basis.  
2)3) Thanks for the clarification. 

It seems that supporting pluggable token validation is a significant feature in 
itself and the TAS work can be layered on top. What do you think of having the 
token validation and transmission as a separate JIRA?
--
Kyle

 Token based authentication and Single Sign On
 -

 Key: HADOOP-9392
 URL: https://issues.apache.org/jira/browse/HADOOP-9392
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 3.0.0

 Attachments: token-based-authn-plus-sso.pdf


 This is an umbrella entry for one of project Rhino’s topics; for details of 
 project Rhino, please refer to 
 https://github.com/intel-hadoop/project-rhino/. The major goal for this entry 
 as described in project Rhino was: 
  
 “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication 
 at the RPC layer, via SASL. However this does not provide valuable attributes 
 such as group membership, classification level, organizational identity, or 
 support for user defined attributes. Hadoop components must interrogate 
 external resources for discovering these attributes and at scale this is 
 problematic. There is also no consistent delegation model. HDFS has a simple 
 delegation capability, and only Oozie can take limited advantage of it. We 
 will implement a common token based authentication framework to decouple 
 internal user and service authentication from external mechanisms used to 
 support it (like Kerberos)”
  
 We’d like to start our work from Hadoop-Common and try to provide common 
 facilities by extending the existing authentication framework to support:
 1. Pluggable token provider interface 
 2. Pluggable token verification protocol and interface
 3. Security mechanism to distribute secrets in cluster nodes
 4. Delegation model of user authentication
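The first two plug points in the list above could look roughly like the following Java sketch. All names here are hypothetical (none of these interfaces exist in Hadoop), and a real design would also have to cover secret distribution and delegation; this only shows how provider and verifier implementations could be swapped behind small interfaces.

```java
public class TokenPluggability {
    // Hypothetical plug point 1: token issuance.
    interface TokenProvider {
        byte[] issueToken(String principal);
    }

    // Hypothetical plug point 2: token verification.
    interface TokenVerifier {
        boolean verify(byte[] token);
    }

    public static void main(String[] args) {
        // Toy in-memory provider/verifier pair; real implementations could be
        // loaded reflectively from configuration, like other Hadoop plugins.
        TokenProvider provider = principal -> ("tok:" + principal).getBytes();
        TokenVerifier verifier = token -> new String(token).startsWith("tok:");

        byte[] token = provider.issueToken("alice");
        System.out.println(verifier.verify(token));  // prints "true"
    }
}
```

Keeping issuance and verification behind separate interfaces is what would let, say, a Kerberos-backed provider coexist with other mechanisms, as the description envisions.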

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9609) Remove sh dependency of bin-package target

2013-06-03 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9609:
--

Attachment: HADOOP-9609-branch-1-win-2.patch

Thanks for reviewing the patch! I did notice the difference between 
packageNativeHadoop and packageBinNativeHadoop.

After taking a look at HADOOP-8037, I think your approach 1 makes more sense. 
In HADOOP-8037, the main reason to create platform (x64 vs amd64) dependent 
destinations was to accommodate Linux packaging requirements. On Linux, we have 
install scripts that will install the binaries to different destinations 
(/usr/lib vs /usr/lib64) and set java.library.path accordingly in .deb and .rpm 
packages. On Windows, we don't do such things, and it makes more sense to use 
the 'package' settings, i.e. put the binary under 'lib/native'. Also, the 
platform is already part of the path, so there will be no confusion.

I still need to create a packageBinNativeHadoop.py because [Ant 
exec|http://ant.apache.org/manual/Tasks/exec.html] does not support an 'if' 
condition.

I have tested building on Windows and Ubuntu, and both pass.

 Remove sh dependency of bin-package target
 --

 Key: HADOOP-9609
 URL: https://issues.apache.org/jira/browse/HADOOP-9609
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: HADOOP-9609-branch-1-win-2.patch, 
 HADOOP-9609-branch-1-win.patch


 In Ant package target, we no longer use packageBinNativeHadoop.sh to place 
 native library binaries. However, the same script is still present in the 
 bin-package target. We should remove bin-package target's dependency on sh to 
 keep it consistent with package target.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674071#comment-13674071
 ] 

Hadoop QA commented on HADOOP-9447:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586038/HADOOP-9447-v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2597//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2597//console

This message is automatically generated.

 Configuration to include name of failing file/resource when wrapping an XML 
 parser exception
 

 Key: HADOOP-9447
 URL: https://issues.apache.org/jira/browse/HADOOP-9447
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, 
 HADOOP-9447-v3.patch, HADOOP-9447-v4.patch


 Currently, when there is an error parsing an XML file, the name of the file 
 at fault is logged, but not included in the (wrapped) XML exception. If that 
 same file/resource name were included in the text of the wrapped exception, 
 people would be able to find out which file was causing problems.
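
Hadoop's Configuration loader is Java, but the idea is language-neutral and can be sketched as follows; the function name and message format here are illustrative assumptions, not the actual patch:

```python
import xml.etree.ElementTree as ET


def parse_config(path):
    """Parse an XML configuration file, rewrapping any parse error so the
    failing file's name is part of the exception text itself, rather than
    only appearing in a log line."""
    try:
        return ET.parse(path)
    except ET.ParseError as e:
        # Include the resource name in the wrapped exception's message.
        raise RuntimeError("error parsing conf %s: %s" % (path, e)) from e
```

Anyone reading a stack trace from `parse_config` then sees the offending file name directly, with the original parser error preserved as the cause.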



[jira] [Commented] (HADOOP-9533) Centralized Hadoop SSO/Token Server

2013-06-03 Thread Kyle Leckie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13674075#comment-13674075
 ] 

Kyle Leckie commented on HADOOP-9533:
-

Hi Daryn, 
Yes, SASL could occur over SSL. With SSL we get protection from eavesdropping 
and tampering, and possibly server authentication. With that we can pass a 
bearer token over the network. Performing the SASL exchange would only slow 
down a request.

In addition, the Java SASL mechanisms seem out of date (see Moving DIGEST-MD5 
to Historic, http://tools.ietf.org/html/rfc6331, which also describes the 
issues with downgrade attacks). If we are going to bet on a piece of code that 
needs to stay updated, performant, and promptly patched, I would bet on the 
TLS code.
 
The SGT will only be handed to the HSSO and not the services such as the NN. 
The NN would get an NN specific token. 
--
Kyle

 Centralized Hadoop SSO/Token Server
 ---

 Key: HADOOP-9533
 URL: https://issues.apache.org/jira/browse/HADOOP-9533
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Larry McCay
 Attachments: HSSO-Interaction-Overview-rev-1.docx, 
 HSSO-Interaction-Overview-rev-1.pdf


 This is an umbrella Jira filing to oversee a set of proposals for introducing 
 a new master service for Hadoop Single Sign On (HSSO).
 There is an increasing need for pluggable authentication providers that 
 authenticate both users and services, as well as validate tokens, in order to 
 federate identities authenticated by trusted IDPs. These IDPs may be deployed 
 within the enterprise or may be third-party IDPs external to the enterprise.
 These needs speak to a specific pain point: the narrow integration path into 
 the enterprise identity infrastructure. Kerberos is a fine solution for those 
 that already have it in place or are willing to adopt its use, but there 
 remains a class of user that finds this unacceptable and needs to integrate 
 with a wider variety of identity management solutions.
 Another specific pain point is that of rolling and distributing keys. A 
 related and integral part of the HSSO server is a library called the 
 Credential Management Framework (CMF), which will be a common library for 
 easing the management of secrets, keys and credentials.
 Initially, the existing delegation, block access and job tokens will continue 
 to be utilized. There may be some changes required to leverage a PKI based 
 signature facility rather than shared secrets. This is a means to simplify 
 the solution for the pain point of distributing shared secrets.
 This project will primarily centralize the responsibility of authentication 
 and federation into a single service that is trusted across the Hadoop 
 cluster and optionally across multiple clusters. This greatly simplifies a 
 number of things in the Hadoop ecosystem:
 1. a single token format that is used across all of Hadoop regardless of 
 authentication method
 2. a single service to have pluggable providers instead of all services
 3. a single token authority that would be trusted across the cluster(s) and, 
 through PKI encryption, be able to easily issue cryptographically verifiable 
 tokens
 4. automatic rolling of the token authority’s keys and publishing of the 
 public key for easy access by those parties that need to verify incoming 
 tokens
 5. use of PKI for signatures eliminates the need for securely sharing and 
 distributing shared secrets
 In addition to serving as the internal Hadoop SSO service, this service will 
 be leveraged by the Knox Gateway from the cluster perimeter in order to 
 acquire the Hadoop cluster tokens. The same token mechanism that is used for 
 internal services will be used to represent user identities, providing for 
 interesting scenarios such as SSO across Hadoop clusters within an enterprise 
 and/or into the cloud.
 The HSSO service will be composed of three major components and capabilities:
 1. Federating IDP – authenticates users/services and issues the common 
 Hadoop token
 2. Federating SP – validates the tokens of trusted external IDPs and issues 
 the common Hadoop token
 3. Token Authority – management of the common Hadoop tokens, including: 
 a. Issuance 
 b. Renewal
 c. Revocation
 As this is a meta Jira for tracking this overall effort, the details of the 
 individual efforts will be submitted along with the child Jira filings.
 Hadoop-Common would seem to be the most appropriate home for such a service 
 and its related common facilities. We will also leverage and extend existing 
 common mechanisms as appropriate.


[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception

2013-06-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13674090#comment-13674090
 ] 

Junping Du commented on HADOOP-9447:


Steve, the patch is done. Would you help review it? Thanks!

