[jira] [Commented] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-15 Thread Radim Kolar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13275891#comment-13275891
 ] 

Radim Kolar commented on HADOOP-8268:
-

The command was wrong. libxml2 does not work well with XSD schemas; the 
Xerces-C 3 parser works fine:

find . -name '*.xml' -exec PParse -n -s -f -v=always {} \;

 A few pom.xml across Hadoop project may fail XML validation
 ---

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven, patch
 Attachments: HADOOP-8268.patch, hadoop-pom.txt


 In a few pom files there are embedded Ant commands that contain a shell 
 redirection character. This makes the XML file invalid, so these POM files 
 cannot be deployed to validating Maven repository managers such as Artifactory.
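A sketch of the kind of breakage involved (the Ant snippet below is illustrative, not taken from the actual Hadoop POMs): a raw '<' inside element content is not well-formed XML, so shell redirections embedded in Ant tasks must be escaped:

```xml
<!-- Illustrative only; not from the actual Hadoop build files. -->
<!-- Not well-formed: <echo>cmd < input.txt</echo>              -->
<!-- Escaped form, accepted by validating parsers:              -->
<echo>cmd &lt; input.txt</echo>
```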

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8398) Cleanup BlockLocation

2012-05-15 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8398:


Status: Patch Available  (was: Open)

 Cleanup BlockLocation
 -

 Key: HADOOP-8398
 URL: https://issues.apache.org/jira/browse/HADOOP-8398
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Attachments: hadoop-8398.txt


 Minor BlockLocation cleanup: remove dead imports, fix some incorrect 
 comments, and write better javadoc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-15 Thread Radim Kolar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radim Kolar updated HADOOP-8268:


Attachment: poms-patch.txt

Added an XML schema declaration to the Maven POMs.

 A few pom.xml across Hadoop project may fail XML validation
 ---

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven, patch
 Attachments: HADOOP-8268.patch, hadoop-pom.txt, poms-patch.txt


 In a few pom files there are embedded Ant commands that contain a shell 
 redirection character. This makes the XML file invalid, so these POM files 
 cannot be deployed to validating Maven repository managers such as Artifactory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-15 Thread Radim Kolar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radim Kolar updated HADOOP-8268:


Fix Version/s: 0.23.0
   2.0.0

 A few pom.xml across Hadoop project may fail XML validation
 ---

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven, patch
 Fix For: 0.23.0, 2.0.0

 Attachments: HADOOP-8268.patch, hadoop-pom.txt, poms-patch.txt


 In a few pom files there are embedded Ant commands that contain a shell 
 redirection character. This makes the XML file invalid, so these POM files 
 cannot be deployed to validating Maven repository managers such as Artifactory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8400) All commands warn Kerberos krb5 configuration not found when security is not enabled

2012-05-15 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8400:
---

 Summary: All commands warn Kerberos krb5 configuration not found 
when security is not enabled
 Key: HADOOP-8400
 URL: https://issues.apache.org/jira/browse/HADOOP-8400
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Alejandro Abdelnur


Post HADOOP-8086 I get "Kerberos krb5 configuration not found, setting default 
realm to empty" warnings when running Hadoop commands even though I don't have 
Kerberos enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8398) Cleanup BlockLocation

2012-05-15 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8398:


  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this to trunk and merged to branch-2. Thanks John and Todd.

 Cleanup BlockLocation
 -

 Key: HADOOP-8398
 URL: https://issues.apache.org/jira/browse/HADOOP-8398
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.0

 Attachments: hadoop-8398.txt


 Minor BlockLocation cleanup: remove dead imports, fix some incorrect 
 comments, and write better javadoc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8393) hadoop-config.sh missing variable exports, causes Yarn jobs to fail with ClassNotFoundException MRAppMaster

2012-05-15 Thread Ahmed Radwan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276027#comment-13276027
 ] 

Ahmed Radwan commented on HADOOP-8393:
--

lgtm +1, thanks Patrick!

 hadoop-config.sh missing variable exports, causes Yarn jobs to fail with 
 ClassNotFoundException MRAppMaster
 ---

 Key: HADOOP-8393
 URL: https://issues.apache.org/jira/browse/HADOOP-8393
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Patrick Hunt
Assignee: Patrick Hunt
 Attachments: HADOOP-8393.patch


 If you start a pseudo-distributed YARN cluster using start-yarn.sh, you need 
 to export HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, YARN_HOME, YARN_CONF_DIR, and 
 HADOOP_MAPRED_HOME in hadoop-env.sh (or elsewhere); otherwise the spawned 
 node manager will be missing these from its environment. This is due to 
 start-yarn using yarn-daemons. With this fix it is possible to start YARN 
 (etc.) with only HADOOP_CONF_DIR specified in the environment. It took some 
 time to track down this failure, so it seems worthwhile to fix.
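Before this fix, hadoop-env.sh needed something like the following (the install prefix below is a hypothetical example, not a path from the actual patch); the fix makes hadoop-config.sh export these itself so only HADOOP_CONF_DIR is required:

```shell
# Exports previously required in hadoop-env.sh; the install prefix below
# is a hypothetical example, not a path from the actual patch.
export HADOOP_PREFIX=${HADOOP_PREFIX:-/usr/local/hadoop}
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export YARN_HOME=$HADOOP_PREFIX
export YARN_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
```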

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8398) Cleanup BlockLocation

2012-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276085#comment-13276085
 ] 

Hudson commented on HADOOP-8398:


Integrated in Hadoop-Hdfs-trunk-Commit #2321 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2321/])
HADOOP-8398. Cleanup BlockLocation. Contributed by Eli Collins (Revision 
1338806)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1338806
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/Node.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NodeBase.java


 Cleanup BlockLocation
 -

 Key: HADOOP-8398
 URL: https://issues.apache.org/jira/browse/HADOOP-8398
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.0

 Attachments: hadoop-8398.txt


 Minor BlockLocation cleanup: remove dead imports, fix some incorrect 
 comments, and write better javadoc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-05-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276083#comment-13276083
 ] 

Kihwal Lee commented on HADOOP-8240:


We need this feature to make data copying and verification work across clusters 
with different configurations. I would appreciate any feedback.

h4. Design Choices

# *Add a new create method to FileSystem that allows the checksum type to be 
specified.* FileSystem#create() already allows specifying bytesPerChecksum. 
The new create method may accept a DataChecksum object. Users can use the 
existing DataChecksum.newDataChecksum(int type, int bytesPerChecksum) to 
create one. Users who want to specify a non-default type likely want to 
control bytesPerChecksum as well. 
# *Add checksum types to CreateFlags.* This approach minimizes interface 
changes, but may not be the most intuitive/consistent way.
# *Add a method to FSDataOutputStream and DFSOutputStream that lets users 
override the default checksum parameters.* This method should fail if data 
has already been written. This is sort of like ioctl. If there are other 
tunables we want to support, we could generalize the API. But changing the 
internal parameters (not encapsulated data) of an object at run time doesn't 
fit well with typical Java semantics and may cause confusion, so we need to 
be careful about this.
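A self-contained sketch of what design choice #1 could look like (the class and method names below are hypothetical, not Hadoop's actual API):

```java
// Hypothetical sketch of design choice #1; names are illustrative,
// not Hadoop's actual API.
enum ChecksumType { NULL, CRC32, CRC32C }

final class ChecksumOpt {
    final ChecksumType type;
    final int bytesPerChecksum;

    ChecksumOpt(ChecksumType type, int bytesPerChecksum) {
        this.type = type;
        this.bytesPerChecksum = bytesPerChecksum;
    }
}

public class ChecksumOptSketch {
    // Stand-in for a FileSystem#create overload that takes checksum options;
    // returns a description string instead of opening a real stream.
    static String create(String path, ChecksumOpt opt) {
        return path + " [" + opt.type + "/" + opt.bytesPerChecksum + "]";
    }

    public static void main(String[] args) {
        // Callers choose the type and bytesPerChecksum together.
        System.out.println(create("/tmp/f", new ChecksumOpt(ChecksumType.CRC32C, 512)));
    }
}
```

This mirrors the point in choice #1 that a caller specifying a non-default type usually wants to control bytesPerChecksum at the same time.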

h4. Other previously discussed approaches

# *Setting dfs.checksum.type.*  The FileSystem cache causes it to stay the 
same after the DFSClient is created.  Also, the conf is shared, so changing 
it can have unforeseen side effects.
# *Disable the FileSystem cache.* Create a new Configuration and set 
dfs.checksum.type. Without the cache, the memory bloat is too much. 
# *Use the conf as part of the key in the FileSystem cache, in addition to UGI 
and scheme + authority.* Something along this line may work.  A shallow 
comparison may not be enough; do we create special hashCode/equals 
implementations to make it safer?  There will be memory bloat, but how much?  
It is still up to users to manage different configurations, which may make 
this more prone to mistakes.


 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: hadoop-8240.patch


 Per the discussion in HADOOP-8060, users need a way to specify a checksum 
 type on create(). The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type for this. Also, the checksum-related API is at the 
 FileSystem level, so we prefer something at that level rather than an 
 HDFS-specific one. The current proposal is to use CreateFlag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8398) Cleanup BlockLocation

2012-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276095#comment-13276095
 ] 

Hudson commented on HADOOP-8398:


Integrated in Hadoop-Common-trunk-Commit #2247 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2247/])
HADOOP-8398. Cleanup BlockLocation. Contributed by Eli Collins (Revision 
1338806)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1338806
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/Node.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NodeBase.java


 Cleanup BlockLocation
 -

 Key: HADOOP-8398
 URL: https://issues.apache.org/jira/browse/HADOOP-8398
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.0

 Attachments: hadoop-8398.txt


 Minor BlockLocation cleanup: remove dead imports, fix some incorrect 
 comments, and write better javadoc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8399) Remove JDK5 dependency from Hadoop 1.0+ line

2012-05-15 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276113#comment-13276113
 ] 

Matt Foley commented on HADOOP-8399:


BTW, until this fix goes in, you may be interested to know there's a 
workaround: I don't actually use Java 5. As long as you're using Forrest 
version 0.9 or higher, you can simply create a symlink to Java 6, name it 
Java 5, and everything works.

 Remove JDK5 dependency from Hadoop 1.0+ line
 

 Key: HADOOP-8399
 URL: https://issues.apache.org/jira/browse/HADOOP-8399
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.2
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: HADOOP-8399.patch


 This issue has been fixed in Hadoop starting from 0.21 (see HDFS-1552).
 I propose making the same fix for the 1.0 line and getting rid of the JDK5 
 dependency altogether.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-05-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276255#comment-13276255
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8240:


#1 in the design choices sounds good to me, except that it would be better to 
add a new class, say ChecksumOpts, than to reuse DataChecksum.  The new class 
should be in the o.a.h.fs package (or be an inner class of fs.Options), and 
the checksum type should be an enum instead of an int.

 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: hadoop-8240.patch


 Per the discussion in HADOOP-8060, users need a way to specify a checksum 
 type on create(). The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type for this. Also, the checksum-related API is at the 
 FileSystem level, so we prefer something at that level rather than an 
 HDFS-specific one. The current proposal is to use CreateFlag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8240) Allow users to specify a checksum type on create()

2012-05-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276270#comment-13276270
 ] 

Kihwal Lee commented on HADOOP-8240:


Thanks, Nicholas. I think what you suggested makes sense. I haven't thought 
about the FileContext side of the changes, though. 

 Allow users to specify a checksum type on create()
 --

 Key: HADOOP-8240
 URL: https://issues.apache.org/jira/browse/HADOOP-8240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: hadoop-8240.patch


 Per the discussion in HADOOP-8060, users need a way to specify a checksum 
 type on create(). The way the FileSystem cache works makes it impossible to 
 use dfs.checksum.type for this. Also, the checksum-related API is at the 
 FileSystem level, so we prefer something at that level rather than an 
 HDFS-specific one. The current proposal is to use CreateFlag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8400) All commands warn Kerberos krb5 configuration not found when security is not enabled

2012-05-15 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276288#comment-13276288
 ] 

Alejandro Abdelnur commented on HADOOP-8400:


Argh, the problem here is that hadoop-auth does not have Configuration on the 
classpath, so we cannot check whether security is enabled or not. I'm somewhat 
inclined to revert HADOOP-8086.

Thoughts?

 All commands warn Kerberos krb5 configuration not found when security is 
 not enabled
 --

 Key: HADOOP-8400
 URL: https://issues.apache.org/jira/browse/HADOOP-8400
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Alejandro Abdelnur

 Post HADOOP-8086 I get "Kerberos krb5 configuration not found, setting 
 default realm to empty" warnings when running Hadoop commands even though I 
 don't have Kerberos enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8401) Investigate use of JobObject to spawn tasks on Windows

2012-05-15 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8401:
--

 Summary: Investigate use of JobObject to spawn tasks on Windows
 Key: HADOOP-8401
 URL: https://issues.apache.org/jira/browse/HADOOP-8401
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Bikas Saha




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8400) All commands warn Kerberos krb5 configuration not found when security is not enabled

2012-05-15 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276405#comment-13276405
 ] 

Eli Collins commented on HADOOP-8400:
-

Perhaps just make the log-level debug instead of warn?

 All commands warn Kerberos krb5 configuration not found when security is 
 not enabled
 --

 Key: HADOOP-8400
 URL: https://issues.apache.org/jira/browse/HADOOP-8400
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Alejandro Abdelnur

 Post HADOOP-8086 I get "Kerberos krb5 configuration not found, setting 
 default realm to empty" warnings when running Hadoop commands even though I 
 don't have Kerberos enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8393) hadoop-config.sh missing variable exports, causes Yarn jobs to fail with ClassNotFoundException MRAppMaster

2012-05-15 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8393:
---

   Resolution: Fixed
Fix Version/s: 2.0.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Patrick. Committed to trunk and branch-2.

 hadoop-config.sh missing variable exports, causes Yarn jobs to fail with 
 ClassNotFoundException MRAppMaster
 ---

 Key: HADOOP-8393
 URL: https://issues.apache.org/jira/browse/HADOOP-8393
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Patrick Hunt
Assignee: Patrick Hunt
 Fix For: 2.0.1

 Attachments: HADOOP-8393.patch


 If you start a pseudo-distributed YARN cluster using start-yarn.sh, you need 
 to export HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, YARN_HOME, YARN_CONF_DIR, and 
 HADOOP_MAPRED_HOME in hadoop-env.sh (or elsewhere); otherwise the spawned 
 node manager will be missing these from its environment. This is due to 
 start-yarn using yarn-daemons. With this fix it is possible to start YARN 
 (etc.) with only HADOOP_CONF_DIR specified in the environment. It took some 
 time to track down this failure, so it seems worthwhile to fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8393) hadoop-config.sh missing variable exports, causes Yarn jobs to fail with ClassNotFoundException MRAppMaster

2012-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276442#comment-13276442
 ] 

Hudson commented on HADOOP-8393:


Integrated in Hadoop-Hdfs-trunk-Commit #2324 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2324/])
HADOOP-8393. hadoop-config.sh missing variable exports, causes Yarn jobs to 
fail with ClassNotFoundException MRAppMaster. (phunt via tucu) (Revision 
1338998)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1338998
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh


 hadoop-config.sh missing variable exports, causes Yarn jobs to fail with 
 ClassNotFoundException MRAppMaster
 ---

 Key: HADOOP-8393
 URL: https://issues.apache.org/jira/browse/HADOOP-8393
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Patrick Hunt
Assignee: Patrick Hunt
 Fix For: 2.0.1

 Attachments: HADOOP-8393.patch


 If you start a pseudo-distributed YARN cluster using start-yarn.sh, you need 
 to export HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, YARN_HOME, YARN_CONF_DIR, and 
 HADOOP_MAPRED_HOME in hadoop-env.sh (or elsewhere); otherwise the spawned 
 node manager will be missing these from its environment. This is due to 
 start-yarn using yarn-daemons. With this fix it is possible to start YARN 
 (etc.) with only HADOOP_CONF_DIR specified in the environment. It took some 
 time to track down this failure, so it seems worthwhile to fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8393) hadoop-config.sh missing variable exports, causes Yarn jobs to fail with ClassNotFoundException MRAppMaster

2012-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276446#comment-13276446
 ] 

Hudson commented on HADOOP-8393:


Integrated in Hadoop-Common-trunk-Commit #2250 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2250/])
HADOOP-8393. hadoop-config.sh missing variable exports, causes Yarn jobs to 
fail with ClassNotFoundException MRAppMaster. (phunt via tucu) (Revision 
1338998)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1338998
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh


 hadoop-config.sh missing variable exports, causes Yarn jobs to fail with 
 ClassNotFoundException MRAppMaster
 ---

 Key: HADOOP-8393
 URL: https://issues.apache.org/jira/browse/HADOOP-8393
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Patrick Hunt
Assignee: Patrick Hunt
 Fix For: 2.0.1

 Attachments: HADOOP-8393.patch


 If you start a pseudo-distributed YARN cluster using start-yarn.sh, you need 
 to export HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, YARN_HOME, YARN_CONF_DIR, and 
 HADOOP_MAPRED_HOME in hadoop-env.sh (or elsewhere); otherwise the spawned 
 node manager will be missing these from its environment. This is due to 
 start-yarn using yarn-daemons. With this fix it is possible to start YARN 
 (etc.) with only HADOOP_CONF_DIR specified in the environment. It took some 
 time to track down this failure, so it seems worthwhile to fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8400) All commands warn Kerberos krb5 configuration not found when security is not enabled

2012-05-15 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8400:
---

Status: Patch Available  (was: Open)

 All commands warn Kerberos krb5 configuration not found when security is 
 not enabled
 --

 Key: HADOOP-8400
 URL: https://issues.apache.org/jira/browse/HADOOP-8400
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8400.patch


 Post HADOOP-8086 I get "Kerberos krb5 configuration not found, setting 
 default realm to empty" warnings when running Hadoop commands even though I 
 don't have Kerberos enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8400) All commands warn Kerberos krb5 configuration not found when security is not enabled

2012-05-15 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8400:
---

Attachment: HADOOP-8400.patch

Changed the log level from warn to debug.
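The change amounts to lowering the log level so the message only appears when debug logging is enabled; a minimal sketch follows (using java.util.logging for self-containment; the actual logging API in hadoop-auth may differ):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class KrbLogSketch {
    static final Logger LOG = Logger.getLogger(KrbLogSketch.class.getName());

    static void reportMissingKrb5Conf(Exception cause) {
        // Before the patch this was logged at warn level, so every command
        // printed it; at FINE (debug) it is hidden at the default level.
        LOG.log(Level.FINE,
                "Kerberos krb5 configuration not found, setting default realm to empty",
                cause);
    }

    public static void main(String[] args) {
        reportMissingKrb5Conf(new Exception("krb5.conf not found"));
        // Nothing is printed at the default (INFO) log level.
    }
}
```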

 All commands warn Kerberos krb5 configuration not found when security is 
 not enabled
 --

 Key: HADOOP-8400
 URL: https://issues.apache.org/jira/browse/HADOOP-8400
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8400.patch


 Post HADOOP-8086 I get "Kerberos krb5 configuration not found, setting 
 default realm to empty" warnings when running Hadoop commands even though I 
 don't have Kerberos enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8400) All commands warn Kerberos krb5 configuration not found when security is not enabled

2012-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276460#comment-13276460
 ] 

Hadoop QA commented on HADOOP-8400:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527563/HADOOP-8400.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.  Please justify why no new tests are needed for this patch.  Also 
please list what manual steps were performed to verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/996//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/996//console

This message is automatically generated.

 All commands warn "Kerberos krb5 configuration not found" when security is 
 not enabled
 --

 Key: HADOOP-8400
 URL: https://issues.apache.org/jira/browse/HADOOP-8400
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8400.patch


 Post HADOOP-8086 I get "Kerberos krb5 configuration not found, setting 
 default realm to empty" warnings when running Hadoop commands even though I 
 don't have kerb enabled.





[jira] [Commented] (HADOOP-8393) hadoop-config.sh missing variable exports, causes Yarn jobs to fail with ClassNotFoundException MRAppMaster

2012-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276465#comment-13276465
 ] 

Hudson commented on HADOOP-8393:


Integrated in Hadoop-Mapreduce-trunk-Commit #2267 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2267/])
HADOOP-8393. hadoop-config.sh missing variable exports, causes Yarn jobs to 
fail with ClassNotFoundException MRAppMaster. (phunt via tucu) (Revision 
1338998)

 Result = ABORTED
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1338998
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh


 hadoop-config.sh missing variable exports, causes Yarn jobs to fail with 
 ClassNotFoundException MRAppMaster
 ---

 Key: HADOOP-8393
 URL: https://issues.apache.org/jira/browse/HADOOP-8393
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Patrick Hunt
Assignee: Patrick Hunt
 Fix For: 2.0.1

 Attachments: HADOOP-8393.patch


 If you start a pseudo distributed yarn using start-yarn.sh you need to 
 specify exports for HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, YARN_HOME, 
 YARN_CONF_DIR, and HADOOP_MAPRED_HOME in hadoop-env.sh (or elsewhere), 
 otherwise the spawned node manager will be missing these in its environment. 
 This is due to start-yarn using yarn-daemons. With this fix it's possible to 
 start yarn (etc...) with only HADOOP_CONF_DIR specified in the environment. 
 Took some time to track down this failure, so seems worthwhile to fix.





[jira] [Updated] (HADOOP-8224) Don't hardcode hdfs.audit.logger in the scripts

2012-05-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8224:


Fix Version/s: (was: 2.0.0)
   2.0.1

 Don't hardcode hdfs.audit.logger in the scripts
 ---

 Key: HADOOP-8224
 URL: https://issues.apache.org/jira/browse/HADOOP-8224
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Tomohiko Kinebuchi
 Fix For: 2.0.1

 Attachments: HADOOP-8224.txt, HADOOP-8224.txt, hadoop-8224.txt


 The HADOOP_*OPTS defined for HDFS in hadoop-env.sh hard-code 
 hdfs.audit.logger (it is explicitly set via -Dhdfs.audit.logger=INFO,RFAAUDIT), 
 so it's not overridable. Let's allow someone to override it, as we do the 
 other parameters, by introducing HADOOP_AUDIT_LOGGER.





[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8372:


Fix Version/s: (was: 3.0.0)
   (was: 2.0.0)
   2.0.1

 normalizeHostName() in NetUtils is not working properly in resolving a 
 hostname start with numeric character
 

 Key: HADOOP-8372
 URL: https://issues.apache.org/jira/browse/HADOOP-8372
 Project: Hadoop Common
  Issue Type: Bug
  Components: io, util
Affects Versions: 1.0.0, 0.23.0
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 2.0.1

 Attachments: HADOOP-8372-v2.patch, HADOOP-8372-v3.patch, 
 HADOOP-8372.patch, HADOOP-8372.patch


 A valid host name can start with a numeric character (see RFC 952, RFC 1123 
 or http://www.zytrax.com/books/dns/apa/names.html), so it is possible in a 
 production environment that users name their Hadoop nodes 1hosta, 2hostb, etc. 
 But normalizeHostName() will recognise such a hostname as an IP address and 
 return it directly rather than resolving the real IP address. These nodes will 
 fail to get the correct network topology if the topology script/TableMapping 
 only contains their IPs (without hostnames).
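
 The misclassification described above can be sketched as follows (a minimal 
 Python illustration of the idea only; function names are hypothetical and this 
 is not the actual NetUtils code):

```python
import re

def looks_like_ip_naive(host):
    # Hypothetical sketch of the buggy heuristic: any name whose first
    # character is a digit is assumed to be an IP address, so DNS
    # resolution is skipped for it.
    return host[0].isdigit()

def is_ipv4_strict(host):
    # A stricter check: exactly four dot-separated octets, each 0-255.
    m = re.fullmatch(r'(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})', host)
    return bool(m) and all(0 <= int(g) <= 255 for g in m.groups())

# "1hosta" is a valid hostname per RFC 1123, but the naive check
# misclassifies it as an IP, matching the failure mode reported above.
```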





[jira] [Updated] (HADOOP-8316) Audit logging should be disabled by default

2012-05-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8316:


Fix Version/s: (was: 2.0.0)
   2.0.1

 Audit logging should be disabled by default
 ---

 Key: HADOOP-8316
 URL: https://issues.apache.org/jira/browse/HADOOP-8316
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.1

 Attachments: hadoop-8316.txt


 HADOOP-7633 turned hdfs, mr and security audit logging on by default (INFO 
 level) in the log4j.properties used for the packages; this then got copied over 
 to the non-packaging log4j.properties in HADOOP-8216 (which made them 
 consistent).
 Seems like we should keep the v1.x setting, which is disabled (WARNING 
 level) by default. There's a performance overhead to audit logging, and 
 HADOOP-7633 provided no rationale (just "We should add the audit logs as 
 part of default confs") as to why it was enabled for the packages.





[jira] [Updated] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8340:


Fix Version/s: (was: 2.0.0)
   2.0.1

 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 2.0.1

 Attachments: hadoop-8340.txt, hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.
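
 The intended ordering can be sketched like this (a minimal Python sketch of 
 the behavior described above, assuming versions are dotted numeric components 
 with an optional -SNAPSHOT qualifier; not the actual Hadoop VersionUtil code):

```python
def compare_versions(a, b):
    """Return -1, 0, or 1 comparing dotted version strings, where a
    '-SNAPSHOT' suffix sorts *before* the corresponding final release."""
    def parse(v):
        base, _, qualifier = v.partition('-')
        parts = [int(p) for p in base.split('.')]
        # A final release (no qualifier) ranks above a SNAPSHOT build.
        return parts, 0 if qualifier == 'SNAPSHOT' else 1
    parts_a, rank_a = parse(a)
    parts_b, rank_b = parse(b)
    if parts_a != parts_b:
        return -1 if parts_a < parts_b else 1
    return (rank_a > rank_b) - (rank_a < rank_b)

# 2.0.0-SNAPSHOT < 2.0.0, which is the ordering this issue asks for.
```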





[jira] [Updated] (HADOOP-8361) Avoid out-of-memory problems when deserializing strings

2012-05-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8361:


Fix Version/s: (was: 2.0.0)
   2.0.1

 Avoid out-of-memory problems when deserializing strings
 ---

 Key: HADOOP-8361
 URL: https://issues.apache.org/jira/browse/HADOOP-8361
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.1

 Attachments: HADOOP-8361.001.patch, HADOOP-8361.002.patch, 
 HADOOP-8361.003.patch, HADOOP-8361.004.patch, HADOOP-8361.005.patch, 
 HADOOP-8361.006.patch, HADOOP-8361.007.patch


 In HDFS, we want to be able to read the edit log without crashing on an OOM 
 condition.  Unfortunately, we currently cannot do this, because there are no 
 limits on the length of certain data types we pull from the edit log.  We 
 often read strings without setting any upper limit on the length we're 
 prepared to accept.
 It's not that we don't have limits on strings-- for example, HDFS limits the 
 maximum path length to 8000 UCS-2 characters.  Linux limits the maximum user 
 name length to either 64 or 128 bytes, depending on what version you are 
 running.  It's just that we're not exposing these limits to the 
 deserialization functions that need to be aware of them.
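
 The idea of exposing such a limit to the deserialization function can be 
 sketched as follows (a minimal Python sketch using a hypothetical 
 length-prefixed framing, not Hadoop's actual edit-log wire format):

```python
import struct

def read_length_prefixed_string(buf, offset=0, max_len=8000):
    """Read a 4-byte big-endian length followed by that many UTF-8 bytes,
    refusing lengths above max_len so a corrupt record cannot trigger a
    huge allocation (the OOM condition described above)."""
    (length,) = struct.unpack_from('>I', buf, offset)
    if length > max_len:
        raise ValueError('string length %d exceeds limit %d' % (length, max_len))
    data = buf[offset + 4: offset + 4 + length]
    if len(data) != length:
        raise ValueError('truncated record')
    return data.decode('utf-8'), offset + 4 + length
```

 The caller passes the domain-specific bound (path length, user name length, 
 etc.) rather than the reader accepting whatever length the record claims.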
