[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468361#comment-13468361
 ] 

Bikas Saha commented on HADOOP-8847:


Java File.setExecutable/File.setWritable don't work as expected on Windows. In 
any case, the distributed cache explicitly sets permissions on untarred files 
after expanding archives, so there should be no problem. Here is the code 
snippet from TrackerDistributedCacheManager.downloadCacheObject(); see the 
chmod call at the end of the snippet.
{code}
if (isArchive) {
  String tmpArchive = workFile.getName().toLowerCase();
  File srcFile = new File(workFile.toString());
  File destDir = new File(workDir.toString());
  LOG.info(String.format("Extracting %s to %s",
      srcFile.toString(), destDir.toString()));
  if (tmpArchive.endsWith(".jar")) {
    RunJar.unJar(srcFile, destDir);
  } else if (tmpArchive.endsWith(".zip")) {
    FileUtil.unZip(srcFile, destDir);
  } else if (isTarFile(tmpArchive)) {
    FileUtil.unTar(srcFile, destDir);
  } else {
    LOG.warn(String.format(
        "Cache file %s specified as archive, but not valid extension.",
        srcFile.toString()));
    // else will not do anything
    // and copy the file into the dir as it is
  }
  FileUtil.chmod(destDir.toString(), "ugo+rx", true);
}
{code}
If you are really worried about this change, then I could continue to use the 
existing spawn-tar implementation for Linux.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns tar utility to do the work. Tar may not be 
 present on all platforms by default eg. Windows. So changing this to use JAVA 
 API's would help make it more cross-platform. FileUtil.unZip() uses the same 
 approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-03 Thread Senthil V Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468367#comment-13468367
 ] 

Senthil V Kumar commented on HADOOP-8804:
-

Hi Eli,

I am able to see this happening in the JobTracker, but I don't see it happening 
in the RM and NM. Can you tell me how to reproduce this issue with the RM and 
NM? I tried this on trunk. 

Thanks
Senthil

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg 0 Hadoop Map/Reduce 
 Administration and NameNode '0.0.0.0:18021' (active). Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468473#comment-13468473
 ] 

Steve Loughran commented on HADOOP-8847:


That code snippet shows that the dest dir perms are set up, but it doesn't touch 
the permissions of the files themselves. If someone needs x permission on 
binaries in there, then there's a regression risk here, which is what it sounds 
like Robert is going to hit.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns tar utility to do the work. Tar may not be 
 present on all platforms by default eg. Windows. So changing this to use JAVA 
 API's would help make it more cross-platform. FileUtil.unZip() uses the same 
 approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-10-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468548#comment-13468548
 ] 

Hudson commented on HADOOP-8756:


Integrated in Hadoop-Hdfs-trunk #1184 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1184/])
HADOOP-8756. Fix SEGV when libsnappy is in java.library.path but not 
LD_LIBRARY_PATH. Contributed by Colin Patrick McCabe (Revision 1393243)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393243
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/LoadSnappy.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java


 Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
 ---

 Key: HADOOP-8756
 URL: https://issues.apache.org/jira/browse/HADOOP-8756
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.2-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
 HADOOP-8756.004.patch


 We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
 libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
 System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
 {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
 libraries.  Because of this, the two library loading functions can be at odds.
 We should fix this so we only load the library once, preferably using the 
 standard Java {{java.library.path}}.
 We should also log the search path(s) we use for {{libsnappy.so}} when 
 loading fails, so that it's easier to diagnose configuration issues.
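
As a rough illustration of that last point, a single load point could log 
java.library.path when System.loadLibrary fails. This is only a sketch with 
hypothetical class and field names, not the committed change:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative only: one place that loads libsnappy via java.library.path
// and logs the search path on failure so misconfiguration is easy to spot.
public final class SnappyLoadSketch {
  private static final Log LOG = LogFactory.getLog(SnappyLoadSketch.class);
  private static boolean loaded = false;

  public static synchronized boolean ensureLoaded() {
    if (loaded) {
      return true;
    }
    try {
      System.loadLibrary("snappy");   // resolved via java.library.path
      loaded = true;
    } catch (UnsatisfiedLinkError e) {
      LOG.warn("Failed to load libsnappy.so; java.library.path = "
          + System.getProperty("java.library.path"), e);
    }
    return loaded;
  }
}
{code}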

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2012-10-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468554#comment-13468554
 ] 

Daryn Sharp commented on HADOOP-8873:
-

The only difference other than parent creation is that -p doesn't fail if the 
directory already exists whereas a normal mkdir should.  If 1.x mkdir doesn't 
fail on existence, then -p can be a no-op.
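
For illustration, the distinction described above could look like the following 
sketch against the FileSystem API; the helper is hypothetical, not the actual 
FsShell code:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper mirroring the shell semantics discussed here:
// without -p, fail if the directory already exists; with -p, succeed silently.
public class MkdirSketch {
  static void mkdir(FileSystem fs, Path dir, boolean createParents)
      throws IOException {
    if (!createParents && fs.exists(dir)) {
      throw new IOException("mkdir: " + dir + " already exists");
    }
    // FileSystem.mkdirs creates missing parents and is a no-op if dir exists.
    if (!fs.mkdirs(dir)) {
      throw new IOException("mkdir: failed to create " + dir);
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    mkdir(fs, new Path("/tmp/a/b/c"), true);   // -p behaviour
  }
}
{code}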

 Port HADOOP-8175 (Add mkdir -p flag) to branch-1
 

 Key: HADOOP-8873
 URL: https://issues.apache.org/jira/browse/HADOOP-8873
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Eli Collins
  Labels: newbie

 Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
 to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
 currently requires the -p option to create parent directories but a program 
 that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-10-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468567#comment-13468567
 ] 

Hudson commented on HADOOP-8783:


Integrated in Hadoop-Common-trunk-Commit #2805 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2805/])
HADOOP-8783. Improve RPC.Server's digest auth (daryn) (Revision 1393483)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393483
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Improve RPC.Server's digest auth
 

 Key: HADOOP-8783
 URL: https://issues.apache.org/jira/browse/HADOOP-8783
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-8783.patch, HADOOP-8783.patch


 RPC.Server should always allow digest auth (tokens) if a secret manager is 
 present.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-10-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468569#comment-13468569
 ] 

Hudson commented on HADOOP-8783:


Integrated in Hadoop-Hdfs-trunk-Commit #2867 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2867/])
HADOOP-8783. Improve RPC.Server's digest auth (daryn) (Revision 1393483)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393483
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Improve RPC.Server's digest auth
 

 Key: HADOOP-8783
 URL: https://issues.apache.org/jira/browse/HADOOP-8783
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-8783.patch, HADOOP-8783.patch


 RPC.Server should always allow digest auth (tokens) if a secret manager is 
 present.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-10-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468574#comment-13468574
 ] 

Hudson commented on HADOOP-8756:


Integrated in Hadoop-Mapreduce-trunk #1215 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1215/])
HADOOP-8756. Fix SEGV when libsnappy is in java.library.path but not 
LD_LIBRARY_PATH. Contributed by Colin Patrick McCabe (Revision 1393243)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393243
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/SnappyCodec.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/LoadSnappy.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java


 Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
 ---

 Key: HADOOP-8756
 URL: https://issues.apache.org/jira/browse/HADOOP-8756
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.2-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
 HADOOP-8756.004.patch


 We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
 libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
 System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
 {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
 libraries.  Because of this, the two library loading functions can be at odds.
 We should fix this so we only load the library once, preferably using the 
 standard Java {{java.library.path}}.
 We should also log the search path(s) we use for {{libsnappy.so}} when 
 loading fails, so that it's easier to diagnose configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8783) Improve RPC.Server's digest auth

2012-10-03 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8783:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Will use the umbrella JIRA to later track integration into 23.x and possibly 1.x

 Improve RPC.Server's digest auth
 

 Key: HADOOP-8783
 URL: https://issues.apache.org/jira/browse/HADOOP-8783
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8783.patch, HADOOP-8783.patch


 RPC.Server should always allow digest auth (tokens) if a secret manager is 
 present.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-10-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468588#comment-13468588
 ] 

Hudson commented on HADOOP-8783:


Integrated in Hadoop-Mapreduce-trunk-Commit #2828 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2828/])
HADOOP-8783. Improve RPC.Server's digest auth (daryn) (Revision 1393483)

 Result = FAILURE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393483
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Improve RPC.Server's digest auth
 

 Key: HADOOP-8783
 URL: https://issues.apache.org/jira/browse/HADOOP-8783
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8783.patch, HADOOP-8783.patch


 RPC.Server should always allow digest auth (tokens) if a secret manager is 
 present.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8875) test-patch.sh doesn't test changes to itself

2012-10-03 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8875:
---

 Summary: test-patch.sh doesn't test changes to itself
 Key: HADOOP-8875
 URL: https://issues.apache.org/jira/browse/HADOOP-8875
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Eli Collins


The test-patch.sh script run by the pre-commit jobs doesn't handle patches that 
modify test-patch.sh itself. Let's modify it to do so, or at least log a warning 
in the test-patch output indicating that the submitter should test test-patch 
themselves.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-10-03 Thread Hemanth Yamijala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468670#comment-13468670
 ] 

Hemanth Yamijala commented on HADOOP-8776:
--

Hi, I have a patch that implements what is discussed in the comments above. 
Roughly, it inverts the option from the earlier patch, i.e. the user can now 
pass --enable-native to enable native compilation. If that flag isn't passed, 
we check via uname whether the platform is one on which native compilation is 
supported, and enable the native profile accordingly.

However:

I am still not convinced about the value this adds. Please note that I am not 
talking about the complexity of *implementing* the patch. I am concerned about 
the complexity the patch will introduce *after* it is committed.

Specifically:
* Now test-patch will be platform specific. Ideally, it should be tested on all 
platforms before committing.
* It has more logic (either --enable-native is passed OR it is a supported 
platform, etc.). Not earth-shattering, but it still needs to be understood in 
context by someone looking at it fresh.
* It adds a dependency on us to track removing this once the native compile is 
fixed. Given that we cannot rely on contributors always being there and 
watchful, I am worried that someone will forget to document in a related JIRA 
to fix test-patch, and we will end up in a situation where the native compile 
is fixed but test-patch isn't testing it.

IMHO, with the simpler --disable-native option, a developer will initially 
struggle like I did to figure out how to make test-patch work in spite of 
broken native compiles, figure out how to get around it, and then remember it 
for the future. Agreed, there is initial overhead for all developers on the 
unsupported platforms, but I feel that is outweighed by the simplicity and 
clarity of that option.

Jianbin, please let me know what you think. It would be good for others watching 
the JIRA to weigh in as well. If the consensus is otherwise, I will put up my new 
patch for review.

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in Hadoop source runs a native compile with the patch. 
 On platforms like MAC, there are issues with the native compile that make it 
 difficult to use test-patch. This JIRA is to try and provide an option to 
 make the native compilation optional. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468675#comment-13468675
 ] 

Bikas Saha commented on HADOOP-8847:


The true argument at the end is for recursive descent. So everything gets 
chmod'd.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns tar utility to do the work. Tar may not be 
 present on all platforms by default eg. Windows. So changing this to use JAVA 
 API's would help make it more cross-platform. FileUtil.unZip() uses the same 
 approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-03 Thread Senthil V Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthil V Kumar updated HADOOP-8804:


Attachment: HADOOP-8804-trunk.patch
HADOOP-8804-1.1.patch

Patches for not stripping the hostname when the given hostname is an IP 
address. The change is really minimal in this case. Please review and let me 
know whether this is fine. 
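
For illustration only, the idea in the patch description might look like the 
following sketch (the helper names are hypothetical, not the attached patch):
{code}
// Only shorten a hostname to its first label when it is not an IP literal.
public class HostnameDisplaySketch {
  static boolean looksLikeIPv4(String host) {
    return host.matches("\\d{1,3}(\\.\\d{1,3}){3}");
  }

  static String displayName(String host) {
    if (looksLikeIPv4(host)) {
      return host;                                    // keep 10.0.0.1 intact
    }
    int dot = host.indexOf('.');
    return dot < 0 ? host : host.substring(0, dot);   // nn1.example.com -> nn1
  }
}
{code}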

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg 0 Hadoop Map/Reduce 
 Administration and NameNode '0.0.0.0:18021' (active). Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-10-03 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468698#comment-13468698
 ] 

Jianbin Wei commented on HADOOP-8776:
-

{quote}
Now test-patch will be platform specific. Ideally, it must be tested on all 
platforms before committing.
{quote}

Yes, I agree completely.  As you said, the core issue is to fix the native 
compile on the broken platforms.  This is just a workaround.

{quote}
It has more logic (either --enable-native is passed OR it is a supported 
platform, etc.). Not earth-shattering, but still needs to be understood in 
context by someone fresh looking at it.
{quote}

I would say only the developer who fixes the script will need to understand the 
extra complexity, so it should be transparent to most other developers.  

{quote}
The dependency it will put on us to have to track removing this once native 
compile is fixed. Given that we cannot rely on contributors being always there 
and watchful, I am worried that someone will forget to document in a related 
JIRA to fix test-patch and we will end up in a situation that native compile is 
fixed, and test-patch isn't testing it.
{quote}

The developer who disables the test needs to 
* file a ticket to get the broken platform fixed (if one is not there yet), AND 
* document in that ticket the need to re-enable the test.  In this case it 
would be you :-)

Can you also print some messages to notify others that the native compilation 
is disabled due to issues?

My goal is to provide best experiences for both end users and developers.

Other thoughts?

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in Hadoop source runs a native compile with the patch. 
 On platforms like MAC, there are issues with the native compile that make it 
 difficult to use test-patch. This JIRA is to try and provide an option to 
 make the native compilation optional. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468706#comment-13468706
 ] 

Hadoop QA commented on HADOOP-8804:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12547561/HADOOP-8804-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1553//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1553//console

This message is automatically generated.

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg 0 Hadoop Map/Reduce 
 Administration and NameNode '0.0.0.0:18021' (active). Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8876) SequenceFile default compression is RECORD, not BLOCK

2012-10-03 Thread Harsh J (JIRA)
Harsh J created HADOOP-8876:
---

 Summary: SequenceFile default compression is RECORD, not BLOCK
 Key: HADOOP-8876
 URL: https://issues.apache.org/jira/browse/HADOOP-8876
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J


Currently both the SequenceFile writer and the MR defaults for SequenceFile 
compression default to RECORD-type compression, while most recommendations are 
to use BLOCK instead for smaller output sizes.

Should we not change the default?
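
Until the default changes, callers can opt into BLOCK compression explicitly; a 
minimal sketch (the path and key/value types are illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqFileBlockExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, new Path("/tmp/example.seq"),
        Text.class, IntWritable.class,
        SequenceFile.CompressionType.BLOCK);   // override the RECORD default
    try {
      writer.append(new Text("key"), new IntWritable(1));
    } finally {
      writer.close();
    }
  }
}
{code}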

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-03 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468721#comment-13468721
 ] 

Joep Rottinghuis commented on HADOOP-8847:
--

Are symlinks a factor in the pure Java implementation?

On Unix, tar will handle symlinks appropriately and restore them properly (even 
if there are loops in the symlinks). Recursive descent with loops could be a 
problem.

As long as the Java implementation is limited to Windows, this may not matter.
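
For reference, one way a pure-Java untar could recognize and restore symlink 
entries, assuming Apache Commons Compress and the Java 7 java.nio.file API are 
available; this is a sketch, not the attached patch:
{code}
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

public class UnTarSketch {
  public static void unTar(File tarFile, File destDir) throws IOException {
    TarArchiveInputStream tis =
        new TarArchiveInputStream(new FileInputStream(tarFile));
    try {
      TarArchiveEntry entry;
      while ((entry = tis.getNextTarEntry()) != null) {
        File target = new File(destDir, entry.getName());
        if (entry.isDirectory()) {
          target.mkdirs();
        } else if (entry.isSymbolicLink()) {
          // Recreate the link rather than following it, so symlink loops
          // inside the archive cannot cause unbounded extraction.
          Files.createSymbolicLink(target.toPath(),
              Paths.get(entry.getLinkName()));
        } else {
          target.getParentFile().mkdirs();
          FileOutputStream out = new FileOutputStream(target);
          try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = tis.read(buf)) != -1) {
              out.write(buf, 0, n);
            }
          } finally {
            out.close();
          }
        }
      }
    } finally {
      tis.close();
    }
  }
}
{code}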

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns tar utility to do the work. Tar may not be 
 present on all platforms by default eg. Windows. So changing this to use JAVA 
 API's would help make it more cross-platform. FileUtil.unZip() uses the same 
 approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-10-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468740#comment-13468740
 ] 

Colin Patrick McCabe commented on HADOOP-8776:
--

* I think we should call the flag {{\--build-native}}, and have it take 
{{true}} or {{false}} as an argument.  That way it will be useful in more 
cases.  For example, we may want to verify that the unit tests pass on Linux 
when the native libraries are not present.

* Remember to rebase on trunk now that HDFS-3753 is in.  You'll need to avoid 
passing {{\-Drequire.test.libhadoop}} when the native build is disabled.

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in Hadoop source runs a native compile with the patch. 
 On platforms like MAC, there are issues with the native compile that make it 
 difficult to use test-patch. This JIRA is to try and provide an option to 
 make the native compilation optional. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8784) Improve IPC.Client's token use

2012-10-03 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8784:


Attachment: HADOOP-8784.patch

Simple alteration of logic to always look for a token.

Note: spurious change of {{return call.rpcResponse}} to {{return 
call.getRpcResult()}} is to eliminate the only warning in the file.

 Improve IPC.Client's token use
 --

 Key: HADOOP-8784
 URL: https://issues.apache.org/jira/browse/HADOOP-8784
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-8784.patch


 If present, tokens should be sent for all auth types including simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8784) Improve IPC.Client's token use

2012-10-03 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8784:


Status: Patch Available  (was: Open)

 Improve IPC.Client's token use
 --

 Key: HADOOP-8784
 URL: https://issues.apache.org/jira/browse/HADOOP-8784
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-8784.patch


 If present, tokens should be sent for all auth types including simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8784) Improve IPC.Client's token use

2012-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468838#comment-13468838
 ] 

Hadoop QA commented on HADOOP-8784:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547589/HADOOP-8784.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1554//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1554//console

This message is automatically generated.

 Improve IPC.Client's token use
 --

 Key: HADOOP-8784
 URL: https://issues.apache.org/jira/browse/HADOOP-8784
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-8784.patch


 If present, tokens should be sent for all auth types including simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8784) Improve IPC.Client's token use

2012-10-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468866#comment-13468866
 ] 

Daryn Sharp commented on HADOOP-8784:
-

TestZKFailoverController tests pass for me.  They appear to randomly fail on 
the build servers.

 Improve IPC.Client's token use
 --

 Key: HADOOP-8784
 URL: https://issues.apache.org/jira/browse/HADOOP-8784
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-8784.patch


 If present, tokens should be sent for all auth types including simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8608) Add Configuration API for parsing time durations

2012-10-03 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-8608:
--

Attachment: 8608-0.patch

{code}
public void setTimeDuration(String name, long value, TimeUnit unit);
public long getTimeDuration(String name, long defaultValue, TimeUnit unit);
{code}

It warns when the unit is unspecified in the configured value and assumes 
whatever unit the caller requested.
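
A hypothetical usage example, assuming the attached patch is applied (the 
configuration key is made up):
{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeDurationExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.example.interval", "10s");   // hypothetical key, explicit unit
    long ms = conf.getTimeDuration("dfs.example.interval",
        30000, TimeUnit.MILLISECONDS);         // -> 10000
    // A bare value like "10" would be taken in the caller's unit and a warning logged.
    System.out.println(ms);
  }
}
{code}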

 Add Configuration API for parsing time durations
 

 Key: HADOOP-8608
 URL: https://issues.apache.org/jira/browse/HADOOP-8608
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Todd Lipcon
 Attachments: 8608-0.patch


 Hadoop has a lot of configurations which specify durations or intervals of 
 time. Unfortunately these different configurations have little consistency in 
 units - eg some are in milliseconds, some in seconds, and some in minutes. 
 This makes it difficult for users to configure, since they have to always 
 refer back to docs to remember the unit for each property.
 The proposed solution is to add an API like {{Configuration.getTimeDuration}} 
 which allows the user to specify the units with a postfix. For example, 
 10ms, 10s, 10m, 10h, or even 10d. For backwards-compatibility, if 
 the user does not specify a unit, the API can specify the default unit, and 
 warn the user that they should specify an explicit unit instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-03 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468916#comment-13468916
 ] 

Aaron T. Myers commented on HADOOP-8804:


I'll leave it to Eli for a final review since he's more familiar with this 
issue than I am, but one quick comment:

bq. SHould not truncate when when IP address is passed

Please change SHould to Should and remove the duplicated when

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg 0 Hadoop Map/Reduce 
 Administration and NameNode '0.0.0.0:18021' (active). Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer should be Singleton

2012-10-03 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468968#comment-13468968
 ] 

Karthik Kambatla commented on HADOOP-8852:
--

After spending some time on this, I noticed a couple of other related issues:
- WebHdfsFileSystem and HftpFileSystem have their own implementations of 
TokenRenewer that are not used at all.
- In DelegationTokenRenewer#renew() (see snippet below), we fetch new tokens in 
case the renew fails. However, we use only the first token and ignore the rest. 
Later, if we can't renew this token, we re-fetch new tokens instead of using 
the previously fetched ones. Also, we might not be able to fetch new ones, in 
which case we give up. Is this valid behavior? Or, should we first use the 
initially fetched tokens and re-fetch only when we run out of them?

{code}
  Token<?>[] tokens = fs.addDelegationTokens(null, null);
  if (tokens.length == 0) {
    throw new IOException("addDelegationTokens returned no tokens");
  }
  fs.setDelegationToken(tokens[0]);
{code}

Taking the above into consideration, along with the goal of further decoupling 
DelegationTokenRenewer from the actual FS-specific details of 
renewal/refetching/cancellation, I propose the following (a rough sketch of the 
proposed shape follows the list). Please advise appropriately.
# TokenRenewer should be an interface (it is currently a fully abstract class).
# Create a new abstract class TokenManager that holds a list (queue) of tokens, 
their kind, renewal period, a TokenRenewer implementation, and an abstract 
method #manage() to manage (renew/re-fetch/cancel) the tokens as appropriate.
# Each FileSystem (WebHdfs and Hftp) has its own implementation (a local 
extended class) of the TokenManager. 
# DelegationTokenRenewer allows registering and de-registering TokenManagers. 
FileSystems call register() at init(), and de-register() at close().
# DelegationTokenRenewer stores the registered TokenManagers in its DelayQueue. 
On a TokenManager's turn, the manage() method is invoked. If the manage fails, 
the TokenManager is automatically de-registered.
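
A rough sketch of what the proposed TokenManager shape could look like (names 
and fields are illustrative, not a patch):
{code}
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.security.token.Token;

// Implements Delayed so DelegationTokenRenewer can keep it in a DelayQueue.
public abstract class TokenManager implements Delayed {
  protected final Queue<Token<?>> tokens = new ConcurrentLinkedQueue<Token<?>>();
  protected long renewalPeriodMs;
  protected long nextRenewalTime;

  /** Renew, re-fetch, or cancel the managed tokens as appropriate. */
  public abstract void manage() throws IOException;

  @Override
  public long getDelay(TimeUnit unit) {
    return unit.convert(nextRenewalTime - System.currentTimeMillis(),
        TimeUnit.MILLISECONDS);
  }

  @Override
  public int compareTo(Delayed other) {
    long d = getDelay(TimeUnit.MILLISECONDS)
        - other.getDelay(TimeUnit.MILLISECONDS);
    return d < 0 ? -1 : (d > 0 ? 1 : 0);
  }
}
{code}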


 DelegationTokenRenewer should be Singleton
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch, 
 hadoop-8852-v1.patch


 Updated description:
 DelegationTokenRenewer should be Singleton - the instance and renewer threads 
 should be created/started lazily. The filesystems using the renewer shouldn't 
 need to explicitly start/stop the renewer, and only register/de-register for 
 token renewal.
 Original issue:
 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-8877) Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for clarity

2012-10-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla moved MAPREDUCE-4702 to HADOOP-8877:
-

Key: HADOOP-8877  (was: MAPREDUCE-4702)
Project: Hadoop Common  (was: Hadoop Map/Reduce)

 Rename o.a.h.security.token.Token.TrivialRenewer to UnmanagedRenewer for 
 clarity
 

 Key: HADOOP-8877
 URL: https://issues.apache.org/jira/browse/HADOOP-8877
 Project: Hadoop Common
  Issue Type: Wish
Reporter: Karthik Kambatla
Priority: Trivial

 While browsing through the code, I came across the TrivialRenewer. It would 
 definitely be easier to comprehend if we renamed it to UnmanagedRenewer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8830) org.apache.hadoop.security.authentication.server.AuthenticationFilter might be called twice, causing kerberos replay errors

2012-10-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469025#comment-13469025
 ] 

Alejandro Abdelnur commented on HADOOP-8830:


Moritz,

Do you mean that there are effectively 2 HTTP requests?  If so, this happens 
when the initial request is not authenticated: a NEGOTIATE response is sent 
back, which triggers the SPNEGO/Kerberos authentication on the client. After a 
successful authentication a signed cookie is issued and used for subsequent 
requests.



 org.apache.hadoop.security.authentication.server.AuthenticationFilter might 
 be called twice, causing kerberos replay errors
 ---

 Key: HADOOP-8830
 URL: https://issues.apache.org/jira/browse/HADOOP-8830
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Moritz Moeller

 AuthenticationFilter.doFilter is called twice (not sure if that is 
 intentional or not).
 The second time it is called the ServletRequest is already authenticated, 
 i.e. httpRequest.getRemoteUser() returns non-null info.
 If the Kerberos authentication is triggered a second time, it'll return a 
 replay attack exception.
 I solved this by adding an if (httpRequest.getRemoteUser() == null) check at 
 the very beginning of doFilter.
 Alternatively one can set an attribute on the request, or figure out why 
 doFilter is called twice.
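
For illustration, the workaround described above could look roughly like this 
(a sketch only, not the actual AuthenticationFilter code):
{code}
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Only trigger SPNEGO when the request has not already been authenticated by
// an earlier pass through the filter chain.
public class SkipIfAuthenticatedFilter implements Filter {
  public void init(FilterConfig conf) {}
  public void destroy() {}

  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest httpRequest = (HttpServletRequest) request;
    if (httpRequest.getRemoteUser() == null) {
      // ... SPNEGO/Kerberos negotiation would happen here ...
    }
    chain.doFilter(request, response);
  }
}
{code}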

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8878) uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail

2012-10-03 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-8878:
---

 Summary: uppercase hostname causes hadoop dfs calls with webhdfs 
filesystem to fail
 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta


This was noticed on a secure cluster where the namenode had an upper case 
hostname and the following command was issued

hadoop dfs -ls webhdfs://NN:PORT/PATH

the above command failed because delegation token retrieval failed.

Upon looking at the Kerberos logs it was determined that we tried to get the 
ticket for a Kerberos principal with an upper-case hostname, and that host did 
not exist in Kerberos. We should convert the hostnames to lower case. Take a 
look at HADOOP-7988, where the same fix was applied to a different class.

I have noticed this issue exists on branch-1. Will investigate trunk and 
branch-2 and update accordingly.
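
For illustration, the proposed normalization might look like the following 
sketch (a hypothetical helper, not the attached patch):
{code}
import java.util.Locale;

// Normalize the namenode host to lower case before building the Kerberos
// service principal, as HADOOP-7988 did in another class.
public class WebHdfsPrincipalSketch {
  static String httpPrincipal(String host, String realm) {
    String lowerHost = host.toLowerCase(Locale.US);  // "NN.EXAMPLE.COM" -> "nn.example.com"
    return "HTTP/" + lowerHost + "@" + realm;
  }
}
{code}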

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail

2012-10-03 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Attachment: HADOOP-8878.branch-1.patch

patch for branch-1

 uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail
 --

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail when security is on

2012-10-03 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Summary: uppercase hostname causes hadoop dfs calls with webhdfs filesystem 
to fail when security is on  (was: uppercase hostname causes hadoop dfs calls 
with webhdfs filesystem to fail)

 uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail 
 when security is on
 --

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail when security is on

2012-10-03 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Affects Version/s: 3.0.0

 uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail 
 when security is on
 --

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail when security is on

2012-10-03 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Attachment: HADOOP-8878.patch

patch for trunk

 uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail 
 when security is on
 --

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem to fail when security is on

2012-10-03 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Summary: uppercase namenode hostname causes hadoop dfs calls with webhdfs 
filesystem to fail when security is on  (was: uppercase hostname causes hadoop 
dfs calls with webhdfs filesystem to fail when security is on)

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 to fail when security is on
 ---

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper-case 
 hostname and the following command was issued:
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 The above command failed because delegation token retrieval failed.
 The Kerberos logs showed that we had requested a ticket for a Kerberos 
 principal with an upper-case hostname, and that host does not exist in 
 Kerberos. We should convert the hostnames to lower case. See HADOOP-7988, 
 where the same fix was applied to a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail when security is on

2012-10-03 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Status: Patch Available  (was: Open)

 uppercase hostname causes hadoop dfs calls with webhdfs filesystem to fail 
 when security is on
 --

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper-case 
 hostname and the following command was issued:
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 The above command failed because delegation token retrieval failed.
 The Kerberos logs showed that we had requested a ticket for a Kerberos 
 principal with an upper-case hostname, and that host does not exist in 
 Kerberos. We should convert the hostnames to lower case. See HADOOP-7988, 
 where the same fix was applied to a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem to fail when security is on

2012-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469077#comment-13469077
 ] 

Hadoop QA commented on HADOOP-8878:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547655/HADOOP-8878.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1555//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1555//console

This message is automatically generated.

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 to fail when security is on
 ---

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper-case 
 hostname and the following command was issued:
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 The above command failed because delegation token retrieval failed.
 The Kerberos logs showed that we had requested a ticket for a Kerberos 
 principal with an upper-case hostname, and that host does not exist in 
 Kerberos. We should convert the hostnames to lower case. See HADOOP-7988, 
 where the same fix was applied to a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8879:
--

 Summary: TestUserGroupInformation fails on Windows when runas 
Administrator
 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha


User names are case-insensitive on Windows, and whoami returns administrator 
instead of Administrator, causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8879:
---

Priority: Minor  (was: Major)

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Priority: Minor

 User names are case-insensitive on Windows, and whoami returns administrator 
 instead of Administrator, causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8879:
---

Attachment: HADOOP-8879.branch-1-win.1.patch

Attaching a quick fix that normalizes the user name to lower case on Windows.
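
A rough illustration of that normalization (assumptions: the login name comes from whoami, and Shell.WINDOWS is used as the platform flag; the helper itself is hypothetical, not the attached patch):
{code}
import java.util.Locale;

import org.apache.hadoop.util.Shell;

public class UserNameNormalizer {
  /**
   * Windows treats "Administrator" and "administrator" as the same account,
   * so compare and store the lower-cased form there; leave Unix names alone.
   */
  public static String normalize(String loginName) {
    return Shell.WINDOWS ? loginName.toLowerCase(Locale.US) : loginName;
  }
}
{code}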

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Priority: Minor
 Attachments: HADOOP-8879.branch-1-win.1.patch


 User names are case-insensitive on Windows, and whoami returns administrator 
 instead of Administrator, causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469129#comment-13469129
 ] 

Hadoop QA commented on HADOOP-8879:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12547668/HADOOP-8879.branch-1-win.1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1556//console

This message is automatically generated.

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Priority: Minor
 Attachments: HADOOP-8879.branch-1-win.1.patch


 User names are case-insensitive on Windows, and whoami returns administrator 
 instead of Administrator, causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8880) Missing jersey jars as dependency in the pom causes hive tests to fail

2012-10-03 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-8880:
--

 Summary: Missing jersey jars as dependency in the pom causes hive 
tests to fail
 Key: HADOOP-8880
 URL: https://issues.apache.org/jira/browse/HADOOP-8880
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


ivy.xml includes the dependency, whereas the pom template has not been updated 
with the same dependency.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-03 Thread Senthil V Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthil V Kumar updated HADOOP-8804:


Attachment: HADOOP-8804-trunk.patch
HADOOP-8804-1.1.patch

Incorporating Aaron's comment

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg 0 Hadoop Map/Reduce 
 Administration and NameNode '0.0.0.0:18021' (active). Let's improve them.
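
One plausible shape for such an improvement, sketched under the assumption that the UI falls back to the local host name whenever the configured address is the wildcard; the helper below is illustrative, not the attached patch:
{code}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class UiAddressFormatter {
  /** Returns an address for display, substituting the local host name for 0.0.0.0. */
  public static String forDisplay(InetSocketAddress addr) throws UnknownHostException {
    InetAddress ip = addr.getAddress();
    String host = (ip != null && ip.isAnyLocalAddress())
        ? InetAddress.getLocalHost().getCanonicalHostName() // wildcard bind
        : addr.getHostName();
    return host + ":" + addr.getPort();
  }
}
{code}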

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469149#comment-13469149
 ] 

Suresh Srinivas commented on HADOOP-8879:
-

Bikas, please do not submit the patch meant for branch-1-win; Jenkins cannot run 
tests against it. Also, please mark Affects Versions as 1-win for 
branch-1-win related activities.

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Priority: Minor
 Attachments: HADOOP-8879.branch-1-win.1.patch


 User names are case-insensitive on Windows, and whoami returns administrator 
 instead of Administrator, causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469150#comment-13469150
 ] 

Suresh Srinivas commented on HADOOP-8879:
-

+1 for the patch.

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Priority: Minor
 Attachments: HADOOP-8879.branch-1-win.1.patch


 User names are case-insensitive on Windows, and whoami returns administrator 
 instead of Administrator, causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8879:


  Component/s: test
Affects Version/s: 1-win
 Assignee: Bikas Saha

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Bikas Saha
Assignee: Bikas Saha
Priority: Minor
 Attachments: HADOOP-8879.branch-1-win.1.patch


 User names are case-insensitive on Windows, and whoami returns administrator 
 instead of Administrator, causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8879:


   Resolution: Fixed
Fix Version/s: 1-win
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch. Thank you Bikas.

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Bikas Saha
Assignee: Bikas Saha
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-8879.branch-1-win.1.patch


 User names are case-insensitive on Windows, and whoami returns administrator 
 instead of Administrator, causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-03 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8881:
---

Attachment: HADOOP-8881.patch

 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true, hadoop client invocations print a log 
 message on the terminal when the keystores are initialized; switching the 
 message to debug will suppress it by default.
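
A sketch of the intended log-level change, assuming commons-logging as used elsewhere in Hadoop; the class, method, and message text here are illustrative rather than the actual FileBasedKeyStoresFactory code:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class KeyStoresInitLogging {
  private static final Log LOG = LogFactory.getLog(KeyStoresInitLogging.class);

  void logKeystoreLoaded(String mode, String location) {
    // Demoted from LOG.info(...): client terminals stay quiet unless
    // debug logging is explicitly enabled for this class.
    if (LOG.isDebugEnabled()) {
      LOG.debug(mode + " loaded keystore: " + location);
    }
  }
}
{code}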

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-03 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-8881:
--

 Summary: FileBasedKeyStoresFactory initialization logging should 
be debug not info
 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha
 Attachments: HADOOP-8881.patch

When hadoop.ssl.enabled is set to true, hadoop client invocations print a log 
message on the terminal when the keystores are initialized; switching the 
message to debug will suppress it by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-03 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8881:
---

Status: Patch Available  (was: Open)

 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true, hadoop client invocations print a log 
 message on the terminal when the keystores are initialized; switching the 
 message to debug will suppress it by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8880) Missing jersey jars as dependency in the pom causes hive tests to fail

2012-10-03 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HADOOP-8880:
---

Attachment: HADOOP-8880.patch

 Missing jersey jars as dependency in the pom causes hive tests to fail
 --

 Key: HADOOP-8880
 URL: https://issues.apache.org/jira/browse/HADOOP-8880
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HADOOP-8880.patch


 ivy.xml includes the dependency, whereas the pom template has not been 
 updated with the same dependency.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469164#comment-13469164
 ] 

Bikas Saha commented on HADOOP-8847:


I am going to restore the old behavior for the non-Windows case. I don't see much 
value in re-implementing all the idiosyncrasies and features of Unix tar in Java. 
The Java implementation was meant to be a good-enough implementation for the use 
case of users submitting tar archives as distributed cache files. Some tests 
validate this functionality, and they were failing on Windows. Most likely, users 
will submit simple tar files to the distributed cache, and Windows users will 
likely submit zip files rather than tar files since tar is not native to Windows. 
Hence I will re-submit the patch so that the Java code is used only on Windows.
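
A minimal sketch of that platform split, assuming Shell.WINDOWS is available as the platform flag; unTarUsingJava and unTarUsingTar are placeholders for the Java extractor and the existing spawn-tar implementation:
{code}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.util.Shell;

public class UnTarDispatch {
  /** Uses the Java extractor only on Windows; keeps the native tar path elsewhere. */
  public static void unTar(File inFile, File untarDir) throws IOException {
    if (Shell.WINDOWS) {
      unTarUsingJava(inFile, untarDir);  // no tar binary available by default
    } else {
      unTarUsingTar(inFile, untarDir);   // preserve existing Unix behavior
    }
  }

  private static void unTarUsingJava(File inFile, File untarDir) throws IOException {
    // placeholder for the Java-based extraction
  }

  private static void unTarUsingTar(File inFile, File untarDir) throws IOException {
    // placeholder for spawning the tar utility
  }
}
{code}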

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns tar utility to do the work. Tar may not be 
 present on all platforms by default eg. Windows. So changing this to use JAVA 
 API's would help make it more cross-platform. FileUtil.unZip() uses the same 
 approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469166#comment-13469166
 ] 

Todd Lipcon commented on HADOOP-8881:
-

+1

 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true, hadoop client invocations print a log 
 message on the terminal when the keystores are initialized; switching the 
 message to debug will suppress it by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469169#comment-13469169
 ] 

Hadoop QA commented on HADOOP-8804:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12547675/HADOOP-8804-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1557//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1557//console

This message is automatically generated.

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (ie the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. Eg 0 Hadoop Map/Reduce 
 Administration and NameNode '0.0.0.0:18021' (active). Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469172#comment-13469172
 ] 

Hadoop QA commented on HADOOP-8881:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547677/HADOOP-8881.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1558//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1558//console

This message is automatically generated.

 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true, hadoop client invocations print a log 
 message on the terminal when the keystores are initialized; switching the 
 message to debug will suppress it by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira