[jira] [Commented] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528408#comment-14528408
 ] 

Kai Zheng commented on HADOOP-11920:


HADOOP-11921 was opened to enhance the tests separately.

 Refactor some codes for erasure coders
 --

 Key: HADOOP-11920
 URL: https://issues.apache.org/jira/browse/HADOOP-11920
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11920-v1.patch, HADOOP-11920-v2.patch


 While working on native erasure coders and also HADOOP-11847, it was found 
 that in some places the code could be refined a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11920:
---
Summary: Refactor some codes for erasure coders  (was: Refactor erasure 
coders and enhance the tests)

 Refactor some codes for erasure coders
 --

 Key: HADOOP-11920
 URL: https://issues.apache.org/jira/browse/HADOOP-11920
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11920-v1.patch


 While working on native erasure coders and also HADOOP-11847, it was found 
 that in some places the code could be refined a bit and the tests for erasure 
 coders enhanced. Cases:
 * Better to test that erasure coders can be repeatedly reused;
 * Better to test that erasure coders can be called with two buffer types (byte 
 array version and direct ByteBuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10387) Misspelling of threshold in log4j.properties for tests in Hadoop-common

2015-05-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-10387:
--
Summary: Misspelling of threshold in log4j.properties for tests in 
Hadoop-common  (was: Misspelling of threshold in log4j.properties for tests)

 Misspelling of threshold in log4j.properties for tests in Hadoop-common
 ---

 Key: HADOOP-10387
 URL: https://issues.apache.org/jira/browse/HADOOP-10387
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, test
Affects Versions: 2.3.0
Reporter: Kenji Kikushima
Priority: Minor
 Attachments: HADOOP-10387-2.patch, HADOOP-10387.patch


 The log4j.properties file for tests contains the misspelling {{log4j.threshhold}}.
 We should use the correct {{log4j.threshold}}.
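
 For reference, a representative fix in the test log4j.properties (the 
 surrounding layout of that file is assumed, not quoted from the patch):
 {noformat}
 # before: misspelled key, so the intended threshold is never applied
 log4j.threshhold=ALL
 # after
 log4j.threshold=ALL
 {noformat}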



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11917) test-patch.sh should work with ${BASEDIR}/patchprocess setups

2015-05-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528425#comment-14528425
 ] 

Sean Busbey commented on HADOOP-11917:
--

{quote}
-${GIT} clean -xdf
+${GIT} clean -xdf -e patchprocess/
{quote}

If someone passes in the cli arg for PATCH_DIR and it doesn't happen to be 
${BASE_DIR}/patchprocess, then this won't keep them safe. Probably best to 
add a warning to the cli arg that says this should not be within ${BASE_DIR}, 
and leave this exclusion as an undocumented feature. It's easy enough to set up 
a jenkins workspace where the git repo and the patch dir sit next to each other 
(for example, as done in the [work-in-progress nifi 
job|https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-NIFI-Build/])

 test-patch.sh should work with ${BASEDIR}/patchprocess setups
 -

 Key: HADOOP-11917
 URL: https://issues.apache.org/jira/browse/HADOOP-11917
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-11917.patch


 There are a bunch of problems with this kind of setup: configuration and code 
 changes to test-patch.sh are required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11566) Add tests and fix for erasure coders to recover erased parity units

2015-05-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528309#comment-14528309
 ] 

Kai Zheng commented on HADOOP-11566:


This issue is based on HADOOP-11920.

 Add tests and fix for erasure coders to recover erased parity units 
 

 Key: HADOOP-11566
 URL: https://issues.apache.org/jira/browse/HADOOP-11566
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11566-v1.patch


 As discussed with [~zhz] in HADOOP-11542, a follow-up JIRA is planned to 
 enhance the tests for parity chunks as well. Like erasedDataIndexes, an 
 erasedParityIndexes parameter will be added to specify which parity units are 
 to be erased and then recovered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11566) Add tests and fix for erasure coders to recover erased parity units

2015-05-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11566:
---
Attachment: HADOOP-11566-v1.patch

Uploaded a patch adding tests for erasure of parity units.

 Add tests and fix for erasure coders to recover erased parity units 
 

 Key: HADOOP-11566
 URL: https://issues.apache.org/jira/browse/HADOOP-11566
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11566-v1.patch


 As discussed with [~zhz] in HADOOP-11542, a follow-up JIRA is planned to 
 enhance the tests for parity chunks as well. Like erasedDataIndexes, an 
 erasedParityIndexes parameter will be added to specify which parity units are 
 to be erased and then recovered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11916) TestStringUtils#testLowerAndUpperStrings failed on MAC due to a JVM bug

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528330#comment-14528330
 ] 

Hudson commented on HADOOP-11916:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #918 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/918/])
HADOOP-11916. TestStringUtils#testLowerAndUpperStrings failed on MAC due to a 
JVM bug. Contributed by Ming Ma. (ozawa: rev 
338e88a19eeb01364c7f5bcdc5f4b5c35d53852d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java


 TestStringUtils#testLowerAndUpperStrings failed on MAC due to a JVM bug
 ---

 Key: HADOOP-11916
 URL: https://issues.apache.org/jira/browse/HADOOP-11916
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11916.patch


 The test fails with the below exception. It turns out there is a JVM bug for 
 MAC, http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8047340.
 {noformat}
 testLowerAndUpperStrings(org.apache.hadoop.util.TestStringUtils)  Time 
 elapsed: 0.205 sec   ERROR!
 java.lang.Error: posix_spawn is not a supported process launch mechanism on 
 this platform.
   at java.lang.UNIXProcess$1.run(UNIXProcess.java:104)
   at java.lang.UNIXProcess$1.run(UNIXProcess.java:93)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.lang.UNIXProcess.<clinit>(UNIXProcess.java:91)
   at java.lang.ProcessImpl.start(ProcessImpl.java:130)
   at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:486)
   at org.apache.hadoop.util.Shell.run(Shell.java:456)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
   at org.apache.hadoop.util.Shell.isSetsidSupported(Shell.java:391)
   at org.apache.hadoop.util.Shell.<clinit>(Shell.java:381)
   at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
   at 
 org.apache.hadoop.util.TestStringUtils.testLowerAndUpperStrings(TestStringUtils.java:432)
 {noformat}
 Perhaps we can disable this test case on MAC.
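
 One possible way to do that (a sketch only, not necessarily the committed 
 fix) is to skip the case with JUnit's Assume when the OS is Mac OS X:
 {code}
 import org.junit.Assume;
 import org.junit.Test;

 public class TestStringUtilsSketch {
   @Test
   public void testLowerAndUpperStrings() {
     // Skip on Mac OS X, where JDK bug 8047340 breaks process launching
     // (posix_spawn) during Shell/StringUtils class initialization.
     Assume.assumeFalse(System.getProperty("os.name").startsWith("Mac"));
     // ... original assertions would run here on other platforms ...
   }
 }
 {code}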



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11920:
---
Description: 
While working on native erasure coders and also HADOOP-11847, it was found 
that in some places the code could be refined a bit.


  was:
While working on native erasure coders and also HADOOP-11847, it was found 
that in some places the code could be refined a bit and the tests for erasure 
coders enhanced. Cases:
* Better to test that erasure coders can be repeatedly reused;
* Better to test that erasure coders can be called with two buffer types (byte 
array version and direct ByteBuffer version).



 Refactor some codes for erasure coders
 --

 Key: HADOOP-11920
 URL: https://issues.apache.org/jira/browse/HADOOP-11920
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11920-v1.patch


 While working on native erasure coders and also HADOOP-11847, it was found 
 that in some places the code could be refined a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11921) Enhance tests for erasure coders

2015-05-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11921:
---
Description: 
While working on native coders, it was found that the tests for erasure 
coders should be enhanced to:
* Test that erasure coders can be repeatedly reused;
* Test that erasure coders can be called with two buffer types (byte array 
version and direct ByteBuffer version).

  was:
It would be better to enhance the tests for erasure coders to:
* Test that erasure coders can be repeatedly reused;
* Test that erasure coders can be called with two buffer types (byte array 
version and direct ByteBuffer version).


 Enhance tests for erasure coders
 

 Key: HADOOP-11921
 URL: https://issues.apache.org/jira/browse/HADOOP-11921
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng

 While working on native coders, it was found that the tests for erasure 
 coders should be enhanced to:
 * Test that erasure coders can be repeatedly reused;
 * Test that erasure coders can be called with two buffer types (byte array 
 version and direct ByteBuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11921) Enhance tests for erasure coders

2015-05-05 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11921:
--

 Summary: Enhance tests for erasure coders
 Key: HADOOP-11921
 URL: https://issues.apache.org/jira/browse/HADOOP-11921
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


It would be better to enhance the tests for erasure coders to:
* Test that erasure coders can be repeatedly reused;
* Test that erasure coders can be called with two buffer types (byte array 
version and direct ByteBuffer version).
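
A rough JUnit-style sketch of these two cases follows. The RawCoder 
interface and all sizes here are hypothetical, for illustration only; the 
real coder API being developed under the parent work may differ:
{code}
import java.nio.ByteBuffer;

// Hypothetical coder interface, for illustration only.
interface RawCoder {
  void encode(byte[][] inputs, byte[][] outputs);
  void encode(ByteBuffer[] inputs, ByteBuffer[] outputs);
}

class CoderTestSketch {
  static void testReuseAndBothBufferTypes(RawCoder coder) {
    // Case 1: the same coder instance should be safely reusable.
    byte[][] in = new byte[6][1024];
    byte[][] out = new byte[3][1024];
    for (int i = 0; i < 3; i++) {
      coder.encode(in, out);
    }
    // Case 2: the coder should also accept direct ByteBuffers.
    ByteBuffer[] dIn = new ByteBuffer[6];
    ByteBuffer[] dOut = new ByteBuffer[3];
    for (int i = 0; i < dIn.length; i++) dIn[i] = ByteBuffer.allocateDirect(1024);
    for (int i = 0; i < dOut.length; i++) dOut[i] = ByteBuffer.allocateDirect(1024);
    coder.encode(dIn, dOut);
  }
}
{code}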



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11920:
---
Attachment: HADOOP-11920-v2.patch

Uploaded a patch that only does the code refactoring.

 Refactor some codes for erasure coders
 --

 Key: HADOOP-11920
 URL: https://issues.apache.org/jira/browse/HADOOP-11920
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11920-v1.patch, HADOOP-11920-v2.patch


 While working on native erasure coders and also HADOOP-11847, it was found 
 that in some places the code could be refined a bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11922) Misspelling of threshold in log4j.properties for tests in hadoop-nfs

2015-05-05 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11922:
-

 Summary: Misspelling of threshold in log4j.properties for tests in 
hadoop-nfs
 Key: HADOOP-11922
 URL: https://issues.apache.org/jira/browse/HADOOP-11922
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Priority: Minor


The log4j.properties file for tests contains the misspelling {{log4j.threshhold}}.
We should use the correct {{log4j.threshold}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11566) Add tests and fix for erasure coders to recover erased parity units

2015-05-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528433#comment-14528433
 ] 

Kai Zheng commented on HADOOP-11566:


It will also be based on the work of HADOOP-11921.

 Add tests and fix for erasure coders to recover erased parity units 
 

 Key: HADOOP-11566
 URL: https://issues.apache.org/jira/browse/HADOOP-11566
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11566-v1.patch


 As discussed with [~zhz] in HADOOP-11542, a follow-up JIRA is planned to 
 enhance the tests for parity chunks as well. Like erasedDataIndexes, an 
 erasedParityIndexes parameter will be added to specify which parity units are 
 to be erased and then recovered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11920) Refactor erasure coders and enhance the tests

2015-05-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528377#comment-14528377
 ] 

Kai Zheng commented on HADOOP-11920:


Oops, the resultant patch is still large. I will split it further and 
re-purpose this issue to cover only the refactoring work.

 Refactor erasure coders and enhance the tests
 -

 Key: HADOOP-11920
 URL: https://issues.apache.org/jira/browse/HADOOP-11920
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11920-v1.patch


 While working on native erasure coders and also HADOOP-11847, it was found 
 that in some places the code could be refined a bit and the tests for erasure 
 coders enhanced. Cases:
 * Better to test that erasure coders can be repeatedly reused;
 * Better to test that erasure coders can be called with two buffer types (byte 
 array version and direct ByteBuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11921) Enhance tests for erasure coders

2015-05-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11921:
---
Attachment: HADOOP-11921-v1.patch

Uploaded a patch enhancing the tests for erasure coders.

 Enhance tests for erasure coders
 

 Key: HADOOP-11921
 URL: https://issues.apache.org/jira/browse/HADOOP-11921
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11921-v1.patch


 While working on native coders, it was found that the tests for erasure 
 coders should be enhanced to:
 * Test that erasure coders can be repeatedly reused;
 * Test that erasure coders can be called with two buffer types (byte array 
 version and direct ByteBuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11912) Extra configuration key used in TraceUtils should respect prefix

2015-05-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529598#comment-14529598
 ] 

Colin Patrick McCabe commented on HADOOP-11912:
---

+1.  Thanks, [~iwasakims].

 Extra configuration key used in TraceUtils should respect prefix
 

 Key: HADOOP-11912
 URL: https://issues.apache.org/jira/browse/HADOOP-11912
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-11912.001.patch


 HDFS-8213 added prefix handling to the configuration used by tracing, but the 
 extra key-value pairs in the configuration returned by TraceUtils#wrapHadoopConf 
 do not respect this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11926) test-patch.sh mv does wrong math

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11926:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks!
committed

 test-patch.sh mv does wrong math
 

 Key: HADOOP-11926
 URL: https://issues.apache.org/jira/browse/HADOOP-11926
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0

 Attachments: HADOOP-11928.00.patch


 cleanup_and_exit uses the wrong result code check and fails to mv the 
 patchdir when it should, and mv's it when it shouldn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11883) Checkstyle Results are Different Between Command Line and Eclipse

2015-05-05 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529627#comment-14529627
 ] 

Gera Shegalov commented on HADOOP-11883:


I heard [~sjlee0] is an Eclipse user :)

 Checkstyle Results are Different Between Command Line and Eclipse
 -

 Key: HADOOP-11883
 URL: https://issues.apache.org/jira/browse/HADOOP-11883
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
  Labels: build
 Attachments: HADOOP-11883.1.patch, HADOOP-11883.2.patch, 
 HADOOP-11883.3.patch


 If I run the checkstyle plugin from within Eclipse, I want it to apply the same 
 rules as when run from the command line.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8643) hadoop-client should exclude hadoop-annotations from hadoop-common dependency

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8643:
-
Labels: BB2015-05-TBR  (was: )

 hadoop-client should exclude hadoop-annotations from hadoop-common dependency
 -

 Key: HADOOP-8643
 URL: https://issues.apache.org/jira/browse/HADOOP-8643
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: hadoop-8643.txt


 When reviewing HADOOP-8370 I missed that changing the scope of 
 hadoop-annotations to compile in hadoop-common would make hadoop-annotations 
 bubble up in hadoop-client. Because of this we need to exclude it explicitly.
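
 A sketch of the intended exclusion, assuming hadoop-client declares the 
 hadoop-common dependency directly in its pom.xml:
 {noformat}
 <dependency>
   <groupId>org.apache.hadoop</groupId>
   <artifactId>hadoop-common</artifactId>
   <exclusions>
     <exclusion>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-annotations</artifactId>
     </exclusion>
   </exclusions>
 </dependency>
 {noformat}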



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11244) The HCFS contract test testRenameFileBeingAppended doesn't do a rename

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11244:
--
Labels: BB2015-05-TBR  (was: )

 The HCFS contract test testRenameFileBeingAppended doesn't do a rename
 --

 Key: HADOOP-11244
 URL: https://issues.apache.org/jira/browse/HADOOP-11244
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Noah Watkins
Assignee: jay vyas
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11244.patch, HADOOP-11244.patch


 The test AbstractContractAppendTest::testRenameFileBeingAppended appears to 
 assert the behavior of renaming a file opened for writing. However, the 
 assertion {{assertPathExists("renamed destination file does not exist", 
 renamed)}} fails because it appears that the file "renamed" is never created 
 (ostensibly it should be the target file that has been renamed).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7021) MapWritable NullPointerException

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7021:
-
Labels: BB2015-05-TBR  (was: )

 MapWritable NullPointerException
 

 Key: HADOOP-7021
 URL: https://issues.apache.org/jira/browse/HADOOP-7021
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.1, 0.21.0
 Environment: Hadoop 0.20.1, Centos
Reporter: John Lee
Assignee: John Lee
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-7021.patch, HADOOP-7021.patch, HADOOP-7021.patch, 
 HADOOP-7021.patch


 We have encountered a NullPointerException when we use MapWritable with 
 custom Writable objects.
 The root cause is that the counter newClasses in AbstractMapWritable is allowed 
 to get out of sync with the id-to-class mapping tables when addToMap(Class) is 
 called.  We have a patch to AbstractMapWritable.addToMap(Class) that handles 
 the case where newClasses gets out of sync with the id-to-class mapping tables, 
 and adds a serialization optimization to minimize the number of class names 
 written out per MapWritable object.
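
 A minimal sketch of the kind of defensive registration involved; the field 
 and method names below only approximate AbstractMapWritable's internals:
 {code}
 import java.util.HashMap;
 import java.util.Map;

 // Illustration only, not the actual AbstractMapWritable code.
 class ClassIdRegistry {
   private final Map<Class<?>, Byte> classToIdMap = new HashMap<>();
   private final Map<Byte, Class<?>> idToClassMap = new HashMap<>();
   private byte newClasses = 0;

   synchronized void addToMap(Class<?> clazz) {
     if (classToIdMap.containsKey(clazz)) {
       return; // already registered: don't let the counter drift out of sync
     }
     byte id = ++newClasses;
     classToIdMap.put(clazz, id);
     idToClassMap.put(id, clazz);
   }
 }
 {code}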



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10329) Fully qualified URIs are inconsistent and sometimes break in hadoop conf files

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10329:
--
Labels: BB2015-05-TBR  (was: )

 Fully qualified URIs are inconsistent and sometimes break in hadoop conf files
 --

 Key: HADOOP-10329
 URL: https://issues.apache.org/jira/browse/HADOOP-10329
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.2.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10329.1.patch, HADOOP-10329.2.patch, 
 HADOOP-10329.3.patch


 When specifying paths in the *-site.xml files, some are required to be fully 
 qualified, while others (specifically hadoop.tmp.dir) break when a fully 
 qualified uri is used.
 Example:
 If I set hadoop.tmp.dir in core-site to file:///something it'll create a 
 directory literally named "file:" in my $PWD.
 {noformat}
 <property>
   <name>hadoop.tmp.dir</name>
   <value>file:///grid/a/tmp/hadoop-${user.name}</value>
 </property>
 {noformat}
 {noformat}
 [tthompso@test ~]$ tree file\:/
 file:/
 └── grid
 └── a
 └── tmp
 └── hadoop-tthompso
 {noformat}
 Other places, like the datanode or the nodemanager, will complain if I don't 
 use fully qualified URIs
 {noformat}
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>/grid/a/dfs-data/bs</value>
 </property>
 {noformat}
 {noformat}
 WARN org.apache.hadoop.hdfs.server.common.Util: Path /grid/a/dfs-data/bs 
 should be specified as a URI in configuration files. Please update hdfs 
 configuration.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10544) Find command - add operator functions to find command

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10544:
--
Labels: BB2015-05-TBR  (was: )

 Find command - add operator functions to find command
 -

 Key: HADOOP-10544
 URL: https://issues.apache.org/jira/browse/HADOOP-10544
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Jonathan Allen
Assignee: Jonathan Allen
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10544.patch, HADOOP-10544.patch, 
 HADOOP-10544.patch, HADOOP-10544.patch, HADOOP-10544.patch


 Add operator functions (OR, NOT) to the find command created under 
 HADOOP-8989.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11613:
--
Labels: BB2015-05-TBR  (was: )

 Remove httpclient dependency from hadoop-azure
 --

 Key: HADOOP-11613
 URL: https://issues.apache.org/jira/browse/HADOOP-11613
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
 HADOOP-11613-003.patch, HADOOP-11613.patch


 Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11308) Enable JMX to directly output JSON objects instead JSON strings

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11308:
--
Labels: BB2015-05-TBR  (was: )

 Enable JMX to directly output JSON objects instead JSON strings
 ---

 Key: HADOOP-11308
 URL: https://issues.apache.org/jira/browse/HADOOP-11308
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.5.1
Reporter: Benoy Antony
Assignee: Benoy Antony
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11308.patch, HADOOP-11308.patch


 Currently many JMX beans provide JSON content as strings, and 
 JMXJsonServlet outputs these as JSON strings.  This results in losing the 
 original JSON object structure.
 An example is given below:
 {code}
   "TieredStorageStats" : 
 "{\"ARCHIVE\":{\"capacityTotal\":1498254102528,\"capacityUsed\":12288,\"capacityRemaining\":980102602752,\"blockPoolUsed\":12288,\"nodesInService\":3,\"numBlocks\":0}..."
 {code}
 {code}
   "TieredStorageStats" : 
 {"ARCHIVE":{"capacityTotal":1498254102528,"capacityUsed":12288,"capacityRemaining":980102602752,"blockPoolUsed":12288,"nodesInService":3,"numBlocks":0}...
 {code}
 In the former output {{TieredStorageStats}} maps to a JSON string while in 
 the latter it maps to a JSON object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8143) Change distcp to have -pb on by default

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8143:
-
Labels: BB2015-05-TBR  (was: )

 Change distcp to have -pb on by default
 ---

 Key: HADOOP-8143
 URL: https://issues.apache.org/jira/browse/HADOOP-8143
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Dave Thompson
Assignee: Mithun Radhakrishnan
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-8143.1.patch


 We should have preserve blocksize (-pb) on in distcp by default.
 The checksum check, which is on by default, will always fail if the blocksize 
 is not the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11640) add user defined delimiter support to Configuration

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11640:
--
Labels: BB2015-05-TBR  (was: )

 add user defined delimiter support to Configuration
 ---

 Key: HADOOP-11640
 URL: https://issues.apache.org/jira/browse/HADOOP-11640
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Xiaoshuang LU
Assignee: Xiaoshuang LU
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11640.patch


 As mentioned in org.apache.hadoop.conf.Configuration.getStrings ("Get the 
 comma delimited values of the name property as an array of Strings"), only 
 comma-separated strings can be used.  It would be much better if user-defined 
 separators were supported.
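
 For context, a small sketch of the current comma-only behavior; the 
 delimiter-aware method named below is hypothetical, illustrating what this 
 JIRA proposes rather than an existing API:
 {code}
 import org.apache.hadoop.conf.Configuration;

 public class GetStringsDemo {
   public static void main(String[] args) {
     Configuration conf = new Configuration();
     conf.set("my.list", "a,b,c");
     // Today: values are always split on commas.
     for (String s : conf.getStrings("my.list")) {
       System.out.println(s);  // prints a, b and c on separate lines
     }
     // Proposed (hypothetical): something like
     // conf.getStringsWithDelimiter("my.list", ";")
     // to split on a user-defined separator instead.
   }
 }
 {code}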



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11682) Move the native code for libhadoop into a dedicated directory

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11682:
--
Labels: BB2015-05-TBR  (was: )

 Move the native code for libhadoop into a dedicated directory
 -

 Key: HADOOP-11682
 URL: https://issues.apache.org/jira/browse/HADOOP-11682
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11682.000.patch, HADOOP-11682.001.patch, 
 HADOOP-11682.002.patch, HADOOP-11682.003.patch


 Currently the code of {{libhadoop.so}} lies in 
 {{hadoop-common-project/hadoop-common/src/main/native}}. This jira proposes 
 to move it into 
 {{hadoop-common-project/hadoop-common/src/main/native/libhadoop}} so as to 
 make it easier to add other native code projects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11622) Add support for specifying only groups in kms-acls.xml

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11622:
--
Labels: BB2015-05-TBR  (was: )

 Add support for specifying only groups in kms-acls.xml
 --

 Key: HADOOP-11622
 URL: https://issues.apache.org/jira/browse/HADOOP-11622
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Reporter: Arun Suresh
Assignee: Arun Suresh
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11622.1.patch


 Currently, the ACL specification in kms-acls.xml can take the form:
 {noformat}
 u1,u2 g1,g2
 {noformat}
 with the user and group lists separated by a space.
 Unfortunately there is no way to specify only groups, since the 
 {{Configuration}} object trims the string property values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9752) Latest Ubuntu (13.04) /bin/kill parameter for process group requires a 'double dash': kill -0 -- -pid

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9752:
-
Labels: BB2015-05-TBR  (was: )

 Latest Ubuntu (13.04) /bin/kill parameter for process group requires a 
 'double dash': kill -0 -- -pid
 --

 Key: HADOOP-9752
 URL: https://issues.apache.org/jira/browse/HADOOP-9752
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 2.0.4-alpha, 2.4.1
Reporter: Robert Parker
Assignee: Robert Parker
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9752v1.patch, HADOOP-9752v2.patch, 
 HADOOP-9752v3.patch, HADOOP-9752v4.patch, HADOOP-9752v4.patch


 This changed in Ubuntu 12.10 and later.  It prevents the kill command from 
 executing correctly in Shell.java.
 There is a bug filed against Ubuntu but there is not much activity: 
 https://bugs.launchpad.net/ubuntu/+source/coreutils/+bug/1077796



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9650) Update jetty dependencies

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9650:
-
Labels: BB2015-05-TBR build maven  (was: build maven)

 Update jetty dependencies 
 --

 Key: HADOOP-9650
 URL: https://issues.apache.org/jira/browse/HADOOP-9650
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.6.0
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: BB2015-05-TBR, build, maven
 Attachments: HADOOP-9650.patch, HADOOP-trunk-9650.patch


 Update deprecated jetty 6 dependencies, moving forward to jetty 8.  This 
 enables mvn-rpmbuild on Fedora 18 platforms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11651) Handle kerberos authentication where there is no principal of HTTP/host@REALM

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11651:
--
Labels: BB2015-05-TBR  (was: )

 Handle kerberos authentication where there is no principal of HTTP/host@REALM
 -

 Key: HADOOP-11651
 URL: https://issues.apache.org/jira/browse/HADOOP-11651
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11651-001.patch


 In a testing cluster, the HTTP service principal is just HTTP/hdtst@REALM 
 rather than HTTP/hostname@REALM. In this case, the following exception is 
 thrown on the active HDFS namenode when bootstrapping the standby HDFS namenode:
 2015-02-28,16:08:44,106 WARN 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
 Authentication exception: GSSException: No valid credentials provided 
 (Mechanism level: Failed to find any Kerberos Key)
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos Key)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:412)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos Key)
 at 
 sun.security.jgss.krb5.Krb5AcceptCredential.getInstance(Krb5AcceptCredential.java:95)
 at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:111)
 at 
 sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:178)
 at 
 sun.security.jgss.spnego.SpNegoMechFactory.getCredentialElement(SpNegoMechFactory.java:109)
 at 
 sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:178)
 at sun.security.jgss.GSSCredentialImpl.add(GSSCredentialImpl.java:384)
 at 
 sun.security.jgss.GSSCredentialImpl.<init>(GSSCredentialImpl.java:57)
 at 
 sun.security.jgss.GSSManagerImpl.createCredential(GSSManagerImpl.java:145)
 at 
 org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:363)
 ...
 We think our configuration is a valid use case and we should fix the issue. 
 The attached patch has been tested and it works fine on our testing cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11347) Inconsistent enforcement of umask between FileSystem and FileContext interacting with local file system.

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11347:
--
Labels: BB2015-05-TBR  (was: )

 Inconsistent enforcement of umask between FileSystem and FileContext 
 interacting with local file system.
 

 Key: HADOOP-11347
 URL: https://issues.apache.org/jira/browse/HADOOP-11347
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Chris Nauroth
Assignee: Varun Saxena
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11347.001.patch, HADOOP-11347.002.patch


 The {{FileSystem}} and {{FileContext}} APIs are inconsistent in enforcement 
 of umask for newly created directories.  {{FileContext}} utilizes 
 configuration property {{fs.permissions.umask-mode}} and runs a separate 
 {{chmod}} call to guarantee bypassing the process umask.  This is the 
 expected behavior for Hadoop as discussed in the documentation of 
 {{fs.permissions.umask-mode}}.  For the equivalent {{FileSystem}} APIs, it 
 does not use {{fs.permissions.umask-mode}}.  Instead, the permissions end up 
 getting controlled by the process umask.
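
 A minimal sketch that makes the inconsistency observable against the local 
 file system (paths and the 077 umask here are illustrative):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;

 public class UmaskDemo {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     conf.set("fs.permissions.umask-mode", "077");
     FsPermission perm = new FsPermission((short) 0755);

     // FileContext honors fs.permissions.umask-mode and runs a separate
     // chmod to bypass the process umask.
     FileContext.getLocalFSFileContext(conf)
         .mkdir(new Path("/tmp/fc-dir"), perm, true);

     // Per this report, FileSystem does not consult fs.permissions.umask-mode
     // here, so the resulting permissions are governed by the process umask.
     FileSystem.getLocal(conf).mkdirs(new Path("/tmp/fs-dir"), perm);
   }
 }
 {code}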



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11675) tiny exception log with checking storedBlock is null or not

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11675:
--
Labels: BB2015-05-TBR  (was: )

 tiny exception log with checking storedBlock is null or not
 ---

 Key: HADOOP-11675
 URL: https://issues.apache.org/jira/browse/HADOOP-11675
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11675-001.txt


 Found this log at our product cluster:
 {code}
 2015-03-05,10:33:31,778 ERROR 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
 Compaction failed 
 regionName=xiaomi_device_info_test,ff,1425377429116.41437dc231fe370f1304104a75aad78f.,
  storeName=A, fileCount=7, fileSize=899.7 M (470.7 M, 259.7 M, 75.9 M, 24.4 
 M, 24.8 M, 25.7 M, 18.6 M), priority=23, time=44765894600479
 java.io.IOException: 
 BP-1356983882-10.2.201.14-1359086191297:blk_1211511211_1100144235504 does not 
 exist or is not under Constructionnull
 {code}
 Let's check whether storedBlock is null to make the log prettier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11476) Mark InterfaceStability and InterfaceAudience as stable

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11476:
--
Labels: BB2015-05-TBR  (was: )

 Mark InterfaceStability and InterfaceAudience as stable
 ---

 Key: HADOOP-11476
 URL: https://issues.apache.org/jira/browse/HADOOP-11476
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Abraham Elmahrek
Assignee: Abraham Elmahrek
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11476.0.patch


 InterfaceStability 
 (https://github.com/apache/hadoop/blob/branch-2.6.0/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java)
  and InterfaceAudience 
 (https://github.com/apache/hadoop/blob/branch-2.6.0/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceAudience.java)
  are marked as Evolving, but should not change in reality. Let's mark them 
 as Stable so other projects can use them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7308) Remove unused TaskLogAppender configurations from log4j.properties

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7308:
-
Labels: BB2015-05-TBR  (was: )

 Remove unused TaskLogAppender configurations from log4j.properties
 --

 Key: HADOOP-7308
 URL: https://issues.apache.org/jira/browse/HADOOP-7308
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
  Labels: BB2015-05-TBR
 Attachments: hadoop-7308.txt


 MAPREDUCE-2372 improved TaskLogAppender to no longer need as much wiring in 
 log4j.properties. There are also some old properties in there that are no 
 longer used (e.g. logsRetainHours and noKeepSplits).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11603) Snapshot log can be removed #MetricsSystemImpl.java since all the services will be initialized

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11603:
--
Labels: BB2015-05-TBR  (was: )

 Snapshot log can be removed #MetricsSystemImpl.java since all the services 
 will be initialized
 -

 Key: HADOOP-11603
 URL: https://issues.apache.org/jira/browse/HADOOP-11603
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11603.patch, defaultmetricIntialize-classes.JPG


 Namenode, DataNode, journalnode, ResourceManager, Nodemanager, historyservice, 
 etc. will initialize DefaultMetricsSystem, hence the following log will be 
 logged for all of these classes, which is not correct, since the snapshot 
 feature is related to HDFS and this should be logged only when snapshot is 
 enabled.
 {code}
 LOG.info("Scheduled snapshot period at " + period + " second(s).");
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9844) NPE when trying to create an error message response of RPC

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9844:
-
Labels: BB2015-05-TBR  (was: )

 NPE when trying to create an error message response of RPC
 --

 Key: HADOOP-9844
 URL: https://issues.apache.org/jira/browse/HADOOP-9844
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.1-beta, 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9844-001.patch


 I'm seeing an NPE which is raised when the server is trying to create an 
 error response to send back to the caller and there is no error text.
 The root cause is probably somewhere in SASL, but sending something back to 
 the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9942) Adding keytab login test for UGI using MiniKdc

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9942:
-
Labels: BB2015-05-TBR  (was: )

 Adding keytab login test for UGI using MiniKdc
 --

 Key: HADOOP-9942
 URL: https://issues.apache.org/jira/browse/HADOOP-9942
 Project: Hadoop Common
  Issue Type: Test
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9942-v2.patch, HADOOP-9942.patch


 This will add a keytab login test for UGI by using MiniKdc.
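
 A rough sketch of the kind of test intended (the principal name, paths and 
 overall flow here are illustrative; the actual patch may differ):
 {code}
 import java.io.File;
 import java.util.Properties;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.minikdc.MiniKdc;
 import org.apache.hadoop.security.UserGroupInformation;

 public class KeytabLoginSketch {
   public static void main(String[] args) throws Exception {
     // Start an embedded KDC and create a keytab for a test principal.
     Properties kdcConf = MiniKdc.createConf();
     File workDir = new File("target/kdc");
     workDir.mkdirs();
     MiniKdc kdc = new MiniKdc(kdcConf, workDir);
     kdc.start();
     File keytab = new File(workDir, "foo.keytab");
     kdc.createPrincipal(keytab, "foo");

     // Switch UGI into kerberos mode and log in from the keytab.
     Configuration conf = new Configuration();
     conf.set("hadoop.security.authentication", "kerberos");
     UserGroupInformation.setConfiguration(conf);
     UserGroupInformation ugi = UserGroupInformation
         .loginUserFromKeytabAndReturnUGI("foo@" + kdc.getRealm(),
             keytab.getPath());
     System.out.println("Logged in as " + ugi.getUserName());
     kdc.stop();
   }
 }
 {code}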



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9184) Some reducers failing to write final output file to s3.

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9184:
-
Labels: BB2015-05-TBR  (was: )

 Some reducers failing to write final output file to s3.
 ---

 Key: HADOOP-9184
 URL: https://issues.apache.org/jira/browse/HADOOP-9184
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Jeremy Karn
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9184-branch-0.20.patch, example.pig, 
 hadoop-9184.patch, task_log.txt


 We had a Hadoop job that was running 100 reducers with most of the reducers 
 expected to write out an empty file. When the final output was to an S3 
 bucket we were finding that sometimes we were missing a final part file.  
 This was happening approximately 1 job in 3 (so approximately 1 reducer out 
 of 300 was failing to output the data properly). I've attached the pig script 
 we were using to reproduce the bug.
 After an in depth look and instrumenting the code we traced the problem to 
 moveTaskOutputs in FileOutputCommitter.  
 The code there looked like:
 {code}
 if (fs.isFile(taskOutput)) {
   … do stuff …   
 } else if(fs.getFileStatus(taskOutput).isDir()) {
   … do stuff … 
 }
 {code}
 And what we saw happening is that for the problem jobs neither path was being 
 exercised.  I've attached the task log of our instrumented code.  In this 
 version we added an else statement and printed out the line "THIS SEEMS LIKE 
 WE SHOULD NEVER GET HERE …".
 The root cause of this seems to be an eventual consistency issue with S3.  
 You can see in the log that the first time moveTaskOutputs is called it finds 
 that the taskOutput is a directory.  It goes into the isDir() branch and 
 successfully retrieves the list of files in that directory from S3 (in this 
 case just one file).  This triggers a recursive call to moveTaskOutputs for 
 the file found in the directory.  But in this pass through moveTaskOutput the 
 temporary output file can't be found resulting in both branches of the above 
 if statement being skipped and the temporary file never being moved to the 
 final output location.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9966) Refactor XDR code into XDRReader and XDRWriter

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9966:
-
Labels: BB2015-05-TBR  (was: )

 Refactor XDR code into XDRReader and XDRWriter
 --

 Key: HADOOP-9966
 URL: https://issues.apache.org/jira/browse/HADOOP-9966
 Project: Hadoop Common
  Issue Type: Improvement
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9966.000.patch, HADOOP-9966.001.patch, 
 HADOOP-9966.002.patch


 Several methods in the current XDR class have ambiguous semantics. For 
 example, size() returns the actual size of the internal byte array, yet the 
 actual size of the current buffer is also affected by read requests, which 
 pull data out of the buffer.
 These ambiguous semantics make removing redundant copies on the NFS paths 
 difficult.
 This JIRA proposes to decompose the responsibilities of XDR into two separate 
 classes: XDRReader and XDRWriter. The overall design should closely follow 
 Java's *Reader / *Writer classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10000) HttpServer log link is inaccessible in secure cluster

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10000:
--
Labels: BB2015-05-TBR  (was: )

 HttpServer log link is inaccessible in secure cluster
 -

 Key: HADOOP-10000
 URL: https://issues.apache.org/jira/browse/HADOOP-10000
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10000.002.patch, HDFS-5217.000.patch, 
 HDFS-5217.001.patch


 Currently in a secured HDFS cluster, a 401 error is returned when clicking the 
 NameNode Logs link.
 It looks like the cause of the issue is that the httpServer does not correctly 
 set the security handler and the user realm currently, which causes 
 httpRequest.getRemoteUser() (for the log URL) to return null and later be 
 overwritten to the default web name (e.g., dr.who) by the filter. Meanwhile, 
 in a secured cluster the log URL requires the HTTP user to be an 
 administrator. That's why we see the 401 error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9516) Enable spnego filters only if kerberos is enabled

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9516:
-
Labels: BB2015-05-TBR  (was: )

 Enable spnego filters only if kerberos is enabled
 -

 Key: HADOOP-9516
 URL: https://issues.apache.org/jira/browse/HADOOP-9516
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9516.patch


 Spnego filters are currently enabled if security is enabled - which is 
 predicated on security=kerberos.  With the advent of the PLAIN authentication 
 method, the filters should only be enabled if kerberos is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9426) Hadoop should expose Jar location utilities on its public API

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9426:
-
Labels: BB2015-05-TBR  (was: )

 Hadoop should expose Jar location utilities on its public API
 -

 Key: HADOOP-9426
 URL: https://issues.apache.org/jira/browse/HADOOP-9426
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0, 2.0.0-alpha
Reporter: Nick Dimiduk
  Labels: BB2015-05-TBR
 Attachments: 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch, 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch


 The facilities behind JobConf#setJarByClass and the JarFinder utility in test 
 are both generally useful. As the core platform, these should be published as 
 part of the public API. In addition to HBase, they are probably useful for 
 Pig and Hive as well. See also HBASE-2588, HBASE-5317, HBASE-8140.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9733) a node's stack trace output to log file should be controllable

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9733:
-
Labels: BB2015-05-TBR  (was: )

 a node's stack trace output to log file should be controllable
 --

 Key: HADOOP-9733
 URL: https://issues.apache.org/jira/browse/HADOOP-9733
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9733.patch


 We can confirm a node's stack trace when we access the node's web 
 interface /stacks.
 The stack trace is output not only to the browser but also to the node's log 
 file.
 Considering the cluster's management policy (e.g. log monitoring), this should 
 be controllable.
 In the current implementation, the stack trace output is controlled by the 
 HttpServer class. If we change its log level, other messages from HttpServer 
 are suppressed as well.
 So we should be able to control writing the stack trace to the log file 
 separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10018) TestUserGroupInformation throws NPE when HADOOP_HOME is not set

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10018:
--
Labels: BB2015-05-TBR  (was: )

 TestUserGroupInformation throws NPE when HADOOP_HOME is not set
 ---

 Key: HADOOP-10018
 URL: https://issues.apache.org/jira/browse/HADOOP-10018
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10018.000.patch


 TestUserGroupInformation throws NPE in System.setProperty() when HADOOP_HOME 
 is not set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8830) org.apache.hadoop.security.authentication.server.AuthenticationFilter might be called twice, causing kerberos replay errors

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8830:
-
Labels: BB2015-05-TBR  (was: )

 org.apache.hadoop.security.authentication.server.AuthenticationFilter might 
 be called twice, causing kerberos replay errors
 ---

 Key: HADOOP-8830
 URL: https://issues.apache.org/jira/browse/HADOOP-8830
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha, 2.1.0-beta, 2.1.1-beta, 2.2.0
Reporter: Moritz Moeller
Assignee: Omkar Vinit Joshi
Priority: Critical
  Labels: BB2015-05-TBR
 Attachments: HADOOP-8830.20131026.1.patch, 
 HADOOP-8830.20131027.1.patch


 AuthenticationFilter.doFilter is called twice (not sure if that is 
 intentional or not).
 The second time it is called, the ServletRequest is already authenticated, 
 i.e. httpRequest.getRemoteUser() returns non-null info.
 If the kerberos authentication is triggered a second time it'll return a 
 replay attack exception.
 I solved this by adding an {{if (httpRequest.getRemoteUser() == null)}} check 
 at the very beginning of doFilter.
 Alternatively one can set an attribute on the request, or figure out why 
 doFilter is called twice.
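
 A minimal sketch of that workaround as a guard at the top of a servlet 
 filter's doFilter (illustrative only, not the actual AuthenticationFilter 
 code or the committed fix):
 {code}
 import java.io.IOException;
 import javax.servlet.FilterChain;
 import javax.servlet.ServletException;
 import javax.servlet.ServletRequest;
 import javax.servlet.ServletResponse;
 import javax.servlet.http.HttpServletRequest;

 public class GuardedAuthFilterSketch {
   public void doFilter(ServletRequest request, ServletResponse response,
       FilterChain chain) throws IOException, ServletException {
     HttpServletRequest httpRequest = (HttpServletRequest) request;
     if (httpRequest.getRemoteUser() != null) {
       // Already authenticated on a previous pass; skip kerberos so it is
       // not triggered twice, and continue down the chain.
       chain.doFilter(request, response);
       return;
     }
     // ... perform the SPNEGO/kerberos authentication here ...
     chain.doFilter(request, response);
   }
 }
 {code}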



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9512) Add Hadoop-Vaidya to branch2

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9512:
-
Labels: BB2015-05-TBR  (was: )

 Add Hadoop-Vaidya to branch2
 

 Key: HADOOP-9512
 URL: https://issues.apache.org/jira/browse/HADOOP-9512
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Affects Versions: 2.0.4-alpha
Reporter: Nemon Lou
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9512.patch, HADOOP-9512.patch, HADOOP-9512.patch


 Hadoop-Vaidya exists in the hadoop 1.0 series and we need to add it to hadoop 
 branch-2.
 The only changes needed to support Hadoop 2 are the job history parser, job 
 counters, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8425) Upgrade commons-math version to 2.2

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8425:
-
Labels: BB2015-05-TBR  (was: )

 Upgrade commons-math version to 2.2
 ---

 Key: HADOOP-8425
 URL: https://issues.apache.org/jira/browse/HADOOP-8425
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 1.0.3, 0.23.1, 1.2.1
Reporter: Luke Lu
Assignee: Yu Li
  Labels: BB2015-05-TBR
 Attachments: hadoop-1.2.x-HADOOP-8425.patch


 From the commons-math 2.2 release note: "This is primarily a maintenance 
 release, but it also includes new features and enhancements. Users of version 
 2.1 are encouraged to upgrade to 2.2, as this release includes some important 
 bug fixes." Some downstream projects also need some new features in 2.2. Until 
 we have a clear user container story, upgrading the dependency in Hadoop core 
 is the most painless solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9925) Remove groups static and testing related codes out of UGI class

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9925:
-
Labels: BB2015-05-TBR  (was: )

 Remove groups static and testing related codes out of UGI class
 ---

 Key: HADOOP-9925
 URL: https://issues.apache.org/jira/browse/HADOOP-9925
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9925-v2.patch, HADOOP-9925-v2.patch, 
 HADOOP-9925.patch


 This refactors the UGI class a bit, moving the static groups field and 
 testing-related code out of it. The testing-oriented APIs in the UGI class are 
 deprecated, and a new testing support class with equivalent APIs is provided.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9424) The hadoop jar invocation should include the passed jar on the classpath as a whole

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9424:
-
Labels: BB2015-05-TBR  (was: )

 The hadoop jar invocation should include the passed jar on the classpath as 
 a whole
 -

 Key: HADOOP-9424
 URL: https://issues.apache.org/jira/browse/HADOOP-9424
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9424.patch


 When you have a case such as this:
 {{X.jar -> Classes = Main, Foo}}
 {{Y.jar -> Classes = Bar}}
 With implementation details such as:
 * Main references Bar and invokes a public, static method on it.
 * Bar does a class lookup to find Foo ({{Class.forName("Foo")}}).
 Then when you do a {{HADOOP_CLASSPATH=Y.jar hadoop jar X.jar Main}}, 
 Bar's method fails with a ClassNotFoundException because of the way RunJar 
 runs.
 RunJar extracts the passed jar and includes its contents on the ClassLoader 
 of its current thread, but the {{Class.forName(…)}} call from another class 
 does not check that class loader and hence cannot find the class, as it is not 
 on any classpath it is aware of.
 The hadoop jar script should ideally add the passed jar argument to 
 the CLASSPATH before RunJar is invoked, for the above case to pass.
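 A minimal sketch of the failure mode, using the hypothetical class names from 
 the description:
 {code}
 // Lives in Y.jar, compiled without anything from X.jar.
 public class Bar {
   public static void lookup() throws ClassNotFoundException {
     // Uses Bar's defining class loader, which only sees Y.jar and the
     // system classpath -- not the loader RunJar built for X.jar's contents,
     // so this throws ClassNotFoundException even though Main is running.
     Class.forName("Foo");
   }
 }
 {code}
 Putting X.jar on the CLASSPATH before RunJar starts would make Foo visible to 
 the system class loader, which this Class.forName call consults.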



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9995) Consistent log severity level guards and statements

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9995:
-
Labels: BB2015-05-TBR  (was: )

 Consistent log severity level guards and statements 
 

 Key: HADOOP-9995
 URL: https://issues.apache.org/jira/browse/HADOOP-9995
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jackie Chang
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9995.patch


 Developers use logs to do in-house debugging. These log statements are later 
 demoted to less severe levels and usually are guarded by their matching 
 severity levels. However, we do see inconsistencies in trunk. A log statement 
 like 
 {code}
 if (LOG.isDebugEnabled()) {
   LOG.info("Assigned container (" + allocated + ") ..."); // rest of message elided
 }
 {code}
 doesn't make much sense, because the guard is DEBUG but the statement logs at 
 INFO level, so the message is only printed when DEBUG is enabled. Previous 
 issues have tried to correct this inconsistency. I am proposing a 
 comprehensive correction over trunk.
 Doug Cutting pointed it out in HADOOP-312: 
 https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
 HDFS-1611 also corrected this inconsistency.
 This could have been avoided by switching from log4j to slf4j's {} format 
 like CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
 code and slightly higher performance.
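 For comparison, a minimal slf4j-style sketch (the class name is illustrative):
 {code}
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class Slf4jStyle {
   private static final Logger LOG = LoggerFactory.getLogger(Slf4jStyle.class);

   void assign(Object allocated) {
     // No isDebugEnabled() guard needed: the message is only formatted
     // when DEBUG is actually enabled, and the guard can't disagree with the level.
     LOG.debug("Assigned container ({})", allocated);
   }
 }
 {code}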



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9631:
-
Labels: BB2015-05-TBR  (was: )

 ViewFs should use underlying FileSystem's server side defaults
 --

 Key: HADOOP-9631
 URL: https://issues.apache.org/jira/browse/HADOOP-9631
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, viewfs
Affects Versions: 2.0.4-alpha
Reporter: Lohit Vijayarenu
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
 HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java


 On a cluster with ViewFS as the default FileSystem, creating files using 
 FileContext will always result in a replication factor of 1, instead of the 
 underlying filesystem's default (e.g. HDFS's).
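 A hedged repro sketch, assuming viewfs is the default filesystem and /mnt/data 
 is a mount point backed by HDFS:
 {code}
 import java.util.EnumSet;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.Path;

 public class ViewFsReplicationRepro {
   public static void main(String[] args) throws Exception {
     FileContext fc = FileContext.getFileContext(new Configuration());
     Path p = new Path("/mnt/data/repro.txt");
     fc.create(p, EnumSet.of(CreateFlag.CREATE)).close();
     // Expected: the backing HDFS default (typically 3); observed: 1.
     System.out.println("replication = " + fc.getFileStatus(p).getReplication());
   }
 }
 {code}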



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7614) Reloading configuration when using inputstream resources results in org.xml.sax.SAXParseException

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7614:
-
Labels: BB2015-05-TBR  (was: )

 Reloading configuration when using inputstream resources results in 
 org.xml.sax.SAXParseException
 -

 Key: HADOOP-7614
 URL: https://issues.apache.org/jira/browse/HADOOP-7614
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.21.0
Reporter: Ferdy Galema
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-7614-v1.patch, HADOOP-7614-v2.patch


 When using an inputstream as a resource for configuration, reloading this 
 configuration will throw the following exception:
 Exception in thread "main" java.lang.RuntimeException: 
 org.xml.sax.SAXParseException: Premature end of file.
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1576)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1445)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1381)
   at org.apache.hadoop.conf.Configuration.get(Configuration.java:569)
 ...
 Caused by: org.xml.sax.SAXParseException: Premature end of file.
   at 
 com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:249)
   at 
 com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
   at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:124)
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1504)
   ... 4 more
 To reproduce, see the following test code:
 {code}
 Configuration conf = new Configuration();
 ByteArrayInputStream bais =
     new ByteArrayInputStream("<configuration></configuration>".getBytes());
 conf.addResource(bais);
 System.out.println(conf.get("blah"));
 conf.addResource("core-site.xml"); // just add a named resource; doesn't matter which one
 System.out.println(conf.get("blah"));
 {code}
 Allowing inputstream resources is flexible, but in cases such as this it can 
 lead to difficult-to-debug problems.
 What do you think is the best solution? We could:
 A) reset the inputstream after it is read instead of closing it (but what to 
 do when the stream does not support marking?)
 B) leave it up to the client (for example, make sure you implement close() so 
 that it resets the stream)
 C) when reading the inputstream for the first time, cache or wrap the 
 contents somehow so that it can be read multiple times (let's at least 
 document it)
 D) remove the inputstream method altogether
 E) something else?
 For now I have attached a patch for solution A.
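 For illustration, a minimal sketch of solution A, assuming the stream supports 
 mark/reset (the helper name is illustrative, not the attached patch):
 {code}
 import java.io.IOException;
 import java.io.InputStream;

 final class ReusableResource {
   static void parseReusably(InputStream in) throws IOException {
     if (!in.markSupported()) {
       throw new IOException("stream does not support mark/reset");
     }
     in.mark(Integer.MAX_VALUE);
     try {
       // ... hand the stream to the XML parser here ...
     } finally {
       in.reset(); // keep the resource readable for a future reload
     }
   }
 }
 {code}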



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9823) Make ReconfigurableServlet compatible with chrome/firefox

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9823:
-
Labels: BB2015-05-TBR  (was: )

 Make ReconfigurableServlet compatible with chrome/firefox
 -

 Key: HADOOP-9823
 URL: https://issues.apache.org/jira/browse/HADOOP-9823
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Zesheng Wu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 9823.patch


 The current implementation doesn't set the content type, so the http server 
 returns the default 'text/plain'; this results in chrome/firefox 
 displaying the servlet page as a plain text page.
 This is a very simple change and doesn't need a unit test.
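 A minimal sketch of the fix; the servlet body is illustrative, not the actual 
 ReconfigurableServlet:
 {code}
 import java.io.IOException;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;

 public class ContentTypeSketch extends HttpServlet {
   @Override
   protected void doGet(HttpServletRequest req, HttpServletResponse resp)
       throws IOException {
     // Without this, the server's default of text/plain is used and browsers
     // render the markup as plain text.
     resp.setContentType("text/html; charset=UTF-8");
     resp.getWriter().println("<html><body>...</body></html>");
   }
 }
 {code}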



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9840) Improve User class for UGI and decouple it from Kerberos

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9840:
-
Labels: BB2015-05-TBR Rhino  (was: Rhino)

 Improve User class for UGI and decouple it from Kerberos
 

 Key: HADOOP-9840
 URL: https://issues.apache.org/jira/browse/HADOOP-9840
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
  Labels: BB2015-05-TBR, Rhino
 Attachments: HADOOP-9840-v2.patch, HADOOP-9840.patch, 
 HADOOP-9840.patch


 As discussed in HADOOP-9797, it would be better to improve UGI incrementally. 
 This JIRA is opened to improve the User class to:
 * Make it extensible as a base class, so it can have subclasses like 
 SimpleUser for Simple authn, KerberosUser for Kerberos authn, 
 IdentityTokenUser for TokenAuth (in future), etc. (see the sketch below)
 * Decouple it from Kerberos.
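 A rough sketch of the intended shape; all class and method names here are 
 illustrative and not taken from the attached patch:
 {code}
 import java.security.Principal;

 public abstract class BaseUser implements Principal {
   private final String name;
   protected BaseUser(String name) { this.name = name; }
   @Override public String getName() { return name; }
 }

 class SimpleUser extends BaseUser {
   SimpleUser(String name) { super(name); }
 }

 // Kerberos specifics live only in the subclass, not the base class.
 class KerberosUser extends BaseUser {
   private final String realm;
   KerberosUser(String principal, String realm) { super(principal); this.realm = realm; }
   String getRealm() { return realm; }
 }
 {code}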



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7573) hadoop should log configuration reads

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7573:
-
Labels: BB2015-05-TBR  (was: )

 hadoop should log configuration reads
 -

 Key: HADOOP-7573
 URL: https://issues.apache.org/jira/browse/HADOOP-7573
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 0.20.203.0
Reporter: Ari Rabkin
Assignee: Ari Rabkin
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-7573.patch, HADOOP-7573.patch, HADOOP-7573.patch, 
 HADOOP-7573.patch


 For debugging, it would often be valuable to know which configuration options 
 ever got read out of the Configuration into the rest of the program -- an 
 unread option can't have caused the problem. This patch logs the first time each 
 option is read.
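 The idea in a minimal sketch (not the attached patch; the logging call is 
 illustrative):
 {code}
 import java.util.Collections;
 import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;

 public class ReadLoggingConfig {
   private final Set<String> seen =
       Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

   // Called from the Configuration getter once the value is resolved.
   String logFirstRead(String key, String value) {
     if (seen.add(key)) { // true only on the first read of this key
       System.err.println("config read: " + key);
     }
     return value;
   }
 }
 {code}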



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9284) Authentication method is wrong if no TGT is present

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9284:
-
Labels: BB2015-05-TBR  (was: )

 Authentication method is wrong if no TGT is present
 ---

 Key: HADOOP-9284
 URL: https://issues.apache.org/jira/browse/HADOOP-9284
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9284.patch, HADOOP-9284.patch


 If security is enabled, {{UGI.getLoginUser()}} will attempt an os-specific 
 login followed by looking for a TGT in the ticket cache.  If no TGT is found, 
 the UGI's authentication method is still set as KERBEROS instead of SIMPLE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9930) Desync AbstractDelegationTokenSecretManager

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9930:
-
Labels: BB2015-05-TBR  (was: )

 Desync AbstractDelegationTokenSecretManager
 ---

 Key: HADOOP-9930
 URL: https://issues.apache.org/jira/browse/HADOOP-9930
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9930.2.patch, HADOOP-9930.patch


 The ADTSM is heavily synchronized.  The result is that verifying, creating, 
 renewing, and canceling tokens are all unnecessarily serialized.  The only 
 operations that should be serialized are per-token renew and cancel operations.
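 One possible shape, as a hedged sketch: serialize on a per-token lock instead 
 of the whole manager (names are illustrative):
 {code}
 import java.util.concurrent.ConcurrentHashMap;

 public class PerTokenLocks<K> {
   private final ConcurrentHashMap<K, Object> locks = new ConcurrentHashMap<K, Object>();

   void renew(K tokenId) {
     Object lock = locks.get(tokenId);
     if (lock == null) {
       Object fresh = new Object();
       Object prev = locks.putIfAbsent(tokenId, fresh);
       lock = (prev != null) ? prev : fresh;
     }
     synchronized (lock) {
       // ... renew just this token; operations on other tokens proceed concurrently ...
     }
   }
 }
 {code}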



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9700) Snapshot support for distcp

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9700:
-
Labels: BB2015-05-TBR  (was: )

 Snapshot support for distcp
 ---

 Key: HADOOP-9700
 URL: https://issues.apache.org/jira/browse/HADOOP-9700
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools/distcp
Reporter: Binglin Chang
Assignee: Binglin Chang
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9700-demo.patch


 Add snapshot incremental copy ability to distcp, so we can do iterative 
 consistent backup between hadoop clusters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10108) Add support for kerberos delegation to hadoop-auth

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10108:
--
Labels: BB2015-05-TBR  (was: )

 Add support for kerberos delegation to hadoop-auth
 --

 Key: HADOOP-10108
 URL: https://issues.apache.org/jira/browse/HADOOP-10108
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0, 2.2.0
Reporter: Joey Echeverria
Assignee: Joey Echeverria
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10108-1.patch


 Most services that need to perform Hadoop operations on behalf of an end-user 
 make use of the built-in ability to configure trusted services and use 
 Hadoop-specific delegation tokens. However, some web-applications need 
 delegated access to both Hadoop and other kerberos-authenticated services. 
 It'd be useful for these applications to use kerberos delegation when using 
 hadoop-auth's SPNEGO libraries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11901) BytesWritable supports only up to ~700MB (instead of 2G) due to integer overflow.

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11901:
--
Labels: BB2015-05-TBR  (was: )

 BytesWritable supports only up to ~700MB (instead of 2G) due to integer 
 overflow.
 -

 Key: HADOOP-11901
 URL: https://issues.apache.org/jira/browse/HADOOP-11901
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Reynold Xin
Assignee: Reynold Xin
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11901.diff


 BytesWritable.setSize increases the buffer size by 1.5x each time (* 3 / 2). 
 This is an unsafe operation, since it restricts the max size to ~700MB: 
 700MB * 3 > 2GB, which overflows a signed 32-bit int.
 I didn't write a test case for this because in order to trigger it 
 I'd need to allocate around 700MB, which is pretty expensive to do in a unit 
 test. Note that I didn't throw any exception in the integer-overflow case, as 
 I didn't want to change that behavior (callers might expect a 
 java.lang.NegativeArraySizeException).
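 The overflow in a self-contained sketch (the numbers are illustrative):
 {code}
 public class GrowthOverflow {
   public static void main(String[] args) {
     int size = 750_000_000;               // ~715 MB
     int bad = size * 3 / 2;               // 2,250,000,000 overflows int: prints a negative size
     long safe = Math.min((long) size * 3 / 2, Integer.MAX_VALUE);
     System.out.println("bad=" + bad + " safe=" + safe);
   }
 }
 {code}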



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11883) Checkstyle Results are Different Between Command Line and Eclipse

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11883:
--
Labels: BB2015-05-TBR build  (was: build)

 Checkstyle Results are Different Between Command Line and Eclipse
 -

 Key: HADOOP-11883
 URL: https://issues.apache.org/jira/browse/HADOOP-11883
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
  Labels: BB2015-05-TBR, build
 Attachments: HADOOP-11883.1.patch, HADOOP-11883.2.patch, 
 HADOOP-11883.3.patch


 If I run the checkstyle plugin from within eclipse, I want it to apply the same 
 rules as when run from the command line.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11923) test-patch whitespace checker doesn't flag new files

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11923:
--
Labels: BB2015-05-TBR  (was: )

 test-patch whitespace checker doesn't flag new files
 

 Key: HADOOP-11923
 URL: https://issues.apache.org/jira/browse/HADOOP-11923
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey
Priority: Critical
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11923.patch


 The whitespace plugin for test-patch doesn't examine new files. So when a patch 
 comes in with trailing whitespace in new files, it doesn't flag it as a 
 problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11910) add command line arg to test-patch for the regex to use for evaluating if something is a patch file.

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11910:
--
Labels: BB2015-05-TBR  (was: )

 add command line arg to test-patch for the regex to use for evaluating if 
 something is a patch file.
 

 Key: HADOOP-11910
 URL: https://issues.apache.org/jira/browse/HADOOP-11910
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11910.1.patch


 Right now, before test-patch can be used on a submission, it checks whether the 
 file name ends with .patch. We should make this regex configurable so that 
 test-patch can be used with other projects that might want to be more 
 permissive (e.g. .patch.txt).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11918:
--
Labels: BB2015-05-TBR s3  (was: s3)

 Listing an empty s3a root directory throws FileNotFound.
 

 Key: HADOOP-11918
 URL: https://issues.apache.org/jira/browse/HADOOP-11918
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR, s3
 Attachments: HADOOP-11918.000.patch, HADOOP-11918.001.patch


 With an empty S3 bucket, run:
 {code}
 $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 ls: `s3a://hdfs-s3a-test/': No such file or directory
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2015-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529590#comment-14529590
 ] 

Hadoop QA commented on HADOOP-11820:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:blue}0{color} | shellcheck |   0m 15s | Shellcheck was not available. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| | |   0m 20s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12730651/1.patch |
| Optional Tests | shellcheck |
| git revision | trunk / 4da8490 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6499/artifact/patchprocess/whitespace.txt
 |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6499/console |


This message was automatically generated.

 aw jira testing, ignore
 ---

 Key: HADOOP-11820
 URL: https://issues.apache.org/jira/browse/HADOOP-11820
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: 1.patch, HADOOP-11590.patch, HADOOP-11590.patch, 
 HADOOP-11877 .patch, HADOOP-11923.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7418) support for multiple slashes in the path separator

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7418:
-
Labels: BB2015-05-TBR newbie  (was: newbie)

 support for multiple slashes in the path separator
 --

 Key: HADOOP-7418
 URL: https://issues.apache.org/jira/browse/HADOOP-7418
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
 Environment: Linux running JDK 1.6
Reporter: Sudharsan Sampath
Assignee: Andrew Look
Priority: Minor
  Labels: BB2015-05-TBR, newbie
 Attachments: HADOOP-7418--20110719.txt, HADOOP-7418.txt, 
 HADOOP-7418.txt, HDFS-1460.txt, HDFS-1460.txt


 The parsing of the input path string to identify the URI authority conflicts 
 with file system paths. For instance, the following is a valid path in 
 both the linux file system and hdfs:
 //user/directory1//directory2
 While this works perfectly fine on the command line for manipulating hdfs, 
 the same fails when specified as the input path for a mapper class, with the 
 following exception:
 Exception in thread "main" java.net.UnknownHostException: unknown host: user
 at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:195)
 because the org.apache.hadoop.fs.Path class assumes the string that follows the 
 '//' to be a URI authority.
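 A self-contained repro sketch of the parsing behavior:
 {code}
 import org.apache.hadoop.fs.Path;

 public class DoubleSlashRepro {
   public static void main(String[] args) {
     Path p = new Path("//user/directory1//directory2");
     // Expected to print "user": the leading double slash makes Path
     // treat the first component as a URI authority, i.e. a host name.
     System.out.println("authority = " + p.toUri().getAuthority());
     System.out.println("path = " + p.toUri().getPath());
   }
 }
 {code}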



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11435) refine HttpServer2 port retry detail

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11435:
--
Labels: BB2015-05-TBR  (was: )

 refine HttpServer2 port retry detail
 

 Key: HADOOP-11435
 URL: https://issues.apache.org/jira/browse/HADOOP-11435
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Liang Xie
Assignee: Liang Xie
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11435-001.txt


 Current port retry could walk past the max port number.  The style is:
 {code}
 while (true) {
   ++port;
   sleep(100);
 }
 {code}
 Let's ensure the retry port is always picked from a valid range.
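 A hedged sketch of a bounded retry loop; probing via ServerSocket and the 
 constants are illustrative, not the HttpServer2 code:
 {code}
 import java.io.IOException;
 import java.net.BindException;
 import java.net.ServerSocket;

 public class BoundedPortRetry {
   static final int MAX_PORT = 65535;

   static int findFreePort(int basePort) throws IOException, InterruptedException {
     for (int port = basePort; port <= MAX_PORT; port++) {
       try (ServerSocket probe = new ServerSocket(port)) {
         return port;                  // bound successfully
       } catch (BindException busy) {
         Thread.sleep(100);            // back off, then try the next port
       }
     }
     throw new BindException("no free port in [" + basePort + ", " + MAX_PORT + "]");
   }
 }
 {code}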



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11715:
--
Labels: BB2015-05-TBR  (was: )

 azureFs::getFileStatus doesn't check the file system scheme and thus could 
 throw a misleading exception. 
 -

 Key: HADOOP-11715
 URL: https://issues.apache.org/jira/browse/HADOOP-11715
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: nijel
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-11715.1.patch, HADOOP-11715.2.patch, 
 HADOOP-11715.3.patch


  azureFs::getFileStatus doesn't check the file system scheme and thus could 
 throw a misleading exception. 
 For example, it reports file-not-found instead of wrong-fs for an hdfs path:
 Caused by: java.io.FileNotFoundException: 
 hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split:
  No such file or directory.
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625)
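 A minimal sketch of the missing guard; the message text is illustrative:
 {code}
 import java.io.IOException;
 import java.net.URI;
 import org.apache.hadoop.fs.Path;

 public class SchemeCheck {
   static void checkScheme(Path p, URI fsUri) throws IOException {
     String scheme = p.toUri().getScheme();
     if (scheme != null && !scheme.equalsIgnoreCase(fsUri.getScheme())) {
       throw new IOException("Wrong FS: " + p + ", expected scheme: " + fsUri.getScheme());
     }
   }
 }
 {code}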



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11813:
--
Labels: BB2015-05-TBR newbie  (was: newbie)

 releasedocmaker.py should use today's date instead of unreleased
 

 Key: HADOOP-11813
 URL: https://issues.apache.org/jira/browse/HADOOP-11813
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Darrell Taylor
Priority: Minor
  Labels: BB2015-05-TBR, newbie
 Attachments: HADOOP-11813.001.patch, HADOOP-11813.patch


 After discussing with a few folks, it'd be more convenient if releasedocmaker 
 used the current date rather than "unreleased" when processing a version that 
 JIRA hasn't declared released.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11335) KMS ACL in meta data or database

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11335:
--
Labels: BB2015-05-TBR Security  (was: Security)

 KMS ACL in meta data or database
 

 Key: HADOOP-11335
 URL: https://issues.apache.org/jira/browse/HADOOP-11335
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: Jerry Chen
Assignee: Dian Fu
  Labels: BB2015-05-TBR, Security
 Attachments: HADOOP-11335.001.patch, HADOOP-11335.002.patch, 
 HADOOP-11335.003.patch, HADOOP-11335.004.patch, HADOOP-11335.005.patch, 
 HADOOP-11335.006.patch, HADOOP-11335.007.patch, HADOOP-11335.re-design.patch, 
 KMS ACL in metadata or database.pdf

   Original Estimate: 504h
  Remaining Estimate: 504h

 Currently Hadoop KMS implements ACLs for keys, and the per-key ACLs are 
 stored in the configuration file kms-acls.xml.
 Managing ACLs in a configuration file is not easy in enterprise 
 usage, and it creates difficulties for backup and recovery.
 It would be ideal to store the ACLs for keys in the key metadata, similar to what 
 file system ACLs do.  In this way, the backup and recovery that works on 
 keys would work for the key ACLs too.
 Moreover, with the ACLs in metadata, the ACL of each key could be 
 easily manipulated with an API or command line tool and take effect instantly.  
 This is very important for enterprise-level access control management; that 
 feature can be addressed by a separate JIRA. With the configuration file, 
 these would be hard to provide.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11594) Improve the readability of site index of documentation

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11594:
--
Labels: BB2015-05-TBR  (was: )

 Improve the readability of site index of documentation
 --

 Key: HADOOP-11594
 URL: https://issues.apache.org/jira/browse/HADOOP-11594
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11594.001.patch, HADOOP-11594.002.patch, 
 HADOOP-11594.003.patch


 * change the order of items
 * make redundant titles shorter and fit each on a single line as far as possible



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11567) Refresh HTTP Authentication secret without restarting the server

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11567:
--
Labels: BB2015-05-TBR  (was: )

 Refresh HTTP Authentication secret without restarting the server
 

 Key: HADOOP-11567
 URL: https://issues.apache.org/jira/browse/HADOOP-11567
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Benoy Antony
Assignee: Benoy Antony
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11567-001.patch


 The _AuthenticationFilter_ uses the secret read from a file specified via 
 hadoop.http.authentication.signature.secret.file to sign the cookie 
 containing user authentication information.
 The secret is read only during initialization, and hence a restart is needed to 
 update it.
 ZKSignerSecretProvider can be used to rotate the secrets without restarting 
 the servers, but it needs a zookeeper setup.
 This jira is to refresh the secret by updating the file.
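 One possible approach, as a hypothetical sketch (not the attached patch): 
 re-read the secret whenever the file's modification time changes:
 {code}
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;

 public class RefreshableFileSecret {
   private final Path file;
   private long lastModified = -1;
   private byte[] secret;

   public RefreshableFileSecret(String path) { this.file = Paths.get(path); }

   public synchronized byte[] getSecret() throws IOException {
     long mtime = Files.getLastModifiedTime(file).toMillis();
     if (mtime != lastModified) {   // file was updated: pick up the new secret
       secret = Files.readAllBytes(file);
       lastModified = mtime;
     }
     return secret;
   }
 }
 {code}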



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9984:
-
Labels: BB2015-05-TBR  (was: )

 FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by 
 default
 --

 Key: HADOOP-9984
 URL: https://issues.apache.org/jira/browse/HADOOP-9984
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Critical
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, 
 HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, 
 HADOOP-9984.010.patch, HADOOP-9984.011.patch, HADOOP-9984.012.patch, 
 HADOOP-9984.013.patch, HADOOP-9984.014.patch, HADOOP-9984.015.patch


 During the process of adding symlink support to FileSystem, we realized that 
 many existing HDFS clients would be broken by listStatus and globStatus 
 returning symlinks.  One example is applications that assume that 
 !FileStatus#isFile implies that the inode is a directory.  As we discussed in 
 HADOOP-9972 and HADOOP-9912, we should default these APIs to returning 
 resolved paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11628:
--
Labels: BB2015-05-TBR jdk8  (was: jdk8)

 SPNEGO auth does not work with CNAMEs in JDK8
 -

 Key: HADOOP-11628
 URL: https://issues.apache.org/jira/browse/HADOOP-11628
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
  Labels: BB2015-05-TBR, jdk8
 Attachments: HADOOP-11628.patch


 Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
 principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
 user-friendly CNAMEs for services.
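 A sketch of doing the canonicalization explicitly, assuming DNS resolves the 
 CNAME; the host name is hypothetical:
 {code}
 import java.net.InetAddress;

 public class CanonicalizeHost {
   public static void main(String[] args) throws Exception {
     String host = "service.example.com"; // hypothetical CNAME
     // JDK8's GSSName no longer does this implicitly, so canonicalize
     // before building the SPNEGO service principal.
     String canonical = InetAddress.getByName(host).getCanonicalHostName();
     System.out.println("HTTP/" + canonical);
   }
 }
 {code}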



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-3619) DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one of the interfaces is IPv6

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-3619:
-
Labels: BB2015-05-TBR ipv6 patch  (was: ipv6 patch)

 DNS.getHosts triggers an ArrayIndexOutOfBoundsException in reverseDNS if one 
 of the interfaces is IPv6
 --

 Key: HADOOP-3619
 URL: https://issues.apache.org/jira/browse/HADOOP-3619
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Reporter: Steve Loughran
Assignee: Dr. Martin Menzel
  Labels: BB2015-05-TBR, ipv6, patch
 Attachments: HADOOP-3619-v2.patch


 reverseDNS tries to split a host address string by ".", and so fails if ":" 
 is the separator, as it is in IPv6. When it tries to access the parts of the 
 address, a stack trace is seen.
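 A self-contained repro of the split assumption:
 {code}
 public class ReverseDnsSplit {
   public static void main(String[] args) {
     String[] v4 = "10.1.2.3".split("\\.");  // length 4, as reverseDNS expects
     String[] v6 = "fe80::1".split("\\.");   // length 1: indexing parts[1..3]
                                             // throws ArrayIndexOutOfBoundsException
     System.out.println(v4.length + " vs " + v6.length);
   }
 }
 {code}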



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11885:
--
Labels: BB2015-05-TBR  (was: )

 hadoop-dist dist-layout-stitching.sh does not work with dash
 

 Key: HADOOP-11885
 URL: https://issues.apache.org/jira/browse/HADOOP-11885
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
  Labels: BB2015-05-TBR
 Attachments: hadoop-11885.001.patch, hadoop-11885.002.patch


 Saw this while building the EC branch; pretty sure it'll repro on trunk too.
 {noformat}
  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
  [exec] $ copy 
 /home/andrew/dev/hadoop/hdfs-7285/hadoop-common-project/hadoop-nfs/target/hadoop-nfs-3.0.0-SNAPSHOT
  .
  [exec] $ copy 
 /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT
  .
  [exec] $ copy 
 /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs-nfs/target/hadoop-hdfs-nfs-3.0.0-SNAPSHOT
  .
  [exec] $ copy 
 /home/andrew/dev/hadoop/hdfs-7285/hadoop-yarn-project/target/hadoop-yarn-project-3.0.0-SNAPSHOT
  .
  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
  [exec] $ copy 
 /home/andrew/dev/hadoop/hdfs-7285/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT
  .
  [exec] $ copy 
 /home/andrew/dev/hadoop/hdfs-7285/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT
  .
  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11229) JobStoryProducer is not closed upon return from Gridmix#setupDistCacheEmulation()

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11229:
--
Labels: BB2015-05-TBR  (was: )

 JobStoryProducer is not closed upon return from 
 Gridmix#setupDistCacheEmulation()
 -

 Key: HADOOP-11229
 URL: https://issues.apache.org/jira/browse/HADOOP-11229
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: skrho
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11229_001.patch, HADOOP-11229_002.patch


 Here is related code:
 {code}
   JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
   exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
 {code}
 jsp should be closed upon return from setupDistCacheEmulation().
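 A sketch of the fix, assuming JobStoryProducer exposes close():
 {code}
   JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
   try {
     exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
   } finally {
     jsp.close();  // release the trace reader even when setup fails
   }
 {code}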



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11677) Missing secure session attributes for log and static contexts

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11677:
--
Labels: BB2015-05-TBR  (was: )

 Missing secure session attributes for log and static contexts
 -

 Key: HADOOP-11677
 URL: https://issues.apache.org/jira/browse/HADOOP-11677
 Project: Hadoop Common
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
  Labels: BB2015-05-TBR
 Attachments: 001-HADOOP-11677.patch, HADOOP-11677-2.patch, 
 HADOOP-11677.1.patch


 In HTTPServer2.java for the default context the secure attributes are set.
 {code}
 SessionManager sm = webAppContext.getSessionHandler().getSessionManager();
 if (sm instanceof AbstractSessionManager) {
   AbstractSessionManager asm = (AbstractSessionManager)sm;
   asm.setHttpOnly(true);
   asm.setSecureCookies(true);
 }
 {code}
 But when the contexts are created for /logs and /static, new contexts are 
 created and the session handler is assigned as null. 
 Here also the secure attributes need to be set.
 Was this left out intentionally? Please share your thoughts.
 Background: 
 trying to add a login action for the HTTP pages. When a security test tool 
 is then used, it reports errors for these 2 urls (/logs and /static).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8728) Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8728:
-
Labels: BB2015-05-TBR  (was: )

 Display (fs -text) shouldn't hard-depend on Writable serialized sequence 
 files.
 ---

 Key: HADOOP-8728
 URL: https://issues.apache.org/jira/browse/HADOOP-8728
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Harsh J
Assignee: Akira AJISAKA
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-8728-002.patch, HADOOP-8728-003.patch, 
 HADOOP-8728.patch


 The Display command (fs -text) currently reads only Writable-based 
 SequenceFiles. This isn't necessary to do, and prevents reading 
 non-Writable-based serialization in SequenceFiles from the shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11772:
--
Labels: BB2015-05-TBR  (was: )

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
 HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, after-ipc-fix.png, dfs-sync-ipc.png, 
 sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!
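 For illustration only, a lock-free cache shape (ref-counting elided); this is 
 a sketch, not the attached patch:
 {code}
 import java.util.concurrent.ConcurrentHashMap;
 import javax.net.SocketFactory;

 public class NonBlockingClientCache<C> {
   private final ConcurrentHashMap<SocketFactory, C> clients =
       new ConcurrentHashMap<SocketFactory, C>();

   interface Maker<C> { C make(); }

   C getClient(SocketFactory factory, Maker<C> maker) {
     C c = clients.get(factory);
     if (c == null) {
       C fresh = maker.make();
       C prev = clients.putIfAbsent(factory, fresh);
       c = (prev != null) ? prev : fresh;  // lose the creation race gracefully
     }
     return c;
   }
 }
 {code}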



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11050) hconf.c: fix bug where we would sometimes not try to load multiple XML files from the same path

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11050:
--
Labels: BB2015-05-TBR  (was: )

 hconf.c: fix bug where we would sometimes not try to load multiple XML files 
 from the same path
 ---

 Key: HADOOP-11050
 URL: https://issues.apache.org/jira/browse/HADOOP-11050
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
  Labels: BB2015-05-TBR
 Fix For: HADOOP-10388

 Attachments: 001-HADOOP-11050.patch


 hconf.c: fix bug where we would sometimes not try to load multiple XML files 
 from the same path



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10661) Ineffective user/passsword check in FTPFileSystem#initialize()

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10661:
--
Labels: BB2015-05-TBR  (was: )

 Ineffective user/passsword check in FTPFileSystem#initialize()
 --

 Key: HADOOP-10661
 URL: https://issues.apache.org/jira/browse/HADOOP-10661
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10661.patch, HADOOP-10661.patch


 Here is related code:
 {code}
   userAndPassword = (conf.get("fs.ftp.user." + host, null) + ":" + conf
       .get("fs.ftp.password." + host, null));
   if (userAndPassword == null) {
     throw new IOException("Invalid user/passsword specified");
   }
 {code}
 The intention seems to be checking that username / password should not be 
 null.
 But due to the presence of the colon, the above check is never effective.
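 A sketch of an effective version of the check:
 {code}
   String user = conf.get("fs.ftp.user." + host, null);
   String password = conf.get("fs.ftp.password." + host, null);
   if (user == null || password == null) {
     throw new IOException("Invalid user/password specified");
   }
   String userAndPassword = user + ":" + password;
 {code}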



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10362) Closing of Reader in HadoopArchives#HArchiveInputFormat#getSplits() should check against null

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10362:
--
Labels: BB2015-05-TBR  (was: )

 Closing of Reader in HadoopArchives#HArchiveInputFormat#getSplits() should 
 check against null
 -

 Key: HADOOP-10362
 URL: https://issues.apache.org/jira/browse/HADOOP-10362
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10362.patch


 {code}
   try {
     reader = new SequenceFile.Reader(fs, src, jconf);
     // ...
   } finally {
     reader.close();
   }
 {code}
 If the Reader ctor throws an exception, close() is called on a null 
 object.
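 A null-safe close sketch; Hadoop's IOUtils.cleanup offers equivalent behavior:
 {code}
   SequenceFile.Reader reader = null;
   try {
     reader = new SequenceFile.Reader(fs, src, jconf);
     // ...
   } finally {
     if (reader != null) {   // the ctor may have thrown before assignment
       reader.close();
     }
   }
 {code}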



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11270) Seek behavior difference between NativeS3FsInputStream and DFSInputStream

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11270:
--
Labels: BB2015-05-TBR fs  (was: fs)

 Seek behavior difference between NativeS3FsInputStream and DFSInputStream
 -

 Key: HADOOP-11270
 URL: https://issues.apache.org/jira/browse/HADOOP-11270
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.5.1
Reporter: Venkata Puneet Ravuri
Assignee: Venkata Puneet Ravuri
  Labels: BB2015-05-TBR, fs
 Attachments: HADOOP-11270.02.patch, HADOOP-11270.03.patch, 
 HADOOP-11270.patch


 There is a difference in behavior when seeking within a file present
 in S3 using NativeS3FileSystem$NativeS3FsInputStream and a file present in 
 HDFS using DFSInputStream.
 If we seek to the end of the file in the NativeS3FsInputStream case, it fails 
 with "java.io.EOFException: Attempted to seek or read past the end 
 of the file". That is because a getObject request is issued on the S3 object 
 with the range start set to the length of the file.
 This is the complete exception stack:
 Caused by: java.io.EOFException: Attempted to seek or read past the end of 
 the file
 at 
 org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:462)
 at 
 org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
 at 
 org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieve(Jets3tNativeFileSystemStore.java:234)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at org.apache.hadoop.fs.s3native.$Proxy17.retrieve(Unknown Source)
 at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:205)
 at 
 org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:96)
 at 
 org.apache.hadoop.fs.BufferedFSInputStream.skip(BufferedFSInputStream.java:67)
 at java.io.DataInputStream.skipBytes(DataInputStream.java:220)
 at org.apache.hadoop.hive.ql.io.RCFile$ValueBuffer.readFields(RCFile.java:739)
 at 
 org.apache.hadoop.hive.ql.io.RCFile$Reader.currentValueBuffer(RCFile.java:1720)
 at org.apache.hadoop.hive.ql.io.RCFile$Reader.getCurrentRow(RCFile.java:1898)
 at 
 org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:149)
 at 
 org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:44)
 at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:339)
 ... 15 more
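 The difference in a hedged repro fragment (fs and path are assumed to be in 
 scope):
 {code}
   long len = fs.getFileStatus(path).getLen();
   FSDataInputStream in = fs.open(path);
   // DFSInputStream: succeeds, positioned at EOF with nothing left to read.
   // NativeS3FsInputStream: throws EOFException, because the ranged getObject
   // starts at offset == file length.
   in.seek(len);
 {code}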



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7165) listLocatedStatus(path, filter) is not redefined in FilterFs

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7165:
-
Labels: BB2015-05-TBR  (was: )

 listLocatedStatus(path, filter) is not redefined in FilterFs
 

 Key: HADOOP-7165
 URL: https://issues.apache.org/jira/browse/HADOOP-7165
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Hairong Kuang
Assignee: Hairong Kuang
  Labels: BB2015-05-TBR
 Attachments: HADOOP-7165.patch, locatedStatusFilter.patch


 listLocatedStatus(path, filter) is not redefined in FilterFs. So if a job 
 client uses a FilterFs to talk to NameNode, it does not trigger the bulk 
 location optimization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9946) NumAllSinks metrics shows lower value than NumActiveSinks

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9946:
-
Labels: BB2015-05-TBR metrics  (was: metrics)

 NumAllSinks metrics shows lower value than NumActiveSinks
 -

 Key: HADOOP-9946
 URL: https://issues.apache.org/jira/browse/HADOOP-9946
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: BB2015-05-TBR, metrics
 Attachments: HADOOP-9946, HADOOP-9946.patch


 In my hadoop cluster (trunk), the output metrics file is as follows:
 {code}
 $ less /tmp/nodemanager-metrics.out
 1377894554661 metricssystem.MetricsSystem: Context=metricssystem, 
 Hostname=hadoop, NumActiveSources=8, NumAllSources=8, NumActiveSinks=1, 
 NumAllSinks=0, Sink_fileNumOps=0, ...
 {code}
 NumAllSinks should be equal to or greater than NumActiveSinks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10289) o.a.h.u.ReflectionUtils.printThreadInfo() causes deadlock in TestHttpServer

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10289:
--
Labels: BB2015-05-TBR  (was: )

 o.a.h.u.ReflectionUtils.printThreadInfo() causes deadlock in TestHttpServer
 ---

 Key: HADOOP-10289
 URL: https://issues.apache.org/jira/browse/HADOOP-10289
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 2.3.0
 Environment: MacOS X 10.9/Java 6 1.6.0_65-b14-462
Reporter: Laurent Goujon
  Labels: BB2015-05-TBR
 Attachments: TestHttpServer.jstack, hadoop-10289.patch


 This bug is a followup to HADOOP-9964.
 ReflectionUtils.printThreadInfo is now a synchronized method. This change 
 sometimes creates a deadlock in TestHttpServer if one servlet thread 
 calling this method is waiting on the client to consume output.
 In TestHttpServer, several tests connect to the http server only to check the 
 status code, without reading the full inputstream. Depending on 
 HttpURLConnection, the deadlock scenario may or may not be triggered.
 Note that the original ticket does not explain why synchronized fixed 
 the issue. According to the attached stacktrace, the test was blocked in 
 HttpServer.stop(), waiting on worker threads to stop, which didn't happen 
 because those threads were waiting for their output to be consumed; so the 
 original issue looks very similar to what I'm experiencing.
 My proposed fix is to remove synchronized (as it seems to make the issue 
 worse) but configure HttpServer.stop() to forcibly kill threads after a 
 configurable period of time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10363) Closing of SequenceFile.Reader / SequenceFile.Writer in DistCh should check against null

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10363:
--
Labels: BB2015-05-TBR  (was: )

 Closing of SequenceFile.Reader / SequenceFile.Writer in DistCh should check 
 against null
 

 Key: HADOOP-10363
 URL: https://issues.apache.org/jira/browse/HADOOP-10363
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10363.patch


 Here is related code:
 {code}
   try {
     for (in = new SequenceFile.Reader(fs, srcs, job); in.next(key, value); ) {
       // ...
     }
   } finally {
     in.close();
   }
 {code}
 {code}
 SequenceFile.Writer opWriter = null;
 try {
   opWriter = SequenceFile.createWriter(fs, jobconf, opList, Text.class,
   FileOperation.class, SequenceFile.CompressionType.NONE);
 ...
 } finally {
   opWriter.close();
 }
 {code}
 If the ctor of the Reader / Writer throws an exception, close() is called on a 
 null object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10865) Add a Crc32 chunked verification benchmark for both direct and non-direct buffer cases

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10865:
--
Labels: BB2015-05-TBR  (was: )

 Add a Crc32 chunked verification benchmark for both direct and non-direct 
 buffer cases
 --

 Key: HADOOP-10865
 URL: https://issues.apache.org/jira/browse/HADOOP-10865
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: c10865_20140717.patch


 Currently, it is not easy to compare Crc32 chunked verification 
 implementations.  Let's add a benchmark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10287) FSOutputSummer should support any checksum size

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10287:
--
Labels: BB2015-05-TBR  (was: )

 FSOutputSummer should support any checksum size
 ---

 Key: HADOOP-10287
 URL: https://issues.apache.org/jira/browse/HADOOP-10287
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.2.0
Reporter: Laurent Goujon
  Labels: BB2015-05-TBR
 Attachments: hadoop-10287.patch


 HADOOP-9114 only fixes the case where the checksum size is 0, but doesn't 
 handle the generic case.
 FSOutputSummer should work with any checksum size (between 0 and 8, since 
 Checksum.getValue() returns a long).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10942) Globbing optimizations and regression fix

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10942:
--
Labels: BB2015-05-TBR  (was: )

 Globbing optimizations and regression fix
 -

 Key: HADOOP-10942
 URL: https://issues.apache.org/jira/browse/HADOOP-10942
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10942.patch


 When globbing was commonized to support both filesystem and filecontext, it 
 regressed a fix that prevents an intermediate glob that matches a file from 
 throwing a confusing permissions exception.  The hdfs traverse check requires 
 the exec bit which a file does not have.
 Additional optimizations to reduce rpcs actually increases them if 
 directories contain 1 item.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10778) Use NativeCrc32 only if it is faster

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10778:
--
Labels: BB2015-05-TBR  (was: )

 Use NativeCrc32 only if it is faster
 

 Key: HADOOP-10778
 URL: https://issues.apache.org/jira/browse/HADOOP-10778
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
  Labels: BB2015-05-TBR
 Attachments: c10778_20140702.patch, c10778_20140717.patch


 From the benchmark posted in [this
 comment|https://issues.apache.org/jira/browse/HDFS-6560?focusedCommentId=14044060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14044060],
 NativeCrc32 is slower than java.util.zip.CRC32 for Java 7 and above when
 bytesPerChecksum > 512.
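 One way to act on that measurement is a threshold-based selection (a sketch
 only; the 512-byte threshold comes from the benchmark claim above, the native
 fallback is stubbed, and nothing here reflects how Hadoop actually wires the
 choice up):
 {code}
 import java.util.zip.CRC32;
 import java.util.zip.Checksum;

 public class Crc32Selector {
   // JDK CRC32 is JIT-intrinsified on Java 7+, so prefer it for large chunks.
   static Checksum newCrc32(int bytesPerChecksum, boolean nativeAvailable) {
     if (!nativeAvailable || bytesPerChecksum > 512) {
       return new CRC32();
     }
     return newNativeCrc32();
   }

   // Stub for illustration only; the real native path lives in
   // org.apache.hadoop.util.NativeCrc32 and exposes a different interface.
   static Checksum newNativeCrc32() {
     return new CRC32();
   }
 }
 {code}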



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11095) How about Null check when closing inputstream object in JavaKeyStoreProvider#() ?

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11095:
--
Labels: BB2015-05-TBR  (was: )

 How about Null check when closing inputstream object in 
 JavaKeyStoreProvider#() ?
 -

 Key: HADOOP-11095
 URL: https://issues.apache.org/jira/browse/HADOOP-11095
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.5.1
Reporter: skrho
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11095_001.patch


 In the finally block:
 {code}
 InputStream is = pwdFile.openStream();
 try {
   password = IOUtils.toCharArray(is);
 } finally {
   is.close();
 }
 {code}
 How about a null check when closing the InputStream object?
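 For comparison, a defensive variant and the Java 7 try-with-resources form
 (sketches; {{pwdFile}} is assumed to be a {{java.net.URL}} as in the original):
 {code}
 // Defensive: only close if the stream was actually opened.
 InputStream is = null;
 try {
   is = pwdFile.openStream();
   password = IOUtils.toCharArray(is);
 } finally {
   if (is != null) {
     is.close();
   }
 }

 // Java 7+: try-with-resources closes the stream automatically.
 try (InputStream in = pwdFile.openStream()) {
   password = IOUtils.toCharArray(in);
 }
 {code}
 Note that in the original snippet {{openStream()}} is called outside the
 try/finally, so {{is}} cannot actually be null when {{close()}} runs; the
 guard above is purely defensive.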



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10416) For pseudo authentication, what to do if there is an expired token?

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10416:
--
Labels: BB2015-05-TBR  (was: )

 For pseudo authentication, what to do if there is an expired token?
 ---

 Key: HADOOP-10416
 URL: https://issues.apache.org/jira/browse/HADOOP-10416
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: c10416_20140321.patch, c10416_20140322.patch


 PseudoAuthenticationHandler currently gets the username only from the
 user.name parameter.  If there is an expired auth token in the request, the
 token is ignored (without returning any error back to the client).  Further,
 if anonymous is enabled, the client will be authenticated as anonymous.
 The above behavior seems undesirable, since the client does not want to be
 authenticated as anonymous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9737) JarFinder#getJar should delete the jar file upon destruction of the JVM

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9737:
-
Labels: BB2015-05-TBR  (was: )

 JarFinder#getJar should delete the jar file upon destruction of the JVM
 ---

 Key: HADOOP-9737
 URL: https://issues.apache.org/jira/browse/HADOOP-9737
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.0-alpha
Reporter: Esteban Gutierrez
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9737.patch


 Once {{JarFinder.getJar()}} is invoked by a client app, it would be really
 useful to delete the generated JAR when the JVM exits, by calling
 {{tempJar.deleteOnExit()}}. In order to preserve backwards compatibility, a
 configuration setting could be implemented, e.g.
 {{test.build.dir.purge.on.exit}}.
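 A minimal sketch of the proposal (only {{deleteOnExit()}} and the property
 name come from the report; {{conf}}, {{tmpDir}} and the wiring into
 {{JarFinder}} are assumptions):
 {code}
 File tempJar = File.createTempFile("hadoop-", ".jar", tmpDir);
 // Default false keeps today's behavior; opting in purges the jar at JVM exit.
 if (conf.getBoolean("test.build.dir.purge.on.exit", false)) {
   tempJar.deleteOnExit();
 }
 {code}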



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9655) Connection object in IPC Client can not run concurrently during connection time out

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9655:
-
Labels: BB2015-05-TBR  (was: )

 Connection object in IPC Client can not run concurrently during connection 
 time out
 ---

 Key: HADOOP-9655
 URL: https://issues.apache.org/jira/browse/HADOOP-9655
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.0.4-alpha
Reporter: Nemon Lou
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9655.patch


 When one machine powered off during a running job, the MRAppMaster found that
 tasks timed out on that host and then called stopContainer for each container
 concurrently.
 But the IPC layer did this serially: for each call, the connection timeout
 exception took a few minutes to be raised, after 45 retries, and the AM hung
 for many hours waiting for the stopContainer calls to finish.
 The jstack output shows that most threads are stuck in Connection.addCall,
 waiting for a lock object held by Connection.setupIOstreams.
 (The setupIOstreams method runs slowly because of connection timeouts during
 setupConnection.)
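 A toy model of the contention pattern described (illustrative only, not
 Hadoop's actual {{Client}} code):
 {code}
 class Connection {
   private final Object lock = new Object();

   void addCall(Object call) {
     synchronized (lock) { // blocks while setupIOstreams holds the lock
       // enqueue the call ...
     }
   }

   void setupIOstreams() {
     synchronized (lock) {
       // Connect with retries; on a powered-off host this can take minutes,
       // during which every concurrent addCall() above is stuck.
     }
   }
 }
 {code}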



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8653) FTPFileSystem rename broken

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8653:
-
Labels: BB2015-05-TBR  (was: )

 FTPFileSystem rename broken
 ---

 Key: HADOOP-8653
 URL: https://issues.apache.org/jira/browse/HADOOP-8653
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.20.2, 2.0.0-alpha
Reporter: Karel Kolman
  Labels: BB2015-05-TBR
 Attachments: HDFS-8653-1.patch


 The FTPFileSystem.rename(FTPClient client, Path src, Path dst) method is
 broken.
 The changeWorkingDirectory command underneath is being passed a string that
 still carries the URI prefix (which the FTP server obviously does not
 understand):
 {noformat}
 INFO [2012-08-06 12:59:39] (DefaultSession.java:297) - Received command: [CWD 
 ftp://localhost:61246/tmp/myfile]
  WARN [2012-08-06 12:59:39] (AbstractFakeCommandHandler.java:213) - Error 
 handling command: Command[CWD:[ftp://localhost:61246/tmp/myfile]]; 
 org.mockftpserver.fake.filesystem.FileSystemException: 
 /ftp://localhost:61246/tmp/myfile
 org.mockftpserver.fake.filesystem.FileSystemException: 
 /ftp://localhost:61246/tmp/myfile
   at 
 org.mockftpserver.fake.command.AbstractFakeCommandHandler.verifyFileSystemCondition(AbstractFakeCommandHandler.java:264)
   at 
 org.mockftpserver.fake.command.CwdCommandHandler.handle(CwdCommandHandler.java:44)
   at 
 org.mockftpserver.fake.command.AbstractFakeCommandHandler.handleCommand(AbstractFakeCommandHandler.java:76)
   at 
 org.mockftpserver.core.session.DefaultSession.readAndProcessCommand(DefaultSession.java:421)
   at 
 org.mockftpserver.core.session.DefaultSession.run(DefaultSession.java:384)
   at java.lang.Thread.run(Thread.java:680)
 {noformat}
 The solution would be this:
 {noformat}
 --- a/FTPFileSystem.java
 +++ b/FTPFileSystem.java
 @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
        throw new IOException("Destination path " + dst
            + " already exist, cannot rename!");
      }
 -    String parentSrc = absoluteSrc.getParent().toUri().toString();
 -    String parentDst = absoluteDst.getParent().toUri().toString();
 +    URI parentSrc = absoluteSrc.getParent().toUri();
 +    URI parentDst = absoluteDst.getParent().toUri();
      String from = src.getName();
      String to = dst.getName();
 -    if (!parentSrc.equals(parentDst)) {
 +    if (!parentSrc.toString().equals(parentDst.toString())) {
        throw new IOException("Cannot rename parent(source): " + parentSrc
            + ", parent(destination): " + parentDst);
      }
 -    client.changeWorkingDirectory(parentSrc);
 +    client.changeWorkingDirectory(parentSrc.getPath().toString());
      boolean renamed = client.rename(from, to);
      return renamed;
    }
 {noformat}
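 The distinction the patch relies on, shown in isolation (a sketch, assuming
 Hadoop's {{org.apache.hadoop.fs.Path}}, which preserves scheme and authority
 in getParent()):
 {code}
 import java.net.URI;
 import org.apache.hadoop.fs.Path;

 public class ParentUriDemo {
   public static void main(String[] args) {
     URI parent = new Path("ftp://localhost:61246/tmp/myfile").getParent().toUri();
     System.out.println(parent);           // ftp://localhost:61246/tmp -- what CWD was wrongly given
     System.out.println(parent.getPath()); // /tmp -- the path the FTP server expects
   }
 }
 {code}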



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9729) The example code of org.apache.hadoop.util.Tool is incorrect

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9729:
-
Labels: BB2015-05-TBR  (was: )

 The example code of org.apache.hadoop.util.Tool is incorrect
 

 Key: HADOOP-9729
 URL: https://issues.apache.org/jira/browse/HADOOP-9729
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 1.1.2
Reporter: hellojinjie
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9729.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 See http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/util/Tool.html:
 the {{public int run(String[] args)}} method has no return statement in the
 example code.
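 For reference, a corrected shape of the example (a sketch; the actual Javadoc
 example configures and submits a job where the comment is):
 {code}
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;

 public class MyApp extends Configured implements Tool {
   @Override
   public int run(String[] args) throws Exception {
     // ... create and submit the job ...
     return 0; // the return statement the published example is missing
   }

   public static void main(String[] args) throws Exception {
     System.exit(ToolRunner.run(new MyApp(), args));
   }
 }
 {code}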



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

