[jira] [Commented] (HDFS-6783) Fix HDFS CacheReplicationMonitor rescan logic

2014-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083879#comment-14083879
 ] 

Hadoop QA commented on HDFS-6783:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659518/HDFS-6783.006.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.TestHDFSServerPorts
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7544//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7544//console

This message is automatically generated.

 Fix HDFS CacheReplicationMonitor rescan logic
 -

 Key: HDFS-6783
 URL: https://issues.apache.org/jira/browse/HDFS-6783
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6783.001.patch, HDFS-6783.002.patch, 
 HDFS-6783.003.patch, HDFS-6783.004.patch, HDFS-6783.005.patch, 
 HDFS-6783.006.patch


 In the monitor thread, needsRescan is set to false before the real scan 
 starts, so {{waitForRescanIfNeeded}} will return at the first condition:
 {code}
 if (!needsRescan) {
   return;
 }
 {code}
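
One way to avoid the race described above is to track completed scans with a counter instead of a boolean flag that the monitor clears before scanning: a waiter then blocks until a scan that started after its request has fully finished. The sketch below is a simplified, hypothetical model; the class and field names (RescanTracker, scanCount, neededScanCount) are illustrative and not CacheReplicationMonitor's actual members.

```java
// Simplified, hypothetical sketch of count-based rescan tracking; names
// are illustrative, not CacheReplicationMonitor's real fields.
public class RescanTracker {
    private long scanCount = 0;        // number of fully completed scans
    private long neededScanCount = 0;  // scan number pending waiters require

    // A caller requesting a rescan needs a scan that starts after "now".
    public synchronized void setNeedsRescan() {
        neededScanCount = scanCount + 1;
    }

    // Waiters block until a whole scan has completed, rather than
    // returning early just because a boolean flag was already cleared.
    public synchronized void waitForRescanIfNeeded() throws InterruptedException {
        while (scanCount < neededScanCount) {
            wait();
        }
    }

    // The monitor thread calls this after each complete scan.
    public synchronized void completeScan() {
        scanCount++;
        notifyAll();
    }

    public synchronized boolean rescanPending() {
        return scanCount < neededScanCount;
    }
}
```

Because the waiter compares counters rather than reading a flag, clearing state at the start of a scan can no longer make {{waitForRescanIfNeeded}} return before the scan has actually run.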



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6782) Improve FS editlog logSync

2014-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083903#comment-14083903
 ] 

Hadoop QA commented on HDFS-6782:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659525/HDFS-6782.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7545//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7545//console

This message is automatically generated.

 Improve FS editlog logSync
 --

 Key: HDFS-6782
 URL: https://issues.apache.org/jira/browse/HDFS-6782
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.4.1
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6782.001.patch, HDFS-6782.002.patch


 In the NN, a double buffer (bufCurrent, bufReady) is used for log sync: 
 bufCurrent buffers newly arriving edit ops, and bufReady is for flushing. 
 This is efficient. When a flush is ongoing and bufCurrent is full, the NN 
 performs a force log sync, and all new ops are blocked (since the force log 
 sync is protected by the FSNamesystem write lock). After the flush finishes, 
 the new ops are still blocked, but at that point bufCurrent is actually free 
 and ops could go ahead and write to the buffer. The following diagram shows 
 the detail. This JIRA is for this improvement. Thanks [~umamaheswararao] for 
 confirming this issue.
 {code}
 edit1(txid1) --> write to bufCurrent --> logSync --> (swap buffer) flushing ---
 edit2(txid2) --> write to bufCurrent --> logSync --> waiting ---
 edit3(txid3) --> write to bufCurrent --> logSync --> waiting ---
 edit4(txid4) --> write to bufCurrent --> logSync --> waiting ---
 edit5(txid5) --> write to bufCurrent --full--> force sync --> waiting ---
 edit6(txid6) --> blocked
 ...
 editn(txidn) --> blocked
 {code}
 After the flush, it becomes
 {code}
 edit1(txid1) --> write to bufCurrent --> logSync --> finished
 edit2(txid2) --> write to bufCurrent --> logSync --> flushing ---
 edit3(txid3) --> write to bufCurrent --> logSync --> waiting ---
 edit4(txid4) --> write to bufCurrent --> logSync --> waiting ---
 edit5(txid5) --> write to bufCurrent --full--> force sync --> waiting ---
 edit6(txid6) --> blocked
 ...
 editn(txidn) --> blocked
 {code}
 After edit1 finishes, bufCurrent is free, and the thread that flushes txid2 
 will also flush txid3-txid5, so we should return from the force sync of edit5 
 and the FSNamesystem write lock will be freed. (Don't worry about the edit5 
 op returning too early: there is a normal logSync after the force logSync, 
 and it waits for the sync to finish.) This is the idea of this JIRA. 
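
The double-buffer mechanism described above can be modeled with a toy sketch: new ops accumulate in one buffer while the other is being flushed, and swapping the two is the only operation that needs the shared lock. The class and method names (DoubleBuffer, setReadyToFlush, flushReady) are illustrative stand-ins, not FSEditLog's actual API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the NN edit-log double buffer described above; the names
// are illustrative, not FSEditLog's real implementation.
public class DoubleBuffer {
    private Queue<String> bufCurrent = new ArrayDeque<>(); // collects new ops
    private Queue<String> bufReady = new ArrayDeque<>();   // being flushed

    public synchronized void writeOp(String op) {
        bufCurrent.add(op);
    }

    // Swap the buffers so flushing can proceed while new ops keep arriving.
    public synchronized void setReadyToFlush() {
        Queue<String> tmp = bufReady;
        bufReady = bufCurrent;
        bufCurrent = tmp;
    }

    // Flushing bufReady does not hold the lock, so writers appending to
    // bufCurrent are not blocked (in the real code only one flusher runs
    // at a time). Returns the number of ops flushed.
    public int flushReady() {
        int flushed = bufReady.size();
        bufReady.clear();
        return flushed;
    }

    public synchronized int currentSize() { return bufCurrent.size(); }
}
```

The issue in this JIRA is what happens when bufCurrent fills up while a flush is still in progress: the force sync keeps waiters blocked even after the swap target has become free again.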



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6813) WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable with thread-safe.

2014-08-03 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6813:


 Summary: WebHdfsFileSystem#OffsetUrlInputStream should implement 
PositionedReadable with thread-safe.
 Key: HDFS-6813
 URL: https://issues.apache.org/jira/browse/HDFS-6813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu


The {{PositionedReadable}} definition requires that implementations of its 
interfaces be thread-safe.

OffsetUrlInputStream (the WebHdfsFileSystem input stream) doesn't implement 
these interfaces in a thread-safe way; this JIRA is to fix that.
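
A minimal sketch of what the thread-safety contract means in practice: the positioned read must neither use nor move the shared cursor, so concurrent preads cannot corrupt each other or a sequential reader. This is illustrative code over an in-memory buffer, not the WebHdfsFileSystem implementation.

```java
// Illustrative sketch of a thread-safe positioned read over an
// in-memory buffer; not WebHdfsFileSystem's actual code.
public class PositionedStream {
    private final byte[] data;
    private long pos = 0;                 // shared sequential cursor
    private final Object lock = new Object();

    public PositionedStream(byte[] data) { this.data = data; }

    // Sequential read: advances the shared cursor under the lock.
    public int read() {
        synchronized (lock) {
            return pos < data.length ? data[(int) pos++] & 0xff : -1;
        }
    }

    // Positioned read: takes an explicit offset and never touches the
    // shared cursor, which is the PositionedReadable thread-safety
    // contract. Returns the number of bytes copied, or -1 at EOF.
    public int read(long position, byte[] buf, int off, int len) {
        if (position >= data.length) return -1;
        int n = (int) Math.min(len, data.length - position);
        System.arraycopy(data, (int) position, buf, off, n);
        return n;
    }

    public long getPos() {
        synchronized (lock) { return pos; }
    }
}
```

An implementation that services pread by seeking the shared stream and seeking back (as a URL-offset stream might) needs a lock around the whole seek/read/restore sequence to honor the same contract.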



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6813) WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable with thread-safe.

2014-08-03 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6813:
-

Status: Patch Available  (was: Open)

 WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable 
 with thread-safe.
 ---

 Key: HDFS-6813
 URL: https://issues.apache.org/jira/browse/HDFS-6813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6813.001.patch


 The {{PositionedReadable}} definition requires that implementations of its 
 interfaces be thread-safe.
 OffsetUrlInputStream (the WebHdfsFileSystem input stream) doesn't implement 
 these interfaces in a thread-safe way; this JIRA is to fix that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6813) WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable with thread-safe.

2014-08-03 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6813:
-

Attachment: HDFS-6813.001.patch

 WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable 
 with thread-safe.
 ---

 Key: HDFS-6813
 URL: https://issues.apache.org/jira/browse/HDFS-6813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6813.001.patch


 The {{PositionedReadable}} definition requires that implementations of its 
 interfaces be thread-safe.
 OffsetUrlInputStream (the WebHdfsFileSystem input stream) doesn't implement 
 these interfaces in a thread-safe way; this JIRA is to fix that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-08-03 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6451:
-

Attachment: HDFS-6451.002.patch

Attaching a patch which addresses the review comments by [~brandonli]. Added 
tests for all the handlers in TestRpcProgramNfs3.java. Kept the tests generic, 
so they can be extended in the future to include other tests (various corner 
cases, other NFS3ERR* messages, etc.).

While testing read() I hit HDFS-6582. I have made a note of this and commented 
out that specific test for now.

Let me know if there are any suggestions. Thanks!

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.002.patch, HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.
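
The common-handler idea mentioned above could look like the following sketch. The status code values match RFC 1813, but the class and nested exception here are stand-ins for the real Nfs3Status constants and Hadoop's AccessControlException, not the actual patch.

```java
import java.io.IOException;

// Hypothetical single-place exception mapping; Nfs3Status-like constants
// use the RFC 1813 values, and the nested AccessControlException stands
// in for org.apache.hadoop.security.AccessControlException.
public class NfsErrorMapper {
    public static final int NFS3ERR_PERM = 1;  // not owner / not permitted
    public static final int NFS3ERR_IO = 5;    // hard I/O error

    public static class AccessControlException extends IOException {}

    // Every NFS handler funnels its caught IOException through here
    // instead of repeating the same catch blocks per method.
    public static int mapException(IOException e) {
        if (e instanceof AccessControlException) {
            return NFS3ERR_PERM;  // permission failure, not an I/O fault
        }
        return NFS3ERR_IO;        // default for unexpected errors
    }
}
```

With a helper like this, adding a new mapping (say, for quota or stale-handle errors) becomes a one-line change rather than an edit to every handler.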



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-08-03 Thread Abhiraj Butala (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083946#comment-14083946
 ] 

Abhiraj Butala commented on HDFS-6451:
--

Forgot to mention, I also cleaned up some whitespace in RpcProgramNfs3.java. 
Please forgive me for that. :) 

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.002.patch, HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6803) Documenting DFSClient#DFSInputStream expectations reading and preading in concurrent context

2014-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083949#comment-14083949
 ] 

Steve Loughran commented on HDFS-6803:
--

This is fun; stack's just opened up a whole new bag of inconsistencies.

h2. Consistency with actual file data & metadata

We should state that changes to a file (length, contents, existence, perms) may 
not be visible to an open stream; if they do become visible, there are no 
guarantees about when. That could include partway through a readFully 
operation; this cannot be guaranteed to be atomic.


h2. Isolation of pread operations

When a pread is in progress, should that change be visible in {{getPos()}}? 

# If not, the method will need to be made {{synchronized}} in all 
implementations (it isn't right now; I checked).
# If it can be visible, then we could pull the {{synchronized}} marker off 
some implementations and remove that as a lock point.

h2. Failure Modes in concurrent/serialized operations

One problem with concurrency on read+pread is something I hadn't thought of 
before: on any failure of a pread, the pos value must be reset to the previous 
one. Everything appears to do this; the test would be

{code}
read();
try {
  read(EOF + 2);
} catch (IOException expected) {
  // expected: a pread past EOF must fail
}
assertTrue(getPos() == EOF);
read();
{code}

The second {{read()}} would succeed/return -1 depending on the position, and 
not an {{EOFException}}. The same outcome must happen for a negative pread 
attempt.

 If someone were to add this to {{AbstractContractSeekTest}} it'd get picked up 
by all the implementations and we could see what happens.

Looking at the standard impl, it does seek() back in a finally block; but if 
there is an exception in the read(), then a subsequent exception in the final 
seek() would lose the original one. I think it should be reworked to catch any 
IOE in the read operation and do an exception-swallowing seek-back in that 
case. Or just do it for EOFException, now that Hadoop 2.5+ has all the 
standard filesystems throwing EOFException consistently.
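
The rework described in the last paragraph might look like this sketch: restore the old position on failure, but never let the restoring seek replace the read's own exception; on the success path a failing restore still surfaces. The Seekable interface here is a pared-down stand-in, not Hadoop's actual API.

```java
import java.io.IOException;

// Sketch of an exception-safe positioned read: on a read failure we
// seek back and swallow any secondary failure so the original
// exception survives; on success a failing restore seek still surfaces.
public class SafePread {
    public interface Seekable {
        void seek(long pos) throws IOException;
        long getPos() throws IOException;
        int read(byte[] buf, int off, int len) throws IOException;
    }

    public static int pread(Seekable in, long position, byte[] buf, int off, int len)
            throws IOException {
        long oldPos = in.getPos();
        try {
            in.seek(position);
            int n = in.read(buf, off, len);
            in.seek(oldPos);          // success path: restore normally
            return n;
        } catch (IOException readFailure) {
            try {
                in.seek(oldPos);      // best-effort restore
            } catch (IOException ignored) {
                // swallowed so the read's own exception is not masked
            }
            throw readFailure;
        }
    }
}
```

This is exactly the failure mode the contract test above probes: after a failed pread, {{getPos()}} must report the pre-pread position and the original exception must propagate.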



 Documenting DFSClient#DFSInputStream expectations reading and preading in 
 concurrent context
 

 Key: HDFS-6803
 URL: https://issues.apache.org/jira/browse/HDFS-6803
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 2.4.1
Reporter: stack
 Attachments: DocumentingDFSClientDFSInputStream (1).pdf


 Reviews of the patch posted on the parent task suggest that we be more 
 explicit about how DFSIS is expected to behave when being read by contending 
 threads. It is also suggested that presumptions made internally be made 
 explicit by documenting expectations.
 Before we put up a patch we've made a document of assertions we'd like to 
 make into tenets of DFSInputStream. If there is agreement, we'll attach to 
 this issue a patch that weaves the assumptions into DFSIS as javadoc and 
 class comments. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083953#comment-14083953
 ] 

Hadoop QA commented on HDFS-6451:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659540/HDFS-6451.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7547//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7547//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs-nfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7547//console

This message is automatically generated.

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.002.patch, HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6813) WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable with thread-safe.

2014-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083956#comment-14083956
 ] 

Hadoop QA commented on HDFS-6813:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659533/HDFS-6813.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.namenode.ha.TestDFSZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7546//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7546//console

This message is automatically generated.

 WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable 
 with thread-safe.
 ---

 Key: HDFS-6813
 URL: https://issues.apache.org/jira/browse/HDFS-6813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6813.001.patch


 The {{PositionedReadable}} definition requires that implementations of its 
 interfaces be thread-safe.
 OffsetUrlInputStream (the WebHdfsFileSystem input stream) doesn't implement 
 these interfaces in a thread-safe way; this JIRA is to fix that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6810) StorageReport array is initialized with wrong size in DatanodeDescriptor#getStorageReports

2014-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083979#comment-14083979
 ] 

Hudson commented on HDFS-6810:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1826 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1826/])
HDFS-6810. StorageReport array is initialized with wrong size in 
DatanodeDescriptor#getStorageReports. (Contributed by szetszwo) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1615381)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


 StorageReport array is initialized with wrong size in 
 DatanodeDescriptor#getStorageReports
 --

 Key: HDFS-6810
 URL: https://issues.apache.org/jira/browse/HDFS-6810
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Ted Yu
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 3.0.0, 2.6.0

 Attachments: h6810_20140803.patch


 Here is related code:
 {code}
   public StorageReport[] getStorageReports() {
 final StorageReport[] reports = new StorageReport[storageMap.size()];
 {code}
 Other methods use the following construct:
 {code}
 synchronized (storageMap) {
 {code}
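
The implied fix is to take the same storageMap lock around both the sizing and the copy, so the map cannot change between the two steps. A simplified sketch (not the real DatanodeDescriptor, with String standing in for StorageReport):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fix pattern implied above: size the array and copy the
// values inside one synchronized block. Names mirror but simplify
// DatanodeDescriptor; String stands in for StorageReport.
public class Reports {
    private final Map<String, String> storageMap = new HashMap<>();

    public void add(String storageId, String report) {
        synchronized (storageMap) {
            storageMap.put(storageId, report);
        }
    }

    public String[] getStorageReports() {
        synchronized (storageMap) {
            // size() and the copy happen under the same lock, so the
            // array can never be sized for a stale view of the map.
            return storageMap.values().toArray(new String[storageMap.size()]);
        }
    }
}
```

Without the lock, a datanode registering a storage between the `size()` call and the copy could leave null slots or drop reports.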



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6810) StorageReport array is initialized with wrong size in DatanodeDescriptor#getStorageReports

2014-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14084003#comment-14084003
 ] 

Hudson commented on HDFS-6810:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1851 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1851/])
HDFS-6810. StorageReport array is initialized with wrong size in 
DatanodeDescriptor#getStorageReports. (Contributed by szetszwo) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1615381)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


 StorageReport array is initialized with wrong size in 
 DatanodeDescriptor#getStorageReports
 --

 Key: HDFS-6810
 URL: https://issues.apache.org/jira/browse/HDFS-6810
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Ted Yu
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 3.0.0, 2.6.0

 Attachments: h6810_20140803.patch


 Here is related code:
 {code}
   public StorageReport[] getStorageReports() {
 final StorageReport[] reports = new StorageReport[storageMap.size()];
 {code}
 Other methods use the following construct:
 {code}
 synchronized (storageMap) {
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6776) distcp from insecure cluster (source) to secure cluster (destination) doesn't work

2014-08-03 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6776:


Attachment: HDFS-6776.003.patch

Version 003 to address findbugs issue.


 distcp from insecure cluster (source) to secure cluster (destination) doesn't 
 work
 --

 Key: HDFS-6776
 URL: https://issues.apache.org/jira/browse/HDFS-6776
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0, 2.5.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6776.001.patch, HDFS-6776.002.patch, 
 HDFS-6776.003.patch


 Issuing the distcp command on the secure cluster side, trying to copy stuff 
 from the insecure cluster to the secure cluster, we see the following problem:
 {code}
 [hadoopuser@yjc5u-1 ~]$ hadoop distcp webhdfs://insecure-cluster:port/tmp 
 hdfs://secure-cluster:8020/tmp/tmptgt
 14/07/30 20:06:19 INFO tools.DistCp: Input Options: 
 DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
 ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
 copyStrategy='uniformsize', sourceFileListing=null, 
 sourcePaths=[webhdfs://insecure-cluster:port/tmp], 
 targetPath=hdfs://secure-cluster:8020/tmp/tmptgt, targetPathExists=true}
 14/07/30 20:06:19 INFO client.RMProxy: Connecting to ResourceManager at 
 secure-cluster:8032
 14/07/30 20:06:20 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 14/07/30 20:06:20 WARN security.UserGroupInformation: 
 PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
 cause:java.io.IOException: Failed to get the token for hadoopuser, 
 user=hadoopuser
 14/07/30 20:06:20 WARN security.UserGroupInformation: 
 PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
 cause:java.io.IOException: Failed to get the token for hadoopuser, 
 user=hadoopuser
 14/07/30 20:06:20 ERROR tools.DistCp: Exception encountered 
 java.io.IOException: Failed to get the token for hadoopuser, user=hadoopuser
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:365)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:84)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:618)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:584)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:438)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:466)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:462)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1132)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:218)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:403)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:424)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:640)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:565)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:438)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:466)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:462)
   at 
 

[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Attachment: HDFS-6790.patch

Patch to leverage Configuration.getPassword in order to provide an alternative 
to SSL passwords stored in clear text within ssl-server.xml or a side file, 
while maintaining backward compatibility.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.
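
The fallback order getPassword provides can be modeled with a toy lookup. The two maps below are stand-ins for the credential provider store and the clear-text ssl-server.xml values; this is not Hadoop's actual Configuration or CredentialProvider API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the getPassword fallback semantics described above: a
// credential provider is consulted first, then the clear-text config.
// The maps stand in for the real CredentialProvider API and
// ssl-server.xml; this is not Hadoop's Configuration class.
public class PasswordLookup {
    private final Map<String, char[]> credentialProvider = new HashMap<>();
    private final Map<String, String> clearTextConfig = new HashMap<>();

    public void setProviderPassword(String name, String pw) {
        credentialProvider.put(name, pw.toCharArray());
    }

    public void setClearText(String name, String pw) {
        clearTextConfig.put(name, pw);
    }

    // Provider wins; clear text keeps old deployments working; null if
    // neither source has the entry.
    public char[] getPassword(String name) {
        char[] fromProvider = credentialProvider.get(name);
        if (fromProvider != null) return fromProvider;
        String clear = clearTextConfig.get(name);
        return clear == null ? null : clear.toCharArray();
    }
}
```

Returning char[] rather than String mirrors the convention of password APIs: the caller can zero the array after use instead of leaving an immutable copy on the heap.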



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Patch Available  (was: Open)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6813) WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable with thread-safe.

2014-08-03 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14084056#comment-14084056
 ] 

Uma Maheswara Rao G commented on HDFS-6813:
---

I think, from the PositionedReadable doc, this looks reasonable to me. But I 
also noticed that FSInputStream has these APIs without synchronized. The read 
API in DFSInputStream is also not synchronized, but perhaps that was left 
without synchronization intentionally.

[~szetszwo], can you please confirm whether there is any reason these are not 
synchronized and do not follow the PositionedReadable javadoc? Thanks

 WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable 
 with thread-safe.
 ---

 Key: HDFS-6813
 URL: https://issues.apache.org/jira/browse/HDFS-6813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6813.001.patch


 The {{PositionedReadable}} definition requires that implementations of its 
 interfaces be thread-safe.
 OffsetUrlInputStream (the WebHdfsFileSystem input stream) doesn't implement 
 these interfaces in a thread-safe way; this JIRA is to fix that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6814) Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as boolean

2014-08-03 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-6814:
-

 Summary: Mistakenly 
dfs.namenode.list.encryption.zones.num.responses configured as boolean
 Key: HDFS-6814
 URL: https://issues.apache.org/jira/browse/HDFS-6814
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


{code}
<property>
  <name>dfs.namenode.list.encryption.zones.num.responses</name>
  <value>false</value>
  <description>When listing encryption zones, the maximum number of zones
    that will be returned in a batch. Fetching the list incrementally in
    batches improves namenode performance.
  </description>
</property>
{code}
The default value should be 100, the same as {code}public static final int
DFS_NAMENODE_LIST_ENCRYPTION_ZONES_NUM_RESPONSES_DEFAULT = 100;{code}
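
As a minimal reproduction sketch (plain Java, no Hadoop dependency; {{getInt}} here is a hypothetical stand-in for {{Configuration.getInt}}): an int property whose configured value is the string "false" fails with a NumberFormatException, which is exactly the failure mode this JIRA describes.

```java
// Demonstrates why a boolean string in an int-valued property breaks startup:
// Integer.parseInt rejects non-numeric text.
public class BadDefaultDemo {
    // Simplified model of Configuration.getInt(name, defaultValue).
    static int getInt(String configuredValue, int defaultValue) {
        if (configuredValue == null) {
            return defaultValue;              // property not set: use the default
        }
        return Integer.parseInt(configuredValue);  // throws for "false"
    }

    public static void main(String[] args) {
        try {
            getInt("false", 100);
            throw new AssertionError("expected NumberFormatException");
        } catch (NumberFormatException expected) {
            System.out.println("NumberFormatException: " + expected.getMessage());
        }
        // With the corrected default of 100 and no explicit value set:
        if (getInt(null, 100) != 100) {
            throw new AssertionError();
        }
    }
}
```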





[jira] [Updated] (HDFS-6814) Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as boolean

2014-08-03 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-6814:
--

Attachment: HDFS-6814.patch

Attached a simple patch.

 Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as 
 boolean
 -

 Key: HDFS-6814
 URL: https://issues.apache.org/jira/browse/HDFS-6814
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HDFS-6814.patch


 {code}
 <property>
   <name>dfs.namenode.list.encryption.zones.num.responses</name>
   <value>false</value>
   <description>When listing encryption zones, the maximum number of zones
 that will be returned in a batch. Fetching the list incrementally in
 batches improves namenode performance.
   </description>
 </property>
 {code}
 The default value should be 100, the same as {code}public static final int
 DFS_NAMENODE_LIST_ENCRYPTION_ZONES_NUM_RESPONSES_DEFAULT = 100;{code}





[jira] [Comment Edited] (HDFS-6814) Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as boolean

2014-08-03 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084108#comment-14084108
 ] 

Uma Maheswara Rao G edited comment on HDFS-6814 at 8/3/14 7:37 PM:
---

Noticed this while running TestDistributedFileSystem:
{noformat}
java.lang.NumberFormatException: For input string: "false"
	at java.lang.NumberFormatException.forInputString(Unknown Source)
	at java.lang.Integer.parseInt(Unknown Source)
	at java.lang.Integer.parseInt(Unknown Source)
	at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1113)
	at org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager.<init>(EncryptionZoneManager.java:75)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:231)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:880)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:752)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:925)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:291)
	at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:146)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:869)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:707)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:378)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:359)
	at org.apache.hadoop.hdfs.TestDistributedFileSystem.testGetFileBlockStorageLocationsBatching(TestDistributedFileSystem.java:674)
{noformat}



was (Author: umamaheswararao):
Attached simple patch

 Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as 
 boolean
 -

 Key: HDFS-6814
 URL: https://issues.apache.org/jira/browse/HDFS-6814
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HDFS-6814.patch


 {code}
 <property>
   <name>dfs.namenode.list.encryption.zones.num.responses</name>
   <value>false</value>
   <description>When listing encryption zones, the maximum number of zones
 that will be returned in a batch. Fetching the list incrementally in
 batches improves namenode performance.
   </description>
 </property>
 {code}
 The default value should be 100, the same as {code}public static final int
 DFS_NAMENODE_LIST_ENCRYPTION_ZONES_NUM_RESPONSES_DEFAULT = 100;{code}





[jira] [Commented] (HDFS-6814) Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as boolean

2014-08-03 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084109#comment-14084109
 ] 

Uma Maheswara Rao G commented on HDFS-6814:
---

Attached a simple patch.

 Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as 
 boolean
 -

 Key: HDFS-6814
 URL: https://issues.apache.org/jira/browse/HDFS-6814
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HDFS-6814.patch


 {code}
 <property>
   <name>dfs.namenode.list.encryption.zones.num.responses</name>
   <value>false</value>
   <description>When listing encryption zones, the maximum number of zones
 that will be returned in a batch. Fetching the list incrementally in
 batches improves namenode performance.
   </description>
 </property>
 {code}
 The default value should be 100, the same as {code}public static final int
 DFS_NAMENODE_LIST_ENCRYPTION_ZONES_NUM_RESPONSES_DEFAULT = 100;{code}





[jira] [Work started] (HDFS-6814) Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as boolean

2014-08-03 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-6814 started by Uma Maheswara Rao G.

 Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as 
 boolean
 -

 Key: HDFS-6814
 URL: https://issues.apache.org/jira/browse/HDFS-6814
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HDFS-6814.patch


 {code}
 <property>
   <name>dfs.namenode.list.encryption.zones.num.responses</name>
   <value>false</value>
   <description>When listing encryption zones, the maximum number of zones
 that will be returned in a batch. Fetching the list incrementally in
 batches improves namenode performance.
   </description>
 </property>
 {code}
 The default value should be 100, the same as {code}public static final int
 DFS_NAMENODE_LIST_ENCRYPTION_ZONES_NUM_RESPONSES_DEFAULT = 100;{code}





[jira] [Commented] (HDFS-6776) distcp from insecure cluster (source) to secure cluster (destination) doesn't work

2014-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084122#comment-14084122
 ] 

Hadoop QA commented on HDFS-6776:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659555/HDFS-6776.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7548//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7548//console

This message is automatically generated.

 distcp from insecure cluster (source) to secure cluster (destination) doesn't 
 work
 --

 Key: HDFS-6776
 URL: https://issues.apache.org/jira/browse/HDFS-6776
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0, 2.5.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6776.001.patch, HDFS-6776.002.patch, 
 HDFS-6776.003.patch


 Issuing a distcp command from the secure cluster side, trying to copy data from 
 the insecure cluster to the secure cluster, fails with the following problem:
 {code}
 hadoopuser@yjc5u-1 ~]$ hadoop distcp webhdfs://insure-cluster:port/tmp 
 hdfs://sure-cluster:8020/tmp/tmptgt
 14/07/30 20:06:19 INFO tools.DistCp: Input Options: 
 DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
 ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
 copyStrategy='uniformsize', sourceFileListing=null, 
 sourcePaths=[webhdfs://insecure-cluster:port/tmp], 
 targetPath=hdfs://secure-cluster:8020/tmp/tmptgt, targetPathExists=true}
 14/07/30 20:06:19 INFO client.RMProxy: Connecting to ResourceManager at 
 secure-clister:8032
 14/07/30 20:06:20 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 14/07/30 20:06:20 WARN security.UserGroupInformation: 
 PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
 cause:java.io.IOException: Failed to get the token for hadoopuser, 
 user=hadoopuser
 14/07/30 20:06:20 WARN security.UserGroupInformation: 
 PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
 cause:java.io.IOException: Failed to get the token for hadoopuser, 
 user=hadoopuser
 14/07/30 20:06:20 ERROR tools.DistCp: Exception encountered 
 java.io.IOException: Failed to get the token for hadoopuser, user=hadoopuser
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:365)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:84)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:618)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:584)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:438)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:466)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 

[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084128#comment-14084128
 ] 

Hadoop QA commented on HDFS-6790:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659559/HDFS-6790.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSUtil

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7549//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7549//console

This message is automatically generated.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fall back to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.
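
The fallback pattern described above can be sketched as follows. This is a hedged illustration without a Hadoop dependency: {{CredentialLookup}} is a hypothetical stand-in for {{Configuration.getPassword}}'s credential-provider chain, and the property name is just an example alias.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "credential provider first, clear-text config second".
public class PasswordFallback {
    interface CredentialLookup {
        char[] get(String alias);  // returns null when no provider holds the alias
    }

    static String getPassword(CredentialLookup provider, Map<String, String> conf, String name) {
        char[] fromProvider = provider.get(name);
        if (fromProvider != null) {
            return new String(fromProvider);  // preferred: credential provider entry
        }
        return conf.get(name);                // fallback: clear text, e.g. from ssl-server.xml
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("ssl.server.keystore.password", "clearTextSecret");

        // No provider entry: falls back to the clear-text config value.
        String p = getPassword(alias -> null, conf, "ssl.server.keystore.password");
        if (!"clearTextSecret".equals(p)) throw new AssertionError();

        // A provider entry wins over the config value.
        p = getPassword(alias -> "vaultSecret".toCharArray(), conf, "ssl.server.keystore.password");
        if (!"vaultSecret".equals(p)) throw new AssertionError();
        System.out.println("ok");  // prints "ok"
    }
}
```

This is why the change is backward compatible: deployments without a credential provider keep reading the same clear-text property they do today.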





[jira] [Updated] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-08-03 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6451:
-

Attachment: HDFS-6451.003.patch

Reattaching the patch with the findbugs warning addressed.

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.002.patch, HDFS-6451.003.patch, HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling, instead of repeating the same handling in different NFS 
 methods.
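
A hypothetical sketch of that single shared mapping (the class and its stand-in exception type are invented for illustration; in Hadoop the real exception is org.apache.hadoop.security.AccessControlException): translate exceptions from the HDFS calls into NFS3 status codes in one place instead of per RPC handler. The numeric constants mirror the NFS3ERR_* values defined in RFC 1813.

```java
// Central exception-to-NFS3-status mapping, instead of repeating the same
// try/catch ladder in every NFS method.
public class NfsErrorMapper {
    static final int NFS3_OK = 0;
    static final int NFS3ERR_PERM = 1;   // caller lacks permission
    static final int NFS3ERR_IO = 5;     // generic I/O failure

    // Stand-in for org.apache.hadoop.security.AccessControlException.
    static class AccessControlException extends java.io.IOException {}

    static int toNfs3Status(Throwable t) {
        if (t instanceof AccessControlException) {
            return NFS3ERR_PERM;  // a permission problem, not an I/O failure
        }
        return NFS3ERR_IO;        // everything else stays a generic I/O error
    }

    public static void main(String[] args) {
        if (toNfs3Status(new AccessControlException()) != NFS3ERR_PERM) {
            throw new AssertionError();
        }
        if (toNfs3Status(new java.io.IOException()) != NFS3ERR_IO) {
            throw new AssertionError();
        }
        System.out.println("ok");  // prints "ok"
    }
}
```

Each NFS method would then catch once and call {{toNfs3Status}}, so adding a new mapping (say, for quota exceptions) touches one method rather than every handler.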





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Open  (was: Patch Available)

Invalid line in test.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Attachment: HDFS-6790.patch

Removed invalid line in TestDFSUtil.testGetPassword() and attaching new patch.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Patch Available  (was: Open)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6451) NFS should not return NFS3ERR_IO for AccessControlException

2014-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084140#comment-14084140
 ] 

Hadoop QA commented on HDFS-6451:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659584/HDFS-6451.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7550//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7550//console

This message is automatically generated.

 NFS should not return NFS3ERR_IO for AccessControlException 
 

 Key: HDFS-6451
 URL: https://issues.apache.org/jira/browse/HDFS-6451
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
 Attachments: HDFS-6451.002.patch, HDFS-6451.003.patch, HDFS-6451.patch


 As [~jingzhao] pointed out in HDFS-6411, we need to catch the 
 AccessControlException from the HDFS calls, and return NFS3ERR_PERM instead 
 of NFS3ERR_IO for it.
 Another possible improvement is to have a single class/method for the common 
 exception handling process, instead of repeating the same exception handling 
 process in different NFS methods.





[jira] [Commented] (HDFS-6694) TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently with various symptoms

2014-08-03 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084148#comment-14084148
 ] 

Yongjun Zhang commented on HDFS-6694:
-

Filed INFRA-8147 to check the open-file limit on the Jenkins test slaves.



 TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
 with various symptoms
 

 Key: HDFS-6694
 URL: https://issues.apache.org/jira/browse/HDFS-6694
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Yongjun Zhang
 Attachments: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover-output.txt, 
 org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.txt


 TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
 with various symptoms. Typical failures are described in first comment.





[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084172#comment-14084172
 ] 

Hadoop QA commented on HDFS-6790:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659585/HDFS-6790.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7551//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7551//console

This message is automatically generated.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084196#comment-14084196
 ] 

Larry McCay commented on HDFS-6790:
---

Test failure is unrelated to the patch.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Open  (was: Patch Available)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Patch Available  (was: Open)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6663) Admin command to track file and locations from block id

2014-08-03 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084242#comment-14084242
 ] 

Chen He commented on HDFS-6663:
---

Done with the preliminary code change; currently working on the unit test.

 Admin command to track file and locations from block id
 ---

 Key: HDFS-6663
 URL: https://issues.apache.org/jira/browse/HDFS-6663
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Kihwal Lee
Assignee: Chen He

 A dfsadmin command that allows finding out the file and the locations given a 
 block number will be very useful in debugging production issues.   It may be 
 possible to add this feature to Fsck, instead of creating a new command.





[jira] [Updated] (HDFS-5185) DN fails to startup if one of the data dir is full

2014-08-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-5185:


Attachment: HDFS-5185-003.patch

As suggested by [~umamaheswararao] offline,
renamed the existing {{checkDiskError()}} to {{checkDiskErrorAsync()}}
and named the new synchronous method {{checkDiskError()}}.

 DN fails to startup if one of the data dir is full
 --

 Key: HDFS-5185
 URL: https://issues.apache.org/jira/browse/HDFS-5185
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Attachments: HDFS-5185-002.patch, HDFS-5185-003.patch, HDFS-5185.patch


 DataNode fails to start up if one of the configured data dirs is out of space. 
 It fails with the following exception:
 {noformat}2013-09-11 17:48:43,680 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool registering (storage id DS-308316523-xx.xx.xx.xx-64015-1378896293604) service to /nn1:65110
 java.io.IOException: Mkdirs failed to create /opt/nish/data/current/BP-123456-1234567/tmp
 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.<init>(BlockPoolSlice.java:105)
 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:216)
 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.addBlockPool(FsVolumeList.java:155)
 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addBlockPool(FsDatasetImpl.java:1593)
 	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:834)
 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:311)
 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:217)
 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:660)
 	at java.lang.Thread.run(Thread.java:662)
 {noformat}
 It should continue to start up with the other available data dirs.
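
A hedged sketch of the desired behavior (not the actual DataNode volume code; {{Dir}} and the class are hypothetical): try to initialize every configured data dir, skip the ones that fail, and abort only when no usable dir remains.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Tolerant volume initialization: one full disk should not stop the whole node.
public class VolumeInit {
    interface Dir {
        void init() throws IOException;  // e.g. create the block-pool directories
        String name();
    }

    static List<String> initVolumes(List<Dir> dirs) throws IOException {
        List<String> usable = new ArrayList<>();
        for (Dir d : dirs) {
            try {
                d.init();
                usable.add(d.name());
            } catch (IOException e) {
                // e.g. "Mkdirs failed" because the disk is full: log and continue
                System.err.println("Skipping " + d.name() + ": " + e.getMessage());
            }
        }
        if (usable.isEmpty()) {
            throw new IOException("All configured data dirs failed");
        }
        return usable;
    }

    public static void main(String[] args) throws Exception {
        Dir good = new Dir() {
            public void init() {}
            public String name() { return "/data1"; }
        };
        Dir full = new Dir() {
            public void init() throws IOException {
                throw new IOException("Mkdirs failed to create /data2/current");
            }
            public String name() { return "/data2"; }
        };
        List<String> usable = initVolumes(List.of(full, good));
        if (!usable.equals(List.of("/data1"))) throw new AssertionError();
        System.out.println(usable);
    }
}
```

In the real DataNode the tolerance threshold is configurable (dfs.datanode.failed.volumes.tolerated); the sketch only shows the skip-and-continue shape.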





[jira] [Commented] (HDFS-5185) DN fails to startup if one of the data dir is full

2014-08-03 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084293#comment-14084293
 ] 

Uma Maheswara Rao G commented on HDFS-5185:
---

+1 latest patch looks good to me. Pending jenkins.

 DN fails to startup if one of the data dir is full
 --

 Key: HDFS-5185
 URL: https://issues.apache.org/jira/browse/HDFS-5185
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Attachments: HDFS-5185-002.patch, HDFS-5185-003.patch, HDFS-5185.patch


 DataNode fails to start up if one of the configured data dirs is out of space. 
 It fails with the following exception:
 {noformat}2013-09-11 17:48:43,680 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool registering (storage id DS-308316523-xx.xx.xx.xx-64015-1378896293604) service to /nn1:65110
 java.io.IOException: Mkdirs failed to create /opt/nish/data/current/BP-123456-1234567/tmp
 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.<init>(BlockPoolSlice.java:105)
 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:216)
 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.addBlockPool(FsVolumeList.java:155)
 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addBlockPool(FsDatasetImpl.java:1593)
 	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:834)
 	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:311)
 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:217)
 	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:660)
 	at java.lang.Thread.run(Thread.java:662)
 {noformat}
 It should continue to start up with the other available data dirs.


