[jira] [Created] (HDFS-8365) Erasure Coding: Badly treated when short of Datanode in StripedDataStreamer

2015-05-11 Thread Walter Su (JIRA)
Walter Su created HDFS-8365:
---

 Summary: Erasure Coding: Badly treated when short of Datanode in 
StripedDataStreamer
 Key: HDFS-8365
 URL: https://issues.apache.org/jira/browse/HDFS-8365
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


Currently, each innerBlock of a blockGroup should be put on a different node; one 
node can't have 2 innerBlocks. 
If one node has 2 innerBlocks, we have a blockReport issue: the first reported 
innerBlock will be added to triplets, but the second won't.
If we decide not to support 2 innerBlocks on one node, we should handle this 
situation and output a friendly warning.

When there are only 8 DN, and ECSchema is RS-6-3
{noformat}
 # bin/hdfs dfs -put README.txt /ecdir
15/05/11 13:48:30 WARN hdfs.DataStreamer: DataStreamer Exception
java.lang.NullPointerException
at 
java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
DFSStripedOutputStream:#0: isFailed? f, null@null
java.io.IOException: java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.set(DataStreamer.java:183)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:571)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
Caused by: java.lang.NullPointerException
at 
java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
... 1 more
15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
DFSStripedOutputStream:#1: isFailed? f, null@null
java.nio.channels.ClosedChannelException
at 
org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:208)
at 
org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:146)
at 
org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:713)
{noformat}
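A note for context (not from the thread): the NullPointerException at LinkedBlockingQueue.offer in the trace above is consistent with that class's documented contract of rejecting null elements, so the root cause is a null striped block reaching the coordinator queue. A minimal standalone illustration (class name is invented, not HDFS code):

```java
import java.util.concurrent.LinkedBlockingQueue;

// Standalone illustration: LinkedBlockingQueue is documented to reject
// null elements, which matches the NPE seen when a null striped block
// reaches Coordinator.putStripedBlock.
public class OfferNullDemo {

    // Returns true when offering null throws NullPointerException.
    static boolean offerNullThrowsNpe() {
        LinkedBlockingQueue<Object> queue = new LinkedBlockingQueue<>();
        try {
            queue.offer(null); // spec: throws NPE for null elements
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(offerNullThrowsNpe()); // prints "true"
    }
}
```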



--
This message was sent by Atlassian JIRA (v6.3.4#6332)


[jira] [Commented] (HDFS-8220) Erasure Coding: StripedDataStreamer fails to handle the blocklocations which doesn't satisfy BlockGroupSize

2015-05-11 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537636#comment-14537636
 ] 

Li Bo commented on HDFS-8220:
-

Hi Rakesh,
Sorry for the late reply.
I notice HDFS-8365 has been created to target a similar problem. I think we need 
some discussion about when and how to support very small clusters.

 Erasure Coding: StripedDataStreamer fails to handle the blocklocations which 
 doesn't satisfy BlockGroupSize
 ---

 Key: HDFS-8220
 URL: https://issues.apache.org/jira/browse/HDFS-8220
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8220-001.patch, HDFS-8220-002.patch, 
 HDFS-8220-003.patch, HDFS-8220-004.patch, HDFS-8220-HDFS-7285.005.patch, 
 HDFS-8220-HDFS-7285.006.patch


 During write operations {{StripedDataStreamer#locateFollowingBlock}} fails to 
 validate the available datanodes against the {{BlockGroupSize}}. Please see 
 the exception below for more details:
 {code}
 2015-04-22 14:56:11,313 WARN  hdfs.DFSClient (DataStreamer.java:run(538)) - 
 DataStreamer Exception
 java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 2015-04-22 14:56:11,313 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:shutdown(1718)) - Shutting down the Mini HDFS Cluster
 2015-04-22 14:56:11,313 ERROR hdfs.DFSClient 
 (DFSClient.java:closeAllFilesBeingWritten(608)) - Failed to close inode 16387
 java.io.IOException: DataStreamer Exception: 
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:544)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 Caused by: java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   ... 1 more
 {code}





[jira] [Commented] (HDFS-8220) Erasure Coding: StripedDataStreamer fails to handle the blocklocations which doesn't satisfy BlockGroupSize

2015-05-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537653#comment-14537653
 ] 

Rakesh R commented on HDFS-8220:


Thanks a lot [~libo-intel] for the reply. It's a very good case to support 
clusters which don't have enough datanodes to satisfy the configured schema's 
number of (data + parity) nodes. That can happen when there aren't enough live 
datanodes, on a small cluster, etc.

Say there are only 3 live datanodes. Like you mentioned earlier, we could 
return something like 9 locations, 3 on each datanode. As per the comments by 
[~walter.k.su], presently {{PlacementPolicyEC}} lacks this logic of 
returning identical nodes. If you agree, I'm happy to explore this case 
separately and volunteer for this task :)

To be on the safer side, IMHO we could do a validation at the StripedDataStreamer to 
avoid the NPE now. Does this make sense to you?
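A validation along these lines might look roughly like the sketch below. This is only an illustration under assumed names; BlockGroupValidator, its method, and the message wording are not from the actual patch:

```java
import java.io.IOException;

// Hypothetical sketch, not the actual HDFS-8220 patch: fail fast with a
// descriptive IOException when the namenode returns fewer locations than
// the schema's block group size, instead of letting a null block reach
// the streamer queue and surface as an NPE.
public class BlockGroupValidator {

    static void checkBlockGroupSize(int locatedNodes,
                                    int numDataBlocks,
                                    int numParityBlocks) throws IOException {
        int blockGroupSize = numDataBlocks + numParityBlocks;
        if (locatedNodes < blockGroupSize) {
            throw new IOException("Failed to allocate a block group: got "
                + locatedNodes + " locations, but the schema needs "
                + blockGroupSize + " (data=" + numDataBlocks
                + ", parity=" + numParityBlocks + ")");
        }
    }

    public static void main(String[] args) throws IOException {
        checkBlockGroupSize(9, 6, 3);      // ok: exactly RS-6-3 group size
        try {
            checkBlockGroupSize(8, 6, 3);  // one datanode short
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```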

 Erasure Coding: StripedDataStreamer fails to handle the blocklocations which 
 doesn't satisfy BlockGroupSize
 ---

 Key: HDFS-8220
 URL: https://issues.apache.org/jira/browse/HDFS-8220
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8220-001.patch, HDFS-8220-002.patch, 
 HDFS-8220-003.patch, HDFS-8220-004.patch, HDFS-8220-HDFS-7285.005.patch, 
 HDFS-8220-HDFS-7285.006.patch


 During write operations {{StripedDataStreamer#locateFollowingBlock}} fails to 
 validate the available datanodes against the {{BlockGroupSize}}. Please see 
 the exception below for more details:
 {code}
 2015-04-22 14:56:11,313 WARN  hdfs.DFSClient (DataStreamer.java:run(538)) - 
 DataStreamer Exception
 java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 2015-04-22 14:56:11,313 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:shutdown(1718)) - Shutting down the Mini HDFS Cluster
 2015-04-22 14:56:11,313 ERROR hdfs.DFSClient 
 (DFSClient.java:closeAllFilesBeingWritten(608)) - Failed to close inode 16387
 java.io.IOException: DataStreamer Exception: 
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:544)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 Caused by: java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   ... 1 more
 {code}





[jira] [Updated] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8241:

  Labels: BB2015-05-TBR  (was: BB2015-05-RFC)
Hadoop Flags: Incompatible change  (was: Incompatible change,Reviewed)

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Updated] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8241:
---
Attachment: HDFS-8241-002.patch

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Created] (HDFS-8366) Erasure Coding: Make the timeout parameter of polling blocking queue configurable in DFSStripedOutputStream

2015-05-11 Thread Li Bo (JIRA)
Li Bo created HDFS-8366:
---

 Summary: Erasure Coding: Make the timeout parameter of polling 
blocking queue configurable in DFSStripedOutputStream
 Key: HDFS-8366
 URL: https://issues.apache.org/jira/browse/HDFS-8366
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo


Different streamers may have different write speeds. The maximum tolerance for 
the speed difference should be configurable.
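One possible shape for this, as a sketch only (the config key name and default below are assumptions, not the committed change), is to thread a configurable timeout into the queue poll:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch only: the config key and default are assumptions. The idea is to
// replace a hard-coded wait with a configurable bound on how long one
// streamer waits for its striped block.
public class ConfigurableQueuePoll {

    // Hypothetical client-side configuration key and default.
    static final String TIMEOUT_KEY = "dfs.client.striped.queue.poll.timeout.ms";
    static final long DEFAULT_TIMEOUT_MS = 90_000;

    // Timed poll: returns null if no block arrives within timeoutMs,
    // letting the caller fail that streamer gracefully.
    static <T> T pollStripedBlock(LinkedBlockingQueue<T> queue, long timeoutMs)
            throws InterruptedException {
        return queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.offer("block-0");
        System.out.println(pollStripedBlock(queue, 100)); // prints "block-0"
        System.out.println(pollStripedBlock(queue, 10));  // prints "null"
    }
}
```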





[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537678#comment-14537678
 ] 

Brahma Reddy Battula commented on HDFS-8241:


[~ajisakaa] Updated the patch. Kindly review! Thanks.

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Commented] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537643#comment-14537643
 ] 

Akira AJISAKA commented on HDFS-8351:
-

Thanks [~brahmareddy], [~arpitagarwal], and [~aw] for the reviews. I'll commit 
it shortly.

bq. We can just remove it from the command line help altogether.
Removing it from command line is being done in HDFS-8241. I'm thinking it's 
better to do this separately since HDFS-8241 is an incompatible change.

 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-8351.001.patch


 The hdfs namenode -finalize option was removed by HDFS-5138; however, the 
 documentation was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537661#comment-14537661
 ] 

Akira AJISAKA commented on HDFS-8241:
-

Hi [~brahmareddy], I have two additional comments.
# Would you remove the option from the enum {{HdfsServerConstants.StartupOption}}?
# Would you remove the option from the document entirely? We deprecated the 
option in the documentation of trunk and branch-2 in HDFS-8351, so we can remove it in 
trunk.

I'm +1 once these are addressed.

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Attachments: HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Resolved] (HDFS-8365) Erasure Coding: Badly treated when short of Datanode in StripedDataStreamer

2015-05-11 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su resolved HDFS-8365.
-
Resolution: Duplicate

 Erasure Coding: Badly treated when short of Datanode in StripedDataStreamer
 ---

 Key: HDFS-8365
 URL: https://issues.apache.org/jira/browse/HDFS-8365
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su

 Currently, each innerBlock of a blockGroup should be put on a different node; one 
 node can't have 2 innerBlocks. 
 If one node has 2 innerBlocks, we have a blockReport issue: the first reported 
 innerBlock will be added to triplets, but the second won't.
 If we decide not to support 2 innerBlocks on one node, we should handle 
 this situation and output a friendly warning.
 When there are only 8 DN, and ECSchema is RS-6-3
 {noformat}
  # bin/hdfs dfs -put README.txt /ecdir
 15/05/11 13:48:30 WARN hdfs.DataStreamer: DataStreamer Exception
 java.lang.NullPointerException
 at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
 at 
 org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
 at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
 15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
 DFSStripedOutputStream:#0: isFailed? f, null@null
 java.io.IOException: java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.set(DataStreamer.java:183)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:571)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
 Caused by: java.lang.NullPointerException
 at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
 at 
 org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
 at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
 ... 1 more
 15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
 DFSStripedOutputStream:#1: isFailed? f, null@null
 java.nio.channels.ClosedChannelException
 at 
 org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:208)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:146)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:713)
 {noformat}





[jira] [Commented] (HDFS-8294) Erasure Coding: Fix Findbug warnings present in erasure coding

2015-05-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537694#comment-14537694
 ] 

Rakesh R commented on HDFS-8294:


Thanks again [~drankye]. Honestly, I couldn't find the need for the {{null}} check, 
but there may be some reason for it. In the case where replicas.length is 
zero, presently the {{#initializeBlockRecovery()}} function only sets 
{{primaryNodeIndex = -1;}} and returns. In that case, I think changing the 
validation and return logic like the snippet below satisfies the FindBugs check, and 
it doesn't touch the {{null}} condition part.
{code}
if (replicas == null || replicas.length == 0) {
  NameNode.blockStateChangeLog.warn("BLOCK* " +
      "BlockInfoStripedUnderConstruction.initLeaseRecovery: " +
      "No blocks found, lease removed.");
  // set primary node index and return
  primaryNodeIndex = -1;
  return;
}
{code}
Any thoughts?

 Erasure Coding: Fix Findbug warnings present in erasure coding
 --

 Key: HDFS-8294
 URL: https://issues.apache.org/jira/browse/HDFS-8294
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
  Labels: BB2015-05-RFC
 Attachments: FindBugs Report in EC feature.html, 
 HDFS-8294-HDFS-7285.00.patch, HDFS-8294-HDFS-7285.01.patch, 
 HDFS-8294-HDFS-7285.02.patch, HDFS-8294-HDFS-7285.03.patch


 This jira is to address the FindBugs issues reported in the erasure coding feature.
 The attached sheet contains the details of the FindBugs issues reported in 
 the erasure coding feature. I've taken this report from the jenkins build: 
 https://builds.apache.org/job/PreCommit-HDFS-Build/10848/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html.





[jira] [Updated] (HDFS-8365) Erasure Coding: Badly treated when short of Datanode in StripedDataStreamer

2015-05-11 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8365:

Description: 
Currently, each innerBlock of a blockGroup should be put on a different node; one 
node can't have 2 innerBlocks. 
If one node has 2 innerBlocks, we have a blockReport issue: the first reported 
innerBlock will be added to triplets, but the second won't.
If we decide not to support 2 innerBlocks on one node, we should handle this 
situation and output a friendly warning.

When there are only 8 DN, and ECSchema is RS-6-3
{noformat}
 # bin/hdfs dfs -put README.txt /ecdir
15/05/11 13:48:30 WARN hdfs.DataStreamer: DataStreamer Exception
java.lang.NullPointerException
at 
java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
DFSStripedOutputStream:#0: isFailed? f, null@null
java.io.IOException: java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.set(DataStreamer.java:183)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:571)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
Caused by: java.lang.NullPointerException
at 
java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
... 1 more
15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
DFSStripedOutputStream:#1: isFailed? f, null@null
java.nio.channels.ClosedChannelException
at 
org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:208)
at 
org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:146)
at 
org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:713)
{noformat}

  was:
Currently, each innerBlock of blockGroup should put on different node. One node 
can has 2 innerBlock. 
If one node has 2 innerBlock, we have blockReport issue. The first reported 
innerBlock will be added to triplets, but the second won't.
If we decide to not to support 2 innerBlock in one node. We should handle this 
situation, and output warning friendly.

When there are only 8 DN, and ECSchema is RS-6-3
{noformat}
 # bin/hdfs dfs -put README.txt /ecdir
15/05/11 13:48:30 WARN hdfs.DataStreamer: DataStreamer Exception
java.lang.NullPointerException
at 
java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
DFSStripedOutputStream:#0: isFailed? f, null@null
java.io.IOException: java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.set(DataStreamer.java:183)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:571)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
Caused by: java.lang.NullPointerException
at 
java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
at 
org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
... 1 more
15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
DFSStripedOutputStream:#1: isFailed? f, null@null
java.nio.channels.ClosedChannelException
at 
org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:208)
at 
org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:146)
at 
org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:713)
{noformat}

[jira] [Commented] (HDFS-8365) Erasure Coding: Badly treated when short of Datanode in StripedDataStreamer

2015-05-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537638#comment-14537638
 ] 

Rakesh R commented on HDFS-8365:


Thanks [~walter.k.su] for reporting this issue. It looks like a duplicate of 
HDFS-8220. It would be great if you could help get HDFS-8220 resolved.

 Erasure Coding: Badly treated when short of Datanode in StripedDataStreamer
 ---

 Key: HDFS-8365
 URL: https://issues.apache.org/jira/browse/HDFS-8365
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su

 Currently, each innerBlock of a blockGroup should be put on a different node; one 
 node can't have 2 innerBlocks. 
 If one node has 2 innerBlocks, we have a blockReport issue: the first reported 
 innerBlock will be added to triplets, but the second won't.
 If we decide not to support 2 innerBlocks on one node, we should handle 
 this situation and output a friendly warning.
 When there are only 8 DN, and ECSchema is RS-6-3
 {noformat}
  # bin/hdfs dfs -put README.txt /ecdir
 15/05/11 13:48:30 WARN hdfs.DataStreamer: DataStreamer Exception
 java.lang.NullPointerException
 at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
 at 
 org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
 at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
 15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
 DFSStripedOutputStream:#0: isFailed? f, null@null
 java.io.IOException: java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.set(DataStreamer.java:183)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:571)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
 Caused by: java.lang.NullPointerException
 at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
 at 
 org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
 at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
 ... 1 more
 15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
 DFSStripedOutputStream:#1: isFailed? f, null@null
 java.nio.channels.ClosedChannelException
 at 
 org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:208)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:146)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:713)
 {noformat}





[jira] [Updated] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8351:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks all.

 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.8.0

 Attachments: HDFS-8351.001.patch


 The hdfs namenode -finalize option was removed by HDFS-5138; however, the 
 documentation was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Commented] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-05-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537660#comment-14537660
 ] 

Kai Zheng commented on HDFS-8062:
-

bq. Changing constructor interface of BlockInfoStriped cause a lot of updates.
So how about separating that out and handling it in a new issue first?
If the remaining patch for the other issues is medium-sized, I think it's fine to 
handle them here.

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062.1.patch, HDFS-8062.2.patch, HDFS-8062.3.patch, 
 HDFS-8062.4.patch, HDFS-8062.5.patch, HDFS-8062.6.patch


 Related issues about EC schema on the NameNode side:
 HDFS-7859 is to change fsimage and editlog in NameNode to persist EC schemas;
 HDFS-7866 is to manage EC schemas in NameNode, loading and syncing between 
 persisted ones in the image and predefined ones in XML.
 This is to revisit all the places in NameNode that use hard-coded values in 
 favor of {{ECSchema}}.





[jira] [Commented] (HDFS-8294) Erasure Coding: Fix Findbug warnings present in erasure coding

2015-05-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537667#comment-14537667
 ] 

Kai Zheng commented on HDFS-8294:
-

Thanks for your update.

I saw you split the condition {{(replicas == null || replicas.length == 0)}} 
into two, but they report the same warning message. Do we have to split it to 
make FindBugs happy? I'm not sure.

 Erasure Coding: Fix Findbug warnings present in erasure coding
 --

 Key: HDFS-8294
 URL: https://issues.apache.org/jira/browse/HDFS-8294
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
  Labels: BB2015-05-RFC
 Attachments: FindBugs Report in EC feature.html, 
 HDFS-8294-HDFS-7285.00.patch, HDFS-8294-HDFS-7285.01.patch, 
 HDFS-8294-HDFS-7285.02.patch, HDFS-8294-HDFS-7285.03.patch


 This jira is to address the FindBugs issues reported in the erasure coding feature.
 The attached sheet contains the details of the FindBugs issues reported in 
 the erasure coding feature. I've taken this report from the jenkins build: 
 https://builds.apache.org/job/PreCommit-HDFS-Build/10848/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html.





[jira] [Commented] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537654#comment-14537654
 ] 

Hudson commented on HDFS-8351:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7789 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7789/])
HDFS-8351. Remove namenode -finalize option from document. (aajisaka) 
(aajisaka: rev 3fa2efc09f051b6fc6244f0edca46d3d06f4ae3b)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.8.0

 Attachments: HDFS-8351.001.patch


 The hdfs namenode -finalize option was removed by HDFS-5138; however, the 
 document was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Comment Edited] (HDFS-7582) Limit the number of default ACL entries to Half of maximum entries (16)

2015-05-11 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537671#comment-14537671
 ] 

Vinayakumar B edited comment on HDFS-7582 at 5/11/15 7:05 AM:
--

bq. It would be great if we could find a definitive source that defines the 
limit, but I haven't found it yet.
http://users.suse.com/~agruen/acl/linux-acls/online/#tab:acl_entries 
I have tested on XFS, and found the limit of 25 mentioned here for XFS. But the 
doc didn't say that it is applied separately on access and default. Anyway, this 
limit is based on the underlying FS implementation: some support limiting the 
number of entries, some support limiting the overall size.

In my earlier options, option #1 (apply the EXISTING limit (32) separately 
on ACCESS and DEFAULT) does not break backward compatibility for existing 
deployments. Of course, it increases the NN memory usage if used extensively.

What do you say about option #1?

bq. Do you still want to consider changing this for 3.x? We'd have the 
flexibility to make a backwards-incompatible change there.
I don't think so.


was (Author: vinayrpet):
bq. It would be great if we could find a definitive source that defines the 
limit, but I haven't found it yet.
http://users.suse.com/~agruen/acl/linux-acls/online/#tab:acl_entries 
I have tested on XFS, and found the limit of 25 mentioned here for XFS. But it 
didn't say that it is applied separately on access and default. Anyway, this 
limit is based on the underlying FS implementation: some support limiting the 
number of entries, some support limiting the overall size.

In my earlier options, option #1 (apply the EXISTING limit (32) separately 
on ACCESS and DEFAULT) does not break backward compatibility for existing 
deployments. Of course, it increases the NN memory usage if used extensively.

What do you say about option #1?

bq. Do you still want to consider changing this for 3.x? We'd have the 
flexibility to make a backwards-incompatible change there.
I don't think so.

 Limit the number of default ACL entries to Half of maximum entries (16)
 ---

 Key: HDFS-7582
 URL: https://issues.apache.org/jira/browse/HDFS-7582
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7582-001.patch


 Current ACL limits apply only to the total number of entries.
 But there can be a situation where the number of default entries for a 
 directory is more than half of the maximum entries, i.e. 16.
 In such a case, under this parent directory only files can be created, which 
 will have ACLs inherited from the parent's default entries.
 But when directories are created, the total number of entries will be more than 
 the maximum allowed, because a sub-directory copies both the inherited ACLs and 
 the default entries.
 Since currently there is no check while copying ACLs from the default ACLs, 
 directory creation succeeds, but any later modification (even of the permission 
 on a single entry) of the same ACL will fail.
 So it would be better to restrict the default entries to 16.
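The arithmetic behind the description can be sketched as follows. This is illustrative Java, not HDFS code; the limit of 32 and the "sub-directory copies defaults as both access and default entries" behavior are taken from the description above:

```java
// Illustrative arithmetic (not HDFS code): why a child directory can
// exceed the ACL entry limit when default entries exceed half the max.
public class AclLimitSketch {
    static final int MAX_ENTRIES = 32;   // total per-ACL entry limit

    // A new sub-directory effectively copies the parent's default entries
    // twice: once as its own access entries, once as its own defaults.
    static int childDirEntryCount(int parentDefaultEntries) {
        return 2 * parentDefaultEntries;
    }

    public static void main(String[] args) {
        // 16 defaults -> the child holds exactly 32 entries: at the limit,
        // so any later modification that adds an entry will fail.
        System.out.println(childDirEntryCount(16));                // 32
        // 20 defaults -> the child already exceeds the limit at creation.
        System.out.println(childDirEntryCount(20) > MAX_ENTRIES);  // true
    }
}
```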





[jira] [Commented] (HDFS-7582) Limit the number of default ACL entries to Half of maximum entries (16)

2015-05-11 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537671#comment-14537671
 ] 

Vinayakumar B commented on HDFS-7582:
-

bq. It would be great if we could find a definitive source that defines the 
limit, but I haven't found it yet.
http://users.suse.com/~agruen/acl/linux-acls/online/#tab:acl_entries 
I have tested on XFS, and found the limit of 25 mentioned here for XFS. But it 
didn't say that it is applied separately on access and default. Anyway, this 
limit is based on the underlying FS implementation: some support limiting the 
number of entries, some support limiting the overall size.

In my earlier options, option #1 (apply the EXISTING limit (32) separately 
on ACCESS and DEFAULT) does not break backward compatibility for existing 
deployments. Of course, it increases the NN memory usage if used extensively.

What do you say about option #1?

bq. Do you still want to consider changing this for 3.x? We'd have the 
flexibility to make a backwards-incompatible change there.
I don't think so.

 Limit the number of default ACL entries to Half of maximum entries (16)
 ---

 Key: HDFS-7582
 URL: https://issues.apache.org/jira/browse/HDFS-7582
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7582-001.patch


 Current ACL limits apply only to the total number of entries.
 But there can be a situation where the number of default entries for a 
 directory is more than half of the maximum entries, i.e. 16.
 In such a case, under this parent directory only files can be created, which 
 will have ACLs inherited from the parent's default entries.
 But when directories are created, the total number of entries will be more than 
 the maximum allowed, because a sub-directory copies both the inherited ACLs and 
 the default entries.
 Since currently there is no check while copying ACLs from the default ACLs, 
 directory creation succeeds, but any later modification (even of the permission 
 on a single entry) of the same ACL will fail.
 So it would be better to restrict the default entries to 16.





[jira] [Updated] (HDFS-5270) Use thread pools in the datanode daemons

2015-05-11 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HDFS-5270:
---
Attachment: HDFS-5270.4.patch

Made PacketResponder use a thread pool.
Also fixed some findbugs, checkstyle and whitespace errors.

 Use thread pools in the datanode daemons
 

 Key: HDFS-5270
 URL: https://issues.apache.org/jira/browse/HDFS-5270
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Haohui Mai
Assignee: zhangduo
  Labels: BB2015-05-TBR
 Attachments: HDFS-5270.000.patch, HDFS-5270.2.patch, 
 HDFS-5270.3.patch, HDFS-5270.4.patch, TestConcurrentAccess.java


 The current implementation of the datanode creates a thread when a new 
 request comes in. This incurs high overheads for the creation / destruction 
 of threads, making the datanode unstable under high concurrent loads.
 This JIRA proposes to use a thread pool to reduce the overheads.
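The proposal can be sketched with a standard java.util.concurrent pool. This is a toy illustration, not the attached patch; a shared counter stands in for request handling:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the idea: reuse pooled worker threads for incoming
// requests instead of spawning (and destroying) one thread per request.
public class PooledServerSketch {
    static int runRequests(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.execute(handled::incrementAndGet); // stands in for request handling
        }
        pool.shutdown();
        try {
            // Wait for all queued "requests" to finish.
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return handled.get();
    }

    public static void main(String[] args) {
        System.out.println(runRequests(100)); // 100
    }
}
```

Only four threads ever exist regardless of the request count, which is the overhead reduction the JIRA is after.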





[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537678#comment-14537678
 ] 

Brahma Reddy Battula commented on HDFS-8241:


[~ajisakaa] Updated the patch. Kindly review! Thanks.

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Updated] (HDFS-8012) Updatable HAR Filesystem

2015-05-11 Thread Madhan Sundararajan Devaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Sundararajan Devaki updated HDFS-8012:
-
Component/s: HDFS

 Updatable HAR Filesystem
 

 Key: HDFS-8012
 URL: https://issues.apache.org/jira/browse/HDFS-8012
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, HDFS, hdfs-client
Reporter: Madhan Sundararajan Devaki
Priority: Critical

 Improvement: Updatable HAR Filesystem.
 The following operations may be supported additionally.
 + Add new files [ -a filename-uri1 filename-uri2 ... / -a dirname-uri1 
 dirname-uri2 ...]
 + Remove existing files [ -d filename-uri1 filename-uri2 ... / -d 
 dirname-uri1 dirname-uri2 ...]
 + Update/Replace existing files (Optional) [ -u old-filename-uri 
 new-filename-uri]
 This is required in cases where data is stored in AVRO format in HDFS and the 
 corresponding .avsc files are used to create Hive external tables.
 This will lead to the small files (.avsc files in this case) problem when 
 there are a large number of tables that need to be loaded into Hive as 
 external tables as is the typical case during a Datawarehouse migration.





[jira] [Updated] (HDFS-8012) Updatable HAR Filesystem

2015-05-11 Thread Madhan Sundararajan Devaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Sundararajan Devaki updated HDFS-8012:
-
Description: 
Improvement: Updatable HAR Filesystem.
The following operations may be supported additionally.
+ Add new files [ -a filename-uri1 filename-uri2 ... / -a dirname-uri1 
dirname-uri2 ...]
+ Remove existing files [ -d filename-uri1 filename-uri2 ... / -d dirname-uri1 
dirname-uri2 ...]
+ Update/Replace existing files (Optional) [ -u old-filename-uri 
new-filename-uri]
This is required in cases where data is stored in AVRO format in HDFS and the 
corresponding .avsc files are used to create Hive external tables.
This will lead to the small files (.avsc files in this case) problem when there 
are a large number of tables that need to be loaded into Hive as external 
tables as is the typical case during a Datawarehouse migration.

  was:
Is there a plan to support updatable HAR Filesystem? If so, by when is this 
expected please?
The following operations may be supported.
+ Add new files [ -a filename-uri1 filename-uri2 ... / -a dirname-uri1 
dirname-uri2 ...]
+ Remove existing files [ -d filename-uri1 filename-uri2 ... / -d dirname-uri1 
dirname-uri2 ...]
+ Update/Replace existing files (Optional) [ -u old-filename-uri 
new-filename-uri]
This is required in cases where data is stored in AVRO format in HDFS and the 
corresponding .avsc files are used to create Hive external tables.
This will lead to the small files (.avsc files in this case) problem when there 
are a large number of tables that need to be loaded into Hive as external 
tables.


 Updatable HAR Filesystem
 

 Key: HDFS-8012
 URL: https://issues.apache.org/jira/browse/HDFS-8012
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, hdfs-client
Reporter: Madhan Sundararajan Devaki
Priority: Critical

 Improvement: Updatable HAR Filesystem.
 The following operations may be supported additionally.
 + Add new files [ -a filename-uri1 filename-uri2 ... / -a dirname-uri1 
 dirname-uri2 ...]
 + Remove existing files [ -d filename-uri1 filename-uri2 ... / -d 
 dirname-uri1 dirname-uri2 ...]
 + Update/Replace existing files (Optional) [ -u old-filename-uri 
 new-filename-uri]
 This is required in cases where data is stored in AVRO format in HDFS and the 
 corresponding .avsc files are used to create Hive external tables.
 This will lead to the small files (.avsc files in this case) problem when 
 there are a large number of tables that need to be loaded into Hive as 
 external tables as is the typical case during a Datawarehouse migration.





[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537748#comment-14537748
 ] 

Akira AJISAKA commented on HDFS-8241:
-

Thanks [~brahmareddy] for updating the patch. Minor nit:
{code:title=HDFSCommands.md}
Runs the namenode. More info about the upgrade, rollback and finalize is at 
[Upgrade Rollback](./HdfsUserGuide.html#Upgrade_and_Rollback).
{code}
Would you remove "finalize" from the sentence?

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537760#comment-14537760
 ] 

Brahma Reddy Battula commented on HDFS-8241:


[~ajisakaa] Updated the patch to fix the above minor nit. Kindly review.

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Commented] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537775#comment-14537775
 ] 

Kai Zheng commented on HDFS-8367:
-

I updated the issue using {{ECSchema}} instead of {{ECInfo}} per my 
understanding and the patch you did in HDFS-8062. Please correct me if 
necessary. Thanks.

 BlockInfoStriped can also receive schema at its creation
 

 Key: HDFS-8367
 URL: https://issues.apache.org/jira/browse/HDFS-8367
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
  Labels: EC

 {{BlockInfoStriped}} should receive the total information for erasure coding 
 as {{ECSchema}}. This JIRA changes the constructor interface and its 
 dependencies.





[jira] [Updated] (HDFS-8019) Erasure Coding: erasure coding chunk buffer allocation and management

2015-05-11 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8019:

Attachment: HDFS-8019-HDFS-7285-02.patch

Attaching a second patch with configurations.
Please review.

 Erasure Coding: erasure coding chunk buffer allocation and management
 -

 Key: HDFS-8019
 URL: https://issues.apache.org/jira/browse/HDFS-8019
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Attachments: HDFS-8019-HDFS-7285-01.patch, 
 HDFS-8019-HDFS-7285-02.patch


 As a task of HDFS-7344, this is to come up with a chunk buffer pool allocating 
 and managing coding chunk buffers, either on-heap or off-heap. Note this 
 assumes some DataNodes are powerful in computing and performing EC coding 
 work, so it is better to have this dedicated buffer pool and management.
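One way such a pool could look is sketched below. This is a minimal illustration under assumed semantics, not the attached patch: a bounded queue of reusable ByteBuffers that can be backed by heap or direct (off-heap) memory:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch (not the HDFS-8019 patch): a fixed pool of coding-chunk
// buffers, allocated once up front, borrowed and released by coders.
public class ChunkBufferPoolSketch {
    private final BlockingQueue<ByteBuffer> pool;

    ChunkBufferPoolSketch(int buffers, int chunkSize, boolean direct) {
        pool = new ArrayBlockingQueue<>(buffers);
        for (int i = 0; i < buffers; i++) {
            pool.add(direct ? ByteBuffer.allocateDirect(chunkSize)   // off-heap
                            : ByteBuffer.allocate(chunkSize));       // on-heap
        }
    }

    ByteBuffer borrow() {
        try {
            return pool.take();  // blocks when all buffers are in use
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted waiting for a buffer", e);
        }
    }

    void release(ByteBuffer buf) {
        buf.clear();             // reset position/limit before reuse
        pool.offer(buf);
    }

    public static void main(String[] args) {
        ChunkBufferPoolSketch p = new ChunkBufferPoolSketch(2, 64 * 1024, false);
        ByteBuffer b = p.borrow();
        System.out.println(b.capacity()); // 65536
        p.release(b);
    }
}
```

Bounding the pool caps memory use and naturally throttles concurrent coding work, which matters when only some DataNodes do EC coding.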





[jira] [Commented] (HDFS-8294) Erasure Coding: Fix Findbug warnings present in erasure coding

2015-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537718#comment-14537718
 ] 

Hadoop QA commented on HDFS-8294:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 45s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 48s | The applied patch generated 
12 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 37s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 11s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 179m 56s | Tests failed in hadoop-hdfs. |
| | | 221m 56s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.web.TestWebHDFSForHA |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731220/HDFS-8294-HDFS-7285.03.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / d96c64c |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10906/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10906/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10906/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10906/console |


This message was automatically generated.

 Erasure Coding: Fix Findbug warnings present in erasure coding
 --

 Key: HDFS-8294
 URL: https://issues.apache.org/jira/browse/HDFS-8294
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
  Labels: BB2015-05-RFC
 Attachments: FindBugs Report in EC feature.html, 
 HDFS-8294-HDFS-7285.00.patch, HDFS-8294-HDFS-7285.01.patch, 
 HDFS-8294-HDFS-7285.02.patch, HDFS-8294-HDFS-7285.03.patch


 This jira is to address the findbug issues reported in the erasure coding 
 feature. The attached sheet contains the details of the findbug issues 
 reported in the erasure coding feature. I've taken this report from the 
 jenkins build: 
 https://builds.apache.org/job/PreCommit-HDFS-Build/10848/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html





[jira] [Commented] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-05-11 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537719#comment-14537719
 ] 

Kai Sasaki commented on HDFS-8062:
--

[~drankye] I agree with you. I'm going to create a new JIRA for this issue. 
Thank you.

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062.1.patch, HDFS-8062.2.patch, HDFS-8062.3.patch, 
 HDFS-8062.4.patch, HDFS-8062.5.patch, HDFS-8062.6.patch


 Related issues about EC schema in NameNode side:
 HDFS-7859 is to change fsimage and editlog in NameNode to persist EC schemas;
 HDFS-7866 is to manage EC schemas in NameNode, loading, syncing between 
 persisted ones in image and predefined ones in XML.
 This is to revisit all the places in NameNode that use hard-coded values in 
 favor of {{ECSchema}}.





[jira] [Updated] (HDFS-8368) Erasure Coding: DFS opening a non-existent file need to be handled properly

2015-05-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8368:
---
Attachment: HDFS-8368-HDFS-7285.00.patch

 Erasure Coding: DFS opening a non-existent file need to be handled properly
 ---

 Key: HDFS-8368
 URL: https://issues.apache.org/jira/browse/HDFS-8368
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8368-HDFS-7285.00.patch


 This jira is to address the bad exception thrown when opening a non-existent 
 file. It throws an NPE as shown below:
 {code}
 java.lang.NullPointerException: null
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1184)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:307)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:303)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:303)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:359)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testAllWithNoXmlDefaults(TestDistributedFileSystem.java:666)
 {code}





[jira] [Updated] (HDFS-8368) Erasure Coding: DFS opening a non-existent file need to be handled properly

2015-05-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8368:
---
Status: Patch Available  (was: Open)

 Erasure Coding: DFS opening a non-existent file need to be handled properly
 ---

 Key: HDFS-8368
 URL: https://issues.apache.org/jira/browse/HDFS-8368
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8368-HDFS-7285.00.patch


 This jira is to address the bad exception thrown when opening a non-existent 
 file. It throws an NPE as shown below:
 {code}
 java.lang.NullPointerException: null
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1184)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:307)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:303)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:303)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:359)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testAllWithNoXmlDefaults(TestDistributedFileSystem.java:666)
 {code}





[jira] [Commented] (HDFS-8362) Java Compilation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-11 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537794#comment-14537794
 ] 

Vinayakumar B commented on HDFS-8362:
-

Hi [~arshad.mohammad],
can you separate the patch into HDFS and MapReduce parts?
For the HDFS changes you can use this Jira; for the MapReduce changes, can you 
re-open MAPREDUCE-6360 and submit a patch there?

 Java Compilation Error in TestHdfsConfigFields.java and 
 TestMapreduceConfigFields.java
 --

 Key: HDFS-8362
 URL: https://issues.apache.org/jira/browse/HDFS-8362
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
 Fix For: 2.8.0

 Attachments: HDFS-8362-1.patch


 In TestHdfsConfigFields.java the failure is because of a wrong package name.
 In TestMapreduceConfigFields.java the failure is because of:
 i) a wrong package name
 ii) missing imports





[jira] [Commented] (HDFS-7966) New Data Transfer Protocol via HTTP/2

2015-05-11 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537805#comment-14537805
 ] 

zhangduo commented on HDFS-7966:


The latest patch of HDFS-5270 proves that the logic of FsDataset and the write 
pipeline can be compatible with a thread pool implementation.

But HDFS-5270 does not address the basic issue: one thread per connection (maybe 
two?). This makes client connection pooling, which is very important for HBase, 
impossible in a large cluster.

So I think it is time to pick up this issue.

Thanks.

 New Data Transfer Protocol via HTTP/2
 -

 Key: HDFS-7966
 URL: https://issues.apache.org/jira/browse/HDFS-7966
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Qianqian Shi
  Labels: gsoc, gsoc2015, mentor

 The current Data Transfer Protocol (DTP) implements a rich set of features 
 that span across multiple layers, including:
 * Connection pooling and authentication (session layer)
 * Encryption (presentation layer)
 * Data writing pipeline (application layer)
 All these features are HDFS-specific and defined by the implementation. As a 
 result it requires a non-trivial amount of work to implement HDFS clients and 
 servers.
 This jira explores to delegate the responsibilities of the session and 
 presentation layers to the HTTP/2 protocol. Particularly, HTTP/2 handles 
 connection multiplexing, QoS, authentication and encryption, reducing the 
 scope of DTP to the application layer only. By leveraging the existing HTTP/2 
 library, it should simplify the implementation of both HDFS clients and 
 servers.





[jira] [Commented] (HDFS-8220) Erasure Coding: StripedDataStreamer fails to handle the blocklocations which doesn't satisfy BlockGroupSize

2015-05-11 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537729#comment-14537729
 ] 

Walter Su commented on HDFS-8220:
-

You moved the {{assert nodes ==}} check. The location is good, but I think an 
assert is not enough; assertions are disabled by default.
bq. ...For the safer side, IMHO we could do a validation at the 
StripedDataStreamer to avoid NPE now.
That's what I mean. Maybe throw an IOException, and add some friendly messages. 
NPE is awful.
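A minimal sketch of that suggestion follows. The names and message are hypothetical, not the actual HDFS-8220 patch: replace the assert with an explicit check that throws a descriptive IOException, since assertions only run when the JVM is started with -ea:

```java
import java.io.IOException;

// Sketch: assertions are silently skipped unless the JVM runs with -ea,
// so validate the allocated locations explicitly and fail with a
// descriptive IOException instead of a later NPE.
public class LocationCheckSketch {
    static void checkLocations(Object[] nodes, int blockGroupSize) throws IOException {
        // assert nodes.length == blockGroupSize;  // skipped without -ea
        if (nodes == null || nodes.length < blockGroupSize) {
            throw new IOException("Allocated only "
                + (nodes == null ? 0 : nodes.length)
                + " nodes, but the block group needs " + blockGroupSize);
        }
    }

    // Helper used for demonstration: returns "ok" or the failure message.
    static String tryCheck(Object[] nodes, int blockGroupSize) {
        try {
            checkLocations(nodes, blockGroupSize);
            return "ok";
        } catch (IOException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        // e.g. only 8 DNs available while RS-6-3 needs a group of 9
        System.out.println(tryCheck(new Object[8], 9));
        System.out.println(tryCheck(new Object[9], 9)); // ok
    }
}
```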

 Erasure Coding: StripedDataStreamer fails to handle the blocklocations which 
 doesn't satisfy BlockGroupSize
 ---

 Key: HDFS-8220
 URL: https://issues.apache.org/jira/browse/HDFS-8220
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8220-001.patch, HDFS-8220-002.patch, 
 HDFS-8220-003.patch, HDFS-8220-004.patch, HDFS-8220-HDFS-7285.005.patch, 
 HDFS-8220-HDFS-7285.006.patch


 During write operations {{StripedDataStreamer#locateFollowingBlock}} fails to 
 validate the available datanodes against the {{BlockGroupSize}}. Please see 
 the exception to understand more:
 {code}
 2015-04-22 14:56:11,313 WARN  hdfs.DFSClient (DataStreamer.java:run(538)) - 
 DataStreamer Exception
 java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 2015-04-22 14:56:11,313 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:shutdown(1718)) - Shutting down the Mini HDFS Cluster
 2015-04-22 14:56:11,313 ERROR hdfs.DFSClient 
 (DFSClient.java:closeAllFilesBeingWritten(608)) - Failed to close inode 16387
 java.io.IOException: DataStreamer Exception: 
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:544)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 Caused by: java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   ... 1 more
 {code}





[jira] [Commented] (HDFS-8368) Erasure Coding: DFS opening a non-existent file need to be handled properly

2015-05-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537743#comment-14537743
 ] 

Rakesh R commented on HDFS-8368:


Attached a patch that checks the {{fileInfo}} existence. Since there are a few 
existing unit test cases ({{TestDistributedFileSystem#testDFSClient}}) that 
verify the behavior, I haven't included any more. Please review. Thanks!
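The shape of such a guard can be sketched as below. The types are hypothetical stand-ins, not the actual DFSClient code: fail fast with a clear FileNotFoundException when the file lookup returns null, instead of letting a later dereference throw an NPE:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Hedged sketch of the null guard: report a missing file clearly rather
// than propagating a NullPointerException out of open().
public class OpenGuardSketch {
    // Stand-in for the file-status lookup; null simulates a missing file.
    static Object getFileInfo(String src) {
        return null;
    }

    static Object open(String src) throws IOException {
        Object fileInfo = getFileInfo(src);
        if (fileInfo == null) {
            throw new FileNotFoundException("File does not exist: " + src);
        }
        return fileInfo;
    }

    // Helper used for demonstration: returns "opened" or the error message.
    static String tryOpen(String src) {
        try {
            open(src);
            return "opened";
        } catch (IOException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryOpen("/no/such/file")); // File does not exist: /no/such/file
    }
}
```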

 Erasure Coding: DFS opening a non-existent file need to be handled properly
 ---

 Key: HDFS-8368
 URL: https://issues.apache.org/jira/browse/HDFS-8368
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8368-HDFS-7285.00.patch


 This jira is to address the bad exception thrown when opening a non-existent 
 file. It throws an NPE as shown below:
 {code}
 java.lang.NullPointerException: null
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1184)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:307)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:303)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:303)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:359)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testAllWithNoXmlDefaults(TestDistributedFileSystem.java:666)
 {code}





[jira] [Commented] (HDFS-8220) Erasure Coding: StripedDataStreamer fails to handle the blocklocations which doesn't satisfy BlockGroupSize

2015-05-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537753#comment-14537753
 ] 

Rakesh R commented on HDFS-8220:


Thank you [~walter.k.su] for the comments.

bq. I think assert is not enough. Assertion is disable default. 
I can see that {{assert}} is used in many places in the Hadoop source code 
(for example, LocatedStripedBlock.java, LocatedBlocks.java, etc.). Do we need 
to change this assert to an if statement?
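Since the thread is weighing {{assert}} against an explicit check, here is a minimal, self-contained sketch of the difference (hypothetical names, not the actual HDFS code): an {{assert}} is silently skipped unless the JVM runs with {{-ea}}, while an explicit check always executes:

```java
import java.io.IOException;

class CheckStyles {

    // With assertions disabled (the JVM default), this statement is never
    // executed, so a bad length slips through silently.
    static int withAssert(int located, int required) {
        assert located >= required : "short of datanodes";
        return located;
    }

    // An explicit check runs unconditionally and fails with a clear message.
    static int withCheck(int located, int required) throws IOException {
        if (located < required) {
            throw new IOException("Located only " + located
                + " datanodes, but the block group needs " + required);
        }
        return located;
    }
}
```

This is why an {{if}} plus a descriptive exception is generally preferred for conditions that can legitimately occur at runtime, such as a cluster being short of datanodes.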

 Erasure Coding: StripedDataStreamer fails to handle the blocklocations which 
 doesn't satisfy BlockGroupSize
 ---

 Key: HDFS-8220
 URL: https://issues.apache.org/jira/browse/HDFS-8220
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8220-001.patch, HDFS-8220-002.patch, 
 HDFS-8220-003.patch, HDFS-8220-004.patch, HDFS-8220-HDFS-7285.005.patch, 
 HDFS-8220-HDFS-7285.006.patch


 During write operations {{StripedDataStreamer#locateFollowingBlock}} fails to 
 validate the available datanodes against the {{BlockGroupSize}}. Please see 
 the exception to understand more:
 {code}
 2015-04-22 14:56:11,313 WARN  hdfs.DFSClient (DataStreamer.java:run(538)) - 
 DataStreamer Exception
 java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 2015-04-22 14:56:11,313 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:shutdown(1718)) - Shutting down the Mini HDFS Cluster
 2015-04-22 14:56:11,313 ERROR hdfs.DFSClient 
 (DFSClient.java:closeAllFilesBeingWritten(608)) - Failed to close inode 16387
 java.io.IOException: DataStreamer Exception: 
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:544)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 Caused by: java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   ... 1 more
 {code}





[jira] [Resolved] (HDFS-8285) Contents of ArchivalStorage in hadoop2.7 is messed up

2015-05-11 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina resolved HDFS-8285.
--
Resolution: Implemented

This issue no longer exists in the current document.


 Contents of ArchivalStorage in hadoop2.7 is messed up
 -

 Key: HDFS-8285
 URL: https://issues.apache.org/jira/browse/HDFS-8285
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina

 On accessing the below link for hadoop-2.7:
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
 1. The contents still point to hadoop-2.6.
 2. A few links in the left side panel are missing.
 Need to check for any other related issues and fix them as part of this.





[jira] [Updated] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8367:

Summary: BlockInfoStriped can also receive schema at its creation  (was: 
BlockInfoStriped can also receive ECInfo at its creation)

 BlockInfoStriped can also receive schema at its creation
 

 Key: HDFS-8367
 URL: https://issues.apache.org/jira/browse/HDFS-8367
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
  Labels: EC

 {{BlockInfoStriped}} should receive the total information for erasure coding 
 as {{ECInfo}}. This JIRA changes the constructor interface and its 
 dependencies.





[jira] [Updated] (HDFS-8369) TestHdfsConfigFields is placed in wrong dir, introducing compile error

2015-05-11 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8369:

Description: 
HDFS-7559 introduced a test file, {{TestHdfsConfigFields}},
which was committed in the package {{org.apache.hadoop.tools}},
but the package declaration inside the file is {{org.apache.hadoop.hdfs.tools}}.
Surprisingly, this does not give any compile errors in the maven build, but 
eclipse catches it.
So move {{TestHdfsConfigFields}} to the correct package, 
{{org.apache.hadoop.hdfs.tools}}.

  was:
HDFS-7559 has introduced a Test file {{TestHdfsConfigFields }}
which was committed in package {{org.apache.hadoop.tools}}
But the package declaration inside file is {{org.apache.hadoop.hdfs.tools}}
By surprise, this is not giving any compile errors in maven build. But eclipse 
catches it.
So move {{TestHdfsConfigFields}} to correct package 
{{org.apache.hadoop.hdfs.tools}}


 TestHdfsConfigFields is placed in wrong dir, introducing compile error
 --

 Key: HDFS-8369
 URL: https://issues.apache.org/jira/browse/HDFS-8369
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8369-01.patch


 HDFS-7559 introduced a test file, {{TestHdfsConfigFields}},
 which was committed in the package {{org.apache.hadoop.tools}},
 but the package declaration inside the file is {{org.apache.hadoop.hdfs.tools}}.
 Surprisingly, this does not give any compile errors in the maven build, but 
 eclipse catches it.
 So move {{TestHdfsConfigFields}} to the correct package, 
 {{org.apache.hadoop.hdfs.tools}}.





[jira] [Updated] (HDFS-8369) TestHdfsConfigFields is placed in wrong dir, introducing compile error

2015-05-11 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8369:

Attachment: HDFS-8369-01.patch

Attaching a patch to move the file to the appropriate package.

 TestHdfsConfigFields is placed in wrong dir, introducing compile error
 --

 Key: HDFS-8369
 URL: https://issues.apache.org/jira/browse/HDFS-8369
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8369-01.patch


 HDFS-7559 introduced a test file, {{TestHdfsConfigFields}},
 which was committed in the package {{org.apache.hadoop.tools}},
 but the package declaration inside the file is {{org.apache.hadoop.hdfs.tools}}.
 Surprisingly, this does not give any compile errors in the maven build, but 
 eclipse catches it.
 So move {{TestHdfsConfigFields}} to the correct package, 
 {{org.apache.hadoop.hdfs.tools}}.





[jira] [Commented] (HDFS-8365) Erasure Coding: Badly treated when short of Datanode in StripedDataStreamer

2015-05-11 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537686#comment-14537686
 ] 

Walter Su commented on HDFS-8365:
-

Sorry, I didn't see it. Of course.

 Erasure Coding: Badly treated when short of Datanode in StripedDataStreamer
 ---

 Key: HDFS-8365
 URL: https://issues.apache.org/jira/browse/HDFS-8365
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su

 Currently, each innerBlock of a blockGroup should be put on a different 
 node; one node can't have 2 innerBlocks. 
 If one node has 2 innerBlocks, we have a blockReport issue: the first 
 reported innerBlock will be added to triplets, but the second won't.
 If we decide not to support 2 innerBlocks on one node, we should handle this 
 situation and output a friendly warning.
 When there are only 8 DN, and ECSchema is RS-6-3
 {noformat}
  # bin/hdfs dfs -put README.txt /ecdir
 15/05/11 13:48:30 WARN hdfs.DataStreamer: DataStreamer Exception
 java.lang.NullPointerException
 at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
 at 
 org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
 at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
 15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
 DFSStripedOutputStream:#0: isFailed? f, null@null
 java.io.IOException: java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.set(DataStreamer.java:183)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:571)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:48)
 Caused by: java.lang.NullPointerException
 at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:410)
 at 
 org.apache.hadoop.hdfs.DFSStripedOutputStream$Coordinator.putStripedBlock(DFSStripedOutputStream.java:115)
 at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:120)
 at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1360)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:457)
 ... 1 more
 15/05/11 13:48:30 WARN hdfs.DFSOutputStream: Failed: closeImpl, 
 DFSStripedOutputStream:#1: isFailed? f, null@null
 java.nio.channels.ClosedChannelException
 at 
 org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:208)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:146)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:713)
 {noformat}
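 As a sketch of the "friendly warning" proposed above (illustrative names only, not the real streamer code): with RS-6-3 a block group needs 6 data + 3 parity = 9 distinct datanodes, so an 8-node cluster should get a clear message instead of the NPE shown in the log:

```java
class GroupPlacementCheck {

    // Returns null when placement is possible; otherwise a human-readable
    // warning describing the shortfall, instead of failing later with an NPE.
    static String checkDatanodeCount(int liveDatanodes,
                                     int dataBlkNum, int parityBlkNum) {
        int needed = dataBlkNum + parityBlkNum;
        if (liveDatanodes >= needed) {
            return null;
        }
        return "Block group needs " + needed + " datanodes ("
            + dataBlkNum + " data + " + parityBlkNum + " parity) but only "
            + liveDatanodes + " are available";
    }
}
```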





[jira] [Created] (HDFS-8367) BlockInfoStriped can also receive ECInfo at its creation

2015-05-11 Thread Kai Sasaki (JIRA)
Kai Sasaki created HDFS-8367:


 Summary: BlockInfoStriped can also receive ECInfo at its creation
 Key: HDFS-8367
 URL: https://issues.apache.org/jira/browse/HDFS-8367
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki


{{BlockInfoStriped}} should receive the total information for erasure coding as 
{{ECInfo}}. This JIRA changes the constructor interface and its dependencies.





[jira] [Commented] (HDFS-8341) HDFS mover stuck in loop after failing to move block, doesn't move rest of blocks, can't get data back off decommissioning external storage tier as a result

2015-05-11 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537733#comment-14537733
 ] 

Hari Sekhon commented on HDFS-8341:
---

I had to move several thousand blocks by hand via scripting, which was a 
significant proportion of the total blocks, given that I had only put a 
limited amount of expendable data on the archive tier for testing. Given the 
dimensions of the data, I'm certain it wasn't only successive blocks for one 
given file.

The command was looping on the same block, which also implies it never 
proceeded to try to move the blocks of the other files, hence the large number 
of blocks left behind and not moved back to the regular disk tier.

 HDFS mover stuck in loop after failing to move block, doesn't move rest of 
 blocks, can't get data back off decommissioning external storage tier as a 
 result
 

 Key: HDFS-8341
 URL: https://issues.apache.org/jira/browse/HDFS-8341
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Hari Sekhon
Assignee: surendra singh lilhore
Priority: Blocker

 HDFS mover gets stuck looping on a block that fails to move and doesn't 
 migrate the rest of the blocks.
 This is preventing recovery of data from a decommissioning external storage 
 tier used for archive (we've had problems with that proprietary hyperscale 
 storage product, which is why a couple of blocks here and there have checksum 
 problems or premature EOF as shown below), but this should not prevent moving 
 all the other blocks to recover our data:
 {code}hdfs mover -p /apps/hive/warehouse/custom_scrubbed
 15/05/07 14:52:50 INFO mover.Mover: namenodes = 
 {hdfs://nameservice1=[/apps/hive/warehouse/custom_scrubbed]}
 15/05/07 14:52:51 INFO balancer.KeyManager: Block token params received from 
 NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
 15/05/07 14:52:51 INFO block.BlockTokenSecretManager: Setting block keys
 15/05/07 14:52:51 INFO balancer.KeyManager: Update block keys every 2hrs, 
 30mins, 0sec
 15/05/07 14:52:52 INFO block.BlockTokenSecretManager: Setting block keys
 15/05/07 14:52:52 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:52:52 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:52:52 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:52:52 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:52:52 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:52:52 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:52:52 WARN balancer.Dispatcher: Failed to move 
 blk_1075156654_1438349 with size=134217728 from ip:1019:ARCHIVE to 
 ip:1019:DISK through ip:1019: block move is failed: opReplaceBlock 
 BP-120244285-ip-1417023863606:blk_1075156654_1438349 received exception 
 java.io.EOFException: Premature EOF: no length prefix available
 NOW IT STARTS LOOPING ON SAME BLOCK
 15/05/07 14:53:31 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:53:31 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:53:31 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:53:31 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:53:31 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:53:31 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/ip:1019
 15/05/07 14:53:31 WARN balancer.Dispatcher: Failed to move 
 blk_1075156654_1438349 with size=134217728 from ip:1019:ARCHIVE to 
 ip:1019:DISK through ip:1019: block move is failed: opReplaceBlock 
 BP-120244285-ip-1417023863606:blk_1075156654_1438349 received exception 
 java.io.EOFException: Premature EOF: no length prefix available
 ...repeat indefinitely...
 {code}





[jira] [Updated] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8241:
---
Attachment: HDFS-8241-003.patch

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Resolved] (HDFS-8369) TestHdfsConfigFields is placed in wrong dir, introducing compile error

2015-05-11 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HDFS-8369.
-
Resolution: Duplicate

 TestHdfsConfigFields is placed in wrong dir, introducing compile error
 --

 Key: HDFS-8369
 URL: https://issues.apache.org/jira/browse/HDFS-8369
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8369-01.patch


 HDFS-7559 has introduced a Test file {{TestHdfsConfigFields}}
 which was committed in package {{org.apache.hadoop.tools}}
 But the package declaration inside file is {{org.apache.hadoop.hdfs.tools}}
 By surprise, this is not giving any compile errors in maven build. But 
 eclipse catches it.
 So move {{TestHdfsConfigFields}} to correct package 
 {{org.apache.hadoop.hdfs.tools}}





[jira] [Created] (HDFS-8368) Erasure Coding: DFS opening a non-existent file need to be handled properly

2015-05-11 Thread Rakesh R (JIRA)
Rakesh R created HDFS-8368:
--

 Summary: Erasure Coding: DFS opening a non-existent file need to 
be handled properly
 Key: HDFS-8368
 URL: https://issues.apache.org/jira/browse/HDFS-8368
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R


This jira is to address the bad exception thrown when opening a non-existent 
file. It throws an NPE as shown below:

{code}
java.lang.NullPointerException: null
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1184)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:307)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:303)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:303)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:359)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testAllWithNoXmlDefaults(TestDistributedFileSystem.java:666)
{code}





[jira] [Updated] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8367:

Description: {{BlockInfoStriped}} should receive the total information for 
erasure coding as {{ECSchema}}. This JIRA changes the constructor interface and 
its dependencies.  (was: {{BlockInfoStriped}} should receive the total 
information for erasure coding as {{ECInfo}}. This JIRA changes the constructor 
interface and its dependencies.)

 BlockInfoStriped can also receive schema at its creation
 

 Key: HDFS-8367
 URL: https://issues.apache.org/jira/browse/HDFS-8367
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
  Labels: EC

 {{BlockInfoStriped}} should receive the total information for erasure coding 
 as {{ECSchema}}. This JIRA changes the constructor interface and its 
 dependencies.





[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537778#comment-14537778
 ] 

Akira AJISAKA commented on HDFS-8241:
-

+1 pending Jenkins. Thank you Brahma.

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Created] (HDFS-8369) TestHdfsConfigFields is placed in wrong dir, introducing compile error

2015-05-11 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8369:
---

 Summary: TestHdfsConfigFields is placed in wrong dir, introducing 
compile error
 Key: HDFS-8369
 URL: https://issues.apache.org/jira/browse/HDFS-8369
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


HDFS-7559 introduced a test file, {{TestHdfsConfigFields}},
which was committed in the package {{org.apache.hadoop.tools}},
but the package declaration inside the file is {{org.apache.hadoop.hdfs.tools}}.
Surprisingly, this does not give any compile errors in the maven build, but 
eclipse catches it.
So move {{TestHdfsConfigFields}} to the correct package, 
{{org.apache.hadoop.hdfs.tools}}.





[jira] [Commented] (HDFS-8369) TestHdfsConfigFields is placed in wrong dir, introducing compile error

2015-05-11 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537788#comment-14537788
 ] 

Brahma Reddy Battula commented on HDFS-8369:


dupe of HDFS-8362..?

 TestHdfsConfigFields is placed in wrong dir, introducing compile error
 --

 Key: HDFS-8369
 URL: https://issues.apache.org/jira/browse/HDFS-8369
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8369-01.patch


 HDFS-7559 has introduced a Test file {{TestHdfsConfigFields}}
 which was committed in package {{org.apache.hadoop.tools}}
 But the package declaration inside file is {{org.apache.hadoop.hdfs.tools}}
 By surprise, this is not giving any compile errors in maven build. But 
 eclipse catches it.
 So move {{TestHdfsConfigFields}} to correct package 
 {{org.apache.hadoop.hdfs.tools}}





[jira] [Commented] (HDFS-5270) Use thread pools in the datenode daemons

2015-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537806#comment-14537806
 ] 

Hadoop QA commented on HDFS-5270:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  6s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 8 new or modified test files. |
| {color:green}+1{color} | javac |   7m 49s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 55s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 22s | The applied patch generated  
17 new checkstyle issues (total was 545, now 520). |
| {color:green}+1{color} | whitespace |   0m 20s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  6s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 166m  3s | Tests failed in hadoop-hdfs. |
| | | 210m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731869/HDFS-5270.4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3fa2efc |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10908/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10908/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10908/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10908/console |


This message was automatically generated.

 Use thread pools in the datenode daemons
 

 Key: HDFS-5270
 URL: https://issues.apache.org/jira/browse/HDFS-5270
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Haohui Mai
Assignee: zhangduo
  Labels: BB2015-05-TBR
 Attachments: HDFS-5270.000.patch, HDFS-5270.2.patch, 
 HDFS-5270.3.patch, HDFS-5270.4.patch, TestConcurrentAccess.java


 The current implementation of the datanode creates a thread when a new 
 request comes in. This incurs high overhead for the creation and destruction 
 of threads, making the datanode unstable under highly concurrent loads.
 This JIRA proposes to use a thread pool to reduce the overhead.
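 The proposal can be illustrated with a small self-contained sketch (illustrative only, not the actual datanode code): rather than {{new Thread(task).start()}} per request, requests are submitted to a fixed pool so worker threads are created once and reused:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class RequestPool {

    // A fixed-size pool caps thread creation/destruction churn under load.
    private final ExecutorService pool;

    RequestPool(int threads) {
        this.pool = Executors.newFixedThreadPool(threads);
    }

    // Each incoming request is queued and served by a reusable worker thread.
    void submit(Runnable request) {
        pool.submit(request);
    }

    // Stop accepting new requests, then drain the queue and stop the workers.
    void shutdownAndWait() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```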





[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality (pread)

2015-05-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537815#comment-14537815
 ] 

Kai Zheng commented on HDFS-7678:
-

Just checked the place that calls the decoder; found the following code in 
{{decodeAndFillBuffer}}:
{code}
+byte[][] outputs = new byte[parityBlkNum][(int) alignedStripe.getLength()];
+RSRawDecoder rsRawDecoder = new RSRawDecoder();
+rsRawDecoder.initialize(dataBlkNum, parityBlkNum, (int) 
alignedStripe.getLength());
+rsRawDecoder.decode(decodeInputs, decodeIndices, outputs);
{code}
1. Better to declare the variable as {{RawDecoder decoder}};
2. Please create and initialize {{decoder}} at initialization time, where the 
schema is determined. Doing it per decode call is expensive, since under the 
hood it may involve preparing many coding buffers.
3. By the way, please note that with the work in HADOOP-11938 it will be 
possible to pass the dest buffers directly to the decode call as output 
buffers, so we will avoid a data copy afterwards.

 Erasure coding: DFSInputStream with decode functionality (pread)
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678-HDFS-7285.008.patch, 
 HDFS-7678-HDFS-7285.009.patch, HDFS-7678-HDFS-7285.010.patch, 
 HDFS-7678-HDFS-7285.012.patch, HDFS-7678-HDFS-7285.013.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from a BlockGroup whether it is in 
 striping layout or contiguous layout. The corrupt blocks can be known before 
 reading (told by the namenode), or may only be found during reading. The 
 block group reader needs to do decoding work when some blocks are found to 
 be corrupt.





[jira] [Commented] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537861#comment-14537861
 ] 

Hudson commented on HDFS-8351:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #924 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/924/])
HDFS-8351. Remove namenode -finalize option from document. (aajisaka) 
(aajisaka: rev 3fa2efc09f051b6fc6244f0edca46d3d06f4ae3b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.8.0

 Attachments: HDFS-8351.001.patch


 hdfs namenode -finalize option was removed by HDFS-5138, however, the 
 document was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Assigned] (HDFS-3716) Purger should remove stale fsimage ckpt files

2015-05-11 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina reassigned HDFS-3716:


Assignee: J.Andreina

 Purger should remove stale fsimage ckpt files
 -

 Key: HDFS-3716
 URL: https://issues.apache.org/jira/browse/HDFS-3716
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.0-alpha
Reporter: suja s
Assignee: J.Andreina
Priority: Minor

 The NN got killed while checkpointing was in progress, before renaming the 
 ckpt file to the actual file.
 Since the checkpointing process was not completed, on the next NN startup it 
 will load the previous fsimage and apply the rest of the edits.
 Functionally there's no harm, but this ckpt file will be retained as is.
 The purger will not remove the ckpt file, though other old fsimage files 
 will be taken care of.





[jira] [Commented] (HDFS-8220) Erasure Coding: StripedDataStreamer fails to handle the blocklocations which doesn't satisfy BlockGroupSize

2015-05-11 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537872#comment-14537872
 ] 

Walter Su commented on HDFS-8220:
-

bq. Assertions should be used to check something that should never happen, 
while an exception should be used to check something that might happen. (from 
[When to use an assertion and when to use an 
exception|http://stackoverflow.com/questions/1957645/when-to-use-an-assertion-and-when-to-use-an-exception]
 )
Here, in this issue, we can't know {{lsb.getLocations().length}} beforehand, right?
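To make the contrast concrete, here is a minimal sketch of validating a runtime-only value with an exception rather than an assert. The class and method names are illustrative, not the actual HDFS-8220 patch:

```java
import java.io.IOException;

// Illustrative sketch for the assert-vs-exception point above; not the
// actual HDFS-8220 patch.
public class BlockGroupCheck {
    // RS-6-3 schema: 6 data blocks + 3 parity blocks per block group.
    static final int BLOCK_GROUP_SIZE = 9;

    // An assert would be silently skipped without -ea; an exception
    // always reports the short-of-datanodes condition.
    static void checkLocations(int locatedCount) throws IOException {
        if (locatedCount < BLOCK_GROUP_SIZE) {
            throw new IOException("Only " + locatedCount
                + " datanodes located, need " + BLOCK_GROUP_SIZE);
        }
    }

    public static void main(String[] args) {
        try {
            checkLocations(8); // 8 DNs < 9 required: throws
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Since the locations count is only known when the NameNode responds, an exception (not an assertion) is the mechanism that still fires in production.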

 Erasure Coding: StripedDataStreamer fails to handle the blocklocations which 
 doesn't satisfy BlockGroupSize
 ---

 Key: HDFS-8220
 URL: https://issues.apache.org/jira/browse/HDFS-8220
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8220-001.patch, HDFS-8220-002.patch, 
 HDFS-8220-003.patch, HDFS-8220-004.patch, HDFS-8220-HDFS-7285.005.patch, 
 HDFS-8220-HDFS-7285.006.patch


 During write operations {{StripedDataStreamer#locateFollowingBlock}} fails to 
 validate the available datanodes against the {{BlockGroupSize}}. Please see 
 the exception to understand more:
 {code}
 2015-04-22 14:56:11,313 WARN  hdfs.DFSClient (DataStreamer.java:run(538)) - 
 DataStreamer Exception
 java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 2015-04-22 14:56:11,313 INFO  hdfs.MiniDFSCluster 
 (MiniDFSCluster.java:shutdown(1718)) - Shutting down the Mini HDFS Cluster
 2015-04-22 14:56:11,313 ERROR hdfs.DFSClient 
 (DFSClient.java:closeAllFilesBeingWritten(608)) - Failed to close inode 16387
 java.io.IOException: DataStreamer Exception: 
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:544)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:1)
 Caused by: java.lang.NullPointerException
   at 
 java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:374)
   at 
 org.apache.hadoop.hdfs.StripedDataStreamer.locateFollowingBlock(StripedDataStreamer.java:157)
   at 
 org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1332)
   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:424)
   ... 1 more
 {code}





[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537918#comment-14537918
 ] 

Hadoop QA commented on HDFS-8241:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 34s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 58s | Site still builds. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 165m 41s | Tests failed in hadoop-hdfs. |
| | | 212m 43s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.tools.TestHdfsConfigFields |
|   | hadoop.tracing.TestTraceAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731883/HDFS-8241-003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 3fa2efc |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10910/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10910/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10910/console |


This message was automatically generated.

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Updated] (HDFS-6775) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-11 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-6775:
-
Attachment: HDFS-6775.2.patch

Updated the patch, fixing the checkstyle failure.
Kindly review.


 Users may see TrashPolicy if hdfs dfs -rm is run
 

 Key: HDFS-6775
 URL: https://issues.apache.org/jira/browse/HDFS-6775
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: J.Andreina
 Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch


 Doing 'hdfs dfs -rm file' generates an extra log message on the console:
 {code}
 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 0 minutes, Emptier interval = 0 minutes.
 {code}
 This shouldn't be seen by users.





[jira] [Updated] (HDFS-3512) Delay in scanning blocks at DN side when there are huge number of blocks

2015-05-11 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-3512:

Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

 Delay in scanning blocks at DN side when there are huge number of blocks
 

 Key: HDFS-3512
 URL: https://issues.apache.org/jira/browse/HDFS-3512
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.0-alpha
Reporter: suja s
Assignee: amith
  Labels: BB2015-05-TBR
 Attachments: HDFS-3512.patch


 The block scanner maintains the full list of blocks at the DN side in a map, with 
 no differentiation between blocks that have already been scanned and the ones 
 not yet scanned. For every check (i.e. every 5 seconds) it will pick one block 
 and scan it. There is a chance it chooses a block that was already scanned, 
 which further delays scanning of the blocks that are yet to be scanned.
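 A minimal sketch of the idea in this report: keep scanned and pending blocks 
 apart so every pick lands on a block that has not been scanned yet. Class and 
 field names are illustrative, not actual DataNode code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only; not actual DataNode block-scanner code.
class ScanQueue {
    private final Deque<Long> pending = new ArrayDeque<>();
    private final Set<Long> scanned = new HashSet<>();

    // Queue a block for scanning unless it was already scanned or queued.
    void add(long blockId) {
        if (!scanned.contains(blockId) && !pending.contains(blockId)) {
            pending.add(blockId);
        }
    }

    // Every pick returns a block that has never been scanned,
    // or null once all queued blocks are done.
    Long pickNext() {
        Long next = pending.poll();
        if (next != null) {
            scanned.add(next);
        }
        return next;
    }
}
```

 With this split, the per-check pick can never land on an already-scanned block, 
 so no scan cycles are wasted.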





[jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality (pread)

2015-05-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537890#comment-14537890
 ] 

Kai Zheng commented on HDFS-7678:
-

About the comments above: I realized it was coded that way to bypass an 
existing issue in the raw erasure coder. If so, please leave it as-is and add a 
TODO comment.

 Erasure coding: DFSInputStream with decode functionality (pread)
 

 Key: HDFS-7678
 URL: https://issues.apache.org/jira/browse/HDFS-7678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, 
 HDFS-7678-HDFS-7285.003.patch, HDFS-7678-HDFS-7285.004.patch, 
 HDFS-7678-HDFS-7285.005.patch, HDFS-7678-HDFS-7285.006.patch, 
 HDFS-7678-HDFS-7285.007.patch, HDFS-7678-HDFS-7285.008.patch, 
 HDFS-7678-HDFS-7285.009.patch, HDFS-7678-HDFS-7285.010.patch, 
 HDFS-7678-HDFS-7285.012.patch, HDFS-7678-HDFS-7285.013.patch, 
 HDFS-7678.000.patch, HDFS-7678.001.patch


 A block group reader will read data from BlockGroup no matter in striping 
 layout or contiguous layout. The corrupt blocks can be known before 
 reading(told by namenode), or just be found during reading. The block group 
 reader needs to do decoding work when some blocks are found corrupt.





[jira] [Updated] (HDFS-8362) Java Compilation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-11 Thread Arshad Mohammad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arshad Mohammad updated HDFS-8362:
--
Attachment: HDFS-8362-2.patch

 Java Compilation Error in TestHdfsConfigFields.java and 
 TestMapreduceConfigFields.java
 --

 Key: HDFS-8362
 URL: https://issues.apache.org/jira/browse/HDFS-8362
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
 Fix For: 2.8.0

 Attachments: HDFS-8362-1.patch, HDFS-8362-2.patch


 In TestHdfsConfigFields.java the failure is because of a wrong package name.
 In TestMapreduceConfigFields.java the failure is because of:
 i) a wrong package name
 ii) missing imports





[jira] [Commented] (HDFS-7401) Add block info to DFSInputStream' WARN message when it adds node to deadNodes

2015-05-11 Thread Arshad Mohammad (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537879#comment-14537879
 ] 

Arshad Mohammad commented on HDFS-7401:
---

1) No test case is included because this is only a log modification.
2) This patch is not the cause of the test failures.

 Add block info to DFSInputStream' WARN message when it adds node to deadNodes
 -

 Key: HDFS-7401
 URL: https://issues.apache.org/jira/browse/HDFS-7401
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Arshad Mohammad
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HDFS-7401-2.patch, HDFS-7401.patch


 Block info is missing in the below message
 {noformat}
 2014-11-14 03:59:00,386 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
 connect to /xx.xx.xx.xxx:50010 for block, add to deadNodes and continue. 
 java.io.IOException: Got error for OP_READ_BLOCK
 {noformat}
 The code
 {noformat}
 DFSInputStream.java
   DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
       + ", add to deadNodes and continue. " + ex, ex);
 {noformat}
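 A hedged sketch of the requested change: building the warning with the block 
 identifier included, so a deadNodes entry can be traced back to a specific 
 replica. The helper name and parameters are hypothetical, not the actual 
 patch:

```java
// Illustrative helper; the real change would edit the DFSClient.LOG.warn
// call in DFSInputStream shown above. Names here are hypothetical.
class DeadNodeLog {
    // Builds the warning message with the block identifier included.
    static String warnMessage(String targetAddr, String block) {
        return "Failed to connect to " + targetAddr + " for block " + block
            + ", add to deadNodes and continue.";
    }
}
```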





[jira] [Work started] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-11 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8367 started by Kai Sasaki.

 BlockInfoStriped can also receive schema at its creation
 

 Key: HDFS-8367
 URL: https://issues.apache.org/jira/browse/HDFS-8367
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
  Labels: EC
 Attachments: HDFS-8367.1.patch


 {{BlockInfoStriped}} should receive the total information for erasure coding 
 as {{ECSchema}}. This JIRA changes the constructor interface and its 
 dependencies.





[jira] [Updated] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-11 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8367:
-
Attachment: HDFS-8367.1.patch

 BlockInfoStriped can also receive schema at its creation
 

 Key: HDFS-8367
 URL: https://issues.apache.org/jira/browse/HDFS-8367
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
  Labels: EC
 Attachments: HDFS-8367.1.patch


 {{BlockInfoStriped}} should receive the total information for erasure coding 
 as {{ECSchema}}. This JIRA changes the constructor interface and its 
 dependencies.





[jira] [Commented] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537841#comment-14537841
 ] 

Hudson commented on HDFS-8351:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #193 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/193/])
HDFS-8351. Remove namenode -finalize option from document. (aajisaka) 
(aajisaka: rev 3fa2efc09f051b6fc6244f0edca46d3d06f4ae3b)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.8.0

 Attachments: HDFS-8351.001.patch


 hdfs namenode -finalize option was removed by HDFS-5138, however, the 
 document was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Commented] (HDFS-3512) Delay in scanning blocks at DN side when there are huge number of blocks

2015-05-11 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537864#comment-14537864
 ] 

nijel commented on HDFS-3512:
-

Agree with [~umamaheswararao]
Closing as "Not a Problem". Feel free to reopen.

 Delay in scanning blocks at DN side when there are huge number of blocks
 

 Key: HDFS-3512
 URL: https://issues.apache.org/jira/browse/HDFS-3512
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.0-alpha
Reporter: suja s
Assignee: amith
  Labels: BB2015-05-TBR
 Attachments: HDFS-3512.patch


 The block scanner maintains the full list of blocks at the DN side in a map, with 
 no differentiation between blocks that have already been scanned and the ones 
 not yet scanned. For every check (i.e. every 5 seconds) it will pick one block 
 and scan it. There is a chance it chooses a block that was already scanned, 
 which further delays scanning of the blocks that are yet to be scanned.





[jira] [Updated] (HDFS-4383) Document the lease limits

2015-05-11 Thread Arshad Mohammad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arshad Mohammad updated HDFS-4383:
--
Attachment: HDFS-4383-6.patch

removed blank lines

 Document the lease limits
 -

 Key: HDFS-4383
 URL: https://issues.apache.org/jira/browse/HDFS-4383
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Arshad Mohammad
Priority: Trivial
  Labels: BB2015-05-RFC
 Attachments: HDFS-4383-5.patch, HDFS-4383-6.patch, HDFS-4383.3.patch, 
 HDFS-4383.patch, HDFS-4383.patch, HDFS-4383.patch


 HdfsConstants.java or DFSClient/LeaseManager.java could use a comment 
 indicating the behavior of the hard and soft file lease limit periods.
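 A sketch of the kind of comment this issue asks for, attached to the 
 lease-limit constants. The concrete values (1 minute soft, 1 hour hard) are 
 stated here as an assumption mirroring {{HdfsConstants}}; verify against the 
 source before relying on them:

```java
// Sketch of the documentation HDFS-4383 requests; the values are an
// assumption mirroring HdfsConstants, not a quotation of it.
public class LeaseLimits {
    /**
     * Soft limit: while the writer renews its lease within this period it
     * keeps exclusive write access; after it expires, another client may
     * trigger lease recovery and take over the file.
     */
    public static final long LEASE_SOFTLIMIT_PERIOD = 60 * 1000L;

    /**
     * Hard limit: if the writer stays silent this long, the NameNode
     * itself forcibly recovers the lease and closes the file.
     */
    public static final long LEASE_HARDLIMIT_PERIOD =
        60 * LEASE_SOFTLIMIT_PERIOD;
}
```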





[jira] [Commented] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-11 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537836#comment-14537836
 ] 

Kai Sasaki commented on HDFS-8367:
--

@Kai Zheng Yes, that was a mistake. Thank you for fixing it. I have submitted 
the initial patch.

 BlockInfoStriped can also receive schema at its creation
 

 Key: HDFS-8367
 URL: https://issues.apache.org/jira/browse/HDFS-8367
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
  Labels: EC
 Attachments: HDFS-8367.1.patch


 {{BlockInfoStriped}} should receive the total information for erasure coding 
 as {{ECSchema}}. This JIRA changes the constructor interface and its 
 dependencies.





[jira] [Commented] (HDFS-8256) -storagepolicies , -blockId ,-replicaDetails options are missed out in usage and from documentation

2015-05-11 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537862#comment-14537862
 ] 

J.Andreina commented on HDFS-8256:
--

Failures are not related to this patch.
Kindly review.

 -storagepolicies , -blockId ,-replicaDetails  options are missed out in 
 usage and from documentation
 --

 Key: HDFS-8256
 URL: https://issues.apache.org/jira/browse/HDFS-8256
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: J.Andreina
Assignee: J.Andreina
  Labels: BB2015-05-TBR
 Attachments: HDFS-8256.2.patch, HDFS-8256.3.patch, 
 HDFS-8256_Trunk.1.patch


 -storagepolicies , -blockId ,-replicaDetails  options are missed out in 
 usage and from documentation.
 {noformat}
 Usage: hdfs fsck path [-list-corruptfileblocks | [-move | -delete | 
 -openforwrite] [-files [-blocks [-locations | -racks [-includeSnapshots] 
 [-showprogress]
 {noformat}
 Found as part of HDFS-8108.





[jira] [Commented] (HDFS-8362) Java Compilation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-11 Thread Arshad Mohammad (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537910#comment-14537910
 ] 

Arshad Mohammad commented on HDFS-8362:
---

Thanks [~vinayrpet]
Separated the HDFS and MapReduce portions and submitted HDFS-8362-2.patch for 
this issue.

 Java Compilation Error in TestHdfsConfigFields.java and 
 TestMapreduceConfigFields.java
 --

 Key: HDFS-8362
 URL: https://issues.apache.org/jira/browse/HDFS-8362
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
 Fix For: 2.8.0

 Attachments: HDFS-8362-1.patch, HDFS-8362-2.patch


 In TestHdfsConfigFields.java the failure is because of a wrong package name.
 In TestMapreduceConfigFields.java the failure is because of:
 i) a wrong package name
 ii) missing imports





[jira] [Commented] (HDFS-6775) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537916#comment-14537916
 ] 

Hadoop QA commented on HDFS-6775:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 28s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  7s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 39s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  21m 47s | Tests passed in 
hadoop-common. |
| | |  58m 36s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731912/HDFS-6775.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3fa2efc |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10912/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10912/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10912/console |


This message was automatically generated.

 Users may see TrashPolicy if hdfs dfs -rm is run
 

 Key: HDFS-6775
 URL: https://issues.apache.org/jira/browse/HDFS-6775
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: J.Andreina
 Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch


 Doing 'hdfs dfs -rm file' generates an extra log message on the console:
 {code}
 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 0 minutes, Emptier interval = 0 minutes.
 {code}
 This shouldn't be seen by users.





[jira] [Created] (HDFS-8370) Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing

2015-05-11 Thread Rakesh R (JIRA)
Rakesh R created HDFS-8370:
--

 Summary: Erasure Coding: 
TestRecoverStripedFile#testRecoverOneParityBlock is failing
 Key: HDFS-8370
 URL: https://issues.apache.org/jira/browse/HDFS-8370
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R


This JIRA is to analyse the failure of this unit test further.

{code}
java.io.IOException: Time out waiting for EC block recovery.
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:333)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:234)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneParityBlock(TestRecoverStripedFile.java:98)
{code}

Exception occurred during recovery packet transferring:
{code}
2015-05-09 15:08:08,910 INFO  datanode.DataNode 
(BlockReceiver.java:receiveBlock(826)) - Exception for 
BP-1332677436-67.195.81.147-1431184082022:blk_-9223372036854775792_1001
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
at java.lang.Thread.run(Thread.java:745)
{code}





[jira] [Commented] (HDFS-8370) Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing

2015-05-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537947#comment-14537947
 ] 

Rakesh R commented on HDFS-8370:


As per the initial analysis, the recovery operation fails during the 
encoding/decoding step:

{code}
2015-05-11 17:45:33,871 WARN  datanode.DataNode 
(ErasureCodingWorker.java:run(402)) - Failed to recover striped block: 
BP-890762290-192.168.1.2-1431346474544:blk_-9223372036854775776_1002
java.lang.ArrayIndexOutOfBoundsException: 79667
at 
org.apache.hadoop.io.erasurecode.rawcoder.util.GaloisField.remainder(GaloisField.java:427)
at 
org.apache.hadoop.io.erasurecode.rawcoder.RSRawEncoder.doEncode(RSRawEncoder.java:76)
at 
org.apache.hadoop.io.erasurecode.rawcoder.AbstractRawErasureEncoder.encode(AbstractRawErasureEncoder.java:40)
at 
org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$ReconstructAndTransferBlock.recoverTargets(ErasureCodingWorker.java:560)
at 
org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$ReconstructAndTransferBlock.run(ErasureCodingWorker.java:384)
at java.lang.Thread.run(Unknown Source)
{code}

 Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing
 ---

 Key: HDFS-8370
 URL: https://issues.apache.org/jira/browse/HDFS-8370
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R

 This JIRA is to analyse the failure of this unit test further.
 {code}
 java.io.IOException: Time out waiting for EC block recovery.
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:333)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:234)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneParityBlock(TestRecoverStripedFile.java:98)
 {code}
 Exception occurred during recovery packet transferring:
 {code}
 2015-05-09 15:08:08,910 INFO  datanode.DataNode 
 (BlockReceiver.java:receiveBlock(826)) - Exception for 
 BP-1332677436-67.195.81.147-1431184082022:blk_-9223372036854775792_1001
 java.io.IOException: Premature EOF from inputStream
   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
   at java.lang.Thread.run(Thread.java:745)
 {code}





[jira] [Commented] (HDFS-8368) Erasure Coding: DFS opening a non-existent file need to be handled properly

2015-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537941#comment-14537941
 ] 

Hadoop QA commented on HDFS-8368:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 55s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 48s | The applied patch generated 
12 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 44s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 13s | The patch appears to introduce 8 
new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 174m 48s | Tests failed in hadoop-hdfs. |
| | | 217m 10s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time.  
Unsynchronized access at DFSOutputStream.java:[line 146] |
|  |  Possible null pointer dereference of arr$ in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long).
  Dereferenced at BlockInfoStripedUnderConstruction.java:[line 194] |
|  |  Unread field; should this field be static?  At ErasureCodingWorker.java:[line 251] |
|  |  Should 
org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader
 be a _static_ inner class?  At ErasureCodingWorker.java:[lines 910-912] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String,
 ECSchema): String.getBytes()  At ErasureCodingZoneManager.java:[line 117] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath):
 new String(byte[])  At ErasureCodingZoneManager.java:[line 81] |
|  |  Result of integer multiplication cast to long in 
org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock,
 int, int, int, int)  At StripedBlockUtil.java:[line 84] |
|  |  Result of integer multiplication cast to long in 
org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions(int, int, long, 
int, int)  At StripedBlockUtil.java:[line 204] |
| Failed unit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.TestRecoverStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731879/HDFS-8368-HDFS-7285.00.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / d96c64c |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10911/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10911/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10911/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10911/testReport/ |
| 
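Two of the FindBugs items above flag a 32-bit multiply whose result is cast to long in StripedBlockUtil. A minimal self-contained illustration of that pattern (demo code, not the actual Hadoop source; names are hypothetical):

```java
// Demonstrates the FindBugs "result of integer multiplication cast to
// long" warning: the multiply is performed in 32-bit int arithmetic and
// overflows *before* the widening cast. Widening one operand first makes
// the multiply happen in 64-bit arithmetic.
public class CastDemo {
    // Buggy form: product computed as int, then widened (too late).
    static long offsetBuggy(int cellSize, int index) {
        return (long) (cellSize * index);
    }

    // Fixed form: widen one operand so the multiply is done as long.
    static long offsetFixed(int cellSize, int index) {
        return (long) cellSize * index;
    }

    public static void main(String[] args) {
        int cellSize = 1 << 20;  // e.g. a 1 MiB striping cell
        int index = 4096;        // enough cells for the product to hit 2^32
        System.out.println(offsetBuggy(cellSize, index)); // 0 (wrapped)
        System.out.println(offsetFixed(cellSize, index)); // 4294967296
    }
}
```

The same one-character fix (moving the cast inside the expression) resolves both flagged call sites.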

[jira] [Work started] (HDFS-8370) Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing

2015-05-11 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8370 started by Rakesh R.
--
 Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing
 ---

 Key: HDFS-8370
 URL: https://issues.apache.org/jira/browse/HDFS-8370
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R

 This jira is to further analyse the failure of this unit test. 
 {code}
 java.io.IOException: Time out waiting for EC block recovery.
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:333)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:234)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneParityBlock(TestRecoverStripedFile.java:98)
 {code}
 Exception occurred during recovery packet transferring:
 {code}
 2015-05-09 15:08:08,910 INFO  datanode.DataNode 
 (BlockReceiver.java:receiveBlock(826)) - Exception for 
 BP-1332677436-67.195.81.147-1431184082022:blk_-9223372036854775792_1001
 java.io.IOException: Premature EOF from inputStream
   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
   at java.lang.Thread.run(Thread.java:745)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8241) Remove unused Namenode startup option FINALIZE

2015-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538025#comment-14538025
 ] 

Akira AJISAKA commented on HDFS-8241:
-

+1. Committing this shortly.
* TestHdfsConfigFields: tracked by HDFS-8362
* TestTraceAdmin: Fails in trunk as well. I'll file a jira for this.

 Remove unused Namenode startup option  FINALIZE
 -

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Updated] (HDFS-8241) Remove unused NameNode startup option -finalize

2015-05-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8241:

Affects Version/s: (was: 2.7.0)
   3.0.0
   Labels:   (was: BB2015-05-TBR)
   Issue Type: Improvement  (was: Bug)
  Summary: Remove unused NameNode startup option -finalize  (was: 
Remove unused Namenode startup option  FINALIZE)
 Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)

 Remove unused NameNode startup option -finalize
 ---

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Commented] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538033#comment-14538033
 ] 

Hudson commented on HDFS-8351:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #192 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/192/])
HDFS-8351. Remove namenode -finalize option from document. (aajisaka) 
(aajisaka: rev 3fa2efc09f051b6fc6244f0edca46d3d06f4ae3b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.8.0

 Attachments: HDFS-8351.001.patch


 hdfs namenode -finalize option was removed by HDFS-5138, however, the 
 document was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Updated] (HDFS-8241) Remove unused NameNode startup option -finalize

2015-05-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8241:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~brahmareddy] for contribution, and thanks all 
who commented on this issue.

 Remove unused NameNode startup option -finalize
 ---

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 3.0.0

 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Updated] (HDFS-8241) Remove unused NameNode startup option -finalize

2015-05-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8241:

Priority: Minor  (was: Major)

 Remove unused NameNode startup option -finalize
 ---

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Commented] (HDFS-8241) Remove unused NameNode startup option -finalize

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538037#comment-14538037
 ] 

Hudson commented on HDFS-8241:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7791 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7791/])
HDFS-8241. Remove unused NameNode startup option -finalize. Contributed by 
Brahma Reddy Battula. (aajisaka: rev 1dd79ffaca4b0c2cb0ab817dff3697686f3367e3)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestHdfsServerConstants.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java


 Remove unused NameNode startup option -finalize
 ---

 Key: HDFS-8241
 URL: https://issues.apache.org/jira/browse/HDFS-8241
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8241-002.patch, HDFS-8241-003.patch, HDFS-8241.patch


 Command : hdfs namenode -finalize
 15/04/24 22:26:23 INFO namenode.NameNode: createNameNode [-finalize]
  *Use of the argument 'FINALIZE' is no longer supported.*  To finalize an 
 upgrade, start the NN  and then run `hdfs dfsadmin -finalizeUpgrade'





[jira] [Commented] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537974#comment-14537974
 ] 

Hudson commented on HDFS-8351:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2122 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2122/])
HDFS-8351. Remove namenode -finalize option from document. (aajisaka) 
(aajisaka: rev 3fa2efc09f051b6fc6244f0edca46d3d06f4ae3b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.8.0

 Attachments: HDFS-8351.001.patch


 hdfs namenode -finalize option was removed by HDFS-5138, however, the 
 document was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Updated] (HDFS-7471) TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails

2015-05-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7471:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

 TestDatanodeManager#testNumVersionsReportedCorrect occasionally fails
 -

 Key: HDFS-7471
 URL: https://issues.apache.org/jira/browse/HDFS-7471
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Ted Yu
Assignee: Binglin Chang
  Labels: BB2015-05-TBR
 Attachments: HDFS-7471.001.patch, PreCommit-HDFS-Build #9898 test - 
 testNumVersionsReportedCorrect [Jenkins].html


 From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1957/ :
 {code}
 FAILED:  
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
 Error Message:
 The map of version counts returned by DatanodeManager was not what it was 
 expected to be on iteration 237 expected:0 but was:1
 Stack Trace:
 java.lang.AssertionError: The map of version counts returned by 
 DatanodeManager was not what it was expected to be on iteration 237 
 expected:0 but was:1
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect(TestDatanodeManager.java:150)
 {code}





[jira] [Resolved] (HDFS-8370) Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing

2015-05-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HDFS-8370.
-
Resolution: Duplicate

 Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing
 ---

 Key: HDFS-8370
 URL: https://issues.apache.org/jira/browse/HDFS-8370
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R

 This jira is to further analyse the failure of this unit test. 
 {code}
 java.io.IOException: Time out waiting for EC block recovery.
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:333)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:234)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneParityBlock(TestRecoverStripedFile.java:98)
 {code}
 Exception occurred during recovery packet transferring:
 {code}
 2015-05-09 15:08:08,910 INFO  datanode.DataNode 
 (BlockReceiver.java:receiveBlock(826)) - Exception for 
 BP-1332677436-67.195.81.147-1431184082022:blk_-9223372036854775792_1001
 java.io.IOException: Premature EOF from inputStream
   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
   at java.lang.Thread.run(Thread.java:745)
 {code}





[jira] [Commented] (HDFS-8370) Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing

2015-05-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537979#comment-14537979
 ] 

Kai Zheng commented on HDFS-8370:
-

Hi Rakesh, thanks for the analysis, you're right. This will be resolved in 
HADOOP-11938. Would you check the patch there? 

 Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing
 ---

 Key: HDFS-8370
 URL: https://issues.apache.org/jira/browse/HDFS-8370
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R

 This jira is to further analyse the failure of this unit test. 
 {code}
 java.io.IOException: Time out waiting for EC block recovery.
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:333)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:234)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneParityBlock(TestRecoverStripedFile.java:98)
 {code}
 Exception occurred during recovery packet transferring:
 {code}
 2015-05-09 15:08:08,910 INFO  datanode.DataNode 
 (BlockReceiver.java:receiveBlock(826)) - Exception for 
 BP-1332677436-67.195.81.147-1431184082022:blk_-9223372036854775792_1001
 java.io.IOException: Premature EOF from inputStream
   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
   at java.lang.Thread.run(Thread.java:745)
 {code}





[jira] [Commented] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538045#comment-14538045
 ] 

Hudson commented on HDFS-8351:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2140/])
HDFS-8351. Remove namenode -finalize option from document. (aajisaka) 
(aajisaka: rev 3fa2efc09f051b6fc6244f0edca46d3d06f4ae3b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.8.0

 Attachments: HDFS-8351.001.patch


 hdfs namenode -finalize option was removed by HDFS-5138, however, the 
 document was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Commented] (HDFS-8351) Remove namenode -finalize option from document

2015-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537982#comment-14537982
 ] 

Hudson commented on HDFS-8351:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #182 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/182/])
HDFS-8351. Remove namenode -finalize option from document. (aajisaka) 
(aajisaka: rev 3fa2efc09f051b6fc6244f0edca46d3d06f4ae3b)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove namenode -finalize option from document
 --

 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.8.0

 Attachments: HDFS-8351.001.patch


 hdfs namenode -finalize option was removed by HDFS-5138, however, the 
 document was not updated.
 http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Commented] (HDFS-6775) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-11 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538085#comment-14538085
 ] 

J.Andreina commented on HDFS-6775:
--

The patch avoids displaying the log message on the client side and instead 
redirects it to the namenode log.
So no test case is needed.
Kindly review the patch.

 Users may see TrashPolicy if hdfs dfs -rm is run
 

 Key: HDFS-6775
 URL: https://issues.apache.org/jira/browse/HDFS-6775
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: J.Andreina
 Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch


 Doing 'hdfs dfs -rm file' generates an extra log message on the console:
 {code}
 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 0 minutes, Emptier interval = 0 minutes.
 {code}
 This shouldn't be seen by users.
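Until a fix lands, one client-side workaround (assuming the logger name matches the message prefix, i.e. the org.apache.hadoop.fs.TrashPolicyDefault class) is to raise that logger's threshold in the client's log4j.properties:

```properties
# Suppress the INFO-level trash-configuration message on the client;
# WARN and above from TrashPolicyDefault are still shown.
log4j.logger.org.apache.hadoop.fs.TrashPolicyDefault=WARN
```

This only hides the message on that client; the patch under review addresses it for all users.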





[jira] [Updated] (HDFS-7936) Erasure coding: resolving conflicts in the branch when merging trunk changes.

2015-05-11 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7936:

Description: 
This will be used to track and resolve conflicts when merging trunk changes. 

Below is a list of trunk changes that have caused conflicts (updated weekly):
# HDFS-7903
# HDFS-7435
# HDFS-7930
# HDFS-7960
# HDFS-7742
# HDFS-8035
# HDFS-8169
# HDFS-8327
# HDFS-8357

  was:
This will be used to track and resolve conflicts when merging trunk changes. 

Below is a list of trunk changes that have caused conflicts (updated weekly):
# HDFS-7903
# HDFS-7435
# HDFS-7930
# HDFS-7960
# HDFS-7742
# HDFS-8035
# HDFS-8169


 Erasure coding: resolving conflicts in the branch when merging trunk changes. 
 --

 Key: HDFS-7936
 URL: https://issues.apache.org/jira/browse/HDFS-7936
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-7936-001.patch, HDFS-7936-002.patch, 
 HDFS-7936-003.patch, HDFS-7936-004.patch, HDFS-7936-005.patch


 This will be used to track and resolve conflicts when merging trunk changes. 
 Below is a list of trunk changes that have caused conflicts (updated weekly):
 # HDFS-7903
 # HDFS-7435
 # HDFS-7930
 # HDFS-7960
 # HDFS-7742
 # HDFS-8035
 # HDFS-8169
 # HDFS-8327
 # HDFS-8357





[jira] [Commented] (HDFS-8368) Erasure Coding: DFS opening a non-existent file need to be handled properly

2015-05-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538174#comment-14538174
 ] 

Rakesh R commented on HDFS-8368:


The test-case and FindBugs failures are unrelated to this patch. This fix will 
resolve a couple of failures in the {{Hadoop-HDFS-7285}} build.

 Erasure Coding: DFS opening a non-existent file need to be handled properly
 ---

 Key: HDFS-8368
 URL: https://issues.apache.org/jira/browse/HDFS-8368
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8368-HDFS-7285.00.patch


 This jira to address bad exceptions when opening a non-existent file. It 
 throws NPE as shown below:
 {code}
 java.lang.NullPointerException: null
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1184)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:307)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:303)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:303)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:359)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testAllWithNoXmlDefaults(TestDistributedFileSystem.java:666)
 {code}





[jira] [Work started] (HDFS-8372) Erasure coding: simplify snapshots and truncate quota calculations for striped files, to be consistent with HDFS-8327

2015-05-11 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8372 started by Zhe Zhang.
---
 Erasure coding: simplify snapshots and truncate quota calculations for 
 striped files, to be consistent with HDFS-8327
 -

 Key: HDFS-8372
 URL: https://issues.apache.org/jira/browse/HDFS-8372
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang







[jira] [Commented] (HDFS-7936) Erasure coding: resolving conflicts in the branch when merging trunk changes.

2015-05-11 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538489#comment-14538489
 ] 

Zhe Zhang commented on HDFS-7936:
-

Major conflicts with HDFS-8327 and HDFS-8357 in this week's rebase. I just made 
minimum change for the branch to compile. Filed HDFS-8372 to properly stay in 
sync with HDFS-8327. 

 Erasure coding: resolving conflicts in the branch when merging trunk changes. 
 --

 Key: HDFS-7936
 URL: https://issues.apache.org/jira/browse/HDFS-7936
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-7936-001.patch, HDFS-7936-002.patch, 
 HDFS-7936-003.patch, HDFS-7936-004.patch, HDFS-7936-005.patch


 This will be used to track and resolve conflicts when merging trunk changes. 
 Below is a list of trunk changes that have caused conflicts (updated weekly):
 # HDFS-7903
 # HDFS-7435
 # HDFS-7930
 # HDFS-7960
 # HDFS-7742
 # HDFS-8035
 # HDFS-8169
 # HDFS-8327
 # HDFS-8357





[jira] [Updated] (HDFS-7916) 'reportBadBlocks' from datanodes to standby Node BPServiceActor goes for infinite loop

2015-05-11 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7916:
-
   Resolution: Fixed
Fix Version/s: 2.7.1
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.7. 

 'reportBadBlocks' from datanodes to standby Node BPServiceActor goes for 
 infinite loop
 --

 Key: HDFS-7916
 URL: https://issues.apache.org/jira/browse/HDFS-7916
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Vinayakumar B
Assignee: Rushabh S Shah
Priority: Critical
 Fix For: 2.7.1

 Attachments: HDFS-7916-01.patch, HDFS-7916-1.patch


 If any bad block is found, the BPServiceActor (BPSA) for the standby NameNode 
 will retry reporting it indefinitely.
 {noformat}2015-03-11 19:43:41,528 WARN 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to report bad block 
 BP-1384821822-10.224.54.68-1422634566395:blk_1079544278_5812006 to namenode: 
 stobdtserver3/10.224.54.70:18010
 org.apache.hadoop.hdfs.server.datanode.BPServiceActorActionException: Failed 
 to report bad block 
 BP-1384821822-10.224.54.68-1422634566395:blk_1079544278_5812006 to namenode:
 at 
 org.apache.hadoop.hdfs.server.datanode.ReportBadBlockAction.reportTo(ReportBadBlockAction.java:63)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processQueueMessages(BPServiceActor.java:1020)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:762)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:856)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
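The loop above arises because a failed report action is re-queued unconditionally. A generic sketch of one common remedy, bounding the retry budget so the action is eventually dropped with a warning (names are hypothetical; this is not the actual HDFS-7916 patch):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: a queued action that keeps failing against a standby node is
// re-queued only up to MAX_RETRIES times, then dropped, so the service
// actor's queue-processing loop cannot spin forever on one action.
public class BoundedRetryDemo {
    static final int MAX_RETRIES = 3;

    /** One queued action with a retry budget. */
    static final class Action {
        int retries;
    }

    /** Drains the queue; returns the total number of attempts made. */
    static int process(Queue<Action> queue, boolean targetIsStandby) {
        int attempts = 0;
        while (!queue.isEmpty()) {
            Action a = queue.remove();
            attempts++;
            if (!targetIsStandby) {
                continue;                 // report succeeded: done
            }
            if (++a.retries < MAX_RETRIES) {
                queue.add(a);             // bounded re-queue on failure
            }                             // else: drop (and log a warning)
        }
        return attempts;
    }

    public static void main(String[] args) {
        Queue<Action> q = new ArrayDeque<>();
        q.add(new Action());
        // Without the bound this loop never terminates against a standby;
        // with it, the action is tried MAX_RETRIES times and then dropped.
        System.out.println(process(q, true)); // 3
    }
}
```

An alternative remedy is for the datanode to recognize the standby's rejection specifically and stop re-queuing at once rather than counting retries.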





[jira] [Created] (HDFS-8371) Fix test failure in TestHdfsConfigFields for spanreceiver properties

2015-05-11 Thread Ray Chiang (JIRA)
Ray Chiang created HDFS-8371:


 Summary: Fix test failure in TestHdfsConfigFields for spanreceiver 
properties
 Key: HDFS-8371
 URL: https://issues.apache.org/jira/browse/HDFS-8371
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ray Chiang
Assignee: Ray Chiang


Some new properties were added to hdfs-default.xml.  Update the test to skip 
these new properties.





[jira] [Created] (HDFS-8372) Erasure coding: simplify snapshots and truncate quota calculations for striped files, to be consistent with HDFS-8327

2015-05-11 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8372:
---

 Summary: Erasure coding: simplify snapshots and truncate quota 
calculations for striped files, to be consistent with HDFS-8327
 Key: HDFS-8372
 URL: https://issues.apache.org/jira/browse/HDFS-8372
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang








[jira] [Commented] (HDFS-8362) Java Compilation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-11 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538434#comment-14538434
 ] 

Ray Chiang commented on HDFS-8362:
--

I'm fine with integrating HDFS-8371 into this patch and marking it as a 
duplicate, or getting the other JIRA submitted separately.

 Java Compilation Error in TestHdfsConfigFields.java and 
 TestMapreduceConfigFields.java
 --

 Key: HDFS-8362
 URL: https://issues.apache.org/jira/browse/HDFS-8362
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
 Fix For: 2.8.0

 Attachments: HDFS-8362-1.patch, HDFS-8362-2.patch


 In TestHdfsConfigFields.java the failure is because of a wrong package name.
 In TestMapreduceConfigFields.java the failure is because of:
 i) a wrong package name
 ii) missing imports





[jira] [Commented] (HDFS-8358) TestTraceAdmin fails

2015-05-11 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538433#comment-14538433
 ] 

Masatake Iwasaki commented on HDFS-8358:


The failure of TestHdfsConfigFields will be fixed in HDFS-8371. It should be 
addressed as a separate JIRA, and I will update the patch once HDFS-8371 is 
committed.

 TestTraceAdmin fails
 

 Key: HDFS-8358
 URL: https://issues.apache.org/jira/browse/HDFS-8358
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11940.001.patch, HDFS-8358.002.patch


 After HADOOP-11912, {{TestTraceAdmin#testCreateAndDestroySpanReceiver}} in 
 hdfs started failing.
 It probably went unnoticed because the jira's change triggered unit testing 
 in common only.





[jira] [Commented] (HDFS-8370) Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing

2015-05-11 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538444#comment-14538444
 ] 

Rakesh R commented on HDFS-8370:


Thanks [~drankye] for taking care of the encode/decode logic. Yes, I'll take a 
look at the patch there.

 Erasure Coding: TestRecoverStripedFile#testRecoverOneParityBlock is failing
 ---

 Key: HDFS-8370
 URL: https://issues.apache.org/jira/browse/HDFS-8370
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R

 This jira is to further analyse the failure of this unit test. 
 {code}
 java.io.IOException: Time out waiting for EC block recovery.
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.waitForRecoveryFinished(TestRecoverStripedFile.java:333)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:234)
   at 
 org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverOneParityBlock(TestRecoverStripedFile.java:98)
 {code}
 Exception occurred during recovery packet transferring:
 {code}
 2015-05-09 15:08:08,910 INFO  datanode.DataNode 
 (BlockReceiver.java:receiveBlock(826)) - Exception for 
 BP-1332677436-67.195.81.147-1431184082022:blk_-9223372036854775792_1001
 java.io.IOException: Premature EOF from inputStream
   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
   at java.lang.Thread.run(Thread.java:745)
 {code}





[jira] [Commented] (HDFS-8238) Move ClientProtocol to the hdfs-client

2015-05-11 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538700#comment-14538700
 ] 

Haohui Mai commented on HDFS-8238:
--

Uploaded my initial patch.

 Move ClientProtocol to the hdfs-client
 --

 Key: HDFS-8238
 URL: https://issues.apache.org/jira/browse/HDFS-8238
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Takanobu Asanuma
 Attachments: HDFS-8238.000.patch


 The {{ClientProtocol}} class defines the RPC protocol between the NN and the 
 client. This jira proposes to move it into the hdfs-client module.
 The jira needs to:
 * Move {{o.a.h.hdfs.server.namenode.SafeModeException}} and 
 {{o.a.h.hdfs.server.namenode.NotReplicatedYetException}} to the hdfs-client 
 package
 * Remove the reference to {{DistributedFileSystem}} in the javadoc
 * Create a copy of {{DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY}} in 
 {{HdfsClientConfigKeys}}
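 The key-duplication step in the last bullet can be sketched as below. The 
 config key string is the standard Hadoop name for this setting; the class 
 body is illustrative, not the real {{HdfsClientConfigKeys}}:

```java
// Hedged sketch: duplicating the principal key on the client side so the
// hdfs-client module no longer depends on the server-side DFSConfigKeys.
public class HdfsClientConfigKeysSketch {
    // Copy of DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY.
    public static final String DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY =
        "dfs.namenode.kerberos.principal";

    public static void main(String[] args) {
        System.out.println(DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY);
    }
}
```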





[jira] [Updated] (HDFS-8372) Erasure coding: compute storage type quotas for striped files, to be consistent with HDFS-8327

2015-05-11 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8372:

Attachment: HDFS-8372-HDFS-7285.0.patch

This patch merges the changes from HDFS-8327 for storage space calculation. I 
guess we should leave the truncate-related logic to HDFS-7622.

[~wheat9] / [~jingzhao]: it'd be great if you can take a look. Thanks!

 Erasure coding: compute storage type quotas for striped files, to be 
 consistent with HDFS-8327
 --

 Key: HDFS-8372
 URL: https://issues.apache.org/jira/browse/HDFS-8372
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8372-HDFS-7285.0.patch








[jira] [Commented] (HDFS-8143) HDFS Mover tool should exit after some retry when failed to move blocks.

2015-05-11 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14538654#comment-14538654
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8143:
---

- Rename the new conf dfs.mover.failed.retry to 
dfs.mover.retry.max.attempts.
- The conf should be read in the Mover constructor.
{code}
+private static int moverFailedRetry; //For keeping configured value
+private static int moverFailedRetryCounter; //For calculation
{code}
- The above two fields should be moved to the Mover class instead of the Cli 
class.  Both of them should be non-static and moverFailedRetry should be final.
- Rename moverFailedRetry to retryMaxAttempts and moverFailedRetryCounter to 
retryCount
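The field layout suggested above (non-static, with the configured maximum read once and kept final) can be sketched as follows. The class body and method names are hypothetical stand-ins, not the real Mover code:

```java
// Hedged sketch of the review suggestion: retry settings live on the Mover
// instance, with the configured maximum (dfs.mover.retry.max.attempts)
// read once in the constructor and held in a final field.
public class MoverSketch {
    private final int retryMaxAttempts; // configured maximum, set once
    private int retryCount;             // mutable per-run failure counter

    MoverSketch(int configuredMaxAttempts) {
        this.retryMaxAttempts = configuredMaxAttempts;
        this.retryCount = 0;
    }

    void recordFailedIteration() {
        retryCount++;
    }

    boolean shouldGiveUp() {
        return retryCount >= retryMaxAttempts;
    }

    public static void main(String[] args) {
        MoverSketch m = new MoverSketch(2);
        m.recordFailedIteration();
        System.out.println(m.shouldGiveUp()); // one failure, max is 2
        m.recordFailedIteration();
        System.out.println(m.shouldGiveUp()); // second failure reaches the max
    }
}
```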


 HDFS Mover tool should exit after some retry when failed to move blocks.
 

 Key: HDFS-8143
 URL: https://issues.apache.org/jira/browse/HDFS-8143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.6.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
Priority: Blocker
 Attachments: HDFS-8143.patch, HDFS-8143_1.patch


 The Mover does not exit when it fails to move blocks.
 {code}
 hasRemaining |= Dispatcher.waitForMoveCompletion(storages.targets.values());
 {code}
 {{Dispatcher.waitForMoveCompletion()}} will always return true if some block 
 migrations failed, so hasRemaining never becomes false.
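 The bounded-retry behaviour this jira asks for can be sketched as below: 
 instead of looping forever while the dispatcher keeps reporting remaining 
 blocks, count consecutive failed rounds and exit after a configured maximum. 
 {{runOneIteration}} and {{runUntilGiveUp}} are illustrative stand-ins, not 
 real Mover methods:

```java
// Hypothetical sketch of the proposed fix for the Mover hang.
public class MoverRetrySketch {

    // Simulated scheduling round: true means some blocks still failed to move
    // (mirrors Dispatcher.waitForMoveCompletion() always returning true).
    static boolean runOneIteration() {
        return true; // pretend every round fails, as in the reported hang
    }

    // Runs rounds until progress stalls retryMaxAttempts times in a row;
    // returns the number of rounds executed before giving up.
    static int runUntilGiveUp(int retryMaxAttempts) {
        int retryCount = 0;
        int rounds = 0;
        boolean hasRemaining = true;
        while (hasRemaining) {
            hasRemaining = runOneIteration();
            rounds++;
            if (hasRemaining) {
                if (++retryCount >= retryMaxAttempts) {
                    break; // exit instead of spinning forever
                }
            } else {
                retryCount = 0; // progress was made; reset the counter
            }
        }
        return rounds;
    }

    public static void main(String[] args) {
        System.out.println(runUntilGiveUp(3)); // gives up after 3 failed rounds
    }
}
```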




