[jira] [Commented] (HDFS-7927) Fluentd unable to write events to MaprFS using httpfs

2015-03-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360751#comment-14360751
 ] 

Tsuyoshi Ozawa commented on HDFS-7927:
--

s/webhdfs/fluent-plugin-webhdfs/

 Fluentd unable to write events to MaprFS using httpfs
 -

 Key: HDFS-7927
 URL: https://issues.apache.org/jira/browse/HDFS-7927
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
 Environment: mapr 4.0.1
Reporter: Roman Slysh
 Fix For: 2.4.1

 Attachments: HDFS-7927.patch


 The issue is on the MaprFS file system. It can probably be reproduced on HDFS 
 as well, but we are not sure.
 We have observed in the td-agent log that whenever the webhdfs plugin flushes 
 events, it calls append instead of creating the file on MaprFS via webhdfs. We 
 need to modify the plugin to create the file and then append data to it; 
 manually creating the file is not a workable solution, because many log events 
 are written to the filesystem and the files need to be rotated on a timely basis.
 http://docs.fluentd.org/articles/http-to-hdfs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7927) Fluentd unable to write events to MaprFS using httpfs

2015-03-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360749#comment-14360749
 ] 

Tsuyoshi Ozawa commented on HDFS-7927:
--

[~rslysh] the webhdfs code looks like it already does the same thing on the 
td-agent side: 
https://github.com/fluent/fluent-plugin-webhdfs/blob/master/lib/fluent/plugin/out_webhdfs.rb#L220

Doesn't this work correctly for you?
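As a point of comparison, here is a minimal sketch of the create-then-append behaviour being discussed, written against the generic Hadoop {{FileSystem}} API; the gateway host, port and path are placeholders, and the actual fix (if any) would live in fluent-plugin-webhdfs itself.
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateThenAppend {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder endpoint; point this at the HttpFS/WebHDFS gateway in use.
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://gateway.example.com:14000"), conf);
    Path path = new Path("/logs/td-agent/access.20150313.log");
    // Create the file if it does not exist yet, otherwise append -- the
    // order the plugin is expected to follow instead of appending blindly.
    FSDataOutputStream out = fs.exists(path) ? fs.append(path) : fs.create(path);
    try {
      out.write("sample event\n".getBytes("UTF-8"));
    } finally {
      out.close();
    }
  }
}
{code}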

 Fluentd unable to write events to MaprFS using httpfs
 -

 Key: HDFS-7927
 URL: https://issues.apache.org/jira/browse/HDFS-7927
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
 Environment: mapr 4.0.1
Reporter: Roman Slysh
 Fix For: 2.4.1

 Attachments: HDFS-7927.patch


 The issue is on the MaprFS file system. It can probably be reproduced on HDFS 
 as well, but we are not sure.
 We have observed in the td-agent log that whenever the webhdfs plugin flushes 
 events, it calls append instead of creating the file on MaprFS via webhdfs. We 
 need to modify the plugin to create the file and then append data to it; 
 manually creating the file is not a workable solution, because many log events 
 are written to the filesystem and the files need to be rotated on a timely basis.
 http://docs.fluentd.org/articles/http-to-hdfs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7926:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If the dfsclient drops the first response of a truncate RPC call, the retry 
 via the retry cache will fail with "DFSClient ... is already the current lease 
 holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol, but 
 the NameNode implementation is not.
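 As a rough illustration only (not the attached h7926 patches), the idempotency check could look something like the sketch below; the {{INodeFile}} accessors are simplified stand-ins.
{code}
// Illustrative sketch only. The idea: if the retrying client already holds
// the lease and the file is already at the requested length, the earlier
// truncate succeeded, so the retry should be reported as a successful no-op
// instead of failing on the lease-holder check.
static boolean isRetriedTruncateNoOp(INodeFile file, long newLength,
    String clientName) {
  boolean sameHolder = file.isUnderConstruction()
      && clientName.equals(
          file.getFileUnderConstructionFeature().getClientName());
  return sameHolder && file.computeFileSize() == newLength;
}
{code}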



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7930) commitBlockSynchronization() does not remove locations, which were not confirmed

2015-03-13 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360974#comment-14360974
 ] 

Konstantin Shvachko commented on HDFS-7930:
---

I saw it with truncate in HDFS-7886, [described in this 
comment|https://issues.apache.org/jira/browse/HDFS-7886?focusedCommentId=14360903page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14360903].
 But it can also happen with a regular block recovery, particularly when DNs 
are restarting during the recovery.
If recovery is started for a UC-block, which already has locations from live 
DNs, then recovery may succeed only for some of those locations, because the 
others have e.g. a different length. But {{commitBlockSynchronization()}} will 
not remove the unconfirmed locations. The locations will be invalidated by the 
next block report and then replicated correctly, but until then reads may see 
different data or fail.
Will post a failing test once HDFS-7886 is in.
Marked it as a blocker for 2.7.0. Feel free to unmark if it is not.
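A rough sketch of the direction described above; accessor names like {{getStorages()}} and {{removeStorage()}} are placeholders, not the actual BlockManager API.
{code}
// Sketch only: prune replica locations that the recovery did not confirm,
// i.e. reduce the stored locations to newTargets, instead of leaving stale
// replicas around until the next block report.
void pruneUnconfirmedLocations(BlockInfo block,
    DatanodeStorageInfo[] newTargets) {
  Set<DatanodeStorageInfo> confirmed =
      new HashSet<>(Arrays.asList(newTargets));
  List<DatanodeStorageInfo> stale = new ArrayList<>();
  for (DatanodeStorageInfo storage : block.getStorages()) {
    if (!confirmed.contains(storage)) {
      stale.add(storage);   // e.g. a different length or genStamp
    }
  }
  for (DatanodeStorageInfo storage : stale) {
    block.removeStorage(storage);
  }
}
{code}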

 commitBlockSynchronization() does not remove locations, which were not 
 confirmed
 

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Priority: Blocker

 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than the 
 original block, it does not remove the unconfirmed locations. As a result, the 
 block stores locations with different lengths or genStamps (corrupt).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7886) TestFileTruncate#testTruncateWithDataNodesRestart runs timeout sometimes

2015-03-13 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360903#comment-14360903
 ] 

Konstantin Shvachko commented on HDFS-7886:
---

Hey guys, thanks for the reviews.
Cannot remove {{triggerBlockReports()}}, though. Just reran on my laptop 
without triggering and it failed. This is actually where I spent most of the 
time. The problem is in {{commitBlockSync()}}, which I was about to file a jira 
for, but will explain here first.
In short {{commitBlockSync()}} does not remove locations from the block, which 
were not confirmed, that is not reducing them to {{newTargets}}.
In the test truncate recovery is happening _while_ DNs are restarting.  If 
recovery is handled _after_ the initial block reports from restarting DNs, the 
recovery will have only one new target, the node that was not restarted, but  
{{commitBlockSync()}} will not remove the other two. So {{waitReplication()}} 
will incorrectly show 3 replicas, but {{cluster.getBlockFile().length}} on the 
restarted node will show the old length 4, while it should be 3. So I had to 
trigger block reports after the recovery, which removes the two invalid replicas 
from the NN; then replication is triggered and the test passes.
Now, if truncate recovery happens _before_ the initial block reports from 
restarting nodes, then everything is fine and {{triggerBlockReports()}} is 
redundant.  When you see TestFileTruncate succeeds, look for block 
{{blk_1073742100}}, you should see that {{initReplicaRecovery}} for it is 
happening before {{processReport}} and succeeds on all three nodes. While in 
the failure case {{initReplicaRecovery}} throws exceptions on two DNs out of 
three. 
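For readers of the test, the workaround boils down to something like the following sketch (assuming the MiniDFSCluster and DFSTestUtil helpers already used by TestFileTruncate; exact variable names differ):
{code}
// After truncate recovery completes, force block reports so the NameNode
// drops the two stale replicas, then wait for re-replication back to 3.
cluster.triggerBlockReports();
DFSTestUtil.waitReplication(fs, p, (short) 3);
{code}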

 TestFileTruncate#testTruncateWithDataNodesRestart runs timeout sometimes
 

 Key: HDFS-7886
 URL: https://issues.apache.org/jira/browse/HDFS-7886
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Plamen Jeliazkov
Priority: Minor
 Attachments: HDFS-7886-01.patch, HDFS-7886.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/9730//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7337) Configurable and pluggable Erasure Codec and schema

2015-03-13 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360718#comment-14360718
 ] 

Zhe Zhang commented on HDFS-7337:
-

Thanks for the pointers to HDFS-7859 and HDFS-7866. Yes, I believe they are in 
the same direction as {{ECSchemaSuite}} in the above discussion. 

 Configurable and pluggable Erasure Codec and schema
 ---

 Key: HDFS-7337
 URL: https://issues.apache.org/jira/browse/HDFS-7337
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Kai Zheng
 Attachments: HDFS-7337-prototype-v1.patch, 
 HDFS-7337-prototype-v2.zip, HDFS-7337-prototype-v3.zip, 
 PluggableErasureCodec-v2.pdf, PluggableErasureCodec.pdf


 According to HDFS-7285 and its design, this JIRA aims to support multiple 
 erasure codecs via a pluggable approach. It allows defining and configuring 
 multiple codec schemas with different coding algorithms and parameters. The 
 resulting codec schemas can then be specified via a command-line tool for 
 different folders. While designing and implementing such a pluggable framework, 
 we should also implement a concrete default codec (Reed-Solomon) to prove that 
 the framework is useful and workable. A separate JIRA could be opened for the 
 RS codec implementation.
 Note that HDFS-7353 will focus on the very low-level codec API and 
 implementation, to make concrete vendor libraries transparent to the upper 
 layer. This JIRA focuses on the high-level parts that interact with 
 configuration, schemas, etc.
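 As a purely illustrative sketch (the field names are assumptions, not the HDFS-7285/HDFS-7337 API), a configurable codec schema would essentially carry a name, a codec, and the stripe geometry:
{code}
// Illustrative only: what a pluggable codec schema could carry.
public final class CodecSchemaExample {
  private final String schemaName;     // e.g. "RS-6-3"
  private final String codecName;      // e.g. "ReedSolomon"
  private final int numDataUnits;      // data blocks per stripe
  private final int numParityUnits;    // parity blocks per stripe

  public CodecSchemaExample(String schemaName, String codecName,
      int numDataUnits, int numParityUnits) {
    this.schemaName = schemaName;
    this.codecName = codecName;
    this.numDataUnits = numDataUnits;
    this.numParityUnits = numParityUnits;
  }

  // A default Reed-Solomon schema, as suggested in the description.
  public static CodecSchemaExample defaultRs() {
    return new CodecSchemaExample("RS-6-3", "ReedSolomon", 6, 3);
  }
}
{code}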



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7886) TestFileTruncate#testTruncateWithDataNodesRestart runs timeout sometimes

2015-03-13 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360903#comment-14360903
 ] 

Konstantin Shvachko edited comment on HDFS-7886 at 3/13/15 6:47 PM:


Hey guys, thanks for the reviews.
Cannot remove {{triggerBlockReports()}}, though. Just reran on my laptop 
without triggering and it failed. This is actually where I spent most of the 
time. The problem is in {{commitBlockSync()}}, which I was about to file a jira 
for, but will explain here first.
In short {{commitBlockSync()}} does not remove locations from the block, which 
were not confirmed, that is not reducing them to {{newTargets}}.
In the test truncate recovery is happening _while_ DNs are restarting.  If 
recovery is handled _after_ the initial block reports from restarting DNs, the 
recovery will have only one new target, the node that was not restarted, but 
{{commitBlockSync()}} will not remove the other two. So {{waitReplication()}} 
will incorrectly show 3 replicas, but {{cluster.getBlockFile().length}} on the 
restarted node will show the old length 4, while it should be 3. So I had to 
trigger block reports after the recovery, which removes the two invalid replicas 
from the NN; then replication is triggered and the test passes.
Now, if truncate recovery happens _before_ the initial block reports from 
restarting nodes, then everything is fine and {{triggerBlockReports()}} is 
redundant.  When you see TestFileTruncate succeeds, look for block 
{{blk_1073742100}}, you should see that {{initReplicaRecovery}} for it is 
happening before {{processReport}} and succeeds on all three nodes. While in 
the failure case {{initReplicaRecovery}} throws exceptions on two DNs out of 
three.
We should remove {{triggerBlockReports()}} when we fix the 
{{commitBlockSync()}} problem.


was (Author: shv):
Hey guys, thanks for the reviews.
Cannot remove {{triggerBlockReports()}}, though. Just reran on my laptop 
without triggering and it failed. This is actually where I spent most of the 
time. The problem is in {{commitBlockSync()}}, which I was about to file a jira 
for, but will explain here first.
In short {{commitBlockSync()}} does not remove locations from the block, which 
were not confirmed, that is not reducing them to {{newTargets}}.
In the test truncate recovery is happening _while_ DNs are restarting.  If 
recovery is handled _after_ the initial block reports from restarting DNs, the 
recovery will have only one new target, the node that was not restarted, but  
{{commitBlockSync()}} will not remove the other two. So {{waitReplication()}} 
will incorrectly show 3 replicas, but {{cluster.getBlockFile().length}} on the 
restarted node will the old length 4, while it should be 3. So I had to trigger 
block reports after the recovery, which  removes the two invalid replicas from 
NN, then replication is triggered, and the test passes.
Now, if truncate recovery happens _before_ the initial block reports from 
restarting nodes, then everything is fine and {{triggerBlockReports()}} is 
redundant.  When you see TestFileTruncate succeeds, look for block 
{{blk_1073742100}}, you should see that {{initReplicaRecovery}} for it is 
happening before {{processReport}} and succeeds on all three nodes. While in 
the failure case {{initReplicaRecovery}} throws exceptions on two DNs out of 
three. 

 TestFileTruncate#testTruncateWithDataNodesRestart runs timeout sometimes
 

 Key: HDFS-7886
 URL: https://issues.apache.org/jira/browse/HDFS-7886
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Plamen Jeliazkov
Priority: Minor
 Attachments: HDFS-7886-01.patch, HDFS-7886.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/9730//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-13 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360951#comment-14360951
 ] 

Zhe Zhang commented on HDFS-5523:
-

Hi [~brandonli], we have recently seen some use cases for this feature. Do you 
still plan to work on it? Any thoughts on how to extend HDFS-5469 to support 
multiple exports? Thanks!

 Support multiple subdirectory exports in HDFS NFS gateway 
 --

 Key: HDFS-5523
 URL: https://issues.apache.org/jira/browse/HDFS-5523
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Reporter: Brandon Li

 Currently, the HDFS NFS Gateway only supports configuring a single 
 subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
 Supporting multiple subdirectory exports can make data and security 
 management easier when using the HDFS NFS Gateway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360675#comment-14360675
 ] 

Hadoop QA commented on HDFS-7854:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12704455/HDFS-7854-004-duplicate.patch
  against trunk revision 8180e67.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9877//console

This message is automatically generated.

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, HDFS-7854-004.patch


 This sub-task separates DataStreamer from DFSOutputStream. The new 
 DataStreamer will accept packets and write them to remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7896) HDFS Slow disk detection

2015-03-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360674#comment-14360674
 ] 

Arpit Agarwal commented on HDFS-7896:
-

Hi [~zhangyongxyz], this is not related to archival storage. In this context we 
are referring to failing disks.

 HDFS Slow disk detection
 

 Key: HDFS-7896
 URL: https://issues.apache.org/jira/browse/HDFS-7896
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Arpit Agarwal

 HDFS should detect slow disks. To start with we can flag this information via 
 the NameNode web UI. Alternatively DNs can avoid using slow disks for writes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7929) inotify unable fetch pre-upgrade edit log segments once upgrade starts

2015-03-13 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-7929:
---

 Summary: inotify unable fetch pre-upgrade edit log segments once 
upgrade starts
 Key: HDFS-7929
 URL: https://issues.apache.org/jira/browse/HDFS-7929
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang


inotify is often used to periodically poll HDFS events. However, once an HDFS 
upgrade has started, edit logs are moved to /previous on the NN, which is not 
accessible. Moreover, once the upgrade is finalized, /previous is currently lost 
forever.
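The polling pattern in question looks roughly like the sketch below, assuming the {{HdfsAdmin}} inotify API; depending on the Hadoop version {{take()}} returns an {{Event}} or an {{EventBatch}}, and the {{EventBatch}} form is assumed here.
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.EventBatch;

// Sketch of the periodic-polling use of inotify described above. Once an
// upgrade moves older edit segments to /previous, a poller that has fallen
// behind can no longer read transactions from those segments.
public class InotifyPoller {
  public static void main(String[] args) throws Exception {
    HdfsAdmin admin = new HdfsAdmin(
        URI.create("hdfs://namenode.example.com:8020"), new Configuration());
    DFSInotifyEventInputStream events = admin.getInotifyEventStream();
    while (true) {
      EventBatch batch = events.take();   // blocks until events arrive
      System.out.println("txid " + batch.getTxid() + ": "
          + batch.getEvents().length + " events");
    }
  }
}
{code}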



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360800#comment-14360800
 ] 

Hadoop QA commented on HDFS-7435:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704194/HDFS-7435.patch
  against trunk revision 387f271.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9874//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9874//console

This message is automatically generated.

 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with a default capacity of 10.  A block report containing tens or 
 hundreds of thousands of longs (3 for each replica) is extremely expensive, 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields boxes the primitive longs, which must then be unboxed.
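 To make the cost concrete, here is a small self-contained illustration (not the HDFS-7435 patch itself) of decoding the same longs into a default-sized boxed list versus a pre-sized primitive buffer:
{code}
import java.util.ArrayList;
import java.util.List;

// Standalone illustration of the overhead described above. Each replica
// contributes 3 longs (block id, length, generation stamp).
public class BlockReportEncodingCost {

  // What a PB "repeated long" decode effectively does: an ArrayList that
  // starts at capacity 10, reallocates as it grows, and boxes every value,
  // only for callers to unbox them again.
  static List<Long> decodeBoxed(long[] raw) {
    List<Long> out = new ArrayList<>();   // default capacity 10
    for (long v : raw) {
      out.add(v);                         // autoboxing on every element
    }
    return out;
  }

  // A pre-sized primitive buffer: one allocation, no boxing.
  static long[] decodePrimitive(long[] raw) {
    long[] out = new long[raw.length];
    System.arraycopy(raw, 0, out, 0, raw.length);
    return out;
  }

  public static void main(String[] args) {
    long[] raw = new long[300_000 * 3];   // ~300k replicas, 3 longs each
    System.out.println(decodeBoxed(raw).size() + " boxed vs "
        + decodePrimitive(raw).length + " primitive longs");
  }
}
{code}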



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7645) Rolling upgrade is restoring blocks from trash multiple times

2015-03-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360820#comment-14360820
 ] 

Arpit Agarwal commented on HDFS-7645:
-

Hi [~ogikei], did you get a chance to read the comments I added for the v1 
patch?

Could you please describe what your patch is attempting to do? Is it your 
intention to get rid of trash completely? Removing enableTrash will have that 
effect.

bq. This is exactly does the job when the datanode is rolled back. But problem 
is (as from the beginning) entire cluster ( including those DNs who have not 
yet upgraded) must be restarted with '-rollback' option to restore.
[~vinayrpet], we already require the cluster to be stopped and DNs to be 
restarted with {{-rollback}} to proceed with the rollback so we can support DN 
layout upgrades. Not sure I understand your comment.

bq. I think you want clearTrash() to be called when a rolling upgrade is 
finalized. That is if inProgress is not true, clear all trash. For regular or 
downgrade start-ups, if the rolling upgrade is already aborted/finallized, the 
trash will get cleared once the datanode registers with the namenode. So we 
don't have to anything special on start-up.
Clearing trash is probably the right thing to do but there is a caveat. DNs do 
not get a 'finalize rolling upgrade' indication. DNs look for 
{{RollingUpgradeStatus}} in the heartbeat response. If it is absent then DNs 
infer that the rolling upgrade is finalized. If the administrator attempts to 
do a rollback without stopping all DNs first then clearing trash will cause 
data loss. That's a risk of clearing vs doing restore. Currently with restore 
there is no such risk since NNs will either keep or delete the blocks 
appropriately.

 Rolling upgrade is restoring blocks from trash multiple times
 -

 Key: HDFS-7645
 URL: https://issues.apache.org/jira/browse/HDFS-7645
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Nathan Roberts
Assignee: Keisuke Ogiwara
 Attachments: HDFS-7645.01.patch, HDFS-7645.02.patch, 
 HDFS-7645.03.patch


 When performing an HDFS rolling upgrade, the trash directory is getting 
 restored twice when under normal circumstances it shouldn't need to be 
 restored at all. iiuc, the only time these blocks should be restored is if we 
 need to rollback a rolling upgrade. 
 On a busy cluster, this can cause significant and unnecessary block churn 
 both on the datanodes, and more importantly in the namenode.
 The two times this happens are:
 1) restart of DN onto new software
 {code}
  private void doTransition(DataNode datanode, StorageDirectory sd,
      NamespaceInfo nsInfo, StartupOption startOpt) throws IOException {
    if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
      Preconditions.checkState(!getTrashRootDir(sd).exists(),
          sd.getPreviousDir() + " and " + getTrashRootDir(sd) + " should not " +
          "both be present.");
      doRollback(sd, nsInfo); // rollback if applicable
    } else {
      // Restore all the files in the trash. The restored files are retained
      // during rolling upgrade rollback. They are deleted during rolling
      // upgrade downgrade.
      int restored = restoreBlockFilesFromTrash(getTrashRootDir(sd));
      LOG.info("Restored " + restored + " block files from trash.");
    }
 {code}
 2) When the heartbeat response no longer indicates a rolling upgrade is in progress
 {code}
  /**
   * Signal the current rolling upgrade status as indicated by the NN.
   * @param inProgress true if a rolling upgrade is in progress
   */
  void signalRollingUpgrade(boolean inProgress) throws IOException {
    String bpid = getBlockPoolId();
    if (inProgress) {
      dn.getFSDataset().enableTrash(bpid);
      dn.getFSDataset().setRollingUpgradeMarker(bpid);
    } else {
      dn.getFSDataset().restoreTrash(bpid);
      dn.getFSDataset().clearRollingUpgradeMarker(bpid);
    }
  }
 {code}
 HDFS-6800 and HDFS-6981 modified this behavior, making it not completely 
 clear whether this is somehow intentional. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360869#comment-14360869
 ] 

Brandon Li commented on HDFS-7926:
--

Thank you, [~szetszwo], for the fix. I've committed the patch.

 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If the dfsclient drops the first response of a truncate RPC call, the retry 
 via the retry cache will fail with "DFSClient ... is already the current lease 
 holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol, but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7926:
-
Fix Version/s: 2.7.0

 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.7.0

 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If the dfsclient drops the first response of a truncate RPC call, the retry 
 via the retry cache will fail with "DFSClient ... is already the current lease 
 holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol, but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360942#comment-14360942
 ] 

Konstantin Shvachko commented on HDFS-7926:
---

Cool. Truncate should be idempotent indeed.

 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.7.0

 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If the dfsclient drops the first response of a truncate RPC call, the retry 
 via the retry cache will fail with "DFSClient ... is already the current lease 
 holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol, but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7930) commitBlockSynchronization() does not remove locations, which were not confirmed

2015-03-13 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7930:
-

 Summary: commitBlockSynchronization() does not remove locations, 
which were not confirmed
 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Priority: Blocker


When {{commitBlockSynchronization()}} has fewer {{newTargets}} than the 
original block, it does not remove the unconfirmed locations. As a result, the 
block stores locations with different lengths or genStamps (corrupt).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7827) Erasure Coding: support striped blocks in non-protobuf fsimage

2015-03-13 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360678#comment-14360678
 ] 

Haohui Mai commented on HDFS-7827:
--

I'm not sure whether this is the right thing to do, as the old fsimage format 
is quite out of date compared to the features we have today. It might make sense 
to add the information to the WebImageViewer, though.

 Erasure Coding: support striped blocks in non-protobuf fsimage
 --

 Key: HDFS-7827
 URL: https://issues.apache.org/jira/browse/HDFS-7827
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Hui Zheng
 Attachments: HDFS-7827.000.patch, HDFS-7827.001.patch


 HDFS-7749 only adds code to persist striped blocks to protobuf-based fsimage. 
 We should also add this support to the non-protobuf fsimage since it is still 
 used for use cases like offline image processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7645) Rolling upgrade is restoring blocks from trash multiple times

2015-03-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360820#comment-14360820
 ] 

Arpit Agarwal edited comment on HDFS-7645 at 3/13/15 6:04 PM:
--

Hi [~ogikei], did you get a chance to read the comments I added for the v1 
patch?

Could you please describe what your patch is attempting to do? Is it your 
intention to get rid of trash completely? Removing enableTrash will have that 
effect.

bq. This is exactly does the job when the datanode is rolled back. But problem 
is (as from the beginning) entire cluster ( including those DNs who have not 
yet upgraded) must be restarted with '-rollback' option to restore.
[~vinayrpet], we already require the cluster to be stopped and DNs to be 
restarted with {{-rollback}} to proceed with the rollback so we can support DN 
layout upgrades. Not sure I understand what you meant.

bq. I think you want clearTrash() to be called when a rolling upgrade is 
finalized. That is if inProgress is not true, clear all trash. For regular or 
downgrade start-ups, if the rolling upgrade is already aborted/finallized, the 
trash will get cleared once the datanode registers with the namenode. So we 
don't have to anything special on start-up.
Clearing trash is probably the right thing to do but there is a caveat. DNs do 
not get a 'finalize rolling upgrade' indication. DNs look for 
{{RollingUpgradeStatus}} in the heartbeat response. If it is absent then DNs 
infer that the rolling upgrade is finalized. If the administrator attempts to 
do a rollback without stopping all DNs first then clearing trash will cause 
data loss. That's a risk of clearing vs doing restore. Currently with restore 
there is no such risk since NNs will either keep or delete the blocks 
appropriately.


was (Author: arpitagarwal):
Hi [~ogikei], did you get a chance to read the comments I added for the v1 
patch?

Could you please describe what your patch is attempting to do? Is it your 
intention to get rid of trash completely, because removing enableTrash will 
have that effect.

bq. This is exactly does the job when the datanode is rolled back. But problem 
is (as from the beginning) entire cluster ( including those DNs who have not 
yet upgraded) must be restarted with '-rollback' option to restore.
[~vinayrpet], we already require the cluster to be stopped and DNs to be 
restarted with {{-rollback}} to proceed with the rollback so we can support DN 
layout upgrades. Not sure I understand your comment.

bq. I think you want clearTrash() to be called when a rolling upgrade is 
finalized. That is if inProgress is not true, clear all trash. For regular or 
downgrade start-ups, if the rolling upgrade is already aborted/finallized, the 
trash will get cleared once the datanode registers with the namenode. So we 
don't have to anything special on start-up.
Clearing trash is probably the right thing to do but there is a caveat. DNs do 
not get a 'finalize rolling upgrade' indication. DNs look for 
{{RollingUpgradeStatus}} in the heartbeat response. If it is absent then DNs 
infer that the rolling upgrade is finalized. If the administrator attempts to 
do a rollback without stopping all DNs first then clearing trash will cause 
data loss. That's a risk of clearing vs doing restore. Currently with restore 
there is no such risk since NNs will either keep or delete the blocks 
appropriately.

 Rolling upgrade is restoring blocks from trash multiple times
 -

 Key: HDFS-7645
 URL: https://issues.apache.org/jira/browse/HDFS-7645
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Nathan Roberts
Assignee: Keisuke Ogiwara
 Attachments: HDFS-7645.01.patch, HDFS-7645.02.patch, 
 HDFS-7645.03.patch


 When performing an HDFS rolling upgrade, the trash directory is getting 
 restored twice when under normal circumstances it shouldn't need to be 
 restored at all. iiuc, the only time these blocks should be restored is if we 
 need to rollback a rolling upgrade. 
 On a busy cluster, this can cause significant and unnecessary block churn 
 both on the datanodes, and more importantly in the namenode.
 The two times this happens are:
 1) restart of DN onto new software
 {code}
  private void doTransition(DataNode datanode, StorageDirectory sd,
      NamespaceInfo nsInfo, StartupOption startOpt) throws IOException {
    if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
      Preconditions.checkState(!getTrashRootDir(sd).exists(),
          sd.getPreviousDir() + " and " + getTrashRootDir(sd) + " should not " +
          "both be present.");
      doRollback(sd, nsInfo); // rollback if applicable
    } else {
      // Restore all the files in the 

[jira] [Commented] (HDFS-7433) Optimize performance of DatanodeManager's node map

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360920#comment-14360920
 ] 

Hadoop QA commented on HDFS-7433:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12703937/HDFS-7433.patch
  against trunk revision 387f271.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9876//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9876//console

This message is automatically generated.

 Optimize performance of DatanodeManager's node map
 --

 Key: HDFS-7433
 URL: https://issues.apache.org/jira/browse/HDFS-7433
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7433.patch, HDFS-7433.patch, HDFS-7433.patch, 
 HDFS-7433.patch


 The datanode map is currently a {{TreeMap}}.  For many thousands of 
 datanodes, tree lookups are ~10X more expensive than a {{HashMap}}.  
 Insertions and removals are up to 100X more expensive.
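 A toy comparison (not the HDFS-7433 patch) of lookup cost for a node map keyed by datanode UUID strings:
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;
import java.util.UUID;

// Toy micro-comparison of the two map types; no warm-up, rough numbers only.
public class NodeMapLookupCost {
  public static void main(String[] args) {
    final int nodes = 20_000;
    Map<String, Integer> tree = new TreeMap<>();
    Map<String, Integer> hash = new HashMap<>(nodes * 2);
    String[] keys = new String[nodes];
    for (int i = 0; i < nodes; i++) {
      keys[i] = UUID.randomUUID().toString();
      tree.put(keys[i], i);
      hash.put(keys[i], i);
    }
    long t0 = System.nanoTime();
    for (String k : keys) tree.get(k);
    long t1 = System.nanoTime();
    for (String k : keys) hash.get(k);
    long t2 = System.nanoTime();
    System.out.printf("TreeMap: %d us, HashMap: %d us%n",
        (t1 - t0) / 1000, (t2 - t1) / 1000);
  }
}
{code}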



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-13 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7854:

Attachment: HDFS-7854-004-duplicate.patch

The 004 patch works on my local machine as well. Submitting a duplicate patch 
to trigger Jenkins.

 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004-duplicate.patch, HDFS-7854-004.patch


 This sub-task separates DataStreamer from DFSOutputStream. The new 
 DataStreamer will accept packets and write them to remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7928) Scanning blocks from disk during rolling upgrade startup takes a lot of time if disks are busy

2015-03-13 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-7928:
-
Status: Open  (was: Patch Available)

Cancelling the patch to address Daryn's comments.

 Scanning blocks from disk during rolling upgrade startup takes a lot of time 
 if disks are busy
 --

 Key: HDFS-7928
 URL: https://issues.apache.org/jira/browse/HDFS-7928
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah
 Attachments: HDFS-7928.patch


 We observed this issue in a rolling upgrade to 2.6.x on one of our clusters.
 One of the disks was very busy, and it took a long time to scan that disk 
 compared to the other disks.
 The sar (System Activity Reporter) data showed that the particular disk was 
 very busy performing IO operations.
 Requesting an improvement to the datanode rolling upgrade:
 during shutdown, we can persist the whole volume map on disk and let the 
 datanode read that file and rebuild the volume map during startup after the 
 rolling upgrade.
 This would not require the datanode process to scan all the disks and read 
 the blocks, and would significantly improve datanode startup time.
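 A minimal sketch of the persist-and-reload idea, assuming a simplified block-id to length map; the real volume map also carries generation stamps, storage IDs and file locations.
{code}
import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Sketch: persist a volume map at shutdown and reload it at startup
// instead of rescanning every disk.
public class VolumeMapSnapshot {
  static void save(Map<Long, Long> blockLengths, File f) throws IOException {
    try (DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(new FileOutputStream(f)))) {
      out.writeInt(blockLengths.size());
      for (Map.Entry<Long, Long> e : blockLengths.entrySet()) {
        out.writeLong(e.getKey());    // block id
        out.writeLong(e.getValue());  // block length
      }
    }
  }

  static Map<Long, Long> load(File f) throws IOException {
    Map<Long, Long> map = new HashMap<>();
    try (DataInputStream in = new DataInputStream(
        new BufferedInputStream(new FileInputStream(f)))) {
      int n = in.readInt();
      for (int i = 0; i < n; i++) {
        map.put(in.readLong(), in.readLong());
      }
    }
    return map;
  }
}
{code}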



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7928) Scanning blocks from disk during rolling upgrade startup takes a lot of time if disks are busy

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360840#comment-14360840
 ] 

Hadoop QA commented on HDFS-7928:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704432/HDFS-7928.patch
  against trunk revision 387f271.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
  
org.apache.hadoop.hdfs.server.namenode.TestProcessCorruptBlocks
  
org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
  org.apache.hadoop.hdfs.TestDFSUpgrade
  org.apache.hadoop.hdfs.TestAppendSnapshotTruncate

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9875//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9875//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9875//console

This message is automatically generated.

 Scanning blocks from disk during rolling upgrade startup takes a lot of time 
 if disks are busy
 --

 Key: HDFS-7928
 URL: https://issues.apache.org/jira/browse/HDFS-7928
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah
 Attachments: HDFS-7928.patch


 We observed this issue in a rolling upgrade to 2.6.x on one of our clusters.
 One of the disks was very busy, and it took a long time to scan that disk 
 compared to the other disks.
 The sar (System Activity Reporter) data showed that the particular disk was 
 very busy performing IO operations.
 Requesting an improvement to the datanode rolling upgrade:
 during shutdown, we can persist the whole volume map on disk and let the 
 datanode read that file and rebuild the volume map during startup after the 
 rolling upgrade.
 This would not require the datanode process to scan all the disks and read 
 the blocks, and would significantly improve datanode startup time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient

2015-03-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360878#comment-14360878
 ] 

Kihwal Lee commented on HDFS-7435:
--

+1 I also ran TestDatanodeManager multiple times on my machine with this patch 
without any failure.


 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with a default capacity of 10.  A block report containing tens or 
 hundreds of thousands of longs (3 for each replica) is extremely expensive, 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields boxes the primitive longs, which must then be unboxed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360791#comment-14360791
 ] 

Hudson commented on HDFS-7926:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7317 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7317/])
HDFS-7926. NameNode implementation of ClientProtocol.truncate(..) is not 
idempotent. Contributed by Tsz Wo Nicholas Sze (brandonli: rev 
f446669afb5c3d31a00c65449f27088b39e11ae3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If the dfsclient drops the first response of a truncate RPC call, the retry 
 via the retry cache will fail with "DFSClient ... is already the current lease 
 holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol, but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient

2015-03-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360854#comment-14360854
 ] 

Daryn Sharp commented on HDFS-7435:
---

The retry cache test failure is HDFS-7524.  I cannot reproduce the DN manager 
timeout, and it's impossible to tell from pre-commit why it failed (no logs).  
The test normally runs in ~10s, but took 3 minutes during a prior successful 
run, so maybe the build machine is overwhelmed and on the edge of timing out.

 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with a default capacity of 10.  A block report containing tens or 
 hundreds of thousands of longs (3 for each replica) is extremely expensive, 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields boxes the primitive longs, which must then be unboxed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2605) CHANGES.txt has two Release 0.21.1 sections

2015-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361075#comment-14361075
 ] 

Hudson commented on HDFS-2605:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7320 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7320/])
HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt. 
Contributed by Allen Wittenauer. (shv: rev 
dfd32017001e6902829671dc8cc68afbca61e940)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 CHANGES.txt has two Release 0.21.1 sections
 -

 Key: HDFS-2605
 URL: https://issues.apache.org/jira/browse/HDFS-2605
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Allen Wittenauer
 Attachments: HDFS-2605-01.patch, HDFS-2605.patch


 CHANGES.txt in hdfs-project has 2 sections titled "Release 0.21.1 - 
 Unreleased". They are not identical and should be merged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7903) Cannot recover block after truncate and delete snapshot

2015-03-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7903:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this. Thank you Plamen.

 Cannot recover block after truncate and delete snapshot
 ---

 Key: HDFS-7903
 URL: https://issues.apache.org/jira/browse/HDFS-7903
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Plamen Jeliazkov
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7903.1.patch, HDFS-7903.2.patch, HDFS-7903.patch, 
 testMultipleTruncate.patch


 # Create a file.
 # Create a snapshot.
 # Truncate the file in the middle of a block.
 # Delete the snapshot.
 The block cannot be recovered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7369) Erasure coding: distribute recovery work for striped blocks to DataNode

2015-03-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361131#comment-14361131
 ] 

Kai Zheng commented on HDFS-7369:
-

Thanks Zhe for the update. Some comments.

1. Looking at changes like the one below, I'm wondering if we could have an 
abstract concept covering the two cases, replication and erasure-coding 
recovery. With that, something like {{neededReplications}} could be better 
named. However, I'm not sure how much change that would incur. You might be 
thinking that recovery covers replication, but recovery is mainly discussed, 
mentioned, and relevant in the EC context and in this issue's title.
{code}
-   * Scan blocks in {@link #neededReplications} and assign replication
-   * work to data-nodes they belong to.
+   * Scan blocks in {@link #neededReplications} and assign recovery
+   * (replication or erasure coding) work to data-nodes they belong to.
...
-  int computeReplicationWork(int blocksToProcess) {
+  int computeBlockRecoveryWork(int blocksToProcess) {
{code}
2. Why do we have this? Are you using Java 8?
{code}
-containingNodes = new ArrayList<DatanodeDescriptor>();
-List<DatanodeStorageInfo> liveReplicaNodes = new ArrayList<DatanodeStorageInfo>();
+containingNodes = new ArrayList<>();
+List<DatanodeStorageInfo> liveReplicaNodes = new ArrayList<>();
{code}
3. In the code above, could we set an initial capacity for the two array lists?
4. A minor cleanup:
{code}
+//  ErasureCodingWork ecw =
{code}
5. Assuming we reach a conclusion about the EC policy or unified storage policy 
in HDFS-7285, we may need a new issue for the following, separate from HDFS-7337 
(which mainly focuses on support for multiple codecs). Should we open one?
{code}
// TODO: move erasure coding policy to file XAttr (HDFS-7337)
{code}
6. Better to rename {{BlockCodecInfo}}, since the system has many codec concepts.
7. In the test testMissingStripedBlock:
{code}
+assertTrue("There should be 4 outstanding EC tasks", cnt > 0);
{code}
1) Could 4 be a variable like {{expectedRecoveryTasks}}, calculated from 
constants?
2) Then assert {{cnt == expectedRecoveryTasks}}?
8. Regarding the snippet below, I don't have a better idea, and it looks good 
to me. Maybe add some comments there for future optimization?
bq.how to efficiently get the indices of missing blocks. Maybe something like 
the below?


 Erasure coding: distribute recovery work for striped blocks to DataNode
 ---

 Key: HDFS-7369
 URL: https://issues.apache.org/jira/browse/HDFS-7369
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7369-000-part1.patch, HDFS-7369-000-part2.patch, 
 HDFS-7369-001.patch, HDFS-7369-002.patch


 This JIRA updates NameNode to handle background / offline recovery of erasure 
 coded blocks. It includes 2 parts:
 # Extend {{UnderReplicatedBlocks}} to recognize EC blocks and insert them to 
 appropriate priority levels. 
 # Update {{ReplicationMonitor}} to distinguish block codec tasks and send a 
 new DataNode command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7932:
-
Attachment: HDFS-7932.patch

 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
 Attachments: HDFS-7932.patch


 The datanode normally exits within 3 seconds of receiving the 
 {{shutdownDatanode}} command. However, sometimes it doesn't, especially when 
 the IO is busy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7933) fsck should also report decommissioning replicas.

2015-03-13 Thread Jitendra Nath Pandey (JIRA)
Jitendra Nath Pandey created HDFS-7933:
--

 Summary: fsck should also report decommissioning replicas. 
 Key: HDFS-7933
 URL: https://issues.apache.org/jira/browse/HDFS-7933
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Jitendra Nath Pandey


Fsck doesn't count replicas that are on decommissioning nodes. If a block has 
all of its replicas on decommissioning nodes, it will be marked as missing, 
which is alarming for admins, even though the system will replicate them before 
the nodes are decommissioned.
Fsck output should also show decommissioning replicas along with the live 
replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient

2015-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360982#comment-14360982
 ] 

Hudson commented on HDFS-7435:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7318 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7318/])
HDFS-7435. PB encoding of block reports is very inefficient. Contributed by 
Daryn Sharp. (kihwal: rev d324164a51a43d72c02567248bd9f0f12b244a40)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/StorageBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TestOfflineEditsViewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java


 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with default capacity of 10.  A block report containing tens or 
 hundreds of thousand of longs (3 for each replica) is extremely expensive 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields will box the primitive longs which must then be unboxed.
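
To make the cost concrete, here is a small self-contained sketch (illustrative names, not the actual {{BlockListAsLongs}} code) contrasting the boxed {{ArrayList}} decode path described above with a pre-sized primitive decode:

{code}
import java.util.ArrayList;
import java.util.List;

public class RepeatedLongDecodeSketch {
  // Boxed decode: starts at the default capacity of 10, reallocs as it grows,
  // and creates one Long object per value.
  static List<Long> decodeBoxed(long[] wire) {
    List<Long> out = new ArrayList<>();
    for (long v : wire) {
      out.add(v);               // autoboxing + occasional array copy
    }
    return out;
  }

  // Primitive decode: one allocation sized from the known element count,
  // no boxing at all.
  static long[] decodePrimitive(long[] wire) {
    long[] out = new long[wire.length];
    System.arraycopy(wire, 0, out, 0, wire.length);
    return out;
  }

  public static void main(String[] args) {
    long[] report = new long[300000];   // e.g. 100k replicas x 3 longs each
    System.out.println(decodeBoxed(report).size());
    System.out.println(decodePrimitive(report).length);
  }
}
{code}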



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361160#comment-14361160
 ] 

Kihwal Lee commented on HDFS-7932:
--

Here is an example of a slow shutdown.  Instead of waiting for the thread group 
to clear, the datanode can continue with the rest of the shutdown after interrupting it.

{noformat}
2015-03-10 06:16:39,829 [Thread-2647495] INFO mortbay.log: Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2015-03-10 06:16:39,937 [Thread-2647495] INFO datanode.DataNode: Waiting for 
threadgroup to exit, active threads is 1
2015-03-10 06:16:39,939 [Thread-2647495] INFO datanode.DataNode: Waiting for 
threadgroup to exit, active threads is 1
2015-03-10 06:16:39,942 [Thread-2647495] INFO datanode.DataNode: Waiting for 
threadgroup to exit, active threads is 1
...
2015-03-10 06:16:44,076 [Thread-2647495] INFO datanode.DataNode: Waiting for 
threadgroup to exit, active threads is 1
2015-03-10 06:16:44,580 [main] WARN datanode.DataNode: Exiting Datanode
2015-03-10 06:16:44,772 [main] INFO util.ExitUtil: Exiting with status 0
2015-03-10 06:16:44,775 [Thread-2] INFO datanode.DataNode: SHUTDOWN_MSG: 
/
SHUTDOWN_MSG: Shutting down DataNode at xxx
/
2015-03-10 06:16:45,077 [Thread-2647495] INFO datanode.DataNode: Waiting for 
threadgroup to exit, active threads is 1
2015-03-10 06:16:46,078 [Thread-2647495] INFO datanode.DataNode: Waiting for 
threadgroup to exit, active threads is 1
{noformat}
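
A minimal sketch of that idea, assuming a bounded wait is acceptable; the method and parameter names are placeholders, not the actual DataNode shutdown code:

{code}
public class BoundedThreadGroupShutdownSketch {
  // Interrupt the worker threads and wait only a bounded time before moving
  // on with the rest of the shutdown, instead of looping until the group
  // drains completely.
  static void stopWorkers(ThreadGroup threadGroup, long maxWaitMs) {
    if (threadGroup == null) {
      return;
    }
    threadGroup.interrupt();
    long deadline = System.currentTimeMillis() + maxWaitMs;
    while (threadGroup.activeCount() > 0
        && System.currentTimeMillis() < deadline) {
      try {
        Thread.sleep(100);        // brief, bounded wait between checks
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        break;
      }
    }
    // Continue with the remaining shutdown steps even if a straggler thread
    // is still winding down; it has already been interrupted.
  }
}
{code}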

 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee

 Datanode normally exits in 3 seconds after receiving {{shutdownDatanode}} 
 command. However, sometimes it doesn't, especially when the IO is busy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7932:
-
Assignee: Kihwal Lee
  Status: Patch Available  (was: Open)

 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-7932.patch


 Datanode normally exits in 3 seconds after receiving {{shutdownDatanode}} 
 command. However, sometimes it doesn't, especially when the IO is busy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2605) CHANGES.txt has two Release 0.21.1 sections

2015-03-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2605:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this. Thank you Allen.

 CHANGES.txt has two Release 0.21.1 sections
 -

 Key: HDFS-2605
 URL: https://issues.apache.org/jira/browse/HDFS-2605
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Allen Wittenauer
 Fix For: 2.7.0

 Attachments: HDFS-2605-01.patch, HDFS-2605.patch


 CHANGES.txt in hdfs-project has 2 sections titled Release 0.21.1 - 
 Unreleased. They are not identical, should be merged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7435) PB encoding of block reports is very inefficient

2015-03-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7435:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks everyone for reviews and Daryn for working on the patch.

 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, HDFS-7435.patch, 
 HDFS-7435.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with default capacity of 10.  A block report containing tens or 
 hundreds of thousand of longs (3 for each replica) is extremely expensive 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields will box the primitive longs which must then be unboxed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7931) Spurious Error message Could not find uri with key [dfs.encryption.key.provider.uri] to create a key appears even when Encryption is disabled

2015-03-13 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7931:
--
Description: 
The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
{{DFSClient#getKeyProvider()}}, which attempts to get a provider from the 
{{KeyProviderCache}}, but since the required key, 
*dfs.encryption.key.provider.uri*, is not present (due to encryption being 
disabled), it throws an exception.

{noformat}
2015-03-11 23:55:47,849 [JobControl] ERROR 
org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
[dfs.encryption.key.provider.uri] to create a keyProvider !!
{noformat}

 Spurious Error message Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a key appears even when 
 Encryption is disabled
 

 Key: HDFS-7931
 URL: https://issues.apache.org/jira/browse/HDFS-7931
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor

 The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
 {{DFSClient#getKeyProvider()}} which attempts to get a provider from the 
 {{KeyProviderCache}} but since the required key, 
 *dfs.encryption.key.provider.uri* is not present (due to encryption being 
 disabled), it throws an exception.
 {noformat}
 2015-03-11 23:55:47,849 [JobControl] ERROR 
 org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a keyProvider !!
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7931) Spurious Error message Could not find uri with key [dfs.encryption.key.provider.uri] to create a key appears even when Encryption is disabled

2015-03-13 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned HDFS-7931:
-

Assignee: Arun Suresh

 Spurious Error message Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a key appears even when 
 Encryption is disabled
 

 Key: HDFS-7931
 URL: https://issues.apache.org/jira/browse/HDFS-7931
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: The {{addDelegationTokens}} method in 
 {{DistributedFileSystem}} calls {{DFSClient#getKeyProvider()}} which attempts 
 to get a provider from the {{KeyProviderCache}} but since the required key, 
 *dfs.encryption.key.provider.uri* is not present (due to encryption being 
 disabled), it throws an exception.
 {noformat}
 2015-03-11 23:55:47,849 [JobControl] ERROR 
 org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a keyProvider !!
 {noformat}
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7931) Spurious Error message Could not find uri with key [dfs.encryption.key.provider.uri] to create a key appears even when Encryption is disabled

2015-03-13 Thread Arun Suresh (JIRA)
Arun Suresh created HDFS-7931:
-

 Summary: Spurious Error message Could not find uri with key 
[dfs.encryption.key.provider.uri] to create a key appears even when Encryption 
is disabled
 Key: HDFS-7931
 URL: https://issues.apache.org/jira/browse/HDFS-7931
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: The {{addDelegationTokens}} method in 
{{DistributedFileSystem}} calls {{DFSClient#getKeyProvider()}} which attempts 
to get a provider from the {{KeyProviderCache}} but since the required key, 
*dfs.encryption.key.provider.uri* is not present (due to encryption being 
disabled), it throws an exception.

{noformat}
2015-03-11 23:55:47,849 [JobControl] ERROR 
org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
[dfs.encryption.key.provider.uri] to create a keyProvider !!
{noformat}
Reporter: Arun Suresh
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7931) Spurious Error message Could not find uri with key [dfs.encryption.key.provider.uri] to create a key appears even when Encryption is disabled

2015-03-13 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7931:
--
Environment: (was: The {{addDelegationTokens}} method in 
{{DistributedFileSystem}} calls {{DFSClient#getKeyProvider()}} which attempts 
to get a provider from the {{KeyProviderCache}} but since the required key, 
*dfs.encryption.key.provider.uri* is not present (due to encryption being 
disabled), it throws an exception.

{noformat}
2015-03-11 23:55:47,849 [JobControl] ERROR 
org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
[dfs.encryption.key.provider.uri] to create a keyProvider !!
{noformat})

 Spurious Error message Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a key appears even when 
 Encryption is disabled
 

 Key: HDFS-7931
 URL: https://issues.apache.org/jira/browse/HDFS-7931
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7903) Cannot recover block after truncate and delete snapshot

2015-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361041#comment-14361041
 ] 

Hudson commented on HDFS-7903:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7319 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7319/])
HDFS-7903. Cannot recover block after truncate and delete snapshot. Contributed 
by Plamen Jeliazkov. (shv: rev 6acb7f2110897264241df44d564db2f85260348f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java


 Cannot recover block after truncate and delete snapshot
 ---

 Key: HDFS-7903
 URL: https://issues.apache.org/jira/browse/HDFS-7903
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Plamen Jeliazkov
Priority: Blocker
 Attachments: HDFS-7903.1.patch, HDFS-7903.2.patch, HDFS-7903.patch, 
 testMultipleTruncate.patch


 # Create a file.
 # Create a snapshot.
 # Truncate the file in the middle of a block.
 # Delete the snapshot.
 The block cannot be recovered.
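
For reference, a hedged repro sketch of those four steps against a {{MiniDFSCluster}}; the file size, truncate offset, and paths are made-up values, not taken from the attached patches:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TruncateSnapshotReproSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      DistributedFileSystem dfs = cluster.getFileSystem();
      Path dir = new Path("/test");
      Path file = new Path(dir, "f");
      dfs.mkdirs(dir);

      // 1. Create a file with some data in its single block.
      try (FSDataOutputStream out = dfs.create(file)) {
        out.write(new byte[1024]);
      }

      // 2. Create a snapshot of the parent directory.
      dfs.allowSnapshot(dir);
      dfs.createSnapshot(dir, "s0");

      // 3. Truncate the file in the middle of the block; false means block
      //    recovery is still pending.
      boolean done = dfs.truncate(file, 512);
      System.out.println("truncate finished immediately: " + done);

      // 4. Delete the snapshot while that recovery may still be in flight.
      dfs.deleteSnapshot(dir, "s0");
    } finally {
      cluster.shutdown();
    }
  }
}
{code}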



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-13 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-7932:


 Summary: Speed up the shutdown of datanode during rolling upgrade
 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


Datanode normally exits in 3 seconds after receiving {{shutdownDatanode}} 
command. However, sometimes it doesn't, especially when the IO is busy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7932) Speed up the shutdown of datanode during rolling upgrade

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361165#comment-14361165
 ] 

Hadoop QA commented on HDFS-7932:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704514/HDFS-7932.patch
  against trunk revision 6fdef76.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9878//console

This message is automatically generated.

 Speed up the shutdown of datanode during rolling upgrade
 

 Key: HDFS-7932
 URL: https://issues.apache.org/jira/browse/HDFS-7932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-7932.patch


 Datanode normally exits in 3 seconds after receiving {{shutdownDatanode}} 
 command. However, sometimes it doesn't, especially when the IO is busy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7369) Erasure coding: distribute recovery work for striped blocks to DataNode

2015-03-13 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361176#comment-14361176
 ] 

Zhe Zhang commented on HDFS-7369:
-

Thanks for the review Kai!

# Naming: Thanks for bringing it up. I thought about it but found there are 
many more places to change (listed below). We should have a separate JIRA to 
discuss and handle that.
#* {{UnderReplicatedBlocks}} (and instances of it like {{neededReplications}}). 
#* {{PendingReplicationBlocks}}
#* {{ReplicationMonitor}}
# Type inference: I'm using Java 7, which supports type inference. Since we are 
already at 2.7 I think it's safe to use Java 7 features.
# bq. In the codes for above, could we have initial capacity value for the two 
array lists ?
The default initial capacity is 10; I think there is no need to change it.

I will change the other minor places in the next rev. How about {{BlockCodecInfo}} -> 
{{BlockECRecoveryInfo}} and {{BlockCodecCommand}} -> {{BlockECRecoveryCommand}}?

Regarding {{testMissingStripedBlock}}: if we have 6 DNs in total and kill 1 of 
them, it's uncertain how many blocks are affected. There should be at least 1 
because of the balanced placement algorithm. But likely  4. I'll try to design a 
more accurate test.

 Erasure coding: distribute recovery work for striped blocks to DataNode
 ---

 Key: HDFS-7369
 URL: https://issues.apache.org/jira/browse/HDFS-7369
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7369-000-part1.patch, HDFS-7369-000-part2.patch, 
 HDFS-7369-001.patch, HDFS-7369-002.patch


 This JIRA updates NameNode to handle background / offline recovery of erasure 
 coded blocks. It includes 2 parts:
 # Extend {{UnderReplicatedBlocks}} to recognize EC blocks and insert them to 
 appropriate priority levels. 
 # Update {{ReplicationMonitor}} to distinguish block codec tasks and send a 
 new DataNode command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7838) Expose truncate API for libhdfs

2015-03-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361279#comment-14361279
 ] 

Colin Patrick McCabe commented on HDFS-7838:


The libhdfs part looks correct, but you didn't add a stub function to 
libwebhdfs that returns {{ENOTSUP}}, as I asked.

I understand that you are planning on adding full support in libwebhdfs in a 
follow-on JIRA, but it would be better to introduce the stub function in this 
JIRA, so that people would not get linker failures when trying to link a 
binary using {{hdfsTruncate}} against {{libwebhdfs}}.  {{libwebhdfs}} must 
implement every function in {{hdfs.h}}.

 Expose truncate API for libhdfs
 ---

 Key: HDFS-7838
 URL: https://issues.apache.org/jira/browse/HDFS-7838
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-7838.001.patch, HDFS-7838.002.patch, 
 HDFS-7838.003.patch


 It's good to expose truncate in libhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-13 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361289#comment-14361289
 ] 

Brandon Li commented on HDFS-5523:
--

[~Rosa], I guess you posted your comments to the wrong JIRA.

 Support multiple subdirectory exports in HDFS NFS gateway 
 --

 Key: HDFS-5523
 URL: https://issues.apache.org/jira/browse/HDFS-5523
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Reporter: Brandon Li

 Currently, the HDFS NFS Gateway only supports configuring a single 
 subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
 Supporting multiple subdirectory exports can make data and security 
 management easier when using the HDFS NFS Gateway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-1841) Enforce read-only permissions in FUSE open()

2015-03-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-1841.

Resolution: Duplicate

duplicate of HDFS-4139 from 2012

 Enforce read-only permissions in FUSE open()
 

 Key: HDFS-1841
 URL: https://issues.apache.org/jira/browse/HDFS-1841
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 0.20.2
 Environment: Linux 2.6.35
Reporter: Brian Bloniarz
Priority: Minor
 Attachments: patch.fuse-dfs, patch.fuse-dfs.kernel


 fuse-dfs currently allows files to be created on a read-only filesystem:
 $ fuse_dfs_wrapper.sh dfs://example.com:8020 ro ~/hdfs
 $ touch ~/hdfs/foobar
 Attached is a simple patch, which does two things:
 1) Checks the read_only flag inside dfs_open().
 2) Passes the read-only mount option to FUSE when ro is specified on the 
 commandline. This is probably a better long-term solution; the kernel will 
 enforce the read-only operations without it being necessary inside the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7929) inotify unable to fetch pre-upgrade edit log segments once upgrade starts

2015-03-13 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7929:

Attachment: HDFS-7929-000.patch

 inotify unable to fetch pre-upgrade edit log segments once upgrade starts
 --

 Key: HDFS-7929
 URL: https://issues.apache.org/jira/browse/HDFS-7929
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7929-000.patch


 inotify is often used to periodically poll HDFS events. However, once an HDFS 
 upgrade has started, edit logs are moved to /previous on the NN, which is not 
 accessible. Moreover, once the upgrade is finalized /previous is currently 
 lost forever.
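
For context, a minimal sketch of the polling pattern this affects, assuming the 2.7-era inotify API where {{take()}} returns an {{EventBatch}} (older releases return single events); the NameNode URI is a placeholder:

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class InotifyPollSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The NN URI is a placeholder.
    HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), conf);
    DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
    while (true) {
      EventBatch batch = stream.take();          // blocks until events arrive
      for (Event event : batch.getEvents()) {
        System.out.println(event.getEventType());
      }
      // Reading older transactions requires the NN to still serve the edit
      // log segments that contain them; during an upgrade those segments are
      // moved to /previous, which is what breaks this pattern.
    }
  }
}
{code}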



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7929) inotify unable to fetch pre-upgrade edit log segments once upgrade starts

2015-03-13 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7929:

Status: Patch Available  (was: Open)

Very rough initial patch to demo the idea. 

[~cmccabe] Could you take a look and see if it's going in the right direction?

 inotify unable to fetch pre-upgrade edit log segments once upgrade starts
 --

 Key: HDFS-7929
 URL: https://issues.apache.org/jira/browse/HDFS-7929
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7929-000.patch


 inotify is often used to periodically poll HDFS events. However, once an HDFS 
 upgrade has started, edit logs are moved to /previous on the NN, which is not 
 accessible. Moreover, once the upgrade is finalized /previous is currently 
 lost forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361484#comment-14361484
 ] 

Hudson commented on HDFS-7915:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7323 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7323/])
Revert HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot 
and fail to tell the DFSClient about it because of a network error (cmccabe) 
(jenkins didn't run yet) (cmccabe: rev 32741cf3d25d85a92e3deb11c302cc2a718d71dd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ShortCircuitRegistry.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch, HDFS-7915.006.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.
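
To make the scoping issue concrete, here is a toy sketch with made-up names ({{allocateSlot}}, {{sendResponse}}, {{releaseSlot}} are placeholders, not the real {{DataXceiver}} or {{ShortCircuitRegistry}} methods). The point is that the cleanup has to cover the reply to the client as well as the allocation:

{code}
import java.io.IOException;

// Placeholder interfaces standing in for the real registry and client peer.
class SlotAllocationSketch {
  interface Registry {
    String allocateSlot();             // part 1: mark the slot as used
    void releaseSlot(String slotId);   // undo part 1
  }

  interface Peer {
    void sendResponse(String slotId) throws IOException;  // part 2: tell the client
  }

  static void requestSlot(Registry registry, Peer peer) throws IOException {
    String slotId = registry.allocateSlot();
    boolean replied = false;
    try {
      peer.sendResponse(slotId);       // may fail with a network error
      replied = true;
    } finally {
      if (!replied) {
        // If the cleanup does not cover the reply, a failed send leaves the
        // slot allocated on the server but unknown to the DFSClient.
        registry.releaseSlot(slotId);
      }
    }
  }
}
{code}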



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7697) Document the scope of the PB OIV tool

2015-03-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361610#comment-14361610
 ] 

Lei (Eddy) Xu commented on HDFS-7697:
-

Thanks for bringing it up, [~wheat9]. I will post the document no later than 
early next week.

 Document the scope of the PB OIV tool
 -

 Key: HDFS-7697
 URL: https://issues.apache.org/jira/browse/HDFS-7697
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai

 As per HDFS-6673, we need to document the applicable scope of the new PB OIV 
 tool so that it won't catch users by surprise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361480#comment-14361480
 ] 

Hudson commented on HDFS-7915:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7322 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7322/])
HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and fail 
to tell the DFSClient about it because of a network error (cmccabe) (cmccabe: 
rev 5aa892ed486d42ae6b94c4866b92cd2b382ea640)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ShortCircuitRegistry.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch, HDFS-7915.006.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361481#comment-14361481
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7926:
---

Idempotent means that applying the same operation multiple times gives the 
same result.  If there is an append in the middle, the retry could get a 
different result.

E.g. getPermission is idempotent.  However, if there is a setPermission (or 
delete, rename, etc.) in the middle, the retry of getPermission could get a 
different result.
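
A small illustration of that point using plain {{FileSystem}} calls; the path and permission values are made up, and the second read stands in for the retried RPC:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class IdempotentRetrySketch {
  public static void main(String[] args) throws Exception {
    // Uses whatever fs.defaultFS is configured; the path is made up.
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/retry-demo");
    if (!fs.exists(p)) {
      fs.create(p).close();
    }

    // First call succeeds but pretend its response was dropped.
    FsPermission first = fs.getFileStatus(p).getPermission();

    // Another client changes the permission before the retry arrives.
    fs.setPermission(p, new FsPermission((short) 0600));

    // The retried call is still a valid getPermission, but it legitimately
    // observes a different value because of the interleaved mutation.
    FsPermission retry = fs.getFileStatus(p).getPermission();
    System.out.println(first + " vs " + retry);
  }
}
{code}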

 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.7.0

 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If dfsclient drops the first response of a truncate RPC call, the retry by 
 retry cache will fail with DFSClient ... is already the current lease 
 holder.  The truncate RPC is annotated as @Idempotent in ClientProtocol but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7929) inotify unable to fetch pre-upgrade edit log segments once upgrade starts

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361568#comment-14361568
 ] 

Hadoop QA commented on HDFS-7929:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704542/HDFS-7929-000.patch
  against trunk revision 6fdef76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9883//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9883//console

This message is automatically generated.

 inotify unable to fetch pre-upgrade edit log segments once upgrade starts
 --

 Key: HDFS-7929
 URL: https://issues.apache.org/jira/browse/HDFS-7929
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7929-000.patch


 inotify is often used to periodically poll HDFS events. However, once an HDFS 
 upgrade has started, edit logs are moved to /previous on the NN, which is not 
 accessible. Moreover, once the upgrade is finalized /previous is currently 
 lost forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361567#comment-14361567
 ] 

Hadoop QA commented on HDFS-7915:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704540/HDFS-7915.006.patch
  against trunk revision 6fdef76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9884//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9884//console

This message is automatically generated.

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch, HDFS-7915.006.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6946) TestBalancerWithSaslDataTransfer fails in trunk

2015-03-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6946:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

 TestBalancerWithSaslDataTransfer fails in trunk
 ---

 Key: HDFS-6946
 URL: https://issues.apache.org/jira/browse/HDFS-6946
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Assignee: Stephen Chu
Priority: Minor
 Attachments: HDFS-6946.1.patch, testBalancer0Integrity-failure.log


 From build #1849 :
 {code}
 REGRESSION:  
 org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity
 Error Message:
 Cluster failed to reached expected values of totalSpace (current: 750, 
 expected: 750), or usedSpace (current: 140, expected: 150), in more than 
 4 msec.
 Stack Trace:
 java.util.concurrent.TimeoutException: Cluster failed to reached expected 
 values of totalSpace (current: 750, expected: 750), or usedSpace (current: 
 140, expected: 150), in more than 4 msec.
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.waitForHeartBeat(TestBalancer.java:253)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:578)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:551)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:437)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:645)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0Internal(TestBalancer.java:759)
 at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity(TestBalancerWithSaslDataTransfer.java:34)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7915:
---
Status: Patch Available  (was: Reopened)

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch, HDFS-7915.006.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-03-13 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361536#comment-14361536
 ] 

Karthik Kambatla commented on HDFS-7858:


If possible, it would be nice to make the solution here accessible to YARN as 
well. 

Simultaneously connecting to all the masters (NNs in HDFS and RMs in YARN) 
might work most of the time. How do we plan to handle a split-brain? In YARN, 
we don't use an explicit fencing mechanism. IIRC, one is not required to 
configure a fencing mechanism when using QJM? 


 Improve HA Namenode Failover detection on the client
 

 Key: HDFS-7858
 URL: https://issues.apache.org/jira/browse/HDFS-7858
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arun Suresh
Assignee: Arun Suresh
 Attachments: HDFS-7858.1.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, 
 HDFS-7858.3.patch


 In an HA deployment, clients are configured with the hostnames of both the 
 Active and Standby Namenodes. Clients will first try one of the NNs 
 (non-deterministically) and, if it's a standby NN, it will respond to the 
 client to retry the request on the other Namenode.
 If the client happens to talk to the Standby first, and the Standby is 
 undergoing some GC or is busy, then those clients might not get a response 
 soon enough to try the other NN.
 Proposed approach to solve this:
 1) Since ZooKeeper is already used as the failover controller, the clients 
 could talk to ZK and find out which is the active Namenode before contacting 
 it (see the sketch after this description).
 2) Long-lived DFSClients would have a ZK watch configured which fires when 
 there is a failover, so they do not have to query ZK every time to find out 
 the active NN.
 3) Clients can also cache the last active NN in the user's home directory 
 (~/.lastNN) so that short-lived clients can try that Namenode first before 
 querying ZK.
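
A minimal sketch of approaches 1) and 2), using the plain ZooKeeper client under stated assumptions: the znode path and the idea that it holds a readable host record are illustrative (the real ZKFC breadcrumb stores a serialized protobuf), so this is a sketch of the lookup-plus-watch pattern rather than a proposed implementation:

{code}
import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ActiveNnLookupSketch {
  // Assumed znode path; the real ZKFC breadcrumb stores a serialized protobuf
  // record, not a plain host:port string.
  private static final String ACTIVE_ZNODE = "/hadoop-ha/mycluster/ActiveBreadCrumb";

  public static void main(String[] args) throws Exception {
    Watcher watcher = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // A node change here means a failover happened; a long-lived client
        // would re-read the znode instead of polling ZK every time.
        System.out.println("ZK event: " + event.getType());
      }
    };
    ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 30000, watcher);
    byte[] data = zk.getData(ACTIVE_ZNODE, watcher, null);   // also re-arms the watch
    System.out.println("active NN record: " + new String(data, StandardCharsets.UTF_8));
    zk.close();
  }
}
{code}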



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361552#comment-14361552
 ] 

Yi Liu commented on HDFS-7926:
--

You have a point there, thanks.

 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.7.0

 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If dfsclient drops the first response of a truncate RPC call, the retry by 
 retry cache will fail with DFSClient ... is already the current lease 
 holder.  The truncate RPC is annotated as @Idempotent in ClientProtocol but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7915:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

committed. thanks, guys.

I will file a follow-up to look into whether we can do more logging.  Note that in 
the specific case where we caught this bug (writeArray failing), we actually 
got as much logging as possible from the DataNode.  Everything we needed was 
logged there, including the failed domain socket I/O stack traces.  Similarly, 
I can't think of any DFSClient logs we needed and didn't get.  We got the 
domain socket I/O stack traces there as well.  What we don't know is why the 
write failed, but we logged as much information as the kernel gave us (it 
returned EAGAIN, which indicates a timeout).

In general socket reads and writes can fail, and HDFS needs to be able to 
handle that.  The cause of the timeout in the case we saw is outside the scope 
of this JIRA.

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch, HDFS-7915.006.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HDFS-7915:


oops, I just saw that jenkins didn't run on v6 yet.  sigh...

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.7.0

 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch, HDFS-7915.006.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361210#comment-14361210
 ] 

Hadoop QA commented on HDFS-7915:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704525/HDFS-7915.005.patch
  against trunk revision 6fdef76.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9879//console

This message is automatically generated.

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5356) MiniDFSCluster should close all open FileSystems when shutdown()

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361258#comment-14361258
 ] 

Hadoop QA commented on HDFS-5356:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704235/HDFS-5356-7.patch
  against trunk revision 6fdef76.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9880//console

This message is automatically generated.

 MiniDFSCluster should close all open FileSystems when shutdown()
 ---

 Key: HDFS-5356
 URL: https://issues.apache.org/jira/browse/HDFS-5356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: haosdent
Assignee: Rakesh R
Priority: Critical
 Attachments: HDFS-5356-1.patch, HDFS-5356-2.patch, HDFS-5356-3.patch, 
 HDFS-5356-4.patch, HDFS-5356-5.patch, HDFS-5356-6.patch, HDFS-5356-7.patch, 
 HDFS-5356.patch


 After adding some metrics functions to DFSClient, I found that some unit tests 
 related to metrics fail. Because MiniDFSCluster never closes the open 
 FileSystems, DFSClients stay alive after MiniDFSCluster shutdown(). The 
 metrics of those DFSClients still exist in DefaultMetricsSystem, and this makes 
 other unit tests fail.
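
A hedged sketch of the cleanup from the test's side, assuming the test owns its own teardown today; the ask in this JIRA is for {{MiniDFSCluster#shutdown()}} to do the equivalent itself:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class ClusterTeardownSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      fs.mkdirs(new Path("/example"));
    } finally {
      // Today each test has to do this itself so cached DFSClients (and their
      // metrics) do not leak into the next test; the request here is for
      // MiniDFSCluster.shutdown() to take care of it.
      FileSystem.closeAll();
      cluster.shutdown();
    }
  }
}
{code}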



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Rosa Ali (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361260#comment-14361260
 ] 

Rosa Ali commented on HDFS-7915:


Please tell me how i can use batch file HDFS-1783
 https://issues.apache.org/jira/browse/HDFS-1783
Please

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361264#comment-14361264
 ] 

Colin Patrick McCabe commented on HDFS-7915:


I did answer that comment, here: 
https://issues.apache.org/jira/browse/HDFS-7915?focusedCommentId=14359377page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14359377

The summary is that it would not be easier because we'd have to start worrying 
about locking.  Both the hash maps and the objects inside them are mutable and 
you need a lock to access them.  The visitor hides this detail, but we'd have 
to worry about it with accessors.

thanks
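
A toy illustration of that locking argument, with made-up names: the visitor runs while the registry's lock is held, so callers never touch the mutable map unsynchronized, whereas a plain getter would hand it out unlocked:

{code}
import java.util.HashMap;
import java.util.Map;

// Toy registry: the visitor runs under the registry's lock, a getter would not.
class ShmRegistrySketch {
  interface Visitor {
    void accept(Map<String, Integer> slotsBySegment);   // placeholder value type
  }

  private final Map<String, Integer> slotsBySegment = new HashMap<String, Integer>();

  synchronized void register(String segmentId, int slotCount) {
    slotsBySegment.put(segmentId, slotCount);
  }

  synchronized void visit(Visitor visitor) {
    // The visitor sees a consistent view; the lock is only released after it
    // returns, so there is no window for concurrent mutation of the map.
    visitor.accept(slotsBySegment);
  }
}
{code}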

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-13 Thread Rosa Ali (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361259#comment-14361259
 ] 

Rosa Ali commented on HDFS-5523:


Please tell me how i can use batch file HDFS-1783
 https://issues.apache.org/jira/browse/HDFS-1783
Please

 Support multiple subdirectory exports in HDFS NFS gateway 
 --

 Key: HDFS-5523
 URL: https://issues.apache.org/jira/browse/HDFS-5523
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Reporter: Brandon Li

 Currently, the HDFS NFS Gateway only supports configuring a single 
 subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
 Supporting multiple subdirectory exports can make data and security 
 management easier when using the HDFS NFS Gateway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7369) Erasure coding: distribute recovery work for striped blocks to DataNode

2015-03-13 Thread Rosa Ali (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361263#comment-14361263
 ] 

Rosa Ali commented on HDFS-7369:


Please tell me how i can use batch file HDFS-1783
 https://issues.apache.org/jira/browse/HDFS-1783
Please

 Erasure coding: distribute recovery work for striped blocks to DataNode
 ---

 Key: HDFS-7369
 URL: https://issues.apache.org/jira/browse/HDFS-7369
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7369-000-part1.patch, HDFS-7369-000-part2.patch, 
 HDFS-7369-001.patch, HDFS-7369-002.patch


 This JIRA updates NameNode to handle background / offline recovery of erasure 
 coded blocks. It includes 2 parts:
 # Extend {{UnderReplicatedBlocks}} to recognize EC blocks and insert them to 
 appropriate priority levels. 
 # Update {{ReplicationMonitor}} to distinguish block codec tasks and send a 
 new DataNode command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7915:
---
Attachment: HDFS-7915.006.patch

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch, HDFS-7915.006.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-5193) Unifying HA support in HftpFileSystem, HsftpFileSystem and WebHdfsFileSystem

2015-03-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-5193.
--
Resolution: Won't Fix

As hftp is being phased out, there is little motivation to get this fixed.

 Unifying HA support in HftpFileSystem, HsftpFileSystem and WebHdfsFileSystem
 

 Key: HDFS-5193
 URL: https://issues.apache.org/jira/browse/HDFS-5193
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai

 Recent changes in HDFS-5122 implement the HA support for the WebHDFS client. 
 Similar to WebHDFS client, both HftpFileSystem and HsftpFilesystem access 
 HDFS via HTTP, but their current implementation hinders the implementation of 
 HA support.
 I propose to refactor HftpFileSystem, HsftpFileSystem, and WebHdfsFileSystem 
 to provide unified abstractions to support HA cluster over HTTP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361389#comment-14361389
 ] 

Yi Liu commented on HDFS-7926:
--

Thanks [~szetszwo] for the fix. Actually I agree that {{idempotent}} is good, as 
you said. 
But the troublesome part is the lease check/recovery for truncate; we just want 
to make sure the truncate retries get the same result. I have found one case 
that the current fix can't cover:
# The client invokes truncate, but it's on a block boundary, so we will not add 
a lease and there is no block recovery. The result should be *true*.
# Meanwhile there is a network issue for the client, and a retry happens after 
some time. 
# Before the retry happens, or before it arrives at the NN, some other client 
invokes append on this file.
# Then the NN receives the retry, so there is a lease failure.
# The truncate is actually successful, but the client sees a failure.


 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.7.0

 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If the dfsclient drops the first response of a truncate RPC call, the retry by 
 the retry cache will fail with "DFSClient ... is already the current lease 
 holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7697) Document the scope of the PB OIV tool

2015-03-13 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361390#comment-14361390
 ] 

Haohui Mai commented on HDFS-7697:
--

I think it is best to get this done before 2.7 is out. A simple change that 
marks the tool as experimental should be sufficient.

 Document the scope of the PB OIV tool
 -

 Key: HDFS-7697
 URL: https://issues.apache.org/jira/browse/HDFS-7697
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai

 As par HDFS-6673, we need to document the applicable scope of the new PB OIV 
 tool so that it won't catch users by surprise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7191) WebHDFS prematurely closes connections under high concurrent loads

2015-03-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-7191.
--
Resolution: Duplicate

HDFS-7279 should fix this problem.

 WebHDFS prematurely closes connections under high concurrent loads
 --

 Key: HDFS-7191
 URL: https://issues.apache.org/jira/browse/HDFS-7191
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Priority: Critical

 We're seeing the DN prematurely close APPEND connections:
 {noformat}
 2014-09-22 23:53:12,721 WARN 
 org.apache.hadoop.hdfs.web.resources.ExceptionHandler: INTERNAL_SERVER_ERROR
 java.nio.channels.CancelledKeyException
 at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55)
 at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59)
 at 
 org.mortbay.io.nio.SelectChannelEndPoint.updateKey(SelectChannelEndPoint.java:325)
 at 
 org.mortbay.io.nio.SelectChannelEndPoint.blockReadable(SelectChannelEndPoint.java:242)
 at 
 org.mortbay.jetty.HttpParser$Input.blockForContent(HttpParser.java:1169)
 at org.mortbay.jetty.HttpParser$Input.read(HttpParser.java:1122)
 at java.io.InputStream.read(InputStream.java:85)
 at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:84)
 at 
 org.apache.hadoop.hdfs.server.datanode.web.resources.DatanodeWebHdfsMethods.put(DatanodeWebHdfsMethods.java:239)
 at 
 org.apache.hadoop.hdfs.server.datanode.web.resources.DatanodeWebHdfsMethods.access$000(DatanodeWebHdfsMethods.java:87)
 at 
 org.apache.hadoop.hdfs.server.datanode.web.resources.DatanodeWebHdfsMethods$1.run(DatanodeWebHdfsMethods.java:205)
 at 
 org.apache.hadoop.hdfs.server.datanode.web.resources.DatanodeWebHdfsMethods$1.run(DatanodeWebHdfsMethods.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at 
 org.apache.hadoop.hdfs.server.datanode.web.resources.DatanodeWebHdfsMethods.put(DatanodeWebHdfsMethods.java:202)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7063) WebHDFS: Avoid using sockets in datanode when the traffic is local

2015-03-13 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361427#comment-14361427
 ] 

Haohui Mai commented on HDFS-7063:
--

It should not be an issue if short circuit read is turned on.

 WebHDFS: Avoid using sockets in datanode when the traffic is local
 --

 Key: HDFS-7063
 URL: https://issues.apache.org/jira/browse/HDFS-7063
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Tsz Wo Nicholas Sze

 When a WebHDFS client accesses a local replica in a Datanode, the Datanode 
 uses DFSClient and connects to itself using a socket via 
 DataTransferProtocol.  The socket connection is unnecessary.  It can be 
 avoided, for example by using PipedInputStream and PipedOutputStream.
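For illustration only, here is a minimal, self-contained sketch of the 
PipedInputStream/PipedOutputStream pattern mentioned above, using plain JDK 
classes; this is not the actual DataNode code:
{code}
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class LocalPipeSketch {
  public static void main(String[] args) throws Exception {
    // The two ends are connected in memory, so no socket is involved.
    final PipedOutputStream out = new PipedOutputStream();
    PipedInputStream in = new PipedInputStream(out);

    Thread writer = new Thread(new Runnable() {
      @Override
      public void run() {
        try {
          out.write("local block data".getBytes("UTF-8"));
          out.close();
        } catch (IOException e) {
          e.printStackTrace();
        }
      }
    });
    writer.start();

    byte[] buf = new byte[64];
    int n = in.read(buf);
    System.out.println(new String(buf, 0, n, "UTF-8"));
    writer.join();
    in.close();
  }
}
{code}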



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7922) ShortCircuitCache#close is not releasing ScheduledThreadPoolExecutors

2015-03-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361255#comment-14361255
 ] 

Colin Patrick McCabe commented on HDFS-7922:


Note that this is never a problem in production, only in unit tests.

The test failures look related; did you get a chance to look at them?

 ShortCircuitCache#close is not releasing ScheduledThreadPoolExecutors
 -

 Key: HDFS-7922
 URL: https://issues.apache.org/jira/browse/HDFS-7922
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: 001-HDFS-7922.patch


 ShortCircuitCache has the following executors. It would be good to shutdown 
 these pools during ShortCircuitCache#close to avoid leaks.
 {code}
  /**
   * The executor service that runs the cacheCleaner.
   */
  private final ScheduledThreadPoolExecutor cleanerExecutor
      = new ScheduledThreadPoolExecutor(1, new ThreadFactoryBuilder().
          setDaemon(true).setNameFormat("ShortCircuitCache_Cleaner").
          build());

  /**
   * The executor service that runs the slot releaser.
   */
  private final ScheduledThreadPoolExecutor releaserExecutor
      = new ScheduledThreadPoolExecutor(1, new ThreadFactoryBuilder().
          setDaemon(true).setNameFormat("ShortCircuitCache_SlotReleaser").
          build());
 {code}
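For what it's worth, a self-contained sketch of the general shutdown pattern 
such a close() would need; the class below is a placeholder for illustration, 
not the real ShortCircuitCache:
{code}
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.google.common.util.concurrent.ThreadFactoryBuilder;

public class ExecutorShutdownSketch implements AutoCloseable {
  private final ScheduledThreadPoolExecutor cleanerExecutor =
      new ScheduledThreadPoolExecutor(1, new ThreadFactoryBuilder()
          .setDaemon(true).setNameFormat("Sketch_Cleaner").build());

  private final ScheduledThreadPoolExecutor releaserExecutor =
      new ScheduledThreadPoolExecutor(1, new ThreadFactoryBuilder()
          .setDaemon(true).setNameFormat("Sketch_SlotReleaser").build());

  @Override
  public void close() {
    // Stop accepting new tasks, then wait briefly for queued work to drain.
    cleanerExecutor.shutdown();
    releaserExecutor.shutdown();
    try {
      cleanerExecutor.awaitTermination(30, TimeUnit.SECONDS);
      releaserExecutor.awaitTermination(30, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}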



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361271#comment-14361271
 ] 

Hadoop QA commented on HDFS-7915:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704525/HDFS-7915.005.patch
  against trunk revision 6fdef76.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9881//console

This message is automatically generated.

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361281#comment-14361281
 ] 

Chris Nauroth commented on HDFS-7915:
-

Thanks, Colin.  I had missed that the first time.

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7261) storageMap is accessed without synchronization in DatanodeDescriptor#updateHeartbeatState()

2015-03-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361282#comment-14361282
 ] 

Colin Patrick McCabe commented on HDFS-7261:


thanks, [~brahmareddy].

403   synchronized (storageMap) {
404       failedStorageInfos = new HashSet<DatanodeStorageInfo>(
405           storageMap.values());
406   }
the indentation is off on line 404

similarly, the indentation is off in {{pruneStorageMap}}

 storageMap is accessed without synchronization in 
 DatanodeDescriptor#updateHeartbeatState()
 ---

 Key: HDFS-7261
 URL: https://issues.apache.org/jira/browse/HDFS-7261
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7261-001.patch, HDFS-7261.patch


 Here is the code:
 {code}
   failedStorageInfos = new HashSet<DatanodeStorageInfo>(
       storageMap.values());
 {code}
 In other places, the lock on DatanodeDescriptor.storageMap is held:
 {code}
 synchronized (storageMap) {
   final Collection<DatanodeStorageInfo> storages = storageMap.values();
   return storages.toArray(new DatanodeStorageInfo[storages.size()]);
 }
 {code}
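A small, self-contained illustration of the suggested fix, i.e. taking the same 
monitor before copying the values; the types and names below are placeholders, 
not the DatanodeDescriptor code:
{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SynchronizedCopySketch {
  private final Map<String, String> storageMap = new HashMap<String, String>();

  /** Copy the values while holding the same lock used by other readers/writers. */
  public List<String> snapshotStorages() {
    synchronized (storageMap) {
      return new ArrayList<String>(storageMap.values());
    }
  }
}
{code}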



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7930) commitBlockSynchronization() does not remove locations, which were not confirmed

2015-03-13 Thread Byron Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361304#comment-14361304
 ] 

Byron Wong commented on HDFS-7930:
--

The failure is reproducible (after applying the latest patch from HDFS-7886) by 
commenting out the {{cluster.triggerBlockReports();}} call in 
{{testTruncateWithDataNodesRestartImmediately}} and running all tests in 
TestFileTruncate.

 commitBlockSynchronization() does not remove locations, which were not 
 confirmed
 

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Priority: Blocker

 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than in the 
 original block, it does not remove unconfirmed locations. As a result, the 
 block stores locations with different lengths or genStamps (corrupt).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361354#comment-14361354
 ] 

Hadoop QA commented on HDFS-7917:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12704541/HDFS-7917.000.patch
  against trunk revision 6fdef76.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9882//console

This message is automatically generated.

 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-7917.000.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this carries the risk 
 that, if the cleanup code is not executed, the directory cannot easily be 
 removed by the Jenkins job. 
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 we can replace the DN data directory with a regular file to achieve the same 
 fault injection goal, while being safer to clean up in any circumstance. 
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7930) commitBlockSynchronization() does not remove locations, which were not confirmed

2015-03-13 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu reassigned HDFS-7930:


Assignee: Yi Liu

 commitBlockSynchronization() does not remove locations, which were not 
 confirmed
 

 Key: HDFS-7930
 URL: https://issues.apache.org/jira/browse/HDFS-7930
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
Priority: Blocker

 When {{commitBlockSynchronization()}} has fewer {{newTargets}} than in the 
 original block, it does not remove unconfirmed locations. As a result, the 
 block stores locations with different lengths or genStamps (corrupt).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5946) Webhdfs DN choosing code is flawed

2015-03-13 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361443#comment-14361443
 ] 

Haohui Mai commented on HDFS-5946:
--

Any updates on this?

 Webhdfs DN choosing code is flawed
 --

 Key: HDFS-5946
 URL: https://issues.apache.org/jira/browse/HDFS-5946
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, webhdfs
Affects Versions: 3.0.0, 2.4.0
Reporter: Daryn Sharp
Priority: Critical

 HDFS-5891 improved the performance of redirecting webhdfs clients to a DN.  
 Instead of attempting a connection with a 1-minute timeout, the NN skips 
 decommissioned nodes.
 The logic appears flawed.  It finds the index of the first decommissioned 
 node, if any, then:
 * Throws an exception if index = 0, even if other nodes later in the list are 
 not decommissioned.
 * Else picks a random node prior to the index.  Let's say there are 10 
 replicas, 2nd location is decommissioned.  All clients will be redirected to 
 the first location even though there are 8 other valid locations.
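A rough, self-contained sketch of the selection behavior one would expect 
instead, i.e. skip decommissioned locations entirely and pick randomly among 
the remaining ones; the types and names below are placeholders, not the actual 
NameNode webhdfs code:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class LiveLocationChooserSketch {
  static class Location {
    final String name;
    final boolean decommissioned;
    Location(String name, boolean decommissioned) {
      this.name = name;
      this.decommissioned = decommissioned;
    }
  }

  /** Pick a random non-decommissioned location; fail only if none is left. */
  static Location chooseRandomLive(List<Location> locations, Random rnd) {
    List<Location> live = new ArrayList<Location>();
    for (Location loc : locations) {
      if (!loc.decommissioned) {
        live.add(loc);
      }
    }
    if (live.isEmpty()) {
      throw new IllegalStateException("no live locations");
    }
    return live.get(rnd.nextInt(live.size()));
  }
}
{code}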



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6118) Code cleanup

2015-03-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-6118.
--
Resolution: Fixed

 Code cleanup
 

 Key: HDFS-6118
 URL: https://issues.apache.org/jira/browse/HDFS-6118
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas

 HDFS code needs cleanup related to many typos, undocumented parameters, 
 unused methods, unnecessary cast, imports and exceptions declared as thrown 
 to name a few.
 I plan on working on cleaning this up as I get time. To keep code review 
 manageable, I will create sub tasks and cleanup the code a few classes at a 
 time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-13 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361225#comment-14361225
 ] 

Brandon Li commented on HDFS-5523:
--

[~zhz], please feel free to take over. I am not working on it currently.

A few things/questions I can think of regarding support for this feature:
1. If there are multiple exports, each export may need an access setting like 
those in the Linux export table.
2. Do we want to allow exporting both a directory and its subdirectory, e.g., 
both /a and /a/b?
3. If the exports are not allowed to be nested, do we want to allow users to 
mount a subdirectory of an export? E.g., if the export is /a, can a user mount 
/a/b even though /a/b is not in the export table?



 Support multiple subdirectory exports in HDFS NFS gateway 
 --

 Key: HDFS-5523
 URL: https://issues.apache.org/jira/browse/HDFS-5523
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Reporter: Brandon Li

 Currently, the HDFS NFS Gateway only supports configuring a single 
 subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
 Supporting multiple subdirectory exports can make data and security 
 management easier when using the HDFS NFS Gateway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7922) ShortCircuitCache#close is not releasing ScheduledThreadPoolExecutors

2015-03-13 Thread Rosa Ali (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361257#comment-14361257
 ] 

Rosa Ali commented on HDFS-7922:


Please tell me how I can use the batch file from HDFS-1783:
 https://issues.apache.org/jira/browse/HDFS-1783
Please.

 ShortCircuitCache#close is not releasing ScheduledThreadPoolExecutors
 -

 Key: HDFS-7922
 URL: https://issues.apache.org/jira/browse/HDFS-7922
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: 001-HDFS-7922.patch


 ShortCircuitCache has the following executors. It would be good to shutdown 
 these pools during ShortCircuitCache#close to avoid leaks.
 {code}
  /**
   * The executor service that runs the cacheCleaner.
   */
  private final ScheduledThreadPoolExecutor cleanerExecutor
      = new ScheduledThreadPoolExecutor(1, new ThreadFactoryBuilder().
          setDaemon(true).setNameFormat("ShortCircuitCache_Cleaner").
          build());

  /**
   * The executor service that runs the slot releaser.
   */
  private final ScheduledThreadPoolExecutor releaserExecutor
      = new ScheduledThreadPoolExecutor(1, new ThreadFactoryBuilder().
          setDaemon(true).setNameFormat("ShortCircuitCache_SlotReleaser").
          build());
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6585) INodesInPath.resolve is called multiple times in FSNamesystem.setPermission

2015-03-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6585:
-
Labels:   (was: patch)

 INodesInPath.resolve is called multiple times in FSNamesystem.setPermission
 ---

 Key: HDFS-6585
 URL: https://issues.apache.org/jira/browse/HDFS-6585
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Zhilei Xu
Assignee: Zhilei Xu
 Attachments: patch_ab60af58e03b323dd4b18d32c4def1f008b98822.txt, 
 patch_f15b7d505f12213f1ee9fb5ddb4bdaa64f9f623d.txt


 Most of the APIs (both internal and external) in FSNamesystem call 
 INodesInPath.resolve() to get the list of INodes corresponding to a file 
 path. Usually one API will call resolve() multiple times, which is a waste 
 of time.
 This issue particularly refers to FSNamesystem.setPermission, which calls 
 resolve() twice indirectly: once from checkOwner() and once from 
 dir.setPermission().
 We should save the result of resolve() and use it whenever possible throughout 
 the lifetime of an API call, instead of making new resolve() calls.
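A trivial, self-contained sketch of the proposed shape, with placeholder names 
rather than the actual FSNamesystem code: resolve once, then reuse the result 
for both the check and the mutation.
{code}
import java.util.Arrays;
import java.util.List;

public class ResolveOnceSketch {
  /** Stand-in for the relatively expensive INodesInPath.resolve() call. */
  static List<String> resolve(String path) {
    return Arrays.asList(path.split("/"));
  }

  static void checkOwner(List<String> resolved) {
    // permission check against the already-resolved path
  }

  static void setPermission(List<String> resolved) {
    // mutation against the already-resolved path
  }

  public static void main(String[] args) {
    List<String> resolved = resolve("/user/foo/bar");
    checkOwner(resolved);      // no second resolve()
    setPermission(resolved);   // no third resolve()
  }
}
{code}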



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6585) INodesInPath.resolve is called multiple times in FSNamesystem.setPermission

2015-03-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361382#comment-14361382
 ] 

Hadoop QA commented on HDFS-6585:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12651837/patch_f15b7d505f12213f1ee9fb5ddb4bdaa64f9f623d.txt
  against trunk revision 6fdef76.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9885//console

This message is automatically generated.

 INodesInPath.resolve is called multiple times in FSNamesystem.setPermission
 ---

 Key: HDFS-6585
 URL: https://issues.apache.org/jira/browse/HDFS-6585
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Zhilei Xu
Assignee: Zhilei Xu
 Attachments: patch_ab60af58e03b323dd4b18d32c4def1f008b98822.txt, 
 patch_f15b7d505f12213f1ee9fb5ddb4bdaa64f9f623d.txt


 Most of the APIs (both internal and external) in FSNamesystem call 
 INodesInPath.resolve() to get the list of INodes corresponding to a file 
 path. Usually one API will call resolve() multiple times, which is a waste 
 of time.
 This issue particularly refers to FSNamesystem.setPermission, which calls 
 resolve() twice indirectly: once from checkOwner() and once from 
 dir.setPermission().
 We should save the result of resolve() and use it whenever possible throughout 
 the lifetime of an API call, instead of making new resolve() calls.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7050) Implementation of NameNodeMXBean.getLiveNodes() skips DataNodes started on the same host

2015-03-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-7050.
--
Resolution: Duplicate

Fixed in HDFS-7303

 Implementation of NameNodeMXBean.getLiveNodes() skips DataNodes started on 
 the same host
 

 Key: HDFS-7050
 URL: https://issues.apache.org/jira/browse/HDFS-7050
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, webhdfs
Reporter: Przemyslaw Pretki
Priority: Minor

 If two or more data nodes are running on the same host, only one of them is 
 reported via the tab-datanode web page (and the NameNodeMXBean interface).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7915:
---
Attachment: HDFS-7915.005.patch

updated patch

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361208#comment-14361208
 ] 

Colin Patrick McCabe commented on HDFS-7915:


bq. 1. I think we should look harder in logging a reason when having to 
unregister a slot for better supportability (e.g., we want to find out the root 
cause). I agree that to make it 100% right would result in too complex logic 
though. I would propose the following:

I understand your concerns, but every log I've looked at does display the 
reason why the fd passing failed, including the full exception.  It simply is 
logged in a catch block further up in the DataXceiver.  Logging it again in 
this function would just be repetitious.  Sorry if that was unclear.

bq. 2. question: change in BlockReaderFactory.java to move  return new 
ShortCircuitReplicaInfo(replica); to within the try block is not important, I 
mean, it's ok not to move it, correct?

Yes, it is OK not to move it, because currently the ShortCircuitReplicaInfo 
constructor can't fail (never throws).  But it is better to have it in the try 
block in case the constructor later has a throw added to it.  It is safer.

bq. suggest to change sock.getOutputStream().write((byte).. to 
sock.getOutputStream().write((int), since we are using {{DomainSocket#public 
void write(int val) throws IOException }} API.

OK

bq. Should we define 0 as an constant somewhere and check equivalence instead 
of val  0 at the reader?

It's not necessary.  We don't care what the value is.  Adding checks is 
actually bad because it means we can't decide to use it later for some other 
purpose.

bq. Looks to me that the message should be Reading receipt byte for  
right?

thanks, fixed
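To make the receipt-byte idea above concrete, here is a tiny self-contained 
sketch using plain in-memory streams; the real code talks over a DomainSocket, 
so everything below is illustrative only:
{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ReceiptByteSketch {
  /** Sender side: after finishing its work, emit a single receipt byte. */
  static byte[] writeReceipt() throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    out.write(0);  // OutputStream#write(int) sends only the low-order byte
    return out.toByteArray();
  }

  /** Receiver side: block for the receipt; -1 means the peer hung up early. */
  static boolean readReceipt(byte[] wire) throws IOException {
    ByteArrayInputStream in = new ByteArrayInputStream(wire);
    return in.read() >= 0;
  }

  public static void main(String[] args) throws IOException {
    System.out.println("receipt seen: " + readReceipt(writeReceipt()));
  }
}
{code}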

 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361218#comment-14361218
 ] 

Chris Nauroth commented on HDFS-7915:
-

I'm repeating my earlier comment in case it got lost while focusing on the 
admittedly much more important discussion about the actual bug.

{quote}
Thanks for the patch, Colin. The change looks good. In the test, is the Visitor 
indirection necessary, or would it be easier to add 2 VisibleForTesting getters 
that return the segments and slots directly to the test code?
{quote}


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7878) API - expose an unique file identifier

2015-03-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361303#comment-14361303
 ] 

Colin Patrick McCabe commented on HDFS-7878:


bq. Jing wrote: Could you please add more details here? Note that the getFileId 
API in the current patch only calls getFileStatus and returns the inode id 
field contained in the HdfsFileStatus. Or you mean the client is making both 
calls separately? Then why the subclass approach can solve this?

My point is that if the client makes two different calls to getFileStatus, the 
file status could change in between.  So we could end up with the ID of one 
file and the other details of another file.  This is also inefficient, clearly, 
since we're doing 2x the RPCs to the NameNode that we need to.  And since the 
NN is the hardest part of HDFS to scale (it hasn't been scaled horizontally) 
this is another concern.

bq. If you call getFileStatus and open currently, you can have the same problem 
- status from one file, open from different file.

Sure, and we ought to fix this too, by making it possible for the client to get 
{{FileStatus}} from a {{DFSInputStream}}.  It would be as easy as just having a 
method inside DFSInputStream that called 
{{open(/.reserved/.inodes/inode-id-of-file)}}.

bq. Sergey wrote: ID allows to overcome this by getting ID first, then using 
ID-based path. Of course if ID is obtained separately there's no guarantee but 
there's no way to overcome this.

It seems like there is a very easy way to overcome this... just add an abstract 
function inside {{FileStatus}} that either throws {{OperationNotSupported}} or 
returns the inode ID.  Then FileStatus objects returned from HDFS (and any 
other function that has user-visible inode IDs) can return the inode ID, and 
the default implementation can be throwing {{OperationNotSupported}}.  We do 
1/2 the RPCs of the current patch, put 1/2 the load on the NN, and don't open 
up another race condition.

What do you think?
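A minimal, self-contained sketch of the subclass idea described above; the 
class names are invented for illustration and are not the actual Hadoop 
FileStatus API:
{code}
public class FileIdSketch {
  static class BasicStatus {
    /** Default behavior: this filesystem has no user-visible file IDs. */
    public long getFileId() {
      throw new UnsupportedOperationException("file IDs not supported");
    }
  }

  static class InodeBackedStatus extends BasicStatus {
    private final long inodeId;

    InodeBackedStatus(long inodeId) {
      this.inodeId = inodeId;
    }

    /** HDFS-style override: the inode id travels with the status, one RPC. */
    @Override
    public long getFileId() {
      return inodeId;
    }
  }

  public static void main(String[] args) {
    BasicStatus st = new InodeBackedStatus(16386L);
    System.out.println("fileId=" + st.getFileId());
  }
}
{code}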

 API - expose an unique file identifier
 --

 Key: HDFS-7878
 URL: https://issues.apache.org/jira/browse/HDFS-7878
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, HDFS-7878.patch


 See HDFS-487.
 Even though that is resolved as duplicate, the ID is actually not exposed by 
 the JIRA it supposedly duplicates.
 INode ID for the file should be easy to expose; alternatively ID could be 
 derived from block IDs, to account for appends...
 This is useful e.g. for cache key by file, to make sure cache stays correct 
 when file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7878) API - expose an unique file identifier

2015-03-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361303#comment-14361303
 ] 

Colin Patrick McCabe edited comment on HDFS-7878 at 3/13/15 11:24 PM:
--

bq. Jing wrote: Could you please add more details here? Note that the getFileId 
API in the current patch only calls getFileStatus and returns the inode id 
field contained in the HdfsFileStatus. Or you mean the client is making both 
calls separately? Then why the subclass approach can solve this?

My point is that if the client makes two different calls to getFileStatus, the 
file status could change in between.  So we could end up with the ID of one 
file and the other details of another file.  This is also inefficient, clearly, 
since we're doing 2x the RPCs to the NameNode that we need to.  And since the 
NN is the hardest part of HDFS to scale (it hasn't been scaled horizontally) 
this is another concern.

bq. If you call getFileStatus and open currently, you can have the same problem 
- status from one file, open from different file.

Sure, and we ought to fix this too, by making it possible for the client to get 
{{FileStatus}} from a {{DFSInputStream}}.  It would be as easy as just having a 
method inside DFSInputStream that called 
{{open(/.reserved/.inodes/inode-id-of-file)}}.

bq. Sergey wrote: ID allows to overcome this by getting ID first, then using 
ID-based path. Of course if ID is obtained separately there's no guarantee but 
there's no way to overcome this.

It seems like there is a very easy way to overcome this... just add an abstract 
function inside {{FileStatus}} that either throws {{OperationNotSupported}} or 
returns the inode ID.  Then FileStatus objects returned from HDFS (and any 
other filesystem that has user-visible inode IDs) can return the inode ID, and 
the default implementation can be throwing {{OperationNotSupported}}.  We do 
1/2 the RPCs of the current patch, put 1/2 the load on the NN, and don't open 
up another race condition.

What do you think?


was (Author: cmccabe):
bq. Jing wrote: Could you please add more details here? Note that the getFileId 
API in the current patch only calls getFileStatus and returns the inode id 
field contained in the HdfsFileStatus. Or you mean the client is making both 
calls separately? Then why the subclass approach can solve this?

My point is that if the client makes two different calls to getFileStatus, the 
file status could change in between.  So we could end up with the ID of one 
file and the other details of another file.  This is also inefficient, clearly, 
since we're doing 2x the RPCs to the NameNode that we need to.  And since the 
NN is the hardest part of HDFS to scale (it hasn't been scaled horizontally) 
this is another concern.

bq. If you call getFileStatus and open currently, you can have the same problem 
- status from one file, open from different file.

Sure, and we ought to fix this too, by making it possible for the client to get 
{{FileStatus}} from a {{DFSInputStream}}.  It would be as easy as just having a 
method inside DFSInputStream that called 
{{open(/.reserved/.inodes/inode-id-of-file)}}.

bq. Sergey wrote: ID allows to overcome this by getting ID first, then using 
ID-based path. Of course if ID is obtained separately there's no guarantee but 
there's no way to overcome this.

It seems like there is a very easy way to overcome this... just add an abstract 
function inside {{FileStatus}} that either throws {{OperationNotSupported}} or 
returns the inode ID.  Then FileStatus objects returned from HDFS (and any 
other function that has user-visible inode IDs) can return the inode ID, and 
the default implementation can be throwing {{OperationNotSupported}}.  We do 
1/2 the RPCs of the current patch, put 1/2 the load on the NN, and don't open 
up another race condition.

What do you think?

 API - expose an unique file identifier
 --

 Key: HDFS-7878
 URL: https://issues.apache.org/jira/browse/HDFS-7878
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, HDFS-7878.patch


 See HDFS-487.
 Even though that is resolved as duplicate, the ID is actually not exposed by 
 the JIRA it supposedly duplicates.
 INode ID for the file should be easy to expose; alternatively ID could be 
 derived from block IDs, to account for appends...
 This is useful e.g. for cache key by file, to make sure cache stays correct 
 when file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7917:

Attachment: HDFS-7917.000.patch

Add {{DataNodeTestUtils#injectDataDirFailure()}} and 
{{restoreDataDirFromFailure()}} to simulate a disk failure by replacing the 
data dirs with a regular file.

This patch fixes {{TestDataNodeVolumeFailure}}, 
{{TestDataNodeVolumeFailureReporting}}, {{TestDataNodeVolumeFailureToleration}} 
and {{TestDataNodeHotSwapVolumes}}.

The only place left untouched is 
{{TestDataNodeVolumeFailure#testVolumeFailure()}}, because it scans the failure 
directory to count metadata files. What would be an appropriate way to change 
this test case? Thanks!
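Roughly, such helpers could look like the following self-contained sketch; this 
is an approximation for illustration, not the attached patch:
{code}
import java.io.File;
import java.io.IOException;

public class DiskFailureInjectorSketch {
  /**
   * Simulate a failed disk by replacing each data directory with a plain
   * file, so DiskChecker#checkDirAccess fails with "Not a directory".
   */
  public static void injectDataDirFailure(File... dataDirs) throws IOException {
    for (File dir : dataDirs) {
      File saved = new File(dir.getPath() + ".origin");
      if (!dir.renameTo(saved)) {
        throw new IOException("Failed to rename " + dir);
      }
      if (!dir.createNewFile()) {
        throw new IOException("Failed to create file " + dir);
      }
    }
  }

  /** Undo the injection: delete the stand-in file and restore the directory. */
  public static void restoreDataDirsFromFailure(File... dataDirs)
      throws IOException {
    for (File dir : dataDirs) {
      File saved = new File(dir.getPath() + ".origin");
      if (!dir.delete()) {
        throw new IOException("Failed to delete stand-in file " + dir);
      }
      if (!saved.renameTo(dir)) {
        throw new IOException("Failed to restore " + saved);
      }
    }
  }
}
{code}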

 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-7917.000.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this carries the risk 
 that, if the cleanup code is not executed, the directory cannot easily be 
 removed by the Jenkins job. 
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 we can replace the DN data directory with a regular file to achieve the same 
 fault injection goal, while being safer to clean up in any circumstance. 
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7917:

Status: Patch Available  (was: Open)

 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-7917.000.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this carries the risk 
 that, if the cleanup code is not executed, the directory cannot easily be 
 removed by the Jenkins job. 
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 we can replace the DN data directory with a regular file to achieve the same 
 fault injection goal, while being safer to clean up in any circumstance. 
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6585) INodesInPath.resolve is called multiple times in FSNamesystem.setPermission

2015-03-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6585:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

I think the patch has been superseded by the work in HDFS-7508.

 INodesInPath.resolve is called multiple times in FSNamesystem.setPermission
 ---

 Key: HDFS-6585
 URL: https://issues.apache.org/jira/browse/HDFS-6585
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Zhilei Xu
Assignee: Zhilei Xu
 Attachments: patch_ab60af58e03b323dd4b18d32c4def1f008b98822.txt, 
 patch_f15b7d505f12213f1ee9fb5ddb4bdaa64f9f623d.txt


 Most of the APIs (both internal and external) in FSNamesystem call 
 INodesInPath.resolve() to get the list of INodes corresponding to a file 
 path. Usually one API will call resolve() multiple times, which is a waste 
 of time.
 This issue particularly refers to FSNamesystem.setPermission, which calls 
 resolve() twice indirectly: once from checkOwner() and once from 
 dir.setPermission().
 We should save the result of resolve() and use it whenever possible throughout 
 the lifetime of an API call, instead of making new resolve() calls.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361389#comment-14361389
 ] 

Yi Liu edited comment on HDFS-7926 at 3/14/15 12:29 AM:


Thanks [~szetszwo] for the fix. Actually I agree that {{idempotent}} is good, as 
you said. 
But the troublesome part is the lease check/recovery for truncate; we just want 
to make sure the truncate retries get the same result. I have found one case 
that the current fix can't cover:
# The client invokes truncate, but it's on a block boundary, so we will not add 
a lease and there is no block recovery. The result should be *true*.
# Meanwhile there is a network issue for the client, and a retry happens after 
some time. 
# Before the retry happens, or before it arrives at the NN, some other client 
invokes append on this file.
# Then the NN receives the truncate retry, so there is a lease failure.
# The truncate is actually successful, but the client sees a failure.



was (Author: hitliuyi):
Thanks [~szetszwo] for the fix. Actually I agree that {{idempotent}} is good, as 
you said. 
But the troublesome part is the lease check/recovery for truncate; we just want 
to make sure the truncate retries get the same result. I have found one case 
that the current fix can't cover:
# The client invokes truncate, but it's on a block boundary, so we will not add 
a lease and there is no block recovery. The result should be *true*.
# Meanwhile there is a network issue for the client, and a retry happens after 
some time. 
# Before the retry happens, or before it arrives at the NN, some other client 
invokes append on this file.
# Then the NN receives the retry, so there is a lease failure.
# The truncate is actually successful, but the client sees a failure.


 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.7.0

 Attachments: h7926_20150313.patch, h7926_20150313b.patch


 If the dfsclient drops the first response of a truncate RPC call, the retry by 
 the retry cache will fail with "DFSClient ... is already the current lease 
 holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7915) The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error

2015-03-13 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361433#comment-14361433
 ] 

Yongjun Zhang commented on HDFS-7915:
-

Hi [~cmccabe],

Thanks for the updated patch. +1 on rev 6 pending jenkins.

Would you please commit it, and let's create a follow-up jira for better logging 
and possible improvements.

Thanks.




 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error
 -

 Key: HDFS-7915
 URL: https://issues.apache.org/jira/browse/HDFS-7915
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7915.001.patch, HDFS-7915.002.patch, 
 HDFS-7915.004.patch, HDFS-7915.005.patch, HDFS-7915.006.patch


 The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell 
 the DFSClient about it because of a network error.  In 
 {{DataXceiver#requestShortCircuitFds}}, the DataNode can succeed at the first 
 part (mark the slot as used) and fail at the second part (tell the DFSClient 
 what it did). The try block for unregistering the slot only covers a 
 failure in the first part, not the second part. In this way, a divergence can 
 form between the views of which slots are allocated on DFSClient and on 
 server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6496) WebHDFS cannot open file

2015-03-13 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-6496.
--
Resolution: Invalid

 WebHDFS cannot open file
 

 Key: HDFS-6496
 URL: https://issues.apache.org/jira/browse/HDFS-6496
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: Fengdong Yu
 Attachments: webhdfs.PNG


 WebHDFS cannot open the file on the name node web UI. I attached a screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360023#comment-14360023
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7926:
---

 ... for a file being written or appended, truncate will still return false if 
 the oldlength happens to be same as newlength. It should throw an exception 
 in this scenario. ...

That's true.  Will update the patch.

 NameNode implementation of ClientProtocol.truncate(..) is not idempotent
 

 Key: HDFS-7926
 URL: https://issues.apache.org/jira/browse/HDFS-7926
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h7926_20150313.patch


 If the dfsclient drops the first response of a truncate RPC call, the retry by 
 the retry cache will fail with "DFSClient ... is already the current lease 
 holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol but 
 the NameNode implementation is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7854) Separate class DataStreamer out of DFSOutputStream

2015-03-13 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359991#comment-14359991
 ] 

Li Bo commented on HDFS-7854:
-

Patch 004 includes several changes advised by Jing. The function 
{{queueCurrentPacket}} is moved to class DataStreamer and renamed to 
{{queuePacket}}. Currently {{waitAndQueueCurrentPacket}} and 
{{waitForAckedSeqno}} are still kept in DFSOutputStream for the following 
reasons: there are two threads, the main thread and the streamer thread, and the 
main thread waits until the data queue has space; it's more reasonable to keep 
the main thread waiting by calling {{waitForAckedSeqno}} directly than by 
calling {{streamer.waitForAckedSeqno}}; and both {{waitAndQueueCurrentPacket}} 
and {{waitForAckedSeqno}} have to check {{DFSOutputStream.closed}}, which can't 
be substituted by {{DataStreamer.streamClosed}}.
{{dataQueue}} is still kept in DFSOutputStream for now, but we could also move 
it to DataStreamer. We can treat the two classes as producer and consumer, and I 
think it's reasonable to let the producer be aware of the shared pool.
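As a generic illustration of the producer/consumer split being discussed (not 
the actual DFSOutputStream/DataStreamer code, which uses its own queue and 
explicit wait/notify), a self-contained sketch:
{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PacketQueueSketch {
  public static void main(String[] args) throws InterruptedException {
    // Bounded queue: the producer (output stream) blocks when it is full,
    // while the consumer (streamer) drains it and sends packets downstream.
    final BlockingQueue<String> dataQueue = new ArrayBlockingQueue<String>(2);

    Thread streamer = new Thread(new Runnable() {
      @Override
      public void run() {
        try {
          for (int i = 0; i < 3; i++) {
            System.out.println("sent " + dataQueue.take());
          }
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
    });
    streamer.start();

    for (int i = 0; i < 3; i++) {
      dataQueue.put("packet-" + i);  // blocks when the queue is full
    }
    streamer.join();
  }
}
{code}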


 Separate class DataStreamer out of DFSOutputStream
 --

 Key: HDFS-7854
 URL: https://issues.apache.org/jira/browse/HDFS-7854
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7854-001.patch, HDFS-7854-002.patch, 
 HDFS-7854-003.patch, HDFS-7854-004.patch


 This sub task separate DataStreamer from DFSOutputStream. New DataStreamer 
 will accept packets and write them to remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

