[jira] [Commented] (HDFS-11261) Document missing NameNode metrics

2016-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766214#comment-15766214
 ] 

Hudson commented on HDFS-11261:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11024 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11024/])
HDFS-11261. Document missing NameNode metrics. Contributed by Yiqun Lin. 
(aajisaka: rev f6e2521eb216dae820846cab31397e9a88ba2f88)
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md


> Document missing NameNode metrics
> -
>
> Key: HDFS-11261
> URL: https://issues.apache.org/jira/browse/HDFS-11261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11261.001.patch
>
>
> Two metric names are missing from {{Metrics.md}}: one was introduced by 
> HDFS-10872 and the other by HDFS-10676. HDFS-10872 adds MutableRate metrics 
> for FSNamesystemLock operations, and HDFS-10676 adds the metric for 
> generating EDEKs.
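
For context, a minimal sketch (not the HDFS-10872 patch itself) of how a MutableRate metric is typically declared and updated with the Hadoop metrics2 library; the class and metric names here are illustrative:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;

@Metrics(context = "dfs")
class LockMetricsSketch {
  // Tracks the number of operations and their average time, the shape of
  // the FSNamesystemLock metrics mentioned above. The field is populated
  // once the source is registered with the metrics system.
  @Metric("Time in ms spent holding the lock")
  MutableRate lockHeldTime;

  void record(long elapsedMs) {
    lockHeldTime.add(elapsedMs);
  }
}
{code}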






[jira] [Commented] (HDFS-11150) [SPS]: Provide persistence when satisfying storage policy.

2016-12-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766211#comment-15766211
 ] 

Yuanbo Liu commented on HDFS-11150:
---

[~umamaheswararao] Thanks for your comment.
{quote}
How about you raise a JIRA for it and look at the optimization separately?
{quote}
Sure, once I finish this issue, I'll raise another JIRA for the optimization.

> [SPS]: Provide persistence when satisfying storage policy.
> --
>
> Key: HDFS-11150
> URL: https://issues.apache.org/jira/browse/HDFS-11150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11150-HDFS-10285.001.patch, 
> HDFS-11150-HDFS-10285.002.patch, editsStored, editsStored.xml
>
>
> Provide persistence for SPS in case the Hadoop cluster crashes unexpectedly. 
> Basically, we need to change the EditLog and FsImage here.






[jira] [Commented] (HDFS-11258) File mtime change could not save to editlog

2016-12-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766188#comment-15766188
 ] 

Akira Ajisaka commented on HDFS-11258:
--

LGTM, +1.

> File mtime change could not save to editlog
> ---
>
> Key: HDFS-11258
> URL: https://issues.apache.org/jira/browse/HDFS-11258
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hdfs-11258.1.patch, hdfs-11258.2.patch, 
> hdfs-11258.3.patch, hdfs-11258.4.patch
>
>
> When both mtime and atime are changed, and atime is not beyond the precision 
> limit, the mtime change is not saved to edit logs.






[jira] [Comment Edited] (HDFS-11100) Recursively deleting file protected by sticky bit should fail

2016-12-20 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765522#comment-15765522
 ] 

Wei-Chiu Chuang edited comment on HDFS-11100 at 12/21/16 5:31 AM:
--

Thanks [~jzhuge] for the code contribution! I've completed my first review of 
the patch.

My review notes are as follows:
# The patch adds an extra for loop, and we should be careful to avoid blocking 
other NameNode operations for too long.
# Given that the original method body is essentially a -breadth-first-search- 
(edit: my bad, this is a DFS), it should be easy to separate from the new code. 
Could we refactor the new code into a new method to make it easier to 
understand?
# One nit: the following line is repeated for every child inode but only needs 
to be done once. Performance-wise this shouldn't have much impact.
{code}
// checkStickyBit only uses 2 entries in childInodeAttrs
childInodeAttrs[parentIdx] = inodeAttr;
{code}
# In the patch, an INodeAttributes array is instantiated per INodeDirectory, 
which can hurt performance. Would it make sense to optimize it? For example, 
pre-initialize this array and double its length whenever 
{{childComponents.length}} is larger than {{childInodeAttrs.length}}; a rough 
sketch follows below. We can file a new jira for the optimization if you think 
this is reasonable.
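
A rough, hedged sketch of the pre-allocation idea in note 4; the field and variable names are taken from the snippet above, but the surrounding code is illustrative rather than the actual patch:

{code}
// Reuse one attrs array across directories, growing it by doubling only
// when a deeper path than any seen so far is encountered.
if (childInodeAttrs == null
    || childComponents.length > childInodeAttrs.length) {
  childInodeAttrs = new INodeAttributes[childComponents.length * 2];
}
{code}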


was (Author: jojochuang):
Thanks [~jzhuge] for the code contribution! I've completed my first review of 
the patch.

My review notes are as follows:
# The patch adds an extra for loop, and we should be careful to avoid blocking 
other NameNode operations for too long.
# Given that the original method body is essentially a breadth-first-search, it 
should be easy to separate from the new code. Could we refactor the new code 
into a new method to make it easier to understand?
# One nit: the following line is repeated for every child inode but only needs 
to be done once. Performance-wise this shouldn't have much impact.
{code}
// checkStickyBit only uses 2 entries in childInodeAttrs
childInodeAttrs[parentIdx] = inodeAttr;
{code}
# In the patch, an INodeAttributes array is instantiated per INodeDirectory, 
which can hurt performance. Would it make sense to optimize it? For example, 
pre-initialize this array and double its length whenever 
{{childComponents.length}} is larger than {{childInodeAttrs.length}}. We can 
file a new jira for the optimization if you think this is reasonable.

> Recursively deleting file protected by sticky bit should fail
> -
>
> Key: HDFS-11100
> URL: https://issues.apache.org/jira/browse/HDFS-11100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>  Labels: permissions
> Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, 
> HDFS-11100.003.patch, hdfs_cmds
>
>
> Recursively deleting a directory that contains files or directories protected 
> by the sticky bit should fail, but it doesn't in HDFS. In the case below, 
> {{/tmp/test/sticky_dir/f2}} is protected by the sticky bit, thus recursively 
> deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, 
> path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
> parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> Centos 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}






[jira] [Updated] (HDFS-11261) Document missing NameNode metrics

2016-12-20 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11261:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~linyiqun] for the 
contribution!

> Document missing NameNode metrics
> -
>
> Key: HDFS-11261
> URL: https://issues.apache.org/jira/browse/HDFS-11261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11261.001.patch
>
>
> Two metric names are missing from {{Metrics.md}}: one was introduced by 
> HDFS-10872 and the other by HDFS-10676. HDFS-10872 adds MutableRate metrics 
> for FSNamesystemLock operations, and HDFS-10676 adds the metric for 
> generating EDEKs.






[jira] [Updated] (HDFS-11261) Document missing NameNode metrics

2016-12-20 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11261:
-
Target Version/s: 2.8.0, 3.0.0-alpha2
Hadoop Flags: Reviewed
 Summary: Document missing NameNode metrics  (was: Document missing 
metric names)

+1, checking this in.

> Document missing NameNode metrics
> -
>
> Key: HDFS-11261
> URL: https://issues.apache.org/jira/browse/HDFS-11261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11261.001.patch
>
>
> Two metric names are missing from {{Metrics.md}}: one was introduced by 
> HDFS-10872 and the other by HDFS-10676. HDFS-10872 adds MutableRate metrics 
> for FSNamesystemLock operations, and HDFS-10676 adds the metric for 
> generating EDEKs.






[jira] [Commented] (HDFS-11150) [SPS]: Provide persistence when satisfying storage policy.

2016-12-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766127#comment-15766127
 ] 

Uma Maheswara Rao G commented on HDFS-11150:


{quote}
I've tried that before. There is an issue if we only mark the directory: when 
recovering from the FsImage, the InodeMap isn't built up yet, so we don't know 
the sub-inodes of a given inode. In the end, we cannot add these inodes to the 
movement queue in FSDirectory#addToInodeMap. Any thoughts?
{quote}
I see what you are saying. OK, for simplicity we can add all file inodes for 
now. To handle this 100%, we may need two-phase processing: first add the 
inodes to some intermediate list while loading the FsImage, then, once it is 
fully loaded and active services are starting, process that list and do the 
required work (a hedged sketch follows below). But that would add some 
additional complexity. Let's go with all file inodes for now and revisit later 
if this really creates issues. How about you raise a JIRA for it and look at 
the optimization separately?
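
A hedged sketch of the two-phase idea; none of these names are actual namesystem code, they just illustrate collecting inodes during image load and queueing them once active services start (assumes java.util imports and HDFS's INode type):

{code}
// Phase 1: while loading the FsImage, only remember candidate inode IDs.
private final List<Long> pendingSpsInodes = new ArrayList<>();

void onInodeLoadedFromImage(INode inode) {
  if (hasSpsXattr(inode)) {             // hypothetical xattr check
    pendingSpsInodes.add(inode.getId());
  }
}

// Phase 2: once the image is fully loaded and active services start,
// the InodeMap exists, so the remembered IDs can be queued for movement.
void onStartActiveServices() {
  for (long id : pendingSpsInodes) {
    addToMovementQueue(id);             // hypothetical queueing call
  }
  pendingSpsInodes.clear();
}
{code}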

> [SPS]: Provide persistence when satisfying storage policy.
> --
>
> Key: HDFS-11150
> URL: https://issues.apache.org/jira/browse/HDFS-11150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11150-HDFS-10285.001.patch, 
> HDFS-11150-HDFS-10285.002.patch, editsStored, editsStored.xml
>
>
> Provide persistence for SPS in case the Hadoop cluster crashes unexpectedly. 
> Basically, we need to change the EditLog and FsImage here.






[jira] [Commented] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2016-12-20 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766123#comment-15766123
 ] 

Takanobu Asanuma commented on HDFS-11121:
-

Sorry for the late reply, and thank you for your kind review, [~jojochuang].

bq. I wonder if we can add a similar isStripedBlockId assertion in 
BlockInfoStriped constructor.

It seems some unit tests don't uphold the assumption that a striped block ID is 
always a negative number (please see TestStripedINodeFile.java). Should we 
unify all the source code around that assumption?
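
For reference, a minimal sketch of the kind of constructor assertion being discussed, assuming the negative-ID convention holds; this is not the actual patch, and {{blk}} is an illustrative constructor argument:

{code}
// BlockIdManager.isStripedBlockID(long) checks the striped-ID convention;
// the assertion guards against constructing a block group from a
// contiguous (non-negative) block ID.
assert BlockIdManager.isStripedBlockID(blk.getBlockId())
    : "block group must have a striped (negative) block ID";
{code}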

> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11121.1.patch
>
>
> There are no assertions in {{BlockInfo.addStorage}}. This may let 
> {{BlockInfo}} instances accept invalid block reports, resulting in serious 
> bugs like HDFS-10858.






[jira] [Commented] (HDFS-11150) [SPS]: Provide persistence when satisfying storage policy.

2016-12-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766014#comment-15766014
 ] 

Yuanbo Liu commented on HDFS-11150:
---

[~umamaheswararao]
{quote}
So, for persistence part, how about keeping Xattrs only of that directory..
{quote}
I've tried that before. There is an issue if we only mark the directory: when 
recovering from the FsImage, the InodeMap isn't built up yet, so we don't know 
the sub-inodes of a given inode. In the end, we cannot add these inodes to the 
movement queue in {{FSDirectory#addToInodeMap}}. Any thoughts?

> [SPS]: Provide persistence when satisfying storage policy.
> --
>
> Key: HDFS-11150
> URL: https://issues.apache.org/jira/browse/HDFS-11150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11150-HDFS-10285.001.patch, 
> HDFS-11150-HDFS-10285.002.patch, editsStored, editsStored.xml
>
>
> Provide persistence for SPS in case the Hadoop cluster crashes unexpectedly. 
> Basically, we need to change the EditLog and FsImage here.






[jira] [Commented] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf

2016-12-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765957#comment-15765957
 ] 

Andrew Wang commented on HDFS-6984:
---

I think "the serialization can omit fields" is an abstraction breakage. If 
we're going to have an open API that takes a FileStatus, the implementation 
should be allowed to use all the fields expected to be present for a FileStatus 
of that FileSystem. This means serialization that omits fields for efficiency 
isn't supportable, and is why I'd prefer a PathHandle. Regarding TOCTOU, this 
would be addressed by including the PathHandle in the returned FileStatus, 
right?

This discussion is an aside though to the matter at hand, and we should 
continue on HDFS-7878. I think we already agreed above that as long as we can 
add new fields to FileStatus, we can satisfy the basic requirements of 
HDFS-7878.

bq. I didn't mean to suggest that HdfsFileStatus should be a public API (with 
all the restrictions on evolving it)

If we don't intend to make HdfsFileStatus public, what's the point of 
cross-serialization? We also need to qualify the path for an HdfsFileStatus to 
become a FileStatus, so I don't know how zero-copy it can be anyway.

I don't feel *that* strongly about removing Writable, but the nowritable patch 
is simple and to the point, and I still haven't grasped the benefit of keeping 
FileStatus Writable, even via PB. We don't think there are many (any?) apps out 
there using the Writable interface. Cross-serialization doesn't have an 
immediate usecase. HDFS-7878 IMO needs a serializable PathHandle, not a full 
FileStatus.

> In Hadoop 3, make FileStatus serialize itself via protobuf
> --
>
> Key: HDFS-6984
> URL: https://issues.apache.org/jira/browse/HDFS-6984
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6984.001.patch, HDFS-6984.002.patch, 
> HDFS-6984.003.patch, HDFS-6984.nowritable.patch
>
>
> FileStatus was a Writable in Hadoop 2 and earlier. Originally, we used this 
> to serialize it and send it over the wire. But in Hadoop 2 and later, we 
> have the protobuf {{HdfsFileStatusProto}}, which serves to serialize this 
> information. The protobuf form is preferable, since it allows us to add new 
> fields in a backwards-compatible way. Another issue is that a lot of 
> subclasses of FileStatus already don't override the Writable methods of the 
> superclass, breaking the interface contract that read(status.write) should 
> equal the original status.
> In Hadoop 3, we should just make FileStatus serialize itself via protobuf so 
> that we don't have to deal with these issues. It's probably too late to do 
> this in Hadoop 2, since user code may be relying on the existing FileStatus 
> serialization there.
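
A hedged illustration of the Writable round-trip contract mentioned in the description, using the Hadoop 2 era API; {{fs}} and {{path}} are assumed to be in scope, and the in-memory buffers are just a stand-in for the wire:

{code}
// Serialize with Writable, then deserialize; the contract says the copy
// should equal the original (subclasses that skip overriding break this).
FileStatus original = fs.getFileStatus(path);
DataOutputBuffer out = new DataOutputBuffer();
original.write(out);

FileStatus copy = new FileStatus();
DataInputBuffer in = new DataInputBuffer();
in.reset(out.getData(), out.getLength());
copy.readFields(in);

assert copy.equals(original);
{code}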






[jira] [Commented] (HDFS-11251) ConcurrentModificationException during DataNode#refreshVolumes

2016-12-20 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765950#comment-15765950
 ] 

Manoj Govindassamy commented on HDFS-11251:
---

Thanks [~linyiqun].

Yes, I do see the {{addVolume}} operation in the stack trace. But the 
volume-adding thread shown there is only in the {{DataStorage#prepareVolume}} 
phase and is merely traversing {{storageDirs}} in 
{{Storage#containsStorageDir}}. That is, yet another thread was mutating the 
same ArrayList around the time the first volume add was happening.

I looked at the test code again, and there are two volume adds happening as 
part of the test. As you said, these volume add operations are run in Executors 
via FutureTask, so they are submitted in quick succession, run in parallel, and 
mutate the same {{storageDirs}} list.

I am able to recreate the {{ConcurrentModificationException}} in the following 
two ways:
* running a new thread that continuously performs a read operation on the 
storageDirs (like listStorageDirectories) while a volume add runs in parallel, 
OR
* adding 10 volumes with a small delay between each other, so that each volume 
add's list traversal trips over the previous volume add's list modification.

My proposed fix is to create {{Storage#storageDirs}} as a {{new 
CopyOnWriteArrayList()}}; a sketch follows below. I will test it out and submit 
a patch.
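
A minimal, self-contained sketch of why {{CopyOnWriteArrayList}} avoids the CME; the directory names are stand-ins, not actual DataNode code:

{code}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowIterationSketch {
  public static void main(String[] args) {
    // With a plain ArrayList, mutating the list while iterating it
    // (from this or another thread) throws ConcurrentModificationException.
    List<String> storageDirs = new CopyOnWriteArrayList<>();
    storageDirs.add("/data/data1");
    storageDirs.add("/data/data2");

    for (String dir : storageDirs) {   // iterates over a snapshot
      storageDirs.add(dir + "-new");   // safe: writers copy the array
    }
    System.out.println(storageDirs);   // 4 entries, no CME
  }
}
{code}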

> ConcurrentModificationException during DataNode#refreshVolumes
> --
>
> Key: HDFS-11251
> URL: https://issues.apache.org/jira/browse/HDFS-11251
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Manoj Govindassamy
>
> The testAddVolumesDuringWrite case failed with a ReconfigurationException 
> which appears to have been caused by a ConcurrentModificationException.  
> Stacktrace details to follow.






[jira] [Comment Edited] (HDFS-11251) ConcurrentModificationException during DataNode#refreshVolumes

2016-12-20 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765882#comment-15765882
 ] 

Yiqun Lin edited comment on HDFS-11251 at 12/21/16 2:26 AM:


Thanks [~manojg] for the analysis. I think that's the reason for the failure. 
Here, an add-volume or remove-volume is an asynchronous operation, so there is 
a chance it leads to the CME.
{quote}
Want to look at logs to find the parallel operations on the storageDir
{quote}
Here it's the {{addVolume}} operation that caused this, as you can see in the 
stack trace that [~jlowe] provided above. Hope this helps.
{code}
org.apache.hadoop.conf.ReconfigurationException: Could not change property 
dfs.datanode.data.dir from 
'[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4'
 to 
'[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data3,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4'
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.refreshVolumes(DataNode.java:777)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.reconfigurePropertyImpl(DataNode.java:532)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.addVolumes(TestDataNodeHotSwapVolumes.java:310)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testAddVolumesDuringWrite(TestDataNodeHotSwapVolumes.java:404)
{code}


was (Author: linyiqun):
Thanks [~manojg] for the analysis. I think that's the reason for the failure.
{quote}
Want to look at logs to find the parallel operations on the storageDir
{quote}
Here it's the {{addVolume}} operation that caused this, as you can see in the 
stack trace that [~jlowe] provided above. Hope this helps.
{code}
org.apache.hadoop.conf.ReconfigurationException: Could not change property 
dfs.datanode.data.dir from 
'[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4'
 to 
'[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data3,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4'
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.refreshVolumes(DataNode.java:777)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.reconfigurePropertyImpl(DataNode.java:532)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.addVolumes(TestDataNodeHotSwapVolumes.java:310)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testAddVolumesDuringWrite(TestDataNodeHotSwapVolumes.java:404)
{code}

> ConcurrentModificationException during DataNode#refreshVolumes
> --
>
> Key: HDFS-11251
> URL: https://issues.apache.org/jira/browse/HDFS-11251
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Manoj Govindassamy
>
> The testAddVolumesDuringWrite case failed with a ReconfigurationException 
> which appears to have been caused by a ConcurrentModificationException.  
> Stacktrace details to follow.






[jira] [Commented] (HDFS-11251) ConcurrentModificationException during DataNode#refreshVolumes

2016-12-20 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765882#comment-15765882
 ] 

Yiqun Lin commented on HDFS-11251:
--

Thanks [~manojg] for the analysis. I think that's the reason for the failure.
{quote}
Want to look at logs to find the parallel operations on the storageDir
{quote}
Here it's the {{addVolume}} operation that caused this, as you can see in the 
stack trace that [~jlowe] provided above. Hope this helps.
{code}
org.apache.hadoop.conf.ReconfigurationException: Could not change property 
dfs.datanode.data.dir from 
'[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4'
 to 
'[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data3,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4'
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.refreshVolumes(DataNode.java:777)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.reconfigurePropertyImpl(DataNode.java:532)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.addVolumes(TestDataNodeHotSwapVolumes.java:310)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testAddVolumesDuringWrite(TestDataNodeHotSwapVolumes.java:404)
{code}

> ConcurrentModificationException during DataNode#refreshVolumes
> --
>
> Key: HDFS-11251
> URL: https://issues.apache.org/jira/browse/HDFS-11251
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Manoj Govindassamy
>
> The testAddVolumesDuringWrite case failed with a ReconfigurationException 
> which appears to have been caused by a ConcurrentModificationException.  
> Stacktrace details to follow.






[jira] [Commented] (HDFS-11195) Return error when appending files by webhdfs rest api fails

2016-12-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765875#comment-15765875
 ] 

Yuanbo Liu commented on HDFS-11195:
---

[~xiaochen] Thanks a lot! 

> Return error when appending files by webhdfs rest api fails
> ---
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11195.001.patch, HDFS-11195.002.patch, 
> HDFS-11195.003.patch, HDFS-11195.004.patch
>
>
> Suppose there is a Hadoop cluster that contains only one datanode, and 
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T  
> "http://:/webhdfs/v1/?op=APPEND"
> {code}
> It returns 200 even though the append operation fails.






[jira] [Comment Edited] (HDFS-11195) Return error when appending files by webhdfs rest api fails

2016-12-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765875#comment-15765875
 ] 

Yuanbo Liu edited comment on HDFS-11195 at 12/21/16 2:16 AM:
-

[~xiaochen] / [~ajisakaa] Thanks a lot! 


was (Author: yuanbo):
[~xiaochen] Thanks a lot! 

> Return error when appending files by webhdfs rest api fails
> ---
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11195.001.patch, HDFS-11195.002.patch, 
> HDFS-11195.003.patch, HDFS-11195.004.patch
>
>
> Suppose there is a Hadoop cluster that contains only one datanode, and 
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T  
> "http://:/webhdfs/v1/?op=APPEND"
> {code}
> It returns 200 even though the append operation fails.






[jira] [Commented] (HDFS-11150) [SPS]: Provide persistence when satisfying storage policy.

2016-12-20 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765869#comment-15765869
 ] 

Yuanbo Liu commented on HDFS-11150:
---

[~umamaheswararao] Thanks a lot for your comments. They're quite helpful!

> [SPS]: Provide persistence when satisfying storage policy.
> --
>
> Key: HDFS-11150
> URL: https://issues.apache.org/jira/browse/HDFS-11150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11150-HDFS-10285.001.patch, 
> HDFS-11150-HDFS-10285.002.patch, editsStored, editsStored.xml
>
>
> Provide persistence for SPS in case the Hadoop cluster crashes unexpectedly. 
> Basically, we need to change the EditLog and FsImage here.






[jira] [Commented] (HDFS-11239) [SPS]: Check Mover file ID lease also to determine whether Mover is running

2016-12-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765619#comment-15765619
 ] 

Uma Maheswara Rao G commented on HDFS-11239:


Hi [~zhouwei], thanks for raising the issue and working on it.
Please find my feedback below.

# Implementing INode-related code inside the SPS class is not a good idea; all 
of these implementation parts should go to the namesystem layer. How about 
adding a small API that returns true if an INode file is open for write? Let's 
say the API name is something like isINodeFileOpenedForWrite(); it can 
encapsulate all these lease checks and return a boolean (a sketch follows 
after this list).
#
{code}
+  running = hdfsCluster.getFileSystem()
+  .getClient().isStoragePolicySatisfierRunning();
+  Assert.assertFalse("SPS should not be able to run as file "
+  + HdfsServerConstants.MOVER_ID_PATH + " is being hold.", running);
{code}
I think it would be good if you could also add a test case that calls 
satisfyStoragePolicy, to make sure the functionality still works after SPS 
restarts successfully with the Mover ID checks.
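
A hedged sketch of the suggested API; only the name isINodeFileOpenedForWrite() comes from the comment above, and the body is illustrative rather than actual namesystem code:

{code}
// An INodeFile that is under construction still has an active lease,
// i.e. it is open for write.
boolean isINodeFileOpenedForWrite(INode inode) {
  return inode != null
      && inode.isFile()
      && inode.asFile().isUnderConstruction();
}
{code}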


> [SPS]: Check Mover file ID lease also to determine whether Mover is running
> ---
>
> Key: HDFS-11239
> URL: https://issues.apache.org/jira/browse/HDFS-11239
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11239-HDFS-10285.00.patch
>
>
> Currently SPS only checks the existence of the Mover ID file to determine 
> whether a Mover is running. This can be an issue when the Mover exits 
> unexpectedly without deleting the ID file, which then stops SPS from 
> functioning. This is a follow-on to HDFS-10885, where we bypassed this due 
> to some implementation problems. This issue can be fixed after HDFS-11123.






[jira] [Commented] (HDFS-11258) File mtime change could not save to editlog

2016-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765599#comment-15765599
 ] 

Hadoop QA commented on HDFS-11258:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 10 unchanged - 1 fixed = 10 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11258 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844144/hdfs-11258.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux afd485da6b7f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f678080 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17923/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17923/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17923/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> File mtime change could not save to editlog
> ---
>
> Key: HDFS-11258
> URL: https://issues.apache.org/jira/browse/HDFS-11258
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jimmy Xiang

[jira] [Commented] (HDFS-11251) ConcurrentModificationException during DataNode#refreshVolumes

2016-12-20 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765566#comment-15765566
 ] 

Manoj Govindassamy commented on HDFS-11251:
---

Hi [~jlowe], 

Any chance you have the full log file for this failure case? I would like to 
take a look.


{{Storage#storageDirs}} is not a concurrent list, so parallel addition or 
removal of volumes combined with list iteration can throw a 
ConcurrentModificationException. I want to look at the logs to find the 
parallel operations on the storageDirs. One fix could be building 
{{Storage#storageDirs}} as a Collections.synchronizedList(..); another could be 
locking down the modification and iteration operations (see the sketch below).
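
A short sketch of the synchronizedList option; note that a synchronized list only guards individual operations, so the iteration side still has to hold the list's monitor, which is why locking down iteration matters either way. The names here are illustrative, not the actual Storage class:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncListSketch {
  private final List<String> storageDirs =
      Collections.synchronizedList(new ArrayList<String>());

  void addStorageDir(String dir) {
    storageDirs.add(dir);          // individual ops are synchronized
  }

  boolean containsStorageDir(String dir) {
    synchronized (storageDirs) {   // iteration must hold the lock itself
      for (String d : storageDirs) {
        if (d.equals(dir)) {
          return true;
        }
      }
    }
    return false;
  }
}
{code}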

> ConcurrentModificationException during DataNode#refreshVolumes
> --
>
> Key: HDFS-11251
> URL: https://issues.apache.org/jira/browse/HDFS-11251
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Manoj Govindassamy
>
> The testAddVolumesDuringWrite case failed with a ReconfigurationException 
> which appears to have been caused by a ConcurrentModificationException.  
> Stacktrace details to follow.






[jira] [Commented] (HDFS-11100) Recursively deleting file protected by sticky bit should fail

2016-12-20 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765522#comment-15765522
 ] 

Wei-Chiu Chuang commented on HDFS-11100:


Thanks [~jzhuge] for the code contribution! I've completed my first review of 
the patch.

My review notes are as follows:
# The patch adds an extra for loop, and we should be careful to avoid blocking 
other NameNode operations for too long.
# Given that the original method body is essentially a breadth-first-search, it 
should be easy to separate from the new code. Could we refactor the new code 
into a new method to make it easier to understand?
# One nit: the following line is repeated for every child inode but only needs 
to be done once. Performance-wise this shouldn't have much impact.
{code}
// checkStickyBit only uses 2 entries in childInodeAttrs
childInodeAttrs[parentIdx] = inodeAttr;
{code}
# In the patch, an INodeAttributes array is instantiated per INodeDirectory, 
which can hurt performance. Would it make sense to optimize it? For example, 
pre-initialize this array and double its length whenever 
{{childComponents.length}} is larger than {{childInodeAttrs.length}}. We can 
file a new jira for the optimization if you think this is reasonable.

> Recursively deleting file protected by sticky bit should fail
> -
>
> Key: HDFS-11100
> URL: https://issues.apache.org/jira/browse/HDFS-11100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>  Labels: permissions
> Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, 
> HDFS-11100.003.patch, hdfs_cmds
>
>
> Recursively deleting a directory that contains files or directories protected 
> by the sticky bit should fail, but it doesn't in HDFS. In the case below, 
> {{/tmp/test/sticky_dir/f2}} is protected by the sticky bit, thus recursively 
> deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, 
> path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
> parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> Centos 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}






[jira] [Commented] (HDFS-11231) federation support to linux-like mount type

2016-12-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765462#comment-15765462
 ] 

Andrew Wang commented on HDFS-11231:


Is this similar to HADOOP-13055? My [recent 
comment|https://issues.apache.org/jira/browse/HADOOP-13055?focusedCommentId=15733822&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15733822]
 on that issue talks about some of the differences between ViewFS and Linux VFS 
that make it difficult to emulate the same behavior.

> federation support to linux-like mount type
> ---
>
> Key: HDFS-11231
> URL: https://issues.apache.org/jira/browse/HDFS-11231
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: He Xiaoqiao
>
> Based on the Federation architecture, multiple namespaces can be mounted 
> using a mount table. Currently, only a leaf node of the mount tree can serve 
> as a mount point, which makes it hard to mount one directory, especially one 
> with many child directories; for instance, you have to mount hundreds of 
> children of /foo/ in the mount table if you have mounted one child of /foo/. 
> This issue will implement a Linux-like mount type, which may make it more 
> convenient to partition an existing directory or add a new directory to an 
> arbitrary namespace.
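
For context, a hedged example of how a ViewFS mount table links paths per leaf today; the nameservices and paths are made up. Mounting /foo itself so that all of its children resolve has no single-entry equivalent, which is the pain point described above:

{code}
import org.apache.hadoop.conf.Configuration;

// Each link entry maps exactly one leaf path to one namespace.
Configuration conf = new Configuration();
conf.set("fs.viewfs.mounttable.cluster.link./foo/bar1", "hdfs://ns1/foo/bar1");
conf.set("fs.viewfs.mounttable.cluster.link./foo/bar2", "hdfs://ns2/foo/bar2");
{code}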






[jira] [Updated] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-12-20 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11094:
-
Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 2.9.0, 3.0.0-alpha2)
   Fix Version/s: (was: 2.9.0)
  2.8.0

Thanks, [~ebadger]. I've backported to {{branch-2.8}} branch.

> Send back HAState along with NamespaceInfo during a versionRequest as an 
> optional parameter
> ---
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11094-branch-2.011.patch, HDFS-11094.001.patch, 
> HDFS-11094.002.patch, HDFS-11094.003.patch, HDFS-11094.004.patch, 
> HDFS-11094.005.patch, HDFS-11094.006.patch, HDFS-11094.007.patch, 
> HDFS-11094.008.patch, HDFS-11094.009-b2.patch, HDFS-11094.009.patch, 
> HDFS-11094.010-b2.patch, HDFS-11094.010.patch, HDFS-11094.011.patch
>
>
> The datanode should know which NN is active when it is connecting/registering 
> to the NN. Currently, it only figures this out during its first (and 
> subsequent) heartbeat(s), so there is a period of time when the datanode 
> is alive and registered but can't actually do anything because it doesn't 
> know which NN is active. A byproduct of this is that the MiniDFSCluster will 
> become active before it knows which NN is active, which can lead to NPEs when 
> calling getActiveNN().






[jira] [Commented] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2016-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765446#comment-15765446
 ] 

Hudson commented on HDFS-11182:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11022 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11022/])
HDFS-11182. Update DataNode to use DatasetVolumeChecker. Contributed by Arpit 
Agarwal. (xyao: rev f678080dbd25a218e0406463a3c3a1fc03680702)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockStatsMXBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/checker/TestDatasetVolumeChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/checker/TestDatasetVolumeCheckerFailures.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


> Update DataNode to use DatasetVolumeChecker
> ---
>
> Key: HDFS-11182
> URL: https://issues.apache.org/jira/browse/HDFS-11182
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.






[jira] [Commented] (HDFS-291) combine FsShell.copyToLocal to ChecksumFileSystem.copyToLocalFile

2016-12-20 Thread chendong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765403#comment-15765403
 ] 

chendong commented on HDFS-291:
---

CopyToLocal is useful for us. We use it to back up the whole of our HDFS data. 
We only have 500G of HDFS data; that is not big data.

We don't have much budget to create another cluster for backup purposes, 
so we back up the whole HDFS data into the local file system.

This use case is perhaps not a typical big-data scenario. However, as far 
as I know, a lot of businesses start from small data with a requirement for 
disaster recovery.

> combine FsShell.copyToLocal to ChecksumFileSystem.copyToLocalFile
> -
>
> Key: HDFS-291
> URL: https://issues.apache.org/jira/browse/HDFS-291
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Priority: Minor
>
> - Two methods provide similar functions
> - ChecksumFileSystem.copyToLocalFile(Path src, Path dst, boolean copyCrc) is 
> no longer used anywhere in the system
> - It is better to use ChecksumFileSystem.getRawFileSystem() for copying crc 
> in FsShell.copyToLocal
> - FileSystem.isDirectory(Path) used in FsShell.copyToLocal is deprecated.






[jira] [Updated] (HDFS-11258) File mtime change could not save to editlog

2016-12-20 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HDFS-11258:
---
Attachment: hdfs-11258.4.patch

> File mtime change could not save to editlog
> ---
>
> Key: HDFS-11258
> URL: https://issues.apache.org/jira/browse/HDFS-11258
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hdfs-11258.1.patch, hdfs-11258.2.patch, 
> hdfs-11258.3.patch, hdfs-11258.4.patch
>
>
> When both mtime and atime are changed, and atime is not beyond the precision 
> limit, the mtime change is not saved to edit logs.






[jira] [Commented] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2016-12-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765378#comment-15765378
 ] 

Arpit Agarwal commented on HDFS-11182:
--

Thanks for the detailed code reviews and committing it [~xyao]!

> Update DataNode to use DatasetVolumeChecker
> ---
>
> Key: HDFS-11182
> URL: https://issues.apache.org/jira/browse/HDFS-11182
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.






[jira] [Commented] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-12-20 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765376#comment-15765376
 ] 

Eric Badger commented on HDFS-11094:


[~liuml07], can we cherry-pick this to 2.8? I'm seeing test failures from 
{{TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit}} due to a 
race condition in getActiveNN() that this will fix. 

> Send back HAState along with NamespaceInfo during a versionRequest as an 
> optional parameter
> ---
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11094-branch-2.011.patch, HDFS-11094.001.patch, 
> HDFS-11094.002.patch, HDFS-11094.003.patch, HDFS-11094.004.patch, 
> HDFS-11094.005.patch, HDFS-11094.006.patch, HDFS-11094.007.patch, 
> HDFS-11094.008.patch, HDFS-11094.009-b2.patch, HDFS-11094.009.patch, 
> HDFS-11094.010-b2.patch, HDFS-11094.010.patch, HDFS-11094.011.patch
>
>
> The datanode should know which NN is active when it is connecting/registering 
> to the NN. Currently, it only figures this out during its first (and 
> subsequent) heartbeat(s), so there is a period of time when the datanode 
> is alive and registered but can't actually do anything because it doesn't 
> know which NN is active. A byproduct of this is that the MiniDFSCluster will 
> become active before it knows which NN is active, which can lead to NPEs when 
> calling getActiveNN().






[jira] [Resolved] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2016-12-20 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-11182.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2

Thanks [~arpitagarwal] for the contribution and all for the reviews. I've 
committed the patch to trunk.

> Update DataNode to use DatasetVolumeChecker
> ---
>
> Key: HDFS-11182
> URL: https://issues.apache.org/jira/browse/HDFS-11182
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.






[jira] [Assigned] (HDFS-11251) ConcurrentModificationException during DataNode#refreshVolumes

2016-12-20 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy reassigned HDFS-11251:
-

Assignee: Manoj Govindassamy

> ConcurrentModificationException during DataNode#refreshVolumes
> --
>
> Key: HDFS-11251
> URL: https://issues.apache.org/jira/browse/HDFS-11251
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Manoj Govindassamy
>
> The testAddVolumesDuringWrite case failed with a ReconfigurationException 
> which appears to have been caused by a ConcurrentModificationException.  
> Stacktrace details to follow.
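
For context, a ConcurrentModificationException of this sort typically means one 
thread iterated a collection while it was being structurally modified. A 
minimal standalone reproduction (unrelated to the actual DataNode code):

{code}
import java.util.ArrayList;
import java.util.List;

public class CmeRepro {
  public static void main(String[] args) {
    List<String> volumes = new ArrayList<>();
    volumes.add("/data1");
    volumes.add("/data2");
    volumes.add("/data3");
    for (String v : volumes) {   // the for-each iterator is invalidated...
      if (v.equals("/data1")) {
        volumes.remove(v);       // ...by this structural modification
      }
    }                            // -> ConcurrentModificationException
  }
}
{code}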



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765350#comment-15765350
 ] 

Hudson commented on HDFS-10913:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11021 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11021/])
HDFS-10913. Introduce fault injectors to simulate slow mirrors. (xyao: rev 
5daa8d8631835de97d4e4979e507a080017ca159)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeFaultInjector.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch, HDFS-10913.012.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as a threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, we need to introduce fault injectors.
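
For context, the fault-injector pattern lets a test swap in a hook that 
production code calls at interesting points. A minimal sketch (the hook and 
setter names below are illustrative, not the exact interface this patch adds):

{code}
public class FaultInjectorSketch {

  /** Production code calls the injector at interesting points; the
   *  default implementation does nothing. */
  public static class DataNodeFaultInjector {
    private static DataNodeFaultInjector instance =
        new DataNodeFaultInjector();
    public static DataNodeFaultInjector get() { return instance; }
    public static void set(DataNodeFaultInjector injector) {
      instance = injector;
    }
    /** Hook invoked while forwarding a packet to the mirror. */
    public void beforeSendingPacketDownstream() {}
  }

  /** A test installs an injector that delays packet forwarding,
   *  simulating a slow downstream DataNode. */
  public static void simulateSlowMirror(long delayMs) {
    DataNodeFaultInjector.set(new DataNodeFaultInjector() {
      @Override
      public void beforeSendingPacketDownstream() {
        try {
          Thread.sleep(delayMs); // pretend the mirror is slow
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
    });
  }
}
{code}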



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2016-12-20 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765344#comment-15765344
 ] 

Xiaoyu Yao commented on HDFS-11182:
---

+1 for the latest patch. I tried the failed Jenkins test but couldn't repro it. 
I plan to commit shortly with "git apply --whitespace=fix". 
The checkstyle indentation issue with Java 8 lambdas seems to be a known 
problem; I'll open a separate ticket to fix it. 

> Update DataNode to use DatasetVolumeChecker
> ---
>
> Key: HDFS-11182
> URL: https://issues.apache.org/jira/browse/HDFS-11182
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2016-12-20 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765333#comment-15765333
 ] 

Manoj Govindassamy commented on HDFS-9391:
--

HDFS-11265 has been filed to track item 2. Item 1 will be tracked in a 
separate JIRA outside of Maintenance Mode.


> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765327#comment-15765327
 ] 

Xiaobing Zhou commented on HDFS-10913:
--

Thank you all for reviewing/committing the patches.

> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch, HDFS-10913.012.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as a threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, we need to introduce fault injectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI

2016-12-20 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11265:
-

 Summary: Extend visualization for Maintenance Mode under Datanode 
tab in the NameNode UI
 Key: HDFS-11265
 URL: https://issues.apache.org/jira/browse/HDFS-11265
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy



With HDFS-9391, DataNodes in Maintenance Mode states are shown on the DataNode 
page in the NameNode UI, but they lack the icon visualization shown for other 
node states. We need to extend the icon visualization to cover Maintenance 
Mode.


[jira] [Commented] (HDFS-11258) File mtime change could not save to editlog

2016-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765285#comment-15765285
 ] 

Hadoop QA commented on HDFS-11258:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 10 unchanged - 1 fixed = 17 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 
28s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11258 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844121/hdfs-11258.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dfc3e2a5eac1 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 523411d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17922/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17922/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17922/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> File mtime change could not save to editlog
> ---
>
> Key: HDFS-11258
> URL: https://issues.apache.org/jira/browse/HDFS-11258
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hdfs-

[jira] [Commented] (HDFS-11100) Recursively deleting file protected by sticky bit should fail

2016-12-20 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765280#comment-15765280
 ] 

Wei-Chiu Chuang commented on HDFS-11100:


Note that HDFS is not meant to be _Linux-compatible_; rather, it's meant to be 
*POSIX-compatible*.

Quoting _Hadoop Security_ on sticky bit:

{quote}
... it \[sticky bit\] means that files in a directory can only be deleted by 
the owner of that file. Without the sticky bit set, a file can be deleted by 
anyone that has write access to the directory. In HDFS, the owner of a 
directory and the HDFS superuser can also delete files regardless of whether 
the sticky bit is set. The sticky bit is useful for directories, such as /tmp, 
where you want all users to have write access to the directory but only the 
owner of the data should be able to delete data.
{quote}

Linux's behavior is documented here: 
http://man7.org/linux/man-pages/man1/chmod.1.html
{quote}
RESTRICTED DELETION FLAG OR STICKY BIT 

   The restricted deletion flag or sticky bit is a single bit, whose
   interpretation depends on the file type.  For directories, it
   prevents unprivileged users from removing or renaming a file in the
   directory unless they own the file or the directory; this is called
   the restricted deletion flag for the directory, and is commonly found
   on world-writable directories like /tmp.  For regular files on some
   older systems, the bit saves the program's text image on the swap
   device so it will load more quickly when run; this is called the
   sticky bit.

{quote}

However, I found no mention of sticky-bit handling in the Open Group 
specification for the rm command: 
http://pubs.opengroup.org/onlinepubs/009695399/utilities/rm.html

This wiki page describes the behavior of removing files under sticky-bit 
directories: https://en.wikipedia.org/wiki/Sticky_bit
As you can see, there is no common behavior across all Unix-like operating 
systems. But for operating systems that use the sticky bit to protect files 
from removal, HDFS's behavior is definitely different. So I think it's safe to 
say the old behavior is unexpected for most Unix-like users, and the new 
behavior is what they would expect.

> Recursively deleting file protected by sticky bit should fail
> -
>
> Key: HDFS-11100
> URL: https://issues.apache.org/jira/browse/HDFS-11100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>  Labels: permissions
> Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, 
> HDFS-11100.003.patch, hdfs_cmds
>
>
> Recursively deleting a directory that contains files or directories protected 
> by the sticky bit should fail, but it doesn't in HDFS. In the case below, 
> {{/tmp/test/sticky_dir/f2}} is protected by the sticky bit, so recursively 
> deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, 
> path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
> parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> Centos 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10913:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Thanks [~xiaobingo] for the contribution and all for the reviews. I've 
committed the patch to trunk.

> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch, HDFS-10913.012.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as a threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, we need to introduce fault injectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11247) Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in NameNode WebUI

2016-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765263#comment-15765263
 ] 

Hudson commented on HDFS-11247:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11020 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11020/])
HDFS-11247. Add a test to verify NameNodeMXBean#getDecomNodes() and (xiao: rev 
4af66b1d664b05590c39e34ae04f1f304c3cd227)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


> Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes 
> shown in NameNode WebUI
> -
>
> Key: HDFS-11247
> URL: https://issues.apache.org/jira/browse/HDFS-11247
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11247.01.patch, HDFS-11247.02.patch
>
>
> Add unit tests to verify the following:
> * NameNodeMXBean#getDecomNodes() -- decommission-in-progress nodes -- 
> displayed under "Decommissioning" on the NameNode WebUI page 
> dfshealth.html#tab-datanode
> * Decommissioned Live and Dead nodes displayed under "Summary" on the 
> dfshealth.html#tab-overview page.
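
For context, such a test typically reads the NameNode MXBean through the 
platform MBean server. A minimal sketch (the bean name below matches the usual 
NameNodeInfo registration; treat the details as illustrative):

{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DecomNodesMXBeanSketch {
  public static void main(String[] args) throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    // The NameNode registers its info bean under this name.
    ObjectName name =
        new ObjectName("Hadoop:service=NameNode,name=NameNodeInfo");
    // DecomNodes is a JSON map of datanodes currently decommissioning;
    // an empty map ("{}") means no node is being decommissioned.
    String decomNodes = (String) mbs.getAttribute(name, "DecomNodes");
    System.out.println("Decommissioning nodes: " + decomNodes);
  }
}
{code}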



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11195) Return error when appending files by webhdfs rest api fails

2016-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765221#comment-15765221
 ] 

Hudson commented on HDFS-11195:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11019 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11019/])
HDFS-11195. Return error when appending files by webhdfs rest api fails. (xiao: 
rev 5b7acdd206f5a7d1b7af29b68adaa7587d7d8c43)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/HdfsWriter.java


> Return error when appending files by webhdfs rest api fails
> ---
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11195.001.patch, HDFS-11195.002.patch, 
> HDFS-11195.003.patch, HDFS-11195.004.patch
>
>
> Suppose there is a Hadoop cluster that contains only one datanode, and 
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T <LOCAL_FILE> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=APPEND"
> {code}
> It returns 200, even though the append operation fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11247) Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in NameNode WebUI

2016-12-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11247:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk, thanks Manoj for the contribution!

> Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes 
> shown in NameNode WebUI
> -
>
> Key: HDFS-11247
> URL: https://issues.apache.org/jira/browse/HDFS-11247
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11247.01.patch, HDFS-11247.02.patch
>
>
> Add unit tests to verify the following:
> * NameNodeMXBean#getDecomNodes() -- decommission-in-progress nodes -- 
> displayed under "Decommissioning" on the NameNode WebUI page 
> dfshealth.html#tab-datanode
> * Decommissioned Live and Dead nodes displayed under "Summary" on the 
> dfshealth.html#tab-overview page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11247) Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in NameNode WebUI

2016-12-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765194#comment-15765194
 ] 

Xiao Chen commented on HDFS-11247:
--

Thanks for revving, Manoj. +1 on patch 2, committing this.

> Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes 
> shown in NameNode WebUI
> -
>
> Key: HDFS-11247
> URL: https://issues.apache.org/jira/browse/HDFS-11247
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11247.01.patch, HDFS-11247.02.patch
>
>
> Add unit tests to verify the following:
> * NameNodeMXBean#getDecomNodes() -- decommission-in-progress nodes -- 
> displayed under "Decommissioning" on the NameNode WebUI page 
> dfshealth.html#tab-datanode
> * Decommissioned Live and Dead nodes displayed under "Summary" on the 
> dfshealth.html#tab-overview page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11247) Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in NameNode WebUI

2016-12-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11247:
-
Summary: Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead 
Decom Nodes shown in NameNode WebUI  (was: Add tests to verify 
NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in NameNode 
WebUI)

> Add a test to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes 
> shown in NameNode WebUI
> -
>
> Key: HDFS-11247
> URL: https://issues.apache.org/jira/browse/HDFS-11247
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11247.01.patch, HDFS-11247.02.patch
>
>
> Add unit tests to verify the following:
> * NameNodeMXBean#getDecomNodes() -- decommission-in-progress nodes -- 
> displayed under "Decommissioning" on the NameNode WebUI page 
> dfshealth.html#tab-datanode
> * Decommissioned Live and Dead nodes displayed under "Summary" on the 
> dfshealth.html#tab-overview page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11247) Add tests to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in NameNode WebUI

2016-12-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11247:
-
Summary: Add tests to verify NameNodeMXBean#getDecomNodes() and Live/Dead 
Decom Nodes shown in NameNode WebUI  (was: Verify 
NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in NameNode 
WebUI)

> Add tests to verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes 
> shown in NameNode WebUI
> 
>
> Key: HDFS-11247
> URL: https://issues.apache.org/jira/browse/HDFS-11247
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11247.01.patch, HDFS-11247.02.patch
>
>
> Add unit tests to verify the following:
> * NameNodeMXBean#getDecomNodes() -- decommission-in-progress nodes -- 
> displayed under "Decommissioning" on the NameNode WebUI page 
> dfshealth.html#tab-datanode
> * Decommissioned Live and Dead nodes displayed under "Summary" on the 
> dfshealth.html#tab-overview page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11195) Return error when appending files by webhdfs rest api fails

2016-12-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11195:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

The failed test looks unrelated and passed locally. Committed to trunk, 
branch-2, and branch-2.8.

Thanks [~yuanbo] for the contribution and [~ajisakaa] for the review!

> Return error when appending files by webhdfs rest api fails
> ---
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11195.001.patch, HDFS-11195.002.patch, 
> HDFS-11195.003.patch, HDFS-11195.004.patch
>
>
> Suppose there is a Hadoop cluster that contains only one datanode, and 
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T <LOCAL_FILE> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=APPEND"
> {code}
> It returns 200, even though the append operation fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11257) Evacuate DN when the remaining is negative

2016-12-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765174#comment-15765174
 ] 

Andrew Wang commented on HDFS-11257:


If the goal is to keep some free space around for performance, you can specify 
reserved space at the FS level:

http://unix.stackexchange.com/questions/7950/reserved-space-for-root-on-a-filesystem-why

The point of {{du.reserved}} is to soft-partition the disks and leave room for 
MR shuffle data. It seems weird for HDFS to move data around just to keep 100GB 
free at all times, since that impacts HDFS performance and the root cause is 
some other app filling up the disk.

This probably doesn't come up much since most HDFS clusters run with some 
headroom, but IMO this config should really behave like a {{df.reserved}}, as 
Linux does, rather than a {{du.reserved}}.
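
To make the distinction concrete: FS-level reservation is configured on the 
filesystem itself, while {{dfs.datanode.du.reserved}} is an HDFS-side soft 
partition. The values below are only examples:

{noformat}
# FS-level: reserve 5% of an ext4 filesystem for root
# (what tune2fs calls reserved blocks):
tune2fs -m 5 /dev/sdb1

# HDFS-side: per-volume reservation in hdfs-site.xml:
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>107374182400</value>  <!-- 100 GB in bytes -->
</property>
{noformat}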

> Evacuate DN when the remaining is negative
> --
>
> Key: HDFS-11257
> URL: https://issues.apache.org/jira/browse/HDFS-11257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.3
>Reporter: Inigo Goiri
>
> Datanodes have a maximum amount of disk they can use. This is set using 
> {{dfs.datanode.du.reserved}}. For example, if we have a 1TB disk and we set 
> the reserved space to 100GB, the DN can only use ~900GB. However, if we fill 
> the DN and other processes (e.g., logs or co-located services) later start to 
> use the disk space, the remaining space will go negative and the used storage 
> will exceed 100%.
> The Rebalancer or decommissioning would cover this situation. However, both 
> approaches require administrator intervention, while this is a situation that 
> violates the settings. Note that decommissioning would be too extreme, as it 
> would evacuate all the data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11195) Return error when appending files by webhdfs rest api fails

2016-12-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765167#comment-15765167
 ] 

Xiao Chen commented on HDFS-11195:
--

+1 on patch 4, committing this.

> Return error when appending files by webhdfs rest api fails
> ---
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11195.001.patch, HDFS-11195.002.patch, 
> HDFS-11195.003.patch, HDFS-11195.004.patch
>
>
> Suppose there is a Hadoop cluster that contains only one datanode, and 
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T <LOCAL_FILE> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=APPEND"
> {code}
> It returns 200, even though the append operation fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11195) Return error when appending files by webhdfs rest api fails

2016-12-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-11195:
-
Summary: Return error when appending files by webhdfs rest api fails  (was: 
When appending files by webhdfs rest api fails, it returns 200)

> Return error when appending files by webhdfs rest api fails
> ---
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11195.001.patch, HDFS-11195.002.patch, 
> HDFS-11195.003.patch, HDFS-11195.004.patch
>
>
> Suppose there is a Hadoop cluster that contains only one datanode, and 
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T <LOCAL_FILE> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=APPEND"
> {code}
> It returns 200, even though the append operation fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765153#comment-15765153
 ] 

Hadoop QA commented on HDFS-10913:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 65 unchanged - 1 fixed = 66 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 97m 
11s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10913 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844107/HDFS-10913.012.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 52d59da0812c 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1b401f6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17920/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17920/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17920/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>   

[jira] [Commented] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2016-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765101#comment-15765101
 ] 

Hadoop QA commented on HDFS-11182:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 14 new + 466 unchanged - 10 fixed = 480 total (was 476) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11182 |
| GITHUB PR | https://github.com/apache/hadoop/pull/168 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a6dfe967cca9 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1b401f6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17921/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17921/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17921/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17921/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17921/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-11258) File mtime change could not save to editlog

2016-12-20 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HDFS-11258:
---
Attachment: hdfs-11258.3.patch

Thanks [~wheat9] for the review. Fixed the checkstyle issues.

> File mtime change could not save to editlog
> ---
>
> Key: HDFS-11258
> URL: https://issues.apache.org/jira/browse/HDFS-11258
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Critical
> Attachments: hdfs-11258.1.patch, hdfs-11258.2.patch, 
> hdfs-11258.3.patch
>
>
> When both mtime and atime are changed, and atime is not beyond the precision 
> limit, the mtime change is not saved to edit logs.
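
To make the failure mode concrete, the time-update logic is shaped roughly like 
the sketch below (paraphrased, not the exact Hadoop source): when the atime 
delta is within {{dfs.namenode.accesstime.precision}}, the change flag is reset 
even though mtime was already updated, so no edit is logged.

{code}
// Minimal stand-in type for illustration only.
class Inode {
  long mtime, atime;
  void setModificationTime(long t) { mtime = t; }
  void setAccessTime(long t) { atime = t; }
  long getAccessTime() { return atime; }
}

class SetTimesSketch {
  /** Returns true if an edit-log entry should be written. */
  static boolean setTimes(Inode inode, long mtime, long atime,
                          long precision) {
    boolean changed = false;
    if (mtime != -1) {
      inode.setModificationTime(mtime);
      changed = true;              // mtime really did change
    }
    if (atime != -1) {
      if (atime <= inode.getAccessTime() + precision) {
        changed = false;           // BUG: clobbers the mtime change,
                                   // so no edit is logged for it
      } else {
        inode.setAccessTime(atime);
        changed = true;
      }
    }
    return changed;                // caller logs an edit only if true
  }
}
{code}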



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2016-12-20 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765093#comment-15765093
 ] 

Manoj Govindassamy commented on HDFS-9391:
--

Discussed the Maintenance Mode UI proposal with [~dilaver], and he had the 
following comments:
# In the NameNode UI, under DataNode Information, there are a few legends such 
as "In Service", "Down", "Decommissioned & Dead", etc. (refer to page 1, item 2 
in the [UI 
attached|https://issues.apache.org/jira/secure/attachment/12843697/HDFS-9391-MaintenanceMode-WebUI.pdf]).
** What is the difference between Down and Dead nodes? It would be better to be 
consistent in naming and terminology.
** Hover help text for these icons would be very useful.
# Icon visualization should be extended to cover the Maintenance Mode states.

> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765078#comment-15765078
 ] 

Hadoop QA commented on HDFS-10913:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 65 unchanged - 1 fixed = 66 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.TestBlockStoragePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10913 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844105/HDFS-10913.012.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 92c85f7290e5 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1b401f6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17919/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17919/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17919/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17919/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-

[jira] [Commented] (HDFS-11247) Verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in NameNode WebUI

2016-12-20 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765051#comment-15765051
 ] 

Manoj Govindassamy commented on HDFS-11247:
---

[~xiaochen], the test failures are not related to patch v02.

> Verify NameNodeMXBean#getDecomNodes() and Live/Dead Decom Nodes shown in 
> NameNode WebUI
> ---
>
> Key: HDFS-11247
> URL: https://issues.apache.org/jira/browse/HDFS-11247
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11247.01.patch, HDFS-11247.02.patch
>
>
> Add unit tests to verify the following:
> * NameNodeMXBean#getDecomNodes() -- decommission-in-progress nodes -- 
> displayed under "Decommissioning" on the NameNode WebUI page 
> dfshealth.html#tab-datanode
> * Decommissioned Live and Dead nodes displayed under "Summary" on the 
> dfshealth.html#tab-overview page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764866#comment-15764866
 ] 

Arpit Agarwal commented on HDFS-10913:
--

+1 on the v12 patch, pending Jenkins. Thanks for addressing the feedback.

> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch, HDFS-10913.012.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as a threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, we need to introduce fault injectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10913:
-
Attachment: HDFS-10913.012.patch

> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch, HDFS-10913.012.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as a threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, we need to introduce fault injectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764813#comment-15764813
 ] 

Xiaobing Zhou edited comment on HDFS-10913 at 12/20/16 6:10 PM:


Thanks [~arpitagarwal] and [~xyao] for your reviews.
Posted the v12 patch to address your comments.




was (Author: xiaobingo):
Thanks [~arpitagarwal] and [~xyao] for your reviews.
Posted the v12 patch to address your comments. A downcast is needed to call 
#getDelayMs().



> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch, HDFS-10913.012.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as a threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, we need to introduce fault injectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10913:
-
Attachment: (was: HDFS-10913.012.patch)

> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as the threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, fault injectors need to be introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764813#comment-15764813
 ] 

Xiaobing Zhou commented on HDFS-10913:
--

Thanks [~arpitagarwal] and [~xyao] for your reviews.
Posted v12 patch to address your comments. A downcast is needed to call 
#getDelayMs.
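
For readers following along, the downcast mentioned above usually has the shape
below. This is a hedged sketch: the subclass name and the placement of
getDelayMs are assumptions based on the comment, not the verbatim patch.

{code}
// The shared handle is typed as the base injector, so reaching a
// subclass-only accessor such as getDelayMs() requires a downcast.
DataNodeFaultInjector injector = DataNodeFaultInjector.get();
long delayMs = ((SlowMirrorFaultInjector) injector).getDelayMs();
{code}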



> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch, HDFS-10913.012.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as the threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, fault injectors need to be introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10913) Introduce fault injectors to simulate slow mirrors

2016-12-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10913:
-
Attachment: HDFS-10913.012.patch

> Introduce fault injectors to simulate slow mirrors
> --
>
> Key: HDFS-10913
> URL: https://issues.apache.org/jira/browse/HDFS-10913
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, 
> HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, 
> HDFS-10913.005.patch, HDFS-10913.006.patch, HDFS-10913.007.patch, 
> HDFS-10913.008.patch, HDFS-10913.009.patch, HDFS-10913.010.patch, 
> HDFS-10913.011.patch, HDFS-10913.012.patch
>
>
> BlockReceiver#datanodeSlowLogThresholdMs is used as the threshold to detect slow 
> mirrors, but BlockReceiver only writes some warning logs. To better test the 
> behavior of slow mirrors, fault injectors need to be introduced.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11263) ClassCastException when we use Bzipcodec for Fsimage compression

2016-12-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764596#comment-15764596
 ] 

Hudson commented on HDFS-11263:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11016 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11016/])
HDFS-11263. ClassCastException when we use Bzipcodec for Fsimage (brahma: rev 
1b401f6a734df4e23a79b3bd89c816a1fc0de574)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java


> ClassCastException when we use Bzipcodec for Fsimage compression
> 
>
> Key: HDFS-11263
> URL: https://issues.apache.org/jira/browse/HDFS-11263
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11263-002.patch, HDFS-11263.patch
>
>
>  *Trace* 
> {noformat}
> java.lang.ClassCastException: 
> org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionOutputStream cannot 
> be cast to org.apache.hadoop.io.compress.CompressorStream
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.flushSectionOutputStream(FSImageFormatProtobuf.java:420)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.commitSection(FSImageFormatProtobuf.java:405)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.saveNameSystemSection(FSImageFormatProtobuf.java:583)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.saveInternal(FSImageFormatProtobuf.java:494)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.save(FSImageFormatProtobuf.java:431)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:913)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:964)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11263) ClassCastException when we use Bzipcodec for Fsimage compression

2016-12-20 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11263:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.7.4
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, branch-2.8 and branch-2.7. [~linyiqun] and 
[~jojochuang], thanks for your reviews.
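
For context, the failing call site and the shape of the fix look roughly like
the sketch below (simplified from FSImageFormatProtobuf.Saver, not the
verbatim patch). Both CompressorStream (used by the default codecs) and BZip2's
BZip2CompressionOutputStream extend CompressionOutputStream, which declares
finish(), so narrowing the cast to the common supertype works for every codec:

{code:title=FSImageFormatProtobuf.java (simplified sketch)}
private void flushSectionOutputStream() throws IOException {
  if (codec != null) {
    // was: ((CompressorStream) sectionOutputStream).finish();
    // BZip2CompressionOutputStream is not a CompressorStream, hence the
    // ClassCastException; CompressionOutputStream is safe for both.
    ((CompressionOutputStream) sectionOutputStream).finish();
  }
  sectionOutputStream.flush();
}
{code}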

> ClassCastException when we use Bzipcodec for Fsimage compression
> 
>
> Key: HDFS-11263
> URL: https://issues.apache.org/jira/browse/HDFS-11263
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11263-002.patch, HDFS-11263.patch
>
>
>  *Trace* 
> {noformat}
> java.lang.ClassCastException: 
> org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionOutputStream cannot 
> be cast to org.apache.hadoop.io.compress.CompressorStream
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.flushSectionOutputStream(FSImageFormatProtobuf.java:420)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.commitSection(FSImageFormatProtobuf.java:405)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.saveNameSystemSection(FSImageFormatProtobuf.java:583)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.saveInternal(FSImageFormatProtobuf.java:494)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.save(FSImageFormatProtobuf.java:431)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:913)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:964)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11262) Remove unused variables in FSImage.java

2016-12-20 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764379#comment-15764379
 ] 

Jagadesh Kiran N commented on HDFS-11262:
-

Thanks [~ajisakaa] for committing the same

> Remove unused variables in FSImage.java
> ---
>
> Key: HDFS-11262
> URL: https://issues.apache.org/jira/browse/HDFS-11262
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11262_00.patch, HDFS-11262_01.patch, 
> HDFS-11262_02.patch
>
>
> {code:title=FSImage.java}
> Exception le = null;
> FSImageFile imageFile = null;
> for (int i = 0; i < imageFiles.size(); i++) {
>   try {
> imageFile = imageFiles.get(i);
> loadFSImageFile(target, recovery, imageFile, startOpt);
> break;
>   } catch (IllegalReservedPathException ie) {
> throw new IOException("Failed to load image from " + imageFile,
> ie);
>   } catch (Exception e) {
> le = e;
> LOG.error("Failed to load image from " + imageFile, e);
> target.clear();
> imageFile = null;
>   }
> }
> {code}
> Exception {{le}} is not used. It can be removed.
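
For clarity, this is what the loop looks like once the dead variable is
dropped; a sketch against the snippet above, not the committed patch:

{code:title=FSImage.java (sketch with the unused variable removed)}
FSImageFile imageFile = null;
for (int i = 0; i < imageFiles.size(); i++) {
  try {
    imageFile = imageFiles.get(i);
    loadFSImageFile(target, recovery, imageFile, startOpt);
    break;
  } catch (IllegalReservedPathException ie) {
    throw new IOException("Failed to load image from " + imageFile, ie);
  } catch (Exception e) {
    // The exception is already logged here and no later code reads it, so
    // the local variable that used to capture it can simply go away.
    LOG.error("Failed to load image from " + imageFile, e);
    target.clear();
    imageFile = null;
  }
}
{code}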



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11032) [SPS]: Handling of block movement failure at the coordinator datanode

2016-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764347#comment-15764347
 ] 

Hadoop QA commented on HDFS-11032:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
34s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844061/HDFS-11032-HDFS-10285-01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c36c7f42bef3 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 06a0cbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17917/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17917/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17917/testReport/ |
| modules | C: hadoop-hdfs-project

[jira] [Commented] (HDFS-11248) [SPS]: Handle partial block location movements

2016-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764170#comment-15764170
 ] 

Hadoop QA commented on HDFS-11248:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11248 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844042/HDFS-11248-HDFS-10285-02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 42c81cc9535e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 06a0cbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17916/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17916/artifact/patchprocess/whitespace-eol.txt

[jira] [Updated] (HDFS-11032) [SPS]: Handling of block movement failure at the coordinator datanode

2016-12-20 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11032:

Attachment: HDFS-11032-HDFS-10285-01.patch

Attached new patch fixing checkstyle warnings.

> [SPS]: Handling of block movement failure at the coordinator datanode
> -
>
> Key: HDFS-11032
> URL: https://issues.apache.org/jira/browse/HDFS-11032
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11032-HDFS-10285-00.patch, 
> HDFS-11032-HDFS-10285-01.patch
>
>
> The idea of this jira is to discuss and implement an efficient failure (block 
> movement failure) handling logic at the datanode coordinator.  [Code 
> reference|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StoragePolicySatisfyWorker.java#L243].
> The following are the possible errors during block movement:
> # Block pinned - no retries, marked as success/no-retry to NN. It is not 
> possible to relocate this block to another datanode.
> # Network errors (IOException) - no retries, marked as failure/retry to NN.
> # No disk space (IOException) - no retries, marked as failure/retry to NN.
> # Gen_Stamp mismatches - no retries, marked as failure/retry to NN. Could be a 
> case where the file has been re-opened.
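
A hypothetical sketch of mapping the four cases above onto a retry decision at
the coordinator is below; the enum and method names are illustrative only, not
taken from the patch.

{code}
/** Illustrative only: how the coordinator might classify movement errors. */
enum BlockMovementResult {
  SUCCESS_NO_RETRY,  // block pinned: relocating this replica is impossible
  FAILURE_RETRY      // network error, disk full, or gen-stamp mismatch
}

static BlockMovementResult classifyMovementError(boolean blockPinned) {
  if (blockPinned) {
    // Report success/no-retry so the NN does not keep rescheduling a
    // movement that can never succeed.
    return BlockMovementResult.SUCCESS_NO_RETRY;
  }
  // IOExceptions (network, no disk space) and gen-stamp mismatches are
  // reported as failure/retry so the NN can reschedule them later.
  return BlockMovementResult.FAILURE_RETRY;
}
{code}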



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11248) [SPS]: Handle partial block location movements

2016-12-20 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15763900#comment-15763900
 ] 

Rakesh R commented on HDFS-11248:
-

Thanks [~umamaheswararao]. The race condition is a really good find. Attached a 
new patch addressing the comments.

> [SPS]: Handle partial block location movements
> --
>
> Key: HDFS-11248
> URL: https://issues.apache.org/jira/browse/HDFS-11248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11248-HDFS-10285-00.patch, 
> HDFS-11248-HDFS-10285-01.patch, HDFS-11248-HDFS-10285-02.patch
>
>
> This jira is to handle partial block location movements due to the 
> unavailability of target nodes for the matching storage type. 
> For example, assume A(disk,archive), B(disk) and C(disk,archive) are the only 
> live nodes, where A & C have the archive storage type. Say we have a block with 
> locations {{A(disk), B(disk), C(disk)}}, and the user changes the storage 
> policy to COLD. SPS internally prepares the src-target pairing, like 
> {{src=> (A, B, C) and target=> (A, C)}}, and sends BLOCK_STORAGE_MOVEMENT to 
> the coordinator. SPS skips B in the target list, as it doesn't have archive 
> media, to indicate that it should retry satisfying all block locations after 
> some time. On receiving the movement command, the coordinator pairs the 
> src-target nodes to schedule the actual physical movements, like 
> {{movetask=> (A, A), (B, C)}}. Ideally it should do {{(C, C)}} instead of 
> {{(B, C)}}, but it mistakenly picks the wrong source for target C, which 
> creates the problem.
> IMHO, the implicit assumption that a retry is needed creates confusion and 
> leads to coding mistakes. One idea to fix this problem is to add a new 
> {{retryNeeded}} flag to make this explicit. With it, SPS prepares only the 
> matching pairs, avoiding dummy source slots, like 
> {{src=> (A, C) and target=> (A, C)}}, and marks {{retryNeeded=true}} to convey 
> that this {{trackId}} covers only partial block movements.
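
A hypothetical sketch of the proposed pairing structure is below; the class and
field names are illustrative, not taken from the patch.

{code}
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// With only A and C offering ARCHIVE, B is dropped from the pairing instead
// of being slotted in as a dummy source:
//   sources = {A, C}, targets = {A, C}, retryNeeded = true
class BlockMovingInfo {
  DatanodeInfo[] sources;   // only nodes that actually have a matching target
  DatanodeInfo[] targets;   // one target per source, same index
  boolean retryNeeded;      // true => some locations are still unsatisfied
}
{code}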



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11248) [SPS]: Handle partial block location movements

2016-12-20 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11248:

Attachment: HDFS-11248-HDFS-10285-02.patch

> [SPS]: Handle partial block location movements
> --
>
> Key: HDFS-11248
> URL: https://issues.apache.org/jira/browse/HDFS-11248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11248-HDFS-10285-00.patch, 
> HDFS-11248-HDFS-10285-01.patch, HDFS-11248-HDFS-10285-02.patch
>
>
> This jira is to handle partial block location movements due to the 
> unavailability of target nodes for the matching storage type. 
> For example, assume A(disk,archive), B(disk) and C(disk,archive) are the only 
> live nodes, where A & C have the archive storage type. Say we have a block with 
> locations {{A(disk), B(disk), C(disk)}}, and the user changes the storage 
> policy to COLD. SPS internally prepares the src-target pairing, like 
> {{src=> (A, B, C) and target=> (A, C)}}, and sends BLOCK_STORAGE_MOVEMENT to 
> the coordinator. SPS skips B in the target list, as it doesn't have archive 
> media, to indicate that it should retry satisfying all block locations after 
> some time. On receiving the movement command, the coordinator pairs the 
> src-target nodes to schedule the actual physical movements, like 
> {{movetask=> (A, A), (B, C)}}. Ideally it should do {{(C, C)}} instead of 
> {{(B, C)}}, but it mistakenly picks the wrong source for target C, which 
> creates the problem.
> IMHO, the implicit assumption that a retry is needed creates confusion and 
> leads to coding mistakes. One idea to fix this problem is to add a new 
> {{retryNeeded}} flag to make this explicit. With it, SPS prepares only the 
> matching pairs, avoiding dummy source slots, like 
> {{src=> (A, C) and target=> (A, C)}}, and marks {{retryNeeded=true}} to convey 
> that this {{trackId}} covers only partial block movements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11150) [SPS]: Provide persistence when satisfying storage policy.

2016-12-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15763884#comment-15763884
 ] 

Uma Maheswara Rao G commented on HDFS-11150:


Hi [~yuanbo], thank you for working on this patch. Below is my feedback on the 
patch.

# Can we rename addSatisfyMovement —> addStoragePolicySatisfierXAttr?
# Suggestion to think about: I have one more thought, but it could be tricky 
to handle. To save memory, when the user calls satisfyStoragePolicy on a 
directory, we would take the immediate files under that directory. So, for the 
persistence part, how about keeping the XAttr only on that directory and 
building the required elements based on it? 
Below are the details of what I am talking about:
{code}
for (INode node : candidateNodes) {
+  bm.satisfyStoragePolicy(node.getId());
+  List<XAttr> existingXAttrs = XAttrStorage.readINodeXAttrs(inode);
+  List<XAttr> newXAttrs = FSDirXAttrOp.setINodeXAttrs(
+  fsd, existingXAttrs, xattrs, EnumSet.of(XAttrSetFlag.CREATE));
+  XAttrStorage.updateINodeXAttrs(inode, newXAttrs, snapshotId);
+}
{code}
Can we think about adding the XAttr only to the dir here? The idea is that, when 
loading from the FSImage, we can process the children (if they are files) and 
add them to bm.satisfyStoragePolicy.
In that case, you need to change the part below as well, to recalculate the 
children when it is a dir.
{code}
 if (isFile && XATTR_SATISFY_STORAGE_POLICY.equals(xaName)) {
+  fsd.getBlockManager().satisfyStoragePolicy(inode.getId());
+  }
{code}
And also 
{code}
 private void addSatisfyMovement(INodeWithAdditionalFields inode,
+  XAttrFeature xaf) {
+if (xaf == null || inode.isDirectory()) {
+  return;
+}
+XAttr xattr = xaf.getXAttr(XATTR_SATISFY_STORAGE_POLICY);
+if (xattr == null) {
+  return;
+}
+getBlockManager().satisfyStoragePolicy(inode.getId());
+  }
{code}
Here, instead of calling getBlockManager().satisfyStoragePolicy directly, we can 
think about building an unprotected API which, in the case of a directory, finds 
the children (only files) and calls 
getBlockManager().satisfyStoragePolicy(inode.getId()) for each. Does that work?
# Can we add test cases with checkpoints and multiple restarts? Can you also 
add tests for the HA cases?
# Simply handling the retryCache implementation below won't be enough; you need 
to annotate the ClientProtocol#satisfyStoragePolicy API with @AtMostOnce. 
{code}
 CacheEntry cacheEntry = RetryCache.waitForCompletion(retryCache);
+if (cacheEntry != null && cacheEntry.isSuccess()) {
+  return; // Return previous response
+}
+boolean success = false;
+try {
+  namesystem.satisfyStoragePolicy(src, cacheEntry != null);
+  success = true;
+} finally {
+  RetryCache.setState(cacheEntry, success);
+}
{code}
# Also, please consider fixing the retryCache-related testcase: because one 
more API is being added to @AtMostOnce, the API count increases.
# -
{code}
throw new IOException("Failed to satisfy storage policy for "
+  + iip.getPath()
+  + " since it has been added to satisfy movement queue." );
{code}
After iip.getPath(), please put a comma, and also make the message clearer by 
saying something like “Cannot request to call satisfy storage policy on path 
iip.getPath(), as this file/dir was already called for satisfying storage 
policy.”
# Could you please add a timeout and javadoc for the testcase?
# -
{code}
Assert.assertTrue(e.getMessage().contains(
+String.format("Failed to satisfy storage policy for %s since "
++ "it has been added to satisfy movement queue.", file)));
{code}
Can you use GenericTestUtils#assertExceptionContains instead?
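
For reference, a minimal sketch of the suggested assertion style (the
triggering call and path are hypothetical; assertExceptionContains lives in
org.apache.hadoop.test.GenericTestUtils):

{code}
try {
  dfs.satisfyStoragePolicy(new Path(file));  // hypothetical call under review
  Assert.fail("Expected satisfyStoragePolicy to throw for " + file);
} catch (IOException e) {
  // On a mismatch this fails with the full stack trace attached, unlike a
  // bare assertTrue(e.getMessage().contains(...)).
  GenericTestUtils.assertExceptionContains(
      "it has been added to satisfy movement queue", e);
}
{code}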

> [SPS]: Provide persistence when satisfying storage policy.
> --
>
> Key: HDFS-11150
> URL: https://issues.apache.org/jira/browse/HDFS-11150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11150-HDFS-10285.001.patch, 
> HDFS-11150-HDFS-10285.002.patch, editsStored, editsStored.xml
>
>
> Provide persistence for SPS in case the Hadoop cluster crashes unexpectedly. 
> Basically we need to change the EditLog and FsImage here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11239) [SPS]: Check Mover file ID lease also to determine whether Mover is running

2016-12-20 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15763748#comment-15763748
 ] 

Wei Zhou commented on HDFS-11239:
-

The failures are not related and cannot be reproduced in local tests.

> [SPS]: Check Mover file ID lease also to determine whether Mover is running
> ---
>
> Key: HDFS-11239
> URL: https://issues.apache.org/jira/browse/HDFS-11239
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11239-HDFS-10285.00.patch
>
>
> Currently SPS only checks the existence of the Mover ID file to determine 
> whether a Mover is running; this can be an issue when the Mover exits 
> unexpectedly without deleting the ID file, and this further stops SPS from 
> functioning. This is a follow-on to HDFS-10885, where we bypassed this due to 
> some implementation problems. This issue can be fixed after HDFS-11123.
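
A hypothetical sketch of the combined check is below. The two lookups are
placeholders for whatever HDFS-11123 ends up exposing, not real NameNode APIs.

{code}
/** Illustrative only; both abstract lookups are hypothetical. */
abstract class MoverLivenessCheck {
  abstract boolean idFileExists(String path);
  abstract boolean leaseHeldOn(String path);

  boolean isMoverRunning(String moverIdPath) {
    if (!idFileExists(moverIdPath)) {
      return false;  // no ID file => no Mover was started
    }
    // A live Mover keeps the ID file open for write, so an active lease
    // should exist; a stale file left by a crashed Mover has none, and SPS
    // can safely keep running in that case.
    return leaseHeldOn(moverIdPath);
  }
}
{code}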



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11186) [SPS]: Daemon thread of SPS should start only in Active NN

2016-12-20 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-11186:

Attachment: HDFS-11186-HDFS-10285.01.patch

Thanks [~rakeshr] for reviewing the patch! Patch updated accordingly. As for the 
test failures reported: {{TestSeveralNameNodes}} is caused by the overhead of 
stopping the SPS daemon thread, which pushes the total execution time past the 
threshold set by the unit test, so I changed it to a larger value; the other 
failures cannot be reproduced in my environment.

> [SPS]: Daemon thread of SPS should start only in Active NN
> --
>
> Key: HDFS-11186
> URL: https://issues.apache.org/jira/browse/HDFS-11186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11186-HDFS-10285.00.patch, 
> HDFS-11186-HDFS-10285.01.patch
>
>
> As discussed in [HDFS-10885 
> |https://issues.apache.org/jira/browse/HDFS-10885], we need to ensure that 
> SPS is started only in the Active NN. This JIRA is opened for discussion and 
> tracking.
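
A hedged sketch of the active-only lifecycle, assuming SPS is hooked into the
existing HA transition methods in FSNamesystem; the satisfier accessor name is
an assumption, not the patch's API:

{code:title=FSNamesystem.java (illustrative sketch)}
void startActiveServices() throws IOException {
  // ... existing active-state startup ...
  blockManager.getStoragePolicySatisfier().start();  // run only when active
}

void stopActiveServices() {
  // ... existing active-state teardown ...
  // Standby/observer NNs must not run the satisfier daemon thread.
  blockManager.getStoragePolicySatisfier().stop();
}
{code}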



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org